Microsoft Fabric is an all-encompassing data storage and analytics platform that naturally exerts substantial data gravity, but how can users overcome this force and find the data they need? In the world of Microsoft Fabric, we have AI Skills to help us navigate this landscape. AI Skills are powered by generative AI, grounded in your data and its structure, giving them the ability to translate natural language questions from the business into the code needed to generate an accurate response.
Fabric AI Skills are extremely flexible, adapting the queries they write to the way your data is stored and structured. Users seeking information can ask a question in natural language, and the AI Skill will automatically determine the appropriate data source and then write the query in the language best suited to that type of data. For example, if the data comes from a real-time source, the query will be written in KQL; if the data sits inside a Lakehouse, a SQL query will be written instead. DAX queries can also be constructed if the data resides in a Semantic Model.
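To make the behaviour concrete, the source-type-to-query-language routing described above can be sketched as a simple lookup. This is purely illustrative: the source-type names and function are invented for this sketch and are not the Fabric API.

```python
# Hypothetical sketch of the routing behaviour described above: the AI Skill
# inspects the type of the attached data source and picks the matching query
# dialect. All names here are illustrative, not real Fabric identifiers.

SOURCE_TO_DIALECT = {
    "eventhouse": "KQL",      # real-time / streaming data
    "lakehouse": "SQL",       # tables in a Lakehouse
    "warehouse": "SQL",       # Fabric Warehouse tables
    "semantic_model": "DAX",  # Power BI Semantic Models
}

def pick_dialect(source_type: str) -> str:
    """Return the query language an AI Skill would use for a source type."""
    try:
        return SOURCE_TO_DIALECT[source_type]
    except KeyError:
        raise ValueError(f"Unsupported source type: {source_type}")

print(pick_dialect("lakehouse"))       # SQL
print(pick_dialect("semantic_model"))  # DAX
```

In practice the AI Skill makes this decision internally; the point of the sketch is only that the dialect follows from where the data lives, not from anything the user has to specify.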
Conveniently, multiple data sources can be attached to a single AI Skill, so interconnected concepts like customer or product don’t have to be tackled separately. However, rather than dilute the accuracy of an AI Skill by opening it up to an entire Lakehouse or Semantic Model, developers can select specific tables from the attached sources to provide perspectives on the data that fit the needs of the business.
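That scoping idea can be pictured as a mapping from each attached source to the handful of tables the AI Skill is allowed to query. The source and table names below are invented for illustration and this is not the real Fabric configuration schema.

```python
# Hypothetical scoping for an AI Skill: only the listed tables from each
# attached source are exposed to the model. All names are illustrative.
selected_tables = {
    "SalesLakehouse": ["dim_customer", "dim_product", "fact_orders"],
    "FinanceSemanticModel": ["Ledger", "CostCentre"],
}

def is_table_allowed(source: str, table: str) -> bool:
    """Check whether a generated query may reference a given table."""
    return table in selected_tables.get(source, [])

print(is_table_allowed("SalesLakehouse", "fact_orders"))  # True
print(is_table_allowed("SalesLakehouse", "hr_salaries"))  # False
```

Keeping the exposed surface small in this way is what protects accuracy: the model only ever sees the tables that matter to the business question at hand.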
Additional customisation can be implemented by providing example questions and corresponding queries to the AI Skill, essentially showing the generative AI agent the preferred way to format a query, or navigate a set of joins, when asked similar questions. Instructions can also be provided in natural language to guide the agent on certain topics or explain organisational terminology; for example, telling the AI Skill that financial data should come from a Semantic Model, or that log-related questions should go to a KQL database.
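The two customisation mechanisms described above, few-shot question/query examples and natural-language instructions, might be expressed as plain data like this. The field names and query text are assumptions for the sake of illustration, not the actual Fabric AI Skill schema.

```python
# Illustrative only: one few-shot example pair and two natural-language
# instructions, structured the way the text describes. Field names are
# assumptions, not the real Fabric AI Skill configuration format.

example_queries = [
    {
        "question": "What were total sales last quarter?",
        "query": (
            "SELECT SUM(SalesAmount) AS TotalSales "
            "FROM fact_orders "
            "WHERE OrderDate >= DATEADD(quarter, -1, GETDATE())"
        ),
    },
]

instructions = [
    "Financial questions should be answered from the Finance Semantic Model.",
    "Questions about logs or telemetry should go to the Operations KQL database.",
]

# A similar incoming question would be steered toward the example's shape.
print(example_queries[0]["question"])
print(instructions[0])
```

The examples act as few-shot demonstrations for the underlying model, while the instructions steer its choice of source and vocabulary, so both tend to pay off most on the questions your users actually ask.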