Appian World 2023: Focus On AI, Data Management
Appian Data Fabric has been adopted by 94 percent of Appian’s new customers, despite being an optional bolt-on
This year’s Appian World conference took place in the US city of San Diego, and the focus was very much on data management and artificial intelligence.
Driven by customer demand, Appian has built robust data management and AI capabilities into its platform.
These are natural advances for a company that has specialised in managing and automating processes in practically any business field. Appian has an advantage here, as data, and its correct handling, have been part of this Virginia-based company’s make-up from the start.
Appian track record
Since it was founded back in 1999, Appian has sought to understand the real needs of organisations, providing an increasingly extensive platform so that clients can focus on their business and take advantage of their own data, which is their most valuable asset.
This understanding has seen Appian double in size in the past four years, since it last gathered customers and partners in San Diego for Appian World 2019.
This is according to Matt Calkins, Appian co-founder and CEO, who welcomed delegates during the opening keynote of Appian World 2023.
He explained how offering end-to-end process management has allowed Appian to reach a greater number of customers, covering everything from process design and low-code development through to automation, analysis, and adaptation to changing regulations.
Calkins explained that everything is available on a unified platform that grows each year, including through acquisitions such as Spain’s Novayre and Germany’s Lana Labs.
Calkins explained that this unified, end-to-end approach to managing processes has a clear objective: to deliver value to the organisations that use the platform, while increasing the speed and efficiency with which any business process is managed.
This is achieved with a modern, easy-to-use interface, but above all with the components mentioned above, namely data and artificial intelligence, the true focus of this year’s event.
Appian Data Fabric adoption
Announced at the end of last year during Appian Europe, where Silicon Spain was also present, Appian Data Fabric has quickly become one of the main drivers of the company’s strategy.
Calkins confirmed the Appian Data Fabric solution has already been adopted by 94 percent of new customers:
“The adoption of Appian Data Fabric has been extraordinary,” said Calkins. “Despite being optional, everyone wants it”.
Indeed, the CEO said that in just a few months, more than one billion queries have been run against this “virtual database.”
Essentially, Appian Data Fabric unifies all of a company’s data sources, regardless of their nature or storage location. The data always remains in its original location, but is made available in a unified way through this virtual database, making it easier to understand and relate the data, and to run processes on it from within the Appian platform itself, as if it were a single repository.
However, the main difference compared to other market alternatives, Calkins stated, is the ability not only to read the data from all these sources, but also to write back to them: “No company in the technology industry can match these data-writing capabilities.”
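To make the idea concrete, here is a minimal sketch in Python of how such a virtual layer could read from several sources in place and write changes back to the system of record. All of the names and sources (VirtualTable, the CRM table, the billing service) are illustrative assumptions for this article, not Appian’s actual API.

```python
# Hypothetical sketch of the "data fabric" idea: one virtual layer that
# queries several sources in place and routes writes back to the owner.
import sqlite3

# Source 1: a relational database (simulated in-memory).
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, tier TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme Corp', 'gold')")

# Source 2: an external billing service (simulated as a dict keyed by id).
billing_api = {1: {"balance": 1200.50}}

class VirtualTable:
    """Presents several sources as one record set; data stays where it lives."""

    def get(self, customer_id):
        # Read from each source at query time -- nothing is copied or staged.
        row = crm.execute(
            "SELECT name, tier FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        billing = billing_api.get(customer_id, {})
        return {"id": customer_id, "name": row[0], "tier": row[1], **billing}

    def update(self, customer_id, **changes):
        # Write-back: each changed field is routed to its system of record.
        if "tier" in changes:
            crm.execute(
                "UPDATE customers SET tier = ? WHERE id = ?",
                (changes["tier"], customer_id),
            )
        if "balance" in changes:
            billing_api[customer_id]["balance"] = changes["balance"]

fabric = VirtualTable()
print(fabric.get(1))               # unified view across both sources
fabric.update(1, tier="platinum")  # the change lands in the CRM, not a copy
print(fabric.get(1))
```

The point of the sketch is the second method: a read-only fabric stops at `get`, whereas the write-back capability Calkins highlights also requires the layer to know which source owns each field.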
Calkins then expanded upon the unique capabilities of Data Fabric.
“When you listen to the data fabric industry, it really means the use of APIs and connectors, while Appian provides a semantic layer that also unifies queries across these external sources, but with the results already filtered and indexed globally,” Calkins said. “In addition, the data is filtered at the row level, which is essential for maintaining privacy and complying with the regulations of each country or sector.”
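To illustrate the row-level filtering Calkins describes, here is a minimal sketch: the same query returns only the rows a given user is entitled to see. The user model, the region field, and the sample data are assumptions made for the example, not Appian’s implementation.

```python
# Hypothetical sketch of row-level filtering: access rules are applied to
# each row before results leave the data layer, whatever the source.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    regions: set  # regions this user may access, e.g. for GDPR-style rules

ORDERS = [
    {"id": 101, "region": "EU", "amount": 250},
    {"id": 102, "region": "US", "amount": 900},
    {"id": 103, "region": "EU", "amount": 430},
]

def query_orders(user: User):
    # Filtering happens per row at query time, so privacy rules hold even
    # when rows from different countries sit in the same result set.
    return [row for row in ORDERS if row["region"] in user.regions]

print(query_orders(User("eu_analyst", {"EU"})))          # sees only EU rows
print(query_orders(User("global_admin", {"EU", "US"})))  # sees everything
```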
In practice, this set of data drawn from multiple sources is well suited to training artificial intelligence (AI) algorithms, a topic examined in greater detail in a separate article.