Practical AI
AI Changes Our Relationship with Applications
The fastest and easiest path to obtaining truly practical, pragmatic, productive, and profitable AI applications is to have someone who is already expert at producing them create them for you.
You may, of course, consider hiring developers, but that may take months and be very costly. You could have some of your existing staff train to produce them, but that too can be expensive in both time and funds.
We’re at the point now, however, where many software development experts have stepped up to the challenge and made the investments necessary to provide you with a valuable resource. One such firm has recently made a significant investment in making these resources far more easily available to companies seeking such solutions.
Founded in the early days of personal computing in 1982, headquartered in Reston, VA with offices in Maryland, North Carolina, Ohio, Texas, Indiana, and India, Applied Information Sciences (AIS) has helped hundreds of global enterprises optimize their technology investments to overcome IT challenges and achieve business goals. They deliver cloud transformation solutions that help commercial and federal enterprises improve business results by migrating to the cloud, modernizing apps, and using data intelligence.
$2.7M Investment in Microsoft-Based AI Solutions for Customers
Most recently, in December 2024, AIS announced the launch of its Microsoft Solutions Center (MSC), the product of a US$2.7M investment in delivering secure, Microsoft-based solutions. The AIS MSC provides customers with ready-to-deploy, cross-functional teams of highly skilled IT professionals delivering Microsoft-endorsed specializations through proven, low-risk engagement models.
"The MSC removes the friction of forming new teams and onboarding vendors, allowing us to deliver immediate value for our clients. These short-term projects deliver results quickly without the noise of typical new team formation and vendor onboarding challenges for our clients," stated Mikala Kennell, the Director of the MSC for AIS. "Our established, purpose-built teams bring repeatable, well-tested delivery patterns that can be adapted to each client’s unique needs, ensuring impactful results without the delays of traditional team ramp-up."
AI Redefines the User Relationship with Applications
"English is the next programming language," declares Yared Tsegaye, AIS Vice President of Cloud Modernization and AI Solutions. Describing the resulting user interaction as far more immersive, Tsegaye explains, "It's not a menu anymore. It's not saying 'get this report' or 'find this for me.' You're actually telling it, 'I need to know this about this, and please present it in this way.'"
When describing the AI-enhanced applications his team is providing to customers, Tsegaye talks about 'intent' and asking 'what are you trying to do?' His descriptions position the application not so much as an application, but more as an assistant. Instead of approximating a meaning for a keyword, he describes the software as 'understanding the meaning,' wherein, "we talk to them, and they try to understand our intent and give us the answer we need." He then warns that this understanding may only be 85% accurate, requiring more involvement from a human user.
He also points out that the interaction is now multi-modal. That is, input from the user may be in the form of written or spoken words, images, recordings, and other forms of telemetry. This opens the dialogue to being far more robust, expanding the potential for deeper and broader analysis, all based on intent, what the user intends to accomplish.
Where the Knowledge Comes From
Tsegaye speaks of a time in the past when AI developers had to create a machine learning algorithm to create a large language model (LLM) containing all the knowledge the AI would need to function usefully. Today, he recommends that those wishing to use AI-based solutions not seek to build their own LLMs due to the excessive time and expense involved in doing so. Instead, he points to the growing inventory of commercially available pre-trained LLMs and suggests identifying the one most closely aligned to the application they intend to build.
When asked how that purchased LLM will be able to contain the user’s own company information, he points to Retrieval-Augmented Generation (RAG) which enables the addition of one’s own data to an existing purchased LLM.
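In broad strokes, RAG works by retrieving the most relevant pieces of your own company data at question time and handing them to the pre-trained model alongside the user's question. The sketch below illustrates that flow; it is a toy, not AIS's implementation — production systems use vector embeddings and an LLM API, so word-overlap scoring and the sample documents here stand in purely for illustration.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# Real systems embed documents as vectors and call an LLM; here retrieval
# is approximated with word-overlap scoring so the example is self-contained.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank company documents by how many words they share with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved company data to the user's question, so a
    pre-trained LLM can answer from information it was never trained on."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents the purchased LLM has never seen.
company_docs = [
    "Q3 revenue grew 12 percent, driven by cloud migration projects.",
    "The employee travel policy requires approval for trips over 500 dollars.",
    "Headquarters relocated to the Reston office in 2020.",
]

prompt = build_prompt("How much did revenue grow in Q3?", company_docs)
print(prompt)
```

The key design point is that the model itself is never retrained: the company's data lives outside the LLM and is injected into each prompt, which is why RAG avoids the time and expense of building or fine-tuning a custom model.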
Fine-Tuning Requires Telling the LLM What Not to Do!
Tsegaye acknowledges that owning a commercial LLM brings immense amounts of data not relevant to the application being developed.
He explains that the process of fine-tuning the LLM to address the desired functionality involves telling the model, in plain English, what not to answer, or to define what it should answer and tell it not to answer anything else.
One intriguing example he cites is the occasion when the LLM does not know the answer to an inquiry. "LLMs don't like to say they don't know," he explains. He recommends explicitly instructing the LLM to say it doesn't know when it doesn't really know; otherwise, he says, it will make something up.
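In practice, the plain-English guardrails Tsegaye describes typically live in a system prompt that defines what the model should answer, tells it to refuse everything else, and instructs it to admit ignorance. The sketch below shows one way to assemble such a prompt in the message format common to LLM chat APIs; the HR scope and the exact wording are hypothetical examples, not AIS's actual instructions.

```python
# A sketch of plain-English guardrail instructions for an LLM:
# define the allowed scope, refuse everything outside it, and tell the
# model to say "I don't know" rather than invent an answer.

SYSTEM_PROMPT = """You are an assistant for our HR knowledge base.
Answer only questions about company HR policies and benefits.
If a question falls outside that scope, reply: "That is outside my scope."
If the answer is not in the provided documents, say "I don't know"
rather than guessing or inventing an answer."""

def build_messages(user_question: str) -> list[dict]:
    """Assemble the system/user message list most LLM chat APIs expect."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("What is our parental leave policy?")
```

Because these instructions are ordinary English rather than code, adjusting the application's behavior often means editing the system prompt, not reprogramming the model.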
The Changing Application Paradigm
Since the earliest days of computing, the approach to programming applications was monolithic: every instruction necessary to perform every function required of the application was provided in one enormous block of programming code. Each instruction was executed sequentially, with the ability to branch elsewhere in the body of code based on various empirical decisions. Every instruction was inextricably linked to the instructions preceding and following it, and one error could bring the entire application to a halt.
To increase resilience, developers adopted the strategy of building microservices, each responsible for a single function of the entire application, and deploying them in highly portable containers that could be quickly replaced if found to be damaged. Each service, therefore, was far more loosely coupled to those before and after it, reducing or eliminating dependencies that could totally halt operation.
Creating AI-based applications takes the model even further. There is no list of instructions, and decisions are not made based on empirical values. Instead of zero or one, on or off, or black or white, decisions in the AI-driven application world have many shades of grey. There are fewer hard-shelled decisions, and more interpretations or evaluations based on criteria that may be defined by huge experiential data sets.
Instead of commands, you converse with your AI application. Instead of reports, you anticipate outcomes and responses. You find yourself in an immersive, productive discussion as to how best to fulfill your objectives. This holds the promise of a new age of applications and a new definition of computing.
To discuss your ideas and plans around an AI-driven future, one of the resources you may wish to reach out to is AIS. To contact AIS, click here.
About the Author
Technologist, creator of compelling content, and senior "resultant" Howard M. Cohen has been in the information technology industry for more than four decades. He has held senior executive positions in many of the top channel partner organizations and he currently writes for and about IT and the IT channel.