AI Watch

When Good Chatbots Go Bad: Would You Like Fries with That Tuba?

Several stories caught my eye recently, reminding me how brittle a conversational AI deployment can be. The headline grabber was McDonald's decision to end, for now, its AI ordering system at some of its drive-thrus after customer complaints went viral on social media. The fast-food giant partnered with IBM for a test run of its Automated Order Taker at more than 100 restaurants. The system will be shut off no later than July 26, according to a memo sent to franchisees late last week, CNBC reported.

"As we move forward, our work with IBM has given us the confidence that a voice ordering solution for drive-thru will be part of our restaurants' future," the company said in a statement. "We see tremendous opportunity in advancing our restaurant technology and will continue to evaluate long-term, scalable solutions that will help us make an informed decision on a future voice ordering solution by the end of the year."

Fast-food chains are increasingly integrating generative AI into their systems. McDonald's, Wendy's, Hardee's, Carl's Jr., and Del Taco are among the companies utilizing AI technology at their drive-thrus. Earlier this year, Yum Brands, the parent company of Taco Bell and KFC, declared an "AI-first mentality" for its restaurants.

I'm not going to try to pin-point what went wrong under the golden arches, but I would like to share some of my thoughts on how to build responsible, resilient, and robust chat systems for the customer service experience, and why it’s more important than ever to keep building them.

One reason conversational chatbots jump the track is a failure to consider their ability to lie and even make up business processes on the fly, which can result in unexpected (and unwelcome) outcomes. The fix here is the application of a simple framework for building bots that are safe and resilient in production. I call it The Four Ps for Building a Good Bot Knowledge Base.

Purpose: Use the system prompt to ensure the bot knows its purpose. What is the job you want it to do? How do you want it to act? What tone do you want it to have"?

People: Teach the bot who it's there to help and how it can help them. Provide details on the customer avatars and information those avatars might be looking for.

Precision: Establish guardrails by providing examples of what both desirable conversations and undesirable conversations look like. Give three to five examples for each. This will provide the bot with essential guidance on how to make better decisions when it encounters similar conversations.

Perfection: How can you make a bot perfect? You can’t, of course, but you can identify some clear metrics to ensure that you're checking your bot's performance and helping it to become better with each interaction. Here are some of the metrics I would encourage you to consider:

Relevance: Are the responses consistently relevant to what the user was asking?
Coherence: Are the responses coherent and understandable by the user?
Conciseness: Are the responses short and clear? Sometimes generative models can be overly verbose in their responses creating the opportunity for confusion and misunderstanding. This is critically important to monitor.
Factual Accuracy: This is one of the biggest concerns about using generative models in customer service solutions. What happens if the model makes a mistake or provides an incorrect or even made-up answer? You must create a process that insures you are checking for factual accuracy over time.
Emotional Intelligence: It's important to make sure your model has the appropriate tone for a specific conversation or brand. For example, if you're managing life insurance and customers seeking help will have experienced a death of a loved one, you'll absolutely want to consider the tone and expressed empathy the bot conveys in its communication.

Another reason chatbots fail is a lack of diversity in the teams that test the model before it goes to production. Having a diverse team of testers (ethnically and otherwise) offers several valuable benefits:

Broader Perspectives: Diverse backgrounds bring a wide range of perspectives and experiences, which can lead to more comprehensive testing scenarios and better identification of potential issues.

Enhanced Creativity and Innovation: A mix of cultural viewpoints can foster creativity and innovative thinking, leading to novel solutions and improvements in software quality.

Improved Problem Solving: Diverse teams are often better at problem-solving due to the variety of approaches and ideas contributed by team members.

Better User Representation: A diverse team can better represent the diversity of the end-users, ensuring that the software is tested for usability and accessibility across different demographics.

Increased Coverage of Edge Cases: Ethnically diverse testers may identify edge cases and cultural nuances that might be overlooked by a more homogenous team.

Enhanced Communication Skills: Working in a diverse environment can improve communication skills and foster an inclusive atmosphere, benefiting overall team collaboration.

Cultural Sensitivity and Compliance: Diverse teams are more likely to be aware of and sensitive to cultural differences and legal requirements, helping ensure that the software is appropriate and compliant for different markets.

Competitive Advantage: Companies with diverse teams can better understand and cater to a global customer base, giving them a competitive edge in the marketplace.

During a recent AI Read Teaming workshop, for which I was presenting to a financial institution, one of the students realized the team that was building and testing the application in her company lacked diversity. After learning the value of creating a diverse team of testers, the company decided to use an AI Red Team to stretch and even break models ethnically to ensure that the most unlikely scenarios are caught early, before real customers had begun to interact with their solutions.

Finally, chatbots often fail due to a lack of clarity around those success metrics I mentioned earlier. But more specifically, what does success look like? When you build any AI solution it's critical to identify key metrics that establish when you can call an AI project successful. What is success? Is it saving time? Creating revenue? Saving expenses? Whatever metric you select must allow you to establish a baseline, where you started, and demonstrate clear success in moving that number up or down. Many AI projects fail because leaders do not understand the importance of documenting these critical metrics, KPIs or OKRs, to demonstrate the return on investment for an AI project.

Knowing all that can go wrong with a chatbot implementation is likely to raise concerns about whether this is the right use case. However, providing a system that allows a customer to get the right answer to the right question at the right time has proven to be incredibly valuable to a wide range of businesses. We're seeing customer service chatbots increase sales, increase customer satisfaction, and increase the lifetime value of the customer.

When built safely and responsibly, chat systems for the customer service can shift the trajectory of your company and allow you to tap into the benefits of building inclusive innovation at scale.

BTW: Here's a LinkedIn post from Simon Stenning, founder of, that illustrates what can happen when you fail to implement even the simplest forms of responsible and safe practices for conversational agents.

Posted by Noelle Russell on 07/03/20240 comments

AI for Personalized Education: The Three Pillars of Personalized Learning

The recent Microsoft Build conference brought me once again to Seattle where I have spent much of my career. Build is one of my favorite developer conferences because it brings together dreamers who are builders. Over the last year Microsoft has continued to release some incredible tools to help bring AI application builders’ dreams to life. I've had the chance to take the stage at several Build conferences, but this year was different from my usual experience, because I wasn’t locked into a predetermined demo in an executive keynote. This time I was able to define my own session and share my passion and projects with the audience.  

During my sessions, I focused on two topics I am especially passionate about: AI for Personalized Education and the power of being connected to a global AI community. In my demo session about AI for Personalized Education, I covered the three pillars for personalized learning: 

Enhanced Learning through Assistive Technology

As you may know, I have a son who was born with disabilities. What I've learned as he has grown is that all my children benefit from the technology that helps him every day.  In the first morning keynote, Satya Nadella spoke about the importance of building a more accessible world. He announced a new laptop called the "Copilot PC ." I was able to get my hands on one and take it for a test drive. I was very impressed by the new NPU (Neural Processing Unit) that was added to the machine; it allowed offloading of real-time inferencing from the CPU and GPUs. This allows for those who need assistive technology to use it without the "Accessibility Tax "—in other words you can use accessibility technology without your machine getting overloaded and limiting the number of apps or tasks you can take on at the same time. 

One of the accessibility apps that took Build by storm was Cephable, a free software platform for PCs and Macs that allows users to add adaptive voice controls, head movements, face expressions, and virtual buttons/switches to use as controls in their favorite games and apps. We loaded Cephable onto the new Copilot PC and were able to map keystrokes that controlled an application to head movements of a user. The application was created by Alex Dunn and the idea came from his desire to create a way for his brother to use gestures to keep up with his friends while playing Minecraft. I tested the application and even downloaded it to my machine. I used it to navigate a PowerPoint presentation, and used gestures for common coding tasks, such as tilting my head to save my code. 

Those with significant physical limitations can leverage Cephable to customize application profiles to assist with any task controlled with a key binding, and with the new NPU, the CPU stays level. And it's free for individuals and educators, so test it out!

Tailored Learning Paths for Success

Another highlight of Build was the Imagine Cup! The Microsoft Imagine Cup is an annual global competition that challenges students to use technology to solve real-world problems.  I met one of the finalists and I was so inspired by what they had built. Using Azure OpenAI Service, built an AI-powered productivity coach to help people with ADHD who are experiencing task paralysis. The coach asks questions to identify the user's obstacles, suggests strategies, and teaches the user about their work style.

You can try this one too! What I liked about it was the ability for a teacher or a student to use it to help break down work in a personalized way and have an AI coach that understands the challenges of someone with ADHD.

Empowering Caregivers in Education

I also shared some of the incredible advancements being made by Erin Reddick , founder and CEO of ChatBlackGPT, which developed a chatbot dedicated to offering insights and perspectives rooted in Black, African American, and African information sources and culturally aware AI. She joined me on stage to share how her company has created a custom model to help care givers understand the special education process and the creation of individualized education programs. 

It was an incredible week spent learning about how AI is not only helping enterprises, but individuals, regardless of their abilities and challenges.

Other News and Updates 

In other news, this week I was at SAP Sapphire where some very exciting announcements were made! In recent months, I've been asked by companies to help them unlock the power of their Microsoft Copilot licenses. I created a Microsoft CoPilot Coaching program to provide these companies with the right skills to amplify the productivity of their teams and successfully integrate Copilot into their everyday operations. This week, SAP announced an exciting release : SAPs generative assistant Joule is now integrated with Microsoft Copilot, making it possible for business applications in SAP to use natural language to assist in the configuration of even the most complicated business applications. 

Keep an eye out for other exciting events this month. I will be one of the keynoters at Splunk’s annual conference, as well as hosting a video training to help users complete the Microsoft Build AI Skills Challenge. It's not too late to join and begin to learn by doing! 

The world will continue to evolve its understanding and capability with AI. Now, more than ever, we need mindful, intentional , and responsible leadership to guide the creation and use of this technology to ensure we build systems that are helpful and not harmful. Reach out to me on LinkedIn anytime as you progress in your responsible AI journey.

Posted by Noelle Russell on 06/12/20240 comments

AI Watch: Building AI Solutions with Scaled Agile in Government

This week I had the opportunity to join the leaders and practitioners implementing AI projects with scaled agile in government organizations. There was quite a bit of buzz around the topic of AI, especially with some of the new federal guidance that had been released. The federal government released a memo that will help federal agencies advance the efforts around governance, innovation, and risk management when it comes to AI. Here are three main points that echo what I have been working on with companies in the private sector.

The importance of AI governance
AI governance is a key component of an AI strategy that helps to identify and mitigate risk. It's important for companies to identify a chief leader responsible for spearheading this effort, establishing the vision, and delivering accountable results. 

Responsible AI in Innovation
It's critical to ensure that all AI projects start with responsible AI safeguards in place. This means that organizations must think about building in AI safety systems and implementing an AI Red Teaming strategy. This includes having human-ai experience testing, prompt engineering, model selection and evaluation guidelines as well as deploying to a well architected public cloud.

Establishing a framework for managing and mitigating risk
This can be a complicated part of the process, but organizations do not have to go through it blindly. NIST (National Institute of Standards and Technology has issued an initial public draft of the AI Risk Framework. This framework helps both federal agencies and commercial companies to have a common understanding of risk and what to be considering as AI projects begin and continue.

The good news is that there are long-researched approaches for managing risk and fairness in AI deployments. As a company, whether government-aligned or not, there are clear guidelines and best practices that have emerged you can take advantage of as you begin your journey.

Among the most needed skills in these types of projects are those practiced in the art and science of scaled Agile. In the past 2 years and 25+ generative AI projects, I have come to realize that the scaled agelist is the glue that holds the engineering and AI work together and sees it to a successful end. 

When it comes to AI projects, it's less about the technology as it is about what will be available through applied APIs, and much more about humans and change. In a recent "Change Management in the Age of AI" workshop, one of the attendees asked how they would be able to motivate their leaders to invest and innovate when they seem to be focused on cutting costs and saving money. I explained that we are now in a world that prefers show versus tell. People want to see the value before they invest in it. This is why I have coined the phrase "from the boardroom to the whiteboard to the keyboard." It’s important to get alignment between the business leadership and its technical leadership, and then to build something that is small, yet remarkable, to demonstrate the concept. I call it an "MRP," or minimum remarkable product

Innovation can be stifled by taking on too much too fast. A minimum remarkable product focuses on a small, meaningful feature that generates an impressive result. There are three benefits to starting with an MRP:

  1. You will create a project that has a specific goal with measurable results
  2. You will have a short timeline and be able to deliver results quickly
  3. You will generate momentum, which will empower you to take on more MRPs

One way to find an MRP is to study the activities in which your customers are engaged. Specifically, examine those difficult activities your customer must engage in. Look for that pain point. If you can find a process that's filled with friction, you'll have an opportunity to leverage AI to reduce that friction and delight the user. Always focus on creating a remarkable solution that delights the user you are trying to help. This will create the momentum you need to take on additional use cases and ensure that you stay focused on the most important thing: features your customers need and will love.

If you have questions about MRPs and building responsible AI at scale, reach out to me on LinkedIn any time!

Posted by Noelle Russell on 05/20/20240 comments

AI Watch: Stanford's Annual AI Index Reveals Top Trends

This week, Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) released the highly anticipated seventh edition of its annual report on AI trends ("Artificial  Intelligence Index Report 2024"). The index tracks, collates, distills, and visualizes data related to AI, and it helps to make the rapidly evolving world of AI more generally accessible.

There's a lot to like in this report, including the research team's powerful mission statement:

"Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI."

This year there are more eyes on this report than ever, and I thought it would be useful to underscore three insights you won't want to miss:

  • Industry is leading academia in model development: This echoes my own experience. We are seeing a new interest in investing and building AI systems by the industries that need it most. Especially in finance and healthcare, we are seeing industry drive innovation and research, while academic research is lagging. This makes sense to me, as AI is becoming more accessible and economical; companies don’t have to wait for academic research. I'm also seeing organizations invest in private research teams, rather than collaborating with academic institutions. I'm excited to report that the number of industry and academic collaborations reached an all-time high this year, and I see more of that in our future.
  • AI regulation in the US is on the rise, and the US leads the world in AI Model Development: We saw a 50% jump in AI regulation in the past year. We also saw regulation appear at local, state, and federal levels. The Executive Order was released by the White House last month set some clear guidelines for government agencies, but also US companies, to help navigate the complexities arising from fast-paced of innovation. I get asked often what country is leading the race in AI. If you use the metric of models in production, the US is leading with 61 notable models being deployed.
  • AI helps workers to be more productive and leads to higher quality work This observation is a highlight for me personally. As companies begin to navigate the world of AI, they're often looking for quick wins. One way to start seeing a positive impact on your business is to educate your workforce on how to use the AI that is rapidly showing up in the tools they use every day. At the AI Leadership Institute, we did our own survey and found that 70% of workers who received training saw an immediate boost in productivity and an increase in higher value work. This is such an important step that I created an Upskilling Guide for the teams that I work with help them get a jump start on this new, but required, skillset.

Among other things, this report crystalized much of what I experienced earlier this year after participating in the "Foundation Model Scholars Program," one of the inaugural executive workshops offered by HAI. The workshop provided access to decades-old research that led to the technologies in and surrounding Generative AI, and it gave me and my team insights that allowed us to see ahead of the curve and anticipate what's coming next. It also fueled many of the successful deployments I was part of, and now we can see those patterns emerging at scale.

The report is free, and I strongly encourage you to read it.

This week I also had the chance to speak to the National Association of Broadcasters about the importance of measuring the benefits and the risks when deploying GenAI. As you can imagine, misinformation, fake news, and deepfakes were a big part of the conversations, especially in an election year. There were hundreds of new and incumbent companies sharing how they have evolved their products and services with AI. Check out the NAB keynote for a unique perspective from the world of broadcasting!

Until next time, embrace the world of learning by doing, and share what you are learning as you try out new AI features in the products you use every day! As always, connect with me on LinkedIn with any questions, feedback, or ideas for an article.

Posted by Noelle Russell on 04/16/20240 comments

AI Watch: Leading Safe AI System Development--Research, Policy, and Engineering

This past week I had the opportunity to witness the progression of AI excitement across three separate and distinct pathways.

First, in my time with the executives at the ACHE 2024 Congress on Healthcare Leadership in Chicago, I delivered one of the opening keynotes and shared some of the exciting advancements that are making the lives of healthcare practitioners better and more efficient. As many of these healthcare companies are beginning to experiment with AI solutions, I reminded them that going from the playground to production would not only require new technology, but also a new mindset. During the Q&A I got two great questions, and I wanted to share my answers with you:

Question: What is the most exciting thing you've experienced in healthcare and AI?

Answer: I've seen rapid advancement of the use of AI in the healthcare industry and one of my favorite use cases is the development of personalized medicine and advancing patient-centered care. Imagine a world where we combine what we're learning from companies like 23andMe with predictive analytics to help us craft highly tailored personalized treatment plans. We have more data about our biological and physiological makeup that we can now use to create better treatment options personalized by demographic, and sociographic elements.

Question: What are you most worried about? 

Answer: I often get this question across industries like finance and healthcare. As a mom of a child with special needs and a caregiver to my dad who suffered a traumatic brain injury, I worry that technologists are beginning to enthusiastically build solutions in healthcare that might unintentionally perpetuate and amplify bias that leads to the creation of systems that generate unfair outcomes. I'm so worried about this, that I now offer services that safeguard generative AI applications as we move from the playground to production. Microsoft recently launched a set of tools that will make this accessible to every organization. I’ll share more about this in a moment.

Also, Adobe held its annual Adobe Summit 2024 in Las Vegas this week. I was an Adobe AI Insider and had a chance not only to get front row seats to the amazing keynotes, but also have direct one-on-one Q&A time with some of the Adobe Digital Executives. Here are a couple announcements I'm most excited about:

Adobe announced GenStudio for creating marketing assets at scale with the help of AI. They revealed new and specialized tooling to help creatives spend more time creating and less time on the routine tasks that often fill a creative’s day as they roll out what they have created to the world.

Microsoft and Adobe announced the integration of Adobe Experience Cloud workflows and insights with Microsoft Copilot for Microsoft 365 to help marketers overcome application and data silos and more efficiently manage everyday workflows. Imagine being able to get Generative AI to help with the creation of your brief for a new campaign and have that brief help you create consistency across executive reviews, assets, reports, and messages. Insight integration across Microsoft Teams, Word and PowerPoint means less hopping around across tools to get the work done, allowing marketers and creatives to do more in less time. Let’s hope the time they save allows them to think and be more creative and we don’t fill those new-found hours with meetings instead.

Finally, I had the unique opportunity to meet with Microsoft's Responsible AI team last week in New York to test drive their new tooling for helping organizations safeguard their AI systems. I was asked to be a part of this exclusive workshop because I'm a Microsoft AI MVP and I offer AI Red Teaming services through my company, the AI Leadership Institute. One of my key takeaways from this event was the way the team was structured, and I think every company can learn something from this.

Three Microsoft executives participated in the workshop: Sarah Bird, Chief Product Officer of Responsible AI; Natasha Crampton, Chief Responsible AI Officer; and Hanna Wallach, Partner Research Manager. Together, these leaders establish a kind of three-legged stool for responsible AI at scale. You need someone who leads engineering, someone who owns policy and governance, and someone who spearheads the research side of the organization. This team announced updates to the tooling to help teams build better and more robust safety systems for their Generative AI applications.

As you begin to go from AI playgrounds to production, remember to consider not only the enthusiastic part of AI, but also how you are safeguarding these solutions for safe and equitable outcomes for everyone involved. AI Red Teaming can help you go from enthusiasm to execution responsibly. Next time we will dive into how to build a safety system for your generative AI applications. As always, connect with me on LinkedIn to learn more about what's happening in responsible AI around the globe.

Posted by Noelle Russell on 04/02/20240 comments

AI Watch: Five Strategies for Building Responsible AI at Scale

This week I'm attending the Microsoft Global MVP Summit. The event provides MVP awardees with an annual opportunity to connect with each other and with Microsoft Product Leadership to talk about the future of technology and developer experience at the company.

While much of what was discussed is still under NDA (non-disclosure), I can share some themes that I found interesting as the race to responsibly use AI at scale continues. I have summarized 5 key messages from my week with like-minded experts and innovators leading product development.

Year of the Agent
2023 was the year of the Chatbot (renewed) and 2024 is the year of the Agent. An Agent is a purpose built, autonomous AI solution that operates within the bounds we give it. Last month, Microsoft announced the public preview of the Assistants API that enables the creation of high-quality copilot/generative experiences inside applications. This is interesting because it's the first of a series of capabilities aimed at giving developers the ability to integrate generative AI into any application more easily. Imagine building an insurance application with this new API, and then being able to offer customers a real-time conversational interface that can call and execute actions autonomously. You could set rules and thresholds that allow certain types of transactions to be facilitated by an agent that can call other agents and APIs and orchestrate an entire workflow with natural language.

Multi-task Orchestration
Over the past 6 months I've been working with executive teams to evolve their thinking around responsible AI at scale. Rather than focusing on building that killer app, or a single use case, we need to shift our thinking to how to orchestrate models together. I call this multi-task orchestration. In a recent executive workshop, a client shared that they have more than 900 unique use cases that they need to consider and prioritize. I led them through an exercise that helped them pick the right project at the right time with the right model. We also focused on selecting use cases that would create opportunities to unlock more value over time, rather than focusing on a single use case that couldn’t be built upon. The next year will bring more sophistication in how we use both large and small language models to solve the right problem at the right time for the right customer as cost effectively as possible.

Responsible AI
Another key theme of the summit was advancements in the tooling offered by the Responsible AI team at Microsoft. There are several ways to think about your strategy to build responsible AI solutions at scale. Here are some principles to consider: Transparency, Fairness, Trust, Reliability, and Explainability. I'll have more to say about these in future posts. If you can’t wait to dive in, you can check out this article from Microsoft on the ABCs of Responsible AI.

Safe Deployments are All About Risk
In the world of generative AI, the quickest way to fail is to blindly launch your solutions without weighing the risks involved. The list of risks you should consider includes grounded output errors (hallucinations), jail breaking attempts, prompt injection, copyright infringement, and manipulation of human behavior (deep fakes, for example). With these risks in mind, you'll want to establish a red team, something that my company offers as a service, or creating one for your own AI efforts. This can help you identify risks, measure their potential impact (for prioritization), and mitigate risks in the system. You can expect to see more and more tools designed to help these team become more effective this year.

It's All About the Builders
It was very exciting to see the tools and resources that are being released for AI builders. I've had to navigate across so many Microsoft studio products over the years, that I have a separate folder dedicated to the bookmarks to keep them all straight. Thankfully, Microsoft felt our collective pain and launched a single AI studio this year. Azure AI Studio, currently in beta, combines the development tools needed to build generative AI applications with a unified studio for all the other Azure AI services, such as speech and computer vision.

Another tool that's available now is Microsoft's Human-AI Experience (HAX) Toolkit. Developed through a collaboration between Microsoft Research and Aether, Microsoft's advisory body on AI Ethics and Effects in Engineering and Research, the Toolkit was designed for teams building user-facing AI products. It was developed specifically to help developers conceptualize what an AI system will do and how it will behave. I recommend using it as early in your design process as you can.

In this fast-moving world of generative AI, it’s critical to be mindful of our approach. Remember the three keys to any responsible AI solution, build it so it can be Helpful, Honest, and Harmless. 

It's been a fun week in Redmond, and I can’t wait for Microsoft BUILD in May!


Posted by Noelle Russell on 03/14/20240 comments

AI Watch: The House that AI Built

This week I'm presenting at the International Builders' Show, the largest annual light construction conference in the world. NAHB, the organizer, expects to attract nearly 70,000 visitors from more than 100 countries to Las Vegas for this event. I'm appearing on the Game Changers track, and I'll be talking about how artificial intelligence is quickly becoming a game-changing technology in home building. Chances are many of the conference attendees are already feeling the influence of AI, but I'm going to be presenting use cases that underscore the innovative ways AI will be changing the world of home building. I'd like to share three examples here:

Welcome to AI Watch

Welcome to the inaugural post of our new "AI Watch" blog, written by Noelle Russell, Microsoft MVP and founder and Chief AI Officer at the AI Leadership Institute. An expert in the Azure Machine Learning Platform and  Azure AI Services, Noelle specializes in helping companies with emerging technologies, cloud, AI and generative AI. You can find her on X at @NoelleRussell_.

Smart Home: Some of us have been anticipating the maturation of the smart home for about a decade. It's been a long trip, and the road has been a bumpy one, but we're getting there. We now have smart thermostats, smart lightbulbs, and vacuum cleaners that know more about the nooks and crannies of our homes than we do. And we have smart speakers that get smarter with every generation. New home buyers are looking for residences that are equipped with interconnected devices and systems that can be controlled and automated through a central hub or smartphone app.

Speaking personally, smart-home tech has been a blessing to me and my family. I have a son with Down Syndrome and a father who suffered a traumatic brain injury, and their needs have motivated me to find, test, and deploy AI-enable home solutions to make their lives better. I have more than 100 AI-enabled devices in my home, and I've tested dozens more. I'm closer to the cutting edge than most homeowners, but not by much, and the gap is closing, fast. In the future, builders will find ways to integrate technology into homes in contextually appropriate ways that will drive increased sustainability and satisfaction of the homeowners.

Digital Twins for Home Building: Virtual objects known as "digital twins" have found their way into the home construction industry, and generative AI has become a critical enabler of this technology. GenAI can be used to create fully immersive virtual homes that builders can "walk through" as they make decisions about designs and materials. Builders can simulate different design options and evaluate their performance before they've driven a single nail, reducing—sometimes drastically—the need for costly changes during construction. 

Virtual twins can ingest a vast amount of data and use it to replicate processes that predict potential performance outcomes and issues that might not be obvious in the real-world. They can simulate the performance of various sustainable materials and construction techniques. They can make it possible to analyze data about bio-based materials, passive solar designs, and plans for natural light, and then incorporate that data into building plans.

Productivity: One of the greatest potential limiters of growth in construction is productivity. While other industries are increasing their ability to produce, in construction that ability is declining. According to Fannie Mae, we will be about 400 million units short of the needed inventory of housing in the US over the next decade. AI-based technologies can help construction companies connect designers with contractors and builders, reducing the friction currently associated with mass producing housing units. AI can be used to analyze regulation in real time to offer insights that help construction companies build the right thing, at the right time, under certain regulations. AI can be the amplifier of productivity that the industry has been so hungry for.
The explosion of AI-based technologies and solutions is fueling an urgent need to rethink how homes are built. AI empowers us to reinvent the way we mass produce homes in a responsible, economical, and sustainable way, allowing us to meet the shortage of homes with growing productivity and increased outputs.

Join me next time when I will share the latest from my trip to the Microsoft Global MVP Summit.

Posted by Noelle Russell on 03/01/20240 comments