Managing AI projects requires a different approach from traditional IT project management. What are these differences, and how can you manage an AI project for success?
In 2019, roughly 85% of AI projects ultimately failed, and 96% of organizations reported encountering problems with data quality, data labeling, and building model confidence. It was also reported that senior management lacked an understanding of artificial intelligence and the value it could deliver.
Today, AI (and AI projects) are still in the early stages of deployment. Companies that use AI typically do so through prefabricated systems from outside vendors, where the vendor, not the customer company, has developed the AI.
Going forward, however, more companies will find a reason to develop their own internal AI—and that means defining a project management approach that works with AI.
How is an AI project different from traditional projects?
In traditional project management, even when it is done with methodologies like Agile, project success is defined by the software that is produced and by a well-understood process. Even if development isn’t linear, as in Agile, the basic steps are still define, design, develop, test and deploy. The data these apps operate on is almost always structured system-of-record data that has already been vetted for quality and is quite mature in form and substance.
Because the data that traditional software development operates on is reliable, and because everyone understands the development steps used in the project, there is considerably less uncertainty in traditional software development projects. This makes it possible to attach credible project deadlines based upon past project history.
Unfortunately, AI projects don’t have this same stability, nor is it as easy to assign hard deadlines for project completion.
Navigating uncertainty in AI projects
There is no absolute “end” to an AI project, unless it is one you are pulling the plug on.
If you are an AI project manager, you have to live with that “no end” reality—and so do management and the sponsors of your project.
Why isn’t there an end?
Because AI draws its conclusions from the data it operates on, and that data is constantly changing. As you add new sources of data, outcomes will change. The AI itself will also contain machine learning (ML) that recognizes data patterns and learns from them, and that learning can change outcomes, too.
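To make that concrete, here is a minimal sketch (using scikit-learn, with made-up data and a hypothetical scoring case) of how retraining on a new data source can change the outcome the AI returns for the very same input:

```python
# A minimal sketch of outcomes shifting as data changes.
# Data sources and features here are hypothetical, not from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Original data source: 500 rows, 3 features, with one learned pattern
X_old = rng.normal(0, 1, size=(500, 3))
y_old = (X_old[:, 0] + X_old[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_old, y_old)
sample = np.array([[0.2, -0.4, 1.0]])
print("Prediction before new data:", model.predict(sample))

# A new data source arrives with a shifted distribution and a different
# underlying pattern; retraining on the combined data can flip earlier outcomes.
X_new = rng.normal(1.5, 1, size=(500, 3))
y_new = (X_new[:, 2] > 1.5).astype(int)

model = LogisticRegression().fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
)
print("Prediction after new data:", model.predict(sample))
```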
Your management and users should understand (and expect) that as the data changes, outcomes can change, too. Part of this process is accepting uncertainty as part of AI system evolution.
Defining your AI project deliverable
At some point, though, an AI project should be considered finished from a project perspective.
The goal with most AI projects is to attain at least 95% conformity between the AI’s outcomes and what subject matter experts would conclude. Once this 95% threshold is reached, the AI is deemed accurate enough to go live, and it is at this point that the project should be declared complete.
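In practice, that conformity check can be as simple as measuring agreement between the AI’s outcomes and the subject matter experts’ conclusions on the same cases. A minimal sketch, assuming you have both sets of labels on hand (the variable names and go-live policy here are illustrative):

```python
# A minimal sketch of the 95% conformity gate described above.
# AI outcomes and SME conclusions for the same cases (illustrative values).
ai_outcomes = ["approve", "deny", "approve", "approve", "deny"]
sme_conclusions = ["approve", "deny", "approve", "deny", "deny"]

matches = sum(a == s for a, s in zip(ai_outcomes, sme_conclusions))
conformity = matches / len(sme_conclusions)

print(f"Conformity with SMEs: {conformity:.1%}")
if conformity >= 0.95:
    print("Threshold met: declare the project complete and go live.")
else:
    print("Below threshold: keep iterating on the data and the model.")
```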
That doesn’t mean that all work on the resulting AI app or system is over. There will be “drift” over time that can cause the AI to lose some of its accuracy. At that point, the AI will need to be recalibrated to once again deliver optimal quality, but this is software maintenance.
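One common way to catch that drift is to spot-check the live AI against fresh SME-labeled samples and flag recalibration when a rolling accuracy falls below a tolerance you set. A minimal sketch, where the window size and thresholds are assumptions rather than fixed rules:

```python
# A minimal sketch of a drift check that triggers recalibration.
# Window size and tolerance are assumptions; tune them to your project.
from collections import deque

RECALIBRATE_BELOW = 0.90         # drift tolerance before maintenance kicks in
recent_checks = deque(maxlen=6)  # rolling window of monthly spot checks

def record_spot_check(accuracy: float) -> None:
    """Log one accuracy spot check and flag when recalibration is due."""
    recent_checks.append(accuracy)
    rolling = sum(recent_checks) / len(recent_checks)
    if rolling < RECALIBRATE_BELOW:
        print(f"Rolling accuracy {rolling:.1%} -> schedule recalibration.")
    else:
        print(f"Rolling accuracy {rolling:.1%} -> within tolerance.")

# Accuracy slowly drifting down across six monthly checks
for acc in [0.96, 0.94, 0.91, 0.88, 0.85, 0.82]:
    record_spot_check(acc)
```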
Do AI project deliverables always go as planned?
The answer is a resounding “No!”
First, there are times when the data being used by the AI isn’t properly prepared, especially when new and unfamiliar data sources are introduced. Dirty data will distort AI results (a quick screening sketch follows at the end of this section).
Second, if your business case changes (and with it the value users want to derive from the AI), the AI will no longer fit what the company wants.

Finally, there are cases when AI projects simply don’t work, no matter how hard you try. That possibility should be discussed upfront with management, and everyone should be on board to “pull the plug” as soon as an AI project shows it can’t succeed.
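On the dirty-data point above, a basic screening pass over a new data source can catch problems before the AI consumes them. A minimal sketch using pandas, with hypothetical column names and validity rules:

```python
# A minimal sketch of screening a new, unfamiliar data source for dirty data.
# Column names, sample values and validity rules are hypothetical.
import pandas as pd

new_source = pd.DataFrame({
    "customer_age": [34, None, 27, 181, 45],                # None and 181 are dirty
    "order_total": [120.50, 89.99, -5.00, 42.00, 310.25],   # -5.00 is dirty
})

age = new_source["customer_age"]
issues = {
    "missing_age": age.isna().sum(),
    "age_out_of_range": (~age.between(0, 120) & age.notna()).sum(),
    "negative_total": (new_source["order_total"] < 0).sum(),
}

print(issues)
if any(count > 0 for count in issues.values()):
    print("Dirty data detected: clean this source before the AI consumes it.")
```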