Artificial intelligence has been hyped as a "startup supercharger," capable of growing a business at a scale and speed that's impossible for humans to achieve alone. But pushing forward with innovative technologies in a responsible way can be tricky, which is why Responsible Innovation Labs (RIL), a coalition of forward-looking tech founders and investors, has partnered with the United States Department of Commerce to develop a five-step protocol for the responsible use of A.I. by startups.
The coalition also released a list of 35 venture capital firms that have formally committed to encouraging their portfolio companies to adopt the responsible A.I. protocol, and to taking that protocol into account when conducting due diligence on potential investments. "Investors are going to increasingly be asking about risk when talking about funding with A.I. startups," says Responsible Innovation Labs executive director Gaurab Bansal.
Here are the five steps you can take to protect your business while still taking advantage of all A.I. has to offer, with commentary from Bansal.
1. Obtain organizational buy-in
The first step toward implementing responsible A.I. in your business is to get your key stakeholders on the same page. For some startups, the best way to keep everyone informed may be a recurring meeting in which representatives from all parts of your business can offer their perspectives on the best way to use the tech. These "Responsible A.I. Key Stakeholder Forums," as RIL calls them, should be inclusive, so everyone involved feels comfortable sharing their ideas.
"When various parts of a company are siloed off," says Bansal, "risk gets magnified, and companies aren't able to quickly respond to crises. Having people with different areas of expertise in the room is going to be critical when making choices regarding A.I."
2. Forecast risks and benefits
Once you've obtained buy-in, it's time to sit down and conduct a thorough risk/benefit assessment. That assessment should cover each A.I. model's reliability, security, and potential vulnerabilities, along with its bias (or lack thereof), its data set, and its alignment with your company's values.
And it isn't just private investors who are going to want risk-assessment data, as "public markets are going to care about those risks too," says Bansal. "So are regulators. Getting ahead of that curve is a heck of a lot easier than trying to back into forecasting use cases and potential harms."
3. Audit and test your products
Now that you've figured out your risks and benefits, you can actually build a product. Once that product has been built, your next step is to conduct regular testing and auditing in order to fully understand the tech you've created. By conducting audits, you can come up with strategies to mitigate the risks that your specific A.I. use case opens you up to. RIL suggests monitoring your systems for errors, disclosing key information about your models to the public, involving humans when using A.I. to make decisions, and limiting the use of your models to your stated use cases.
"There's no such thing as perfect. Every company updates its software, but there's a baseline of responsibility that companies should undertake before putting something out to their customer base," says Bansal. "You're always going to be in a better position to build and lead if you've thought through these things ahead of time."
4. Foster trust through transparency
You need to be able to easily communicate how the use of A.I. aligns with your company's mission, as well as be able to answer more technical questions about how your A.I. works. According to the RIL's guidelines, entrepreneurs using A.I. should consider publishing a value statement that communicates how their business intends to use the tech, what the potential risks are, and what actions the company is taking to mitigate those risks. Another suggestion is to publish a Model Card for each new version of an A.I. model that you deploy. These cards contain information like how the model is used, as well as info on safety evaluations conducted to test the model.
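RIL's guidelines don't prescribe an exact Model Card format, but the idea can be illustrated with a small sketch. In the snippet below, every field name and value is hypothetical, chosen only to show the kind of information a card might track (intended use, data provenance, safety evaluations) for each deployed model version:

```python
# Hypothetical model card, stored as structured data alongside a release.
# Field names and contents are illustrative assumptions, not an official schema.
model_card = {
    "model_name": "example-classifier",  # hypothetical model
    "version": "1.2.0",
    "intended_use": "Ranking customer support tickets by urgency.",
    "out_of_scope_uses": ["Medical or legal decision-making"],
    "training_data": "Anonymized internal support tickets, 2022-2024.",
    "safety_evaluations": [
        {"test": "demographic bias audit", "result": "passed"},
        {"test": "adversarial input review", "result": "passed with mitigations"},
    ],
    "known_limitations": ["Accuracy degrades on non-English tickets"],
}

def render_model_card(card: dict) -> str:
    """Render the card as a plain-text summary suitable for publishing."""
    lines = [f"Model Card: {card['model_name']} v{card['version']}"]
    lines.append(f"Intended use: {card['intended_use']}")
    for evaluation in card["safety_evaluations"]:
        lines.append(f"Safety eval - {evaluation['test']}: {evaluation['result']}")
    return "\n".join(lines)

print(render_model_card(model_card))
```

Keeping the card as structured data rather than a one-off document makes it easy to regenerate and republish a fresh card with every new model version, as RIL suggests.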
According to Bansal, fostering trust will be critical, and it's about more than being responsible. "For businesses looking to scale, you're going to need to be transparent about how your tech works," he says. "It's a business-growth problem: If you're selling A.I.-powered solutions to enterprise-level businesses, you can bet they'll care about transparency."
5. Make regular and ongoing improvements
The final step in responsibly integrating A.I. into your business is to stay vigilant. You've established the foundation, but you can't become complacent. That means transparently communicating progress and changes in your use of A.I. to your key stakeholders, continuing to forecast new risks and benefits as your A.I. strategy evolves, and constantly testing your A.I.-powered products to ensure they're running at peak efficiency and without bias.
"There's this false idea that you have to choose between scale and responsibility," says Bansal. "But increased responsibility will give you more durability as you scale. Being transparent when communicating with your customers about what you've improved is key for fostering relationships. You're bringing them into the process."