Did you know that almost 30% of Americans say they interact with AI on a daily basis? Meanwhile, over 50% of companies use or plan to use AI to enhance their business operations. These statistics show that artificial intelligence is quickly becoming ingrained in our culture, bringing with it the possibility of more business-oriented regulation. In this article, we explore when AI regulations are likely to hit America and what they will cover. Keep reading to learn more.
When Are AI Regulations Coming?
The exact date of the upcoming AI regulations is unknown, as nothing has yet been “put to paper,” so to speak. However, AI experts and tech leaders have already appeared before the Senate to express their concerns about AI and the need for regulation. With 45% of Americans also apprehensive about AI, it is not a question of if regulation is coming, but when.
In fact, some states have already started to implement statewide AI regulations. For example, Illinois now requires employers to disclose the use of an AI tool for remote interview analysis. Meanwhile, in New York, local authorities are regulating how employers use automated employment decision tools in hiring and promotions.
What Areas Will the Regulations Cover?
Based on recent news events, the coming AI regulations will likely cover the following areas:
- Employment: As we’ve seen in New York and Illinois, regulations surrounding employment, labor, and recruiting are highly probable. This is unsurprising, considering that a recent survey revealed that 69% of Americans are worried AI will take their jobs.
- Data Collection: OpenAI, the creator of ChatGPT, has recently found itself embroiled in several court cases wherein it is being sued for collecting data without the express permission of copyright holders. With several high-profile authors, including George R.R. Martin and John Grisham, involved in this ongoing spat, there’s a good chance that AI regulations will cover how training data is allowed to be collected.
- Data Privacy: Data privacy is a major concern in the digital world, especially with the rise of AI. Every time a person speaks with a chatbot, they reveal some type of information, and every time an AI company scrapes a website, it collects potentially sensitive information. As a result, more robust data privacy laws are likely to come into effect.
- Healthcare: AI has helped facilitate a number of medical breakthroughs, from disease identification to simply making medical practices more efficient. However, when dealing with people’s health and, in many cases, their lives, people are not so quick to accept new technology. For this reason, it’s probable the FDA will have some interest in regulating how AI is used in healthcare.
The above is just a glimpse into what the future of AI regulation might look like; in reality, even more areas are likely to be covered.
Do I Need to Consider Non-US Regulations?
If you’re planning on selling your AI product or operating in another country while leveraging AI, then, yes, you will need to consider non-U.S. AI regulations.
China, for instance, began regulating the use of AI algorithm recommendation technologies that provide online services in early 2022. More recently, in August 2023, the Chinese government announced a new set of AI-focused regulations called the “Provisional Provisions on Management of Generative Artificial Intelligence Services” aimed at guiding the use of generative AI among businesses and individuals.
While the European Union and the U.K. are not far behind with their own proposed frameworks, China is among the first countries to put binding AI regulations into effect. The rest of the world will likely soon follow.
How Do I Prepare My AI-Powered Software for the Coming Regulations?
Of course, after investing in developing your own AI software, it would be a great blow to your business if regulation came along and made your product legally unusable. Fortunately, there are a number of steps you can take to put your AI software in the best possible position for future regulations.
Improve Scalability
Improving your software’s scalability means ensuring that you have enough server capacity and functional headroom to adapt your data collection processes. As data collection regulations are implemented and compliance requirements grow, you’ll need to jump through additional hoops to get the data your software needs.
For example, OpenAI’s web crawler, GPTBot, first checks whether a website permits crawling (site owners can block it outright), then filters the collected data to remove personal or policy-violating information before it is used for training. This process comes amid mounting pressure on OpenAI and ChatGPT. Potentially, all AI software will have to go through a similar pipeline when collecting data, requiring more server and processing capacity.
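Mechanically, honoring a site’s opt-out comes down to reading its robots.txt file before fetching anything. Below is a minimal sketch using Python’s standard-library robots.txt parser; the rules string and user-agent names are illustrative assumptions, not OpenAI’s actual configuration.

```python
from urllib.robotparser import RobotFileParser

def allowed_to_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Given the text of a site's robots.txt, decide whether the
    given user agent is permitted to fetch the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical rules: this site blocks GPTBot but allows other agents.
rules = """\
User-agent: GPTBot
Disallow: /
"""
```

A crawler built this way simply skips any URL for which `allowed_to_crawl` returns `False`, which is how sites that have opted out stay out of the training set.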
Increase Flexibility
As regulations roll in, you might find yourself having to add, remove, and update the features of your AI software. For example, you might have to implement a form of biometric security for improved data handling. As a result, you’ll need to ensure that your software is flexible.
In real terms, this means shifting toward a development framework that allows the easy modification of features. If, for instance, your AI software is made for mobile devices, both iOS and Android, but you’ve built it using native frameworks, you should consider switching to a cross-platform development model. This way, you will only have to update one codebase instead of two separate ones.
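One lightweight way to keep features easy to switch on or off as rules change is a feature-flag layer. The sketch below is a minimal illustration, and the flag names (such as `biometric_login`) are hypothetical examples, not features any particular regulation requires.

```python
# A minimal feature-flag sketch: gating behavior behind a config
# dictionary lets you enable or disable a feature in one place
# instead of restructuring the codebase when a rule changes.
FLAGS = {
    "biometric_login": False,  # hypothetical: flip on if regulation demands it
    "data_export": True,       # hypothetical: user-facing data export
}

def is_enabled(flag: str, flags: dict = FLAGS) -> bool:
    """Unknown flags default to off, so new gates are safe to reference."""
    return flags.get(flag, False)
```

In practice, the flag values would live in a config file or remote service rather than in code, so a compliance change becomes a configuration update instead of a redeployment.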
Tighten Data Security
Because data security is likely to be a staple of the upcoming regulations, you should take extra precautions with how you handle and store your data. This could mean upgrading to a more secure cloud provider or moving your in-house data to a cloud service; otherwise, you’ll need to ensure your in-house servers are watertight.
You should also review and update user permissions, limiting who has access to important information. As few people as possible should be able to view what data goes into your AI model, and sensitive client information should be anonymized where possible.
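Anonymization can be as simple as replacing identifying fields with salted hashes before records enter your pipeline. The sketch below shows one common approach (pseudonymization via salted SHA-256); the field names are illustrative assumptions, and truncated hashes trade collision resistance for readability, so treat this as a starting point rather than a compliance guarantee.

```python
import hashlib

# Illustrative assumption: which fields count as sensitive
# depends entirely on your own schema and obligations.
SENSITIVE_FIELDS = {"name", "email", "phone"}

def anonymize_record(record: dict, salt: str) -> dict:
    """Replace sensitive field values with salted SHA-256 digests so
    records stay linkable for analytics without exposing identities."""
    cleaned = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:12]  # truncated pseudonym, not the raw value
        else:
            cleaned[key] = value
    return cleaned
```

Because the same salt and value always produce the same pseudonym, you can still join records across datasets without ever storing the underlying identity in your model’s training data.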
Implement a Code of Conduct
Implementing a code of conduct that outlines how you expect employees to use your AI software and data is an excellent way to keep everyone informed and on the same page. This alone will not protect you from AI regulations, but it will help your business’s transition by taking a collaborative approach to updating your software.
An AI code of conduct should include the intended use cases of your software, how data should be stored, security procedures, and who to contact in the event of an issue. This should be easily accessible for all employees and users to find, with any updates clearly communicated through company channels.
Keep in the Loop
Keeping up to date with the latest AI news is the best way to prepare for AI regulations, as understanding the industry’s landscape will help you better anticipate coming changes. For example, you may notice a shift in public opinion or a new lawsuit against an AI provider that may be indicative of events to come.
To stay up to date, we recommend subscribing to our free newsletter, AI Business Report, written and maintained by our experts. We publish three articles per week, delivered straight to your inbox, keeping you in the loop on all AI-related news. By subscribing, you’ll be among the first to know about upcoming industry regulations.
AI Software Solutions From Idea Maker
If you’re looking to update your AI software or build a custom product from scratch that’s in line with upcoming regulations, then you’re in the right place. At Idea Maker, we have a team of AI and software development experts ready to help you bring your project to life. Schedule a free consultation with us today to learn more.