Regulatory laws surrounding AI: what to expect and what will happen
Mandates will rein in the growth of AI in the coming years
AI and machine learning have evolved rapidly over the years. With the explosive success of ChatGPT, the promise of what “could be” has the tech industry abuzz. Even now, numerous competitors are chasing the ChatGPT model in hopes of carving out their own slice of the market.
This expedited introduction, however, has created security concerns and ethical dilemmas. “AI,” in itself, is a buzzword meant to easily communicate what the tech is supposed to do. It’s not so much a thinking, conscious machine as the result of years of machine learning work, and it will usually produce useful results when prompted correctly. The models behind it are trained on large datasets, which are built on, well, data.
How that data is harvested and where it’s sourced from is itself a point of ethical concern. Content creators such as artists have readily voiced their concerns, given that AI models such as Stable Diffusion use media without the consent of its creators. Generative models also draw from text across the web, effectively producing an advanced Google-search-style response. Even the words in this article could be pulled into learning models without Bytagig’s knowledge.
But beyond the worries surrounding how data models are built is security. AI systems responding to prompts have the capacity to do good and bad, and as history has shown, they are already used for both.
The uncertainties surrounding the capabilities of AI-generated output have prompted a global response of laws, regulations, and actions meant to rein in the otherwise freeform nature of machine learning and AI. How that balance is struck will determine the future of AI’s place in our daily lives, industries, and security standards.
How it’s changing around the world
Different countries have taken varying approaches to generative AI and its place in their public and economic life. In the United States, for example, guidelines and formative documentation were being written as early as 2016, when the National Science and Technology Council published “Preparing for the Future of Artificial Intelligence,” a report summarizing how AI operated at the time and the social and economic impact it could have on the world.
The European Parliament, meanwhile, voted to introduce the Artificial Intelligence Act, which focuses on limiting the development or use of “high-risk” AI models. Early examples centered on devices or machines that used “AI,” such as smart or self-driving vehicles.
Since then, however, a variety of US-based AI-related laws, bills, and reports have circulated. The survey report Regulation of Artificial Intelligence in Selected Jurisdictions appeared in 2019, and two bills, the Advancing American AI Act and the AI Training Act, were introduced in 2021 and 2022 respectively.
In 2020, the FTC stepped in to establish guidelines and mandates regarding the use of AI. As machine learning and AI tools grew commonplace in certain business sectors, it became important to establish boundaries so that these tools did not lead users into high-risk or dangerous behavior.
It’s clear that there has been, and continues to be, a concerted effort to curtail the dangers of machine learning and AI models. As AI expands (or declines) in use, these regulatory structures will evolve alongside it. This is nothing new: every major breakout technology or service is eventually met with a structured framework of laws and mandates. The short-term goal is to work out how current tech laws apply to existing AI technology, but as that technology develops, new laws will need to be introduced.
Considerations for SMBs and tech organizations
Naturally, it’s difficult to find firm footing with the onset of such technology. AI and machine learning models have been around for years, but their integration into business models and services is still new. Preparing for AI-related laws, regulations, and mandates is important, because both the technology and the legislation around it continue to change.
If you’re planning to integrate AI and machine learning into your business endeavors, whatever they are, it’s strongly recommended to introduce a large language model (LLM) that you manage, since that gives you greater control over the datasets the model can draw from. Because an AI model is only capable of producing results from its datasets, having control over those datasets is incredibly important. Toolsets such as CATO for internal observability can also help.
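As a purely illustrative example of what “control over the datasets” can look like in practice, here is a minimal Python sketch that restricts an LLM prompt to an approved internal document store. The names (approved_docs, build_grounded_prompt, call_llm) are hypothetical placeholders, not the API of any particular product, and the retrieval step is a naive keyword match standing in for whatever search your own stack provides.

```python
# Minimal sketch: constrain an LLM to answer only from an approved internal dataset.
# All names below are illustrative placeholders, not a real product API.

approved_docs = {
    "pto-policy": "Employees accrue 1.5 days of PTO per month of service.",
    "expense-policy": "Expenses over $500 require written manager approval.",
}

def retrieve(question: str, docs: dict, top_k: int = 2) -> list:
    """Rank approved documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Build a prompt that instructs the model to use only the retrieved context."""
    context = "\n\n".join(retrieve(question, approved_docs))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt("How much PTO do employees accrue?")
    print(prompt)
    # response = call_llm(prompt)  # hypothetical: your self-hosted or vendor LLM call
```

Keeping the curation and prompt-building step in-house like this also leaves an audit trail, which is useful when a regulator or customer asks where a model’s answer came from.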
AI will continue to have a lasting impact on the tech sector, whether as a mainstay or simply as one part of production workflows. Regardless of its exact future, businesses should remain vigilant, train their models carefully, and stay within the lines of incoming laws, mandates, and regulations.
For additional assistance and information, you can reach out to Bytagig today.