The European Commission (EC) proposed new rules covering the impact of AI on humans and businesses, targeting the creation of a trustworthy environment for the development of innovative products and services in the European Union (EU).
In its statement, the EC explained the rules would be founded on a risk-based approach to AI, seeking to balance promoting uptake of the technology with protecting people.
For the digital economy to achieve its potential for everyone, consumers must trust the online environment. The implications of big data analytics can be far-reaching; at the GSMA, we work with governments, regulators and the wider mobile industry to promote transparency and choice, and to encourage responsible privacy governance practices.
The EC plans to ban AI systems considered to pose “a clear threat” to the safety, livelihoods and rights of people, including services that manipulate human behaviour “to circumvent users’ free will” or enable government-run social credit systems.
AI systems deemed high risk include those used in employment, education, law enforcement and critical infrastructure such as transport. These will be subject to “strict obligations” before they can be placed on the market.
Our Mobile Privacy and Big Data Analytics document sets out some of the safeguards organisations can adopt to identify and reduce privacy risks when engaging in services or projects that involve big data analytics. Several European operators have published their commitments to responsible AI and, only recently, Orange announced the creation of its Data and AI Ethics Council.