Fujitsu CTO: Tech Industry Needs To Take Responsibility For Impact Of AI

The technology industry needs to take responsibility for developing the ethics and checks and balances for artificial intelligence (AI) systems to prevent any negative impacts from smart systems, says Fujitsu’s chief technology officer Dr Joseph Reger.

When asked by TechWeekEurope in an interview at Fujitsu Forum 2016 in Munich if the technology industry should feel obliged to address concerns that AIs will leave some people without jobs, Reger replied it certainly should because “we are creating [AI]”.

“We’ve got to start discussions about the consequences,” he said, highlighting that AI will significantly disrupt the job market, forcing sectors of all kinds, and both blue- and white-collar workers, to adapt if they wish to keep up with the pace of change.

“There is a clear disruption to the job market and the only way to respond to that as an individual is to reskill, and as a society to create a framework for that to be possible.”

He added there is a need in particular to look at what children are taught in order to give them the skills for the future.

AI growing pains

Concerns about the impact of AI and other digital technologies on current and future jobs are nothing new. But there is some confusion as to how to assess and address the situation, with some championing the need for government to put AI and its potential disruption in the spotlight.

Reger noted Fujitsu is already taking such a responsible approach with its ‘human-centric AI’, which it presents to businesses as a means of carrying out digital transformation. The approach involves creating AI systems that complement people’s lives rather than replacing them outright in their jobs and duties.

“Human-centric AI is the natural next step and that’s what we’ll do and the question is whether society will do that or not,” he added.

Breaking bias

However, creating such AI systems needs a careful approach as the dynamic nature of the code contained in machine learning algorithms and deep learning-based AI systems makes it difficult to see exactly what prompted an AI to come to a certain decision or take a particular action.

This means it can be difficult to spot unwanted and detrimental biases that AIs could create through parsing masses of data without any human intervention at the code level. And Reger highlighted that this raises the problem of establishing who is accountable for the decisions an autonomous AI system makes.

“There is concern about this bias that is not built in, it is just generated; there is concern about the accountability of this stuff, because the systems will make decisions [but] who will they make accountable for it?” he said.

Unfortunately, there appears to be no easy answer to this problem, though according to Reger, if AIs are created and taught with a form of moral and ethical framework, then such issues could be somewhat avoided.

“There isn’t much you can do outside what we normally do as mankind in that while we are raising children we create moral and ethical frameworks,” he explained.

“There is no guarantee that every participant will abide by that, but if that happens and we have regulations and laws and judicial systems and so on, by and large we are going to be OK.”

“We’ve got to create these checks and balances; it has to come to moral and ethical frameworks for AI systems.”

Such measures will need to be established sooner rather than later, as the technology industry is pursuing AI with almost heady abandon, from adding smart virtual assistants to smartphones to Microsoft looking to turn its Azure cloud into an AI platform.

Roland Moore-Colyer

As News Editor of Silicon UK, Roland keeps a keen eye on the daily tech news coverage for the site, while also focusing on stories around cyber security, public sector IT, innovation, AI, and gadgets.