Artificial intelligence and the “gap” in data processing

AI is the simulation of human thinking and learning processes by machines, especially computer systems.

Hollywood sci-fi blockbusters, together with pessimistic commentary, have fostered a vague but persistent fear of artificial intelligence and machine learning: that one day machines will replace humans.

This fear is so pronounced that at the 9 June meeting of finance ministers and central bank chiefs of the Group of Twenty (G20), the world’s leading developed and emerging economies, members agreed to draw up a set of principles for governing AI activities, to ensure these technologies comply with laws, human rights and democratic values and do not create unreasonable risks.

A better understanding of the potential of AI

AI is the simulation of human thinking and learning processes by machines, especially computer systems. These processes include gathering information and the rules for using it, reasoning to reach approximate or definite conclusions, and self-correction.

Specific applications of AI include systems that can recognize voices, faces, objects or handwriting. These are widely applied in products such as smartphones, smart speakers and security cameras, as well as in more advanced systems such as servers and self-driving vehicles.
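
To make the idea of recognition concrete, here is a minimal sketch of handwritten-digit recognition in Python. It assumes the scikit-learn library and its small bundled digits dataset; real products use far larger datasets and deep-learning models, but the learn-then-predict pattern is the same.

```python
# Minimal handwriting-recognition sketch (assumes scikit-learn is installed).
# Uses the small bundled digits dataset; production systems would use much
# larger data and typically deep-learning models instead.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()                      # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = SVC(gamma=0.001)                    # a simple support vector classifier
model.fit(X_train, y_train)                 # learn from labelled examples

predictions = model.predict(X_test)
print("Recognition accuracy:", accuracy_score(y_test, predictions))
```
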
When it comes to AI, we cannot ignore Big Data. The term refers to data sets so large and complex that traditional data-processing software struggles to collect, manage and analyze them quickly and accurately.
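
As a rough illustration of why size matters, the sketch below processes a file in fixed-size chunks rather than loading it all at once; the file name events.csv and the amount column are hypothetical, and genuinely large workloads would usually move to distributed engines such as Spark.

```python
# Chunked processing sketch with pandas, assuming a hypothetical events.csv
# with an "amount" column that is too large to load into memory in one go.
import pandas as pd

total = 0.0
rows = 0
for chunk in pd.read_csv("events.csv", chunksize=100_000):  # 100k rows at a time
    total += chunk["amount"].sum()
    rows += len(chunk)

print("rows processed:", rows)
print("average amount:", total / rows if rows else float("nan"))
```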

Combined with Big Data, AI can help us make better sense of data from many different sources. In privacy and security work, for example, data can come from video feeds, IoT devices and other sources.

Ethical issues in data processing

One of the biggest concerns around AI and Big Data is the ethical use of data. Video data, for example, is collected routinely by both public and private organizations to document the activities people engage in and the places they visit. Most of this data is used strictly for investigations in emergencies or in response to security threats.
However, as more and more enterprises find ways to use this data to build profiles of and gather additional information about potential customers, the gaps in this field become clearly visible.

Another issue AI faces is bias in data processing. Although the algorithms behind AI systems are designed to avoid discrimination based on sex, race or religion, bias can still find its way into these complex systems. In 2018, for example, Amazon had to scrap an AI recruiting tool that screened job candidates because it discriminated against women.
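
As a rough illustration of how such bias can be surfaced, the sketch below compares selection rates between groups in a set of model decisions and computes the ratio between them, often checked against the “four-fifths” threshold; the sample records are invented purely for the example.

```python
# Minimal fairness-check sketch: compare selection rates between groups in a
# model's hiring decisions. The records below are invented for illustration.
from collections import defaultdict

# (group, model_decision) pairs; 1 = shortlisted, 0 = rejected
decisions = [
    ("female", 0), ("female", 1), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    total[group] += 1
    selected[group] += decision

rates = {g: selected[g] / total[g] for g in total}
print("selection rates:", rates)

# Disparate-impact ratio: lowest rate divided by highest rate.
# Values well below 0.8 (the "four-fifths rule") are a common warning sign.
ratio = min(rates.values()) / max(rates.values())
print("disparate impact ratio:", round(ratio, 2))
```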

Although many governments have drawn up AI development strategies, there is still no specific, globally consistent framework for governing this activity. User data privacy rules in Europe and the US, and the G20’s set of principles for managing AI, are a first step forward; however, technology companies worry that they are too rigid and will “suffocate” the potential of this young sector.

To prevent this from happening, companies must actively engage in building codes of conduct for data use in AI development. As AI research continues, ethical guidelines and thorough standards will emerge to help strengthen trust in companies that use personal data to develop this technology.

At that point, government-level regulations on AI can be completed, helping companies and organizations harness the strengths of AI while complying with ethical rules on data and privacy.
