The year 2024 could usher in a series of frightening advances in artificial intelligence, from engineered pathogens and killer robots to fake data that looks more "real" than reality itself.
Most innovations in artificial intelligence (AI) improve our lives, for example by enabling better medical diagnoses and scientific discoveries.
But alongside them are dangerous applications, from killer drones to AI that threatens the future of humanity.
Live Science has picked out some of the scariest AI breakthroughs that could arrive in 2024.
Artificial general intelligence (AGI)
No one knows exactly why OpenAI CEO Sam Altman was fired and then reinstated at the end of 2023. But amid the chaos at OpenAI, rumors swirled around an advanced technology that could threaten the future of humanity.
Reuters reported that the technology, called Q* (pronounced Q-star), may represent a breakthrough toward artificial general intelligence (AGI).
AGI is a hypothetical tipping point, also known as "the Singularity", at which AI becomes more intelligent than humans.
Current generations of AI still lag behind in areas where humans excel, such as context-based reasoning and true creativity. Most AI-generated content only regurgitates training data.
But scientists say AGI could perform certain tasks better than humans. It could also be weaponized, for example to create new pathogens, launch large-scale cyberattacks, or carry out mass manipulation online.
If OpenAI has reached this tipping point, it would certainly come as a shock, but it is not impossible. Sam Altman was already laying the groundwork in February 2023, when he outlined OpenAI's approach to AGI in a blog post.
Experts have begun predicting an imminent breakthrough. In November 2023, Nvidia CEO Jensen Huang said AGI could be achieved within the next five years, Barron's reported.
Will 2024 be AGI's breakthrough year? Only time will tell.
Hyperrealistic election-rigging deepfakes
One of the most pressing cyber threats is deepfakes: completely fabricated images or videos of people, created to misrepresent, incriminate, or bully them.
Deepfake AI technology is not yet good enough to pose a significant threat, but that may be about to change.
AI can now create deepfakes in real time, that is, as live video feeds, and it has become so good at generating human faces that people can no longer tell what is real from what is fake.
A study published in the journal Psychological Science on November 13 uncovered a phenomenon called "hyperrealism", in which AI-generated content is more likely to be perceived as "real" than genuine content.
Although there are tools that can help people detect deepfakes, they are not yet widespread or mature.
As the technology matures, a scary possibility is that people could deploy deepfakes to try to sway elections in various countries.
The proliferation of AI-powered killer robots
Governments around the world are increasingly incorporating AI into their tools of war.
We have seen AI drones hunting soldiers in Libya without human intervention.
In 2024, we are likely to see AI used not only in weapons systems but also in logistics and decision-support systems, as well as in research and development. In 2022, for example, AI generated 40,000 hypothetical new chemical weapons.
According to NPR, Israel has also used AI to identify targets at least 50 times faster than humans can in its latest war with Hamas.
But one of the scariest areas of development is lethal autonomous weapons systems (LAWS), or killer robots.
Several worrying developments suggest 2024 could be a breakthrough year for killer robots. In Ukraine, for example, Russia has deployed the Zala KYB-UAV drone, which can identify and attack targets without human intervention, according to a report from the Bulletin of the Atomic Scientists.