AUCloud believes the AI work being done behind the scenes in 2023 will set the stage for the biggest changes in AI technology and cyber security in 2024.
“How we are all currently using AI is primitive in comparison to what we will start to see in 2024,” said AUCloud CEO Peter Maloney.
“AI will fundamentally change how we operate as a society, driving new products, specialised assistants and copilots throughout existing systems, accelerating coding for developers, enabling better discovery and automating more efficient practices across society.
“But in the wrong hands, AI will also empower cyber criminals, and we will see that impact every one of us in 2024,” he said.
“Hackers use AI to create malware that can hide from security software and is therefore harder to detect, allowing them to gain unauthorised access to computers or steal data.”
“AI is also used to conceal malicious code that is programmed to execute months after an application has been installed, or when a target is reached, such as when a certain number of users have subscribed to an application.”
“There are effectively millions of pieces of malicious code in place all over Australia, waiting to be triggered. And that will happen in 2024.”
“AI-powered tools can also analyse vast amounts of data to craft highly targeted and convincing phishing emails.”
“AI-generated deepfake technology will be utilised more frequently in 2024 to create convincing fake videos or audio recordings, facilitating social engineering attacks. Cyber criminals could use deepfakes to impersonate trusted individuals, tricking victims into divulging sensitive information or performing unauthorised actions.”
In 2024, we anticipate significant technological advancements. As we embrace these innovations, it is our collective responsibility to establish guidelines, legislation, and ethical frameworks to ensure that these technologies are used for positive rather than negative purposes.