Latest AI trends

Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.
Source: www.ibm.com

AI is no longer a strange term. After Alan Turing first raised the idea in his 1950 paper “Computing Machinery and Intelligence”, decades of research turned it into something real. In the late 20th and early 21st centuries, AI gained widespread notice when it beat a world chess champion and won the game show “Jeopardy!”. Now even my mom knows about AI; everyone talks about it and uses it in their everyday work.

The AI hype has been building since 2020, and there is no sign of it slowing down. Let’s catch up together on the latest news in AI, how it is making an impact on almost every industry, and what we should do to keep up with it.

1. How AI is trained 

Before taking a look at the latest AI trends, it is important to understand how AI learns: how it is able to answer our questions, recognize things, and do everything else it does.

The idea behind AI is to create something that mimics our brain, something that can think and solve problems the way we do. That is why the way AI learns is somewhat similar to the way we learn.

The model is considered the brain of an AI system. Like our brain, it is organized as a network. This network has an input layer, many hidden layers, and an output layer. Each layer has nodes, sometimes called artificial neurons. Input information is processed at each node (based on the algorithm and its parameters) and passed on to the next, so the input of one node is the output of the previous node, and the process continues from layer to layer until the final output. Imagine parameters as seasoning when we cook: adding or reducing them changes the output.
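To make that flow concrete, here is a minimal sketch in plain Python with NumPy. The layer sizes, random weights, and sigmoid activation are illustrative choices, not anything prescribed above:

```python
import numpy as np

def sigmoid(x):
    # A common activation: squashes any value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative shape: 3 inputs -> 4 hidden nodes -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # parameters: the "seasoning"
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    # Each node's output becomes the next layer's input
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

print(forward(np.array([0.5, -1.0, 2.0])))  # the network's final output
```

Changing W1, b1, W2, or b2 (the seasoning) changes the output; training is the process of finding values that make that output accurate.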

Training a model means adjusting it until it returns the most accurate output possible.

When we were toddlers, we learned to speak by mimicking adults over and over until we understood words and could form a sentence, or we were shown pictures until we could distinguish one thing from another.

The same thing applies to the model: it is fed lots of labeled data (supervised learning, where we tell it what is what) and unlabeled data (unsupervised learning, where we do not tell it what is what), and it also receives feedback on its outputs (reinforcement learning). This process is repeated constantly until the model picks up the pattern and knows what is correct and what is not, and its accuracy improves over time.
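As a rough picture of the supervised case, here is a minimal sketch of that feedback loop: predict, compare with the label, adjust the parameters, repeat. The toy data, single-weight model, and learning rate are invented for illustration:

```python
# Minimal supervised learning: fit y = w * x to labeled examples
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs

w = 0.0    # one parameter, untrained
lr = 0.05  # learning rate: how strongly each error adjusts w

for epoch in range(100):
    for x, label in data:
        prediction = w * x
        error = prediction - label  # feedback: how wrong was the model?
        w -= lr * error * x         # nudge the parameter to reduce the error

print(w)  # close to 2.0: the model has picked up the pattern
```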

There are many models out there, each differing in speed, capability, and cost. Some are easy to use, some are specialized for specific cases, and some are integrated into products. Popular models and model categories include:

  • GPT
  • Llama
  • Gemini
  • Neural network
  • Multimodal AI

2. Latest AI trends

a) Smaller language models

Earlier this year, OpenAI CEO Sam Altman said at an event in April: “I think we’re at the end of the era where it’s going to be these giant models, and we’ll make them better in other ways.” Indeed, large models gave the AI era a jumpstart, but they also have their drawbacks.

At the moment, only big companies have the resources to train and maintain these models. Training a single GPT-3-sized model consumes more electricity than 1,000 households use in a year, and a standard day of ChatGPT queries rivals the daily energy consumption of 33,000 U.S. households.

Ongoing research shows that reducing model size does not have to mean compromising performance, so this trend is inevitable: it makes AI cheaper, more accessible, and more explainable.

b) GPU shortages and cloud costs

Alongside the shift to smaller models, we need cheaper and more accessible hardware solutions.

Cloud providers handle much of the computational work, but hardware shortages make running on-premise servers for AI tasks difficult and expensive.

IBM CEO Arvind Krishna has advised enterprises to be flexible in selecting and deploying models; it is crucial to cater to diverse deployment preferences, balancing smaller, efficient models with occasional use of larger, high-performance ones to meet evolving demands and constraints.

c) More realistic expectations

Standalone AI tools like ChatGPT or DALL-E seem to take center stage, but the integration of generative AI into established services is what we can really look forward to.

Many AI tools have been deployed in enterprise environments to enhance and complement existing tools rather than revolutionize or replace them. So it is important for business owners to have realistic expectations about the transformative impact of AI and to understand that it does not happen in the short term.

d) Multimodal AI

We are already familiar with AI performing one-directional tasks such as text-to-image or text-to-speech. But the upcoming generation of interdisciplinary models, including proprietary ones like GPT-4V and Gemini as well as open-source models like LLaVa, Adept, and Qwen-VL, will be able to move freely between natural language processing (NLP) and computer vision tasks.

This would diversify inputs and thereby enrich the training process, opening the door to more intuitive and versatile AI applications and virtual assistants.
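For a small taste of a model that handles text and images together, here is a sketch using the open-source CLIP model through Hugging Face’s transformers library. The model name, image path, and candidate captions are just one common setup, not something specified above:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP embeds images and text into a shared space, so it can
# score how well each caption describes a given picture.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image
captions = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(captions, probs[0].tolist())))  # match probability per caption
```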

e) More powerful virtual agents

How could we have multimodal AI without the development of more powerful virtual agents, right? The need for more lifelike interaction and vision has driven this trend.

You have probably used a virtual try-on feature when looking for a new pair of shoes in the Adidas or Nike app. So this is not new, but we can definitely expect to see more applications in the near future.

f) Accessible model optimization

This trend is backed by advancements in foundation models and in the techniques and resources for optimizing them, such as:

  • Low-Rank Adaptation (LoRA): dramatically reduces the number of parameters that need to be updated, which in turn dramatically speeds up fine-tuning and reduces the memory needed to store model updates (see the sketch after this list).
  • Quantization: lowers the precision used to represent model data points, reducing file size and memory usage and speeding up inference.
  • Direct Preference Optimization (DPO): aligns model outputs with human preferences while remaining computationally lightweight and substantially simpler than alternatives.
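To show why LoRA shrinks the trainable-parameter count so dramatically, here is a minimal sketch in plain Python with NumPy. The matrix sizes and rank are invented for illustration; production implementations live in libraries such as Hugging Face’s peft:

```python
import numpy as np

d, k, r = 4096, 4096, 8    # illustrative hidden sizes and LoRA rank
W = np.random.randn(d, k)  # frozen pretrained weight: ~16.8M parameters

# LoRA never updates W; it learns a low-rank delta W + B @ A instead.
B = np.zeros((d, r))              # trainable, starts at zero
A = np.random.randn(r, k) * 0.01  # trainable, small random init

def lora_forward(x):
    # Same result shape as W @ x, but only A and B are ever updated
    return W @ x + B @ (A @ x)

full, lora = d * k, d * r + r * k
print(f"trainable: {lora:,} of {full:,} parameters ({lora / full:.2%})")
```

With these illustrative sizes, only about 0.4% of the original parameter count needs gradient updates.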

This could provide smaller players, like startups and hobbyists, with sophisticated AI capabilities that were previously out of reach.

g) Customized local models and data pipelines

With this shift, enterprises will be able to pursue differentiation by developing custom AI models tailored to their specific needs, rather than relying solely on repackaged services from major AI providers.

Open-source AI models and tools can be fine-tuned on an organization’s proprietary data to create powerful, specialized AI systems for a variety of use cases. This is especially valuable in industries like legal, healthcare, and finance, where specialized vocabulary and concepts may not have been adequately covered by foundation models during pre-training.
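As a rough sketch of what such a pipeline can look like, here is one way to fine-tune an open-source model on in-house text with Hugging Face’s transformers and datasets libraries. The base model name, file path, and hyperparameters are placeholders, not recommendations:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # placeholder open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # some tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Proprietary in-house text, one document per line (placeholder path)
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the result: a model specialized to your own domain
```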

h) Shadow AI

Organizations must not only have a careful, coherent and clearly articulated corporate policy around generative AI, but also be wary of shadow AI: the “unofficial” personal use of AI in the workplace by employees.

In one study from Ernst & Young, 90% of respondents said they use AI at work. However, without proper caution, or without going through IT for approval and oversight, employees risk feeding trade secrets into publicly hosted AI models.

i) Regulation, copyright and ethical AI concerns

The elevated capabilities (voice, vision, text) of AI systems also raise concerns about potential abuse, such as deepfakes, privacy issues, perpetuation of bias, and evasion of CAPTCHA safeguards.

Regulation seems slow to catch up with the growth of AI at the moment, but at least the problem is being recognized and addressed. The European Union (EU) has reached a provisional agreement on the Artificial Intelligence Act. In the US, the Biden administration issued a comprehensive executive order detailing 150 requirements for the use of AI technologies by federal agencies. China has moved more proactively toward formal AI restrictions, banning price discrimination by recommendation algorithms on social media and mandating the clear labeling of AI-generated content.

Legal adjustment is always a long game, but we can expect more measures that protect people and raise awareness about using AI responsibly.

3. Conclusion

Just like earlier industrial revolutions, the 4.0 revolution is inevitable. What we can do is understand the situation and be prepared. AI’s influence on businesses and industries will keep expanding, with a growing focus on transparency, accountability, and responsible practices. Stay tuned for these trends as they shape AI’s future!
