163 Recommended Topics

Member Avatar for usmanmalik57

In my previous article, I presented a [comparison of GPT-4o and Claude 3.5 Sonnet for multi-label text classification](https://www.daniweb.com/programming/computer-science/tutorials/542629/openai-gpt-4o-vs-claude-3-5-sonnet-for-multi-label-text-classification). The accuracies achieved by both models were relatively low. Fine-tuning is one solution to overcome the low performance of large language models. With fine-tuning, you can incorporate custom domain knowledge into an LLM's …
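
As a rough sketch of the data preparation this kind of fine-tuning involves (assuming OpenAI's chat-format JSONL for fine-tuning; the CSV file, column names, system prompt, and label strings below are hypothetical placeholders, not taken from the article):

```python
# Minimal sketch: build a JSONL file for OpenAI chat fine-tuning where each
# record maps a text to its multi-label annotation. "train.csv", the "text"
# and "labels" columns, and the label strings are placeholders.
import json
import pandas as pd

df = pd.read_csv("train.csv")  # expects "text" and "labels" columns

system_prompt = "Return all applicable labels as a comma-separated list."

with open("finetune_train.jsonl", "w", encoding="utf-8") as f:
    for _, row in df.iterrows():
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": row["text"]},
                {"role": "assistant", "content": row["labels"]},  # e.g. "toxic, insult"
            ]
        }
        f.write(json.dumps(record) + "\n")
```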

1
80
Member Avatar for usmanmalik57

In one of my previous articles, you saw a [comparison of GPT-4o vs. Claude 3.5 Sonnet for zero-shot text classification](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification). In that article, we performed multi-class text classification where input tweets belonged to one of three categories. In this article, we will go a step further and perform zero-shot …
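
A minimal sketch of sending the same zero-shot prompt to both models follows; the prompt wording, label set, and example text are illustrative and not taken from the article, and the API keys are read from the usual environment variables.

```python
# Minimal sketch: send one zero-shot multi-label prompt to GPT-4o and
# Claude 3.5 Sonnet. OPENAI_API_KEY and ANTHROPIC_API_KEY must be set;
# the labels and sample text are placeholders.
from openai import OpenAI
from anthropic import Anthropic

prompt = (
    "Assign every applicable label from [toxic, obscene, insult, threat] "
    "to the following text. Reply with a comma-separated list only.\n\n"
    "Text: Some example tweet goes here."
)

gpt_response = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print("GPT-4o:", gpt_response.choices[0].message.content)

claude_response = Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=100,
    messages=[{"role": "user", "content": prompt}],
)
print("Claude 3.5 Sonnet:", claude_response.content[0].text)
```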

1
92
Member Avatar for usmanmalik57

On September 25, 2024, Meta released [the Llama 3.2 series of multimodal models](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/). The models are lightweight yet extremely powerful for image-to-text and text-to-text tasks. In this article, you will learn how to use the Llama 3.2 Vision Instruct model for general image analysis, graph analysis, and facial sentiment prediction. …
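
The article's exact setup is not shown in this teaser; as one possible approach, the sketch below loads the gated 11B Vision Instruct checkpoint with Hugging Face transformers (recent releases include Mllama support). The image file name and prompt are placeholders.

```python
# Minimal sketch (assumes a recent transformers release with Mllama support
# and access to the gated meta-llama checkpoint): describe a local image
# with Llama 3.2 11B Vision Instruct.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chart.png")  # hypothetical local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in two sentences."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=150)
print(processor.decode(output[0], skip_special_tokens=True))
```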

3
186
Member Avatar for usmanmalik57

This article explains how to create a retrieval augmented generation (RAG) chatbot in LangChain using open-source models from the [Hugging Face serverless inference API](https://huggingface.co/docs/api-inference/en/index). You will see how to call large language models (LLMs) and embedding models from the Hugging Face serverless inference API using LangChain. You will also see how to …
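
The full RAG pipeline from the article is not reproduced here; as a minimal sketch of the two serverless calls it relies on, the snippet below uses `HuggingFaceEndpoint` and `HuggingFaceEndpointEmbeddings` from the `langchain-huggingface` package with placeholder repo IDs and documents (model availability on the serverless API varies).

```python
# Minimal sketch: call an LLM and an embedding model from the Hugging Face
# serverless inference API via langchain-huggingface, then answer a question
# over a tiny FAISS index. Requires faiss-cpu and a HUGGINGFACEHUB_API_TOKEN
# environment variable; repo IDs and documents are placeholders.
from langchain_huggingface import HuggingFaceEndpoint, HuggingFaceEndpointEmbeddings
from langchain_community.vectorstores import FAISS

llm = HuggingFaceEndpoint(repo_id="mistralai/Mistral-7B-Instruct-v0.3", max_new_tokens=256)
embeddings = HuggingFaceEndpointEmbeddings(model="sentence-transformers/all-MiniLM-L6-v2")

docs = ["Paris is the capital of France.", "Berlin is the capital of Germany."]
store = FAISS.from_texts(docs, embeddings)

question = "What is the capital of France?"
context = "\n".join(d.page_content for d in store.similarity_search(question, k=1))
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```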

2
128
Member Avatar for V3N0M

Hi everyone, I am a first-year student, but I have started thinking about a third-year project. I would like to do something that is programming related and less algorithm-heavy. I am currently trying to master C and then go on to learn how to develop GUIs for C programs. And …

Computer Science
Member Avatar for Brandon_38
0
234
Member Avatar for usmanmalik57

In my previous article, I explained how to fine-tune the [OpenAI GPT-4o model for natural language processing tasks](https://www.daniweb.com/programming/computer-science/tutorials/542333/how-to-fine-tune-the-openai-gpt-4o-model-the-wait-is-finally-over). At OpenAI DevDay, held on October 1, 2024, OpenAI announced that users can now fine-tune OpenAI vision and multimodal models such as GPT-4o and GPT-4o mini. The best part is that fine-tuning vision …
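
For vision fine-tuning, the training JSONL can include image content inside the chat messages. The sketch below shows one such record; the image URL, system prompt, and label are hypothetical placeholders, not examples from the article.

```python
# Minimal sketch: one training record for GPT-4o vision fine-tuning, where the
# user turn contains both text and an image URL. The URL and label are
# placeholders; records are appended to a JSONL file as with text fine-tuning.
import json

record = {
    "messages": [
        {"role": "system", "content": "Classify the product shown in the image."},
        {"role": "user", "content": [
            {"type": "text", "text": "What product category is this?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/shoe.jpg"}},
        ]},
        {"role": "assistant", "content": "footwear"},
    ]
}

with open("vision_finetune.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```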

2
174
Member Avatar for usmanmalik57

Open-source LLMs, owing to their comparable performance with advanced proprietary LLMs, have been gaining immense popularity lately. Open-source LLMs are free to use, and you can easily modify their source code or fine-tune them on your systems. [Alibaba's Qwen](https://www.alibabacloud.com/en/solutions/generative-ai/qwen?_p_lc=1) and [Meta's Llama](https://ai.meta.com/blog/meta-llama-3-1/) series of models are two major players in …

Member Avatar for Brandon_38
2
529
Member Avatar for usmanmalik57

In one of my previous articles, I explained [how to generate stunning images for free using diffusion models](https://www.daniweb.com/programming/computer-science/tutorials/541898/generate-stunning-ai-images-for-free-using-diffusion-models) and showed how to use Stability AI's diffusion models for text-to-image generation. Since then, the AI domain has progressed considerably, particularly in image generation. Black Forest Labs has released the [Flux.1 series of …
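
As a minimal sketch of running a Flux.1 model locally (assuming a recent `diffusers` release with Flux support and a GPU with enough memory; the prompt and output file name are placeholders):

```python
# Minimal sketch: generate an image with FLUX.1-schnell via diffusers.
# The prompt and file name are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM usage

image = pipe(
    "a lighthouse on a cliff at sunset, detailed digital painting",
    num_inference_steps=4,       # schnell is distilled for few-step generation
    guidance_scale=0.0,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_lighthouse.png")
```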

Member Avatar for autowrecking
3
151
Member Avatar for usmanmalik57

On September 19, 2024, [Alibaba released the Qwen 2.5 series of models](https://qwenlm.github.io/blog/qwen2.5/). The Qwen 2.5-72B base and instruct models outperformed larger state-of-the-art models like Llama 3.1-405B on multiple benchmarks. It is safe to assume that Qwen 2.5-72B is a state-of-the-art open-source large language model. This article will show you how …
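
The teaser does not show how the article runs the model; as one option, the sketch below chats with a smaller Qwen 2.5 instruct checkpoint through transformers so the example fits on a single GPU (the 72B variant uses the same API, and the checkpoint choice and prompt are assumptions).

```python
# Minimal sketch: chat with a Qwen 2.5 instruct checkpoint via transformers.
# The 7B model is used so the example runs on one GPU; the prompt is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what retrieval augmented generation is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```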

3
624
Member Avatar for usmanmalik57

## Introduction ## In a previous article, I explained [how to fine-tune the vision transformer model for image classification in PyTorch](https://www.daniweb.com/programming/computer-science/tutorials/540749/fine-tuning-vision-transformer-for-image-classification-in-pytorch). In this article, I will explain how to fine-tune the pre-trained OpenAI Whisper model for audio classification in PyTorch. Audio classification is an important task that can be applied …
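
The article's exact classification head is not shown in this teaser; as one possible setup, the sketch below wraps the Whisper encoder with transformers' `WhisperForAudioClassification`. The two-class label set and the random audio tensor are placeholders standing in for a real labelled dataset.

```python
# Minimal sketch: a single fine-tuning step for audio classification with
# WhisperForAudioClassification. The labels and the fake 16 kHz waveform are
# placeholders; real training loops over a labelled audio dataset.
import torch
from transformers import WhisperFeatureExtractor, WhisperForAudioClassification

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
model = WhisperForAudioClassification.from_pretrained(
    "openai/whisper-base", num_labels=2  # e.g. 0 = negative, 1 = positive
)

waveform = torch.randn(16000 * 5)  # 5 seconds of fake 16 kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

labels = torch.tensor([1])
outputs = model(input_features=inputs.input_features, labels=labels)
outputs.loss.backward()  # backward pass for one training step
print("loss:", outputs.loss.item())
```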

Computer Science audio python
Member Avatar for meyerrluanna
3
2K
Member Avatar for usmanmalik57

The AI wave has introduced a myriad of exciting applications. While text generation and natural language processing are leading the AI revolution, image- and vision-based technologies are quickly catching up. The intersection of text and vision applications has seen a rapid surge recently. In this article, you'll learn how to …

2
377
Member Avatar for usmanmalik57

Large language models (LLMs) are trained to predict the next token (set of characters) following an input sequence of tokens. This makes LLMs suitable for unstructured textual responses. However, we often need to extract structured information from unstructured text. With the Python [LangChain](https://www.langchain.com/) module, you can extract structured information in …
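
One common LangChain pattern for this is `with_structured_output` with a Pydantic schema; the sketch below is illustrative, and the schema fields, example sentence, and GPT-4o model choice are assumptions rather than the article's exact setup.

```python
# Minimal sketch: extract structured fields from free text with LangChain's
# with_structured_output and a Pydantic schema. OPENAI_API_KEY must be set;
# the schema and example text are placeholders.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    name: str = Field(description="Full name of the person")
    age: int = Field(description="Age in years")
    city: str = Field(description="City where the person lives")

llm = ChatOpenAI(model="gpt-4o").with_structured_output(Person)
result = llm.invoke("Maria Lopez, 34, moved to Lisbon last spring.")
print(result)  # Person(name='Maria Lopez', age=34, city='Lisbon')
```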

2
104
Member Avatar for usmanmalik57

Retrieval augmented generation (RAG) allows large language models (LLMs) to answer queries related to the data the models have not seen during training. In my previous article, I explained [how to develop RAG systems using the Claude 3.5 Sonnet model](https://www.daniweb.com/programming/computer-science/tutorials/542136/retrieval-augmented-generation-with-claude-3-5-sonnet). However, RAG systems only answer queries about the data stored …

1
120
Member Avatar for usmanmalik57

On August 20, 2024, [OpenAI enabled GPT-4o fine-tuning](https://openai.com/index/gpt-4o-fine-tuning/) in the OpenAI playground and the OpenAI API. The much-awaited feature is free for up to 1 million training tokens per day until September 23, 2024. In this article, I will show you how to fine-tune the OpenAI GPT-4o model for text classification and summarization …
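
As a minimal sketch of starting such a job with the OpenAI Python client (the training file name and job suffix below are placeholders; the snapshot name is OpenAI's fine-tunable GPT-4o release):

```python
# Minimal sketch: upload a prepared chat-format JSONL file and start a GPT-4o
# fine-tuning job. "finetune_train.jsonl" and the suffix are placeholders.
from openai import OpenAI

client = OpenAI()

training_file = client.files.create(
    file=open("finetune_train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
    suffix="tweet-classifier",
)
print(job.id, job.status)
```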

2
528
Member Avatar for usmanmalik57

In a previous article, I compared [GPT-4o mini vs. GPT-4o and GPT-3.5 Turbo for zero-shot text summarization](https://www.daniweb.com/programming/computer-science/tutorials/542208/gpt-4o-mini-vs-gpt-4o-vs-gpt-3-5-turbo-for-text-summarization). The results showed that GPT-4o mini achieves comparable performance for zero-shot text summarization at a much lower price than the other models. In this article, I will compare Meta Llama 3.1 70b with OpenAI …

2
511
Member Avatar for usmanmalik57

In one of my previous articles, I presented a [comparison of the OpenAI GPT-4o mini model with GPT-4o and GPT-3.5 turbo models for zero-shot text classification](https://www.daniweb.com/programming/computer-science/tutorials/542182/gpt-4o-mini-a-cheaper-and-faster-alternative-to-gpt-4o). The results showed that GPT-4o mini, while significantly cheaper than its counterparts, achieves comparable performance. On August 8, 2024, OpenAI enabled GPT-4o mini fine-tuning for developers across …

1
252
Member Avatar for usmanmalik57

In my previous [article on GPT-4o mini](https://www.daniweb.com/programming/computer-science/tutorials/542182/gpt-4o-mini-a-cheaper-and-faster-alternative-to-gpt-4o), I compared the performance of GPT-4o mini against GPT-3.5 Turbo and GPT-4o for zero-shot text classification. We saw that GPT-4o mini, being 36 times cheaper, achieves only 2% less accuracy than GPT-4o. Furthermore, while being 1/3 of the price, the GPT-4o mini significantly …

1
216
Member Avatar for VAIBHAV_20
Member Avatar for rproffitt
0
125
Member Avatar for Techwriter10

Imagine my surprise when I learned this morning that an [URL="http://news.cnet.com/8301-13924_3-10216733-64.html?part=rss&subj=news&tag=2547-1_3-0-5"]IBM researcher believes[/URL] that Moore's Law-- that the number of transistors on a microprocessor would double nearly every two years-- could be nearing the end of its run. Amazingly, Moore made this prediction in 1965 and his law has …

Computer Science legal
Member Avatar for Madisson
1
502
Member Avatar for usmanmalik57

On July 18th, 2024, [OpenAI released GPT-4o mini](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/), their most cost-efficient small model. GPT-4o mini is around 60% cheaper than GPT-3.5 Turbo and around 97% cheaper than GPT-4o. As per OpenAI, GPT-4o mini outperforms GPT-3.5 Turbo on almost all benchmarks while being cheaper. In this article, we will compare the …

3
185
Member Avatar for usmanmalik57

In my article on [Image Analysis Using OpenAI GPT-4o Model](https://www.daniweb.com/programming/computer-science/tutorials/542030/image-analysis-using-openai-gpt-4o-model), I explained how the GPT-4o model allows you to analyze images and precisely answer questions related to them. In this article, I will show you how to analyze images with the [Anthropic Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) model, which has shown state-of-the-art performance for …
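
Claude's Messages API accepts base64-encoded images alongside text; as a minimal sketch (the image file name and question are placeholders, not from the article):

```python
# Minimal sketch: ask Claude 3.5 Sonnet a question about a local image by
# sending it base64-encoded through the Messages API. ANTHROPIC_API_KEY must
# be set; "receipt.jpg" is a placeholder.
import base64
from anthropic import Anthropic

with open("receipt.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {
                "type": "base64", "media_type": "image/jpeg", "data": image_b64}},
            {"type": "text", "text": "What is the total amount on this receipt?"},
        ],
    }],
)
print(response.content[0].text)
```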

2
128
Member Avatar for usmanmalik57

In my [previous article](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification), I presented results comparing the Anthropic [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) and [OpenAI GPT-4o](https://openai.com/index/hello-gpt-4o/) models for zero-shot text classification. The results showed that Claude 3.5 Sonnet significantly outperformed GPT-4o. These results motivated me to develop a simple retrieval augmented generation system with [LangChain](https://www.langchain.com/) that enables the Claude 3.5 …

3
877
Member Avatar for usmanmalik57

Are you interested in finding out what a YouTube channel mostly discusses? Do you want to analyze YouTube videos of a specific channel? If yes, we are in the same boat. YouTube video titles are a great way to determine the channel's primary focus. Plotting a word cloud or a …
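
Plotting the word cloud itself is a short step; the sketch below assumes the video titles have already been collected (fetching them, e.g. via the YouTube Data API, is not shown), and the sample titles are placeholders.

```python
# Minimal sketch: plot a word cloud from a list of video titles.
# The titles below are placeholders for ones fetched from a channel.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

titles = [
    "Fine-tuning GPT-4o for text classification",
    "Retrieval augmented generation with LangChain",
    "Zero-shot sentiment analysis of airline tweets",
]

cloud = WordCloud(width=800, height=400, background_color="white").generate(" ".join(titles))
plt.figure(figsize=(10, 5))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```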

Computer Science data-science python
4
74
Member Avatar for usmanmalik57

On June 20, 2024, Anthropic released the [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) large language model. Anthropic claims it is the state-of-the-art model for many natural language processing tasks, surpassing the [OpenAI GPT-4o model](https://openai.com/index/hello-gpt-4o/). My first test for comparing two large language models is their zero-shot text classification ability. In this article, …

3
175
Member Avatar for usmanmalik57

As a data scientist, I have extensively used the Hugging Face library for processing unstructured data such as images, text, and audio. My previous blogs have covered various transformer models for these types of data. Lately, however, I discovered that Hugging Face also provides transformer models for tabular data. One …
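
The teaser does not name the tabular model it covers; one Hugging Face transformer for tabular data is TAPAS for table question answering, shown below purely as an illustrative choice (the table contents and question are placeholders, and the article may use a different model entirely).

```python
# Minimal sketch: question answering over a small table with the transformers
# "table-question-answering" pipeline and TAPAS. TAPAS expects all table cells
# as strings; older transformers versions may also require the torch-scatter
# package. The table and query are placeholders.
import pandas as pd
from transformers import pipeline

table = pd.DataFrame({
    "city": ["Paris", "Berlin", "Madrid"],
    "population_millions": ["2.1", "3.6", "3.3"],
})

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
print(tqa(table=table, query="Which city has the largest population?")["answer"])
```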

Computer Science machine-learning python
2
70
Member Avatar for usmanmalik57

# Comparison Between Fine-tuned and Default GPT-3.5 Turbo for Text Classification In one of my previous articles, I showed you how to perform [zero-shot text classification using OpenAI GPT-4o and Meta Llama 3 models](https://www.daniweb.com/programming/computer-science/tutorials/542001/openai-gpt-4o-vs-meta-llama-3-for-zero-shot-text-classifiation). I used the default models for predicting sentiments of airline tweets. The default models perform substantially …

2
496
Member Avatar for usmanmalik57

OpenAI announced the [GPT-4o (omni)](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700) model on May 13, 2024. The GPT-4o model, as the name suggests, can process multimodal inputs, such as text, image, and speech. As per OpenAI, GPT-4o is the state-of-the-art and best-performing large language model. Among GPT-4o's many capabilities, I found its ability to analyze images …
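
GPT-4o accepts image URLs (or base64 data URLs) alongside text in the Chat Completions API; a minimal sketch follows, with a placeholder image URL and prompt.

```python
# Minimal sketch: ask GPT-4o about an image by passing an image URL alongside
# the text prompt. OPENAI_API_KEY must be set; the URL is a placeholder.
from openai import OpenAI

response = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/street.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```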

3
1K
Member Avatar for usmanmalik57

On April 18, 2024, Meta AI released [Llama 3](https://ai.meta.com/blog/meta-llama-3/), which they claimed to be the most capable openly available LLM to date. Concurrently, OpenAI announced [GPT-4o (omni)](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700) on May 13, 2024, which is touted as the state-of-the-art proprietary model for various NLP benchmarks. As a guy who loves to compare …

2
214
Member Avatar for usmanmalik57

In this tutorial, you will see how to generate stunning AI-generated images from text inputs using state-of-the-art diffusion models from [Hugging Face](https://huggingface.co/). You'll learn about base diffusion models and how combining them with a refiner creates even more detailed, refined results. Diffusion models are powerful because they iteratively refine an …
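
One common base-plus-refiner pairing on Hugging Face is Stable Diffusion XL; the sketch below assumes those checkpoints (which models the tutorial actually uses is an assumption) and a CUDA GPU, with a placeholder prompt.

```python
# Minimal sketch: generate latents with the SDXL base model and polish them
# with the SDXL refiner, a common base-plus-refiner setup in diffusers.
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "an astronaut riding a horse on the moon, photorealistic"
latents = base(prompt, num_inference_steps=40, denoising_end=0.8,
               output_type="latent").images
image = refiner(prompt, num_inference_steps=40, denoising_start=0.8,
                image=latents).images[0]
image.save("sdxl_astronaut.png")
```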

Member Avatar for rproffitt
2
159
Member Avatar for usmanmalik57

## Introduction Text-to-speech (TTS) technology has revolutionized how we interact with devices, making it easier to access content through audio. TTS is vital in various applications such as virtual assistants, audiobooks, accessibility tools for the visually impaired, and language learning platforms. This tutorial will explore how to convert text to speech using Hugging …
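
As a minimal sketch of one way to do this with the transformers text-to-speech pipeline (the MMS English checkpoint is only an illustrative choice, and the sentence and output file name are placeholders; the tutorial may use a different TTS model):

```python
# Minimal sketch: synthesize speech with the transformers "text-to-speech"
# pipeline and save it to a WAV file. The checkpoint, sentence, and file name
# are placeholders.
import scipy.io.wavfile as wavfile
from transformers import pipeline

tts = pipeline("text-to-speech", model="facebook/mms-tts-eng")
speech = tts("Text to speech makes written content accessible by ear.")

wavfile.write("speech.wav", rate=speech["sampling_rate"], data=speech["audio"].squeeze())
```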

2
151

The End.