- Strength to Increase Rep: +2
- Strength to Decrease Rep: -0
- Upvotes Received: 142
- Posts with Upvotes: 66
- Upvoting Members: 25
- Downvotes Received: 0
- Posts with Downvotes: 0
- Downvoting Members: 0
I am a machine learning and data science researcher passionate about writing on data science topics.
https://twitter.com/UsmanMalik_PhD
Open-source LLMs are gaining significant traction due to their ability to match the performance of advanced proprietary LLMs. These models are free to use and allow users to modify their source code or fine-tune them on their own systems, making them highly versatile for various applications. Alibaba's [Qwen](https://www.alibabacloud.com/en/solutions/generative-ai/qwen?_p_lc=1) and Meta's …
This tutorial demonstrates how to build an AI agent that queries SQLite databases using natural language. You will see how to leverage the [LangGraph framework](https://www.langchain.com/langgraph) and the [OpenAI GPT-4o](https://openai.com/index/gpt-4/) model to retrieve natural language answers from an SQLite database, given a natural language query. So, let's begin without further ado. ## …
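The core of that setup can be compressed into a short, hedged sketch rather than the article's exact code. It assumes a local SQLite file named `example.db`, an `OPENAI_API_KEY` in the environment, and the `langchain-community`, `langchain-openai`, and `langgraph` packages installed.

```python
# A minimal sketch of a LangGraph ReAct agent that answers natural-language
# questions against a SQLite database. "example.db" and the sample question
# are placeholders, not values from the article.
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

db = SQLDatabase.from_uri("sqlite:///example.db")   # connect to the SQLite file
llm = ChatOpenAI(model="gpt-4o", temperature=0)      # GPT-4o as the reasoning model

# The toolkit exposes schema-inspection and query-execution tools to the agent.
tools = SQLDatabaseToolkit(db=db, llm=llm).get_tools()
agent = create_react_agent(llm, tools)

# Ask a question in plain English; the agent writes and runs the SQL itself.
result = agent.invoke({"messages": [("user", "How many rows are in the largest table?")]})
print(result["messages"][-1].content)
```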
In a previous article, I explained [how to extract tabular data from PDF image documents using Multimodal Google Gemini Pro](https://www.daniweb.com/programming/computer-science/tutorials/541449/pdf-image-table-extractor-web-app-with-google-gemini-pro-and-streamlit#post2296083). However, there are a couple of disadvantages with Google Gemini Pro. First, Google Gemini Pro is not free, and second, it requires complex prompt engineering to retrieve tables, columns, and …
On November 20, 2024, OpenAI updated its GPT-4o model, claiming it is more creative and accurate on several benchmarks. In this article, I compare the GPT-4o November update with the previous version (August update) for text summarization and classification tasks. By the end of this article, you will see whether …
In my previous article, I presented a [comparison of GPT-4o and Claude 3.5 Sonnet for multi-label text classification](https://www.daniweb.com/programming/computer-science/tutorials/542629/openai-gpt-4o-vs-claude-3-5-sonnet-for-multi-label-text-classification). The accuracies achieved by both models were relatively low. Fine-tuning is one solution to overcome the low performance of large language models. With fine-tuning, you can incorporate custom domain knowledge into an LLM's …
In one of my previous articles, you saw a [comparison of GPT-4o vs. Claude 3.5 Sonnet for zero-shot text classification](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification). In that article, we performed multi-class text classification where input tweets belonged to one of the three categories. In this article, we will go a step further and perform zero-shot …
On September 25, 2024, Meta released [the Llama 3.2 series of multimodal models](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/). The models are lightweight yet extremely powerful for image-to-text and text-to-text tasks. In this article, you will learn how to use the Llama 3.2 Vision Instruct model for general image analysis, graph analysis, and facial sentiment prediction. …
This article explains how to create a retrieval augmented generation (RAG) chatbot in LangChain using open-source models from [Hugging Face serverless inference API](https://huggingface.co/docs/api-inference/en/index). You will see how to call large language models (LLMs) and embedding models from Hugging Face serverless inference API using LangChain. You will also see how to …
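As a rough sketch of that RAG pattern (not the article's exact code), the snippet below indexes a couple of toy documents with a Hugging Face embedding endpoint and then answers a question with a serverless-hosted LLM. The model names, sample documents, and question are illustrative assumptions, and it expects `HF_TOKEN` in the environment plus the `langchain-huggingface`, `langchain-community`, and `faiss-cpu` packages.

```python
# Hedged RAG sketch: remote embeddings + remote LLM via Hugging Face serverless
# inference, with a tiny local FAISS index. All content below is illustrative.
from langchain_huggingface import HuggingFaceEndpoint, HuggingFaceEndpointEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

docs = [Document(page_content="LangChain can call Hugging Face serverless inference endpoints."),
        Document(page_content="RAG retrieves relevant chunks before calling the LLM.")]

# Embed the documents remotely and index them locally.
embeddings = HuggingFaceEndpointEmbeddings(model="sentence-transformers/all-MiniLM-L6-v2")
retriever = FAISS.from_documents(docs, embeddings).as_retriever()

# An open-source LLM hosted on the serverless inference API.
llm = HuggingFaceEndpoint(repo_id="mistralai/Mistral-7B-Instruct-v0.3", max_new_tokens=256)

question = "How does RAG work?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```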
Open-source LLMs, owing to their comparable performance with advanced proprietary LLMs, have been gaining immense popularity lately. Open-source LLMs are free to use, and you can easily modify their source code or fine-tune them on your systems. [Alibaba's Qwen](https://www.alibabacloud.com/en/solutions/generative-ai/qwen?_p_lc=1) and [Meta's Llama](https://ai.meta.com/blog/meta-llama-3-1/) series of models are two major players in …
In my previous article, I explained how to fine-tune the [OpenAI GPT-4o model for natural language processing tasks](https://www.daniweb.com/programming/computer-science/tutorials/542333/how-to-fine-tune-the-openai-gpt-4o-model-the-wait-is-finally-over). At OpenAI DevDay, held on October 1, 2024, OpenAI announced that users can now fine-tune OpenAI vision and multimodal models such as GPT-4o and GPT-4o mini. The best part is that fine-tuning vision …
In one of my previous articles, I explained [how to generate stunning images for free using diffusion models](https://www.daniweb.com/programming/computer-science/tutorials/541898/generate-stunning-ai-images-for-free-using-diffusion-models) and showed how to use Stability AI's diffusion models for text-to-image generation. Since then, the AI domain has progressed considerably, particularly in image generation. Black Forest Labs has released the [Flux.1 series of …
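As a hedged sketch of generating an image with a Flux.1 model through the `diffusers` library (the schnell checkpoint, prompt, and memory-saving call are illustrative assumptions, and a reasonably capable GPU is expected):

```python
# Illustrative text-to-image sketch with a Flux.1 checkpoint via diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell",
                                    torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()   # helps on GPUs with limited VRAM

# The schnell variant is tuned for few inference steps without guidance.
image = pipe("A watercolor painting of a lighthouse at sunrise",
             num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("lighthouse.png")
```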
On September 19, 2024, [Alibaba released the Qwen 2.5 series of models](https://qwenlm.github.io/blog/qwen2.5/). The Qwen 2.5-72B base and instruct models outperformed larger state-of-the-art models like Llama 3.1-405B on multiple benchmarks. It is safe to assume that Qwen 2.5-72B is a state-of-the-art open-source large language model. This article will show you how …
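To give a flavour of how such a model can be loaded locally, here is a minimal, hedged sketch using Hugging Face Transformers. The 7B instruct checkpoint and the prompt are illustrative choices to keep the example small; the article itself may use a different variant or serving setup.

```python
# Illustrative sketch: run a Qwen 2.5 instruct model with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             torch_dtype=torch.bfloat16,
                                             device_map="auto")

messages = [{"role": "user", "content": "Summarize what makes open-source LLMs attractive."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```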
The AI wave has introduced a myriad of exciting applications. While text generation and natural language processing are leading the AI revolution, image- and vision-based technologies are quickly catching up. The intersection of text and vision applications has seen a rapid surge recently. In this article, you'll learn how to …
## Introduction ## In a previous article, I explained [how to fine-tune the vision transformer model for image classification in PyTorch](https://www.daniweb.com/programming/computer-science/tutorials/540749/fine-tuning-vision-transformer-for-image-classification-in-pytorch). In this article, I will explain how to fine-tune the pre-trained OpenAI Whisper model for audio classification in PyTorch. Audio classification is an important task that can be applied …
Large language models (LLMs) are trained to predict the next token (set of characters) following an input sequence of tokens. This makes LLMs suitable for unstructured textual responses. However, we often need to extract structured information from unstructured text. With the Python [LangChain](https://www.langchain.com/) module, you can extract structured information in …
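A minimal sketch of that idea, assuming an OpenAI chat model and a made-up `Person` schema (neither is taken from the excerpt), looks roughly like this:

```python
# Hedged sketch: structured extraction with LangChain's with_structured_output.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    """Structured fields to pull out of free-form text."""
    name: str = Field(description="The person's full name")
    age: int = Field(description="Age in years")
    city: str = Field(description="City of residence")

llm = ChatOpenAI(model="gpt-4o", temperature=0)
extractor = llm.with_structured_output(Person)   # returns a Person instead of raw text

result = extractor.invoke("Maria Lopez, 34, moved to Lisbon last spring.")
print(result.name, result.age, result.city)
```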
Retrieval augmented generation (RAG) allows large language models (LLMs) to answer queries related to the data the models have not seen during training. In my previous article, I explained [how to develop RAG systems using the Claude 3.5 Sonnet model](https://www.daniweb.com/programming/computer-science/tutorials/542136/retrieval-augmented-generation-with-claude-3-5-sonnet). However, RAG systems only answer queries about the data stored …
On August 20, 2024, [OpenAI enabled GPT-4o fine-tuning](https://openai.com/index/gpt-4o-fine-tuning/) in the OpenAI playground and the OpenAI API. The much-awaited feature is free for fine-tuning 1 million daily tokens until September 23, 2024. In this article, I will show you how to fine-tune the OpenAI GPT-4o model for text classification and summarization …
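For orientation, a condensed sketch of kicking off such a job with the OpenAI Python SDK is shown below. The training file name and the GPT-4o snapshot are placeholders; the JSONL file must contain chat-formatted examples and `OPENAI_API_KEY` must be set.

```python
# Hedged sketch: launch a GPT-4o fine-tuning job with the OpenAI SDK.
from openai import OpenAI

client = OpenAI()

# Upload the training data (one {"messages": [...]} object per line).
training_file = client.files.create(
    file=open("train_classification.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on a GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)
```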
In a previous article, I compared [GPT-4o mini vs. GPT-4o and GPT-3.5 Turbo for zero-shot text summarization](https://www.daniweb.com/programming/computer-science/tutorials/542208/gpt-4o-mini-vs-gpt-4o-vs-gpt-3-5-turbo-for-text-summarization). The results showed that GPT-4o mini achieves comparable performance for zero-shot text summarization at a much-reduced price compared to the other models. I will compare Meta Llama 3.1 70B with OpenAI …
In my previous article, I presented a [comparison of the OpenAI GPT-4o mini model with the GPT-4o and GPT-3.5 Turbo models for zero-shot text classification](https://www.daniweb.com/programming/computer-science/tutorials/542182/gpt-4o-mini-a-cheaper-and-faster-alternative-to-gpt-4o). The results showed that GPT-4o mini, while significantly cheaper than its counterparts, achieves comparable performance. On August 8, 2024, OpenAI enabled GPT-4o mini fine-tuning for developers across …
In my previous [article on GPT-4o mini](https://www.daniweb.com/programming/computer-science/tutorials/542182/gpt-4o-mini-a-cheaper-and-faster-alternative-to-gpt-4o), I compared the performance of GPT-4o mini against GPT-3.5 Turbo and GPT-4o for zero-shot text classification. We saw that GPT-4o mini, being 36 times cheaper, achieves only 2% less accuracy than GPT-4o. Furthermore, while being 1/3 of the price, the GPT-4o mini significantly …
On July 18th, 2024, [OpenAI released GPT-4o mini](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/), their most cost-efficient small model. GPT-4o mini is around 60% cheaper than GPT-3.5 Turbo and around 97% cheaper than GPT-4o. As per OpenAI, GPT-4o mini outperforms GPT-3.5 Turbo on almost all benchmarks while being cheaper. In this article, we will compare the …
In my article on [Image Analysis Using OpenAI GPT-4o Model](https://www.daniweb.com/programming/computer-science/tutorials/542030/image-analysis-using-openai-gpt-4o-model), I explained how the GPT-4o model allows you to analyze images and precisely answer questions related to them. In this article, I will show you how to analyze images with the [Anthropic Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) model, which has shown state-of-the-art performance for …
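As a small illustration of the kind of call involved (not the article's exact code), the sketch below sends a local image to Claude 3.5 Sonnet with the Anthropic Python SDK. The image path and question are placeholders, and `ANTHROPIC_API_KEY` is assumed to be set.

```python
# Hedged sketch: image analysis with Claude 3.5 Sonnet via the Anthropic SDK.
import base64
import anthropic

client = anthropic.Anthropic()

# Claude expects images as base64-encoded content blocks.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64}},
            {"type": "text", "text": "Describe what is happening in this image."},
        ],
    }],
)
print(message.content[0].text)
```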
Are you interested in finding out what a YouTube channel mostly discusses? Do you want to analyze YouTube videos of a specific channel? If yes, we are in the same boat. YouTube video titles are a great way to determine the channel's primary focus. Plotting a word cloud or a …
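For a taste of the word-cloud part of that idea, here is a tiny hedged sketch; the titles list is a stand-in for titles you would actually fetch from a channel (for example via the YouTube Data API), and the `wordcloud` and `matplotlib` packages are assumed to be installed.

```python
# Illustrative sketch: word cloud of (placeholder) video titles.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

titles = [
    "Fine-tuning LLMs for text classification",
    "Retrieval augmented generation explained",
    "Zero-shot classification with GPT-4o",
]

# Join all titles into one string and let WordCloud size words by frequency.
cloud = WordCloud(width=800, height=400, background_color="white").generate(" ".join(titles))

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```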
In my [previous article](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification), I presented results comparing the Anthropic [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) and [OpenAI GPT-4o](https://openai.com/index/hello-gpt-4o/) models for zero-shot text classification. The results showed that Claude 3.5 Sonnet significantly outperformed GPT-4o. These results motivated me to develop a simple retrieval augmented generation system with [LangChain](https://www.langchain.com/) that enables the Claude 3.5 …
On June 20, 2024, Anthropic released the [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) large language model. Anthropic claims it is the state-of-the-art model for many natural language processing tasks, surpassing the [OpenAI GPT-4o model](https://openai.com/index/hello-gpt-4o/). My first test for comparing two large language models is their zero-shot text classification ability. In this article, …
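A bare-bones sketch of that zero-shot setup (illustrative, not the article's exact prompts or data) sends the same instruction to both models and compares the one-word labels; both API keys are assumed to be available in the environment.

```python
# Hedged sketch: zero-shot sentiment classification with GPT-4o and Claude 3.5 Sonnet.
from openai import OpenAI
import anthropic

tweet = "My flight was delayed three hours and nobody told us anything."
prompt = (
    "Classify the sentiment of this airline tweet as positive, negative, or neutral. "
    f"Reply with one word.\n\nTweet: {tweet}"
)

gpt = OpenAI().chat.completions.create(
    model="gpt-4o", messages=[{"role": "user", "content": prompt}]
)
claude = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620", max_tokens=10,
    messages=[{"role": "user", "content": prompt}]
)
print("GPT-4o:", gpt.choices[0].message.content)
print("Claude 3.5 Sonnet:", claude.content[0].text)
```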
As a data scientist, I have extensively used the Hugging Face library for processing unstructured data such as images, text, and audio. My previous blogs have covered various transformer models for these types of data. Lately, however, I discovered that Hugging Face also provides transformer models for tabular data. One …
# Comparison Between Fine-tuned and Default GPT-3.5 Turbo for Text Classification In one of my previous articles, I showed you how to perform [zero-shot text classification using OpenAI GPT-4o and Meta Llama 3 models](https://www.daniweb.com/programming/computer-science/tutorials/542001/openai-gpt-4o-vs-meta-llama-3-for-zero-shot-text-classifiation). I used the default models for predicting sentiments of airline tweets. The default models perform substantially …
OpenAI announced the [GPT-4o (omni)](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700) model on May 13, 2024. The GPT-4o model, as the name suggests, can process multimodal inputs, such as text, image, and speech. As per OpenAI, GPT-4o is the state-of-the-art and best-performing large language model. Among GPT-4o's many capabilities, I found its ability to analyze images …
On April 18, 2024, Meta AI released [Llama 3](https://ai.meta.com/blog/meta-llama-3/), which they claimed to be the most capable openly available LLM to date. Concurrently, OpenAI announced [GPT-4o (omni)](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700) on May 13, 2024, which is touted as the state-of-the-art proprietary model for various NLP benchmarks. As a guy who loves to compare …
## Introduction Text-to-speech (TTS) technology has revolutionized how we interact with devices, making it easier to access content through auditory means. TTS is vital in various applications such as virtual assistants, audiobooks, accessibility tools for the visually impaired, and language learning platforms. This tutorial will explore how to convert text to speech using Hugging …
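As a short, hedged sketch of the text-to-speech workflow (the Bark checkpoint and output file name are illustrative choices, not necessarily the ones used in the tutorial):

```python
# Illustrative sketch: text-to-speech with a Hugging Face pipeline.
import soundfile as sf
from transformers import pipeline

tts = pipeline("text-to-speech", model="suno/bark-small")

speech = tts("Text to speech makes written content accessible to everyone.")
# The pipeline returns raw audio samples plus the sampling rate.
sf.write("speech.wav", speech["audio"].squeeze(), samplerate=speech["sampling_rate"])
```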