LLMs are great, but if you don't know how to ask the right questions and break your thoughts into manageable pieces, they can't do it for you. Even when you do, you need a certain level of experience to distinguish a wrong or crappy response from a legitimate one. Moreover, I know there are many programmers who aren't yet familiar with what these new AI tools can offer them. I believe this cycle of AI hype is going to burst (rather soon), but the new tools are useful and are here to stay.

So I was thinking of creating a simple "AI app" tutorial in parts here on DaniWeb. Here is the concept: you query the app with something (a question or even a phrase), it returns a "relevant" quote from a historical figure using a vector database (probably ChromaDB) populated from a free quotes dataset, and then an open source LLM (probably Llama 3 8B) gives a justification of why that quote is relevant to your query. All of this runs on your local machine, CPU only, without paying for anything or registering with any service or API. A rough sketch of the flow is below. What do you think of this concept?
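Roughly, the flow I have in mind looks like this (just a sketch, not the tutorial code; it assumes the chromadb and llama-cpp-python packages, that the quotes are already stored in a local ChromaDB collection with author metadata, and the model file name is a placeholder):

```python
# Minimal sketch of the "Chat with Quotes" idea: retrieve a quote from a local
# vector database, then ask a local CPU-only LLM to justify its relevance.
import chromadb
from llama_cpp import Llama

client = chromadb.PersistentClient(path="./quotes_db")   # local, on-disk vector store
collection = client.get_collection("quotes")

query = "How do I deal with failure?"

# 1. Retrieve the most similar quote (ChromaDB embeds the query text with the
#    collection's embedding function).
result = collection.query(query_texts=[query], n_results=1)
quote = result["documents"][0][0]
author = result["metadatas"][0][0].get("author", "unknown")

# 2. Ask a local open source LLM to explain why the quote is relevant.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)
prompt = (
    f"The user asked: {query}\n"
    f"A retrieved quote by {author}: \"{quote}\"\n"
    "Explain briefly why this quote is relevant to the user's question."
)
answer = llm(prompt, max_tokens=256)["choices"][0]["text"]

print(quote, "-", author)
print(answer)
```

The point is that both the retrieval step and the generation step run locally on the CPU, with no external API involved.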

It will be a multipart tutorial, and I was thinking of creating a GitHub repository to go along with it for the source code (of course there will be a link back to the tutorial chain). The name for it: "Chat with Quotes". Do you think it's worth the time?

Also, if you do, where should I post it on DaniWeb? Programming > Software Development > Tutorial Draft? Is that the right place?

I am not an expert in Python, and I am not an expert in AI either ;). I have just created some real-life applications using AI models and Python (I tried so hard to replace it with C++ ..... but ...) and figured out that it is still hard to connect the AI advancements to the real world. Some of these are quietly in production (without making any noise about being AI), and some are pending because they are more ... "AI in your face", and we are not sure how customers will respond to that at the moment.

I would love to read your thoughts

rproffitt commented: Go for it. I'll read them too. +0

I'd definitely follow that. My older son does a fair bit of AI work and he exclusively uses Python with PyTorch. I would strongly recommend that over C++. Also, if you aren't familiar with Jupyter notebooks, I suggest you have a look - I found VS Code was the easiest interface to set up and use.

Hello, I have started writing it (in my free time), but it is a little difficult because I want it to be safe and also accessible to someone who doesn't know a lot about Python. So I have to start from what pyenv is and why you should use virtual environments in Python.
Until less than half a year ago, I had only written Python here and there (or, more accurately, translated Python to Java or PHP), and I didn't really understand why you should do things the "weird way" in Python. So this tutorial will sketch those concepts in order, along with what embeddings are, what sentence transformer models are, what vector databases are, and how you can tie all of them to an open source LLM like Llama 3 8B (a small sketch of the embedding/vector database step is below). I will write just enough to define each concept, so the reader gets an idea of what it is and can search for more if they want to.
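To give a flavour of that embedding/vector database part, the ingestion side could look something like this (again just a sketch; it assumes the sentence-transformers and chromadb packages, and the model and collection names are only illustrative choices):

```python
# Minimal sketch: embed a few quotes with a sentence transformer model
# and store text, metadata and vectors in a local ChromaDB collection.
import chromadb
from sentence_transformers import SentenceTransformer

quotes = [
    {"text": "The only true wisdom is in knowing you know nothing.", "author": "Socrates"},
    {"text": "Fall seven times and stand up eight.", "author": "Japanese proverb"},
]

# Turn each quote into a fixed-size vector (an "embedding"), on the CPU.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([q["text"] for q in quotes]).tolist()

# Persist everything in a local, on-disk collection.
client = chromadb.PersistentClient(path="./quotes_db")
collection = client.get_or_create_collection("quotes")
collection.add(
    ids=[str(i) for i in range(len(quotes))],
    documents=[q["text"] for q in quotes],
    metadatas=[{"author": q["author"]} for q in quotes],
    embeddings=embeddings,
)
```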
I never understood why we use Python for AI; it doesn't make sense to me. I understand it in the historical context (how it came to be), but since there isn't any explanation of its benefits that really convinces me, I tried to use C++ wherever I could. All these Python packages that we use for AI apps are in reality C++ bindings. But here is the catch: you can't use them directly from C++. That is because 1. few care, and 2. even if your implementation is model agnostic, many (if not most) models have problems being translated into a form you can use from C++ (for example, through libtorch you would need a TorchScript .pt model - a rough sketch of that export is below). There are some C++ solutions out there, like libtorch and llama.cpp, but those hit limits you can't predict, because you can't incorporate everything that you could in Python.
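To make the libtorch point concrete, this is roughly the kind of .pt export that libtorch expects (the tiny model here is just a placeholder; many real, published models do not trace this cleanly, which is exactly the limit I mean):

```python
# Minimal sketch: export a (placeholder) PyTorch model to TorchScript
# so that it can be loaded from C++ via libtorch.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example = torch.rand(1, 4)

# Trace the Python model into a TorchScript module and save it as a .pt file.
traced = torch.jit.trace(model, example)
traced.save("tiny_net.pt")
```

On the C++ side you would then load that file with torch::jit::load("tiny_net.pt").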
It is rather ironic that we use the slowest "modern" language to run such resource-intensive apps (especially on CPU), but of course most of these Python packages are C++ bindings at their core, so maybe it isn't a big deal (I believe it is, but since I can't change it, I have accepted it, for now).
As for the IDE (replying to Reverend Jim), I use Eclipse (with the PyDev plugin in this case). I like that there are other IDEs out there (most of them built on top of Eclipse, though others, like VS Code, stand on their own), but in my view Eclipse has earned the title of industry standard.
If you have any thoughts on these, please share ;)

I would absolutely love, of course, if you could write a tutorial, or series of tutorials, on how to use an LLM without the ChatGPT API. I did not even realize that there were open source LLMs available these days!! (Perhaps I'm just behind the times?)

Yes, just create a tutorial draft and then you can manage it from the Editorial Workshop until it's ready to be published.

commented: Thank you Dani, I am thinking of one thread with three parts and interactions in Software Development > Tutorial Draft +0