Creating A News Summarizer Using ChatGPT (Part 1)
Warren Buffett once told Bill Gates, after Gates said the computer was going to change everything: “I’ll stick to chewing gum, you stick to computers.” We have to admit that we don’t understand a lot of things, and it is wise to focus on our competencies. However, that should not stop us from exploring new ideas and technologies, because the circle of competence can also be expanded through learning. I think general AI is another thing that is going to change everything.
Creating a news summarizer is a simple project I did this weekend after I learned that it is possible to customize ChatGPT with our own data. Large Language Models (LLMs) are pre-trained on a massive amount of public data; they have no visibility into private data and are not fine-tuned to handle specific tasks. From my limited understanding, there are two ways to build custom LLMs. The first is prompt engineering: users supply their own context and ask for a specific task directly in the prompt. The second is fine-tuning, which changes the model itself instead of changing the prompt. I tried the first method using LlamaIndex.
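To make the idea concrete, here is a bare-bones illustration of the prompt-engineering approach using the OpenAI client directly rather than my actual LlamaIndex setup; the model name and the placeholder article text are assumptions for the sake of the example.

```python
# A minimal sketch of prompt engineering: instead of changing the model,
# we paste our own context (a news article) into the prompt and ask the
# model to work only with that context. The article text is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

news_article = "Example article text pulled from an RSS feed would go here..."

prompt = (
    "Summarize the following news article in three sentences.\n\n"
    f"Article:\n{news_article}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

LlamaIndex automates this pattern at scale: it stores documents in an index, retrieves the relevant pieces for a question, and stuffs them into the prompt for you.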
I subscribe to different news feeds using an RSS reader called Feedly. Whenever the subscribed feeds are updated, I can read the new articles in the RSS reader. So the first thing I did this weekend was write a program to extract those new feed entries. At first, I wrote my own scripts using the official Python client from Feedly. But when I was nearly done, I found that LlamaIndex offers data connectors developed by the community: LlamaHub. So I thought I could contribute my Feedly connector to the community project. The connector returns a document object containing the title, publishing date, summary, author, keywords, common topics, and the main content of each piece of news in the RSS reader. The object can then be used to create an index that users can query (more on that in part 2). A rough sketch of the connector is shown below.
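The sketch below shows what such a connector can look like; the import paths, class name, and Feedly response fields are assumptions based on the LlamaHub reader pattern and Feedly's streams API, not the exact code in the pull request.

```python
# A rough sketch of a LlamaHub-style Feedly connector. Import paths and the
# Document constructor vary across llama_index versions, and the Feedly
# response fields used here are taken from its public streams API.
from typing import List

import requests
from llama_index.core import Document
from llama_index.core.readers.base import BaseReader


class FeedlyReader(BaseReader):
    """Loads entries from a Feedly stream as LlamaIndex Document objects."""

    def __init__(self, auth_token: str):
        self._headers = {"Authorization": f"Bearer {auth_token}"}

    def load_data(self, stream_id: str, max_count: int = 20) -> List[Document]:
        resp = requests.get(
            "https://cloud.feedly.com/v3/streams/contents",
            headers=self._headers,
            params={"streamId": stream_id, "count": max_count},
        )
        resp.raise_for_status()

        documents = []
        for item in resp.json().get("items", []):
            # Prefer the full content; fall back to the summary if absent.
            text = (
                item.get("content", {}).get("content", "")
                or item.get("summary", {}).get("content", "")
            )
            documents.append(
                Document(
                    text=text,
                    metadata={
                        "title": item.get("title", ""),
                        "published": item.get("published", ""),
                        "author": item.get("author", ""),
                        "keywords": ", ".join(item.get("keywords", [])),
                    },
                )
            )
        return documents
```

With the documents loaded this way, they can be handed straight to an index builder such as LlamaIndex's VectorStoreIndex, which is the part I will cover in part 2.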
I created this pull request to add the Feedly connector. Let’s see how it goes.