2023-05-22 OpenAI experiments diary

After three weeks or so of learning how to create a summarization and question-answering program that uses a downloaded open-source model and my own local documents, I have a few observations.

  1. the OpenAI models (and in particular GPT-4) are starting to do a good job of summarizing and listing the primary topics in several ad hoc collections of documents. n.b.: these are not free and open-source options, and they do not run on my home computer.
  2. I have a couple of open and available models downloaded to my computer (so far I am only using ~12 GB of storage).
  3. the two models I have been successful in using do not produce very good results ... HOWEVER,
  4. this is in part because, despite the large number of available posts, tutorials, GitHub repositories, etc., that provide a wealth of usable information, the popular frameworks of tools are changing very fast (I have had two updates to Python packages in a single day), which leaves existing code examples needing updates of their own.
  5. the upshot is that while I have some programs that do some things, I do not have a way to explain them to myself or to others, AND
  6. using these models and their frameworks on a home computer requires a lot of futzing and fiddling around with tools, and with ways to think about what I am doing.
  7. if development and adoption of AI, LLMs, and chatbots follows the usual "S" curve, I think I'm on the steep part and the plateau is not in sight.
  8. I still hope to document some of what I have learned, but that's gonna be a minute.
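One practical mitigation for the fast-moving Python packages mentioned in item 4 is to pin exact package versions, so that a code example that works today keeps working tomorrow. A minimal sketch of a requirements file; the package names and version numbers below are hypothetical placeholders, not necessarily the ones I am using:

```text
# requirements.txt -- pin exact versions so working examples do not
# silently break when a package updates (hypothetical versions)
transformers==4.29.2
langchain==0.0.170
sentence-transformers==2.2.2

# recreate the same environment later with:
#   pip install -r requirements.txt
```

Running `pip freeze > requirements.txt` in an environment where a script actually works captures the exact versions it was tested against.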
