Thanks for sharing your insights.
I’m curious why instances offering free search get defederated — I would have guessed everyone wants better search. Is it because of privacy concerns, or because instances don’t want to be indexed or have traffic directed elsewhere?
I was hoping that if I index only for the purpose of embeddings (which would prevent recreating the original content) and only share URLs to the content, that would eliminate both the privacy and the traffic concerns.
I’m still in the process of understanding whether and how this would work. It’s only a personal project at this stage, but you are right that CPU/GPU and vector stores are things I’d need to consider.
I have had luck with Ikea smart bulbs. They don’t need to be connected to the internet to operate.
TP-Link devices are notorious for requiring an internet connection; there is no way to operate them without one. On top of that, each device connects independently, so even when you have a VLAN, there are extra steps required every time a new device is set up.
I am surprised some of the big ones haven’t been mentioned yet:
Radiolab - Not really sure how to describe this podcast; it’s superb journalism at its core. They do both short pieces and multi-episode long-form series on a variety of topics, from science to history to current events. For example: how the dinosaurs died when the asteroid hit Earth, the story of a Guantanamo detainee with the same name as one of the hosts, and how poorly computer databases are designed for names outside the norm.
Planet Money - An excellent economics podcast where complex topics are distilled into fairly short episodes. They recently released a completely AI-generated episode that was scary in how good it was.
More Perfect - Everything about the US Supreme Court.
Serial - One multi-episode series at a time about complicated criminal cases.
What Roman Mars Can Learn About Con Law - Started during the Trump presidency, when tough questions about the US constitution were being asked given his penchant for pushing legal boundaries and norms.
Mental Illness Happy Hour by Paul Gilmartin, if you like a podcast that talks honestly about the struggles of mental health.
Paul interviews a different person each week, and they discuss their journey in dealing with their mental health. Paul has also been very open about his own struggles. It helps that he is a comedian with a subtle but dark humor that I enjoy.
I also really like the short surveys that listeners fill out on his website and that he reads on air; they make me feel connected, like I’m not alone.
Throughline has been my favorite since it launched a few years ago. The hosts take a deep dive into the historical events leading up to topical events of the present, weaving a thread through them, hence the name.
Some examples are the history of policing in the US and how capitalism became the dominant economic system.
I cannot recommend this podcast enough!
You can absolutely self-host LLMs. The HELM team has done an excellent job benchmarking the efficiency of different models on specific tasks, so that would be a good place to start. You can balance model performance on your specific task against the model’s efficiency; in most situations, larger models perform better but use more GPUs or are only available via APIs.
There are currently three different approaches to using an LLM for a custom task or application:
Train a base LLM from scratch - this is like creating your own GPT-by_autopilot model. This gives you the maximum level of control; however, the amount of compute, time, and data required for training makes it impractical for an end user. There are many open-source base LLMs already published on HuggingFace that can be used instead.
Fine-tune a base LLM - starting from a base LLM, you can fine-tune it for a certain set of tasks. For example, you can fine-tune a model to follow instructions or to serve as a chatbot; InstructGPT and GPT-3.5+ are examples of fine-tuned models. This approach lets you create a model that understands a specific domain or a set of instructions particularly well compared to the base LLM. However, any approach that requires training a large model will be expensive. If you are starting out, I’d suggest exploring this as a v2 step for improving your model.
Prompt engineering or indexing using an existing LLM - starting from an existing model, you create prompts to achieve your objective. This approach gives you the least control over the model itself, but it is the most efficient, and I would suggest it as the first approach to try. Langchain is the most widely used tool for prompt engineering and supports self-hosted base and instruct LLMs. If your task is search and retrieval, an embeddings model is used instead: you generate embeddings for all your content and store them as vectors; for a user query, you convert the query to an embedding using the same model, then retrieve the most similar content by vector similarity. Langchain provides this capability, but IMO sentence-transformers may be a better starting point for a self-hosted retrieval application. Without any intention to hijack this post, you can check out my project - synology-photos-nlp-search - as an example of a self-hosted retrieval application.
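The embed–store–query flow in the third approach can be sketched in a few lines. This is a toy illustration, not a real implementation: the `embed` function below is a made-up bag-of-words stand-in for a real embeddings model (with sentence-transformers you would call a model’s encode method instead), and the corpus and URLs are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embeddings model: bag-of-words term counts.
    # A real model (e.g. from sentence-transformers) returns a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. Index: embed every document once and store only the vectors.
corpus = {
    "https://example.com/bulbs": "ikea smart bulbs work without internet",
    "https://example.com/llms": "self host llms with sentence transformers",
}
index = {url: embed(text) for url, text in corpus.items()}

# 2. Query: embed the query with the SAME model, rank stored vectors
#    by similarity, and return only URLs, never the original content.
def search(query: str, top_k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda url: cosine(q, index[url]), reverse=True)
    return ranked[:top_k]

print(search("how to self host an llm"))  # → ['https://example.com/llms']
```

In a real setup, the `index` dict would be replaced by a vector store, but the shape of the flow is the same.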
To learn more, I have found the recent deeplearning.ai short courses to be quite good - they are short, comprehensive, and free.