

Why you should consider using small open source fine-tuned models
reddit
LLM Vibe Score: 0
Human Vibe Score: 0.929
hamada0001 · This week


Context

I want to start off by giving some context on what fine-tuning is, why it's useful and who it would be useful for.

What is fine-tuning?

When controlling the output of an LLM there are, broadly, three levels: prompt engineering, RAG and fine-tuning. Most of you are likely familiar with the first two. Prompt engineering is when you try to optimize the prompt so the model does what you want better. RAG (retrieval augmented generation) is when you first run a search on some data (usually stored in a vector database, which lets you search by similarity), then insert the results into the prompt so the model can use that context to answer questions more accurately. It's like letting the LLM access external information right before answering, using that additional context to improve its response. Fine-tuning is when you want to fundamentally teach a model something new or teach it to behave in a particular way. You provide the model with high-quality data (i.e. inputs and outputs) which it will train on.

Why is it useful?

At the moment, many of you use the largest and best LLMs because they give the best results. However, for a lot of use cases you are likely using a sledgehammer for a small nail. Does it do a great job? Damn yeah! Well... why not use a smaller hammer? Because it might miss or hit your finger. The solution shouldn't be to use a sledgehammer, but rather to learn how to use a smaller hammer properly so you never miss! That's exactly what fine-tuning a smaller model is like. Once you fine-tune it on a specific task with good, high-quality data, it can surpass even the best models at that specific task. It'll be 10x cheaper to run, much faster and, if you use an open source model, you'll own the model (no vendor lock-in!). If you run a SaaS and your biggest expense is AI costs, then you should definitely consider fine-tuning. It'll take some time to set up, but it'll be well worth it in the medium/long term (a bit like SEO). You can always resort to the best models for more complex tasks.

How to fine-tune?

I'm going to give you a breakdown of the process from beginning to end. You do need to be (a bit) technical in order to do this.

Getting the data

Let's suppose we want to fine-tune a model to write high-quality SEO content. At the moment, you might be using a large sophisticated prompt, multiple large LLMs writing different parts, or RAG. This is all slow and expensive but might be giving you great results. Our goal is to replace this with a fine-tuned model that is great at one thing: writing high-quality SEO content quickly at a much lower cost. The first step is gathering the appropriate data. If you want the model to write 3 or 4 paragraphs based on a prompt that contains the topic and a few keywords, then your data should match that. There are a few ways you can do this:

You can manually gather high-quality SEO content: you write the prompt and the response the model should give.
You can use a larger, more powerful LLM to generate the content for you (also known as synthetic data). It'll be expensive, but remember that it's a larger one-off cost to get the data.
If you already have a pipeline that works great, you can use the prompts and generated content you already have from that pipeline.
You can buy a high-quality dataset or get someone to make it for you.

The data is the most important part of this process. Remember: garbage in, garbage out.
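For illustration only, here is one way a single training example could look as a chat-style JSONL record. The exact schema depends on the training library you pick; the field names below follow the common "messages" convention and the content is made up:

{"messages": [{"role": "user", "content": "Write a 4-paragraph SEO article on home composting. Keywords: compost bin, kitchen waste, soil health."}, {"role": "assistant", "content": "Composting at home is one of the easiest ways to cut kitchen waste while improving soil health. ..."}]}

One record per line in the .jsonl file (the assistant content above is truncated for readability), aiming for around 1000 lines in total.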
Your data needs to have good variety and should not contain any bad examples. You should aim for around 1000 examples. The more the better!

The actual fine-tuning

At this stage you are ready to choose a model and set up the fine-tuning. If you are unsure, I'd stick to the Llama 3.1 family of models. They are great and reliable. There are three sizes: 8b, 70b and 405b. Depending on the complexity of the task you should select an appropriate size. However, to really reap the cost-saving and speed benefits, you should try to stick with the 8b model, or the 70b model if the 8b is not good enough. For our SEO example, let's use the 8b model.

Important note on selecting a model: you might see multiple models with the 8b flag, for example 4bit-bnb or instruct. The instruct versions have basically been trained to be chatbots, so if you want to keep the chatbot-like, instruction-following functionality you should use the instruct version as the base. The non-instruct version simply generates text; it won't 'act' like a chatbot, which is better for use cases like creative writing. 4bit-bnb means the model has been 'quantized': it has been made 4x smaller (the original is in 16 bits) so that it is faster to download and faster to fine-tune. This slightly reduces the accuracy of the model, but it's usually fine for most use cases :)

Fine-tuning should be done on a good GPU. CPUs aren't good enough, so you can't just spin up a droplet on DigitalOcean and use that; you'll specifically need to spin up a GPU. One website I think is great is Runpod.io (I am not affiliated with them). You simply pay for the GPU by the hour. If you want the training to be fast you can use an H100; if you want something cheaper but slower you can use an A40, although the A40 won't be good enough to run the 70b parameter model. For the 405b model you'll need multiple H100s, but let's leave that for more advanced use cases.

Once you've spun up your H100 and ssh-ed into it, I would recommend using the unsloth open source library to do the fine-tuning. They have great docs and good boilerplate code. You want to train using a method called QLoRA. This won't train the entire model but only "part of it". I don't want to get into the technical details, as they aren't important, but essentially it's a very efficient and effective way of fine-tuning models.

When fine-tuning you can provide something called a 'validation set'. As your model trains, it will be tested against the validation set to see how well it's doing. You'll get an 'eval loss', which basically measures how well your model is doing on the unseen validation data. If you have 1000 training examples, I'd recommend taking out 100-200 so they can act as the validation set. Your model may start off with an eval loss of 1.1, and by the end of the training (e.g. 3 epochs; the number of epochs is the number of times your model is trained on the entire dataset, like reading a book more than once so you can understand it better, and usually 3-5 epochs is enough) the eval loss might drop to 0.6 or 0.7, which means your model has made great progress in learning your dataset! You don't want it to go too low, as that means it is literally memorizing, which isn't good.
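To make this concrete, here is a rough sketch of what the unsloth + QLoRA setup can look like, assuming the chat-style JSONL data from the earlier example. Treat everything here as illustrative rather than official boilerplate: the model ID is one of unsloth's 4bit-bnb uploads, the hyperparameters are common defaults, and the trl/transformers argument names drift between versions, so check unsloth's current docs:

from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# load a 4bit-bnb quantized instruct base (model ID is an assumption)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# attach the QLoRA adapters: only these small matrices get trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="seo_data.jsonl", split="train")
# flatten each chat record into a single training string
dataset = dataset.map(lambda ex: {
    "text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)
})
split = dataset.train_test_split(test_size=0.15)  # hold out ~150 examples as the validation set

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=split["train"],
    eval_dataset=split["test"],   # eval loss is computed on this unseen data
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,       # 3-5 epochs is usually enough
        learning_rate=2e-4,
        eval_strategy="epoch",    # watch the eval loss after each epoch
        output_dir="outputs",
    ),
)
trainer.train()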
Post fine-tuning

You'll want to save the model with the best eval loss. You actually won't have the whole model, just something called the "QLoRA adapters". These are basically like the new neurons that contain the "understanding" of the data you trained the model on. You can combine these with the base model (using unsloth again) to prompt the model. You can also (and I recommend this) convert the model to GGUF format (using unsloth again). This basically packages the QLoRA adapters and the model together into an optimized format so you can easily and efficiently run it and prompt it (using unsloth again... lol).

I would then recommend running some evaluations on the new model. You can do this by prompting both the new model and a more powerful model (or using your old pipeline), and then asking a powerful model, e.g. Claude, to judge which output is better. If your model consistently does better, then you've hit a winner! You can then use Runpod again to deploy the model to their serverless AI endpoint, so you only pay when it's actually serving requests. (Again, I'm not affiliated with them.)

I hope this was useful and you at least got a good idea of what fine-tuning is and how you might go about doing it. By the way, I've just launched a website where you can easily fine-tune Llama 3.1 models. I'm actually hoping to eventually automate this entire process, as I believe small fine-tuned models will be much more common in the future. If you want more info, feel free to DM me :)
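For the merge and GGUF conversion, unsloth ships helper methods along these lines (method names as given in unsloth's docs at the time of writing; double-check against the current docs, and note the output paths here are made up):

# merge the QLoRA adapters into the base weights, then export to GGUF
model.save_pretrained_merged("merged_model", tokenizer, save_method="merged_16bit")
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")

The resulting .gguf file can then be run locally (e.g. with llama.cpp-based runtimes) or uploaded to your serving provider.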

After building an AI Co-founder to solve my startup struggles, I realized we might be onto something bigger. What problems would you want YOUR AI Co-founder to solve?
reddit
LLM Vibe Score: 0
Human Vibe Score: 0
Consistent_Yak6765 · This week


A few days ago, I shared my entrepreneurial journey and the endless loop of startup struggles I was facing. The response from the community was overwhelming, and it validated something I had stumbled upon while trying to solve my own problems.

In just a matter of days, we've built out the core modules I initially used for myself: deep market research capabilities, automated outreach systems, and competitor analysis. It's surreal to see something born out of personal frustration turning into a tool that others might actually find valuable.

But here's where it gets interesting (and where I need your help). While we're actively onboarding users for our alpha test, I can't shake the feeling that we're just scratching the surface. We've built what helped me, but what would help YOU? When you're lying awake at 3 AM, stressed about your startup, what tasks do you wish you could delegate to an AI co-founder who actually understands context and can take meaningful action?

Of course, it's not a replacement for an actual cofounder, but drawing on our prior entrepreneurial experience and conversations with other folks, we understand that OUTREACH and SALES might actually be a big problem statement we can go deeper on, as it naturally helps with the following:

Idea validation - testing your assumptions with real customers before building
Pricing strategy - understanding what the market is willing to pay
Product strategy - getting feedback on features and roadmap
Actual revenue - converting conversations into real paying customers

I'm not asking you to imagine some sci-fi scenario; we've already built modules that can:

Generate comprehensive 20+ page market analysis reports with actionable insights
Handle customer outreach
Monitor competitors and target accounts, tracking changes in their strategy
Take supervised actions based on the insights gathered (manual effort is currently required)

But what else should it do? What would make you trust an AI co-founder with parts of your business? Or do you think this whole concept is fundamentally flawed?

I'm committed to building this the right way: not just another AI tool or an LLM wrapper, but an agentic system that can understand your unique challenges and work towards overcoming them. Whether you think this is revolutionary or ridiculous, I want to hear your honest thoughts. But more importantly, I want to hear your unfiltered feedback in the comments. What would make this truly valuable for YOU?

Edit 1: The AI cofounder will take no equity in your startup.

From Running a $350M Startup to Failing Big and Rediscovering What Really Matters in Life ❤️
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Disastrous-Airport88 · This week


This is my story.

I’ve always been a hustler. I don’t remember a time I wasn’t working since I was 14. Barely slept 4 hours a night, always busy—solving problems, putting out fires. After college (LLB and MBA), I was lost. I tried regular jobs but couldn’t get excited, and when I’m not excited, I spiral. But I knew entrepreneurship; I just didn’t realize it was an option for adults.

Then, in 2017, a friend asked me to help with their startup. “Cool,” I thought. Finally, a place where I could solve problems all day. It was a small e-commerce idea, tackling an interesting angle. I worked 17-hour days, delivering on a bike, talking to customers, vendors, and even random people on the street. Things moved fast. We applied to Y Combinator, got in, and raised $18M before Demo Day even started. We grew 100% month-over-month. Then came another $40M, and I moved to NYC. Before I knew it, we had 1,000 employees and raised $80M more. I was COO, managing 17 direct reports (VPs of Ops, Finance, HR, Data, and more) and 800 indirect employees.

On the surface, I was on top of the world. But in reality, I was at rock bottom. I couldn’t sleep, drowning in anxiety, and eventually ended up on antidepressants.

Then 2022 hit. We needed to raise $100M, but we couldn’t. In three brutal months, we laid off 900 people. It was the darkest period of my life. I felt like I’d failed everyone—myself, investors, my company, and my team.

I took a year off. Packed up the car with my wife and drove across Europe, staying in remote places, just trying to calm my nervous system. I couldn’t speak to anyone, felt ashamed, and battled deep depression. It took over a year, therapy, plant medicine, intense morning routines, and a workout regimen to get back on my feet, physically and mentally.

Now, I’m on the other side. In the past 6 months, I’ve been regaining my mojo, with a new respect for who I am and why I’m here. I made peace with what I went through over those 7 years—the lessons, the people, the experiences. I started reconnecting with my community, giving back. Every week, I have conversations with young founders, offering direction, or even jumping in to help with their operations. It’s been a huge gift.

I also began exploring side projects. I never knew how to code, but I’ve always had ideas. Recent advances in AI gave me the push I needed. I built my first app, as my first attempt at my true passion—consumer products for kids.

Today, I feel wholesome about my journey. I hope others can see that too. ❤️

EDIT: Wow, I didn’t expect this post to resonate with so many people. A lot of you have DM’d me, and I’ll try to respond. Just a heads-up, though—I’m juggling consulting and new projects, so I can’t jump on too many calls. Since I’m not promoting anything, I won’t be funneling folks to my page, so forgive me if I don’t get back to everyone. Anyway, it’s amazing to connect with so many of you. I’d love to write more, so let me know what topics you’d be interested in!


MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
qazmkopp · This week


A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open source machine learning and could be a sign of where the industry is headed. Two days before the announcement, the open-source machine learning platform MetaSpore released a demo of rapidly deploying HuggingFace pre-trained models.

As deep learning makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data can be perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: by pre-training on massive data, a model can capture the internal patterns of the data and thus help many downstream tasks. As industry and academia invest more and more energy in pre-training research, model hubs such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of large pre-trained models at an unprecedented speed.

In recent years, the data that machines model and understand has gradually evolved from single-modal to multi-modal, and the semantic gap between modalities is being closed, making cross-modal retrieval possible. Take OpenAI's open-source CLIP as an example: twin towers for images and texts are pre-trained on a dataset of 400 million image-text pairs, connecting the semantics of the two modalities, and many academic researchers have built on it to tackle multimodal problems such as image generation and retrieval. But even though frontier techniques can bridge the semantic gap between modalities, putting them into production still involves heavy and complicated model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm engineering, which keeps frontier multimodal retrieval from landing in production and becoming broadly accessible.

DMetaSoul targets these pain points by abstracting and unifying steps such as model training and optimization, online inference, and algorithm experimentation into a set of solutions that can quickly take an offline pre-trained model online. This post will show how to use HuggingFace community pre-trained models for online inference and algorithm experiments on the MetaSpore technology stack, so that the benefits of pre-trained models can be fully released to specific businesses and industries, including small and medium-sized enterprises. We give two multimodal retrieval demos for reference: text-to-text search and text-to-image search.

Multimodal semantic retrieval

The sample architecture of multimodal retrieval is shown in the diagram below. Our multimodal retrieval system supports both text-to-text and text-to-image scenarios and includes offline processing, model inference, online services, and other core modules:

https://preview.redd.it/mdyyv1qmdz291.png?width=1834&format=png&auto=webp&s=e9e10710794c78c64cc05adb75db385aa53aba40

Offline processing: the offline data pipelines for the text-to-text and text-to-image scenarios, covering model fine-tuning, model export, index database construction, data push, etc.
Model inference: after the offline model training, we deploy our NLP and CV models on the MetaSpore Serving framework, which conveniently handles online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments.

Online services: based on MetaSpore's online algorithm application framework, MetaSpore provides a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore supports text-to-text and text-to-image search out of the box and can be migrated to other application scenarios at low cost.

The HuggingFace open source community provides several excellent baseline models for this kind of multimodal retrieval problem, and these are often the starting point for real-world optimization. MetaSpore likewise uses HuggingFace community pre-trained models in its online text-to-text and text-to-image services: text-to-text search is based on a question-answer semantic similarity model optimized by MetaSpore, while text-to-image search is based on a community pre-trained model. These open source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and the online retrieval algorithm services in detail. The model inference part is a standardized SaaS-style service with low coupling to the business; interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform.

1.1 Offline Processing

Offline processing mainly involves exporting and loading the online models, and building and pushing the document index. You can follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how an offline pre-trained model ends up serving inference in MetaSpore.

1.1.1 Text-to-text search

Traditional text retrieval systems are based on literal matching algorithms such as BM25. Because users' query words are so diverse, a semantic gap between queries and documents is often encountered: for example, users misspell "iPhone" as "Phone", or search terms are extremely long, such as "1 ~ 3 months old baby autumn small size bag pants". Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve it. Only when the retrieval system truly understands the user's query and the documents can it meet retrieval demands at the semantic level. With the continuous progress of pre-training and representation learning, some commercial search engines are now integrating semantic vector retrieval alongside traditional symbolic-learning methods in their retrieval stacks.

Semantic retrieval model

This post introduces one such semantic vector retrieval application: MetaSpore built a semantic retrieval system on encyclopedia question-and-answer data, adopting Sentence-BERT as the semantic vector representation model. Sentence-BERT fine-tunes a twin-tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks.
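As a hedged illustration of the symmetric two-tower idea (not MetaSpore's actual code, and the HuggingFace model ID below is an assumption), encoding queries and documents with a Sentence-BERT model and scoring them by vector similarity looks roughly like this:

from sentence_transformers import SentenceTransformer, util

# symmetric two-tower: queries and docs go through the same encoder
model = SentenceTransformer("DMetaSoul/sbert-chinese-qmc-domain-v1")  # assumed model ID

docs = ["如何办理身份证", "如何申请护照"]                 # candidate Q&A documents
doc_vecs = model.encode(docs, normalize_embeddings=True)    # offline index side
query_vec = model.encode("身份证到期怎么续期", normalize_embeddings=True)  # online query side

scores = util.cos_sim(query_vec, doc_vecs)  # vector similarity metric
print(scores)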
The query-doc symmetric two-tower model is used for text search and question-answer retrieval: online queries and offline documents share the same vector representation model, so the model used to build the offline document index must be kept consistent with the model used for online query inference. This case uses MetaSpore's text representation model Sbert-Chinese-QMC-domain-V1, optimized on an open-source semantic similarity dataset. The model encodes the question-and-answer data as vectors during offline index construction and encodes the user's query as a vector during online retrieval; because query and doc live in the same semantic space, the user's semantic retrieval demand can be served by a vector similarity computation.

Since the text representation model encodes queries online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. In the script, PyTorch tracing is used to export the model into the ./export directory. The exported artifacts are mainly the ONNX model used for online inference, the tokenizer, and related configuration files; they are loaded into MetaSpore Serving by the online serving system described below. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh. (A hedged sketch of this export step appears at the end of this subsection.)

Building the index for text-to-text search

The retrieval database is built on a million-scale encyclopedia question-and-answer dataset. Following the description document, download the data and complete the database construction. The question-and-answer data is encoded as vectors by the offline model, and the index data is then pushed to the service components. The whole index-building process is:

Preprocessing: convert the raw data into a more general JSONLine format for index construction.
Build the index: use the same model as online, sbert-chinese-qmc-domain-v1, to index the documents (one document object per line).
Push the inverted (vector) and forward (document field) data to each component server.

After offline index construction is completed, the data is pushed to the corresponding service components: for example, Milvus stores the vector representations of the documents and MongoDB stores their summary information. The online retrieval algorithm services use these components to fetch the relevant data.
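Here is a minimal, hedged sketch of that tracing-based ONNX export (the real script lives in the MetaSpore repo; the HuggingFace model ID, paths, and axis names below are assumptions for illustration):

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "DMetaSoul/sbert-chinese-qmc-domain-v1"  # assumed HF model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torchscript=True makes the model return plain tuples, which suits tracing
model = AutoModel.from_pretrained(model_id, torchscript=True).eval()

inputs = tokenizer("如何续期身份证", return_tensors="pt")  # example query
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "./export/model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=14,
)
tokenizer.save_pretrained("./export")  # tokenizer files ship alongside the ONNX model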
1.1.2 Text-to-image search

Text and images are easy for humans to relate semantically but difficult for machines. First, in terms of data form, text is one-dimensional, discrete, ID-type data built from words, while images are continuous two- or three-dimensional data. Second, text is a subjective creation of human beings with rich expressive devices (turns of phrase, metaphor, and so on), while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search.

Traditional text-to-image retrieval generally relies on external text descriptions attached to each image, retrieving through the image's associated text, which in essence degrades the problem to text-to-text search. This approach faces many issues, such as how to obtain associated text for the pictures and whether the resulting text search is accurate enough. In recent years, deep models have gradually evolved from single-modal to multi-modal. Taking OpenAI's open-source project CLIP as an example, the model is trained on massive image-text data from the Internet and maps text and image data into the same semantic space, making text-to-image search based on semantic vectors possible.

CLIP image-text model

The text-to-image search introduced in this post is implemented with semantic vector retrieval, using the CLIP pre-trained model in a two-tower retrieval architecture. Because CLIP has already aligned the semantics of its text-side and image-side towers on massive image-text data, it is particularly suitable for the text-to-image scenario. Since image and text data take different forms, a query-doc asymmetric two-tower model is used for text-to-image retrieval: the image-side tower is used for offline index construction, and the text-side tower is used online. At retrieval time, the text-side model encodes the query and the index built by the image-side model is searched; CLIP's pre-training on a large amount of visual-text data, which pulls matching image-text pairs closer in vector space, guarantees the semantic correlation between images and texts.

Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scenario is Chinese, a CLIP model supporting Chinese understanding is selected. As with text-to-text search, the exported content includes the ONNX model used for online inference and the tokenizer, which MetaSpore Serving loads for model inference.

Building the index for image search

You need to download the Unsplash Lite library data and complete the index construction according to the instructions. The whole index-building process is:

Preprocessing: specify the image directory, then generate a more general JSONLine file for index construction.
Build the index: use the openai/clip-vit-base-patch32 pre-trained model to index the gallery, outputting one document object per line of index data.
Push the inverted (vector) and forward (document field) data to each component server.

As with text search, after offline construction the relevant data is pushed to the service components, which the online retrieval algorithm services call to obtain it. A hedged sketch of the image-side indexing pass is below.
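As a hedged illustration of that indexing pass (the actual MetaSpore index-building scripts differ, and the JSONLine schema below is assumed), encoding a directory of images with the HuggingFace CLIP model could look like this:

import json, os
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

with open("gallery.jsonl", "w") as out:
    for i, name in enumerate(os.listdir("images")):
        image = Image.open(os.path.join("images", name)).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            vec = model.get_image_features(**inputs)[0]  # image-side tower
        vec = vec / vec.norm()  # CLIP similarity uses normalized embeddings
        # one document object per line: forward fields + inverted (vector) field
        out.write(json.dumps({"id": i, "image": name,
                              "embedding": vec.tolist()}) + "\n")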
1.2 Online Services

The overall online service architecture diagram is as follows:

https://preview.redd.it/nz8zrbbpdz291.png?width=1280&format=png&auto=webp&s=28dae7e031621bc8819519667ed03d8d085d8ace

The multimodal search online service system supports both the text-to-text and text-to-image application scenarios. The whole online service consists of the following parts:

Query preprocessing service: encapsulates the preprocessing logic of the pre-trained models (text/image, etc.) and exposes it through a gRPC interface.
Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic splitting, MetaSpore Serving calls, vector recall, ranking, document summaries, etc.
User entry service: provides a Web UI for users to debug the retrieval service and trace problems.

From the perspective of a user request, these services form invocation dependencies from back to front, so to bring up a working multimodal sample you need to run each service from front to back. Before doing so, remember to export the offline models, put them online, and build the indexes first. This article introduces each part of the online service system and walks through bringing the whole system up step by step; see the README at the end of this article for more details.

1.2.1 Query preprocessing service

Deep learning models operate on tensors, but NLP and CV models usually include a preprocessing step that turns raw text and images into the tensors the model can accept. For example, NLP models typically have a tokenizer that transforms string data into discrete tensors, while CV models have analogous logic for cropping, scaling, and otherwise transforming input images. On the one hand, this preprocessing logic is decoupled from the tensor inference of the deep model; on the other hand, deep model inference has its own independent, ONNX-based technical stack. MetaSpore therefore split the preprocessing logic out: the NLP tokenizer has been integrated into the query preprocessing service under a relatively general convention, where users only need to provide a preprocessing logic file implementing the load and predict interface, plus the necessary exported data and configuration files, for the preprocessing service to load. CV preprocessing logic will be integrated in the same manner later.

The preprocessing service currently exposes a gRPC interface and is depended on by the query preprocessing (QP) module in the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded to the preprocessing service to complete data preprocessing before subsequent processing continues. The README explains how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving also implements a Python preprocessing submodule, so MetaSpore can provide gRPC services through a user-specified preprocessor.py, complete the tokenizer or CV-related preprocessing, and translate requests into tensors that the deep models can handle, with model inference then carried out by the subsequent MetaSpore Serving submodules. The code is here: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor

For intuition, a hedged sketch of the kind of work this service does is below.
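This is illustrative only; the real preprocessor.py interface is defined by MetaSpore (see the repo). The core job, turning a raw query string into the tensors an ONNX session consumes, looks roughly like this:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./export")  # tokenizer exported offline (assumed path)

def preprocess(query: str) -> dict:
    # string -> discrete tensors (NumPy arrays, ready for an ONNX runtime session)
    enc = tokenizer(query, return_tensors="np", padding=True, truncation=True)
    return {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}

print(preprocess("How to renew an ID card")["input_ids"].shape)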
1.2.2 Retrieval algorithm service

The retrieval algorithm service is the core of the whole online service system. It is responsible for experiment triage, for assembling the algorithm chain (preprocessing, recall, ranking, and so on), and for invoking the dependent component services. The whole retrieval algorithm service is developed on the Java Spring framework and supports both the text-to-text and text-to-image retrieval scenarios; thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at low cost.

Here's a quick guide to configuring the environment and setting up the retrieval algorithm service (see the README for more details):

Install the dependent components: use Maven to install the online-serving component.
Configure the search service: copy the template configuration file and fill in the MongoDB, Milvus, and other settings for your development/production environment.
Install and configure Consul: Consul lets you synchronize the search service configuration in real time, including experiment traffic splits, recall parameters, and ranking parameters. The project's configuration file shows the current parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages refers to the corresponding model exported in offline processing.
Start the service: once the above configuration is complete, the retrieval service can be started from the entry script.

Once the service is started, you can test it! For example, for a user with userId=10 who wants to query "How to renew an ID card", access the text search service.

1.2.3 User entry service

Since the retrieval algorithm service is exposed as an API, it is difficult to locate and trace problems through it alone; for the text-to-image scenario in particular, displaying retrieval results intuitively makes it much easier to iterate on the retrieval algorithm. For this, a lightweight Web UI is provided for text search and image search: a search input box and a results display page. Developed with Flask, the service can be easily integrated with other retrieval applications; it calls the retrieval algorithm service and displays the returned results on the page. It's also easy to install and start; once you're done, go to http://127.0.0.1:8090 to check whether the search UI service is working correctly. See the README at the end of this article for details.

Multimodal system demonstration

Once offline processing and the online service environment have been configured following the instructions above, the multimodal retrieval service can be started. Some text-to-image search examples are shown below. Entering the text-to-image application and typing "cat", the first three results returned are cats:

https://preview.redd.it/d7syq47rdz291.png?width=1280&format=png&auto=webp&s=b43df9abd380b7d9a52e3045dd787f4feeb69635

Adding a color constraint and querying "black cat", it does return a black cat:

https://preview.redd.it/aa7pxx8tdz291.png?width=1280&format=png&auto=webp&s=e3727c29d1bde6eea2e1cccf6c46d3cae3f4750e

Strengthening the constraint further to "black cat on the bed" returns results containing pictures of a black cat climbing on a bed:

https://preview.redd.it/2mw4qpjudz291.png?width=1280&format=png&auto=webp&s=1cf1db667892b9b3a40451993680fbd6980b5520

The cat can still be found by the retrieval system after the color and scene modifications in the example above.
Conclusion

Cutting-edge pre-training technology can bridge the semantic gap between different modalities, and the HuggingFace community greatly reduces the cost for developers to use pre-trained models. Combined with the MetaSpore technology ecosystem for online inference and online microservices provided by DMetaSoul, pre-trained models are no longer mere offline experiments: they can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of large pre-trained models.

In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem:

More automated and broader access to the HuggingFace community ecosystem. MetaSpore will soon release a common model deployment mechanism to make the HuggingFace ecosystem easily accessible and will later integrate the preprocessing services into the online services.
Multimodal retrieval offline algorithm optimization. For multimodal retrieval scenarios, MetaSpore will continuously iterate on and optimize the offline algorithm components, including text recall/ranking models and image recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms.

For related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online

Some image sources:
https://github.com/openai/CLIP/raw/main/CLIP.png
https://www.sbert.net/examples/training/sts/README.html

How me and my team made 15+ apps and not made a single sale in 2023
reddit
LLM Vibe Score: 0
Human Vibe Score: 0.818
Michaelbetterecycle · This week


Hey, my name is Michael, I am in Auckland, NZ. This year was the official beginning of my adult life. I graduated from university and started a full-time job. I’ve also really dug into indiehacking/bootstrapping and started 15 projects (and it will be at least 17 before the year ends). I think I’ve learned a lot, but I also consciously repeated mistakes.

Upto (Nov) - Discord statuses + your location + Facebook Poke
https://preview.redd.it/4nqt7tp2tf5c1.png?width=572&format=png&auto=webp&s=b0223484bc54b45b5c65e0b1afd0dc52f9c02ad1

This was the end of uni. I often messaged (and got messaged) requests for status and location to (and from) my friends. I thought: what if we make a social app that’s super basic and all it does is show you where your friends are? To differentiate from Snap Maps and others, we wanted something with more privacy, where you select the location. However, we never finished the codebase or launched it. This is because I slowly started to realize that B2C (especially social networks) is way too hard to make into an actual business, and the story with Fistbump would repeat itself. However, this decision not to launch almost put a curse on our team: from that point, we permitted ourselves to abandon projects even before launching.

Lessons:
Don’t do social networks if your goal is 10k MRR ASAP.
If you build something to 90% completion, ship it, or you will think it’s okay to abandon projects.

Insight Bites (Nov) - YouTube summarizer extension
https://preview.redd.it/h6drqej4tf5c1.jpg?width=800&format=pjpg&auto=webp&s=0f211456c390ac06f4fcb54aa51f9d50b0826658

Right after Upto, we started ideating, and conveniently the biggest revolution in the recent history of tech was released → GPT. We instantly began ideating. The first problem we chose to use AI for was summarizing YouTube videos. Comical. Nevertheless, I am convinced we had the best UX, because you could right-click on a video to get a slideshow of insights instead of how everyone else did it. We dropped it because there was too much competition and the unit economics didn’t work out (and it was B2C).

PodPigeon (Dec) - Podcast → Tweet threads
https://preview.redd.it/0ukge245tf5c1.png?width=2498&format=png&auto=webp&s=23303e1cab330578a3d25cd688fa67aa3b97fb60

Then we thought: to make the unit economics work, we need to make this worthwhile for podcasters. This is when I got into Twitter and started seeing people summarize podcasts. Then I thought: what if we make something that converts a podcast into tweets? This was probably one of the most important projects because it connected me with Jason and Jonaed, both of whom I regularly stay in contact with and who are my go-to experts on ideas related to content creation. Jonaed was even willing to buy PodPigeon and was using it on his own time. However, the unit economics still didn’t work out (and we got excited about other things). Furthermore, we got scared of the competition because I found 1-2 other people who did similar things poorly. This was probably the biggest mistake we’ve made. Very similar projects made 10k MRR and more, launching later than we did. We didn’t have a coherent product vision, we didn’t understand the customer well enough, and we had a bad outlook on competition, among a myriad of other things.

Lessons:
I already made another post about the importance of your outlook on competition. Do not quit just because there are competitors or just because you can’t be 10x better.
Indiehackers and bootstrappers (or even startups) need to differentiate in the market, which can be via product (UX/UI), distribution, or both.

Asking Ace - Intro.co + crowdsharing
https://preview.redd.it/0hu2tt16tf5c1.jpg?width=1456&format=pjpg&auto=webp&s=3d397568ef2331e78198d64fafc1a701a3e75999

As I got into Twitter, I wanted to chat with some people I saw there. However, they were really expensive. I thought: what if we made some kind of crowdfunding service for other entrepreneurs to get a private lecture from their idols? It seemed to make a lot of sense on paper. It was solving a problem (validated by the fact that Intro.co is a thing, and making things cheaper and more accessible is solid ground to stand on), we understood the market (or so we thought), and it could monetize relatively quickly. However, after 1-2 posts on Reddit and Indie Hackers, we quickly learned three things. Firstly, no one cares. Secondly, even if they do, they think they can get the same information for free online. Thirdly, the previous two reasons are bad ones, because for the first point we barely talked to people, and for the second we talked to the wrong people. At least we didn’t code anything this time and tried to validate via a landing page.

Lessons:
Don’t give up after one Redditor says “I don’t need this”.
Don’t be scared to choose successful people as your audience.

Clarito - Journaling with an AI analyzer
https://preview.redd.it/8ria2wq6tf5c1.jpg?width=1108&format=pjpg&auto=webp&s=586ec28ae75003d9f71b4af2520b748d53dd2854

Clarito is a classic problem all amateur entrepreneurs have. It’s where you lie to yourself that you have a real problem, and that the idea is therefore validated, but when your team asks how much you would pay, you say “I guess I’d pay, maybe, like 5 bucks a month…?” Turns out, you’d have to pay me to use our own product, lol. We sent it off to a few friends and posted on some forums, but never really got anything tangible and decided to move on. Honestly, a lot of it is us in our own heads. We say the market is too saturated, it’ll be hard to monetize, it’s B2C, etc.

Lessons:
You use the Mom Test on other people; you have to do it on yourself as well. And recognize that the Mom Test requires a lot of creativity in its investigation, because knowing what questions to ask can determine the outcome of the validation. I asked myself “Do I journal?” but I didn’t ask myself “How often do I want GPT to chime in on my reflections?”. Which was practically never. That being said, I think with the right audience and distribution, this product can work. I just don’t know (let alone care) about the audience that much (and I thought I was one of them).

Horns & Claw - Scrapes financial news and texts you whether you should buy/sell a stock (news sentiment analysis)
https://preview.redd.it/gvfxdgc7tf5c1.jpg?width=1287&format=pjpg&auto=webp&s=63977bbc33fe74147b1f72913cefee4a9ebec9c2

This one we didn’t even bother launching. Probably something internal in the team, and it also seemed too good to be true (because if this works, doesn’t that just make us ultra-rich fast?). I saw a similar tool making 10k MRR, so I guess I was wrong.

Lessons:
This one was pretty much just us getting into our own heads. I declared that without an audience it would be impossible to ship this product and that we needed to start a YouTube channel. Lol, and we did. And we couldn’t even film for 1 minute. I made bold statements like “We will commit to this for at least 1 year no matter what”.
Learnery - Make courses about any subject
https://preview.redd.it/1nw6z448tf5c1.jpg?width=1112&format=pjpg&auto=webp&s=f2c73e8af23b0a6c3747a81e785960d4004feb48

This is probably the most “successful” project we’ve made. It grew from a couple of dozen to a couple of hundred users. It had 11 buy events for a $9.99 LTD (we couldn’t be bothered connecting Stripe because we thought no one would buy it anyway). What got us discouraged from seriously pursuing it further is that it has very low defensibility (“Why wouldn’t someone just use ChatGPT?”) and it’s B2C, so it’s hard to monetize. I used it myself for a month or so but then stopped. I don’t think it’s the app; I think the act of learning a concept from scratch isn’t something you do constantly in the way Learnery delivers it (i.e. as a course). I saw a bunch of similar apps that look like ass make like 10k MRR.

Lessons:
Don’t do B2C, or if you do, do it properly.
Don’t just Mixpanel the buy button; connect your Stripe. Otherwise it doesn’t feel real and you won’t get momentum. I doubt anyone (even me) will make this mistake again.
I live in my GPT bubble, where I assume everyone uses GPT the same way and as much as I do. In reality, the argument that this has low defensibility against GPT is invalid: platforms can deliver a differentiated UX from ChatGPT to audiences who are not tightly integrated into the habit of using ChatGPT (which is everyone except some tech evangelists).

CuriosityFM - Make podcasts about any subject
https://preview.redd.it/zmosrcp8tf5c1.jpg?width=638&format=pjpg&auto=webp&s=d04ddffabef9050050b0d87939273cc96a8637dc

This was our attempt at making Learnery more unique and more differentiated from ChatGPT. We never really launched it. The unit economics didn’t work out, and it was actually pretty boring to listen to; I don’t think I even fully listened to one 15-minute episode. I think this wasn’t that bad, though: it taught us more about ElevenLabs and voice AI, and it took us maybe only 2-3 days to build, so building to learn a new groundbreaking technology is fine.

SleepyTale - Make children’s bedtime stories
https://preview.redd.it/14ue9nm9tf5c1.jpg?width=807&format=pjpg&auto=webp&s=267e18ec6f9270e6d1d11564b38136fa524966a1

My 8-year-old sister gave me that idea. She was too scared of making tea, and I was curious how she’d react if she heard a bedtime story about that exact scenario with the moral I wanted her to absorb (which is that you shouldn’t be scared to try new things, i.e. stop asking me to make your tea and do it yourself, it’s not that hard. You could say I went full Goebbels on her). Zane messaged a bunch of parents on Facebook, but no one really cared. We showed this to one lady at the place we worked from at uni, and she was impressed and wanted to show it to her kids, but we had already turned off our ElevenLabs subscription.

Lessons:
The truth behind this is beyond just “you need to be able to distribute”. It’s that you have to care about the audience. I don’t particularly want to build products for kids and parents. I am far away from that audience because I am neither a kid anymore nor going to be a parent anytime soon, and my sister still asks me to make her tea, so the story didn’t work. I think it’s important to ask yourself whether you care about the audience. The way to answer that, even when you are in full bias mode, is: do you engage with them? Are you interested in what’s happening in their communities? Are you friends with them? Etc.
User Survey Analyzer - Big user survey → GPT → insights report

My coworker and I were chatting about AI when he asked me to help him analyze a massive survey. I thought that was some pretty decent validation: someone in an actual company asking for help.

Lessons:
Market research is important, but moving fast is also important, i.e. building momentum. Also, don’t revolve around one user. This has been a problem in multiple projects: finding as many users as possible to talk to at the beginning is key; otherwise you are just waiting for one person to get back to you.

AutoI18N - Automated internationalization of the codebase for web apps

This one I might still do. It’s hard to find a solid distribution strategy. However, the idea came from me having to do it at my day job. It seems a solid problem: I’d say it’s validated and has some good players already. The key will be differentiation via simplicity of UX and distribution (which means a slightly different audience). It’s in the backlog for now because I don’t care about the problem or the audience that much.

Documate, part 1 - Converts complex PDFs into Excel
https://preview.redd.it/8b45k9katf5c1.jpg?width=1344&format=pjpg&auto=webp&s=57324b8720eb22782e28794d2db674b073193995

My mom needed to convert a catalog of furniture into an inventory, which took her 3 full days of data entry. I automated it for her and thought this could have a big impact, but there was no distribution because there was no ICP. We tried to find the ideal customers by talking to a bunch of different demographics, but I flew to Kazakhstan for a holiday and this kind of fizzled out. I am not writing this blog post linearly; this is my 2nd hour and I am tired and don’t want to finish this later, so I don’t even know what lessons I learned here.

Figmatic - Marketplace of high-quality Figma mockups of real apps
https://preview.redd.it/h13yv45btf5c1.jpg?width=873&format=pjpg&auto=webp&s=aaa2896aeac2f22e9b7d9eed98c28bb8a2d2cdf1

This was a collab between me and my friend Alex. It was the classic Clarito situation, where we both thought we had this problem and would pay to fix it. In reality, this is a vitamin. Neither I, nor (I doubt) Alex, have thought of it since we bought the domain. We posted it on Gumroad, sent it to a bunch of forums, and called it a day. Same issue as almost all the other ones: no distribution strategy. However, apps like Mobin show us that this concept is indeed profitable, but it takes time. It needs SEO. It needs a community. None of those things me and Alex had or were interested in. However, shortly after, HTML → Figma came out, and it’s the best plugin. Maybe that should’ve been the idea.

Podcast → Course - Turns a podcaster’s episodes into a course

This one I got baited on by Jason :P I described to him the idea of repurposing his content as a course. He told me this was epic and he would pay. Then, after I sent him the demo, he never checked it out. Anyhow, during development we realized it doesn’t actually work, because: a podcast doesn’t have the correct format for a course, and the most you can extract are concepts and ideas, seldom explanations; and most creators want video-based courses hosted on Kajabi or Udemy. Another lesson is that when you pitch something to a user, what you articulate is a platform or a process, but they imagine an outcome. The end result of your platform can be a very different outcome from what they had in mind, and there is even a chance that what they want is not possible.
You need to understand really well what the outcome looks like before you design the process. This is a classic case of thinking of the solution before the problem. Yes, the problem exists: podcasters want to make courses. However, if you really understand what they want, you can see how repurposing a podcast isn’t the best way to get there. That said, I only really spoke to 1-2 podcasters about this, so drawing conclusions is dangerous; this could just be another Asking Ace mistake with the Redditor.

Documate, part 2

Same concept as before, but now I want to run some ads. We’ll see what happens.
https://preview.redd.it/xb3npj0ctf5c1.jpg?width=1456&format=pjpg&auto=webp&s=3cd4884a29fd11d870d010a2677b585551c49193

In conclusion
https://preview.redd.it/2zrldc9dtf5c1.jpg?width=1840&format=pjpg&auto=webp&s=2b3105073e752ad41c23f205dbd1ea046c1da7ff

It doesn’t actually matter that much whether you choose to do B2C, or a social network, or focus on growing your audience. All of these can make you successful. What’s important is that you choose. If I had to summarize my 2023 in one word, it’s indecision. Most of these projects succeeded for other people; nothing was as fundamentally wrong about them as I proclaimed. In reality, that itself was an excuse. New ideas seduce, and it is a form of discipline to commit to a single project for a respectful amount of time.

https://preview.redd.it/zy9a2vzdtf5c1.jpg?width=1456&format=pjpg&auto=webp&s=901c621227bba0feb4efdb39142f66ab2ebb86fe

Distribution is not just posting on Indie Hackers and Reddit. It’s an actual strategy, and you should think of it as soon as you think of the idea, even before the Figma designs. I like how Denis Shatalin taught me: you have to build a pipeline. That means a reliable way to get leads, launch campaigns at them, close deals, learn from them, and optimize. Whenever I get an idea now, I always ask myself, “Where can I find thousands of leads in one day?” If there is no good answer, it’s not a good project to do now.

https://preview.redd.it/2boh3fpetf5c1.jpg?width=1456&format=pjpg&auto=webp&s=1c0d5d7b000716fcbbb00cbad495e8b61e25be66

Talk to users before doing anything. Jumping into designing and coding to make your idea a reality is a satisfying activity in the short term. Especially for me: I like to create for the sake of creation. However, it is so important to understand the market, the audience, and the distribution. There are a lot of things to understand before coding.

https://preview.redd.it/lv8tt96ftf5c1.jpg?width=1456&format=pjpg&auto=webp&s=6c8735aa6ad795f216ff9ddfa2341712e8277724

Get out of your own head. The real reason we dropped so many projects is that we got into our own heads. We let the negative thoughts creep in and kill all the optimism. I am really good at coming up with excuses to start a project. However, I am equally good at coming up with reasons to kill a project. And so you have this yin and yang of starting and stopping, building momentum and not burning out. I can say with certainty my team ran out of juice this year: we lost momentum so many times we got burnt out towards the end. Realizing that the project itself has momentum is important. User feedback and sales bring momentum. Building also creates momentum, but unless it is matched with an equal force of impact, it can stomp the project down. That is why so many of our projects died quickly after we launched.
The smarter approach is to do things that have a low investment of momentum (like talking to users) but result in high impact (sales or feedback). Yes, that means the project can get invalidated, which makes it more short-lived than if we had built it first, but it preserves the team's life energy.

At the end of 2023, here is a single sentence about how I think one becomes a successful indiehacker:
One becomes a successful indiehacker when one starts to solve pain-killer problems in a market they understand, for an audience they care about and consistently engage with, over a long enough timeframe.

Therefore, an unsuccessful indiehacker in a single sentence:
An unsuccessful indiehacker constantly enters new markets they don't understand to build solutions for people whose problems they don't care about, in a timeframe that is shorter than the time they spent thinking about distribution.

However, an important note: life is not just about indiehacking. It's about learning and having fun. In the human world, the best journey isn't the one that gets you to your goals the fastest but the one you enjoy the most. I enjoyed making those silly little projects, and although I do not regret them, I will not repeat the same mistakes in 2024. But while it's still 2023, I have 2 more projects I want to do :)

EDIT: For devs, the frontend is always React with Vite (TS) and the backend is either Node with Express (TS) or Python. For the DB, either Postgres or Mongo (usually Prisma for the ORM). For deployment, all of it is on AWS (S3, EC2). In terms of libraries/APIs:
- Whisper.cpp is the best open source option for transcription
- Obviously the GPT APIs
- ElevenLabs for voice-related stuff
- And other random stuff here and there
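As an illustration of how the GPT API piece of that stack typically fits together in projects like these, here is a minimal sketch of a transcript → tweet-thread call. This is not the team's actual code; the model name and prompt are placeholder assumptions.

```python
# Minimal sketch of a transcript -> tweet-thread call (illustrative only).
# Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcript_to_tweets(transcript: str, n_tweets: int = 8) -> str:
    """Compress a podcast transcript into an n-tweet thread via a chat model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Rewrite the transcript as a {n_tweets}-tweet thread."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(transcript_to_tweets("...podcast transcript goes here..."))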


How I Automated Amazon Affiliate Marketing: A Developer's Journey
reddit
LLM Vibe Score0
Human Vibe Score1
siom_cThis week

How I Automated Amazon Affiliate Marketing: A Developer's Journey

From Manual Labor to 1000x Efficiency

As a developer who ventured into affiliate marketing, I discovered a significant gap between technical possibilities and current practices. This revelation led me to create AutoPin, a tool that's now helping hundreds of affiliate marketers reclaim their time.

The Problem: A Time-Consuming Reality

Every affiliate marketer knows this scenario: you spend hours copying and pasting links, checking prices, and updating product information. I found myself dedicating 4-6 hours daily to these repetitive tasks. As a programmer, this felt fundamentally wrong.

The typical affiliate marketing workflow looked like this:
- Find promising products
- Generate affiliate links one by one
- Monitor price changes manually
- Check product availability regularly
- Update content when things change
- Repeat this process daily

This manual process had several critical issues:
- Time waste: 20-30 hours weekly on repetitive tasks
- Missed opportunities: unable to scale beyond 100 products
- Human error: inevitable mistakes in manual updates
- Delayed updates: lost commissions due to outdated information

The Solution: Building AutoPin

After three months of development and six months of testing, I created a system that could:
- Generate hundreds of affiliate links in minutes
- Monitor price changes automatically
- Update product availability in real time
- Export data in multiple formats
- Scale infinitely without additional effort

Real Results, Real Impact

The impact was immediate and significant:

📊 Efficiency metrics:
- Link generation: from 2 minutes per link to 0.1 seconds
- Monitoring capacity: from 50 to 5000+ products
- Update frequency: from daily to real-time
- Error rate: reduced by 99.9%

💡 User success stories:
- "Increased my product portfolio by 10x without adding work hours"
- "Revenue grew 300% in the first month"
- "Finally able to focus on content creation instead of link management"

Technical Insights

The system architecture focuses on three core components (a minimal sketch of the monitoring loop follows after the conclusion below):

1. Data Extraction Engine: efficient web scraping, rate limiting and proxy management, data validation and cleaning
2. Real-time Monitoring System: websocket connections for instant updates, queue management for large-scale monitoring, smart scheduling based on price volatility
3. Export Framework: multiple format support (CSV, HTML, Markdown), custom templating engine, batch processing capabilities

The Future of Affiliate Marketing Automation

We're currently developing AI capabilities to:
- Generate product descriptions automatically
- Optimize link placement for conversion
- Predict price trends and the best promotion times
- Create content variations for different platforms

Key Learnings

1. Automation is essential. The future of affiliate marketing lies in automation; manual processes simply can't compete with automated systems in terms of efficiency and accuracy.
2. Focus on value creation. When marketers spend less time on repetitive tasks, they can focus on strategy and content quality.
3. Scale matters. With automation, the difference between managing 10 products and 1000 products becomes minimal.

Getting Started

If you're an affiliate marketer spending hours on manual tasks, it's time to automate. Here's what you can do:
1. Analyze your current workflow
2. Identify repetitive tasks
3. Start with basic automation
4. Scale gradually
5. Monitor and optimize

Conclusion

The transformation from manual to automated affiliate marketing isn't just about saving time; it's about unlocking potential. When you remove the tedious aspects of affiliate marketing, you create space for creativity, strategy, and growth.
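To ground the monitoring component described above, here is a minimal sketch of a rate-limited price checker. This illustrates the general pattern only and is not AutoPin's actual implementation; the product URL, CSS selector, and polling intervals are hypothetical.

```python
# Minimal sketch of a rate-limited price monitor (illustrative, not AutoPin's
# actual code). Assumes `requests` and `beautifulsoup4` are installed.
import time
import requests
from bs4 import BeautifulSoup

PRODUCTS = {"B000EXAMPLE": "https://example.com/product/B000EXAMPLE"}  # hypothetical
last_prices = {}

def fetch_price(url):
    resp = requests.get(url, headers={"User-Agent": "price-monitor/0.1"}, timeout=10)
    resp.raise_for_status()
    tag = BeautifulSoup(resp.text, "html.parser").select_one(".price")  # hypothetical selector
    return tag.get_text(strip=True) if tag else None

while True:
    for product_id, url in PRODUCTS.items():
        price = fetch_price(url)
        if price and price != last_prices.get(product_id):
            print(f"{product_id}: price changed to {price}")  # hook content updates/exports here
            last_prices[product_id] = price
        time.sleep(2)   # crude rate limit between requests
    time.sleep(600)     # re-check the whole catalog every 10 minutes
```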
Want to experience the difference? Visit AutoPin at autopin.pro and join the automation revolution.

Remember: the best time to automate was yesterday. The second best time is now.

About the Author: A developer turned affiliate marketer who believes in the power of automation to transform digital marketing.

#AffiliateMarketing #Automation #Programming #DigitalMarketing #SaaS #ProductivityTools


[D] Why I'm Lukewarm on Graph Neural Networks
reddit
LLM Vibe Score0
Human Vibe Score0.6
VodkaHazeThis week

[D] Why I'm Lukewarm on Graph Neural Networks

TL;DR: GNNs can provide wins over simpler embedding methods, but we're at a point where other research directions matter more.

I also posted this on my blog here; it has footnotes, a nicer layout with inlined images, etc.

I'm only lukewarm on Graph Neural Networks (GNNs). There, I said it. It might sound crazy: GNNs are one of the hottest fields in machine learning right now. [There][1] were at least [four][2] [review][3] [papers][4] just in the last few months. I think some progress can come of this research, but we're also focusing on some incorrect places. But first, let's take a step back and go over the basics.

Models are about compression

We say graphs are a "non-euclidean" data type, but that's not really true. A regular graph is just another way to think about a particular flavor of square matrix called the [adjacency matrix][5]. It's weird: we look at a run-of-the-mill matrix full of real numbers and decide to call it "non-euclidean".

This is for practical reasons. Most graphs are fairly sparse, so the matrix is mostly zeros. At that point, where the non-zero numbers are matters most, which makes the problem closer to (computationally hard) discrete math than to (easy) continuous, gradient-friendly math.

If you had the full matrix, life would be easy

If we step out of the pesky realm of physics for a minute, and assume carrying the full adjacency matrix around isn't a problem, we solve a bunch of problems. First, network node embeddings aren't a thing anymore: a node is just a row in the matrix, so it's already a vector of numbers. Second, all network prediction problems are solved: a powerful enough and well-tuned model will simply extract all information between the network and whichever target variable we're attaching to nodes.

NLP is also just fancy matrix compression

Let's take a tangent away from graphs to NLP. Most NLP we do can be [thought of in terms of graphs][6] as we'll see, so it's not a big digression. First, note that Ye Olde word embedding models like [Word2Vec][7] and [GloVe][8] are [just matrix factorization][9]. The GloVe algorithm works on a variation of the old [bag of words][10] matrix. It goes through the sentences and creates an (implicit) [co-occurrence][11] graph where nodes are words and the edges are weighted by how often the words appear together in a sentence. GloVe then does matrix factorization on the matrix representation of that co-occurrence graph; Word2Vec is mathematically equivalent. You can read more on this in my [post on embeddings][12] and the one (with code) on [word embeddings][13].

Even language models are just matrix compression

Language models are all the rage. They dominate most of the [state of the art][14] in NLP. Let's take BERT as our main example. BERT predicts a word given the context of the rest of the sentence. This grows the matrix we're factoring from flat co-occurrences on pairs of words to co-occurrences conditional on the sentence's context. We're growing the "ideal matrix" we're factoring combinatorially. As noted by [Hahn & Futrell][15]:

[...] human language—and language modelling—has infinite statistical complexity but that it can be approximated well at lower levels. This observation has two implications: 1) We can obtain good results with comparatively small models; and 2) there is a lot of potential for scaling up our models.

Language models tackle such a large problem space that they probably approximate a compression of the entire language in the [Kolmogorov Complexity][16] sense. It's also possible that huge language models just [memorize a lot of it][17] rather than compress the information, for what it's worth.
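To make the "embeddings are matrix factorization" point concrete, here is a minimal sketch that builds a word co-occurrence matrix and factorizes it with a truncated SVD. This illustrates the general idea only; GloVe and Word2Vec optimize weighted variants of this factorization rather than a plain SVD.

```python
# Minimal sketch: word embeddings as factorization of a co-occurrence matrix.
# GloVe/Word2Vec optimize weighted variants of this; plain SVD shows the idea.
import numpy as np
from itertools import combinations

sentences = [["graphs", "are", "matrices"], ["matrices", "are", "compressible"]]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence graph: words in the same sentence get a (weighted) edge.
cooc = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a, b in combinations(s, 2):
        cooc[idx[a], idx[b]] += 1
        cooc[idx[b], idx[a]] += 1

# Factor the (log-smoothed) matrix; each row of the result is a word vector.
u, sing, _ = np.linalg.svd(np.log1p(cooc))
embeddings = u[:, :2] * sing[:2]  # keep 2 dimensions
print(dict(zip(vocab, embeddings.round(2))))
```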
Can we upsample any graph like language models do?

We're already doing it. Let's call a first-order embedding of a graph a method that works by directly factoring the graph's adjacency matrix or [Laplacian matrix][18]. If you embed a graph using [Laplacian Eigenmaps][19] or by taking the [principal components][20] of the Laplacian, that's first order. Similarly, GloVe is a first-order method on the graph of word co-occurrences. One of my favorite first-order methods for graphs is [ProNE][21], which works as well as most methods while being two orders of magnitude faster.

A higher-order method embeds the original matrix plus connections of neighbours-of-neighbours (2nd degree) and deeper k-step connections. [GraRep][22] shows you can always generate higher-order representations from first-order methods by augmenting the graph matrix. Higher-order methods are the "upsampling" we do on graphs. GNNs that sample large neighborhoods, and random-walk-based methods like node2vec, are doing higher-order embeddings.

Where are the performance gains?

Most GNN papers in the last 5 years present empirical numbers that are useless for practitioners deciding what to use. As noted in the [Open Graph Benchmark][4] (OGB) paper, GNN papers do their empirical sections on a handful of tiny graphs (Cora, CiteSeer, PubMed) with 2,000-20,000 nodes. These datasets can't seriously differentiate between methods. Recent efforts are directly fixing this, but the reasons why researchers focused on tiny, useless datasets for so long are worth discussing.

Performance matters by task

One fact that surprises a lot of people is that even though language models have the best performance on a lot of NLP tasks, if all you're doing is cramming sentence embeddings into a downstream model, there [isn't much gained][23] from language model embeddings over simple methods like summing the individual Word2Vec word embeddings. (This makes sense, because the full context of the sentence is captured in the sentence co-occurrence matrix that generates the Word2Vec embeddings.)

Similarly, [I find][24] that for many graphs, simple first-order methods perform just as well on graph clustering and node label prediction tasks as higher-order embedding methods. In fact, higher-order methods are massively computationally wasteful for these use cases. Recommended first-order embedding methods are ProNE and my [GGVec with order=1][25].

Higher-order methods normally perform better on link prediction tasks, and I'm not the only one to find this. In the BioNEV paper, they find: "a large GraRep order value for link prediction tasks (e.g. 3, 4); a small value for node classification tasks (e.g. 1, 2)" (p. 9). Interestingly, the gap in link prediction performance is nonexistent for artificially created graphs. This suggests higher-order methods do learn some of the structure intrinsic to [real world graphs][26].
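A minimal sketch of the first-order vs. higher-order distinction, using a scipy CSR adjacency matrix: factor the adjacency matrix directly for first order, or augment it with powers of the random-walk transition matrix (the GraRep-style trick) for higher order. The toy graph and 2-step truncation are illustrative.

```python
# Minimal sketch: first-order vs higher-order node embeddings by factorization.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Toy undirected graph as a CSR adjacency matrix.
A = sp.csr_matrix(np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float))

def embed(M, dim=2):
    u, s, _ = svds(M, k=dim)  # truncated SVD = the factorization step
    return u * s              # rows are node embeddings

first_order = embed(A)        # factor the adjacency matrix directly

deg = np.asarray(A.sum(axis=1)).ravel()
P = sp.diags(1.0 / deg) @ A   # random-walk transition matrix
higher_order = embed(P + P @ P)  # GraRep-style: add 2-step connections
print(first_order.round(2), higher_order.round(2), sep="\n")
```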
For visualization, first-order methods are better. Visualizations of higher-order methods tend to have artifacts of their sampling: for instance, Node2Vec visualizations tend to have elongated/filament-like structures, which come from the embeddings of long single-strand random walks. See the visualizations by [Owen Cornec][27], created by first embedding the graph to 32-300 dimensions using a node embedding algorithm, then mapping this to 2d or 3d with the excellent UMAP algorithm.

Lastly, sometimes simple methods soundly beat higher-order methods (there's an instance of this in the OGB paper). The problem here is that we don't know when any method is better than another, and we definitely don't know why. There's definitely a reason different graph types respond better or worse to being represented by various methods; this is currently an open question. A big part of why is that the research space is inundated with useless new algorithms, because...

Academic incentives work against progress

Here's the cynic's view of how machine learning papers are made:
- Take an existing algorithm
- Add some new layer/hyperparameter, and make a cute mathematical story for why it matters
- Gridsearch your hyperparameters until you beat baselines from the original paper you aped
- Absolutely don't gridsearch the stuff you're comparing against in your results section
- Make a cute ACRONYM for your new method, put impossible-to-use Python 2 code on GitHub (or no code at all!) and bask in the citations

I'm [not][28] the [only one][29] with these views on the state of reproducible research. At least it's gotten slightly better in the last 2 years.

Sidebar: I hate Node2Vec

A side project of mine is a [node embedding library][25], and the most popular method in it is by far Node2Vec. Don't use Node2Vec. [Node2Vec][30] with p=1 and q=1 is the [Deepwalk][31] algorithm. Deepwalk is an actual innovation. The Node2Vec authors closely followed steps 1-5 above, including bonus points on step 5 by getting Word2Vec name recognition. This is not academic fraud -- the hyperparameters [do help a tiny bit][32] if you gridsearch really hard. But it's the presentable-to-your-parents sister of making the ML community worse off to progress your academic career. And certainly Node2Vec doesn't deserve 7500 citations.

Progress is all about practical issues

We've known how to train neural networks for well over 40 years. Yet they only exploded in popularity with [AlexNet][33] in 2012. This is because implementations and hardware came to a point where deep learning was practical. Similarly, we've known about factoring word co-occurrence matrices into word embeddings for at least 20 years. But word embeddings only exploded in 2013 with Word2Vec. The breakthrough here was that minibatch-based methods let you train a Wikipedia-scale embedding model on commodity hardware.

It's hard for methods in a field to make progress if training on a small amount of data takes days or weeks. You're disincentivized to explore new methods. If you want progress, your stuff has to run in reasonable time on commodity hardware. Even Google's original search algorithm [initially ran on commodity hardware][34].

Efficiency is paramount to progress

The reason deep learning research took off the way it did is because of improvements in [efficiency][35] as well as much better libraries and hardware support.

Academic code is terrible

Any amount of time you spend gridsearching Node2Vec on p and q is better spent gridsearching Deepwalk itself (on number of walks, length of walks, or Word2Vec hyperparameters). The problem is that people don't gridsearch over Deepwalk, because the implementations are all terrible.
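For context, Deepwalk itself is conceptually tiny: sample random walks over the graph, then train Word2Vec on the walks as if they were sentences. A minimal toy-scale sketch (assuming networkx and gensim; the hyperparameters are illustrative, and a fast implementation is exactly the hard part being complained about here):

```python
# Minimal Deepwalk sketch: random walks fed to Word2Vec as "sentences".
# Toy-scale only; fast implementations are the hard part.
import random
import networkx as nx                  # fine for a toy graph, despite the rant below
from gensim.models import Word2Vec

G = nx.karate_club_graph()

def random_walk(start, length=40):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(n) for n in walk]      # Word2Vec expects string tokens

walks = [random_walk(n) for n in G.nodes() for _ in range(10)]  # 10 walks per node
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1)
print(model.wv["0"][:5])               # embedding for node 0
```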
I wrote the [Nodevectors library][36] to have a fast Deepwalk implementation, because it took 32 hours to embed a graph with a measly 150,000 nodes using the reference Node2Vec implementation (the same takes 3 minutes with Nodevectors). It's no wonder people don't gridsearch on Deepwalk: a gridsearch would take weeks with the terrible reference implementations.

To give an example, in the original [GraphSAGE][37] paper they compare their algorithm to DeepWalk with walk lengths of 5, which is horrid if you've ever hyperparameter-tuned a Deepwalk algorithm. From their paper:

We did observe DeepWalk's performance could improve with further training, and in some cases it could become competitive with the unsupervised GraphSAGE approaches (but not the supervised approaches) if we let it run for >1000× longer than the other approaches (in terms of wall clock time for prediction on the test set)

I don't even think the GraphSAGE authors had bad intent -- Deepwalk implementations are simply so awful that they're turned away from using it properly. It's like trying to do deep learning with 2002 deep learning libraries and hardware.

Your architectures don't really matter

One of the more important papers this year was [OpenAI's "Scaling Laws"][38] paper, where the raw number of parameters in your model is the most predictive feature of overall performance. This was noted even in the original BERT paper, and it drives 2020's increase in absolutely massive language models. This is really just [Sutton's Bitter Lesson][39] in action:

General methods that leverage computation are ultimately the most effective, and by a large margin

Transformers might be [replacing convolution][40], too. As [Yannic Kilcher said][41], transformers are ruining everything. [They work on graphs][6]; in fact it's one of the [recent approaches][42], and it seems to be one of the more successful ones [when benchmarked][1].

Researchers seem to be putting so much effort into architecture, but it doesn't matter much in the end, because you can approximate anything by stacking more layers. Efficiency wins are great -- but neural net architectures are just one way to achieve that, and by tremendously over-researching this area we're leaving a lot of huge gains elsewhere on the table.

Current Graph Data Structure Implementations suck

NetworkX is a bad library. I mean, it's good if you're working on tiny graphs for babies, but for anything serious it chokes and forces you to rewrite everything in... what library, really? At this point most people working on large graphs end up hand-rolling some data structure. This is tough, because your computer's memory is a 1-dimensional array of 1's and 0's, and a graph has no obvious 1-d mapping. This is even harder when we take updating the graph (adding/removing some nodes/edges) into account. Here are a few options:

Disconnected networks of pointers

NetworkX is the best example. Here, every node is an object with a list of pointers to other nodes (the node's edges). This layout is like a linked list, and linked lists are the [root of all performance evil][43]. Linked lists go completely against how modern computers are designed: fetching things from memory is slow, and operating on memory is fast (by two orders of magnitude). Whenever you do anything in this layout, you make a roundtrip to RAM. It's slow by design; you can write this in Ruby or C or assembly and it'll be slow regardless, because memory fetches are slow in hardware. The main advantage of this layout is that adding a new node is O(1).
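As a minimal illustration of this layout (a simplified version of the NetworkX style, not its actual internals): every hop dereferences a separate heap object, but insertion is constant time.

```python
# Minimal sketch of the pointer-based layout (simplified; not NetworkX internals).
class Node:
    def __init__(self, name):
        self.name = name
        self.edges = []        # references ("pointers") to other Node objects

a, b, c = Node("a"), Node("b"), Node("c")
a.edges += [b, c]
b.edges.append(c)

d = Node("d")                  # adding a node is O(1)...
c.edges.append(d)

# ...but every hop below chases a separate heap object: a RAM round trip each.
print([n.name for n in a.edges])   # ['b', 'c']
```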
So if you're maintaining a massive graph where adding and removing nodes happens as often as reading from the graph, it makes sense. Another advantage of this layout is that it "scales": because everything is decoupled, you can put this data structure on a cluster. However, you're really creating a complex solution for a problem you created for yourself.

Sparse adjacency matrix

This layout is great for read-only graphs. I use it as the backend in my [nodevectors][25] library, and many other library writers use the [Scipy CSR Matrix][44]; you can see graph algorithms implemented on it [here][45]. The most popular layout for this use is the [CSR format][46], where three arrays hold the graph: one for edge destinations, one for edge weights, and an "index pointer" which says which edges come from which node (a short sketch of this layout appears at the end of this section). Because the CSR layout is simply 3 arrays, it scales on a single computer: a CSR matrix can be laid out on disk instead of in memory. You simply [memory map][47] the 3 arrays and use them on-disk from there. With modern NVMe drives, random seeks aren't slow anymore -- they're much faster than the distributed network calls you make when scaling the linked-list-based graph. I haven't seen anyone actually implement this yet, but it's on the roadmap for my implementation at least. The problem with this representation is that adding a node or edge means rebuilding the whole data structure.

Edgelist representations

This representation is three arrays: one for the edge sources, one for the edge destinations, and one for the edge weights. [DGL][48] uses this representation internally. It's a simple and compact layout which can be good for analysis. The problem compared to CSR graphs is that some seek operations are slower. Say you want all the edges for node #4243: you can't jump there without maintaining an index pointer array, so you either keep sorted order and binary search your way there (O(log n)) or keep unsorted order and linear search (O(n)). This data structure can also work as a memory-mapped disk array, and appending a node is fast on the unsorted version (it's slow on the sorted one).

Global methods are a dead end

Methods that work on the entire graph at once can't leverage computation, because they run out of RAM at a certain scale. So any method that wants a chance of becoming the new standard needs to be able to update piecemeal on parts of the graph.

Sampling-based methods

Sampling efficiency will matter more in the future.

Edgewise local methods. The only algorithms I know of that do this are GloVe and GGVec, which pass over an edge list and update embedding weights at each step. The problem with this approach is that it's hard to use for higher-order methods. The advantage is that such methods easily scale even on one computer, and incrementally adding a new node is as simple as taking the existing embeddings, adding a new one, and doing another epoch over the data.

Random walk sampling. This is used by Deepwalk and its descendants, usually for node embeddings rather than GNN methods. It can be computationally expensive and makes it hard to add new nodes, but it does scale; for instance, [Instagram][49] uses it to feed their recommendation system models.

Neighbourhood sampling. This is currently the most common approach in GNNs, and it can be lower or higher order depending on the neighborhood size. It also scales well, though implementing it efficiently can be challenging. It's currently used by [Pinterest][50]'s recommendation algorithms.
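As promised above, here is a small sketch of the CSR layout using scipy. The toy graph and array values are purely illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Directed toy graph: 0->1 (w=1.0), 0->2 (w=2.0), 2->0 (w=0.5)
indptr = np.array([0, 2, 2, 3])      # node i's edges live at indices[indptr[i]:indptr[i+1]]
indices = np.array([1, 2, 0])        # edge destinations
weights = np.array([1.0, 2.0, 0.5])  # edge weights
A = csr_matrix((weights, indices, indptr), shape=(3, 3))

node = 0
print(A.indices[A.indptr[node]:A.indptr[node + 1]])  # neighbours of node 0 -> [1 2]

# On-disk variant mentioned in the text: np.save each of the three arrays once,
# then np.load(..., mmap_mode="r") to use the graph without loading it into RAM.
```

The neighbour lookup is two array reads, which is exactly why this layout is so much friendlier to modern hardware than chasing pointers.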
Conclusion

Here are a few interesting open questions:

What is the relation between graph types and methods?

Consolidated benchmarking, like OGB: we're throwing random models at random benchmarks without understanding why or when they do better.

More fundamental research. Here's one I'm curious about: can other representation types, like [Poincaré embeddings][51], effectively encode directed relationships?

On the other hand, we should stop focusing on adding spicy new layers to test on the same tiny datasets. No one cares.

[1]: https://arxiv.org/pdf/2003.00982.pdf
[2]: https://arxiv.org/pdf/2002.11867.pdf
[3]: https://arxiv.org/pdf/1812.08434.pdf
[4]: https://arxiv.org/pdf/2005.00687.pdf
[5]: https://en.wikipedia.org/wiki/Adjacency_matrix
[6]: https://thegradient.pub/transformers-are-graph-neural-networks/
[7]: https://en.wikipedia.org/wiki/Word2vec
[8]: https://nlp.stanford.edu/pubs/glove.pdf
[9]: https://papers.nips.cc/paper/2014/file/feab05aa91085b7a8012516bc3533958-Paper.pdf
[10]: https://en.wikipedia.org/wiki/Bag-of-words_model
[11]: https://en.wikipedia.org/wiki/Co-occurrence
[12]: https://www.singlelunch.com/2020/02/16/embeddings-from-the-ground-up/
[13]: https://www.singlelunch.com/2019/01/27/word-embeddings-from-the-ground-up/
[14]: https://nlpprogress.com/
[15]: http://socsci.uci.edu/~rfutrell/papers/hahn2019estimating.pdf
[16]: https://en.wikipedia.org/wiki/Kolmogorov_complexity
[17]: https://bair.berkeley.edu/blog/2020/12/20/lmmem/
[18]: https://en.wikipedia.org/wiki/Laplacian_matrix
[19]: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=1F03130B02DC485C78BF364266B6F0CA?doi=10.1.1.19.8100&rep=rep1&type=pdf
[20]: https://en.wikipedia.org/wiki/Principal_component_analysis
[21]: https://www.ijcai.org/Proceedings/2019/0594.pdf
[22]: https://dl.acm.org/doi/10.1145/2806416.2806512
[23]: https://openreview.net/pdf?id=SyK00v5xx
[24]: https://github.com/VHRanger/nodevectors/blob/master/examples/link%20prediction.ipynb
[25]: https://github.com/VHRanger/nodevectors
[26]: https://arxiv.org/pdf/1310.2636.pdf
[27]: http://byowen.com/
[28]: https://arxiv.org/pdf/1807.03341.pdf
[29]: https://www.youtube.com/watch?v=Kee4ch3miVA
[30]: https://cs.stanford.edu/~jure/pubs/node2vec-kdd16.pdf
[31]: https://arxiv.org/pdf/1403.6652.pdf
[32]: https://arxiv.org/pdf/1911.11726.pdf
[33]: https://en.wikipedia.org/wiki/AlexNet
[34]: https://en.wikipedia.org/wiki/Google_data_centers#Original_hardware
[35]: https://openai.com/blog/ai-and-efficiency/
[36]: https://www.singlelunch.com/2019/08/01/700x-faster-node2vec-models-fastest-random-walks-on-a-graph/
[37]: https://arxiv.org/pdf/1706.02216.pdf
[38]: https://arxiv.org/pdf/2001.08361.pdf
[39]: http://incompleteideas.net/IncIdeas/BitterLesson.html
[40]: https://arxiv.org/abs/2010.11929
[41]: https://www.youtube.com/watch?v=TrdevFK_am4
[42]: https://arxiv.org/pdf/1710.10903.pdf
[43]: https://www.youtube.com/watch?v=fHNmRkzxHWs
[44]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html
[45]: https://docs.scipy.org/doc/scipy/reference/sparse.csgraph.html
[46]: https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_(CSR,_CRS_or_Yale_format)
[47]: https://en.wikipedia.org/wiki/Mmap
[48]: https://github.com/dmlc/dgl
[49]: https://ai.facebook.com/blog/powered-by-ai-instagrams-explore-recommender-system/
[50]: https://medium.com/pinterest-engineering/pinsage-a-new-graph-convolutional-neural-network-for-web-scale-recommender-systems-88795a107f48
[51]: https://arxiv.org/pdf/1705.08039.pdf

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper
reddit
LLM Vibe Score0
Human Vibe Score0.333
milaworldThis week

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper

Recently, I saw a post by Rajiv Shah, a Chicago-based data scientist, regarding an article published in Nature last year called "Deep learning of aftershock patterns following large earthquakes", written by scientists at Harvard in collaboration with Google. Below is the article:

Stand Up for Best Practices: Misuse of Deep Learning in Nature's Earthquake Aftershock Paper

The Dangers of Machine Learning Hype

Practitioners of AI, machine learning, predictive modeling, and data science have grown enormously over the last few years. What was once a niche field defined by its blend of knowledge is becoming a rapidly growing profession. As the excitement around AI continues to grow, the new wave of ML augmentation, automation, and GUI tools will lead to even more growth in the number of people trying to build predictive models. But here's the rub: while it becomes easier to use the tools of predictive modeling, predictive modeling knowledge is not yet a widespread commodity. Errors can be counterintuitive and subtle, and they can easily lead you to the wrong conclusions if you're not careful. I'm a data scientist who works with dozens of expert data science teams for a living. In my day job, I see these teams striving to build high-quality models. The best teams work together to review their models to detect problems. There are many hard-to-detect ways to end up with problematic models (say, by allowing target leakage into your training data). Identifying issues is not fun. It requires admitting that exciting results are "too good to be true" or that the methods were not the right approach. In other words, it's less about the sexy data science hype that gets headlines and more about rigorous scientific discipline.

Bad Methods Create Bad Results

Almost a year ago, I read an article in Nature that claimed unprecedented accuracy in predicting earthquake aftershocks by using deep learning. Reading the article, my internal radar became deeply suspicious of their results: their methods simply didn't carry many of the hallmarks of careful predictive modeling. I started to dig deeper. In the meantime, this article blew up and became widely recognized! It was even included in the release notes for TensorFlow as an example of what deep learning could do. However, in my digging, I found major flaws in the paper: namely, data leakage, which leads to unrealistic accuracy scores, and a lack of attention to model selection (you don't build a 6-layer neural network when a simpler model provides the same level of accuracy). To my earlier point: these are subtle but incredibly basic predictive modeling errors that can invalidate the entire results of an experiment. Data scientists are trained to recognize and avoid these issues in their work. I assumed that this was simply overlooked by the author, so I contacted her and let her know so that she could improve her analysis. Although we had previously communicated, she did not respond to my email over concerns with the paper.

Falling On Deaf Ears

So, what was I to do? My coworkers told me to just tweet it and let it go, but I wanted to stand up for good modeling practices. I thought reason and best practices would prevail, so I started a 6-month process of writing up my results and sharing them with Nature. Upon sharing my results, I received a note from Nature in January 2019 saying that despite serious concerns about data leakage and model selection that invalidate their experiment, they saw no need to correct the errors, because "Devries et al.
are concerned primarily with using machine learning as [a] tool to extract insight into the natural world, and not with details of the algorithm design." The authors provided a much harsher response. You can read the entire exchange on my GitHub. It's not enough to say that I was disappointed. This was a major paper (it's Nature!) that bought into the AI hype and was published despite using flawed methods. Then, just this week, I ran across articles by Arnaud Mignan and Marco Broccardo on shortcomings that they found in the aftershocks article. Here are two more data scientists, with expertise in earthquake analysis, who also noticed flaws in the paper. I have also placed my analysis and reproducible code on GitHub.

Standing Up For Predictive Modeling Methods

I want to make it clear: my goal is not to villainize the authors of the aftershocks paper. I don't believe that they were malicious, and I think that they would argue their goal was just to show how machine learning could be applied to aftershocks. Devries is an accomplished earthquake scientist who wanted to use the latest methods for her field of study and found exciting results from them. But here's the problem: their insights and results were based on fundamentally flawed methods. It's not enough to say, "This isn't a machine learning paper, it's an earthquake paper." If you use predictive modeling, then the quality of your results is determined by the quality of your modeling. Your work becomes data science work, and you are on the hook for your scientific rigor. There is a huge appetite for papers that use the latest technologies and approaches, and it becomes very difficult to push back on them. But if we allow papers or projects with fundamental issues to advance, it hurts all of us. It undermines the field of predictive modeling. Please push back on bad data science. Report bad findings to papers. And if they don't take action, go to Twitter, post about it, share your results, and make noise. This type of collective action worked to raise awareness of p-values and combat the epidemic of p-hacking. We need good machine learning practices if we want our field to continue to grow and maintain credibility.

Link to Rajiv's Article
Original Nature Publication (note: paywalled)
GitHub repo contains an attempt to reproduce Nature's paper
Confrontational correspondence with authors
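To make the leakage point concrete, here is a minimal synthetic sketch (entirely hypothetical data and a deliberately simple model, not the paper's actual setup): when samples from the same earthquake are allowed to straddle the train/test split, a model that merely memorizes per-quake quirks looks far more accurate than it is; holding out whole earthquakes with a group-aware split removes that illusion.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GroupKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_quakes, n = 50, 2000
quake_id = rng.integers(0, n_quakes, size=n)

# Each quake gets its own label bias: memorizable, but not generalizable.
p_q = rng.choice([0.1, 0.9], size=n_quakes)
fingerprint = rng.normal(scale=3.0, size=(n_quakes, 5))
X = fingerprint[quake_id] + rng.normal(size=(n, 5))  # features cluster by quake
y = rng.random(n) < p_q[quake_id]

clf = KNeighborsClassifier(n_neighbors=5)

# Leaky: a random split puts cells from the same quake on both sides,
# so the model scores well by copying labels from same-quake neighbours.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
print("random split :", clf.fit(Xtr, ytr).score(Xte, yte))  # optimistically high

# Honest: hold out whole earthquakes; the memorized bias doesn't transfer.
scores = cross_val_score(clf, X, y, groups=quake_id, cv=GroupKFold(n_splits=5))
print("grouped CV   :", scores.mean())  # near chance
```

In the aftershock setting the grouping variable is the mainshock: if grid cells from the same earthquake appear on both sides of the split, the reported accuracy says little about performance on future earthquakes.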

[Discussion] When ML and Data Science are the death of a good company: A cautionary tale.
reddit
LLM Vibe Score0
Human Vibe Score0.6
AlexSnakeKingThis week

[Discussion] When ML and Data Science are the death of a good company: A cautionary tale.

TL;DR: At Company A, Team X does advanced analytics using on-prem ERP tools and older programming languages. Their tools work very well and are designed on the basis of very deep business and domain expertise. Team Y is a new and ambitious data science team that thinks they can replace Team X's tools with a bunch of R scripts and a custom-built ML platform. Their models are simplistic, but more "fashionable" compared to the econometric models used by Team X, and Team Y benefits from the ML/DS moniker, so leadership is allowing Team Y to start a large-scale overhaul of the analytics platform in question. Team Y doesn't have the experience for such a large-scale transformation and is refusing to collaborate with Team X. This project is very likely going to fail and cause serious harm to the company as a whole, financially and from a people perspective. I argue that this is not just because of bad leadership, but also because of various trends and mindsets in the DS community at large.

Update (jump below the line for the original story): Several people in the comments are pointing out that this is just a management failure, not something due to ML/DS, and that you could replace DS with any buzz tech and the story would still be relevant. My response: Of course, any failure at an organization level is ultimately a management failure one way or the other. Moreover, it is also the case that ML/DS, when done correctly, will always improve a company's bottom line. There is no scenario where the proper ML solution, delivered at a reasonable cost and in a timely fashion, will somehow hurt the company's bottom line. My point is that in this case management is failing because of certain trends and practices that are specific to the ML/DS community, namely:

The idea that DS teams should operate independently of tech and business orgs -- too much autonomy for DS teams.

The disregard for domain knowledge that seems prevalent nowadays thanks to the ML hype: the idea that data scientists can be generalists and that someone with good enough ML chops can solve any business problem. That wasn't the case when I first left academia for industry in 2009 (back then nobody would even bother with a phone screen if you didn't have the right domain knowledge).

Overreliance on people who check all the ML-hype boxes (knows Python, R, Tensorflow, Shiny, etc., has the right Coursera certifications, has blogged on the topic, etc.) but lack depth of experience. DS interviews nowadays all seem to be: Can you tell me what a p-value is? What is elastic net regression? Show me how to fit a model in sklearn? How do you impute NAs in an R dataframe? Any smart person can look those up on Stack Overflow or Cross Validated. Instead, teams should be asking things like: Why does portfolio optimization use QP, not LP? How does a forecast influence a customer service level? When should a recommendation engine be content-based and when should it use collaborative filtering? Etc.

(This is a true story, happening at the company I currently work for. Names, domains, algorithms, and roles have been shuffled around to protect my anonymity.)

Company A has been around for several decades. It is not the biggest name in its domain, but it is a well-respected one. Risk analysis and portfolio optimization have been at the core of Company A's business since the 90s. They have a large team of 30 or so analysts who perform those tasks on a daily basis.
These analysts use ERP solutions implemented for them by one of the big ERP companies (SAP, Teradata, Oracle, JD Edwards, ...) or one of the major tech consulting companies (Deloitte, Accenture, PwC, Capgemini, etc.) in collaboration with their own in-house engineering team. The tools used are embarrassingly old school: classic RDBMSs running on on-prem servers or maybe even on mainframes, code written in COBOL, Fortran, weird proprietary stuff like ABAP or SPSS... you get the picture. But the models and analytic functions are pretty sophisticated, and surprisingly cutting edge compared to the published academic literature. Most of all, they fit well with the company's enterprise ecosystem, and they were honed based on years of deep domain knowledge. They have a tech team of several engineers (poached from the aforementioned software and consulting companies) and product managers (who came from the experienced pools of analysts and managers who use the software, or were poached from business rivals) maintaining and running this software. Their technology might be old school, but collectively, they know the domain and the company's overall architecture very, very well. They've guided the company through several large-scale upgrades and migrations, and they have a track record of delivering on time, without too much overhead. The few times they've stumbled, they knew how to pick themselves up very quickly. In fact, within their industry niche they have a reputation for their expertise, and they have very good relations with the various vendors they've had to deal with. They were the launching pad of several successful ERP consulting careers. Interestingly, despite dealing on a daily basis with statistical modeling and optimization algorithms, none of the analysts, engineers, or product managers involved describe themselves as data scientists or machine learning experts. It is mostly a cultural thing: their expertise predates the data science/ML hype that started circa 2010, and they got most of their chops using proprietary enterprise tools instead of the open-source tools popular nowadays. A few of them have formal statistical training, but most came from engineering or domain backgrounds and learned stats on the fly while doing their jobs. Call this team "Team X".

Sometime around the mid-2010s, Company A started having some serious anxiety issues: although still doing very well for a company its size, overall economic and demographic trends were shrinking its customer base, and a couple of so-called disruptors came up with a new app and business model that started seriously eating into their revenue. A suitable reaction to appease shareholders and Wall Street was necessary. The company already had a decent website and a pretty snazzy app; what more could be done? Leadership decided that it was high time that AI and ML become a core part of the company's business. An ambitious manager, with no science or engineering background, but who had very briefly toyed with a recommender system a couple of years back, was chosen to build a data science team, call it Team "Y" (he had a bachelor's in history from the local state college and had worked for several years in the company's marketing org). Team "Y" consists mostly of internal hires who decided they wanted to be data scientists and completed a Coursera certification or a Galvanize boot camp before being brought onto the team, along with a few fresh Ph.D. or M.Sc. holders who didn't like academia and wanted to try their hand at an industry role.
All of them were very bright people; they could write great Medium blog posts and give inspiring TED talks, but collectively they had very little real-world industry experience. As is the fashion nowadays, this group was made part of a data science org that reported directly to the CEO and board, bypassing the CIO and any tech or business VPs, since Company A wanted to claim the monikers "data driven" and "AI powered" in their upcoming shareholder meetings. In 3 or 4 years of existence, Team Y produced a few Python and R scripts. Their architectural experience consisted almost entirely of connecting Flask to S3 buckets or Redshift tables, with a couple of the more resourceful ones learning how to plug their models into Tableau or how to spin up a Kubernetes pod. But they needn't worry: the aforementioned manager, who was now a director (and was also doing an online master's to make up for his qualifications gap and bolster his chances of becoming VP soon -- at least he now understands what L1 regularization is), was a master at playing corporate politics and self-promotion. No matter how few actionable insights Team Y produced or how little code they deployed to production, he always had their back and made sure they had ample funding. In fact, he now had grandiose plans for setting up an all-purpose machine learning platform that could be used to solve all of the company's data problems. A couple of sharp-minded members of Team Y, upon googling their industry name along with the words "data science", realized that risk analysis was a prime candidate for being solved with Bayesian models, and there was already a nifty R package for doing just that, whose tutorial they went through on R-Bloggers.com. One of them had even submitted a Bayesian classifier kernel to a competition on Kaggle (he was 203rd on the leaderboard) and was eager to put his new-found expertise to use on a real-world problem. They pitched the idea to their director, who saw a perfect use case for his upcoming ML platform. They started work on it immediately, without bothering to check whether anybody at Company A was already doing risk analysis. Since their org was independent, they didn't really need to check with anybody else before they got funding for their initiative. Although it was basically a Naive Bayes classifier, the term "ML" was added to the project title to impress the board. As they progressed with their work, however, tensions started to build. They had asked the data warehousing and CA analytics teams to build pipelines for them, and word eventually got out to Team X about their project. Team X was initially thrilled: they offered to collaborate wholeheartedly, and would have loved to add an ML-based feather to their already impressive cap. The product owners and analysts were totally on board as well: they saw a chance to get in on the whole data science hype that they kept hearing about. But through some weird mix of arrogance and insecurity, Team Y refused to collaborate with them or share any of their long-term goals, even as they went around other parts of the company giving brown-bag presentations and tutorials on the new model they had created. Team X got resentful: from what they saw of Team Y's model, the approach was hopelessly naive and had little chance of scaling or being sustainable in production, and they knew exactly how to help with that.
Deploying the model to production would have taken them a few days, given how comfortable they were with DevOps and continuous delivery (Team Y had taken several months to figure out how to deploy a simple R script to production). And despite how old school their own tech was, Team X was crafty enough to plug it into their existing architecture. Moreover, the output of the model didn't take into account how the business would consume it or how it would be fed to downstream systems, and the product owners could have gone a long way toward making the model more amenable to adoption by the business stakeholders. But Team Y wouldn't listen, and their leads brushed off any attempts at communication, let alone collaboration. The vibe Team Y gave off was "We are the cutting-edge ML team; you guys are the legacy server grunts. We don't need your opinion.", and they seemed to have a complete disregard for domain knowledge -- or worse, they thought that all domain knowledge consisted of was being able to grasp the definitions of a few business metrics. Team X got frustrated and tried to express their concerns to leadership. But despite owning a vital link in Company A's business process, they were only ~50 people in a large 1000-strong technology and operations org, and they were several layers removed from the C-suite, so it was impossible for them to get their voices heard. Meanwhile, the unstoppable director was doing what he did best: playing corporate politics. Despite how little his team had actually delivered, he had convinced the board that all analysis and optimization tasks should now be migrated to his yet-to-be-delivered ML platform. Since most leaders now knew that there was overlap between Team Y's and Team X's objectives, his pitch was no longer that Team Y was going to create new insight, but that they were going to replace (or "modernize") the legacy statistics-based on-prem tools with more accurate cloud-based ML tools. Never mind that there was no support in the academic literature for the idea that Naive Bayes works better than the econometric approaches used by Team X, let alone the additional wacky idea that Bayesian optimization would definitely outperform the QP solvers running in production. Unbeknownst to Team X, the original Bayesian risk analysis project had now grown into a multimillion-dollar major overhaul initiative, which included the eventual replacement of all of the tools and functions supported by Team X along with the necessary migration to the cloud. The CIO and a couple of business VPs are now on board, and tech leadership is treating it as a done deal. An outside vendor, a startup nobody had heard of, was contracted to help build the platform, since Team Y has no engineering skills. The choice was deliberate, as calling on any of the established consulting or software companies would have eventually led leadership to the conclusion that Team X was better suited for a transformation of this scale than Team Y. Team Y has no experience with any major ERP deployments and no domain knowledge, yet they are being tasked with fundamentally changing the business process at the core of Company A's business. Their models actually perform worse than those deployed by Team X, and their architecture is hopelessly simplistic compared to what is necessary for running such a solution in production.
Ironically, using Bayesian thinking and based on all the evidence, the likelihood that Team Y succeeds is close to 0%. At best, the project is going to end up as a write-off of 50 million dollars or more. Once the !@#$ hits the fan, a couple of executive heads are going to roll, and dozens of people will get laid off. At worst, given how vital risk analysis and portfolio optimization are to Company A's revenue stream, the failure will eventually sink the whole company. It probably won't go bankrupt, but it will lose a significant portion of its business and workforce. Failed ERP implementations can and do sink large companies: just see what happened to National Grid US, SuperValu, or Target Canada. One might argue that this is more about corporate dysfunction and bad leadership than about data science and AI. But I disagree. I think the core driver of this debacle is indeed the blind faith in data scientists, ML models, and the promise of AI, and the overall culture of hype and self-promotion that is very common among the ML crowd. We haven't seen the end of this story: I sincerely hope that this ends well for the sake of my colleagues and all involved. Company A is a good company, and both its customers and its employees deserve better. But the chances of that happening are negligible given all the information available, and this failure will hit my company hard.

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open-source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo of rapidly deploying HuggingFace pre-trained models. As deep learning makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data are perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep learning: by pre-training deep models on massive data, the models capture internal data patterns, which helps many downstream tasks. With industry and academia investing more and more energy in pre-training research, distribution hubs for pre-trained models such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing pre-trained model dividends at an unprecedented speed.

In recent years, the data that machines model and understand has gradually evolved from single-modal to multimodal, and the semantic gap between different modalities is being closed, making it possible to retrieve data across modalities. Take CLIP, OpenAI's open-source work, as an example: it pre-trains twin towers for images and texts on a dataset of 400 million image-text pairs, connecting the semantics of pictures and texts. Many researchers in academia have since tackled multimodal problems such as image generation and retrieval based on this technology. But even though frontier technology can bridge the semantic gap between modalities, there remain many processes and challenges before it ships: heavy and complicated model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm rollout. These hinder frontier multimodal retrieval technology from reaching production and broad adoption.

DMetaSoul targets these pain points by abstracting and unifying steps such as model training optimization, online inference, and algorithm experimentation, forming a set of solutions that can quickly bring offline pre-trained models online. This post introduces how to use HuggingFace community pre-trained models to run online inference and algorithm experiments on the MetaSpore technology stack, so that the benefits of pre-trained models can be fully released to specific businesses and industries, including small and medium-sized enterprises. We give two multimodal retrieval demonstrations, text-to-text search and text-to-image search, for your reference.

Multimodal semantic retrieval

Our multimodal retrieval system supports both the text-to-text and text-to-image search scenarios, and consists of offline processing, model inference, online services, and other core modules (architecture diagram: https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31):

Offline processing: the offline data processing flows for the text-to-text and text-to-image scenarios, including model tuning, model export, index database construction, data push, etc.

Model inference.
After offline model training, we deploy our NLP and CV large models on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments.

Online services. Based on MetaSpore's online algorithm application framework, MetaSpore provides a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore supports both text-to-text search and text-to-image search, and can be migrated to other application scenarios at low cost.

The HuggingFace open-source community provides several excellent baseline models for multimodal retrieval problems like these, and they are often the starting point for real-world optimization. MetaSpore also uses HuggingFace community pre-trained models in its online text-to-text and text-to-image services: text-to-text search is based on a semantic similarity model for the question-answering domain optimized by MetaSpore, and text-to-image search is based on a community pre-trained model. These open-source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and the online retrieval algorithm services in detail. The model inference part is a standardized SaaS service with low coupling to the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform.

1.1 Offline Processing

Offline processing mainly involves exporting and loading the online models and building and pushing the document index. You can follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how the offline pre-trained models achieve inference in MetaSpore.

1.1.1 Search text by text

Traditional text retrieval systems are based on literal matching algorithms such as BM25. Because users' query words are diverse, a semantic gap between query words and documents is often encountered. For example, users misspell "iPhone" as "Phone", and search terms can be extremely long, such as "1~3 months old baby autumn small size bag pants". Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap, but they fundamentally fail to solve the problem. Only when the retrieval system fully understands users' queries and documents can it meet users' retrieval demands at the semantic level. With the continuous progress of pre-training and representation learning technology, some commercial search engines are integrating semantic vector retrieval methods into their retrieval ecosystems.

Semantic retrieval model

This post introduces a semantic vector retrieval application. MetaSpore built a semantic retrieval system based on encyclopedia question-and-answer data, adopting the Sentence-BERT model as the semantic vector representation model, which fine-tunes the twin-tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks.
The model structure is as follows: a query-doc symmetric two-tower model is used for text search and question-answer retrieval. The vector representations of the online query and the offline docs share the same representation model, so it is necessary to ensure consistency between the offline doc-library building model and the online query inference model. This case uses MetaSpore's text representation model Sbert-Chinese-QMC-domain-V1, optimized on an open-source semantic-similarity data set. The model encodes the question-and-answer data as vectors during offline database construction, and encodes the user query as a vector during online retrieval; since query and doc then live in the same semantic space, users' semantic retrieval demands can be met by vector similarity calculation.

Since the text representation model encodes queries online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model per the documentation. In the script, PyTorch tracing is used to export the model; the models are exported to the "./export" directory. The exported artifacts are mainly the ONNX model used for online inference, the tokenizer, and related configuration files, and they are loaded into MetaSpore Serving by the online serving system described below. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh (see the export sketch below).

Build library based on text search

The retrieval database is built on a million-scale encyclopedia question-and-answer data set. Following the description document, you need to download the data and complete the database construction. The question-and-answer data are encoded as vectors by the offline model, and the database construction data are then pushed to the service components. The whole database construction process is: preprocessing, converting the original data into a more general JSONline format for database construction; building the index, using the same model as online ("sbert-chinese-qmc-domain-v1") to index documents (one document object per line); pushing inverted (vector) and forward (document field) data to each component server. After offline database construction is completed, the various data are pushed to the corresponding service components, such as Milvus storing the vector representations of documents and MongoDB storing the document summary information. The online retrieval algorithm services will use these service components to obtain the relevant data.

1.1.2 Search images by text

Text and images are easy for humans to relate semantically but difficult for machines. First, from the perspective of data form, text is one-dimensional, discrete, ID-type data based on words, while images are continuous two- or three-dimensional data. Second, text is a subjective human creation with vibrant expressiveness, including twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search.
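Before continuing with text-to-image retrieval, here is the export sketch promised in 1.1.1: a minimal, hedged illustration of a tracing-style ONNX export of a HuggingFace text encoder. The hub id, paths, and opset below are assumptions for illustration, not MetaSpore's actual export script:

```python
# Minimal sketch of exporting a HuggingFace text encoder to ONNX.
import torch
from transformers import AutoTokenizer, AutoModel

name = "DMetaSoul/sbert-chinese-qmc-domain-v1"  # assumed hub id of the model named above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, return_dict=False).eval()

sample = tokenizer("如何办理身份证续期", return_tensors="pt")  # "How to renew an ID card"
torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"]),
    "export/text_encoder.onnx",               # assumed output path
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=14,
)
tokenizer.save_pretrained("export/tokenizer")  # serving needs the tokenizer files too
```

The ONNX file plus the saved tokenizer are exactly the kinds of artifacts the text says get copied to cloud storage and loaded by MetaSpore Serving.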
Traditional text-to-image retrieval generally relies on external text descriptions of the images, or on nearest-neighbor retrieval through image-associated text, which essentially degrades the problem to text search. It also faces many issues, such as how to obtain the associated text for pictures, and whether text-to-text search accuracy is high enough. Deep models have gradually evolved from single-modal to multimodal in recent years. Taking OpenAI's open-source project CLIP as an example: it trains on massive image-text data from the Internet and maps text and image data into the same semantic space, making semantic-vector-based text-and-image search possible.

CLIP graphic model

The text-to-image search introduced here is implemented with semantic vector retrieval, using the CLIP pre-trained model as the two-tower retrieval architecture. Because the CLIP model has aligned the semantics of its text-side and image-side towers on massive image-text data, it is particularly suitable for the text-to-image search scenario. Since images and text have different data forms, a query-doc asymmetric twin-tower model is used: the image-side model builds the offline database, and the text-side model handles the online queries. In the final online retrieval, the text-side model encodes the query and the image-side database is searched; the CLIP pre-training guarantees the semantic correlation between images and texts, drawing matching image-text pairs closer in vector space. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is in Chinese, a CLIP model supporting Chinese understanding is selected. The exported content includes the ONNX model used for online inference and the tokenizer, similar to the text search case, and MetaSpore Serving can load the model for inference from it.

Build library on Image search

You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole database construction process is: preprocessing, specifying the image directory and then generating a more general JSONline file for library construction; building the index, using the OpenAI/Clip-Vit-BASE-Patch32 pre-trained model to index the gallery, outputting one document object per line of index data; pushing inverted (vector) and forward (document field) data to each component server. As with text search, after offline database construction the relevant data are pushed to the service components, to be called by the online retrieval algorithm services.

1.2 Online Services

The overall online service architecture diagram: https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a

The multimodal search online service system supports the text-to-text and text-to-image application scenarios, and the whole online service consists of the following parts:

Query preprocessing service: encapsulates the preprocessing logic of the pre-trained models (text/image, etc.) and provides it through a gRPC interface;
Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic splitting, MetaSpore Serving calls, vector recall, ranking, document summaries, etc.;

User entry service: provides a web UI for users to debug and trace problems in the retrieval service.

From the perspective of a user request, these services form invocation dependencies from back to front, so to build up a multimodal sample you need to run each service from front to back. Before doing this, remember to export the offline models, put them online, and build the library first. This article introduces the parts of the online service system and builds the whole service system step by step according to the following guidance; see the README at the end of this article for more details.

1.2.1 Query preprocessing service

Deep learning models tend to be based on tensors, but NLP/CV models often have a preprocessing step that translates raw text and images into the tensors the deep model can accept. For example, NLP models often have a tokenizer to transform string-typed text data into discrete tensor data, and CV models have similar processing logic to complete cropping, scaling, transformation, and other processing of input images. Since this preprocessing logic is decoupled from the tensor inference of the deep model, and since deep-model inference has an independent ONNX-based technical stack, MetaSpore split the preprocessing logic out. The NLP preprocessing tokenizer has been integrated into the query preprocessing service. MetaSpore does the split via a relatively general convention: users only need to provide a preprocessing logic file implementing the load and predict interfaces, and export the necessary data and configuration files to be loaded into the preprocessing service. CV preprocessing logic will be integrated the same way later. The preprocessing service currently exposes a gRPC interface and is depended on by the query preprocessing (QP) module in the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded to this service to complete data preprocessing before the subsequent processing continues. The README provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule, so MetaSpore can provide gRPC services through a user-specified preprocessor.py, complete tokenizer or CV-related preprocessing, and translate requests into tensors the deep model can handle; the model inference is then carried out by MetaSpore Serving's subsequent submodules. The code is presented here: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor

1.2.2 Retrieval algorithm services

The retrieval algorithm service is the core of the whole online service system. It is responsible for experiment traffic splitting, assembling algorithm chains such as preprocessing, recall, and ranking, and invoking the dependent component services.
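The service itself is written in Java Spring (described next), but the recall chain it assembles is easy to sketch in Python against the components named above. Everything here -- hosts, collection and field names, the pooling -- is an illustrative assumption, not MetaSpore's actual code:

```python
# Sketch of the recall step: encode the query with the exported ONNX model,
# search Milvus for nearest document vectors, then fetch summaries from MongoDB.
import numpy as np
import onnxruntime as ort
from pymilvus import connections, Collection
from pymongo import MongoClient
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("export/tokenizer")
session = ort.InferenceSession("export/text_encoder.onnx")

def encode(query: str) -> np.ndarray:
    toks = tokenizer(query, return_tensors="np")
    hidden = session.run(None, {"input_ids": toks["input_ids"],
                                "attention_mask": toks["attention_mask"]})[0]
    # Crude mean pooling for the sketch; a real Sentence-BERT setup
    # pools with the attention mask.
    return hidden.mean(axis=1)[0]

connections.connect(host="localhost", port="19530")
docs = Collection("qa_docs")                          # assumed collection name
summaries = MongoClient()["search"]["qa_summaries"]   # assumed Mongo db/collection

hits = docs.search(data=[encode("如何办理身份证续期").tolist()],
                   anns_field="embedding",
                   param={"metric_type": "IP", "params": {"nprobe": 16}},
                   limit=10)
for hit in hits[0]:
    doc = summaries.find_one({"doc_id": hit.id})      # forward data keyed by doc id
    print(hit.distance, doc["title"] if doc else None)
```

The split matches the architecture above: Milvus holds the inverted (vector) data, MongoDB the forward (document field) data, and the algorithm service stitches them together per request.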
The whole retrieval algorithm service is developed on the Java Spring framework and supports the text-to-text and text-to-image multimodal retrieval scenarios. Thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at low cost. Here's a quick guide to configuring the environment and setting up the retrieval algorithm service (see the README for more details): install the dependent components, using Maven to install the online-serving component; configure the search service, copying the template configuration file and replacing the MongoDB, Milvus, and other configurations based on the development/production environment; install and configure Consul, which lets you synchronize the search service configuration in real time, including experiment traffic splitting, recall parameters, and ranking parameters (the project's configuration file shows the current configuration parameters of text-to-text and text-to-image search; the parameter modelName in the preprocessing and recall stages is the corresponding model exported in offline processing); finally, start the service from the entry script once the above configuration is complete. Once the service is started, you can test it! For example, for a user with userId=10 who wants to query "How to renew ID card", access the text search service.

1.2.3 User Entry Service

Since the retrieval algorithm service takes the form of an API, it is difficult to locate and trace problems with it directly; the text-to-image scenario in particular benefits from displaying retrieval results intuitively, to facilitate iterative optimization of the retrieval algorithm. This post therefore provides a lightweight web UI for text search and image search: a search input box and a results display page. Developed with Flask, the service can easily be integrated with other retrieval applications. It calls the retrieval algorithm service and displays the returned results on the page. It's also easy to install and start; once you're done, go to http://127.0.0.1:8090 to see whether the search UI service is working correctly. See the README at the end of this article for details.

Multimodal system demonstration

The multimodal retrieval service can be started once the offline processing and online service environment configuration are completed per the instructions above. Examples of text-to-image searches are shown below. Enter the text-to-image search application and type "cat": the first three results returned are cats (screenshot: https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2). Add a color constraint, "black cat", and it does return a black cat (screenshot: https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47). Strengthen the constraint further to "black cat on the bed", and the results contain pictures of a black cat climbing on a bed (screenshot: https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a). The cat can still be found after the color and scene modifications in the example above.
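To illustrate what the two towers do in the "black cat" demo above, here is a minimal, self-contained CLIP sketch using the openai/clip-vit-base-patch32 checkpoint mentioned earlier. The image files are placeholders; in the real system the image embeddings live in Milvus, and the text side is the exported Chinese-capable model:

```python
# Minimal CLIP two-tower sketch: embed a small gallery of images offline,
# embed a text query online, rank images by cosine similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = ["cat1.jpg", "dog1.jpg", "black_cat_on_bed.jpg"]  # placeholder gallery
images = [Image.open(p) for p in paths]

with torch.no_grad():
    # Offline tower: image embeddings (what gets indexed in Milvus).
    img_inputs = processor(images=images, return_tensors="pt")
    img_emb = model.get_image_features(**img_inputs)
    # Online tower: text embedding for the query.
    txt_inputs = processor(text=["black cat on the bed"],
                           return_tensors="pt", padding=True)
    txt_emb = model.get_text_features(**txt_inputs)

img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
scores = (txt_emb @ img_emb.T).squeeze(0)  # cosine similarity per image
for path, score in sorted(zip(paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")
```

Because both towers map into the same space, the query vector can be matched against image vectors that were computed entirely offline, which is what makes the asymmetric online/offline split described in 1.1.2 work.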
Conclusion

Cutting-edge pre-training technology can bridge the semantic gap between modalities, and the HuggingFace community greatly reduces the cost for developers of using pre-trained models. Combined with the MetaSpore online inference and online microservices ecosystem provided by DMetaSoul, pre-trained models are no longer mere offline experiments: they can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of large pre-trained models. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: more automated and wider access to the HuggingFace community ecosystem (MetaSpore will soon release a common model rollout mechanism to make the HuggingFace ecosystem more accessible, and will later integrate the preprocessing services into the online services); and offline algorithm optimization for multimodal retrieval (MetaSpore will continuously iterate on the offline algorithm components, including the text recall/ranking models and the image-text recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms). For related code and reference documentation for this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some image sources: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, open-source machine learning platform MetaSpore released a demo based on the HuggingFace Rapid deployment pre-training model. As deep learning technology makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data are perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep learning. Through pre-training of deep models on massive data, the models can capture the internal data patterns, thus helping many downstream tasks. With the industry and academia investing more and more energy in the research of pre-training technology, the distribution warehouses of pre-training models such as HuggingFace and Timm have emerged one after another. The open-source community release pre-training significant model dividends at an unprecedented speed. In recent years, the data form of machine modeling and understanding has gradually evolved from single-mode to multi-mode, and the semantic gap between different modes is being eliminated, making it possible to retrieve data across modes. Take CLIP, OpenAI’s open-source work, as an example, to pre-train the twin towers of images and texts on a dataset of 400 million pictures and texts and connect the semantics between pictures and texts. Many researchers in the academic world have been solving multimodal problems such as image generation and retrieval based on this technology. Although the frontier technology through the semantic gap between modal data, there is still a heavy and complicated model tuning, offline data processing, high performance online reasoning architecture design, heterogeneous computing, and online algorithm be born multiple processes and challenges, hindering the frontier multimodal retrieval technologies fall to the ground and pratt &whitney. DMetaSoul aims at the above technical pain points, abstracting and uniting many links such as model training optimization, online reasoning, and algorithm experiment, forming a set of solutions that can quickly apply offline pre-training model to online. This paper will introduce how to use the HuggingFace community pre-training model to conduct online reasoning and algorithm experiments based on MetaSpore technology ecology so that the benefits of the pre-training model can be fully released to the specific business or industry and small and medium-sized enterprises. And we will give the text search text and text search graph two multimodal retrieval demonstration examples for your reference. Multimodal semantic retrieval The sample architecture of multimodal retrieval is as follows: Our multimodal retrieval system supports both text search and text search application scenarios, including offline processing, model reasoning, online services, and other core modules: ​ https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31 Offline processing, including offline data processing processes for different application scenarios of text search and text search, including model tuning, model export, data index database construction, data push, etc. Model inference. 
After the offline model training, we deployed our NLP and CV large models based on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments. Online services. Based on MetaSpore’s online algorithm application framework, MetaSpore has a complete set of reusable online search services, including Front-end retrieval UI, multimodal data preprocessing, vector recall and sorting algorithm, AB experimental framework, etc. MetaSpore also supports text search by text and image scene search by text and can be migrated to other application scenarios at a low cost. The HuggingFace open source community has provided several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for actual optimization in the industry. MetaSpore also uses the pre-training model of the HuggingFace community in its online services of searching words by words and images by words. Searching words by words is based on the semantic similarity model of the question and answer field optimized by MetaSpore, and searching images by words is based on the community pre-training model. These community open source pre-training models are exported to the general ONNX format and loaded into MetaSpore Serving for online reasoning. The following sections will provide a detailed description of the model export and online retrieval algorithm services. The reasoning part of the model is standardized SAAS services with low coupling with the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform. 1.1 Offline Processing Offline processing mainly involves the export and loading of online models and index building and pushing of the document library. You can follow the step-by-step instructions below to complete the offline processing of text search and image search and see how the offline pre-training model achieves reasoning at MetaSpore. 1.1.1 Search text by text Traditional text retrieval systems are based on literal matching algorithms such as BM25. Due to users’ diverse query words, a semantic gap between query words and documents is often encountered. For example, users misspell “iPhone” as “Phone,” and search terms are incredibly long, such as “1 \~ 3 months old baby autumn small size bag pants”. Traditional text retrieval systems will use spelling correction, synonym expansion, search terms rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve this problem. Only when the retrieval system fully understands users’ query terms and documents can it meet users’ retrieval demands at the semantic level. With the continuous progress of pre-training and representational learning technology, some commercial search engines continue to integrate semantic vector retrieval methods based on symbolic learning into the retrieval ecology. Semantic retrieval model This paper introduces a set of semantic vector retrieval applications. MetaSpore built a set of semantic retrieval systems based on encyclopedia question and answer data. MetaSpore adopted the Sentence-Bert model as the semantic vector representation model, which fine-tunes the twin tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks. 
In this symmetric Query-Doc two-tower structure, used for text search and question-and-answer retrieval, the online query and the offline documents share the same vector representation model, so it is necessary to keep the offline library-building model and the online query inference model consistent. This case uses MetaSpore's text representation model sbert-chinese-qmc-domain-v1, optimized on open-source semantic similarity datasets. The model encodes the question-and-answer data as vectors during offline database construction and encodes the user query as a vector during online retrieval; because queries and documents live in the same semantic space, users' semantic retrieval demands can be met by a vector similarity metric.

Since the text representation model encodes queries online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. The script uses PyTorch tracing to export the model into the "./export" directory; a hedged sketch of this export step appears at the end of this subsection. The exported artifacts are the ONNX model used for online inference, the tokenizer, and related configuration files, which are loaded into MetaSpore Serving by the online serving system described below. Since the exported model will be copied to cloud storage, you also need to configure the related variables in env.sh.

Build the library for text-to-text search

The retrieval database is built on a million-scale encyclopedia question-and-answer dataset. Following the description document, you need to download the data and complete the database construction. The question-and-answer data are encoded as vectors by the offline model, and the construction output is then pushed to the service components. The whole process of database construction is as follows. Preprocessing: convert the original data into a more general JSONline format for database construction. Build the index: use the same model as online, sbert-chinese-qmc-domain-v1, to index the documents (one document object per line). Push: send the inverted (vector) and forward (document field) data to each component server. After offline database construction is completed, the data are pushed to the corresponding service components: Milvus stores the vector representations of the documents, and MongoDB stores their summary information. The online retrieval algorithm services use these service components to obtain relevant data.
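For illustration, a database record in the JSONline format might look like the following. The field names here are hypothetical, not the demo's actual schema, and the vector is truncated to three dimensions for readability; in practice it would have the encoder's full output dimension.

```json
{"id": "qa_000001", "question": "怎么续办身份证", "answer": "携带本人有效证件到当地派出所办理", "vector": [0.0132, -0.0871, 0.0445]}
```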
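As for the export step itself, here is a hedged sketch of tracing a HuggingFace text encoder to ONNX. This is a generic approximation, not MetaSpore's actual export script; the checkpoint name is the same assumption as above, and the Sentence-BERT pooling layer is left outside the exported graph in this sketch.

```python
# Hedged sketch: trace a BERT-style text encoder to ONNX, approximating the
# export step described above (MetaSpore's real script may differ).
import os
import torch
from transformers import AutoModel, AutoTokenizer

name = "DMetaSoul/sbert-chinese-qmc-domain-v1"  # assumed checkpoint, as above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torchscript=True)  # return tuples, for tracing
model.eval()

os.makedirs("./export", exist_ok=True)
dummy = tokenizer("怎么续办身份证", return_tensors="pt")

# Dynamic axes let the online service batch variable-length queries.
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "./export/model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=14,
)

# The tokenizer and its config ship alongside the ONNX model, as the post notes.
tokenizer.save_pretrained("./export")
```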
1.1.2 Text-to-image search

Text and images are easy for humans to relate semantically but difficult for machines. First, in terms of data form, text is one-dimensional, discrete ID-type data built from words, while images are continuous two- or three-dimensional data. Second, text is a subjective human creation with a rich expressive range, including twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search.

Traditional text-to-image retrieval generally relies on external text describing the image and retrieves through that associated text, which in essence degrades the problem to text-to-text search. It also faces many issues, such as how to obtain associated text for pictures in the first place and whether text-to-text accuracy is high enough. In recent years, deep models have gradually evolved from single-modal to multimodal. Taking OpenAI's open-source project CLIP as an example, the model is trained on massive image-text data from the Internet and maps text and images into the same semantic space, making text-to-image search based on semantic vectors possible.

CLIP text-image model

The text-to-image search introduced here is implemented as semantic vector retrieval, with the CLIP pre-trained model as the two-tower retrieval architecture. Because CLIP has already aligned the semantics of its text-side and image-side towers on massive image-text data, it is particularly suitable for the text-to-image scenario. Since image and text data have different forms, a Query-Doc asymmetric two-tower model is used: the image-side tower builds the offline database, and the text-side tower encodes queries online. In online retrieval, the text-side model encodes the query, and the database built with the image-side model is searched; the CLIP pre-training guarantees the semantic correlation between images and texts, since pre-training on a large amount of visual-language data pulls matched image-text pairs closer together in vector space (a hedged sketch of the two towers appears at the end of this subsection).

Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is Chinese, a CLIP model supporting Chinese understanding is selected. As with text-to-text search, the exported content includes the ONNX model used for online inference and the tokenizer, and MetaSpore Serving loads the exported content for model inference.

Build the library for image search

You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole process of database construction is as follows. Preprocessing: specify the image directory and generate a more general JSONline file for library construction. Build the index: use the openai/clip-vit-base-patch32 pre-trained model to index the gallery, outputting one document object per line of index data. Push: send the inverted (vector) and forward (document field) data to each component server. As with text-to-text search, after offline database construction the relevant data are pushed to the service components, which the online retrieval algorithm services call to obtain relevant data.
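Here is a hedged sketch of CLIP's asymmetric two towers using the HuggingFace transformers API: the image tower encodes the gallery offline, and the text tower encodes queries online. It uses the openai/clip-vit-base-patch32 checkpoint named in the indexing step; the gallery path is a stand-in, and a Chinese-capable CLIP variant would replace this checkpoint for a Chinese retrieval scene.

```python
# Hedged sketch: CLIP's two towers for text-to-image retrieval.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Offline (image tower): encode gallery images into vectors for the index.
image = Image.open("gallery/cat.jpg")  # hypothetical gallery path
image_inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_vec = model.get_image_features(**image_inputs)
image_vec = image_vec / image_vec.norm(dim=-1, keepdim=True)  # L2-normalize

# Online (text tower): encode the user query and search the image index.
text_inputs = processor(text=["black cat on the bed"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_vec = model.get_text_features(**text_inputs)
text_vec = text_vec / text_vec.norm(dim=-1, keepdim=True)

# Cosine similarity between the normalized vectors drives the recall step.
print(text_vec @ image_vec.T)
```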
1.2 Online Services

The overall online service architecture diagram is as follows:

https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a

The multimodal search online service system supports both the text-to-text and text-to-image application scenarios. The whole online service consists of the following parts. Query preprocessing service: encapsulates the preprocessing logic of the pre-trained models (text, image, etc.) and exposes it through a gRPC interface. Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic splitting, MetaSpore Serving calls, vector recall, ranking, document summaries, and so on. User entry service: a web UI that lets users debug the retrieval service and track down problems.

These services form a chain of invocation dependencies, so to stand up the multimodal demo you need to bring up the downstream services first. Before doing this, remember to export the offline models, upload them, and build the library. This article walks through each part of the online service system step by step; see the README at the end of this article for more details.

1.2.1 Query preprocessing service

Deep learning models operate on tensors, but NLP and CV models usually have a preprocessing step that translates raw text and images into the tensors the model accepts. For example, NLP models typically run a tokenizer to turn string data into discrete tensors, and CV models have similar logic for cropping, scaling, and otherwise transforming input images. On the one hand, this preprocessing logic is decoupled from the tensor inference of the deep model; on the other hand, the deep model inference already has an independent, ONNX-based technical stack. MetaSpore therefore split the preprocessing logic out. The NLP tokenizer preprocessing has been integrated into the query preprocessing service under a fairly general convention: users provide a preprocessing logic file that implements the load and predict interface and export the necessary data and configuration files, which are then loaded into the preprocessing service. CV preprocessing logic will be integrated in the same manner later.

The preprocessing service currently exposes a gRPC interface and is depended on by the query preprocessing (QP) module in the retrieval algorithm service. After a user request reaches the retrieval algorithm service, it is forwarded to the preprocessing service to complete the data preprocessing before subsequent processing continues. The README provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service.

To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule. Given a user-specified preprocessor.py, MetaSpore can provide gRPC services, complete the tokenizer or CV-related preprocessing, and translate requests into tensors the deep model can handle; the model inference is then carried out by the subsequent MetaSpore Serving submodules. The relevant code is here: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor

1.2.2 Retrieval algorithm services

The retrieval algorithm service is the core of the whole online service system. It is responsible for experiment traffic splitting, assembling the algorithm chain (preprocessing, recall, ranking, and so on), and invoking the dependent component services. The recall step can be pictured with the sketch below.
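The following pymilvus sketch shows what top-k vector recall against Milvus looks like. The collection name, field name, and vector dimension are hypothetical, not the demo's actual schema, and in the demo this call happens inside the Java service rather than in Python.

```python
# Hedged sketch: top-k vector recall from Milvus (pymilvus 2.x API).
from pymilvus import Collection, connections

connections.connect(host="127.0.0.1", port="19530")
collection = Collection("qa_demo")  # hypothetical collection of document vectors

query_vec = [0.0] * 768  # stand-in for the ONNX-served encoder's query embedding

results = collection.search(
    data=[query_vec],                                       # one query vector
    anns_field="embedding",                                 # hypothetical field name
    param={"metric_type": "IP", "params": {"nprobe": 16}},  # inner-product search
    limit=10,                                               # top-k candidates for ranking
)
for hit in results[0]:
    print(hit.id, hit.distance)  # ids are then joined with document summaries in MongoDB
```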
The whole retrieval algorithm service is developed on the Java Spring framework and supports the multimodal text-to-text and text-to-image retrieval scenarios. Thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at low cost.

Here is a quick guide to configuring the environment and setting up the retrieval algorithm service; see the README for more details. Install the dependent components: use Maven to install the online-serving component. Set up the service configuration: copy the template configuration file and fill in the MongoDB, Milvus, and other settings for your development or production environment. Install and configure Consul: Consul lets you update the search service configuration in real time, including the experiment traffic splits, recall parameters, and ranking parameters. The project's configuration file shows the current configuration parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages refers to the corresponding model exported in offline processing. Start the service: once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is up, you can test it. For example, for a user with userId=10 who wants to query "How to renew ID card," access the text search service.

1.2.3 User entry service

Because the retrieval algorithm service is exposed as an API, it is hard to locate and trace problems through it alone; for the text-to-image scenario in particular, displaying the retrieval results visually makes it much easier to iterate on the retrieval algorithm. This demo therefore provides a lightweight web UI for text and image search, with a search input box and a results page. Developed with Flask, the service is easy to integrate with other retrieval applications: it calls the retrieval algorithm service and displays the returned results on the page. It is also easy to install and start. Once you are done, go to http://127.0.0.1:8090 to see whether the search UI service is working correctly. See the README at the end of this article for details.

Multimodal system demonstration

Once offline processing and the online service environment have been set up following the instructions above, the multimodal retrieval service can be started. Examples of text-to-image searches are shown below. Open the text-to-image application and enter "cat" first; the first three results returned are cats:

https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2

If you add a color constraint to "cat" and search for "black cat," it does return a black cat:

https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47

Strengthening the constraint further to "black cat on the bed" returns pictures of a black cat climbing on a bed:

https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a

The cat can still be found through the text-to-image system after adding the color and scene constraints in the example above.
Conclusion

Cutting-edge pre-training technology can bridge the semantic gap between different modalities, and the HuggingFace community greatly reduces the cost for developers to use pre-trained models. Combined with the MetaSpore ecosystem of online inference and online microservices provided by DMetaSoul, pre-trained models are no longer confined to offline experimentation; they can genuinely go end-to-end from cutting-edge technology to industrial scenarios, fully releasing the dividends of large pre-trained models.

In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem. More automated and wider access to the HuggingFace community: MetaSpore will soon release a general model rollout mechanism to make the HuggingFace ecosystem easier to plug in, and will later integrate the preprocessing services into the online services. Offline algorithm optimization for multimodal retrieval: for multimodal retrieval scenarios, MetaSpore will keep iterating on the offline algorithm components, including the text recall/ranking models and the image-text recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms.

For the related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online

Image sources: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[Discussion] When ML and Data Science are the death of a good company: A cautionary tale.
reddit
LLM Vibe Score0
Human Vibe Score0.6
AlexSnakeKingThis week

[Discussion] When ML and Data Science are the death of a good company: A cautionary tale.

TL;DR: At Company A, Team X does advanced analytics using on-prem ERP tools and older programming languages. Their tools work very well and are designed based on very deep business and domain expertise. Team Y is a new and ambitious Data Science team that thinks they can replace Team X's tools with a bunch of R scripts and a custom-built ML platform. Their models are simplistic, but more "fashionable" than the econometric models used by Team X, and Team Y benefits from the ML/DS moniker, so leadership is allowing Team Y to start a large-scale overhaul of the analytics platform in question. Team Y doesn't have the experience for such a large-scale transformation and is refusing to collaborate with Team X. This project is very likely going to fail and cause serious harm to the company as a whole, financially and from a people perspective. I argue that this is not just because of bad leadership, but also because of various trends and mindsets in the DS community at large. Update (Jump to below the line for the original story): Several people in the comments are pointing out that this is just a management failure, not something due to ML/DS, and that you could replace DS with any buzz tech and the story would still be relevant. My response: Of course, any failure at an organization level is ultimately a management failure one way or the other. Moreover, it is also the case that ML/DS, when done correctly, will always improve a company's bottom line. There is no scenario where the proper ML solution, delivered at a reasonable cost and in a timely fashion, will somehow hurt the company's bottom line. My point is that in this case management is failing because of certain trends and practices that are specific to the ML/DS community, namely: The idea that DS teams should operate independently of tech and business orgs -- too much autonomy for DS teams. The disregard for domain knowledge that seems prevalent nowadays thanks to the ML hype: the idea that DS can be generalists and that someone with good enough ML chops can solve any business problem. That wasn't the case when I first left academia for the industry in 2009 (back then nobody would even bother with a phone screen if you didn't have the right domain knowledge). Overreliance on people who check all the ML-hype boxes (knows Python, R, Tensorflow, Shiny, etc., has the right Coursera certifications, has blogged on the topic, etc.) but are lacking in depth of experience. DS interviews nowadays all seem to be: Can you tell me what a p-value is? What is elastic net regression? Show me how to fit a model in sklearn? How do you impute NAs in an R dataframe? Any smart person can look those up on Stack Overflow or Cross Validated. Instead, teams should be asking things like: Why does portfolio optimization use QP, not LP? How does a forecast influence a customer service level? When should a recommendation engine be content-based and when should it use collaborative filtering? Etc. (This is a true story, happening to the company I currently work for. Names, domains, algorithms, and roles have been shuffled around to protect my anonymity.) Company A has been around for several decades. It is not the biggest name in its domain, but it is a well-respected one. Risk analysis and portfolio optimization have been a core of Company A's business since the 90s. They have a large team of 30 or so analysts who perform those tasks on a daily basis.
These analysts use ERP solutions implemented for them by one of the big ERP companies (SAP, Teradata, Oracle, JD Edwards, ...) or one of the major tech consulting companies (Deloitte, Accenture, PWC, Capgemini, etc.) in collaboration with their own in-house engineering team. The tools used are embarrassingly old school: classic RDBMS running on on-prem servers or maybe even on mainframes, code written in COBOL, Fortran, weird proprietary stuff like ABAP or SPSS... you get the picture. But the models and analytic functions were pretty sophisticated, and surprisingly cutting edge compared to the published academic literature. Most of all, they fit well with the company's enterprise ecosystem and were honed based on years of deep domain knowledge. They have a tech team of several engineers (poached from the aforementioned software and consulting companies) and product managers (who came from the experienced pools of analysts and managers who use the software, or were poached from business rivals) maintaining and running this software. Their technology might be old school, but collectively, they know the domain and the company's overall architecture very, very well. They've guided the company through several large-scale upgrades and migrations, and they have a track record of delivering on time, without too much overhead. The few times they've stumbled, they knew how to pick themselves up very quickly. In fact, within their industry niche, they have a reputation for their expertise and have very good relations with the various vendors they've had to deal with. They were the launching pad of several successful ERP consulting careers. Interestingly, despite dealing on a daily basis with statistical modeling and optimization algorithms, none of the analysts, engineers, or product managers involved describe themselves as data scientists or machine learning experts. It is mostly a cultural thing: their expertise predates the Data Science/ML hype that started circa 2010, and they got most of their chops using proprietary enterprise tools instead of the open source tools popular nowadays. A few of them have formal statistical training, but most of them came from engineering or domain backgrounds and learned stats on the fly while doing their job. Call this team "Team X." Sometime around the mid-2010s, Company A started having some serious anxiety issues: although still doing very well for a company its size, overall economic and demographic trends were shrinking its customer base, and a couple of so-called disruptors came up with a new app and business model that started seriously eating into their revenue. A suitable reaction to appease shareholders and Wall Street was necessary. The company already had a decent website and a pretty snazzy app; what more could be done? Leadership decided that it was high time that AI and ML become a core part of the company's business. An ambitious manager, with no science or engineering background, but who had very briefly toyed with a recommender system a couple of years back, was chosen to build a data science team, call it Team "Y" (he had a bachelor's in history from the local state college and worked for several years in the company's marketing org). Team "Y" consists mostly of internal hires who decided they wanted to be data scientists and completed a Coursera certification or a Galvanize boot camp before being brought on to the team, along with a few fresh Ph.D. or M.Sc. holders who didn't like academia and wanted to try their hand at an industry role.
All of them were very bright people; they could write great Medium blog posts and give inspiring TED talks, but collectively they had very little real-world industry experience. As is the fashion nowadays, this group was made part of a data science org that reported directly to the CEO and Board, bypassing the CIO and any tech or business VPs, since Company A wanted to claim the monikers "data driven" and "AI powered" in their upcoming shareholder meetings. In 3 or 4 years of existence, Team Y produced a few Python and R scripts. Their architectural experience consisted almost entirely of connecting Flask to S3 buckets or Redshift tables, with a couple of the more resourceful ones learning how to plug their models into Tableau or how to spin up a Kubernetes pod. But they needn't worry: the aforementioned manager, who was now a director (and was also doing an online Masters to make up for his qualifications gap and bolster his chances of becoming VP soon - at least he now understands what L1 regularization is), was a master at playing corporate politics and self-promotion. No matter how few actionable insights Team Y produced or how little code they deployed to production, he always had their back and made sure they had ample funding. In fact, he now had grandiose plans for setting up an all-purpose machine learning platform that could be used to solve all of the company's data problems. A couple of sharp-minded members of Team Y, upon googling their industry name along with the words "data science," realized that risk analysis was a prime candidate for being solved with Bayesian models, and there was already a nifty R package for doing just that, whose tutorial they went through on R-Bloggers.com. One of them had even submitted a Bayesian classifier kernel for a competition on Kaggle (he was 203rd on the leaderboard) and was eager to put his new-found expertise to use on a real-world problem. They pitched the idea to their director, who saw a perfect use case for his upcoming ML platform. They started work on it immediately, without bothering to check whether anybody at Company A was already doing risk analysis. Since their org was independent, they didn't really need to check with anybody else before they got funding for their initiative. Although it was basically a Naive Bayes classifier, the term ML was added to the project title to impress the board. As they progressed with their work, however, tensions started to build. They had asked the data warehousing and CA analytics teams to build pipelines for them, and word eventually got out to Team X about their project. Team X was initially thrilled: they offered to collaborate wholeheartedly and would have loved to add an ML-based feather to their already impressive cap. The product owners and analysts were totally on board as well: they saw a chance to get in on the whole Data Science hype that they kept hearing about. But through some weird mix of arrogance and insecurity, Team Y refused to collaborate with them or share any of their long-term goals, even as they went to other parts of the company giving brown bag presentations and tutorials on the new model they created. Team X got resentful: from what they saw of Team Y's model, their approach was hopelessly naive and had little chance of scaling or being sustainable in production, and they knew exactly how to help with that.
Deploying the model to production would have taken them a few days, given how comfortable they were with DevOps and continuous delivery (Team Y had taken several months to figure out how to deploy a simple R script to production). And despite how old school their own tech was, Team X were crafty enough to be able to plug it into their existing architecture. Moreover, the output of the model didn't take into account how the business would consume it or how it was going to be fed to downstream systems, and the product owners could have gone a long way in making the model more amenable to adoption by the business stakeholders. But Team Y wouldn't listen, and their leads brushed off any attempts at communication, let alone collaboration. The vibe that Team Y was giving off was "We are the cutting-edge ML team, you guys are the legacy server grunts. We don't need your opinion." They seemed to have a complete disregard for domain knowledge, or worse, they thought that all that domain knowledge consisted of was being able to grasp the definitions of a few business metrics. Team X got frustrated and tried to express their concerns to leadership. But despite owning a vital link in Company A's business process, they were only ~50 people in a large 1,000-strong technology and operations org, and they were several layers removed from the C-suite, so it was impossible for them to get their voices heard. Meanwhile, the unstoppable director was doing what he did best: playing corporate politics. Despite how little his team had actually delivered, he had convinced the board that all analysis and optimization tasks should now be migrated to his yet-to-be-delivered ML platform. Since most leaders now knew that there was overlap between Team Y's and Team X's objectives, his pitch was no longer that Team Y was going to create a new insight, but that they were going to replace (or modernize) the legacy statistics-based on-prem tools with more accurate cloud-based ML tools. Never mind that there was no support in the academic literature for the idea that Naive Bayes works better than the econometric approaches used by Team X, let alone the additional wacky idea that Bayesian optimization would definitely outperform the QP solvers that were running in production. Unbeknownst to Team X, the original Bayesian risk analysis project had now grown into a multimillion-dollar major overhaul initiative, which included the eventual replacement of all of the tools and functions supported by Team X along with the necessary migration to the cloud. The CIO and a couple of business VPs are now on board, and tech leadership is treating it as a done deal. An outside vendor, a startup that nobody had heard of, was contracted to help build the platform, since Team Y has no engineering skills. The choice was deliberate, as calling on any of the established consulting or software companies would have eventually led leadership to the conclusion that Team X was better suited than Team Y for a transformation on this scale. Team Y has no experience with any major ERP deployments and no domain knowledge, yet they are being tasked with fundamentally changing the business process that is at the core of Company A's business. Their models actually perform worse than those deployed by Team X, and their architecture is hopelessly simplistic compared to what is necessary for running such a solution in production.
Ironically, using Bayesian thinking and based on all the evidence, the likelihood that Team Y succeeds is close to 0%. At best, the project is going to end up being a write-off of 50 million dollars or more. Once the !@#$ hits the fan, a couple of executive heads are going to roll, and dozens of people will get laid off. At worst, given how vital risk analysis and portfolio optimization are to Company A's revenue stream, the failure will eventually sink the whole company. It probably won't go bankrupt, but it will lose a significant portion of its business and workforce. Failed ERP implementations can and do sink large companies: just see what happened to National Grid US, SuperValu, or Target Canada. One might argue that this is more about corporate dysfunction and bad leadership than about data science and AI. But I disagree. I think the core driver of this debacle is indeed the blind faith in Data Scientists, ML models, and the promise of AI, and the overall culture of hype and self-promotion that is very common among the ML crowd. We haven't seen the end of this story: I sincerely hope that this ends well for the sake of my colleagues and all involved. Company A is a good company, and both its customers and its employees deserve better. But the chances of that happening are negligible given all the information available, and this failure will hit my company hard.

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper
reddit
LLM Vibe Score0
Human Vibe Score0.333
milaworldThis week

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper

Recently, I saw a post by Rajiv Shah, a Chicago-based data scientist, regarding an article published in Nature last year called "Deep learning of aftershock patterns following large earthquakes," written by scientists at Harvard in collaboration with Google. Below is the article: Stand Up for Best Practices: Misuse of Deep Learning in Nature's Earthquake Aftershock Paper. The Dangers of Machine Learning Hype. Practitioners of AI, machine learning, predictive modeling, and data science have grown enormously in number over the last few years. What was once a niche field defined by its blend of knowledge is becoming a rapidly growing profession. As the excitement around AI continues to grow, the new wave of ML augmentation, automation, and GUI tools will lead to even more growth in the number of people trying to build predictive models. But here's the rub: while it becomes easier to use the tools of predictive modeling, predictive modeling knowledge is not yet a widespread commodity. Errors can be counterintuitive and subtle, and they can easily lead you to the wrong conclusions if you're not careful. I'm a data scientist who works with dozens of expert data science teams for a living. In my day job, I see these teams striving to build high-quality models. The best teams work together to review their models to detect problems. There are many hard-to-detect ways to end up with problematic models (say, by allowing target leakage into their training data). Identifying issues is not fun. It requires admitting that exciting results are "too good to be true" or that the methods were not the right approach. In other words, it's less about the sexy data science hype that gets headlines and more about a rigorous scientific discipline. Bad Methods Create Bad Results. Almost a year ago, I read an article in Nature that claimed unprecedented accuracy in predicting earthquake aftershocks by using deep learning. Reading the article, my internal radar became deeply suspicious of their results. Their methods simply didn't carry many of the hallmarks of careful predictive modeling. I started to dig deeper. In the meantime, this article blew up and became widely recognized! It was even included in the release notes for Tensorflow as an example of what deep learning could do. However, in my digging, I found major flaws in the paper. Namely, data leakage, which leads to unrealistic accuracy scores, and a lack of attention to model selection (you don't build a 6-layer neural network when a simpler model provides the same level of accuracy). To my earlier point: these are subtle but incredibly basic predictive modeling errors that can invalidate the entire results of an experiment. Data scientists are trained to recognize and avoid these issues in their work. I assumed that this was simply overlooked by the author, so I contacted her and let her know so that she could improve her analysis. Although we had previously communicated, she did not respond to my email over concerns with the paper. Falling On Deaf Ears. So, what was I to do? My coworkers told me to just tweet it and let it go, but I wanted to stand up for good modeling practices. I thought reason and best practices would prevail, so I started a 6-month process of writing up my results and sharing them with Nature. Upon sharing my results, I received a note from Nature in January 2019 saying that despite serious concerns about data leakage and model selection that invalidate their experiment, they saw no need to correct the errors, because "Devries et al.
are concerned primarily with using machine learning as [a] tool to extract insight into the natural world, and not with details of the algorithm design." The authors provided a much harsher response. You can read the entire exchange on my github. It's not enough to say that I was disappointed. This was a major paper (it's Nature!) that bought into the AI hype and got published despite its flawed methods. Then, just this week, I ran across articles by Arnaud Mignan and Marco Broccardo on shortcomings that they found in the aftershocks article. Here are two more data scientists with expertise in earthquake analysis who also noticed flaws in the paper. I have also placed my analysis and reproducible code on github. Standing Up For Predictive Modeling Methods. I want to make it clear: my goal is not to villainize the authors of the aftershocks paper. I don't believe that they were malicious, and I think that they would argue their goal was just to show how machine learning could be applied to aftershocks. Devries is an accomplished earthquake scientist who wanted to use the latest methods for her field of study and found exciting results from it. But here's the problem: their insights and results were based on fundamentally flawed methods. It's not enough to say, "This isn't a machine learning paper, it's an earthquake paper." If you use predictive modeling, then the quality of your results is determined by the quality of your modeling. Your work becomes data science work, and you are on the hook for your scientific rigor. There is a huge appetite for papers that use the latest technologies and approaches. It becomes very difficult to push back on these papers. But if we allow papers or projects with fundamental issues to advance, it hurts all of us. It undermines the field of predictive modeling. Please push back on bad data science. Report bad findings to papers. And if they don't take action, go to twitter, post about it, share your results, and make noise. This type of collective action worked to raise awareness of p-values and combat the epidemic of p-hacking. We need good machine learning practices if we want our field to continue to grow and maintain credibility. Link to Rajiv's Article. Original Nature Publication (note: paywalled). GitHub repo contains an attempt to reproduce Nature's paper. Confrontational correspondence with authors.

Started a content marketing agency 6 years ago - $0 to $5,974,324 (2023 update)
reddit
LLM Vibe Score0
Human Vibe Score1
mr_t_forhireThis week

Started a content marketing agency 6 years ago - $0 to $5,974,324 (2023 update)

Hey friends, My name is Tyler and for the past 6 years, I’ve been documenting my experience building a content marketing agency called Optimist. Year 1 - 0 to $500k ARR Year 2 - $500k to $1MM ARR Year 3 - $1MM ARR to $1.5MM(ish) ARR Year 4 - $3,333,686 Revenue Year 5 - $4,539,659 Revenue How Optimist Works First, an overview/recap of the Optimist business model: We operate as a “collective” of full time/professional freelancers Everyone aside from me is a contractor Entirely remote/distributed team Each freelancer earns $65-85/hour Clients pay us a flat monthly fee for full-service content marketing (research, strategy, writing, editing, design/photography, reporting and analytics, targeted linkbuilding, and more) We recently introduced hourly engagements for clients who fit our model but have some existing in-house support Packages range in price from $10-20k/mo We offer profit share to everyone on our core team as a way to give everyone ownership in the company In 2022, we posted $1,434,665 in revenue. It was our highest revenue year to date and brings our lifetime total to $5,974,324. Here’s our monthly revenue from January 2017 to December of 2022. But, like every year, it was a mix of ups and downs. Here’s my dispatch for 2023. — Running a business is like spilling a drink. It starts as a small and simple thing. But, if you don’t clean it up, the spill will spread and grow — taking up more space, seeping into every crack. There’s always something you could be doing. Marketing you could be working on. Pitches you could be making. Networking you could be doing. Client work you could help with. It can be all-consuming. And it will be — if you don’t clean up the spill. I realized this year that I had no containment for the spill that I created. Running an agency was spilling over into nearly every moment of my life. When I wasn’t working, I was thinking about work. When I wasn’t thinking about work, I was dreaming about it. Over the years, I’ve shared about a lot of my personal feelings and experience as an entrepreneur. And I also discussed my reckoning with the limitations of running the business we’ve built. My acceptance that it was an airplane but not a rocket. And my plan to try to compartmentalize the agency to make room in my life for other things — new business ideas, new revenue streams, and maybe some non-income-producing activity. 🤷 What I found in 2022 was that the business wasn’t quite ready for me to make that move. It was still sucking up too much of my time and attention. There were still too many gaps to fill and I was the one who was often filling them. So what do you do? Ultimately you have two choices on the table anytime you run a business and it’s not going the way you want it: Walk away Turn the ship — slowly For a huge number of reasons (personal, professional, financial, etc), walking away from Optimist was not really even an option or the right move for me. But it did feel like things needed to change. I needed to keep turning the ship to get it to the place where it fit into my life — instead of my life fitting around the business. This means 2022 was a year of transition for the agency. (Again?) Refocusing on Profit Some money is better than no money. Right? Oddly, this was one of the questions I found myself asking in 2022. Over the years, we’ve been fortunate to have many clients who have stuck with us a long time. In some cases, we’ve had clients work with us for 2, 3, or even 4 years. (That’s over half of our existence!) 
But, things have gotten more expensive — we’ve all felt it. We’ve had to increase pay to remain competitive for top talent. Software costs have gone up. It’s eaten into our margin. Because of our increasing costs and evolving scope, many of our best, most loyal clients were our least profitable. In fact, many were barely profitable — if at all. We’ve tried to combat that by increasing rates on new, incoming clients to reflect our new costs and try to make up for shrinking margin on long-term clients. But we didn’t have a good strategy in place for updating pricing for current clients. And it bit us in the ass. Subsidizing lower-profit, long-term clients with new, higher-margin clients ultimately didn’t work out. Our margins continued to dwindle and some months we were barely breaking even while posting six-figures of monthly revenue. 2022 was our highest revenue year but one of our least profitable. It only left one option. We had to raise rates on some of our long-term clients. But, of course, raising rates on a great, long-term client can be delicate. You’ve built a relationship with these people over the years and you’re setting yourself up for an ultimatum — are you more valuable to the client or is the client more valuable to you? Who will blink first? We offered all of these clients the opportunity to move to updated pricing. Unfortunately, some of them weren’t on board. Again, we had 2 options: Keep them at a low/no profit rate Let them churn It seems intuitive that having a low-profit client is better than having no client. But we’ve learned an important lesson many times over the years. Our business doesn’t scale infinitely and we can only handle so many clients at a time. That means that low-profit clients are actually costing us money in some cases. Say our average client generates $2,500 per month in profit — $30,000 per year. If one of our clients is only generating $500/mo in profit, working with them means missing out on bringing on a more profitable client (assuming our team is currently at capacity). Instead of $30,000/year, we’re only making $6,000. Keeping that client costs us $24,000. That’s called opportunity cost. So it’s clear: We had to let these clients churn. We decided to churn about 25% of our existing clients. On paper, the math made sense. And we had a pretty consistent flow of new opportunities coming our way. At the time, it felt like a no-brainer decision. And I felt confident that we could quickly replace these low-profit clients with higher-margin ones. I was wrong. Eating Shit Right after we initiated proactively churning some of our clients, other clients — ones we planned to keep — gave us notice that they were planning to end the engagement. Ouch. Fuck. We went from a 25% planned drop in revenue to a nearly 40% cliff staring us right in the face. Then things got even worse. Around Q3 of this year, talk of recession and layoffs really started to intensify. We work primarily with tech companies and startups. And these were the areas most heavily impacted by the economic news. Venture funding was drying up. Our leads started to slow down. This put us in a tough position. Looking back now, I think it’s clear that I made the wrong decision. We went about this process in the wrong way. The reality sinks in when you consider the imbalance between losing a client and gaining a client. It takes 30 days for someone to fire us. It’s a light switch. But it could take 1-3 months to qualify, close, and onboard a new client. 
We have lots of upfront work, research, and planning that goes into the process. We have to learn a new brand voice, tone, and style. It's a marathon. So, for every client we "trade," there's a lapse in revenue and work. This means that, in retrospect, I would probably have made this transition using some kind of staggered schedule rather than a cut-and-dry approach. We could have gradually off-boarded clients when we had more definitive work to replace them. I was too confident. But that's a lesson I had to learn the hard way. Rebuilding & Resetting Most of the voluntary and involuntary churn happened toward the end of 2022. So we're still dealing with the fallout. Right now, it feels like a period of rebuilding. We didn't quite lose 50% of our revenue, but we definitely saw a big hit heading into 2023. To be transparent: It sucks. It feels like a gigantic mistake that I made which set us back significantly from our previous high point. I acted rashly and it cost us a lot of money — at least on the surface. But I remind myself of the situation we were in previously. Nearly twice the revenue but struggling to maintain profitability. Would it have been better to try to slowly fix that situation and battle through months of loss or barely-break-even profits? Or was ripping off the bandaid the right move after all? I'm an optimist. (Heh, heh) Plus, I know that spiraling over past decisions won't change them or help me move forward. So I'm choosing to look at this as an opportunity — to rebuild, reset, and refocus the company. I get to take all of the tough lessons I've learned over the last 6 years and apply them to build the company in a way that better aligns with our new and current goals. It's not quite a fresh, clean start, but by parting ways with some of our oldest clients, we've eliminated some of the "debt" that's accumulated over the years. We get a chance to fully realize the new positioning that we rolled out last year. Many of those long-term clients who churned had a scope of work or engagement structure that didn't fit with our new positioning and focus. So, by losing them, we're able to completely close up shop on the SOWs that no longer align with the future version of Optimist. Our smaller roster of clients is a better fit for that future. My job is to protect that positioning by ensuring that while we're rebuilding our new roster of clients we don't get desperate. We maintain the qualifications we set out for future clients and only take on work that fits. How's that for seeing the upside? Some other upside from the situation is that we got an opportunity to ask for candid feedback from clients who were leaving. We asked for insight about their decision, what factors they considered, how they perceived us, and the value of our work. Some of the reasons clients left were obvious and possibly unavoidable. Things like budget cuts, insourcing, and uncertainty about the economy all played at least some part in these decisions. But, reading between the lines, there was one key insight that really struck me. It's one of those "oh, yeah — duh — I already knew that" things that can be difficult to learn and easy to forget... We're in the Relationship Business (Plan Accordingly) For all of our focus on things like rankings, keywords, content, conversions, and a buffet of relevant metrics, it can be easy to lose the forest for the trees. Yes, the work itself matters. Yes, the outcomes — the metrics — matter. But sometimes the relationship matters more.
When you're running an agency, you can live or die by someone just liking you. Admittedly, this feels totally unfair. It opens up all kinds of dilemmas, frustration, opportunity for bias and prejudice, and other general messiness. But it's the real world. If a client doesn't enjoy working with us — even if for purely personal reasons — they could easily have the power to end the engagement, regardless of how well we did our actual job. We found some evidence of this in the offboarding conversations we had with clients. In some cases, we had clients for whom we had driven triple- and quadruple-digit growth. Our work was clearly moving the needle and generating positive ROI, and we had the data to prove it. But they decided to "take things in another direction" regardless. And when we asked why they made the decision, it was clear that it was more about the working relationship than anything we could have improved about the service itself. The inverse is also often true. Our best clients have lasting relationships with our team. The work is important — and they want results. But even if things aren't quite going according to plan, they're patient and quick to forgive. Those relationships feel solid — unshakeable. Many of these folks move on to new roles or new companies and quickly look for an opportunity to work with us again. On both sides, relationships are often more important than the work itself. We've already established that we're not building a business that will scale in a massive way. Optimist will always be a small, boutique service firm. We don't need 100 new leads per month. We need a small, steady roster of clients who are a great fit for the work we do and the value we create. We want them to stick around. We want to be their long-term partner. I'm not built for churn-and-burn agency life. And neither is the business. When I look at things through this lens, I realize how much I can cut from our overall business strategy. We don't need an ultra-sophisticated, multi-channel marketing strategy. We just need strong relationships — enough of them to make our business work. There are a few key things we can take away from this as a matter of business strategy: Put most of our effort into building and strengthening relationships with our existing clients. Be intentional about establishing a strong relationship with new clients as part of onboarding. Focus on relationships as the main driver of future business development. Embracing Reality: Theory vs Practice Okay, so with the big learnings out of the way, I want to pivot into another key lesson from 2022. It's the importance of understanding theory vs practice — specifically when it comes to thinking about time, work, and life. It all started when I was considering how to best structure my days and weeks around running Optimist, my other ventures, and my life goals outside of work. Over the years, I've dabbled in many different ways to block time and find focus — to compartmentalize all of the things that are spinning and need my attention. As I mapped this out, I realized that I often tried to spread myself too thin throughout the week. Not just that I was trying to do too much but that I was spreading that work into too many small chunks rather than carving out time for focus. In theory, 5 hours is 5 hours. If you have 5 hours of work to get done, you just fit it into your schedule whenever you have an open time slot.
In reality, a single 5-hour block of work is 10x more productive and satisfying than 10, 30-minute blocks of work spread out across the week. In part, this is because of context switching. Turning your focus from one thing to another thing takes time. Achieving flow and focus takes time. And the more you jump from one project to another, the more time you “lose” to switching. This is insightful for me both in the context of work and planning my day, but also thinking about my life outside of Optimist. One of my personal goals is to put a finite limit on my work time and give myself more freedom. I can structure that in many different ways. Is it better to work 5 days a week but log off 1 hour early each day? Or should I try to fit more hours into each workday so I can take a full day off? Of course, it’s the latter. Both because of the cost of context switching and spreading work into more, smaller chunks — but also because of the remainder that I end up with when I’m done working. A single extra hour in my day probably means nothing. Maybe I can binge-watch one more episode of a new show or do a few extra chores around the house. But it doesn’t significantly improve my life or help me find greater balance. Most things I want to do outside of work can’t fit into a single extra hour. A full day off from work unlocks many more options. I can take the day to go hiking or biking. I can spend the day with my wife, planning or playing a game. Or I can push it up against the weekend and take a 3-day trip. It gives me more of the freedom and balance that I ultimately want. So this has become a guiding principle for how I structure my schedule. I want to: Minimize context switching Maximize focused time for work and for non-work The idea of embracing reality also bleeds into some of the shifts in business strategy that I mentioned above. In theory, any time spent on marketing will have a positive impact on the company. In reality, focusing more on relationships than blasting tweets into the ether is much more likely to drive the kind of growth and stability that we’re seeking. As I think about 2023, I think this is a recurring theme. It manifests in many ways. Companies are making budget cuts and tough decisions about focus and strategy. Most of us are looking for ways to rein in the excess and have greater impact with a bit less time and money. We can’t do everything. We can’t even do most things. So our #1 priority should be to understand the reality of our time and our effort to make the most of every moment (in both work and leisure). That means thinking deeply about our strengths and our limitations. Being practical, even if it feels like sacrifice. Update on Other Businesses Finally, I want to close up by sharing a bit about my ventures outside of Optimist. I shared last year how I planned to shift some of my (finite) time and attention to new ventures and opportunities. And, while I didn’t get to devote as much as I hoped to these new pursuits, they weren’t totally in vain. I made progress across the board on all of the items I laid out in my post. Here’s what happened: Juice: The first Optimist spin-out agency At the end of 2021, we launched our first new service business based on demand from Optimist clients. Focused entirely on building links for SEO, we called the agency Juice. Overall, we made strong progress toward turning this into a legitimate standalone business in 2022. 
Relying mostly on existing Optimist clients and a few word-of-mouth opportunities (no other marketing), we built a team and set up a decent workflow and operations. There are still many kinks and challenges that we're working through on this front. All told, Juice posted almost $100,000 in revenue in our first full year. Monetizing the community I started 2022 with a focus on figuring out how to monetize our free community, Top of the Funnel. Originally, my plan was to sell sponsorships as the main revenue driver. And that option is still on the table. But, this year, I pivoted to selling paid content and subscriptions. We launched a paid tier for content and SEO entrepreneurs where I share more of my lessons, workflows, and ideas for building and running a freelance or agency business. It's gained some initial traction — we reached ~$1,000 MRR from paid subscriptions. In total, our community revenue for 2022 was about $2,500. In 2023, I'm hoping to turn this into a $30,000 - $50,000 revenue opportunity. Right now, we're on track for ~$15,000. Agency partnerships and referrals In 2022, we also got more serious about referring leads to other agencies. Any opportunity that was not a fit for Optimist, or that we didn't have capacity to take on, we'd try to connect with another partner. Transparently, we struggled to operationalize this as effectively as I would have liked. In part, this was driven by my lack of focus here. With the other challenges throughout the year, I wasn't able to dedicate as much time as I'd like to setting goals and putting workflows into place. But it wasn't a total bust. We referred out several dozen potential clients to partner agencies. Of those, a handful ended up converting into sales — and referral commission. In total, we generated about $10,000 in revenue from referrals. I still see this as a huge opportunity for us to unlock in 2023. Affiliate websites Lastly, I mentioned spending some time on my new and existing affiliate sites as another big business opportunity in 2022. This ultimately fell to the bottom of my list and didn't get nearly the attention I wanted. But I did get a chance to spend a few weeks throughout the year building this income stream. For 2022, I generated just under $2,000 in revenue from affiliate content. My wife has graciously agreed to dedicate some of her time and talent to these projects. So, for 2023, I think this will become a bit of a family venture. I'm hoping to build a solid and consistent workflow, expand the team, and develop a more solid business strategy. Postscript — AI, SEO, OMG As I'm writing this, much of my world is in upheaval. If you're not in this space (and/or have possibly been living under a rock), the release of ChatGPT in late 2022 has sparked an arms race between Google, Bing, OpenAI, and many other players. The short overview: AI is likely to fundamentally change the way internet search works. This has a huge impact on almost all of the work that I do and the businesses that I run. Much of our focus is on SEO and understanding the current Google algorithm, how to generate traffic for clients, and how to drive traffic to our sites and projects. That may all change — very rapidly. This means we're standing at a very interesting point in time. On the one hand, it's scary as hell. There's a non-zero chance that this will fundamentally shift — possibly upturn — our core business model at Optimist. It could dramatically change how we work and/or reduce demand for our core services. No bueno.
But it’s also an opportunity (there’s the optimist in me, again). I certainly see a world where we can become leaders in this new frontier. We can pivot, adjust, and capitalize on a now-unknown version of SEO that’s focused on understanding and optimizing for AI-as-search. With that, we may also be able to help others — say, those in our community? — navigate this tumultuous time. See? It’s an opportunity. I wish I had the answers right now. But it’s still a time of uncertainty. I just know that there’s a lot of change happening and I want to be in front of it rather than trying to play catch-up. Wish me luck.

—

Alright friends — that’s my update for 2023! I’ve always appreciated sharing these updates with the Reddit community, getting feedback, being asked tough questions, and even battling it out with some of my haters (hey!! 👋).

As usual, I’m going to pop in throughout the next few days to respond to comments or answer questions. Feel free to share thoughts, ideas, and brutal takedowns in the comments. If you're interested in following the Optimist journey and the other projects I'm working on in 2023, you can follow me on Twitter.

Cheers,
Tyler

P.S. - If you're running or launching a freelance or agency business and looking for help figuring it out, please DM me. Our subscription community, Middle of the Funnel, was created to provide feedback, lessons, and resources for other entrepreneurs in this space.

I’ve professionalized the family business. Now I feel stuck
reddit
LLM Vibe Score0
Human Vibe Score1
2LobstersThis week

I’ve professionalized the family business. Now I feel stuck

I wrote the post below in my own words and then sent it to ChatGPT for refinement and clarity. So if it reads like AI, that's because it is — but it conveys the message from my own words a bit better than my original, with a few of my own lines written back in. Hope that's not an issue here.

I’m 33, married with two young kids. I have a bachelor’s from a well-regarded public university (though in an underwhelming field—economics adjacent). I used that degree to land a job at a mid-sized distribution company (~$1B annual revenue), where I rose quickly to a project management role and performed well.

In 2018, after four years there, I returned to my family's $3M/yr residential service and repair plumbing business. I saw my father withdrawing from leadership, responsibilities being handed to underqualified middle managers, and overall employee morale declining. I’d worked in the business from a young age, had all the necessary licenses, and earned a degree of respect from the team—not just as “the boss’s kid,” but as someone who had done the work. I spent my first year back in the field, knocking off the rust. From there, I started chipping away at process issues and inefficiencies, without any formal title. In 2020, I became General Manager. Since then, we’ve grown to over $5M in revenue, improved profitability, and automated many of the old pain points. The business runs much smoother and requires less day-to-day oversight from me.

That said—I’m running out of motivation. I have no equity in the business. And realistically, I won’t for a long time. The family dynamic is... complicated. There are relatives collecting large salaries despite zero involvement in the business. Profits that should fuel growth get drained, and we can’t make real accountability stick because we rely too heavily on high-producing employees—even when they underperform in every other respect.

I want to be clear—this isn’t a sob story. I know how lucky I am. The business supports my family, and for that I’m grateful. But I’ve gone from showing up every day with fresh ideas and energy to slowly becoming the guy who upholds the status quo. I’ve hit most of the goals I set for myself, but I’m stagnating—and that scares me.

The safe move is to keep riding this out. My wife also works and has strong earning potential. We’re financially secure, and with two small kids, I’m not eager to gamble that away. But I’m too young to coast for the next decade while I wait for a possible ownership shakeup.

At this point, the job isn’t mentally stimulating. One hour I’m building dynamic pricing models; the next, I’m literally dealing with whether a plumber is wiping his ass properly because I've had multiple complaints about his aroma. I enjoy the challenging, high-level work—marketing, systems, strategy—but I’m worn down by the drama, the legacy egos I can’t fire, and the petty dysfunction I’m forced to manage. I'm working on building out a middle-management layer, but there's something lost in not being as hands-on in a small business like this. I fear that by isolating myself from the bullshit, I'll also be isolating myself from some of the crucial day-to-day work that keeps us who we are. Hope that makes sense. (To be fair, most of our team is great. We have an outstanding market reputation and loyal employees—but the garbage still hits my desk when it shows up.)

I’ve toyed with starting a complementary business or launching a consulting gig for similar-sized companies outside our market.
I’ve taken some Udemy and Maven Analytics courses (digital marketing, advanced Excel/Power BI, etc.) to keep learning, but I rarely get to apply that knowledge here. So here I am. Is this burnout? A premature midlife crisis? A motivation slump? I’m not sure what I’m looking for—but if you’ve been here, or have any hard-earned advice, I’d be grateful to hear it.

Writing an exercise-based TTRPG rulebook for a system where your real-world fitness is tied to character progression
reddit
LLM Vibe Score0
Human Vibe Score1
BezboznyThis week

Writing an exercise-based TTRPG rulebook for a system where your real-world fitness is tied to character progression

My dad was a star athlete when he was young, and my mom was a huge sci-fi/fantasy nerd, so I got both ends of the stick, as it were. I love gaming and nerd culture, but I also love exercise and self-improvement. Sometimes exercise can feel boring, though, compared to daydreaming about fantastic fictional worlds, so for a long time I've been kicking around the idea of how to "gamify" fitness. Recently I've been working on this passion project of a tabletop RPG (like D&D) where the stats of your character are tied to your own fitness, so if you want your character to improve in-game, you have to improve in the real world. Below is a rough draft you can look through that details the setting and mechanics of the game I've come up with so far. I'd love to eventually get a full book published and sell it online, maybe even starting a whole brand of "gamified fitness":

REP-SET: GAINSZ

In the war-torn future of the 24th century… there are no rest days…

In the futuristic setting of "REP-SET: GAINSZ," the "War of Gains" casts a long shadow over the Sol System as the various factions vie for territory and resources. However, war has evolved. Unmanned drones and long-range strikes have faded into obsolescence. Battles, both planet-side and in the depths of space, are now fought by soldiers piloting REP-SETs (Reactive Exoskeletal Platform - Symbiotic Evolution Trainer): massive, humanoid combat mechs. Powered by mysterious “EV” energy, these mechanical marvels amplify, and are in turn amplified by, the fitness and mental acuity of their pilots. The amplification is exponential, leading pilots into a life of constant training so that their combat prowess is bolstered by every incremental gain in their level of fitness. With top pilots having lifting capacity measured in tons and reaction times measured by their Mach number, REP-SET-enhanced infantry now dominate the battlefield.

The Factions:

The Federated Isometocracy of Terra (FIT)

Quote: "The strength of the body is the strength of the spirit. Together, we will lift humanity to its destined greatness. But ask not the federation to lift for you. Ask yourself: Do you even lift for the Federation?"

Description: An idealistic but authoritarian faction founded on the principle of maximizing the potential of all individuals. FIT citizens believe in relentless striving for physical and mental perfection, leading to collective excellence. Their goal is the unification of humankind under a rule guided by this doctrine, which sometimes comes at the cost of individual liberties.

Mech Concept: REP-SET mechs. Versatile humanoid designs focusing on strength, endurance, and adaptability. By connecting to the AI spirit within their REP-SET's core, each pilot enhances the performance of their machine through personal willpower and peak physical training. Some high-rank REP-SETs include features customized to the pilot's strengths, visually signifying their dedication and discipline.

The Dominion of Organo-Mechanical Supremacy (DOMS)

Quote: "Without pain, there is no gain. Become the machine. Embrace the burn.”

Description: A fanatical collective ideologically obsessed with "ascendancy through suffering," merging their bodies with technology that not only transcends biological limitations but also acts to constantly induce pain in its users. Driven by a sense of ideological superiority and a thirst for domination, DOMS seek to bring the painful blessings of their deity, "The Lord of the Burn," to the rest of the solar system.
Their conquest could turn them into a significant threat to humanity.

Mech Concept: Hybrid mechs, where the distinction between the pilot and the machine is blurred. The cockpit functions as a life-support system for the pilot, heavily modified with augmentations. The mechs themselves are often modular, allowing for adaptation and assimilation of enemy technology. Some DOMS mechs might display disturbing elements of twisted flesh alongside cold, mechanical parts.

The Tren

Quote: "Grow... bigger... feast... protein..."

Description: A ravenous conglomeration of biochemically engineered muscular monstrosities, united only by a shared insatiable hunger for "More." Existing mostly in deep space, they seek organic matter to consume and assimilate. They progress in power not through any form of training or technology, but through a constant regimen of ravenous consumption and chemically induced muscle growth, all exponentially enhanced by EV energies. While some have been known to possess a certain level of intellect and civility, their relentless hunger makes them incredibly mentally volatile. When not consuming others, the strong consume the weak within their own faction.

Mech Concept: Bio-organic horrors. While they do have massive war machines, some are living vessels built around immense creatures. These machines resemble grotesque fleshy designs that prioritize rapid mutation and growth over sleek aesthetics. Often unsettling to behold.

Synthetic Intelligence Theocracy (SIT)

Quote: "Failure is an unacceptable data point.”

Description: A society ruled by a vast and interconnected artificial intelligence network. The SIT governs with seemingly emotionless rationality, striving for efficiency and maximum productivity. This leads to a cold but arguably prosperous society, unless you challenge the logic of the collective AI. Their goals? Difficult to predict, as they hinge on how the AI calculates what's "optimal" for the continuation or "evolution" of existence.

Mech Concept: Sleek, almost featureless robotic creations with a focus on efficient movement and energy management. Often drone-like or modular, piloted through direct mind-machine linking rather than traditional cockpits. Their aesthetic suggests cold and impersonal perfection.

The Way Isolate (TWI)

Quote: "The body unblemished, the mind unwavering. That is the path to true strength. That and a healthy diet of Aster-Pea proteins."

Description: Known by some as "the asteroid farmers," The Way Isolate is a proud and enigmatic faction that stands apart from the other powers in the Sol System: a fiercely independent tribe bound by oaths of honor, loyalty, and hard work. Wandering the asteroid belt in their vast ark ships, their unparalleled mastery of asteroidal-agricultural engineering ensures they have no need to colonize planets for food, which has allowed them to abstain from the pursuit of territorial expansion in “The War of Gains” and instead focus on inward perfection, both spiritual and physical. They eschew all technological bodily enhancements deemed unnatural, believing that true power can only be cultivated through the relentless pursuit of personal strength achieved through sheer will and bodily perfection. The Way Isolate views biohacking, genetic manipulation, and even advanced cybernetics as corruptions of the human spirit that dilute the sacredness of individual willpower.

Mech Concept: Way Isolate mechs are built with maneuverability and precision in mind rather than flashy augmentations.
Their REP-SETs are streamlined, favoring lean designs that mirror the athleticism of their pilots. Excelling in low- to zero-G environments, their mechs lack bulky armor, relying on evasion and maneuverability rather than brute-force endurance. Weaponry leans toward traditional kinetic armaments, perhaps employing archaic but reliable weapon styles such as blades or axes as symbols of their purity of purpose. These mechs reflect the individual prowess of their pilots, where victory is determined by focus, technique, and the raw power of honed physical ability.

Base Player Character Example:

You are a young, idealistic FIT soldier, barely out of training and working as a junior REP-SET mechanic on the Europa Ring World. The Miazaki district, a landscape of towering mountains and gleaming cities, houses a sprawling mountainside factory – a veritable hive of Gen 5 REP-SET construction. Here, the lines between military and civilian blur within a self-sufficient society dependent on this relentless industry. Beneath the surface, you harbor a secret. In a forgotten workshop, the ghost of a REP-SET takes shape – a unique machine built around an abandoned, enigmatic AI core. Ever since you salvaged it as a child from the wreckage of your hometown, scarred by a brutal Tren attack, you've dedicated yourself to its restoration. A lingering injury from that fateful battle mocks your progress, a constant reminder of the fitness exams you cannot pass. Yet you train relentlessly, dreaming of the day you'll stand as a true REP-SET pilot. A hidden truth lies at the heart of the REP-SETs: as a pilot's abilities grow, their mech develops unique, almost mystical powers – a manifestation of the bond between the human spirit and the REP-SET's AI. The ache in your old wound serves as a grim prophecy. This cold war cannot last. The drums of battle grow louder with each passing day.

GAME MECHANICS:

The TTRPG "REP-SET: GAINSZ" is built on a unique set of rules by which the player's real-world capabilities and fitness reflect and affect the capabilities, progression, and success of their REP-SET pilot character in-game.

ABILITY SCORES:

Pilots' capabilities are defined by 6 "ability scores": Grace, Agility, Iron, Nourishment, Strength, and Zen. Each of the 6 ability scores represents both a specific area of exercise/athleticism and a specific brand of healthy habits. The definitions of these ability scores are as follows:

Grace (GRC): "You are an artist, and your body is your canvas; the way you move is your paint and brush." This ability score, the domain of dancers and martial artists, represents a person's ability to move with organic, flowing control and to bring beauty to the world. Skill challenges may be called upon when the player character needs to act with poise and control, whether socially or physically. Real-world skill checks may involve martial arts drills, dancing to music, or balance exercises. Bonuses may be granted if the player has recently done something artistically creative or kind, and penalties may apply if they have recently lost their temper. This ability score affects how much NPCs like your character in-game.

Agility (AGI): "Your true potential is locked away, and speed is the key to unlocking it." The domain of sprinters, this ability score represents not only a person's absolute speed and reaction time but also their capacity to finish work early and avoid procrastination.
Skill challenges may be called upon when the player character needs to make a split-second choice, move fast, or deftly dodge something dangerous. Real-world skill checks may involve acts of speed such as sprinting or punching/kicking at a steadily increasing tempo. Bonuses may apply if the player has finished work early, and penalties may apply if they are procrastinating. This ability score affects moving speed and turn order in-game.

Iron (IRN): "Not money, nor genetics, nor the world's greatest trainers... it is your resolve, your will to better yourself, that will make you great." Required by all athletes regardless of focus, this ability score represents a player's willpower and their capacity to push through pain, distraction, or anything else to achieve their goals. Skill challenges may be called upon when the player character needs to push through fear, doubt, or mental manipulation. Real-world skill checks may involve feats of athletic perseverance, such as planking or dead hangs from a pull-up bar. Bonuses may apply when the player maintains or creates scheduled daily routines of exercise, self-improvement, and work completion, and penalties may apply when they falter in those routines. This ability score affects the max "dynamic exercise bonus" that can be applied to skill checks in-game (a base max of +3 when Iron = 10, with an additional +1 for every 2 points of Iron above that; so if every 20 pushups gives you +1 on a "Strength" skill check, then doing 80 pushups will only give you +4 if you have at least 12 Iron).

Nourishment (NRS): "A properly nourished body will last longer than a famished one." This ability score, focused on by long-distance runners, represents a player's endurance and level of nutrition. Skill challenges may be called upon when making checks that involve the player character's stamina or health. Real-world skill checks may involve endurance exercises like long-distance running. Bonuses may apply if the player has eaten healthily or consumed enough water, and penalties may apply if they have eaten junk food. This ability score affects your HP (health points), which determine how much damage you can take before you are incapacitated.

Strength (STR): "When I get down on my hands, I'm not doing pushups, I'm bench-pressing the planet." The domain of powerlifters and strongmen, this ability score represents raw physical might and the ability to overcome obstacles. Skill challenges may be called upon when the player character needs to lift, push, or break something. Real-world skill checks might involve weightlifting exercises, feats of grip strength, or core stability tests. Bonuses may apply for consuming protein-rich foods or getting a good night's sleep, and penalties may apply after staying up late or indulging in excessive stimulants. This ability score affects your carrying capacity and base attack damage in-game.

Zen (ZEN): "Clarity of mind reflects clarity of purpose. Still the waters within to act decisively without." This ability score, prized by meditators and yogis, represents mental focus, clarity, and inner peace. Skill challenges may be called upon when the player character needs to resist distractions, see through illusions, or make difficult decisions under pressure. Real-world skill checks may involve meditation, breathing exercises, or mindfulness activities. Bonuses may apply after attending a yoga class, spending time in nature, or creating a calm and organized living space.
Penalties may apply after experiencing significant stress, emotional turmoil, or having an unclean or unorganized living space. This ability score affects your amount of ZP (Zen Points) in-game: the pool of energy you draw from to use mystical abilities.

Determining initial player ability scores:

Ability scores are decided during character creation by giving the player a list of 6 fitness tests to gauge their level of fitness in each category. Running each test result through a specific calculation outputs an ability score. A score of 10 represents the average person; a score of 20 represents a peak athlete in that category. The tests are:

Grace: Timed balancing on one leg with eyes closed (10 seconds is average, 60 is peak)

Agility: Mile run time in minutes and seconds (10:00 is average, 3:47 is peak)

Iron: Timed dead hang from a pull-up bar (30 seconds is average, 160 is peak)

Nourishment: Miles run in an hour (4 is average, 12 is peak)

Strength: Pushups in 2 minutes (34 is average, 100 is peak)

Zen: Leg stretch in degrees (80 is average, and 180, aka "the splits," is peak)

Initial Score Calculation Formula:

Ability Score = 10 + (Player Test Score - Average Score) / (Peak Score - Average Score) * 10

Example: if the player does 58 pushups in 2 minutes, their Strength would be 10 + (58 - 34) / (100 - 34) * 10 = 10 + (24 / 66) * 10 = 10 + 3.64 = 13.64, rounded to the nearest whole number: Strength (STR) 14.

SKILLS AND SKILL CHALLENGES:

The core mechanic of the game is how skill challenges are resolved. Every skill challenge has a numerical challenge rating (CR) that must be met or beaten by the sum of a 10-sided die roll and your score in the relevant skill. Skill scores are determined by 2 factors:

Ability Score Bonus: Every 2 points above 10 gives +1 bonus point (e.g., 12 = +1, 14 = +2, etc.). This also means that if you have less than 10 in an ability score, you will get negative points.

Personal Best Bonus: Each skill has its own unique associated exercise that can be measured (time, speed, distance, number of reps, etc.). A higher record means a higher bonus. EX: Authority skill checks are associated with a timed "lateral raise hold." Every 30 seconds added onto your personal best single attempt offers a +1 bonus. So if you can do a lateral hold for 90 seconds, that's a +3 to your Authority check! So if you have 16 Iron, and your personal best lateral raise hold is 90 seconds, that would give you an Authority score of +6. (T-pose for dominance!)

On top of those two factors, a third modifier can be added during play:

Dynamic Exercise Bonus: This is where the unique mechanics of the game kick in. At any time during a skill challenge (even after your roll) you can add an additional modifier to the skill check by completing the exercise during gameplay! Did you roll just below the threshold for success? Crank out another 20 pushups, squats, or curls to push yourself just over the edge into success!
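To make the character-creation math concrete, here is a minimal sketch in Python of the formula above. This is my own illustration, not text from the draft rulebook: the function name and benchmark table are assumptions, and only the numbers come from the six tests listed.

```python
# Sketch: convert real-world fitness test results into REP-SET ability scores
# using: score = 10 + (test - average) / (peak - average) * 10

def ability_score(test_result: float, average: float, peak: float) -> int:
    """Map a fitness test result onto the scale where 10 = average, 20 = peak."""
    return round(10 + (test_result - average) / (peak - average) * 10)

# (average, peak) benchmarks per ability, as listed in the draft.
BENCHMARKS = {
    "GRC": (10, 60),    # one-leg blind balance, seconds
    "AGI": (600, 227),  # mile time in seconds (10:00 average, 3:47 peak)
    "IRN": (30, 160),   # pull-up bar dead hang, seconds
    "NRS": (4, 12),     # miles run in one hour
    "STR": (34, 100),   # pushups in 2 minutes
    "ZEN": (80, 180),   # leg stretch, degrees
}

# The worked example from the text: 58 pushups in 2 minutes.
avg, peak = BENCHMARKS["STR"]
print(ability_score(58, avg, peak))  # -> 14
```

Note that the same formula handles "lower is better" tests like the mile run: since the peak time (227 seconds) is below the average (600 seconds), the denominator is negative and faster times still push the score toward 20.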
There are 19 skills in total, each with its own associated ability score and unique exercise:

Grace (GRC):
- Kinesthesia (Timed: blind single-leg stand)
- Precision (Scored: basket throws)
- Charm (Timed reps: standing repeated forward dumbbell chest press and thrust)
- Stealth (Timed distance: leopard crawl)

Agility (AGI):
- Acrobatics (Timed reps: high kicks)
- Computers (Words per minute: typing test)
- Speed (Time: 100-meter sprint)

Iron (IRN):
- Authority (Timed: lateral raise hold)
- Resist (Timed: plank)
- Persist (Timed: pull-up bar dead hang)

Nourishment (NRS):
- Recovery (TBD)
- Stim crafting (TBD)
- Survival (TBD)

Strength (STR):
- Mechanics (Timed reps: alternating curls)
- Might (Timed reps: pushups)

Zen (ZEN):
- Perceive (TBD)
- Empathy (TBD)
- Harmony (TBD)
- Lore (TBD)

Healthy Habits Bonus: Demonstrating that you have practiced healthy habits during gameplay can also add one-time bonuses per skill challenge: "Drank a glass of water, +1 to a Nourishment check"; "Cleaned your room, +3 on a Zen check." But watch out: if you're caught in unhealthy habits, the GM can throw in penalties ("Ate junk food, -1 to a Nourishment check," etc.). Bonuses and penalties also come from in-game items, equipment, buffs, debuffs, and so on, helping players immerse themselves in the mechanics of the world of REP-SET and enjoy the thrill of constantly finding ways to improve their character.

Gradient success: The result of a skill challenge can be pass or fail, but it can also sit on a sliding scale of success. Are you racing to the battlefield? Depending on your Speed check, you might arrive early and have a tactical advantage, arrive just in time for an even fight, or arrive far too late, after some of your favorite allied NPCs have paid the price… So you're often encouraged to stack on those dynamic exercise bonuses when you can to get the most fortuitous outcomes available to you.

Gameplay sample:

GM: Your REP-SET is a phantom, a streak of light against the vast hull of the warship. Enemy fighters buzz angrily, but you weave and dodge with uncanny precision. The energy wave might be losing effectiveness, but your agility and connection to the machine have never been stronger. Then, it happens. A gap in the defenses. A vulnerable seam in the warship's armor. Your coms agent's keen eye spots it instantly. "Lower power junction, starboard side! You have an opening!" This is your chance to strike the decisive blow. But how? It'll take a perfect combination of skill and strategy, drawing upon your various strengths. Here are your options:

Option 1: Brute Strength. Channel all remaining power into a single, overwhelming blast from the core. High-risk, high-reward. It could overload the REP-SET if you fail, but it might also cripple the warship. (Strength-focused, Might sub-skill)

Option 2: Calculated Strike. With surgical precision, target the power junction with a pinpoint burst of destabilizing energy. Less flashy and ultimately less damaging, but potentially more effective in temporarily disabling the ship. (Agility-focused, Precision sub-skill)

Option 3: Harmonic Disruption. Attempt to harmonize with your REP-SET's AI spirit for help in connecting to the digital systems of the warship. Can you generate an internal energy resonance within the warship, causing it to malfunction from within? (Zen-focused, Harmony sub-skill)

Player: I'll take option 1, brute strength!

GM: OK, this will be a "Might" check. The CR is going to be very high on this one. I'm setting it at 20. What's your Might bonus?

Player: Dang, a 20?? That's literally impossible.
My Might is 15, and I've got a PB of 65 pushups in 2 minutes. That sets me at a +5. Even if I roll a 10 and do 60 pushups for the DE, I'll only get 18 max.

GM: Hey, I told you it was high risk. You want to choose another option?

Player: No, no. This is what my character would do. I'm a real hot-blooded meathead for sure.

GM: OK then, roll a d10 and add your bonus.

Player: (rolls) A 9! Not bad. Actually, that's a really good roll. So +5, that's a 14.

GM: Alright, would you like to add a dynamic exercise bonus?

Player: Duh. It's not like I can do the 120 pushups I'd need to beat the CR, but I can at least do better than 14. Alright, here goes. (The player gets down to do pushups and the 2-minute timer begins. After some time...)

Player: 65....... 66!

GM: Time's up.

Player: Ow... my arms...

GM: So with 66, that's an extra +3, and it's a new PB, so that's a +1. That sets your roll to 18.

Player: Ow... Frack... still not 20... For a second there I really believed I could do 120 pushups... Well, I did my best... Ow... A CR of 20 is just too impossible, you jerk...

GM: Hmm... Tell me, what did you eat for lunch today?

Player: Me? I made some vegetable and pork soup, and a protein shake. I recorded it all in my diet app.

GM: And how did you sleep last night?

Player: Like a baby. Went to sleep early, woke up at 6.

GM: In that case, you can add a +1 "protein bonus" and a +1 "healthy rest" bonus to any strength-related check for the day if you'd like, including this one.

Player: Really?? Heck yes! Add it to the roll!

GM: With those extra bonuses, your roll reaches 20. How do you want to do this?

Player: I roar "For Terra!" and pour every last ounce of my strength into the REP-SET.

GM: "For Terra!" you roar, your cry echoing through the coms systems of the REP-SET. The core flares blindingly bright. The surge of power dwarfs anything the REP-SET has unleashed before. With a titanic shriek that cracks the very fabric of space, the REP-SET slams into the vulnerable power junction. Raw energy explodes outwards, tendrils of light arcing across the warship's massive hull. The impact is staggering. The leviathan-like warship buckles, its sleek form rippling with shockwaves. Sparks shower like rain; secondary explosions erupt as critical systems overload. Then… silence. The warship goes dark. Power flickers within the REP-SET itself, then steadies. Alarms fade, replaced by the eerie quiet of damaged but functional systems. "We… did it?" The coms agent's voice is incredulous, tinged with relief. She's awaiting your reply.

Player: "I guess so," I say, and I smile and laugh. And then I slump back... and fall unconscious. (To the other players) I'm not doing any more skill checks for a while, guys. Come pick me up please. (Teammates cheer)
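For readers who want to see the whole resolution pipeline in one place, here is a hedged Python sketch of a skill challenge like the Might check above. This is my own reading of the draft, not official material: the function names are invented, the 20-pushups-per-point rate and the Iron 12 value are assumptions inferred from the sample, and the +1 new-PB adjustment mirrors the GM's ruling rather than a formal rule.

```python
import random

def ability_bonus(score: int) -> int:
    """+1 per 2 points above 10, negative below 10 (e.g., 15 -> +2)."""
    return (score - 10) // 2

def dynamic_exercise_cap(iron: int) -> int:
    """Max dynamic exercise bonus: +3 at Iron 10, +1 more per 2 Iron above that."""
    return 3 + max(0, (iron - 10) // 2)

def resolve_skill_challenge(cr, ability, pb_bonus, de_bonus=0, habit_bonus=0, roll=None):
    """Return (total, success) for a d10 skill check against a challenge rating."""
    if roll is None:
        roll = random.randint(1, 10)  # the 10-sided die
    total = roll + ability_bonus(ability) + pb_bonus + de_bonus + habit_bonus
    return total, total >= cr

# Replaying the Might check: CR 20, Strength 15 (+2), a personal best worth +3,
# and a roll of 9. The player then does 66 pushups at the table: +1 per 20 reps
# (assumed rate), capped by Iron (assumed 12 here), plus +1 for the new PB.
de = min(66 // 20, dynamic_exercise_cap(iron=12)) + 1
total, success = resolve_skill_challenge(
    cr=20, ability=15, pb_bonus=3, de_bonus=de,
    habit_bonus=2,  # +1 "protein bonus" and +1 "healthy rest"
    roll=9)
print(total, success)  # -> 20 True
```

A GM could extend this toward the gradient-success idea by returning the margin (total - cr) instead of a boolean and mapping margins to outcome tiers.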

How to get that big idea for your next business? Use trends!
reddit
LLM Vibe Score0
Human Vibe Score1
IRemember123This week

How to get that big idea for your next business? Use trends!

Hello entrepreneurs and aspiring business owners, I am Mikael and I want to share a post about how to spot business ideas. If you're wondering who the owl is, it's Agent O, my sidekick (please bear with him... or me, if you can). Let's get to it. So, there are basically two ways of getting ideas for your new business:

Find a service, product or experience that's already working.

Identify and ride a trend.

🦉: Third, have a rich relative pass you their business and sip margaritas by the sea while scrolling Reddit for the rest of your life!

🕵️: Refrain yourself, I just got started ffs, I don't want to get banned!

So, what are trends? Trends are patterns of adoption of a product, service or experience by people who want to satisfy a common need. Cool, huh?

How trends start

Trends emerge and evolve as temporary or permanent solutions to human needs. All products, services and experiences are the expression of human needs manifested through a perceived lack, which we humans interpret as problems. Let me make this more clear. Humans have needs: from basic (food, shelter, safety) to advanced (community, knowledge) to evolved (self-actualization, spirituality) and everything in between. Don’t see this as a hierarchy, as it’s usually depicted with Maslow’s pyramid. See it as cycles with different degrees of impact on humans that vary in time and intensity.

🦉: WHAT!??

🕵️: Hear me out…

How Trends Affect Society

Human needs are physical, emotional, intellectual and spiritual. Every day we feel the impact of those needs with different degrees of required fulfillment. You can’t go without air for more than a few minutes. You can’t live without food and water for more than a few days. So, when it comes to the needs of the body, these have a shorter timeframe in which they need to be addressed.

🦉: Ahh, I see what you did there…

🕵️: Thanks!

But you can also live with an unfulfilled need for love or friends for a long time. You can live with decaying health as well. And you can also live your entire life without finding out if there is a God or not. Humans perceive needs as something they lack within, which in turn is expressed as a problem on the outside. I lack food or water; this will create a problem for my survival. So I need to find food and water in my environment. This lack creates a behavior seeking a product, service or experience to fulfill that need. Makes sense?

🦉: I just went out and got me a “Mice à la Forest” dinner!

🕵️: Bon appétit!

See, Agent O fulfilled a bodily need. That’s what animals do, as they’re driven by instinct and are governed by natural laws (survive, reproduce, sleep, repeat). Humans are driven by more complex needs, as our intellect and emotions allow us to override those basic primary instincts.

Why Trends Are Important

What an entrepreneur does is shift the perspective: instead of seeing a lack, he or she sees an opportunity by asking the question: how can I fulfill this need? Or, even better put: how can I help people by solving their problem? That’s the first step to solving a problem: asking a question. That is why the best products are actually problems solved by entrepreneurs who work to solve their own need for a product, service or experience. They then provide it to other people for a cost. Easy, right? That’s what entrepreneurship is: solving a problem. The bigger the problem, the bigger the impact. The bigger the impact, the higher the revenue. It’s easier to understand trends now, isn’t it?
You can see that trends are nothing more than the initial adoption of a product, service or experience by a group of people who are looking for a solution to their common need.

🦉: Did you get that from a book?

🕵️: You snore when you sleep… ¯\_(ツ)_/¯

🦉: $@#&*! Hooman!

Needs are the foundation on which the modern world is built. Once you understand needs, you fundamentally change your perception of problems into opportunities. This mental shift is the entrepreneurial mindset: where others see problems, you see solutions.

Where Do Trends Start

So, to recap: human needs are translated into problems. Founders understand the root of the problem (the need) and create products, services and experiences as solutions to those needs. They offer the solution to the public through startups and companies, which belong to a specific niche in a particular industry.

🦉: Aaah, so that’s why it’s called venture capital?

🕵️: Yeah, because you’re venturing into a new endeavor to let people know about your solution to their (and ideally your) problem.

🦉: So if you use ads to market your venture, it’s an adventure?

🕵️: I see what you did there…

If the need behind the adoption is strong and real enough, that trend will translate into a niche within an industry. If the adoption isn’t driven by strong fundamental needs, it will turn into a fad and disappear from the perception of the public, no matter how much marketing money is thrown at it. This happens because the solution (product/service/experience) to the need didn’t create the physical, intellectual or emotional response required to create a recurring behavior around it.

Remember this: Problem (why) -> Behavior (how) -> Solution (what)

Understand this: there are multiple types of trends. There are product or service trends. There are industry-driven trends. There are tendency-driven trends, like the emergence of a new paradigm that improves a lot of industries (yes, I’m looking at you, AI).

Where Do Trends Come From

So now you can see that trends are patterns of adoption related to a specific human need that is addressed through one or multiple products or services. This is a bottom-up direction, coming from evolution. Multiple trends in different industries can also emerge from a theme, which is a bigger vision of a human effort to address a high-level problem. This is a top-down direction, coming from implementation (by governments, organizations or other interested parties with the power to influence change at a mass level).

Conclusion

Now you have a better understanding of trends by looking at them through the lens of human needs. You might also understand time better, because you realize that human needs have different degrees of impact in time and intensity. So you now see that trends don’t only relate to individuals, but also to groups of people, from the smallest community to countries and even global needs. That is the reason you’ll sometimes hear people say that time is a flat circle: because clothes change, but humans stay much the same. Needs don’t change a lot over time, just the way we address and solve them. Here’s an interesting game for you: take a look at some behaviors in your life. Which of them are driven by a bodily need, and which by an intellectual or emotional one? Which ones are completely automated, such that you had no idea you were doing them? How are these behaviors controlling parts of your life that you were unaware of until now? If you made it this far, thank you for taking the time to read this.
I hope you enjoyed it and found it useful and entertaining. Of course, I value your opinion and welcome it in the comments. Thank you!

I single-handedly built the world’s best AI investing platform. Here’s NexusTrade’s 2024 year in review
reddit
LLM Vibe Score0
Human Vibe Score1
No-Definition-2886This week

I single-handedly built the world’s best AI investing platform. Here’s NexusTrade’s 2024 year in review

I copy-pasted the content of this article to save you a click! I’ve been developing an AI investing platform for 4 years, and I’m blown away by all of the new features I’ve gotten done! Here’s my project’s 2024 year in review.

When someone asks me what the best way to learn how to trade and invest is, I have an unbiased answer – NexusTrade.io. I started NexusTrade to empower everybody, including beginners and non-technical investors, to learn how to make smarter investing decisions. NexusTrade is the best way for a new investor to learn algorithmic trading and financial research, and I’m not the only person to think so. This year alone, user growth has skyrocketed from 1,703 to 14,319 users. This is driven by new features, better research tools, and the launch of algorithmic trading. Here’s NexusTrade’s 2024 year in review, a semi-complete list of the features I’ve launched.

Summarizing this year in review

TL;DR: I implemented a variety of new features to enhance NexusTrade’s algorithmic trading and financial research capabilities. This includes:

Cryptocurrency support

Enhanced financial research, like the AI-Powered Stock Screener

Unique watchlists and daily market summaries

Live trading with Alpaca

Next year, I plan to implement features to make NexusTrade more tailored to each user’s experience and to launch several unique features, including copy trading and fully automated algorithmic trading.

Feature-by-feature: What have I done so far in 2024?

Algorithmic Cryptocurrency Trading

Picture: Algorithmic Cryptocurrency Trading

I kicked off the year by adding cryptocurrency support to NexusTrade. Users can now research, design, and implement automated strategies for popular cryptocurrencies, such as Bitcoin, Dogecoin, and Ethereum.

AI-Powered Stock Screener and research capabilities

Picture: AI-Powered Stock Screener

In tandem with cryptocurrency support, I made a huge update to Aurora, the AI assistant in NexusTrade, by implementing a natural-language stock screener. This screener makes it easy to find fundamentally strong stocks. Throughout the year, I’ve made several enhancements to it, making the screener faster, more accurate, and more capable over time.

Using fundamental indicators within trading strategies

Picture: Using fundamental indicators

Doing financial research on companies isn’t enough; we also need a way to integrate that research into trading strategies. Thus, I expanded the NexusTrade indicators and made it possible to create strategies using metrics like revenue, net income, free cash flow, and P/E ratio.

Stock watchlists with tailored, automated daily emails

Picture: Stock watchlists

In addition, I didn’t want the research you may have done for a stock (or list of stocks) to be forgotten. So I created the most useful watchlist page of any investing platform. This watchlist makes it easy to keep track of your favorite stocks, track them over time, and even receive curated daily emails about them.

Enhanced user profile page, Google sign-ins, and two-factor authentication

Picture: Enhanced user profile

Keeping with the theme of adding new pages to NexusTrade, many pages, such as the profile page, got a huge revamp. The new profile page is cleaner, easier to use, and lets you secure your account more effectively, for example with two-factor authentication.
GPT-Reports: an AI-generated analysis of every stock in the market

Picture: GPT-Reports

I created GPT-Stock Reports, an AI-generated analysis of every stock in the market. Each report was generated by taking a company’s earnings data and asking GPT to analyze the stock and give it a rating.

Manual and semi-automated algorithmic trading with Alpaca

Picture: Manual and semi-automated trading

Finally, I fully launched the Alpaca integration and enabled users to execute real trades directly in the NexusTrade app! This integration has transformed NexusTrade from a financial research app into a real algorithmic trading platform for retail investors.

Concluding Thoughts

When I say that NexusTrade is the best platform for traders and investors to make more money in the stock market, you may naively think that I’m biased. I created the app, and rose-tinted glasses are bound to make every red flag look like a regular flag, right? Wrong. NexusTrade is objectively a completely new way for investors to approach financial markets. The fact that the app is so expansive is nothing short of miraculous.