
Responsible AI Hub

Explore resources related to the Responsible AI Hub to help implement AI solutions for your business.

responsible-ai-hub
github
LLM Vibe Score: 0.328
Human Vibe Score: 0.04251968503937008
Thebbie-A, Dec 21, 2023

responsible-ai-hub

Responsible AI Hub

Welcome to the Responsible AI Hub, for developers with all levels of expertise in AI and Machine Learning. This is a dedicated space to help the community discover relevant training resources and events to learn about Responsible AI.

View Hub Website

You can visit the hosted Responsible AI Hub site to learn about upcoming training events, or to explore self-guided workshops to skill up on topics like:

- The Responsible AI Dashboard
- Azure Content Safety
- Azure Prompt Flow

Build & Preview Site

Want to contribute content? Start by making sure you can build and preview the site in a relevant development environment. The project is instrumented with a dev container, making it easy to launch using either GitHub Codespaces (in the cloud) or Docker Desktop (on your local device). The project is built using the Docusaurus 3 static site generator.

Once the container is running, use the build and preview commands documented in the repository. The build output prints a local preview URL; open it in your browser to see the site in preview mode. As you make changes to the content, the site preview will automatically refresh to show those updates. To learn more about how the website is configured and structured, see the Docusaurus documentation.

Provide Feedback

Have comments or questions? Post an Issue to let us know how we can improve the content to better support you on your learning journey.

TODO 🚧

- Update SUPPORT.MD as required
- Review security processes in SECURITY.MD

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
qazmkopp, This week

MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo of rapid deployment of HuggingFace pre-trained models.

As deep learning makes breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data can be perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: pre-training on massive data lets the models capture internal data patterns that help many downstream tasks. With industry and academia investing more and more effort in pre-training research, model hubs such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of large pre-trained models at an unprecedented pace.

In recent years, the data that machines model and understand has gradually evolved from a single modality to multiple modalities, and the semantic gap between modalities is being closed, making cross-modal retrieval possible. Take CLIP, OpenAI's open-source work, as an example: it pre-trains twin-tower image and text encoders on a dataset of 400 million image-text pairs, connecting the semantics of pictures and text, and many researchers have since built multimodal applications such as image generation and retrieval on top of this technique. Although this frontier technology bridges the semantic gap between modalities, putting it into production still involves heavy and complicated model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm deployment, which keeps cutting-edge multimodal retrieval out of reach for most teams.

DMetaSoul targets these pain points by abstracting and unifying steps such as model training and optimization, online inference, and algorithm experimentation into a set of solutions that can quickly bring an offline pre-trained model online. This post introduces how to use HuggingFace community pre-trained models for online inference and algorithm experiments on the MetaSpore technology ecosystem, so that the benefits of pre-trained models can be fully released to specific businesses and industries, including small and medium-sized enterprises. Two multimodal retrieval demos are given for reference: text-to-text search and text-to-image search.

Multimodal semantic retrieval

The sample architecture of multimodal retrieval is shown below. Our multimodal retrieval system supports both text-to-text and text-to-image search scenarios and consists of offline processing, model inference, online services, and other core modules:

https://preview.redd.it/mdyyv1qmdz291.png?width=1834&format=png&auto=webp&s=e9e10710794c78c64cc05adb75db385aa53aba40

Offline processing: the offline data pipelines for the text-to-text and text-to-image scenarios, including model tuning, model export, index database construction, data push, etc.
Model inference: after the offline model training, we deploy our NLP and CV models on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments.

Online services: based on MetaSpore's online algorithm application framework, MetaSpore provides a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore supports both text-to-text and text-to-image search out of the box and can be migrated to other application scenarios at a low cost.

The HuggingFace open source community has provided several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for real-world optimization. MetaSpore also uses HuggingFace community pre-trained models in its online text-to-text and text-to-image search services: text-to-text search is based on a semantic similarity model for the question-and-answer domain optimized by MetaSpore, and text-to-image search is based on a community pre-trained model. These community open source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and the online retrieval algorithm services in detail. The model inference part is a standardized SaaS-style service with low coupling to the business logic; interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform.

1.1 Offline Processing

Offline processing mainly involves exporting and loading the online models, and building and pushing the index of the document library. You can follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how an offline pre-trained model ends up serving inference in MetaSpore.

1.1.1 Text-to-text search

Traditional text retrieval systems are based on literal matching algorithms such as BM25. Because users' query words are so diverse, a semantic gap between query words and documents is often encountered: for example, users misspell "iPhone" as "Phone," or search terms are extremely long, such as "1 ~ 3 months old baby autumn small size bag pants". Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve it. Only when the retrieval system truly understands users' queries and documents can it meet users' retrieval demands at the semantic level. With the continuous progress of pre-training and representation learning, some commercial search engines have been integrating semantic vector retrieval, built on representation learning, into retrieval stacks that were traditionally based on literal (symbolic) matching.

Semantic retrieval model

This post introduces a semantic vector retrieval application: MetaSpore built a semantic retrieval system on encyclopedia question-and-answer data. MetaSpore adopts a Sentence-BERT model as the semantic vector representation model, which fine-tunes the twin-tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks. A minimal sketch of how such a twin-tower model scores query-document similarity is shown below.
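To make the twin-tower idea concrete, here is a minimal sketch using the open-source sentence-transformers library. The checkpoint name is an illustrative stand-in, not the Sbert-Chinese-QMC-domain-V1 model that the MetaSpore demo actually uses:

```python
# Illustrative sketch of twin-tower semantic matching with sentence-transformers.
# The checkpoint below is a generic multilingual model used only as a stand-in.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "How do I renew my ID card?"
documents = [
    "Procedure for renewing an expired identity card",
    "How to apply for a new passport",
    "Best recipes for autumn soups",
]

# Both towers share the same encoder: queries and documents are mapped into
# the same vector space, so cosine similarity reflects semantic match.
query_vec = model.encode(query, convert_to_tensor=True)
doc_vecs = model.encode(documents, convert_to_tensor=True)

scores = util.cos_sim(query_vec, doc_vecs)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```

The key property, mirrored in the production setup, is that the same encoder produces both the offline document vectors and the online query vectors.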
The model structure is as follows: a symmetric Query-Doc two-tower model is used for text-to-text search and question-and-answer retrieval. The online Query side and the offline Doc side share the same vector representation model, so the model used to build the offline document library must be kept consistent with the model used for online query inference. This case uses MetaSpore's text representation model Sbert-Chinese-QMC-domain-V1, fine-tuned on an open-source semantic similarity data set. The model encodes the question-and-answer data as vectors during offline database construction and encodes the user query as a vector during online retrieval; because query and documents live in the same semantic space, the user's semantic retrieval demand can be served by a vector similarity calculation.

Since the text representation model encodes the Query online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. In the export script, PyTorch tracing is used to export the model; the artifacts are written to the "./export" directory and mainly consist of the ONNX model used for online inference, the Tokenizer, and related configuration files. These artifacts are loaded into MetaSpore Serving by the online serving system described below. Since the exported model will be copied to cloud storage, you also need to configure the related variables in env.sh. A generic sketch of the tracing-based ONNX export step follows below.

Build the library for text-to-text search

The retrieval database is built on a million-scale encyclopedia question-and-answer data set. Following the description document, you need to download the data and complete the database construction. The question-and-answer data is encoded into vectors by the offline model, and the resulting index data is pushed to the service components. The whole database construction process is:

- Preprocessing: convert the original data into a more general JSONLine format for database construction;
- Build index: use the same model as online ("sbert-Chinese-qmc-domain-v1") to index the documents (one document object per line);
- Push: send the inverted (vector) and forward (document field) data to each component server.

An example of the database data format is given in the accompanying documentation. After offline database construction is completed, the data is pushed to the corresponding service components, such as Milvus, which stores the vector representations of documents, and MongoDB, which stores the document summary information. The online retrieval algorithm services will read from these components.
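The export step described above boils down to tracing the text encoder and writing an ONNX graph plus the tokenizer files. Here is a minimal, generic sketch of such an export with PyTorch and HuggingFace transformers; the checkpoint name and output paths are placeholders, not the exact script shipped in the MetaSpore repo:

```python
# Generic sketch of exporting a HuggingFace text encoder to ONNX via tracing.
# The model name and output paths are illustrative placeholders.
import os

import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-chinese"  # placeholder for the actual Sentence-BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
# torchscript=True makes the model return plain tuples, which traces cleanly.
model = AutoModel.from_pretrained(model_name, torchscript=True)
model.eval()

os.makedirs("./export", exist_ok=True)

# Dummy inputs drive the tracing; dynamic axes keep batch size and sequence
# length flexible at inference time.
dummy = tokenizer("身份证如何续期", return_tensors="pt")

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "./export/text_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=14,
)

# Save the tokenizer alongside the ONNX graph so the online preprocessing
# service reproduces exactly the same tokenization used offline.
tokenizer.save_pretrained("./export/tokenizer")
```

The ONNX graph goes to MetaSpore Serving, while the tokenizer and configuration files feed the preprocessing service described later.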
1.1.2 Text-to-image search

Text and images are easy for humans to relate semantically but difficult for machines. From the perspective of data form, text is discrete, one-dimensional data made of word and token IDs, while images are continuous two-dimensional or three-dimensional data. Moreover, text is a subjective human creation with a rich expressive range, including twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much harder than text-to-text search.

Traditional text-to-image retrieval generally relies on external textual descriptions of the images, retrieving images through their associated text with nearest neighbor search, which in essence reduces the problem to text search. It then faces issues such as how to obtain the associated text for pictures and whether the accuracy of the underlying text search is good enough. In recent years, deep models have gradually evolved from single-modality to multi-modality. Taking OpenAI's open-source project CLIP as an example, the model is trained on massive image and text data from the Internet and maps text and image data into the same semantic space, making semantic-vector-based text-to-image search possible.

CLIP graphic model

The text-to-image search introduced in this post is implemented with semantic vector retrieval, using the CLIP pre-trained model as the two-tower retrieval architecture. Because CLIP has already aligned the semantics of its text-side and image-side towers on massive image-text data, it is particularly suitable for the text-to-image scenario. Since the image and text data forms differ, an asymmetric Query-Doc twin-tower setup is used: the image-side model builds the offline database, and the text-side model serves online queries. At online retrieval time, the text-side model encodes the Query and the result is searched against the database built with the image-side model; the CLIP pre-training on large amounts of visual data draws matching image-text pairs closer in vector space and thus guarantees the semantic correlation between images and texts.

Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scenario is in Chinese, a CLIP model that supports Chinese understanding is selected. The exported content includes the ONNX model used for online inference and the Tokenizer, similar to the text-to-text case, and MetaSpore Serving loads these exported artifacts for model inference.

Build the library for image search

You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole database construction process is:

- Preprocessing: specify the image directory, then generate a more general JSONLine file for library construction;
- Build index: use the openai/clip-vit-base-patch32 pre-trained model to index the gallery, outputting one document object per line of index data;
- Push: send the inverted (vector) and forward (document field) data to each component server.

As with text-to-text search, after offline database construction the data is pushed to the service components, which the online retrieval algorithm services call to obtain relevant data. A minimal sketch of how CLIP's two towers encode images and a text query into the same space is shown below.
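As a concrete illustration of the asymmetric two-tower setup, here is a minimal sketch using the HuggingFace transformers CLIP implementation: the image tower encodes gallery images offline, the text tower encodes the query online, and cosine similarity ranks the gallery. The checkpoint is the generic English CLIP model named in the index-build step; the online demo swaps in a Chinese-capable variant, and the image file names are placeholders:

```python
# Minimal sketch of CLIP text-to-image matching with HuggingFace transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Offline: the image tower encodes the gallery into vectors (pushed to Milvus
# in the real pipeline). File names below are placeholders.
images = [Image.open(p) for p in ["cat1.jpg", "cat2.jpg", "bed.jpg"]]
image_inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    image_vecs = model.get_image_features(**image_inputs)
image_vecs = image_vecs / image_vecs.norm(dim=-1, keepdim=True)

# Online: the text tower encodes the query into the same vector space.
text_inputs = processor(text=["a black cat on the bed"], return_tensors="pt", padding=True)
with torch.no_grad():
    query_vec = model.get_text_features(**text_inputs)
query_vec = query_vec / query_vec.norm(dim=-1, keepdim=True)

# Cosine similarity between the query and the gallery ranks the images.
scores = (query_vec @ image_vecs.T).squeeze(0)
print(scores.tolist())
```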
1.2 Online Services

The overall online service architecture diagram is as follows:

https://preview.redd.it/nz8zrbbpdz291.png?width=1280&format=png&auto=webp&s=28dae7e031621bc8819519667ed03d8d085d8ace

The multimodal search online service system supports both the text-to-text and text-to-image scenarios. The whole online service consists of the following parts:

- Query preprocessing service: encapsulates the preprocessing logic of the pre-trained models (text, image, etc.) and exposes it through a gRPC interface;
- Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic-splitting configuration, MetaSpore Serving calls, vector recall, ranking, document summary, etc.;
- User entry service: provides a Web UI for users to debug and trace problems in the retrieval service.

From the perspective of a user request, these services form invocation dependencies from back to front, so to bring up a working multimodal demo you need to start each service from the backend forward. Before doing this, remember to export the offline models, upload them, and build the library first. This article walks through each part of the online service system; see the README linked at the end of this article for more details.

1.2.1 Query preprocessing service

Deep learning models operate on tensors, but NLP/CV models usually have a preprocessing step that turns raw text and images into tensors the model can accept. For example, NLP models typically have a tokenizer that transforms string data into discrete tensor data, and CV models have similar logic for cropping, scaling, and otherwise transforming input images. Because this preprocessing logic is decoupled from the tensor inference of the deep model, and because the model inference itself has an independent ONNX-based stack, MetaSpore splits the preprocessing logic out into its own component. The NLP preprocessing Tokenizer has been integrated into the Query preprocessing service, and MetaSpore defines a fairly general convention for this split: users only need to provide a preprocessing logic file that implements the loading and prediction interface, and to export the necessary data and configuration files that the preprocessing service loads. Subsequent CV preprocessing logic will be integrated in the same manner.

The preprocessing service currently exposes a gRPC interface and is called by the Query preprocessing (QP) module in the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded to the preprocessing service to complete data preprocessing before the subsequent steps continue. The README describes how the preprocessing service is started, how the preprocessing model exported offline to cloud storage is brought into the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving also implements a Python preprocessing submodule, so MetaSpore can provide gRPC services through a user-specified preprocessor.py, complete the Tokenizer or CV-related preprocessing, and translate requests into tensors the deep model can handle; the model inference is then carried out by the subsequent MetaSpore Serving sub-modules. The related code is here: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor. A sketch of what such a preprocessor might look like is given below.
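The post does not spell out the exact interface MetaSpore expects from preprocessor.py, so the class and method names below are hypothetical; the sketch only shows the shape of a tokenizer-based "load and predict" preprocessor that turns raw strings into the tensor inputs the downstream ONNX model needs:

```python
# Hypothetical illustration of a "loading and prediction" preprocessor.
# Class/method names are NOT MetaSpore's actual contract; they only show
# how raw query strings become tensors for the exported ONNX model.
from typing import Dict, List

import numpy as np
from transformers import AutoTokenizer


class TextPreprocessor:
    def __init__(self, export_dir: str):
        # Load the tokenizer exported alongside the ONNX model, so online
        # tokenization matches offline indexing exactly.
        self.tokenizer = AutoTokenizer.from_pretrained(export_dir)

    def predict(self, texts: List[str]) -> Dict[str, np.ndarray]:
        # Convert raw strings into the named tensor inputs of the ONNX graph.
        encoded = self.tokenizer(
            texts, padding=True, truncation=True, max_length=128, return_tensors="np"
        )
        return {
            "input_ids": encoded["input_ids"],
            "attention_mask": encoded["attention_mask"],
        }


if __name__ == "__main__":
    pre = TextPreprocessor("./export/tokenizer")  # path from the export sketch above
    print(pre.predict(["How to renew ID card"]))
```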
1.2.2 Retrieval algorithm service

The retrieval algorithm service is the core of the whole online service system. It is responsible for experiment triage, for assembling the algorithm chain (preprocessing, recall, ranking, and so on), and for invoking the dependent component services. The whole retrieval algorithm service is developed on the Java Spring framework and supports both the text-to-text and text-to-image retrieval scenarios; thanks to good internal abstraction and modular design, it is flexible and can be migrated to similar application scenarios at a low cost.

Here is a quick guide to configuring the environment and setting up the retrieval algorithm service (see the README for more details):

- Install dependent components. Use Maven to install the online-serving component.
- Configure the search service. Copy the template configuration file and replace the MongoDB, Milvus, and other settings based on your development/production environment.
- Install and configure Consul. Consul lets you update the search service configuration in real time, including experiment traffic splitting, recall parameters, and ranking parameters. The project's configuration file shows the current parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages corresponds to the model exported in offline processing.
- Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script.

Once the service is up, you can test it. For example, for a user with userId=10 who wants to query "How to renew ID card," access the text-to-text search service.

1.2.3 User entry service

Because the retrieval algorithm service is exposed only as an API, it is hard to locate and trace problems with it directly, and for the text-to-image scenario in particular it helps to display retrieval results visually to support iterative optimization of the retrieval algorithm. This post therefore provides a lightweight Web UI for text-to-text and text-to-image search: a search input box and a results display page. Developed with Flask, the service is easy to integrate with other retrieval applications; it calls the retrieval algorithm service and displays the returned results on the page. It is also easy to install and start. Once it is running, go to http://127.0.0.1:8090 to check whether the search UI service works correctly. See the README at the end of this article for details. A minimal sketch of such a Flask front end is shown below.
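Here is a minimal sketch of a Flask user-entry service in the spirit of section 1.2.3. The retrieval service URL, endpoint path, and parameter names are placeholders; the real demo's API contract lives in the MetaSpore repo:

```python
# Minimal sketch of a Flask search UI that forwards queries to the
# retrieval algorithm service. Endpoint and parameter names are placeholders.
import requests
from flask import Flask, render_template_string, request

app = Flask(__name__)

RETRIEVAL_URL = "http://127.0.0.1:8080/search"  # placeholder endpoint

PAGE = """
<form method="get" action="/">
  <input name="q" value="{{ q }}" placeholder="Try: black cat on the bed">
  <button type="submit">Search</button>
</form>
<ul>{% for item in results %}<li>{{ item }}</li>{% endfor %}</ul>
"""


@app.route("/")
def search():
    q = request.args.get("q", "")
    results = []
    if q:
        # Forward the query to the retrieval algorithm service and show its results.
        resp = requests.get(RETRIEVAL_URL, params={"userId": 10, "query": q}, timeout=5)
        results = resp.json().get("results", [])
    return render_template_string(PAGE, q=q, results=results)


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8090)
```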
Multimodal system demonstration

Once offline processing and the online service environment configuration have been completed following the instructions above, the multimodal retrieval service can be started. Some text-to-image search examples are shown below. Open the text-to-image search application and enter "cat"; the top three results returned are indeed cats:

https://preview.redd.it/d7syq47rdz291.png?width=1280&format=png&auto=webp&s=b43df9abd380b7d9a52e3045dd787f4feeb69635

Add a color constraint and retrieve "black cat"; the system does return a black cat:

https://preview.redd.it/aa7pxx8tdz291.png?width=1280&format=png&auto=webp&s=e3727c29d1bde6eea2e1cccf6c46d3cae3f4750e

Strengthen the constraint on the search term further, change it to "black cat on the bed," and the results contain pictures of a black cat climbing on the bed:

https://preview.redd.it/2mw4qpjudz291.png?width=1280&format=png&auto=webp&s=1cf1db667892b9b3a40451993680fbd6980b5520

The cat can still be found by the text-to-image search system after adding the color and scene constraints in the example above.

Conclusion

Cutting-edge pre-training technology can bridge the semantic gap between modalities, and the HuggingFace community greatly reduces the cost for developers to use pre-trained models. Combined with DMetaSoul's MetaSpore ecosystem of online inference and online microservices, pre-trained models are no longer confined to offline experimentation; they can truly go end to end, from cutting-edge research to industrial scenarios, fully releasing the dividends of large pre-trained models.

In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem:

- More automated and broader access to the HuggingFace community ecosystem. MetaSpore will soon release a general model rollout mechanism to make the HuggingFace ecosystem more accessible, and will later integrate the preprocessing services into the online services.
- Offline algorithm optimization for multimodal retrieval. MetaSpore will continue to iterate on the offline algorithm components, including text recall/ranking models and image-text recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms.

For the related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online

Image sources:
https://github.com/openai/CLIP/raw/main/CLIP.png
https://www.sbert.net/examples/training/sts/README.html

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, open-source machine learning platform MetaSpore released a demo based on the HuggingFace Rapid deployment pre-training model. As deep learning technology makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data are perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep learning. Through pre-training of deep models on massive data, the models can capture the internal data patterns, thus helping many downstream tasks. With the industry and academia investing more and more energy in the research of pre-training technology, the distribution warehouses of pre-training models such as HuggingFace and Timm have emerged one after another. The open-source community release pre-training significant model dividends at an unprecedented speed. In recent years, the data form of machine modeling and understanding has gradually evolved from single-mode to multi-mode, and the semantic gap between different modes is being eliminated, making it possible to retrieve data across modes. Take CLIP, OpenAI’s open-source work, as an example, to pre-train the twin towers of images and texts on a dataset of 400 million pictures and texts and connect the semantics between pictures and texts. Many researchers in the academic world have been solving multimodal problems such as image generation and retrieval based on this technology. Although the frontier technology through the semantic gap between modal data, there is still a heavy and complicated model tuning, offline data processing, high performance online reasoning architecture design, heterogeneous computing, and online algorithm be born multiple processes and challenges, hindering the frontier multimodal retrieval technologies fall to the ground and pratt &whitney. DMetaSoul aims at the above technical pain points, abstracting and uniting many links such as model training optimization, online reasoning, and algorithm experiment, forming a set of solutions that can quickly apply offline pre-training model to online. This paper will introduce how to use the HuggingFace community pre-training model to conduct online reasoning and algorithm experiments based on MetaSpore technology ecology so that the benefits of the pre-training model can be fully released to the specific business or industry and small and medium-sized enterprises. And we will give the text search text and text search graph two multimodal retrieval demonstration examples for your reference. Multimodal semantic retrieval The sample architecture of multimodal retrieval is as follows: Our multimodal retrieval system supports both text search and text search application scenarios, including offline processing, model reasoning, online services, and other core modules: ​ https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31 Offline processing, including offline data processing processes for different application scenarios of text search and text search, including model tuning, model export, data index database construction, data push, etc. Model inference. 
After the offline model training, we deployed our NLP and CV large models based on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments. Online services. Based on MetaSpore’s online algorithm application framework, MetaSpore has a complete set of reusable online search services, including Front-end retrieval UI, multimodal data preprocessing, vector recall and sorting algorithm, AB experimental framework, etc. MetaSpore also supports text search by text and image scene search by text and can be migrated to other application scenarios at a low cost. The HuggingFace open source community has provided several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for actual optimization in the industry. MetaSpore also uses the pre-training model of the HuggingFace community in its online services of searching words by words and images by words. Searching words by words is based on the semantic similarity model of the question and answer field optimized by MetaSpore, and searching images by words is based on the community pre-training model. These community open source pre-training models are exported to the general ONNX format and loaded into MetaSpore Serving for online reasoning. The following sections will provide a detailed description of the model export and online retrieval algorithm services. The reasoning part of the model is standardized SAAS services with low coupling with the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform. 1.1 Offline Processing Offline processing mainly involves the export and loading of online models and index building and pushing of the document library. You can follow the step-by-step instructions below to complete the offline processing of text search and image search and see how the offline pre-training model achieves reasoning at MetaSpore. 1.1.1 Search text by text Traditional text retrieval systems are based on literal matching algorithms such as BM25. Due to users’ diverse query words, a semantic gap between query words and documents is often encountered. For example, users misspell “iPhone” as “Phone,” and search terms are incredibly long, such as “1 \~ 3 months old baby autumn small size bag pants”. Traditional text retrieval systems will use spelling correction, synonym expansion, search terms rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve this problem. Only when the retrieval system fully understands users’ query terms and documents can it meet users’ retrieval demands at the semantic level. With the continuous progress of pre-training and representational learning technology, some commercial search engines continue to integrate semantic vector retrieval methods based on symbolic learning into the retrieval ecology. Semantic retrieval model This paper introduces a set of semantic vector retrieval applications. MetaSpore built a set of semantic retrieval systems based on encyclopedia question and answer data. MetaSpore adopted the Sentence-Bert model as the semantic vector representation model, which fine-tunes the twin tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks. 
The model structure is as follows: The query-Doc symmetric two-tower model is used in text search and question and answer retrieval. The vector representation of online Query and offline DOC share the same vector representation model, so it is necessary to ensure the consistency of the offline DOC library building model and online Query inference model. The case uses MetaSpore’s text representation model Sbert-Chinese-QMC-domain-V1, optimized in the open-source semantically similar data set. This model will express the question and answer data as a vector in offline database construction. The user query will be expressed as a vector by this model in online retrieval, ensuring that query-doc in the same semantic space, users’ semantic retrieval demands can be guaranteed by vector similarity metric calculation. Since the text presentation model does vector encoding for Query online, we need to export the model for use by the online service. Go to the q&A data library code directory and export the model concerning the documentation. In the script, Pytorch Tracing is used to export the model. The models are exported to the “./export “directory. The exported models are mainly ONNX models used for wired reasoning, Tokenizer, and related configuration files. The exported models are loaded into MetaSpore Serving by the online Serving system described below for model reasoning. Since the exported model will be copied to the cloud storage, you need to configure related variables in env.sh. \Build library based on text search \ The retrieval database is built on the million-level encyclopedia question and answer data set. According to the description document, you need to download the data and complete the database construction. The question and answer data will be coded as a vector by the offline model, and then the database construction data will be pushed to the service component. The whole process of database construction is described as follows: Preprocessing, converting the original data into a more general JSonline format for database construction; Build index, use the same model as online “sbert-Chinese-qmc-domain-v1” to index documents (one document object per line); Push inverted (vector) and forward (document field) data to each component server. The following is an example of the database data format. After offline database construction is completed, various data are pushed to corresponding service components, such as Milvus storing vector representation of documents and MongoDB storing summary information of documents. Online retrieval algorithm services will use these service components to obtain relevant data. 1.1.2 Search by text Text and images are easy for humans to relate semantically but difficult for machines. First of all, from the perspective of data form, the text is the discrete ID type of one-dimensional data based on words and words. At the same time, images are continuous two-dimensional or three-dimensional data. Secondly, the text is a subjective creation of human beings, and its expressive ability is vibrant, including various turning points, metaphors, and other expressions, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than searching text by text. 
The traditional text search image retrieval technology generally relies on the external text description data of the image or the nearest neighbor retrieval technology and carries out the retrieval through the image associated text, which in essence degrades the problem to text search. However, it will also face many issues, such as obtaining the associated text of pictures and whether the accuracy of text search by text is high enough. The depth model has gradually evolved from single-mode to multi-mode in recent years. Taking the open-source project of OpenAI, CLIP, as an example, train the model through the massive image and text data of the Internet and map the text and image data into the same semantic space, making it possible to implement the text and image search technology based on semantic vector. CLIP graphic model The text search pictures introduced in this paper are implemented based on semantic vector retrieval, and the CLIP pre-training model is used as the two-tower retrieval architecture. Because the CLIP model has trained the semantic alignment of the twin towers’ text and image side models on the massive graphic and text data, it is particularly suitable for the text search graph scene. Due to the different image and text data forms, the Query-Doc asymmetric twin towers model is used for text search image retrieval. The image-side model of the twin towers is used for offline database construction, and the text-side model is used for the online return. In the final online retrieval, the database data of the image side model will be searched after the text side model encodes Query, and the CLIP pre-training model guarantees the semantic correlation between images and texts. The model can draw the graphic pairs closer in vector space by pre-training on a large amount of visual data. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is based on Chinese, the CLIP model supporting Chinese understanding is selected. The exported content includes the ONNX model used for online reasoning and Tokenizer, similar to the text search. MetaSpore Serving can load model reasoning through the exported content. Build library on Image search You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole process of database construction is described as follows: Preprocessing, specify the image directory, and then generate a more general JSOnline file for library construction; Build index, use OpenAI/Clip-Vit-BASE-Patch32 pre-training model to index the gallery, and output one document object for each line of index data; Push inverted (vector) and forward (document field) data to each component server. Like text search, after offline database construction, relevant data will be pushed to service components, called by online retrieval algorithm services to obtain relevant data. 1.2 Online Services The overall online service architecture diagram is as follows: https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a Multi-mode search online service system supports application scenarios such as text search and text search. The whole online service consists of the following parts: Query preprocessing service: encapsulate preprocessing logic (including text/image, etc.) 
of pre-training model, and provide services through gRPC interface; Retrieval algorithm service: the whole algorithm processing link includes AB experiment tangent flow configuration, MetaSpore Serving call, vector recall, sorting, document summary, etc.; User entry service: provides a Web UI interface for users to debug and track down problems in the retrieval service. From a user request perspective, these services form invocation dependencies from back to front, so to build up a multimodal sample, you need to run each service from front to back first. Before doing this, remember to export the offline model, put it online and build the library first. This article will introduce the various parts of the online service system and make the whole service system step by step according to the following guidance. See the ReadME at the end of this article for more details. 1.2.1 Query preprocessing service Deep learning models tend to be based on tensors, but NLP/CV models often have a preprocessing part that translates raw text and images into tensors that deep learning models can accept. For example, NLP class models often have a pre-tokenizer to transform text data of string type into discrete tensor data. CV class models also have similar processing logic to complete the cropping, scaling, transformation, and other processing of input images through preprocessing. On the one hand, considering that this part of preprocessing logic is decoupled from tensor reasoning of the depth model, on the other hand, the reason of the depth model has an independent technical system based on ONNX, so MetaSpore disassembled this part of preprocessing logic. NLP pretreatment Tokenizer has been integrated into the Query pretreatment service. MetaSpore dismantlement with a relatively general convention. Users only need to provide preprocessing logic files to realize the loading and prediction interface and export the necessary data and configuration files loaded into the preprocessing service. Subsequent CV preprocessing logic will also be integrated in this manner. The preprocessing service currently provides the gRPC interface invocation externally and is dependent on the Query preprocessing (QP) module in the retrieval algorithm service. After the user request reaches the retrieval algorithm service, it will be forwarded to the service to complete the data preprocessing and continue the subsequent processing. The ReadMe provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model reasoning, MetaSpore Serving implements a Python preprocessing submodule. So MetaSpore can provide gRPC services through user-specified preprocessor.py, complete Tokenizer or CV-related preprocessing in NLP, and translate requests into a Tensor that deep models can handle. Finally, the model inference is carried out by MetaSpore, Serving subsequent sub-modules. Presented here on the lot code: https://github.com/meta-soul/MetaSpore/compare/add\python\preprocessor 1.2.2 Retrieval algorithm services Retrieval algorithm service is the core of the whole online service system, which is responsible for the triage of experiments, the assembly of algorithm chains such as preprocessing, recall, sorting, and the invocation of dependent component services. 
The whole retrieval algorithm service is developed based on the Java Spring framework and supports multi-mode retrieval scenarios of text search and text search graph. Due to good internal abstraction and modular design, it has high flexibility and can be migrated to similar application scenarios at a low cost. Here’s a quick guide to configuring the environment to set up the retrieval algorithm service. See ReadME for more details: Install dependent components. Use Maven to install the online-Serving component Search for service configurations. Copy the template configuration file and replace the MongoDB, Milvus, and other configurations based on the development/production environment. Install and configure Consul. Consul allows you to synchronize the search service configuration in real-time, including cutting the flow of experiments, recall parameters, and sorting parameters. The project’s configuration file shows the current configuration parameters of text search and text search. The parameter modelName in the stage of pretreatment and recall is the corresponding model exported in offline processing. Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is started, you can test it! For example, for a user with userId=10 who wants to query “How to renew ID card,” access the text search service. 1.2.3 User Entry Service Considering that the retrieval algorithm service is in the form of the API interface, it is difficult to locate and trace the problem, especially for the text search image scene can intuitively display the retrieval results to facilitate the iterative optimization of the retrieval algorithm. This paper provides a lightweight Web UI interface for text search and image search, a search input box, and results in a display page for users. Developed by Flask, the service can be easily integrated with other retrieval applications. The service calls the retrieval algorithm service and displays the returned results on the page. It’s also easy to install and start the service. Once you’re done, go to http://127.0.0.1:8090 to see if the search UI service is working correctly. See the ReadME at the end of this article for details. Multimodal system demonstration The multimodal retrieval service can be started when offline processing and online service environment configuration have been completed following the above instructions. Examples of textual searches are shown below. Enter the entry of the text search map application, enter “cat” first, and you can see that the first three digits of the returned result are cats: https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2 If you add a color constraint to “cat” to retrieve “black cat,” you can see that it does return a black cat: https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47 Further, strengthen the constraint on the search term, change it to “black cat on the bed,” and return results containing pictures of a black cat climbing on the bed: ​ https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a The cat can still be found through the text search system after the color and scene modification in the above example. 
Conclusion The cutting-edge pre-training technology can bridge the semantic gap between different modes, and the HuggingFace community can greatly reduce the cost for developers to use the pre-training model. Combined with the technological ecology of MetaSpore online reasoning and online microservices provided by DMetaSpore, the pre-training model is no longer mere offline dabbling. Instead, it can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of the pre-training large model. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and wider access to HuggingFace community ecology. MetaSpore will soon release a common model rollout mechanism to make HuggingFace ecologically accessible and will later integrate preprocessing services into online services. Multi-mode retrieval offline algorithm optimization. For multimodal retrieval scenarios, MetaSpore will continuously iteratively optimize offline algorithm components, including text recall/sort model, graphic recall/sort model, etc., to improve the accuracy and efficiency of the retrieval algorithm. For related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some images source: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, open-source machine learning platform MetaSpore released a demo based on the HuggingFace Rapid deployment pre-training model. As deep learning technology makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data are perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep learning. Through pre-training of deep models on massive data, the models can capture the internal data patterns, thus helping many downstream tasks. With the industry and academia investing more and more energy in the research of pre-training technology, the distribution warehouses of pre-training models such as HuggingFace and Timm have emerged one after another. The open-source community release pre-training significant model dividends at an unprecedented speed. In recent years, the data form of machine modeling and understanding has gradually evolved from single-mode to multi-mode, and the semantic gap between different modes is being eliminated, making it possible to retrieve data across modes. Take CLIP, OpenAI’s open-source work, as an example, to pre-train the twin towers of images and texts on a dataset of 400 million pictures and texts and connect the semantics between pictures and texts. Many researchers in the academic world have been solving multimodal problems such as image generation and retrieval based on this technology. Although the frontier technology through the semantic gap between modal data, there is still a heavy and complicated model tuning, offline data processing, high performance online reasoning architecture design, heterogeneous computing, and online algorithm be born multiple processes and challenges, hindering the frontier multimodal retrieval technologies fall to the ground and pratt &whitney. DMetaSoul aims at the above technical pain points, abstracting and uniting many links such as model training optimization, online reasoning, and algorithm experiment, forming a set of solutions that can quickly apply offline pre-training model to online. This paper will introduce how to use the HuggingFace community pre-training model to conduct online reasoning and algorithm experiments based on MetaSpore technology ecology so that the benefits of the pre-training model can be fully released to the specific business or industry and small and medium-sized enterprises. And we will give the text search text and text search graph two multimodal retrieval demonstration examples for your reference. Multimodal semantic retrieval The sample architecture of multimodal retrieval is as follows: Our multimodal retrieval system supports both text search and text search application scenarios, including offline processing, model reasoning, online services, and other core modules: ​ https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31 Offline processing, including offline data processing processes for different application scenarios of text search and text search, including model tuning, model export, data index database construction, data push, etc. Model inference. 
After the offline model training, we deployed our NLP and CV large models based on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments. Online services. Based on MetaSpore’s online algorithm application framework, MetaSpore has a complete set of reusable online search services, including Front-end retrieval UI, multimodal data preprocessing, vector recall and sorting algorithm, AB experimental framework, etc. MetaSpore also supports text search by text and image scene search by text and can be migrated to other application scenarios at a low cost. The HuggingFace open source community has provided several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for actual optimization in the industry. MetaSpore also uses the pre-training model of the HuggingFace community in its online services of searching words by words and images by words. Searching words by words is based on the semantic similarity model of the question and answer field optimized by MetaSpore, and searching images by words is based on the community pre-training model. These community open source pre-training models are exported to the general ONNX format and loaded into MetaSpore Serving for online reasoning. The following sections will provide a detailed description of the model export and online retrieval algorithm services. The reasoning part of the model is standardized SAAS services with low coupling with the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform. 1.1 Offline Processing Offline processing mainly involves the export and loading of online models and index building and pushing of the document library. You can follow the step-by-step instructions below to complete the offline processing of text search and image search and see how the offline pre-training model achieves reasoning at MetaSpore. 1.1.1 Search text by text Traditional text retrieval systems are based on literal matching algorithms such as BM25. Due to users’ diverse query words, a semantic gap between query words and documents is often encountered. For example, users misspell “iPhone” as “Phone,” and search terms are incredibly long, such as “1 \~ 3 months old baby autumn small size bag pants”. Traditional text retrieval systems will use spelling correction, synonym expansion, search terms rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve this problem. Only when the retrieval system fully understands users’ query terms and documents can it meet users’ retrieval demands at the semantic level. With the continuous progress of pre-training and representational learning technology, some commercial search engines continue to integrate semantic vector retrieval methods based on symbolic learning into the retrieval ecology. Semantic retrieval model This paper introduces a set of semantic vector retrieval applications. MetaSpore built a set of semantic retrieval systems based on encyclopedia question and answer data. MetaSpore adopted the Sentence-Bert model as the semantic vector representation model, which fine-tunes the twin tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks. 
The model structure is as follows: The query-Doc symmetric two-tower model is used in text search and question and answer retrieval. The vector representation of online Query and offline DOC share the same vector representation model, so it is necessary to ensure the consistency of the offline DOC library building model and online Query inference model. The case uses MetaSpore’s text representation model Sbert-Chinese-QMC-domain-V1, optimized in the open-source semantically similar data set. This model will express the question and answer data as a vector in offline database construction. The user query will be expressed as a vector by this model in online retrieval, ensuring that query-doc in the same semantic space, users’ semantic retrieval demands can be guaranteed by vector similarity metric calculation. Since the text presentation model does vector encoding for Query online, we need to export the model for use by the online service. Go to the q&A data library code directory and export the model concerning the documentation. In the script, Pytorch Tracing is used to export the model. The models are exported to the “./export “directory. The exported models are mainly ONNX models used for wired reasoning, Tokenizer, and related configuration files. The exported models are loaded into MetaSpore Serving by the online Serving system described below for model reasoning. Since the exported model will be copied to the cloud storage, you need to configure related variables in env.sh. \Build library based on text search \ The retrieval database is built on the million-level encyclopedia question and answer data set. According to the description document, you need to download the data and complete the database construction. The question and answer data will be coded as a vector by the offline model, and then the database construction data will be pushed to the service component. The whole process of database construction is described as follows: Preprocessing, converting the original data into a more general JSonline format for database construction; Build index, use the same model as online “sbert-Chinese-qmc-domain-v1” to index documents (one document object per line); Push inverted (vector) and forward (document field) data to each component server. The following is an example of the database data format. After offline database construction is completed, various data are pushed to corresponding service components, such as Milvus storing vector representation of documents and MongoDB storing summary information of documents. Online retrieval algorithm services will use these service components to obtain relevant data. 1.1.2 Search by text Text and images are easy for humans to relate semantically but difficult for machines. First of all, from the perspective of data form, the text is the discrete ID type of one-dimensional data based on words and words. At the same time, images are continuous two-dimensional or three-dimensional data. Secondly, the text is a subjective creation of human beings, and its expressive ability is vibrant, including various turning points, metaphors, and other expressions, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than searching text by text. 
Build the index for text search The retrieval database is built from a million-scale encyclopedia question-and-answer dataset. Following the documentation, download the data and run the index construction: the question-and-answer data is encoded into vectors by the offline model, and the resulting index data is pushed to the serving components. The process has three steps: preprocessing, which converts the raw data into a more general JSONLine format; index building, which encodes the documents (one document object per line) with the same sbert-chinese-qmc-domain-v1 model used online; and pushing the inverted (vector) and forward (document field) data to each component server. The following is an example of the database data format. After offline index construction completes, the data is pushed to the corresponding service components: Milvus stores the vector representation of each document, and MongoDB stores the document summary information. The online retrieval algorithm services read from these components at query time. 1.1.2 Search images by text Text and images are easy for humans to relate semantically but hard for machines. In terms of data form, text is discrete, one-dimensional, token-ID data, while images are continuous two- or three-dimensional data. Moreover, text is a subjective human creation with rich expressiveness, full of twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and images is much harder than text-to-text search. Traditional text-to-image retrieval generally relies on external text descriptions attached to images and retrieves through that associated text, which essentially reduces the problem to text search. This approach faces its own issues, such as how to obtain the associated text in the first place and whether text-to-text search is accurate enough. In recent years, deep models have gradually evolved from single-modal to multimodal. OpenAI's open-source CLIP, for example, is trained on massive image-text pairs from the Internet and maps text and images into the same semantic space, making semantic-vector-based text-to-image search possible. CLIP image-text model The text-to-image search introduced in this article is implemented with semantic vector retrieval, using a CLIP pre-trained model as the two-tower retrieval architecture. Because CLIP aligns the semantics of its text and image towers on massive image-text data, it is particularly suitable for the text-to-image scenario. Since image and text data take different forms, an asymmetric Query-Doc two-tower model is used: the image-side tower builds the offline index, and the text-side tower encodes queries online. At retrieval time, the text-side model encodes the query and searches the index built by the image-side model; the CLIP pre-training guarantees the semantic correlation between images and texts, since pre-training on large image-text corpora draws matching pairs closer together in the vector space. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scenario is in Chinese, a CLIP model that understands Chinese is selected. As with text search, the exported artifacts include the ONNX model used for online inference and the tokenizer, and MetaSpore Serving loads them for inference. Build the index for image search Download the Unsplash Lite dataset and complete the construction according to the instructions. The process mirrors text search: preprocessing, which takes an image directory and generates a general JSONLine file for index construction; index building, which encodes the gallery with the openai/clip-vit-base-patch32 pre-trained model and outputs one document object per line; and pushing the inverted (vector) and forward (document field) data to each component server. As with text search, after offline construction the data is pushed to the service components and consumed by the online retrieval algorithm services.
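A minimal sketch of the image-side indexing step described above could look like this; the file layout and field names (id, image_path, vector) are illustrative assumptions rather than the project's actual schema.

```python
# Hedged sketch of offline image indexing: encode gallery images with the CLIP image
# tower and write one JSON document per line. Field names and paths are assumptions.
import json
import pathlib
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

with open("gallery.jsonl", "w", encoding="utf-8") as out:
    for doc_id, path in enumerate(sorted(pathlib.Path("images").glob("*.jpg"))):
        inputs = processor(images=Image.open(path).convert("RGB"), return_tensors="pt")
        with torch.no_grad():
            vec = model.get_image_features(**inputs)          # image-tower embedding
        vec = torch.nn.functional.normalize(vec, dim=-1)[0]   # unit norm for cosine search
        out.write(json.dumps({"id": doc_id,
                              "image_path": str(path),
                              "vector": vec.tolist()}) + "\n")
```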
1.2 Online Services The overall online service architecture is shown below: https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a The multimodal search online service system supports both text-to-text search and text-to-image search. The whole online service consists of the following parts: Query preprocessing service: encapsulates the preprocessing logic (text, image, etc.) of the pre-trained models and exposes it through a gRPC interface; Retrieval algorithm service: covers the whole algorithm pipeline, including A/B experiment traffic-splitting configuration, MetaSpore Serving calls, vector recall, ranking, and document summaries; User entry service: provides a Web UI for users to debug and trace problems in the retrieval service. From the perspective of a user request, these services form a back-to-front dependency chain, so to bring up the multimodal demo you need to start the services in the order listed above. Before doing this, remember to export the offline models, upload them, and build the index. This article introduces each part of the online service system and walks through building the whole system step by step; see the README at the end of this article for more details. 1.2.1 Query preprocessing service Deep learning models operate on tensors, but NLP and CV models usually have a preprocessing stage that turns raw text and images into tensors the model can accept. NLP models typically run a tokenizer that converts string data into discrete token tensors, and CV models crop, scale, and otherwise transform input images. Because this preprocessing logic is decoupled from the tensor inference of the deep model, and because the inference side already has an independent ONNX-based stack, MetaSpore splits the preprocessing logic out. The NLP tokenizer has been integrated into the query preprocessing service, and MetaSpore performs this split under a fairly general convention: users only need to provide a preprocessing logic file that implements the load and predict interfaces and export the necessary data and configuration files, which the preprocessing service then loads. CV preprocessing logic will be integrated in the same way later. The preprocessing service currently exposes a gRPC interface and is consumed by the query preprocessing (QP) module in the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded to the preprocessing service, and the pipeline continues from there. The README explains how the preprocessing service is started, how the preprocessing model exported offline to cloud storage is loaded into the service, and how to debug it. To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule: MetaSpore can provide gRPC services through a user-specified preprocessor.py that performs tokenization or CV-related preprocessing and translates requests into tensors the deep model can handle, after which inference is carried out by MetaSpore Serving's downstream submodules. The code is presented here: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor
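As a rough illustration of that convention, a user-supplied preprocessor might look like the sketch below; the load/predict names and the return layout are assumptions based on the description above, not MetaSpore's actual interface.

```python
# Hedged sketch of a user-supplied preprocessor.py. The load/predict interface and
# return layout are assumptions based on the description above, not MetaSpore's API.
from transformers import AutoTokenizer

_tokenizer = None

def load(export_dir: str) -> None:
    """Load the tokenizer files exported next to the ONNX model."""
    global _tokenizer
    _tokenizer = AutoTokenizer.from_pretrained(export_dir)

def predict(texts: list[str]) -> dict:
    """Turn raw query strings into the tensors the ONNX encoder expects."""
    batch = _tokenizer(texts, padding=True, truncation=True, return_tensors="np")
    return {"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]}
```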
1.2.2 Retrieval algorithm service The retrieval algorithm service is the core of the online service system. It is responsible for experiment triage (traffic splitting), assembling the algorithm chain of preprocessing, recall, and ranking, and invoking the dependent component services. The whole retrieval algorithm service is developed on the Java Spring framework and supports the multimodal retrieval scenarios of text-to-text and text-to-image search. Thanks to good internal abstractions and a modular design, it is highly flexible and can be migrated to similar application scenarios at low cost. Here is a quick guide to setting up the environment for the retrieval algorithm service; see the README for more details: Install dependent components: use Maven to install the online-serving component. Configure the search service: copy the template configuration file and fill in the MongoDB, Milvus, and other settings for your development or production environment. Install and configure Consul: Consul lets you synchronize the search service configuration in real time, including experiment traffic splits, recall parameters, and ranking parameters. The project's configuration file shows the current configuration parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages refers to the corresponding model exported during offline processing. Start the service: once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is running, you can test it, for example by accessing the text search service as a user with userId=10 querying "How to renew ID card." 1.2.3 User entry service Because the retrieval algorithm service is exposed only as an API, it is hard to locate and trace problems, and for the text-to-image scenario in particular, displaying the retrieval results visually makes it much easier to iterate on the retrieval algorithm. This article therefore provides a lightweight Web UI for text search and image search, with a search input box and a results page. It is built with Flask and can easily be integrated with other retrieval applications. The service calls the retrieval algorithm service and displays the returned results on the page. It is also easy to install and start; once it is running, go to http://127.0.0.1:8090 to check that the search UI works correctly. See the README at the end of this article for details.
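A minimal sketch of such an entry service is shown below; the retrieval endpoint, its query parameters, and the response shape are hypothetical placeholders, not the project's actual API.

```python
# Hedged sketch of a minimal Flask entry service in the spirit of the Web UI above.
# The retrieval endpoint, query parameters, and response shape are hypothetical.
import requests
from flask import Flask, request, render_template_string

app = Flask(__name__)
RETRIEVAL_URL = "http://localhost:8080/search/text"  # hypothetical retrieval endpoint

PAGE = """
<form method="get"><input name="q" value="{{ q }}"><button>Search</button></form>
<ul>{% for doc in docs %}<li>{{ doc }}</li>{% endfor %}</ul>
"""

@app.route("/")
def search():
    q = request.args.get("q", "")
    docs = []
    if q:
        resp = requests.get(RETRIEVAL_URL, params={"userId": 10, "query": q}, timeout=5)
        docs = resp.json().get("results", [])  # assumed response field
    return render_template_string(PAGE, q=q, docs=docs)

if __name__ == "__main__":
    app.run(port=8090)  # matches the UI port mentioned above
```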
Multimodal system demonstration Once offline processing and the online service configuration are complete according to the instructions above, the multimodal retrieval service can be started. Examples of text-to-image searches are shown below. Open the text-to-image search application and first enter "cat"; the top three results returned are all cats: https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2 Add a color constraint and search for "black cat", and the system indeed returns a black cat: https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47 Strengthen the constraint further to "black cat on the bed", and the results contain pictures of a black cat climbing on a bed: https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a In each case, the cat is still found after the color and scene constraints are added. Conclusion Cutting-edge pre-training technology can bridge the semantic gap between modalities, and the HuggingFace community greatly reduces the cost for developers to use pre-trained models. Combined with the online inference and online microservice ecosystem of MetaSpore provided by DMetaSoul, pre-trained models are no longer limited to offline experimentation; they can be taken end to end from cutting-edge research into industrial scenarios, fully releasing the dividends of large pre-trained models. Going forward, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and broader access to the HuggingFace ecosystem. MetaSpore will soon release a general model rollout mechanism to make the HuggingFace ecosystem easier to adopt and will later integrate the preprocessing services into the online services. Offline algorithm optimization for multimodal retrieval. For multimodal retrieval scenarios, MetaSpore will keep iterating on the offline algorithm components, including the text recall/ranking and image recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms. For the related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some image sources: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

CollabAI
github
LLM Vibe Score0.449
Human Vibe Score0.07795191529604462
sjinnovationMar 27, 2025

CollabAI

CollabAI About Welcome to Collabai.software, where we've taken the world of AI to new heights. We've been working tirelessly to bring you the most advanced, user-friendly platform that seamlessly integrates with the powerful OpenAI API, Gemini, and Claude. Imagine running your own ChatGPT on your server, with the ability to manage access for your entire team. Picture creating custom AI assistants that cater to your unique needs, and organizing your employees into groups for streamlined collaboration. With Collabai.software, this is not just a dream, but a reality. Collabai.software Features: Self-Hosting on Your Cloud: Gain full control by hosting the platform on your private cloud. Ensure data privacy by using your API codes, allowing for secure data handling. Enhanced Team Management: Manage teams with private accounts and customizable access levels (Departments). Prompt Templates: Utilize generic templates to streamline team usage. Departmental Access & Assistant Assignment: Assign AI assistants to specific departments for shared team access. Customizable AI Assistants: Create personalized AI assistants for users or organizations. Tagging Feature in Chats: Organize and retrieve chat data efficiently with custom tags. Chat Storage and Retrieval: Save all chats and replies for future analysis, with an option to restore accidentally deleted chats from Trash. Optimized Performance: Experience our high-speed, efficient platform. Our clients have been using it for over a year, with some spending $1500-$2000 per month on the API. File Upload & GPT-4 Vision Integration: Enhance interactions by uploading files for analysis and sending pictures for AI description. OpenAI API, Gemini, and Claude Integration: Seamlessly integrate with the powerful OpenAI API, Gemini, and Claude for a comprehensive suite of AI capabilities. API-Based Function Calls: Execute custom functions and automate tasks directly through the API. Usage Monitoring: Track your daily and monthly API usage costs to optimize spending. Day and Night Mode: Switch between light and dark themes to enhance visual comfort. Additional Features: Private Accounts: Ensure the security and privacy of your team members' data. Customizable Access Levels: Tailor access permissions to meet the specific needs of your organization. Shared Team Access: Foster collaboration by assigning AI assistants to specific departments or teams. AI-Powered File Analysis: Gain insights and automate tasks by uploading files for AI analysis. AI-Generated Image Descriptions: Enhance communication and understanding by sending pictures for AI-powered descriptions. !image !image !image Folder Structure Client The client folder contains the React-based frontend code for the application. This includes JSX, CSS, and JavaScript files, as well as any additional assets such as images or fonts. Below is a brief overview of the main subdirectories within the client folder: src: This directory contains the React components, styles, and scripts for the frontend application. public: Static assets, such as images or favicon.ico, go here. This folder is served as-is and not processed by the build system. Server The server folder contains all the backend-related code for the application, following a Model-View-Controller (MVC) pattern. Here is a breakdown of the main subdirectories within the server folder: controllers: This directory holds the controller files responsible for handling requests, processing data, and interacting with models. 
models: Data models and database-related code are organized in this folder. config: Configuration files for the backend, such as database configuration or any other service configuration should be stored here, can be stored in this directory. Getting Started Follow the steps below to get the project up and running. Prerequisites Node.js (Version: >=20.x) MongoDB NPM Development Setup Clone the Repository bash cd client Install Dependencies bash cd ../server Install Backend Dependencies bash npm start To initialize the application data and create a superadmin user, you can use either cURL or Postman: Using cURL If you prefer command-line tools, you can use curl to make a POST request to the /init-setup endpoint. Open your terminal and run the following command: curl -X POST http://localhost:8011/api/init -H "Content-Type: application/json" -d '{ "fname": "Super", "lname": "Admin", "email": "superadmin@example.com", "password": "yourSecurePassword", "employeeCount": 100, "companyName": "INIT_COMPANY" }' Initializing Setup with Postman Open Postman: Launch the Postman application. Create a New Request: Click on the '+' or 'New' button to create a new request. Set HTTP Method to POST: Ensure that the HTTP method is set to POST. Enter URL: Enter the URL http://localhost:8011/api/init. Set Headers: Go to the 'Headers' tab. Set Content-Type to application/json. Set Request Body: Switch to the 'Body' tab. Select the 'raw' radio button. Enter the JSON data for your superadmin user: Send Request: Click the 'Send' button to make the request. This will send a POST request to http://localhost:8011/api/init with the provided JSON payload, creating a superadmin user with the specified details. Site Setup: Login with the superadmin credentials and set up your site by adding configs from your settings page, for ex. API keys, etc. Reference CollaborativeAI Reference Guide Contributing If you would like to contribute to the project, we welcome your contributions! Please follow the guidelines outlined in the CONTRIBUTING.md file. Feel free to raise issues, suggest new features, or send pull requests to help improve the project. Your involvement is greatly appreciated! Thank you for contributing to our project! License MIT

Practical-AI-Bootcamp
github
LLM Vibe Score0.4
Human Vibe Score0.010988541997291353
tinkerhubJan 8, 2023

Practical-AI-Bootcamp

Practical AI Bootcamp Practical AI Bootcamp by the TinkerHub Foundation. Here you will learn how to build good AI products. This learning program covers the following: Finding the right machine learning model for a problem Building responsible AI - bias and other issues How to train a good machine learning model - how to tune hyperparameters Transfer learning - where, when, and how to use it Speed and performance Wrapping and hosting machine learning models On-device machine learning Some tools and tricks Participant criteria Should know OOP and Python Should know Git and GitHub Should know basic machine learning (the different categories of ML, what training is, what testing is, what a dataset is, etc.) All the resources to get you started with the program are given in the resources folder. You can learn from them and finish the task for joining the program! Join the program This bootcamp needs you to have the following skills: Python GitHub Machine learning There is a task for you in the tasks folder. Finish the task in a private repo. Give Gopikrishnan Sasikumar access to the private repo. Fill this form. We will let you know if you are selected. Program schedule This is a 2-week bootcamp. There will be 1-hour sessions every Monday, Wednesday, Friday, and Sunday, with tasks to do on the other days. Day 1 (Aug 18) Finding the right machine learning model for a problem Should I use machine learning for this problem? What kind of ML task is this? Machine learning or deep learning? Day 2 (Aug 19) Building responsible AI - bias and other issues Bias Accountability and explainability Reproducibility Robustness Privacy Day 3 (Aug 23) Dataset and performance Data prep Data reading Data augmentation Day 4 (Aug 25) Techniques in training AI models How to find the right learning rate Effect of batch size Epochs and early stopping Day 5 (Aug 27) Transfer learning - where, when, and how to use it Day 6 (Aug 29) Wrapping and hosting machine learning models Building a microservice Serving the model as an API Hosting and serving Day 7 (Aug 31) On-device machine learning Techniques to make models small TensorFlow Lite PyTorch quantization Day 8 (Sep 02) Some tools and tricks Installation Finding models Data privacy Cloud APIs and frameworks Projects (Sep 03 to Sep 09) You and your fellow teammates will do a project based on what you learned throughout the bootcamp

nine
github
LLM Vibe Score0.406
Human Vibe Score0.000678327714013925
NethermindEthMar 28, 2025

nine

NINE - Neural Interconnected Nodes Engine A flexible framework for building a distributed network of AI agents that work everywhere (STD, WASM, TEE) with a dynamic interface and hot-swappable components. One of the key concepts of the framework is a meta-layer that enables building software systems in a No-code style, where the entire integration is handled by the LLM. Documentation | Telegram | X | Discord Overview Project Structure The project is built using Rust (full-stack) and organized as a workspace consisting of two major groups: substance/ - The core components of the system, responsible for interaction. particles/ - Plugins for the system that enable additional functionalities. examples/ - Usage examples of the framework. Use cases The following cases will have a minimal implementation, and they will be used to track the progress of the framework and its flexibility in building such systems. ☑️ Chatbots - AI-driven natural language chatbots for customer support, virtual assistants, and automation. ☑️ AI-governed blockchains (ChaosChain) - Self-regulating and intelligent blockchain ecosystems with automated decision-making. ⬜ Personal AI Assistant with dynamic UI - AI that generates adaptive and context-aware user interfaces on demand. ☑️ AI-powered trading bots - Autonomous financial agents for high-frequency trading and portfolio management. ⬜ Intelligent email assistant - AI for reading, summarizing, filtering, and responding to emails autonomously. ⬜ Interactivity in home appliances - AI-powered automation for home appliances, making them responsive and adaptive. ⬜ On-demand observability and awareness in DevOps - AI-driven insights, predictive monitoring, and automated issue detection in IT systems. ⬜ AI-powered developer tools - AI agents assisting with code generation, debugging, and software optimization. ⬜ Autonomous research agent - Self-learning AI for data analysis, knowledge discovery, and hypothesis testing. Status: ⬜ Not started | ☑️ In Progress | ✅ Completed Interfaces The platform provides No-code interfaces that automatically adapt to your needs and use LLM for system management. ☑️ Stdio - A console interface that also allows interaction with models through the terminal or via scripts. ☑️ TUI - An advanced console interface with an informative dashboard and the ability to interact more comprehensively with the system. ☑️ GUI - A graphical immediate-state interface suitable for embedded systems with real-time information rendering. ⬜ WEB - The ability to interact with the system through a web browser, such as from a mobile phone. ⬜ Voice - An interface for people with disabilities or those who prefer interaction without a graphical representation (e.g., voice control). ⬜ API - On-the-fly API creation for your system, providing a formal interaction method. This includes encapsulating an entire mesh system into a simple tool for LLM. Features (goals) Built on Rust and implemented as hybrid actor-state machines. Supports various LLMs, tools, and extensibility. Hot model swapping without restarting. Real-time configuration adjustment. Distributed agents, the ability to run components on different machines. Provides a dynamic user interface (UI9) that is automatically generated for interacting with a network of agents. Usage An agent is a substance that assembles from components (particles). Connections automatically form between them, bringing the agent to life: License This project is licensed under the [MIT license]. 
[MIT license]: https://github.com/NethermindEth/nine/blob/trunk/LICENSE Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project by you, shall be licensed as MIT, without any additional terms or conditions.

bubbln_network-automation
github
LLM Vibe Score0.421
Human Vibe Score0.004537250556463098
olasupoMar 14, 2025

bubbln_network-automation

Bubbln: An AI-driven Network Automation In the world of network engineering, automation has completely transformed the way things work. But, before automation, setting up and managing networks was a tedious job filled with challenges. Engineers had to manually type out configurations, often doing the same tasks repeatedly on different devices. This led to mistakes and wasted time. Then came automation tools like Ansible, Chef, and Puppet, which changed everything. They made network management much easier and allowed for scalability. But there was still a problem: creating automation scripts required a lot of technical know-how and was prone to errors because it relied on human input. And that's why we built Bubbln. It's a game-changer in network engineering, integrating AI into Ansible to take automation to the next level. With Bubbln, we can automatically generate and execute playbooks with incredible accuracy, thereby improving automation efficiency and increasing network engineer’s productivity. It was developed using Python programming language and acts as a bridge between ChatGPT and network systems, making interactions seamless and deployments effortless. Current Capabilities AI-Driven Playbook Generation for OSPF and EIGRP based networks: Bubbln has been rigorously tested to leverage ChatGPT for generation of playbooks for networks based on OSPF and EIGRP networks, with a very high accuracy rate. Auto-creation of Inventory files: Users do not need to prepare the hosts file. Bubbln will auto-generate this file from input provided by the user. Customizable Configurations: Users can input specific router protocols (OSPF or EIGRP), interface configurations, and other network details to tailor the generated playbooks. Documentation: Bubbln automatically creates a report that contains the network configurations, prompts, and generated playbooks for easy reference in future. No expertise required: By auto-generation of the playbooks and inventory file, Bubbln has been able to eliminate a major hurdle to network automation – need for users to learn the automation tools e.g Ansible, Chef. Improved Efficiency: With AI automation, Bubbln speeds up the deployment of network configurations, reducing the time required for manual playbook creation, thereby increasing the productivity of network engineers. Getting Started There are two main approaches to installing Bubbln on your local machine. Docker Container Bubbln has been packaged using docker containers for easy distribution and usage. The following steps can be followed to deploy the Bubbln container on your local machine. Ensure docker is installed on your local machine by entering the below command. This command works for windows and linux OS: The version of docker would be displayed if it is installed. Otherwise, please follow the link below to install docker on your machine: Windows: Docker Desktop for Windows Ubuntu: Docker Engine for Ubuntu CentOS: Docker Engine for CentOS Debian: Docker Engine for Debian Fedora: Docker Engine for Fedora Download the docker image: Create a directory for the project and download Bubbln image using the below command: Run the docker container using the below command: Install nano Update the sshipaddresses.txt file: Update the ssh_addresses.txt file with the SSH IP addresses of the routers you want to configure. Bubbln will utilize this information along with the login credentials (inputted at runtime) to automatically generate a hosts.yml file required by ansible for network configuration. 
To do this enter the below command to edit the file: Obtain an OpenAPI API Key: You may follow this guide to sign up and obtain an API key: Utilizing a Virtualization machine of choice, setup a network with the following basic configurations: Enable SSH on each of the routers. Configure IP addresses and enable only interfaces required for connectivity by Bubbln. Configure static routes to enable Bubbln reach the routers on the network. Ensure all the routers can be reached by ping and SSH from your host machine. Initialize Bubbln by entering the below command: Github Repository Clone You can clone Bubbln’s GitHub repository by following the below steps: Prerequisites Bubbln works well with Python 3.10. You need to ensure python3.10 is installed on your local machine. This can be confirmed by entering the below command: If it is not Installed, then the below command can be utilized to install python 3.10: Build and Prepare the Project Clone the Bubbln repository from GitHub: To clone the repository, first verify you have git installed on your machine by issuing the following commands: If git is installed, the version number would be displayed, otherwise, you can issue the following commands to have git installed on your machine: Navigate or create a directory for the project on your machine and issue the following commands to clone the Bubbln git repository: Create a Virtual Environment for the application Firstly, confirm virtualenv is installed on your machine by inputting the following command: If the output shows something similar to the below, then go to the next step to install virtualenv ` WARNING: Package(s) not found: env, virtual ` Issue the below command to install virtualenv: Create a virtual environment for the project: Activate the virtual environment: Install the dependencies You can then run the below command to install the necessary packages for the app. Update the sshipaddresses.txt file: Update the ssh_addresses.txt file with the SSH IP addresses of the routers you want to configure. Bubbln will utilize this information along with the login credentials (inputted at runtime) to automatically generate a hosts.yml file required by ansible for network configuration. Obtain an OpenAPI API Key: You may follow this guide to sign up and obtain an API key OpenAI Key: OpenAI Key Utilizing a Virtualization machine of choice, setup a network with the following basic configurations: Enable SSH on each of the routers. Configure IP addresses and enable only interfaces required for connectivity by Bubbln Configure static routes to enable Bubbln reach the routers on the network. Ensure all the routers can be reached by ping and SSH from your host machine. Initialize Bubbln While ensuring that python virtual environment is activated as stated in step 5, run the below command to initialize Bubbln How Bubbln Works Bubbln serves as an intermediary between ChatGPT and a network infrastructure, providing logic, control functions, and facilitating network automation. Its operation can be summarized as follows: !image Figure 1Bubbln architecture and interaction with a network of four routers. Initialization: When Bubbln is initialized, it checks the “userconfig.pkl” file to see if Bubbln has ever been initiated. This is indicated by the presence of a welcome message status in the file. If it exists, Bubbln jumps straight to request the user to input the OpenAI key. Otherwise, it displays a welcome message, and updates the userconfig.pkl file accordingly. 
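A minimal sketch of this first-run check, assuming a simple pickle-backed flag, is shown below; the key names and layout of user_config.pkl are illustrative and are not taken from Bubbln's source.

```python
# Hedged sketch of the first-run check described above. The key names and layout of
# user_config.pkl are assumptions for illustration, not Bubbln's actual structure.
import os
import pickle

CONFIG_FILE = "user_config.pkl"

def load_config() -> dict:
    if os.path.exists(CONFIG_FILE):
        with open(CONFIG_FILE, "rb") as f:
            return pickle.load(f)
    return {}

def initialize() -> str:
    config = load_config()
    if not config.get("welcome_shown"):
        print("Welcome to Bubbln!")            # first run: show the welcome message
        config["welcome_shown"] = True
        with open(CONFIG_FILE, "wb") as f:
            pickle.dump(config, f)
    return input("Enter your OpenAI API key: ")  # in both cases, ask for the API key
```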
Upon successful input of the API key, the user is prompted for the SSH credentials of the routers. These parameters are then encrypted and saved in the user_config.pkl file. The SSH credential is later decrypted and parsed as input to dynamically generate a hosts.yml file at runtime. Responsible Code Section: bubbln.py: welcomemessagefeature() !image Figure 2 Bubbln's welcome message. Parameter Input & Validation: In the parameter input stage, Bubbln first checks for the existence of a file called “router_configuration.pkl”. If it exists, the user is prompted to decide whether to load an existing configuration or input a new set of configurations. If the file is empty or non-existent, then users are prompted to input the configuration parameters for each router on the network. These parameters serve as variables that are combined with hardcoded instructions written in natural language to form the prompt sent to ChatGPT. Key parameters include: Router Configurations: OSPF Area OSPF Process ID Number of networks to advertise (OSPF/EIGRP) AS Number (EIGRP) Interface names IP Addresses (in CIDR format) This module also ensures that parameters are keyed in using the correct data type and format e.g. IP addresses are expected in CIDR format and OSPF Area should be of type integer. Upon completion of parameter input, all parameters are saved into a file called “router_configuration.pkl” upon validation of accuracy by the user. Responsible Code Section: parameter_input.py !image Figure 3 Bubbln receiving Network Parameters. Before generating the prompt, a summary of the inputted parameters is displayed for user validation. This step ensures accuracy and minimizes errors. Users are given the option to make corrections if any discrepancies are found. Responsible Code Section: parameterinput.py: validateinputs() !image Figure 4 Bubbln Awaiting Validation of Inputted Network Parameters. Auto-Generation of Prompt: After validation of inputted parameters, Bubbln composes the prompt by combining the inputted parameters with a set of well-engineered hardcoded instructions written in natural language. Responsible Code Section: prompt_generator.py ChatGPT Prompting: The auto-composed prompt is then sent to ChatGPT utilizing gpt-4 chatCompletions model with a temperature parameter of 0.2 and maximum tokens of 1500. The following functions were designed into this process stage Responsible Code Section: chatGPT_prompting.py !image Figure 5 ChatGPT prompting in progress Playbook Generation & Extraction: After ChatGPT processes the prompt from Bubbln, it provides a response which usually contains the generated playbook and explanatory notes. Bubbln then extracts the playbook from the explanatory notes by searching for “---” which usually connotes the start of playbooks and saves each generated playbook uniquely using the nomenclature RouteriPlaybook.yml. Responsible Code Section: playbook_extractor.py !image Figure 6 ChatGPT-generated playbook. Playbook Execution: Bubbln loads the saved “RouteriPlaybook.yml” playbook and dynamically generates the hosts.yml file and parses them to the python library ansiblerunner for further execution on the configured network. 
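Pulling the prompting, extraction, and execution steps together, a minimal end-to-end sketch might look like the following. The gpt-4 call parameters, the split at "---", and the hand-off to ansible_runner follow the description above; the prompt text, file names, and directory layout are simplified assumptions rather than Bubbln's actual code.

```python
# Hedged end-to-end sketch: prompt GPT-4, extract the playbook after the first "---",
# and hand it to ansible-runner. Prompt text and file names are illustrative only.
import os
import ansible_runner
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Generate an Ansible playbook that configures OSPF area 0 on Router1 ..."  # assumed
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
    max_tokens=1500,
)
answer = resp.choices[0].message.content

# The playbook normally starts at the first "---"; keep everything from there onward.
playbook_yaml = "---" + answer.split("---", 1)[1] if "---" in answer else answer

# ansible-runner expects playbooks under <private_data_dir>/project by convention.
os.makedirs("project", exist_ok=True)
with open("project/Router1_Playbook.yml", "w") as f:
    f.write(playbook_yaml)

run = ansible_runner.run(
    private_data_dir=".",
    playbook="Router1_Playbook.yml",
    inventory=os.path.abspath("hosts.yml"),  # the dynamically generated inventory
)
print(run.status, run.rc)
```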
Bubbln generates the hosts.yml file at run time by using the pre-inputted SSH credentials in userconfig.pkl file - and decrypts them, as well as IP addresses from the sshipaddresses.txt file, as inputs Responsible Code Section: playbook_execution.py !image Figure 7 Playbook execution in progress Sample result of Executed Playbook Upon successful execution of all playbooks, a query of the routing table on router 4 indicates that router 4 could reach all the prefixes on the network. !image Figure 8 Output of 'sh ip route' executed on R1 File Management and Handling Throughout the execution process, Bubbln manages the creation, saving, and loading of various files to streamline the network automation process. user_config.pkl: This dictionary file dynamically created at run time is used to store encrypted API keys, SSH credentials and initial welcome message information. router_configuration.pkl: It is auto created by Bubbln and used to store network configuration parameters for easy loading during subsequent sessions. hosts.yml: This is a runtime autogenerated file that contains inventory of the network devices. It is auto deleted after the program runs. networkconfigurationreport.pdf: This auto-generated report by Bubbln is a documentation of all the routers configured their parameters, generated playbooks, and prompt for each execution of the Bubbln application. It is created after a successful execution of playbooks and network testing and is meant for auditing and documentation purposes. RouteriPlaybook.yml: After extraction of generated playbooks from ChatGPT’s raw response, Bubbln automatically saves a copy of the generated playbook using unique names for each playbook. !image Figure 9 File structure after successful deployment of a four-router network Providing Feedback We are glad to hear your thoughts and suggestions. Kindly do this through the discussion section of our GitHub - https://github.com/olasupo/bubbln_network-automation/discussions/1#discussion-6487475 We can also be reached on: Olasupo Okunaiya – olasupo.o@gmail.com