VibeBuilders.ai


AI-Strategies-StockMarket
github
LLM Vibe Score0.407
Human Vibe Score0.026251017937713218
Solano96Mar 28, 2025

AI-Strategies-StockMarket

Artificial intelligence trading app for testing AI-based strategies for investing in the stock market. The program includes two simple investment strategies to compare results against: one is simply buy and hold; the other is a classic strategy based on moving-average crossovers combined with the Relative Strength Index (RSI). At the moment the app has the following AI-based strategies: a deep neural network that tries to predict the market trend using neural networks that take different technical indicators as inputs, and a strategy that combines buy/sell signals from moving-average crossovers in a weighted way, with the weights obtained through the PSO (Particle Swarm Optimization) algorithm.

Getting Started 🚀 These instructions will get a copy of the project up and running on your local machine for development and testing purposes. The local installation has been successfully tested on Ubuntu 18.04.

Prerequisites 📋 Have Python 3 installed; you can check with the following command in your terminal: In case you do not have Python 3 installed, you can use the following commands: To use the program with the graphical interface, it is necessary to install tkinter with the following command:

Installing 🔧 First clone the repository: Now we need to install some dependencies. To do this, execute the following command:

Usage 📦 First we need to activate the virtual environment with: You can then use the command-line program, which can be executed as follows: You can also use the short options: Example: Note: you can use as data name any market abbreviation recognized by Yahoo Finance. After the execution you can find the results in the 'reports' directory, including a PDF report with a summary of the execution.

Author ✒️ Francisco Solano López Rodríguez
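To make the "classic" comparison strategy concrete, here is a hedged, self-contained sketch (not the repository's actual code) of a moving-average crossover filtered by RSI, compared against buy and hold. The ticker and window lengths are arbitrary illustrative choices.

```python
# Illustrative sketch only: SMA crossover + RSI signal vs. buy and hold.
import yfinance as yf
import pandas as pd

close = yf.download("AAPL", period="2y")["Close"].squeeze()   # any Yahoo Finance symbol works

sma_fast = close.rolling(20).mean()
sma_slow = close.rolling(50).mean()

delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

# Long while the fast average is above the slow one and the market is not overbought.
signal = ((sma_fast > sma_slow) & (rsi < 70)).astype(int)

strategy = (1 + close.pct_change() * signal.shift(1)).prod() - 1
buy_hold = close.iloc[-1] / close.iloc[0] - 1
print(f"Buy & hold: {buy_hold:.2%}   MA+RSI: {strategy:.2%}")
```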

short-video-automation
github
LLM Vibe Score0.383
Human Vibe Score0.004820399169034897
ChetanXproMar 28, 2025

short-video-automation

Short Video Automation Automate the creation of short videos with text-to-speech, audio merging, image overlay, and background audio. It takes average 40 second to create a 35 second short video. Example videos Here are some example videos created using Short Video Automation: A fact video about earth. https://github.com/ChetanXpro/short-video-automation/assets/107798155/1220d3d7-46ac-4c6f-90ad-9f9529a1bca6 Overview Short Video Automation is a tool that simplifies the process of creating short videos. It combines various multimedia elements to produce engaging videos quickly. The key features of this tool include: AI-Generated Scripts: Generate scripts with the help of artificial intelligence (AI). These scripts will form the basis of your short videos. Text-to-Speech: Convert the generated scripts into audio using text-to-speech technology. Audio Merging: Combine the generated audio with a sample video using FFmpeg to create the audio track for your short video. Image Overlay: For specific keywords in the script, automatically download images and overlay them on the video. Background Audio: Add a background audio track to enhance the video's appeal. Usage Prerequisites Node.js and npm installed FFmpeg installed Installation Clone the repository: Download and paste a base video which you want to use in project root dir You can test with this video: https://drive.google.com/file/d/1ZNN3GX2iR74FxrTM_6adDEnl6BA8gKcc/view?usp=sharing Then find any interesting quora question and answer and paste its link in tool Run the tool
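The project itself is written in Node.js; purely as a language-agnostic illustration of the FFmpeg audio-merge step described above (replacing a base video's audio track with generated speech), here is a hedged sketch that shells out to FFmpeg. All file names are placeholders.

```python
# Illustration only, not the project's code: merge a generated voice-over onto a base video.
import subprocess

def merge_voiceover(base_video: str, voiceover: str, output: str) -> None:
    """Copy the video stream from base_video and use voiceover as the audio track."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", base_video,        # input 0: base video
            "-i", voiceover,         # input 1: generated text-to-speech audio
            "-map", "0:v:0",         # take video from input 0
            "-map", "1:a:0",         # take audio from input 1
            "-c:v", "copy",          # do not re-encode the video
            "-shortest",             # stop at the shorter of the two streams
            output,
        ],
        check=True,
    )

merge_voiceover("base.mp4", "voiceover.mp3", "short.mp4")
```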

DownEdit
github
LLM Vibe Score0.491
Human Vibe Score0.032913669732192626
nxNullMar 28, 2025

DownEdit

DownEdit is a fast and powerful program for downloading and editing videos from top platforms like TikTok, Douyin, and Kuaishou. Effortlessly grab videos from user profiles and make bulk edits across an entire directory with just one click. Plus, the advanced Chat & AI features let you download, edit, and generate videos, images, and sounds in bulk. Exciting new features are coming soon, so stay tuned!

✨ Preview

🔥 Current Features Edit Video: enhance videos with various functions designed to streamline editing tasks across entire directories. Edit Photo: quickly enhance images in bulk with various functions, including AI-powered ones. Edit Sound: improve audio in bulk using powerful functions, including cutting-edge AI-powered tools. Download all videos: retrieve videos from users (TikTok, Kuaishou, Douyin, etc.) without watermarks. Bulk AI Generator: generate images and videos in bulk using powerful generative AI. AI Editor: enhance your content effortlessly with an AI editor designed for images, sounds and videos.

🌐 Service

| Website | Provider | Single Video | User's Videos | Stream | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| tiktok.com | None | ✔️ | ✔️ | ❌ | API (Cookie) | Inactive |
| douyin.com | None | ✔️ | ✔️ | ❌ | API (Cookie) | Inactive |
| kuaishou.com | None | ✔️ | ✔️ | ❌ | Login Required (Cookie) | Active |
| youtube.com | None | ✔️ | ✔️ | ❌ | (Public/Private) | Active |

🤖 AI Cloud

| Type | Model | Provider | Minimal | Bulk | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Image Generation | None | | None | ✔️ | API (Public) | Active |
| Video Generation | None | | None | ✔️ | | Inactive |
| Sound Generation | None | | None | ✔️ | | Inactive |

Local

| Type | Model | Provider | Minimal | Bulk | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Image Generation | None | | None | ✔️ | | Inactive |
| Video Generation | None | | None | ✔️ | | Inactive |
| Sound Generation | None | | None | ✔️ | | Inactive |

🚀 Usage Edit Video - Simply copy and paste (right click) whatever directory location you would like to process. Tutorial. Change it according to your desired video speed and input your music file location. Download Douyin videos - Download all videos from a user by entering the user link. Tutorial. Download TikTok videos - Download all videos from a user by entering the username with @. Tutorial. Download Kuaishou videos - Remember to input your own cookie, otherwise it won't work. Tutorial: Step 1. Right click and select Inspect element. Step 2. Copy your browser cookie. Step 3. Copy the user ID you want to download. Tips: if you still get an error, try changing your browser, use Incognito/Private mode, and reset your internet connection/IP. Edit Photo - Simply copy and paste (right click) whatever directory location you would like to process. Tutorial: Remove Background AI.

🔎 Requirements Python [!NOTE] Version must be between 3.8 and 3.12.

⚙ Installation Step 1. Download and install Python on your PC. Step 2. Library installation. You have three options to install the required libraries: Option 1: manual installation. Option 2: automatic installation & virtual environments. Option 3: terminal & virtual environments. Step 3. Run the script. For regular use, you can also download the application and use it on your PC without installing Python. Windows: Download. macOS: None. [!TIP] Fix Terminal Font Issues: install the Microsoft Cascadia font on your computer if your terminal does not support the default font, which results in a program error.
🔨 Module The following dependencies are required for the project: Pystyle, Requests, Inquirer, Colorama, Moviepy, Rich, Playwright, Rembg, WMI, Psutil, Httpx, Aiofiles.

Author 👤 Sokun Heng Github: @SokunHeng

Show your support Please ⭐️ this repository if this project helped you!

📚 Reference Documentation

📝 License Copyright © 2022 SokunHeng.
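The dependency list includes Rembg, which powers the "Remove Background AI" photo function mentioned above. As a hedged, standalone illustration of that idea (not DownEdit's own code; directory names are placeholders), bulk background removal over a folder of images looks roughly like this:

```python
# Illustration only: remove backgrounds from every PNG in a directory using rembg.
from pathlib import Path
from rembg import remove
from PIL import Image

input_dir = Path("photos")           # directory you want to process
output_dir = Path("photos_no_bg")
output_dir.mkdir(exist_ok=True)

for img_path in input_dir.glob("*.png"):
    with Image.open(img_path) as img:
        result = remove(img)          # returns an RGBA image with the background removed
        result.save(output_dir / img_path.name)
```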

awesome-quantum-machine-learning
github
LLM Vibe Score0.64
Human Vibe Score1
krishnakumarsekarMar 27, 2025

awesome-quantum-machine-learning

Awesome Quantum Machine Learning A curated list of awesome quantum machine learning algorithms, study materials, libraries and software (by language).

Table of Contents

INTRODUCTION Why Quantum Machine Learning?

BASICS What is Quantum Mechanics? What is Quantum Computing? What is Topological Quantum Computing? Quantum Computing vs Classical Computing

QUANTUM COMPUTING Atom Structure Photon Wave Electron Fluctuation or Spin States SuperPosition SuperPosition specific for machine learning (Quantum Walks) Classical Bit Quantum Bit or Qubit or Qbit Basic Gates in Quantum Computing Quantum Diode Quantum Transistor Quantum Processor Quantum Registry QRAM Quantum Entanglement

QUANTUM COMPUTING MACHINE LEARNING BRIDGE Complex Numbers Tensors Tensors Network Oracle Hadamard Transform Hilbert Space Eigenvalues and Eigenvectors Schrödinger Operators Quantum Lambda Calculus Quantum Amplitude Phase Qubits Encode and Decode Convert Classical Bit to Qubit Quantum Dirac and Kets Quantum Complexity Arbitrary State Generation

QUANTUM ALGORITHMS Quantum Fourier Transform Variational Quantum Eigensolver Grover's Algorithm Shor's Algorithm Hamiltonian Oracle Model Bernstein-Vazirani Algorithm Simon's Algorithm Deutsch-Jozsa Algorithm Gradient Descent Phase Estimation Haar Transform Quantum Ridgelet Transform Quantum NP Problem

QUANTUM MACHINE LEARNING ALGORITHMS Quantum K-Nearest Neighbour Quantum K-Means Quantum Fuzzy C-Means Quantum Support Vector Machine Quantum Genetic Algorithm Quantum Hidden Markov Models Quantum State Classification with Bayesian Methods Quantum Ant Colony Optimization Quantum Cellular Automata Quantum Classification using Principal Component Analysis Quantum Inspired Evolutionary Algorithm Quantum Approximate Optimization Algorithm Quantum Elephant Herding Optimization Quantum-behaved Particle Swarm Optimization Quantum Annealing Expectation-Maximization

QUANTUM NEURAL NETWORK Quantum Perceptrons Qurons Quantum Auto Encoder Quantum Annealing Photonic Implementation of Quantum Neural Network Quantum Feed Forward Neural Network Quantum Boltzmann Neural Network Quantum Neural Net Weight Storage Quantum Upside Down Neural Net Quantum Hamiltonian Neural Net QANN QPN SAL Quantum Hamiltonian Learning Compressed Quantum Hamiltonian Learning

QUANTUM STATISTICAL DATA ANALYSIS Quantum Probability Theory Kolmogorovian Theory Quantum Measurement Problem Intuitionistic Logic Heyting Algebra Quantum Filtering Paradoxes Quantum Stochastic Process Double Negation Quantum Stochastic Calculus Hamiltonian Calculus Quantum Ito's Formula Quantum Stochastic Differential Equations (QSDE) Quantum Stochastic Integration Itō Integral Quasiprobability Distributions Quantum Wiener Processes Quantum Statistical Ensemble Quantum Density Operator or Density Matrix Gibbs Canonical Ensemble Quantum Mean Quantum Variance Envariance Polynomial Optimization Quadratic Unconstrained Binary Optimization Quantum Gradient Descent Quantum Based Newton's Method for Constrained Optimization Quantum Based Newton's Method for Unconstrained Optimization Quantum Ensemble Quantum Topology Quantum Topological Data Analysis Quantum Bayesian Hypothesis Quantum Statistical Decision Theory Quantum Minimax Theorem Quantum Hunt-Stein Theorem Quantum Locally Asymptotic Normality Quantum Ising Model Quantum Metropolis Sampling Quantum Monte Carlo Approximation Quantum Bootstrapping Quantum Bootstrap Aggregation Quantum Decision Tree Classifier Quantum Outlier Detection Cholesky-Decomposition for Quantum Chemistry Quantum Statistical Inference
Asymptotic Quantum Statistical Inference Quantum Gaussian Mixture Model Quantum t-design Quantum Central Limit Theorem Quantum Hypothesis Testing Quantum Chi-squared and Goodness of Fit Testing Quantum Estimation Theory Quantum Way of Linear Regression Asymptotic Properties of Quantum Outlier Detection in Quantum Concepts

QUANTUM ARTIFICIAL INTELLIGENCE Heuristic Quantum Mechanics Consistent Quantum Reasoning Quantum Reinforcement Learning

QUANTUM COMPUTER VISION

QUANTUM PROGRAMMING LANGUAGES, TOOLS and SOFTWARE

ALL QUANTUM ALGORITHMS SOURCE CODES, GITHUBS

QUANTUM HOT TOPICS Quantum Cognition Quantum Camera Quantum Mathematics Quantum Information Processing Quantum Image Processing Quantum Cryptography Quantum Elastic Search Quantum DNA Computing Adiabatic Quantum Computing Topological Big Data Analytics using Quantum Hamiltonian Time Based Quantum Computing Deep Quantum Learning Quantum Tunneling Quantum Entanglement Quantum Eigen Spectrum Quantum Dots Quantum Electrodynamics Quantum Teleportation Quantum Supremacy Quantum Zeno Effect Quantum Cohomology Quantum Chromodynamics Quantum Darwinism Quantum Coherence Quantum Decoherence Topological Quantum Computing Topological Quantum Field Theory Quantum Knots Topological Entanglement Boson Sampling Quantum Convolutional Code Stabilizer Code Quantum Chaos Quantum Game Theory Quantum Channel Tensor Space Theory Quantum Leap Quantum Mechanics for Time Travel Quantum Secured Block Chain Quantum Internet Quantum Optical Network Quantum Interference Quantum Optical Network Quantum Operating System Electron Fractionalization Flip-Flop Quantum Computer Quantum Information with Gaussian States Quantum Anomaly Detection Distributed Secure Quantum Machine Learning Decentralized Quantum Machine Learning Artificial Agents for Quantum Designs Light Based Quantum Chips for AI Training

QUANTUM STATE PREPARATION ALGORITHMS FOR MACHINE LEARNING Pure Quantum State Product State Matrix Product State Greenberger–Horne–Zeilinger State W State AKLT Model Majumdar–Ghosh Model Multistate Landau–Zener Models Projected Entangled-Pair States Infinite Projected Entangled-Pair States Corner Transfer Matrix Method Tensor-Entanglement Renormalization Tree Tensor Network for Supervised Learning

QUANTUM MACHINE LEARNING VS DEEP LEARNING QUANTUM MEETUPS QUANTUM GOOGLE GROUPS QUANTUM BASED COMPANIES QUANTUM LINKEDIN QUANTUM BASED DEGREES CONSOLIDATED QUANTUM ML BOOKS CONSOLIDATED QUANTUM ML VIDEOS CONSOLIDATED QUANTUM ML Research Papers CONSOLIDATED QUANTUM ML Research Scientists RECENT QUANTUM UPDATES FORUM, PAGES AND NEWSLETTER

INTRODUCTION Why Quantum Machine Learning? Machine Learning (ML) may feel like a recent term, but the work behind it dates back to the 18th century. What is machine learning? In simple words, it is making a computer or application learn on its own. So is it purely a topic for computing fields like computer science and IT? Not really: ML is a common platform that touches every aspect of life, from agriculture to mechanics, and computing is simply the key component that makes ML easy and effective to apply. To be clearer, who is the mother of ML? Without question, mathematics. The tremendous invention of complex numbers gave birth to this field, and applying mathematics to real-life problems always yields a solution: everything from neural networks to the complexity of DNA runs on specific mathematical formulas and theorems.
As computing technology grew faster and faster, mathematics entered the field and delivered those solutions to the real world through computing. Once certain milestones were reached in the computing timeline, people became interested in using advanced mathematical ideas such as complex numbers and eigenvalues, and that was the kick-start for ML fields such as artificial neural networks and DNA computing. Now the main question: why is this field booming now? From a business perspective, 8-10 years ago, at the kick-start of ML, the big barrier was merging mathematics into the computing field: people who knew computing well had little idea of the mathematics, and research mathematicians had little idea of computing. Education and job opportunities were split the same way, and even if a person studied both, the business value of building a product was not good. Then top product companies like Google, IBM and Microsoft decided to form teams with a mathematician, a physicist and a computer scientist to come up with ideas in this field. The success of these teams produced some wonderful products, which the companies began offering as cloud services, and that is the stage we are at now. So what's next? Mathematics has reached concepts as exotic as time travel, but computing is still running on classical mechanics. The companies understood that computing must change from classical to quantum, and they started working on the big field of quantum computing, which the market has named Quantum Information Science. The kick-start came from Google and IBM with quantum computing processors (D-Wave) for building quantum neural networks. The fields of Quantum Computer Science and Quantum Information Science will bring a big change to AI in the next 10 years. Waiting to see that... (Google, IBM).

References D-Wave - Owner of a quantum processor Google - Quantum AI Lab IBM - Quantum Computer Lab Quora - Question regarding the future of quantum AI NASA - NASA Quantum Works Youtube - Google video of a quantum processor external-link - MIT Review microsoft new product - Newly launched Microsoft Quantum Language and Development Kit microsoft - Microsoft quantum-related works Google2 - Google Quantum Machine Learning blog BBC - About Google Quantum Supremacy, IBM Quantum Computer and Microsoft Q Google Quantum Supremacy - Latest 2019 Google quantum supremacy achievement IBM Quantum Supremacy - IBM talk on quantum supremacy as a primer VICE on the fight - IBM message on Google quantum supremacy IBM Zurich Quantum Safe Cryptography - An interesting startup to replace all our Certificate Authorities via cloud and IBM Q

BASICS

What is Quantum Mechanics? In a single line: study an electron that has moved out of the atom and you have classical mechanics; study it vibrating inside the atom and you have quantum mechanics. WIKIPEDIA - Basic history and outline LIVESCIENCE - A survey YOUTUBE - Simple animation video explaining it well.

What is Quantum Computing? A way of executing multiple processes in parallel at the same time using qubits; it reduces the computation time and the size of the processor, probably down to neuron size. WIKIPEDIA - Basic history and outline WEBOPEDIA - A survey YOUTUBE - Simple animation video explaining it well.
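The one-line definition above glosses over what "parallel execution using qubits" means concretely. As a hedged illustration in plain NumPy (not taken from any of the linked resources), the sketch below puts a single qubit into superposition with a Hadamard gate and reads off the measurement probabilities:

```python
# Minimal sketch: a qubit starts in |0>, a Hadamard gate creates an equal
# superposition of |0> and |1>, and measurement probabilities follow the Born rule.
import numpy as np

ket0 = np.array([1.0, 0.0])                       # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate

state = H @ ket0                                  # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2

print(state)           # [0.7071 0.7071]
print(probabilities)   # [0.5 0.5] -> equal chance of measuring 0 or 1
```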
Quantum Computing vs Classical Computing LINK - Basic outline

QUANTUM COMPUTING

Atom Structure one line: electrons orbit the nucleus in an elliptical format. YOUTUBE - A nice animation video about the basic atom structure

Photon Wave one line: light, normally described as a wave, is transmitted as photons, analogous to atoms in solid particles. YOUTUBE - A nice animation video about the basic photon 1 YOUTUBE - A nice animation video about the basic photon 2

Electron Fluctuation or Spin one line: when laser light collides with solid particles, the electrons of the atom spin between the orbital layers of the atom. YOUTUBE - A nice animation video about basic electron spin 1 YOUTUBE - A nice animation video about basic electron spin 2 YOUTUBE - A nice animation video about basic electron spin 3

States one line: put a point on the spinning electron; if the point is at the top it is state 1, if at the bottom it is state 0. YOUTUBE - A nice animation video about quantum states

SuperPosition two lines: during the spin of the electron, the point may sit between the upper and lower positions, so a decision must be made about whether the point is 0 or 1; the better option is to analyse it along with the other electrons using probability, and this is called superposition. YOUTUBE - A nice animation video about quantum superposition

SuperPosition specific for machine learning (Quantum Walks) one line: due to computational complexity, quantum computing only considers superposition between a limited number of electrons; when more than one set must be merged, quantum walks are the idea. YOUTUBE - A nice video about quantum walks

Classical Bits one line: if an electron moves from one atom to another, from ground state to excited state, a bit value of 1 is used, otherwise 0.

Qubit one line: the superposition value of the states of a set of electrons is a qubit. YOUTUBE - A nice video about quantum bits 1 YOUTUBE - A nice video about bits and qubits 2

Basic Gates in Quantum Computing one line: just as classical computing has NOT, OR and AND, basic gates like NOT, the Hadamard gate, SWAP and phase shift can be made with quantum gates. YOUTUBE - A nice video about quantum gates

Quantum Diode one line: quantum diodes use a different idea from a normal diode; a bunch of laser photons triggers the electron to spin and the quantum magnetic flux captures the information. YOUTUBE - A nice video about the quantum diode

Quantum Transistors one line: a transistor normally has a source, a drain and a gate; here the source is the photon wave, the drain is the flux, and the gate converts classical to quantum bits. QUORA - Discussion about the quantum transistor YOUTUBE - Well explained

Quantum Processor one line: a nano integrated circuit performing quantum gate operations, surrounded by cooling units to remove the tremendous amount of heat. YOUTUBE - Well explained

Quantum Registry QRAM one line: compared with normal RAM, it is ultrafast and very small in size; address locations can be accessed using qubit superposition values, and for a very large memory set, coherent superposition (address of addresses) is used. PDF - Very well explained

QUANTUM COMPUTING MACHINE LEARNING BRIDGE

Complex Numbers one line: wave interference is an n-dimensional structure; to find a polynomial equation for n-order curves, the better option is complex numbers. YOUTUBE - Wonderful series, very well explained

Tensors one line: vectors have a direction in a 2D vector space; in an n-dimensional vector space, a vector's direction can be specified with a tensor. The best solution for finding the superposition of an n-vector electron spin space is to represent the vectors as tensors and do tensor calculus. YOUTUBE - Wonderful, well-explained tensor basics YOUTUBE - Quantum tensor basics

Tensor Networks one line: just as multiple vectors can be connected, multiple tensors form a network; solving such a network reduces the complexity of processing qubits. YOUTUBE - Tensor networks Some ideas specifically for quantum algorithms

QUANTUM MACHINE LEARNING ALGORITHMS

Quantum K-Nearest Neighbour info: here the centroid (Euclidean distance) can be detected using the swap-gate test between two qubit states; as KNN is regression-like, the loss can be tallied using the average (see the numerical swap-test sketch at the end of this entry). PDF1 from Microsoft - Theory explanation PDF2 - Good material to understand the basics Matlab - Yet to come soon Python - Yet to come soon

Quantum K-Means info: two approaches are possible: 1. use FFT and iFFT to make an oracle and calculate the means of the superposition; 2. adiabatic Hamiltonian generation, then solve the Hamiltonian to determine the clusters. PDF1 - Applying quantum k-means on images in a nice way PDF2 - Theory PDF3 - Explains k-means clustering using a Hamiltonian well Matlab - Yet to come soon Python - Yet to come soon

Quantum Fuzzy C-Means info: similar to k-means, FCM also uses the oracle dialect, but instead of means, an oracle optimization followed by a rotation gate gives a good result here. PDF1 - Theory Matlab - Yet to come soon Python - Yet to come soon

Quantum Support Vector Machine info: a little different from the above, as here the kernel preparation is classical, the whole training is in oracles, and the oracle does the classification; as SVM is linear, an optimal-error (optimum of the least-squares dual formulation) based regression is needed to improve performance. PDF1 - Nice explanation, but a little hard to understand :) PDF2 - Nice application of QSVM Matlab - Yet to come soon Python - Yet to come soon

Quantum Genetic Algorithm info: one of the algorithms best suited to the quantum field; here the chromosomes act as qubit vectors, the crossover part is carried out by an evaluation, and the mutation part by the rotation of gates. PDF1 - Very beautiful article, well explained and superb PDF2 - A big theory :) PDF3 - Super comparison Matlab - Simulation Python1 - Simulation Python2 - Yet to come

Quantum Hidden Markov Models info: as HMMs are already state based, here quantum states act as the states of the Markov chain, and the shift between states uses a quantum operation based on a probability distribution. PDF1 - Nice idea and explanation PDF2 - Nice, but a slightly different concept Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come

Quantum state classification with Bayesian methods info: a quantum Bayesian network uses the same states concept with quantum states, but here the classification of states, which makes the training data reusable, is based on the density of the states (interference). PDF1 - Good theory PDF2 - Good explanation Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come

Quantum Ant Colony Optimization info: a good algorithm for processing multi-dimensional equations; ACO is best suited to the travelling-salesman problem, and QACO is best suited to the salesman problem in three or more dimensions. Here a quantum rotation circuit performs the pheromone update and qubit-based colonies communicate across the colony in complex space. PDF1 - Good concept PDF2 - Good application Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come

Quantum Cellular Automata info: one of the more complex algorithms, with various types, specifically used for polynomial equations and to design optimistic gates for a problem; here the lattice is formed using quantum states and time is calculated from the change of state between two qubits. Best suited for nanoelectronics. Wikipedia - Basic PDF1 - Just to get the keywords PDF2 - Nice explanation and an easily understandable application Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come

QUANTUM NEURAL NETWORK one line: really one of the hardest topics. To understand it easily: a normal neural network does parallel processing, while a QNN does parallel processing of parallel processes. In theory, a combination of various activation functions is possible in a QNN, whereas in a normal NN more than one activation function reduces performance and increases complexity.

Quantum Perceptrons info: the perceptron (layer) is the basic unit of a neural network. The quantum version of the perceptron must satisfy both linear and non-linear problems; quantum concepts combine the linear (calculus of superposition) and the non-linear (state approximation using probability). To make a perceptron in the quantum world, a transformation (activation function) of the non-linearity up to a certain limit is needed, which is carried out by the phase estimation algorithm. PDF1 - Good theory PDF2 - Good explanation Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come

QUANTUM STATISTICAL DATA ANALYSIS one line: an under-researched concept that can be seen in multiple ways. One good way: if you want to apply an n-th derivative to a problem, it is difficult to compute in current classical theory because it is a serial problem; if instead you parallelize the differentiation, you must estimate the values in all flows via probability, and quantum probability helps achieve this with very little loss in the calculation.
The other way, comparatively booming, is Quantum Bayesianism: a solution to most of the uncertainty problems in statistics, combining time and space in highly advanced physical research.

QUANTUM PROGRAMMING LANGUAGES, TOOLS and SOFTWARE info: all programming languages, software and tools, in alphabetical order. Software - Nice content covering all of them Python library - A Python library Matlab based python library - Matlab Python library Quantum Tensor Network Github - Tensor network Bayesforge - A beautiful Amazon Web Services-enabled framework for quantum algorithms and data analytics Rigetti - A great tools repository for using a quantum computer in real time Rigetti Forest - An API to connect to a quantum computer quil/pyQuil - A quantum instruction language for using the Forest framework Grove - Grove is a repository showcasing the quantum Fourier transform, phase estimation, the quantum approximate optimization algorithm, and others developed using Forest QISKit - An IBM kit to access quantum computers, mainly for quantum circuits IBM Bluemix Simulator - A Bluemix simulator for quantum circuits Microsoft Quantum Development Kit - Microsoft Visual Studio-enabled kit for quantum circuit creation Microsoft "Q#" - Microsoft Q#, a new programming language for quantum circuit creation qiskit api python - An API to connect to an IBM quantum computer; with the generated token it is easy to connect, but the utilities are very limited (many new utilities will come soon) Cyclops Tensor Framework - A framework for tensor network simulations Python ToolKit for chemistry and physics Quantum Algorithm simulations - A newly started project for simulating molecules and solids Bayesian Based Quantum Projects Repository - A nice repository and the kickstarter of Bayesforge Google Fermion Products - A newly launched product specifically for chemistry simulation Tree Tensor Networks - Interesting tensor network in incubation Deep Tensor Neural Network - Some useful information about tensor neural networks in incubation Generative Tensorial Networks - A startup applying machine learning via tensor networks for drug discovery Google Bristlecone - A new quantum processor from Google, aimed at future hardware with full-fledged AI support XANADU - A light-based quantum hardware (chips) and software company still in its preparation stage; soon to be on the market fathom computing - A new concept to train AI in a processor using light- and quantum-based concepts; products will launch soon Alibaba Quantum Computing Cloud Service - Cloud service to access an 11-bit quantum computing processor Atomistic Machine Learning Project - Seems like something interesting, with deep tensor networks for quantum chemistry applications circQ and Google Works - Google's top efforts on tools IBM Safe Cryptography on Cloud - IBM has started developing quantum-safe cryptography to replace all our Certificate Authorities via the cloud Google Tensor Network Open Source - Google started the way most scientists prefer to use a quantum computer circuit: TensorFlow, which makes it easy to design the network, hides the work of gates and processor preparation, and also shows the beauty of the maths Google Tensor Network Github - GitHub project of Google Tensor Network Quantum Tensorflow - Yet to come soon Quantum Spark - Yet to come soon Quantum Map Reduce - Yet to come soon Quantum Database - Yet to come soon Quantum Server - Yet to come soon Quantum Data Analytics - Yet to come soon

QUANTUM HOT TOPICS

Deep Quantum Learning Why and what is deep learning?
In one line: if you know deep learning you can get a good job :) — even someone from a different undergraduate or graduate background who does a master's specialization in deep learning can work in this big sector :). Practically speaking, machine learning (vector mathematics), deep learning (vector-space/graphics mathematics) and big data are terms created by big companies to set a trend in the market; in science and research there is no such hard division. If you ask a junior person at one of these big companies what deep learning is, you may get a reply like "doing linear regression with stochastic gradient descent on unsupervised data using a convolutional neural network :)". They know the words and know how to program with them on a bunch of "relative data", but if you ask about FCM, SVM, HMM and similar algorithms, they will simply say those are the old algorithms that deep learning has replaced :). In truth they do not know the full lineage and effectiveness of those algorithms and their mathematics, nor how many theorems in vectors, spaces and tensors were solved to produce this "complexity-hiding" technology. They have not played with truly non-relative data such as medical images, astronomical images or geology images, where finding relations and features is genuinely complex and looping over n images for pattern matching is a giant amount of work. What is nowadays marketed as deep learning (multiple hidden artificial neural networks) is not well suited to that.

Why quantum deep learning, or deep quantum learning? In the middle of artificial neural network research, people realised that at the extreme only certain mathematical operations are possible with an ANN, and the aim of the ANN is to achieve parallel execution of many mathematical operations. In artificial intelligence, the word "intelligence" stands for mathematics: how effectively a problem can be solved depends on the mathematical logic applied to it, and more logic gives more performance (more intelligence). This goal opened the gate for the quantum artificial neural network: by applying the ideas behind deep learning in a quantum-mechanical environment, it becomes possible to apply complex mathematical equations to n non-relational data points to find more features and improve performance.

Quantum Machine Learning vs Deep Learning It is fun to discuss this. In recent days most employees at product companies like Google and Microsoft use the words "deep learning". What actually is deep learning? Is it a new invention? How do you learn it? Is it replacing machine learning? These questions occupy junior research scholars and mid-level employees. One answer to all of them: deep learning = parallel "for" loops, no more than that; it is an effective way of executing multiple tasks repeatedly and reducing computation cost. But it introduces a big gap between mathematics and computer science. How? Classical algorithms are based on serial processing and depend on the feedback from the previous loop, so applying a serial classical algorithm across multiple clusters does not give good results, while some lightweight parallel classical algorithms (deep learning) do the job across multiple clusters yet are not suitable for complex problems. What is the solution, then?
As the title says: quantum machine learning. The advantage is that deep learning simply does batch processing on the data, whereas quantum machine learning is designed to do batch processing as the algorithm requires. The product companies have realised this and have started migrating to quantum machine learning; executing classical algorithms on quantum concepts gives better results than deep learning algorithms on classical computers, and the target is to merge both to give wonderful results. References Quora - Good discussion Quora - The bridge discussion Pdf - Nice discussion Google - Google research discussion Microsoft - Microsoft's plan to merge both IBM - IBM's plan to merge both IBM Project - IBM project idea MIT and Google - Solutions for all questions

QUANTUM MEETUPS Meetup 1 - Quantum Physics Meetup 2 - Quantum Computing London Meetup 3 - Quantum Computing New York Meetup 4 - Quantum Computing Canada Meetup 5 - Quantum Artificial Intelligence Texas Meetup 6 - General Quantum Mechanics, Mathematics New York Meetup 7 - Quantum Computing Mountain View California Meetup 8 - Statistical Analysis New York Meetup 9 - Quantum Mechanics London UK Meetup 10 - Quantum Physics Sydney Australia Meetup 11 - Quantum Physics Berkeley CA Meetup 12 - Quantum Computing London UK Meetup 13 - Quantum Mechanics Carmichael CA Meetup 14 - Maths and Science Group Portland Meetup 15 - Quantum Physics Santa Monica, CA Meetup 16 - Quantum Mechanics London Meetup 17 - Quantum Computing London Meetup 18 - Quantum Meta Physics, Kansas City, Missouri, US Meetup 19 - Quantum Mechanics and Physics, Boston, Massachusetts, US Meetup 20 - Quantum Physics and Mechanics, San Francisco, California Meetup 21 - Quantum Mechanics, Langhorne, Pennsylvania Meetup 22 - Quantum Mechanics, Portland

QUANTUM BASED DEGREES Plenty of courses exist around the world and many universities launch new ones day by day. Instead of covering only quantum ML, covering all quantum-related topics gives a better picture, in the order below.

Available Courses

Quantum Mechanics for Science and Engineers Online Stanford university - Nice preparatory course edx - Quantum Mechanics for Everyone NPTEL 1 - Nice series of courses to understand the basics and backbone of quantum mechanics NPTEL 2 NPTEL 3 NPTEL 4 NPTEL 5 Class Based Course UK Bristol Australia Australian National University Europe Max Planck University

Quantum Physics Online MIT - Super explanation and good basics NPTEL - Nice series of courses to understand the basics and backbone of quantum physics Class Based Course Europe University of Copenhagen

Quantum Chemistry Online NPTEL 1 - Nice series of courses to understand the basics and backbone of quantum chemistry NPTEL 2 - Class Based Course Europe UGent Belgium

Quantum Computing Online MIT - Super explanation and good basics edx - Nice explanation NPTEL - Nice series of courses to understand the basics and backbone of quantum computing Class Based Course Canada uwaterloo Singapore National University Singapore USA Berkeley China Baidu

Quantum Technology Class Based Course Canada uwaterloo Singapore National University Singapore Europe Munich Russia Skoltech

Quantum Information Science External Links quantwiki Online MIT - Super explanation and good basics edx - Nice explanation NPTEL - Nice series of courses to understand the basics and backbone of quantum information and computing Class Based Course USA MIT Stanford University Joint Center for Quantum Information and Computer Science - University of Maryland Canada Perimeter Institute Singapore National University
Singapore Europe ULB Belgium IQOQI

Quantum Electronics Online MIT - Wonderful course NPTEL - Nice series of courses to understand the basics and backbone of quantum electronics Class Based Course USA Texas Europe Zurich ICFO Asia Tata Institute

Quantum Field Theory Online Stanford university - Nice preparatory course edx - Some QFT concepts available Class Based Course UK Imperial Europe Vrije

Quantum Computer Science Class Based Course USA Oxford Joint Center for Quantum Information and Computer Science - University of Maryland

Quantum Artificial Intelligence and Machine Learning External Links Quora 1 Quora 1 Artificial Agents Research for Quantum Designs

Quantum Mathematics Class Based Course USA University of Notre

CONSOLIDATED Quantum Research Papers scirate - Plenty of quantum research papers available Peter Wittek - Famous researcher in quantum machine learning who published a book on the topic [Murphy Yuezhen Niu](https://scholar.google.com/citations?user=0wJPxfkAAAAJ&hl=en) - A good researcher who has published some nice articles

Recent Quantum Updates: forums, pages and newsletters Quantum-Tech - A beautiful newsletter page publishing amazing links facebook Quantum Machine Learning - Run by me; not that great :), but you can get some ideas LinkedIn Quantum Machine Learning - A nice page run by experts; you can get plenty of ideas FOSDEM 2019 Quantum Talks - A one-day track at FOSDEM 2019 with more than 10 research topics, tools and ideas FOSDEM 2020 Quantum Talks - Live talks at FOSDEM 2020 with plenty of new research topics, tools and ideas

License Dedicated Opensources Source code of plenty of algorithms in image processing, data mining, etc. in Matlab, Python, Java and VC++ scripts Good explanations of plenty of algorithms with flow charts etc. Comparison matrix of plenty of algorithms Will Quantum Machine Learning Reveal the Secret Maths behind Astrology? Awesome Machine Learning and Deep Learning Mathematics is online Published basic presentation of the Quantum Machine Learning series

Contribution If you think this page is helpful, please help World Education Charity or kids who want to learn.
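The Quantum K-Nearest Neighbour entry above mentions estimating closeness between states with a swap test. As a hedged, self-contained numerical illustration (plain NumPy, not taken from any of the linked materials), the sketch below simulates a swap test on two single-qubit states and recovers their squared overlap from the probability of measuring the ancilla in |0⟩, using P(0) = (1 + |⟨a|b⟩|²)/2:

```python
# NumPy simulation of the swap test: Hadamard on an ancilla, controlled-SWAP of
# the two data states, Hadamard again, then read the ancilla's probability of 0.
import numpy as np

def normalized(v):
    v = np.asarray(v, dtype=complex)
    return v / np.linalg.norm(v)

a = normalized([1, 1])            # example state |a>
b = normalized([1, -0.5])         # example state |b>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Controlled-SWAP on 3 qubits (ancilla, a, b): swap the last two qubits when the ancilla is 1.
cswap = np.zeros((8, 8))
for c in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            src = (c << 2) | (x << 1) | y
            dst = (c << 2) | (((y << 1) | x) if c else ((x << 1) | y))
            cswap[dst, src] = 1

state = np.kron([1, 0], np.kron(a, b))        # ancilla starts in |0>
state = np.kron(H, np.kron(I, I)) @ state     # Hadamard on the ancilla
state = cswap @ state                         # controlled swap of |a> and |b>
state = np.kron(H, np.kron(I, I)) @ state     # second Hadamard on the ancilla

p0 = np.sum(np.abs(state[:4]) ** 2)           # probability the ancilla reads 0
overlap_sq = 2 * p0 - 1                       # recovered |<a|b>|^2
print(overlap_sq, abs(np.vdot(a, b)) ** 2)    # the two numbers agree
```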

Ultimate-Data-Science-Toolkit---From-Python-Basics-to-GenerativeAI
github
LLM Vibe Score0.555
Human Vibe Score0.3470230117125603
bansalkanavMar 27, 2025

Ultimate-Data-Science-Toolkit---From-Python-Basics-to-GenerativeAI

Getting started with Machine Learning and Deep Learning Star this repo if you find it useful :star:

Module 1 - Python Programming

| Topic Name | What's Covered |
| :---: | :---: |
| Intro to Python | Applications and Features of Python, Hello World Program, Identifiers and Rules to define identifiers, Data Types (numeric, boolean, strings, list, tuple, set and dict), Comments, Input and Output, Operators - Arithmetic, Relational, Equality, Logical, Bitwise, Assignment, Ternary, Identity and Membership |
| Data Structures in Python (Strings, List, Tuple, Set, Dictionary) | Strings - Creating a string, Indexing, Slicing, Split, Join, etc, List - Initialization, Indexing, Slicing, Sorting, Appending, etc, Tuple - Initialization, Indexing, Slicing, Count, Index, etc, Set - Initialization, Unordered Sequence, Set Operations, etc, Dictionary - Initialization, Updating, Keys, Values, Items, etc |
| Control Statements (Conditionals and Loops) | Conditional Statements - Introducing Indentation, if statement, if...else statement, if..elif...else statement, Nested if else statement, Loops - while loops, while...else loop, Membership operator, for loop, for...else loop, Nested Loops, Break and Continue Statement, Why else? |
| Functions and Modules | Functions - Introduction to Python Functions, Function Definition and Calling, Functions with Arguments/Parameters, Return Statement, Scope of a Variable, Global Variables, Modules - Introduction to Modules, Importing a Module, Aliasing, from...import statement, import everything, Some important modules - math, platform, random, webbrowser, etc |
| Object Oriented Programming | Classes and Objects - Creating a class, Instantiating an Object, Constructor, Class Members - Variables and Methods, Types of Variables - Instance, Static and Local Variables, Types of Methods - Instance, Class and Static Methods, Access Modifiers - Public, Private and Protected, Pillars of Object Oriented Programming - Inheritance, Polymorphism, Abstraction and Encapsulation, Setters and Getters, Inheritance vs Association |
| Exception Handling | Errors vs Exceptions, Syntax and Indentation Errors, try...except block, Control Flow in try...except block, try with multiple except, finally block, try...except...else, Nested try...except...finally, User Defined Exception |
| File Handling | Introduction to File Handling, Opening and Closing a File, File Object Properties, Read Data from Text Files, Write Data to Text Files, with statement, Renaming and Deleting Files |
| Web API | Application Programming Interface, Indian Space Station API, API Request, Status Code, Query Parameters, Getting JSON from an API Request, Working with JSON - dump and load, Working with Twitter API |
| Databases | Introduction to Databases, SQLite3 - Connecting Python with SQLite3, Performing CRUD Operations, MySQL - Connecting Python with MySQL, Performing CRUD Operations, MongoDB - Connecting Python with MongoDB, Performing CRUD Operations, Object Relational Mapping - SQLAlchemy ORM, CRUD operations and Complex DB operations |
| List Comprehension, Lambda, Filter, Map, Reduce | List Comprehension, Anonymous Functions, Filter, Map, Reduce, Function Aliasing |
| Problem Solving for Interviews | Swapping two numbers, Factorial of a number, Prime Number, Fibonacci Sequence, Armstrong Number, Palindrome Number, etc |

Module 2 - Python for Data Analysis

| Topic Name | What's Covered |
| :---: | :---: |
| Data Analytics Framework | Data Collection, Business Understanding, Exploratory Data Analysis, Data Preparation, Model Building, Model Evaluation, Deployment, Understanding Cross Industry Standard Process for Data Mining (CRISP-DM) and Microsoft's Team Data Science Process (TDSP) |
| Numpy | Array Oriented Numerical Computations using Numpy, Creating a Numpy Array, Basic Operations on Numpy Array - Check Dimensions, Shape, Datatypes and ItemSize, Why Numpy, Various ways to create Numpy Array, Numpy arange() function, Numpy Random Module - rand(), randn(), randint(), uniform(), etc, Indexing and Slicing in Numpy Arrays, Applying Mathematical Operations on Numpy Array - add(), subtract(), multiply(), divide(), dot(), matmul(), sum(), log(), exp(), etc, Statistical Operations on Numpy Array - min(), max(), mean(), median(), var(), std(), corrcoef(), etc, Reshaping a Numpy Array, Miscellaneous Topics - Linspace, Sorting, Stacking, Concatenation, Append, Where and Numpy Broadcasting |
| Pandas for Beginners | Pandas Data Structures - Series, Dataframe and Panel, Creating a Series, Data Access, Creating a Dataframe using Tuples and Dictionaries, DataFrame Attributes - columns, shape, dtypes, axes, values, etc, DataFrame Methods - head(), tail(), info(), describe(), Working with .csv and .xlsx - read_csv() and read_excel(), DataFrame to .csv and .xlsx - to_csv() and to_excel() |
| Advanced Pandas Operations | What's Covered |
| Case Study - Pandas Manipulation | What's Covered |
| Missing Value Treatment | What's Covered |
| Visualization Basics - Matplotlib and Seaborn | What's Covered |
| Case Study - Covid19TimeSeries | What's Covered |
| Plotly and Express | What's Covered |
| Outliers - Coming Soon | What's Covered |

Module 3 - Statistics for Data Analysis

| Topic Name | What's Covered |
| :---: | :---: |
| Normal Distribution | What's Covered |
| Central Limit Theorem | What's Covered |
| Hypothesis Testing | What's Covered |
| Chi Square Testing | What's Covered |
| Performing Statistical Test | What's Covered |

Module 4 - Machine Learning Data Preparation and Modelling with SKLearn Working with Text Data Working with Image Data Supervised ML Algorithms K - Nearest Neighbours Linear Regression Logistic Regression Gradient Descent Decision Trees Support Vector Machines Models with Feature Engineering Hyperparameter Tuning Ensembles Unsupervised ML Algorithms Clustering Principal Component Analysis

Module 5 - MLOPs

| Topic Name | What's Covered |
| :---: | :---: |
| Model Serialization and Deserialization | What's Covered |
| Application Integration | What's Covered |
| MLFlow - Experiment Tracking and Model Management | What's Covered |
| Prefect - Orchestrate ML Pipeline | What's Covered |

Module 6 - Case Studies

| Topic Name | What's Covered |
| :---: | :---: |
| Car Price Prediction (Regression) | What's Covered |
| Airline Sentiment Analysis (NLP - Classification) | What's Covered |
| Adult Income Prediction (Classification) | What's Covered |
| Web App Development + Serialization and Deserialization | What's Covered |
| AWS Deployment | What's Covered |
| Streamlit Heroku Deployment | What's Covered |
| Customer Segmentation | What's Covered |
| Web Scraping | What's Covered |

Module 7 - Deep Learning

| Topic Name | What's Covered |
| :---: | :---: |
| Introduction to Deep Learning | What's Covered |
| Training a Deep Neural Network + TensorFlow.Keras | What's Covered |
| Convolutional Neural Network + TensorFlow.Keras | What's Covered |
| Auto Encoders for Image Compression | What's Covered |
| Recurrent Neural Network (Coming Soon) | What's Covered |
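Module 4 lists scikit-learn based modelling with algorithms such as k-nearest neighbours and logistic regression. As a hedged, minimal illustration of that workflow (not taken from the course materials), here is a train/test split and a classifier fit on a toy dataset:

```python
# Minimal Module 4-style workflow: load a toy dataset, split it, fit a model, evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```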

eiten
github
LLM Vibe Score0.549
Human Vibe Score0.754375921646308
tradyticsMar 27, 2025

eiten

Eiten - Algorithmic Investing Strategies for Everyone Eiten is an open source toolkit by Tradytics that implements various statistical and algorithmic investing strategies such as Eigen Portfolios, Minimum Variance Portfolios, Maximum Sharpe Ratio Portfolios, and Genetic Algorithm based Portfolios. It allows you to build your own portfolios with your own set of stocks that can beat the market. The rigorous testing framework included in Eiten enables you to have confidence in your portfolios. If you are looking to discuss these tools in depth and talk about more tools that we are working on, please feel free to join our Discord channel, where we have a bunch more tools too.

Files Description

| Path | Description |
| :--- | :---------- |
| eiten | Main folder. |
| └ figures | Figures for this github repository. |
| └ stocks | Folder to keep your stock lists that you want to use to create your portfolios. |
| └ strategies | A bunch of strategies implemented in python. |
| backtester.py | Backtesting module that both backtests and forward tests all portfolios. |
| data_loader.py | Module for loading data from yahoo finance. |
| portfolio_manager.py | Main file that takes in a bunch of arguments and generates several portfolios for you. |
| simulator.py | Simulator that uses historical returns and monte carlo to simulate future prices for the portfolios. |
| strategy_manager.py | Manages the strategies implemented in the 'strategies' folder. |

Required Packages You will need to install the following packages to train and test the models: Scikit-learn, Numpy, Tqdm, Yfinance, Pandas, Scipy. You can install all packages using the following command. Please note that the script was written using python3.

Build your portfolios Let us see how we can use all the strategies given in the toolkit to build our portfolios. The first thing you need to do is modify the stocks.txt file in the stocks folder and add the stocks of your choice. It is recommended to keep the list small, i.e. anywhere between 5 and 50 stocks should be fine. We have already put a small stocks list containing a bunch of tech stocks like AAPL, MSFT, TSLA etc. Let us build our portfolios now. This is the main command that you need to run. This command will use the last 5 years of daily data, excluding the last 90 days, and build several portfolios for you. Based on those portfolios, it will then test them on the out-of-sample data of 90 days and show you the performance of each portfolio. Finally, it will also compare the performance with your choice of market index, which is QQQ here. Let's dive into each of the parameters in detail.

is_test: Determines whether the program keeps some separate data for future testing. When this is enabled, the value of future_bars should be larger than 5.
future_bars: The bars that the tool will exclude during portfolio building; the portfolios are then forward tested on the excluded set. This is also called out-of-sample data.
data_granularity_minutes: How granular the data used to build your portfolios should be. For long-term portfolios you should use daily data, but for short-term ones you can use hourly or minute-level data. The possible values here are 3600, 60, 30, 15, 5, 1; 3600 means daily.
history_to_use: Whether to use a specific number of historical bars or everything that we receive from yahoo finance. For minute-level data, we only receive up to one month of historical data. For daily data, we receive 5 years worth of history. If you want to use all available data, the value should be all, but if you want to use a smaller history, you can set it to an integer value, e.g. 100, which will only use the last 100 bars to build the portfolios.
apply_noise_filtering: This uses random matrix theory to filter the covariance matrix of randomness, thus yielding better portfolios. A value of 1 will enable it and 0 will disable it.
market_index: Which index to use to compare your portfolios. This should mostly be SPY, but since we analyzed tech stocks, we used QQQ.
only_long: Whether to use a long-only portfolio or enable short selling as well. Long-only portfolios have been shown to perform better with algorithmic techniques.
eigen_portfolio_number: Which eigen portfolio to use. Any value between 1-5 should work. The first eigen portfolio (1) represents the market portfolio and should act just like the underlying index such as SPY or QQQ. The second one is orthogonal and uncorrelated to the market and poses the greatest risk and reward. The following ones have reduced risk and reward. Read more on eigen-portfolios.
stocks_file_path: File that contains the list of stocks that you want to use to build your portfolio.

Some Portfolio Building Examples Here are a few examples of building different types of portfolios. Both long and short portfolios by analyzing the last 90 days of data and keeping the last 30 days as testing data; this gives us 60 days of portfolio construction data and 30 days of testing. A long-only portfolio on 60-minute bars of the last 30 days, with no future testing. Compare the results with the SPY index instead of QQQ. Do not apply noise filtering on the covariance matrix. Use the first eigen portfolio (market portfolio) and compare with SQQQ.

Portfolio Strategies Four different portfolio strategies are currently supported by the toolkit (a minimal numerical sketch of two of these weighting schemes appears at the end of this entry).

Eigen Portfolios These portfolios are orthogonal and uncorrelated to the market in general, thus yielding high reward and alpha. However, since they are uncorrelated to the market, they can also carry great risk. The first eigen portfolio is considered to be a market portfolio and is often ignored. The second one is uncorrelated to the others and provides the highest risk and reward. As we go down the numbering, the risk as well as the reward are reduced.

Minimum Variance Portfolio (MVP) MVP tries to minimize the variance of the portfolio. These portfolios have the lowest risk and reward.

Maximum Sharpe Ratio Portfolio (MSR) MSR solves an optimization problem that tries to maximize the Sharpe ratio of the portfolio. It uses past returns during the optimization process, which means that if past returns are not the same as future returns, the results can vary in the future.

Genetic Algorithm (GA) based Portfolio This is our own implementation of a GA-based portfolio that again tries to maximize the Sharpe ratio, but in a slightly more robust way. This usually provides more robust portfolios than the others.

When you run the command above, our tool will generate portfolios from all these strategies and give them to you. Let us look at some resulting portfolios.

Resulting Portfolios For the purposes of these results, we will use the 9 stocks in the stocks/stocks.txt file. When we run the above command, we first get the portfolio weights for all four strategies. For testing purposes, the above command used the last five years of daily data up till April 29th. The remaining data for this year was used for forward testing, i.e. the portfolio strategies had no access to it when building the portfolios.
What if my portfolio needs different stocks? All you need to do is change the stocks in the stocks.txt file and run the tool again. Here is the final command again that we run in order to get our portfolios:

Portfolio Weights We can see that the eigen portfolio is giving a large weight to TSLA while the others are dividing their weights more uniformly. An interesting phenomenon happening here is the hedging with SQQQ that all the strategies have learned automatically. Every strategy is assigning some positive weight to SQQQ while also assigning positive weights to other stocks, which indicates that the strategies are automatically trying to hedge the portfolios from risk. Obviously this is not perfect, but just the fact that it's happening is fascinating. Let us look at the backtest results on the last five years prior to April 29, 2020.

Backtest Results The backtests look pretty encouraging. The black dotted line is the market index, i.e. QQQ; the other lines are the strategies. Our custom genetic algorithm implementation seems to have the best backtest results because it's an advanced version of the other strategies. The eigen portfolio that weighted TSLA the most has the most volatility, but its profits are also very high. Finally, as expected, the MVP has the minimum variance and ultimately the least profits. However, since the variance is extremely low, it is a good portfolio for those who want to stay safe. The most interesting part comes next; let us look at the forward or future test results for these portfolios.

Forward Test Results These results are from April 29th, 2020 to September 4th, 2020. The eigen portfolio performed the best, but it also had a lot of volatility. Moreover, most of those returns are due to TSLA rocketing in the last few months. After that, our GA algorithm worked quite effectively as it beat the market index. Again, as expected, the MVP had the lowest risk and reward and slowly went up in 4-5 months. This shows the effectiveness and power of these algorithmic portfolio optimization strategies, where we've developed different portfolios for different kinds of risk and reward profiles.

Conclusion and Discussion We are happy to share this toolkit with the trading community and hope that people will like and contribute to it. As is the case with everything in trading, these strategies are not perfect, but they are based on rigorous theory and some great empirical results. Please take care when trading with these strategies and always manage your risk. The above results were not cherry picked, but the market has been highly bullish in the last few months, which has led to the strong results shown above. We would love for the community to try out different strategies and share them with us.

Special Thanks Special thanks to Scott Rome's blog. The eigen portfolios and minimum variance portfolio concepts came from his blog posts. The code for filtering eigen values of the covariance matrix was also mostly obtained from one of his posts.

License A product by Tradytics Copyright (c) 2020-present, Tradytics.com
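As promised above, here is a hedged numerical sketch (not Eiten's actual implementation; the returns are simulated) of how two of the weighting schemes reduce to linear algebra on the covariance matrix of returns: the minimum variance weights w = C⁻¹1 / (1ᵀC⁻¹1), and one simple variant of an eigen portfolio taken from an eigenvector of the covariance matrix.

```python
# Illustrative only: minimum-variance and eigen portfolio weights from simulated daily returns.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days = 9, 1250                        # roughly five years of daily bars
returns = rng.normal(0.0005, 0.02, size=(n_days, n_assets))

cov = np.cov(returns, rowvar=False)

# Minimum Variance Portfolio: w = C^-1 1 / (1' C^-1 1)
ones = np.ones(n_assets)
w_mvp = np.linalg.solve(cov, ones)
w_mvp /= w_mvp @ ones                             # weights sum to 1

# Eigen portfolio (simplified variant): eigenvector with the largest eigenvalue
eigvals, eigvecs = np.linalg.eigh(cov)
eigen_pf_1 = eigvecs[:, np.argmax(eigvals)]
if eigen_pf_1.sum() < 0:                          # fix the arbitrary sign of the eigenvector
    eigen_pf_1 = -eigen_pf_1
eigen_pf_1 /= eigen_pf_1.sum()                    # normalize to sum to 1 (can include shorts)

print("MVP weights:      ", np.round(w_mvp, 3))
print("Eigen portfolio 1:", np.round(eigen_pf_1, 3))
```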

magic
github
LLM Vibe Score0.629
Human Vibe Score0.011755969008053826
polterguyMar 27, 2025

magic

An AI-based Low-Code and No-Code Software Development Automation Framework IMPORTANT - Magic is no longer open source. You can read the arguments here. We will keep this repository as is, but it should be considered "legacy" and will no longer receive any updates, fixes, or changes. All work is currently committed to a closed-source fork of this repository, which over time will inevitably make this repository insecure and obsolete. Magic Cloud is a software development automation platform created and maintained by AINIRO.IO based upon AI, Low-Code, and No-Code. It is based upon Hyperlambda, allowing you to dynamically create and orchestrate workflows, almost within a "drag'n'drop development environment". !Editing code in HyperIDE In addition to its workflows, Magic also comes with a CRUD generator, allowing you to point it at your database, click a button, and wrap all your tables into CRUD endpoints. Combined with its workflow capabilities, this can sometimes save you 90% of your time when delivering backend APIs. Magic is built on top of .NET 8 and Angular. !CRUD generator Magic comes with Docker containers and is easy to install, but AINIRO.IO also hosts Magic for a fee. Modules Magic was created to make it very easy to create small and medium-sized backend APIs, and contains components for all problems related to backend development. For more information about Magic, please refer to its documentation below. Magic Cloud Documentation License This project, and all of its satellite projects, are licensed under the terms of the GPL license version 3, as published by the Free Software Foundation, unless an explicit and signed exception has been provided by Thomas Hansen, its copyright owner. See the LICENSE file for details. For licensing inquiries you can contact Thomas Hansen at thomas@ainiro.io Copyright and maintenance The projects are copyright Thomas Hansen, Ltd 2021 - 2023, and professionally maintained by AINIRO.IO.

PhoenixGo
github
LLM Vibe Score0.542
Human Vibe Score0.07574427540822147
TencentMar 27, 2025

PhoenixGo

!PhoenixGo PhoenixGo is a Go AI program which implements the AlphaGo Zero paper "Mastering the game of Go without human knowledge". It is also known as "BensonDarr" and "金毛测试" in FoxGo, "cronus" in CGOS, and the champion of World AI Go Tournament 2018 held in Fuzhou China. If you use PhoenixGo in your project, please consider mentioning in your README. If you use PhoenixGo in your research, please consider citing the library as follows: Building and Running On Linux Requirements GCC with C++11 support Bazel (0.19.2 is known-good) (Optional) CUDA and cuDNN for GPU support (Optional) TensorRT (for accelerating computation on GPU, 3.0.4 is known-good) The following environments have also been tested by independent contributors : here. Other versions may work, but they have not been tested (especially for bazel). Download and Install Bazel Before starting, you need to download and install bazel, see here. For PhoenixGo, bazel (0.19.2 is known-good), read Requirements for details If you have issues on how to install or start bazel, you may want to try this all-in-one command line for easier building instead, see FAQ question Building PhoenixGo with Bazel Clone the repository and configure the building: ./configure will start the bazel configure : ask where CUDA and TensorRT have been installed, specify them if need. Then build with bazel: Dependices such as Tensorflow will be downloaded automatically. The building process may take a long time. Recommendation : the bazel building uses a lot of RAM, if your building environment is lack of RAM, you may need to restart your computer and exit other running programs to free as much RAM as possible. Running PhoenixGo Download and extract the trained network: The PhoenixGo engine supports GTP (Go Text Protocol), which means it can be used with a GUI with GTP capability, such as Sabaki. It can also run on command-line GTP server tools like gtp2ogs. But PhoenixGo does not support all GTP commands, see FAQ question. There are 2 ways to run PhoenixGo engine 1) start.sh : easy use Run the engine : scripts/start.sh start.sh will automatically detect the number of GPUs, run mcts_main with proper config file, and write log files in directory log. You could also use a customized config file (.conf) by running scripts/start.sh {config_path}. If you want to do that, see also #configure-guide. 2) mcts_main : fully control If you want to fully control all the options of mcts_main (such as changing log destination, or if start.sh is not compatible for your specific use), you can run directly bazel-bin/mcts/mcts_main instead. For a typical usage, these command line options should be added: --gtp to enable GTP mode --config_path=replace/with/path/to/your/config/file to specify the path to your config file it is also needed to edit your config file (.conf) and manually add the full path to ckpt, see FAQ question. You can also change options in config file, see #configure-guide. for other command line options , see also #command-line-options for details, or run ./mcts_main --help . A copy of the --help is provided for your convenience here For example: (Optional) : Distribute mode PhoenixGo support running with distributed workers, if there are GPUs on different machine. Build the distribute worker: Run distzeromodel_server on distributed worker, one for each GPU. 
Fill ip:port of workers in the config file (etc/mcts_dist.conf is an example config for 32 workers), and run the distributed master: On macOS Note: Tensorflow stop providing GPU support on macOS since 1.2.0, so you are only able to run on CPU. Use Pre-built Binary Download and extract CPU-only version (macOS) Follow the document included in the archive : usingphoenixgoon_mac.pdf Building from Source Same as Linux. On Windows Recommendation: See FAQ question, to avoid syntax errors in config file and command line options on Windows. Use Pre-built Binary GPU version : The GPU version is much faster, but works only with compatible nvidia GPU. It supports this environment : CUDA 9.0 only cudnn 7.1.x (x is any number) or lower for CUDA 9.0 no AVX, AVX2, AVX512 instructions supported in this release (so it is currently much slower than the linux version) there is no TensorRT support on Windows Download and extract GPU version (Windows) Then follow the document included in the archive : how to install phoenixgo.pdf note : to support special features like CUDA 10.0 or AVX512 for example, you can build your own build for windows, see #79 CPU-only version : If your GPU is not compatible, or if you don't want to use a GPU, you can download this CPU-only version (Windows), Follow the document included in the archive : how to install phoenixgo.pdf Configure Guide Here are some important options in the config file: numevalthreads: should equal to the number of GPUs num_search_threads: should a bit larger than num_eval_threads evalbatchsize timeoutmsper_step: how many time will used for each move maxsimulationsper_step: how many simulations(also called playouts) will do for each move gpu_list: use which GPUs, separated by comma modelconfig -> traindir: directory where trained network stored modelconfig -> checkpointpath: use which checkpoint, get from train_dir/checkpoint if not set modelconfig -> enabletensorrt: use TensorRT or not modelconfig -> tensorrtmodelpath: use which TensorRT model, if enabletensorrt maxsearchtree_size: the maximum number of tree nodes, change it depends on memory size maxchildrenper_node: the maximum children of each node, change it depends on memory size enablebackgroundsearch: pondering in opponent's time earlystop: genmove may return before timeoutmsperstep, if the result would not change any more unstable_overtime: think timeout_ms_per_step time_factor more if the result still unstable behind_overtime: think timeout_ms_per_step timefactor more if winrate less than actthreshold Options for distribute mode: enable_dist: enable distribute mode distsvraddrs: ip:port of distributed workers, multiple lines, one ip:port in each line distconfig -> timeoutms: RPC timeout Options for async distribute mode: Async mode is used when there are huge number of distributed workers (more than 200), which need too many eval threads and search threads in sync mode. etc/mctsasyncdist.conf is an example config for 256 workers. enable_async: enable async mode enable_dist: enable distribute mode distsvraddrs: multiple lines, comma sperated lists of ip:port for each line numevalthreads: should equal to number of distsvraddrs lines evaltaskqueue_size: tunning depend on number of distribute workers numsearchthreads: tunning depend on number of distribute workers Read mcts/mcts_config.proto for more config options. 
Command Line Options mcts_main accept options from command line: --config_path: path of config file --gtp: run as a GTP engine, if disable, gen next move only --init_moves: initial moves on the go board, for example usage, see FAQ question --gpulist: override gpulist in config file --listen_port: work with --gtp, run gtp engine on port in TCP protocol --allowip: work with --listenport, list of client ip allowed to connect --forkperrequest: work with --listen_port, fork for each request or not Glog options are also supported: --logtostderr: log message to stderr --log_dir: log to files in this directory --minloglevel: log level, 0 - INFO, 1 - WARNING, 2 - ERROR --v: verbose log, --v=1 for turning on some debug log, --v=0 to turning off mcts_main --help for more command line options. A copy of the --help is provided for your convenience here Analysis For analysis purpose, an easy way to display the PV (variations for main move path) is --logtostderr --v=1 which will display the main move path winrate and continuation of moves analyzed, see FAQ question for details It is also possible to analyse .sgf files using analysis tools such as : GoReviewPartner : an automated tool to analyse and/or review one or many .sgf files (saved as .rsgf file). It supports PhoenixGo and other bots. See FAQ question for details FAQ You will find a lot of useful and important information, also most common problems and errors and how to fix them Please take time to read the FAQ
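Since the engine speaks GTP over stdin/stdout, any language can drive it programmatically. Below is a rough Python sketch of that idea; the binary path and config file are placeholders for your own build, and only standard GTP commands are used.

```python
import subprocess

# Placeholder path and flags; point these at your own build and config file.
engine = subprocess.Popen(
    ["bazel-bin/mcts/mcts_main", "--gtp", "--config_path=etc/your_config.conf"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def gtp(command: str) -> str:
    """Send one GTP command and collect the engine's reply."""
    engine.stdin.write(command + "\n")
    engine.stdin.flush()
    reply = []
    while True:
        line = engine.stdout.readline()
        if line.strip() == "":      # a GTP response ends with an empty line
            break
        reply.append(line.strip())
    return "\n".join(reply)

print(gtp("boardsize 19"))
print(gtp("clear_board"))
print(gtp("play black Q16"))        # a human move
print(gtp("genmove white"))         # ask the engine for its move
gtp("quit")
```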

aioquic
github
LLM Vibe Score0.518
Human Vibe Score0.04117299426077279
aiortcMar 27, 2025

aioquic

aioquic ======= .. image:: https://img.shields.io/pypi/l/aioquic.svg :target: https://pypi.python.org/pypi/aioquic :alt: License .. image:: https://img.shields.io/pypi/v/aioquic.svg :target: https://pypi.python.org/pypi/aioquic :alt: Version .. image:: https://img.shields.io/pypi/pyversions/aioquic.svg :target: https://pypi.python.org/pypi/aioquic :alt: Python versions .. image:: https://github.com/aiortc/aioquic/workflows/tests/badge.svg :target: https://github.com/aiortc/aioquic/actions :alt: Tests .. image:: https://img.shields.io/codecov/c/github/aiortc/aioquic.svg :target: https://codecov.io/gh/aiortc/aioquic :alt: Coverage .. image:: https://readthedocs.org/projects/aioquic/badge/?version=latest :target: https://aioquic.readthedocs.io/ :alt: Documentation What is `aioquic? aioquic is a library for the QUIC network protocol in Python. It features a minimal TLS 1.3 implementation, a QUIC stack and an HTTP/3 stack. aioquic is used by Python opensource projects such as dnspython_, hypercorn, mitmproxy and the Web Platform Tests_ cross-browser test suite. It has also been used extensively in research papers about QUIC. To learn more about aioquic please read the documentation_. Why should I use aioquic? aioquic has been designed to be embedded into Python client and server libraries wishing to support QUIC and / or HTTP/3. The goal is to provide a common codebase for Python libraries in the hope of avoiding duplicated effort. Both the QUIC and the HTTP/3 APIs follow the "bring your own I/O" pattern, leaving actual I/O operations to the API user. This approach has a number of advantages including making the code testable and allowing integration with different concurrency models. A lot of effort has gone into writing an extensive test suite for the aioquic code to ensure best-in-class code quality, and it is regularly tested for interoperability against other QUIC implementations. Features minimal TLS 1.3 implementation conforming with RFC 8446_ QUIC stack conforming with RFC 9000 (QUIC v1) and RFC 9369 (QUIC v2) IPv4 and IPv6 support connection migration and NAT rebinding logging TLS traffic secrets logging QUIC events in QLOG format version negotiation conforming with RFC 9368_ HTTP/3 stack conforming with RFC 9114_ server push support WebSocket bootstrapping conforming with RFC 9220_ datagram support conforming with RFC 9297_ Installing The easiest way to install aioquic is to run: .. code:: bash pip install aioquic Building from source If there are no wheels for your system or if you wish to build aioquic from source you will need the OpenSSL development headers. Linux ..... On Debian/Ubuntu run: .. code-block:: console sudo apt install libssl-dev python3-dev On Alpine Linux run: .. code-block:: console sudo apk add openssl-dev python3-dev bsd-compat-headers libffi-dev OS X .... On OS X run: .. code-block:: console brew install openssl You will need to set some environment variables to link against OpenSSL: .. code-block:: console export CFLAGS=-I$(brew --prefix openssl)/include export LDFLAGS=-L$(brew --prefix openssl)/lib Windows ....... On Windows the easiest way to install OpenSSL is to use Chocolatey_. .. code-block:: console choco install openssl You will need to set some environment variables to link against OpenSSL: .. code-block:: console $Env:INCLUDE = "C:\Progra~1\OpenSSL\include" $Env:LIB = "C:\Progra~1\OpenSSL\lib" Running the examples aioquic comes with a number of examples illustrating various QUIC usecases. 
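As a quick orientation before browsing the bundled examples, here is a minimal client sketch built on the asyncio convenience layer. The host, port and ALPN value are placeholders, and the exact API should be double-checked against the aioquic documentation.

.. code-block:: python

    import asyncio
    import ssl

    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main() -> None:
        # "hq-interop" is a placeholder ALPN; match whatever your server expects.
        configuration = QuicConfiguration(is_client=True, alpn_protocols=["hq-interop"])
        configuration.verify_mode = ssl.CERT_NONE  # only for a local test server

        # localhost/4433 are placeholders for a QUIC server you control.
        async with connect("localhost", 4433, configuration=configuration) as client:
            await client.ping()  # send a PING frame and await the acknowledgement
            print("QUIC handshake and ping completed")

    asyncio.run(main())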
You can browse these examples here: https://github.com/aiortc/aioquic/tree/main/examples License aioquic is released under the BSD license`_. .. _read the documentation: https://aioquic.readthedocs.io/en/latest/ .. _dnspython: https://github.com/rthalley/dnspython .. _hypercorn: https://github.com/pgjones/hypercorn .. _mitmproxy: https://github.com/mitmproxy/mitmproxy .. _Web Platform Tests: https://github.com/web-platform-tests/wpt .. _tested for interoperability: https://interop.seemann.io/ .. _QUIC implementations: https://github.com/quicwg/base-drafts/wiki/Implementations .. _cryptography: https://cryptography.io/ .. _Chocolatey: https://chocolatey.org/ .. _BSD license: https://aioquic.readthedocs.io/en/latest/license.html .. _RFC 8446: https://datatracker.ietf.org/doc/html/rfc8446 .. _RFC 9000: https://datatracker.ietf.org/doc/html/rfc9000 .. _RFC 9114: https://datatracker.ietf.org/doc/html/rfc9114 .. _RFC 9220: https://datatracker.ietf.org/doc/html/rfc9220 .. _RFC 9297: https://datatracker.ietf.org/doc/html/rfc9297 .. _RFC 9368: https://datatracker.ietf.org/doc/html/rfc9368 .. _RFC 9369: https://datatracker.ietf.org/doc/html/rfc9369

OpenAI-CLIP
github
LLM Vibe Score0.507
Human Vibe Score0.015912940499642817
moein-shariatniaMar 27, 2025

OpenAI-CLIP

Update (December 2023) I am happy to find out that this code has been used and cited in the following papers: Domino: Discovering Systematic Errors with Cross-Modal Embeddings by Eyuboglu et. al. at ICLR 2022 GSCLIP : A Framework for Explaining Distribution Shifts in Natural Language by Zhu et. al. at ICML 2022 UIC-NLP at SemEval-2022 Task 5: Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes by Cuervo et. al. at SemEval-2022 cdsBERT - Extending Protein Language Models with Codon Awareness by Hallee et. al. from University of Delaware (Sep 2023) ENIGMA-51: Towards a Fine-Grained Understanding of Human-Object Interactions in Industrial Scenarios by Ragusa et. al. (Nov 2023) You can find the citation info on the right section of this GitHub repo page named: Cite this repository or use the below citation info. Introduction It was in January of 2021 that OpenAI announced two new models: DALL-E and CLIP, both multi-modality models connecting texts and images in some way. In this article we are going to implement CLIP model from scratch in PyTorch. OpenAI has open-sourced some of the code relating to CLIP model but I found it intimidating and it was far from something short and simple. I also came across a good tutorial inspired by CLIP model on Keras code examples and I translated some parts of it into PyTorch to build this tutorial totally with our beloved PyTorch! What does CLIP do? Why is it fun? In Learning Transferable Visual Models From Natural Language Supervision paper, OpenAI introduces their new model which is called CLIP, for Contrastive Language-Image Pre-training. In a nutshell, this model learns the relationship between a whole sentence and the image it describes; in a sense that when the model is trained, given an input sentence it will be able to retrieve the most related images corresponding to that sentence. The important thing here is that it is trained on full sentences instead of single classes like car, dog, etc. The intuition is that when trained on whole sentences, the model can learn a lot more things and finds some pattern between images and texts. They also show that when this model is trained on a huge dataset of images and their corresponding texts, it can also act as a classifier too. I encourage you to study the paper to learn more about this exciting model and their astonishing results on benchmarking datasets . To mention just one, CLIP model trained with this strategy classifies ImageNet better than those SOTA models trained on the ImageNet itself optimized for the only task of classification! As a teaser (!), let's see what the final model that we will build in this article from scratch is capable of: given a query (raw text) like "a boy jumping with skateboard" or "a girl jumping from swing", the model will retrieve the most relevant images: !title_img Let's see some more outputs: Config A note on config and CFG: I wrote the codes with python scripts and then converted it into a Jupyter Notebook. So, in case of python scripts, config is a normal python file where I put all the hyperparameters and in the case of Jupyter Notebook, its a class defined in the beginning of the notebook to keep all the hyperparameters. Utils Dataset As you can see in the tittle image of this article, we need to encode both images and their describing texts. So, the dataset needs to return both images and texts. Of course we are not going to feed raw text to our text encoder! 
We will use DistilBERT model (which is smaller than BERT but performs nearly as well as BERT) from HuggingFace library as our text encoder; so, we need to tokenize the sentences (captions) with DistilBERT tokenizer and then feed the token ids (input_ids) and the attention masks to DistilBERT. Therefore, the dataset needs to take care of the tokenization as well. Below you can see the dataset's code. Below that I'll explain the most important things that is happening in the code. In the \\init\\ we receive a tokenizer object which is actually a HuggingFace tokinzer; this tokenizer will be loaded when running the model. We are padding and truncating the captions to a specified maxlength. In the \\getitem\\ we will first load an encoded caption which is a dictionary with keys inputids and attention_mask, make tensors out of its values and after that we will load the corresponding image, transform and augment it (if there is any!) and then we make it a tensor and put it in the dictionary with "image" as the key. Finally we put the raw text of the caption with the key "caption" in the dictionary only for visualization purposes. I did not use additional data augmentations but you can add them if you want to improve the model's performance. Image Encoder The image encoder code is straight forward. I'm using PyTorch Image Models library (timm) here which makes a lot of different image models available from ResNets to EfficientNets and many more. Here we will use a ResNet50 as our image encoder. You can easily use torchvision library to use ResNets if you don't want to install a new library. The code encodes each image to a fixed size vector with the size of the model's output channels (in case of ResNet50 the vector size will be 2048). This is the output after the nn.AdaptiveAvgPool2d() layer. Text Encoder As I mentioned before, I'll use DistilBERT as the text encoder. Like its bigger brother BERT, two special tokens will be added to the actual input tokens: CLS and SEP which mark the start and end of a sentence. To grab the whole representation of a sentence (as the related BERT and DistilBERT papers point out) we use the final representations of the CLS token and we hope that this representation captures the overall meaning of the sentence (caption). Thinking it in this way, it is similar to what we did to images and converted them into a fixed size vector. In the case of DistilBERT (and also BERT) the output hidden representation for each token is a vector with size 768. So, the whole caption will be encoded in the CLS token representation whose size is 768. Projection Head I used Keras code example implementation of projection head to write the following in PyTorch. Now that we have encoded both our images and texts into fixed size vectors (2048 for image and 768 for text) we need to bring (project) them into a new world (!) with similar dimensions for both images and texts in order to be able to compare them and push apart the non-relevant image and texts and pull together those that match. So, the following code will bring the 2048 and 768 dimensional vectors into a 256 (projection_dim) dimensional world, where we can compare them. "embeddingdim" is the size of the input vector (2048 for images and 768 for texts) and "projectiondim" is the the size of the output vector which will be 256 for our case. For understanding the details of this part you can refer to the CLIP paper. CLIP This part is where all the fun happens! I'll also talk about the loss function here. 
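Before getting to the loss, here is a minimal sketch of the projection head just described: 2048-dimensional image vectors and 768-dimensional text vectors are both mapped into a shared 256-dimensional space. The residual connection and dropout follow the Keras example the article references, and the exact hyperparameters are assumptions.

```python
import torch
from torch import nn

class ProjectionHead(nn.Module):
    def __init__(self, embedding_dim: int, projection_dim: int = 256, dropout: float = 0.1):
        super().__init__()
        self.projection = nn.Linear(embedding_dim, projection_dim)
        self.gelu = nn.GELU()
        self.fc = nn.Linear(projection_dim, projection_dim)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(projection_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        projected = self.projection(x)   # bring the input into the shared space
        x = self.gelu(projected)
        x = self.fc(x)
        x = self.dropout(x)
        x = x + projected                # residual connection
        return self.layer_norm(x)

# 2048-d image features and 768-d text features both end up as 256-d vectors.
image_proj = ProjectionHead(embedding_dim=2048)
text_proj = ProjectionHead(embedding_dim=768)
print(image_proj(torch.randn(8, 2048)).shape)   # torch.Size([8, 256])
print(text_proj(torch.randn(8, 768)).shape)     # torch.Size([8, 256])
```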
I translated some of the code from Keras code examples into PyTorch for writing this part. Take a look at the code and then read the explanation below this code block. Here we will use the previous modules that we built to implement the main model. The \\init\\ function is self-explanatory. In the forward function, we first encode the images and texts separately into fixed size vectors (with different dimensionalities). After that, using separate projection modules we project them to that shared world (space) that I talked about previously. Here the encodings will become of similar shape (256 in our case). After that we will compute the loss. Again I recommend reading CLIP paper to get it better but I'll try my best to explain this part. In Linear Algebra, one common way to measure if two vectors are of similar characteristics (they are like each other) is to calculate their dot product (multiplying the matching entries and take the sum of them); if the final number is big, they are alike and if it is small they are not (relatively speaking)! Okay! What I just said is the most important thing to have in mind to understand this loss function. Let's continue. We talked about two vectors, but, what do we have here? We have imageembeddings, a matrix with shape (batchsize, 256) and textembeddings with shape (batchsize, 256). Easy enough! it means we have two groups of vectors instead of two single vectors. How do we measure how similar two groups of vectors (two matrices) are to each other? Again, with dot product (@ operator in PyTorch does the dot product or matrix multiplication in this case). To be able to multiply these two matrices together, we transpose the second one. Okay, we get a matrix with shape (batchsize, batchsize) which we will call logits. (temperature is equal to 1.0 in our case, so, it does not make a difference. You can play with it and see what difference it makes. Also look at the paper to see why it is here!). I hope you are still with me! If not it's okay, just review the code and check their shapes. Now that we have our logits, we need targets. I need to say that there is a more straight forward way to obtain targets but I had to do this for our case (I'll talk about why in a next paragraph). Let's consider what we hope that this model learns: we want it to learn "similar representations (vectors)" for a given image and the caption describing it. Meaning that either we give it an image or the text describing it, we want it to produce same 256 sized vectors for both. Check the cell below this code block for the continue of the explanations So, in the best case scenario, textembeddings and imageembedding matricies should be the same because they are describing similar things. Let's think now: if this happens, what would the logits matrix be like? Let's see with a simple example! So logits, in the best case, will be a matrix that if we take its softmax, will have 1.0s in the diagonal (An identity matrix to call it with fancy words!). As the loss function's job is to make model's predictions similar to targets (at least in most cases!), we want such a matrix as our target. That's the reason why we are calculating imagessimilarity and textssimilarity matrices in the code block above. Now that we've got our targets matrix, we will use simple cross entropy to calculate the actual loss. I've written the full matrix form of cross entropy as a function which you can see in the bottom of the code block. Okay! We are done! Wasn't it simple?! 
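To make the loss discussion concrete, here is a compact sketch of the computation described above. It assumes image_embeddings and text_embeddings have already been produced by the projection heads; the soft targets are built from the image/image and text/text self-similarities exactly as the text explains, but treat the snippet as an illustration rather than the article's verbatim code.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_embeddings: torch.Tensor,
              text_embeddings: torch.Tensor,
              temperature: float = 1.0) -> torch.Tensor:
    # (batch, batch) similarity between every caption and every image.
    logits = (text_embeddings @ image_embeddings.T) / temperature

    # Soft targets from the self-similarities, so duplicate images/captions
    # within a batch are not treated as mismatches.
    images_similarity = image_embeddings @ image_embeddings.T
    texts_similarity = text_embeddings @ text_embeddings.T
    targets = F.softmax((images_similarity + texts_similarity) / (2 * temperature), dim=-1)

    # Cross entropy in both directions: texts -> images and images -> texts.
    texts_loss = (-targets * F.log_softmax(logits, dim=-1)).sum(dim=1)
    images_loss = (-targets.T * F.log_softmax(logits.T, dim=-1)).sum(dim=1)
    return ((texts_loss + images_loss) / 2.0).mean()

print(clip_loss(torch.randn(8, 256), torch.randn(8, 256)))
```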
Alright, you can ignore the next paragraph but if you are curious, there is an important note in that. Here's why I didn't use a simpler approach: I need to admit that there's a simpler way to calculate this loss in PyTorch; by doing this: nn.CrossEntropyLoss()(logits, torch.arange(batch_size)). Why I did not use it here? For 2 reasons. 1- The dataset we are using has multiple captions for a single image; so, there is the possibility that two identical images with their similar captions exist in a batch (it is rare but it can happen). Taking the loss with this easier method will ignore this possibility and the model learns to pull apart two representations (assume them different) that are actually the same. Obviously, we don't want this to happen so I calculated the whole target matrix in a way that takes care of these edge cases. 2- Doing it the way I did, gave me a better understanding of what is happening in this loss function; so, I thought it would give you a better intuition as well! Train Here are some funtions to help us load train and valid dataloaders, our model and then train and evaluate our model on those. There's not much going on here; just simple training loop and utility functions Here's a handy function to train our model. There's not much happening here; just loading the batches, feeding them to the model and stepping the optimizer and lr_scheduler. Running the next cell start training the model. Put the kernel on GPU mode. Every epoch should take about 24 minutes on GPU (even one epoch is enough!). It can take one minute before training actually starts because we are going to encode all the captions once in the train and valid dataset, so please don't stop it! Every thing is working fine. Inference Okay! We are done with training the model. Now, we need to do inference which in our case will be giving the model a piece of text and want it to retrieve the most relevant images from an unseen validation (or test) set. Getting Image Embeddings In this function, we are loading the model that we saved after training, feeding it images in validation set and returning the imageembeddings with shape (validset_size, 256) and the model itself. Finding Matches This function does the final task that we wished our model would be capable of: it gets the model, image_embeddings, and a text query. It will display the most relevant images from the validation set! Isn't it amazing? Let's see how it performs after all! This is how we use this function. Aaaannnndddd the results: Final words I hope you have enjoyed this article. Implementing this paper was a really interesting experience for me. I want to thank Khalid Salama for the great Keras code example he provided which inspired me to write something similar in PyTorch.
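As a closing illustration of the "Finding Matches" step described above, the sketch below encodes a text query, compares it against precomputed image embeddings, and returns the indices of the best matches. The model.text_encoder and model.text_projection attribute names, and the tokenizer object, are assumptions standing in for whatever the trained model exposes.

```python
import torch
import torch.nn.functional as F

def find_matches(model, tokenizer, image_embeddings: torch.Tensor,
                 query: str, n: int = 9) -> list[int]:
    """Return indices of the n validation images most relevant to the query."""
    encoded = tokenizer([query], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        text_features = model.text_encoder(
            input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"]
        )
        text_embeddings = model.text_projection(text_features)

    # Cosine-style similarity: normalize both sides, then take the dot product.
    image_norm = F.normalize(image_embeddings, p=2, dim=-1)
    text_norm = F.normalize(text_embeddings, p=2, dim=-1)
    similarity = text_norm @ image_norm.T
    return torch.topk(similarity.squeeze(0), n).indices.tolist()
```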

machine-learning-blackjack-solution
github
LLM Vibe Score0.42
Human Vibe Score0.022610872675250356
GregSommervilleMar 27, 2025

machine-learning-blackjack-solution

machine-learning-blackjack-solution Introduction A genetic algorithm is a type of artificial intelligence programming that uses ideas from evolution to solve complex problems. It works by creating a population of (initially random) candidate solutions, then repeatedly selecting pairs of candidates and combining their solutions using a process similar to genetic crossover. Sometimes candidate solutions even go through mutation, just to introduce new possibilities into the population. After a large number of generations, the best solution found up to that point is often the best solution possible. Genetic algorithms are particularly well-suited for combinatorial problems, where there are huge numbers of potential solutions to a problem. The evolutionary process they go through is, in essence, a search through a huge solution space - a solution space so large that you simply could never use a brute-force approach. This project is a demonstration of using a genetic algorithm to find an optimal strategy for playing the casino game Blackjack. Please see this article for a story about how this program was used, and what the results were. The article describes some of the available settings, and shows how different values for those settings affect the final result. The source code is for a Windows application written in C# that allows you to play with different settings like population size, selection style and mutation rate. Each generation's best solution is displayed, so you can watch the program literally evolve a solution. !blackjack strategy tester screenshot The property grid located at the upper left of the screen is where you adjust settings. There's an informational area below that, and the right side of the screen is the display area for the three tables that represent a strategy for playing Blackjack. The tall table on the left is for hard hands, the table in the upper right is for soft hands, and the table in the lower right is for pairs. We'll talk more about how to interpret this strategy in a bit. The columns along the tops of the three tables are for the dealer upcard. When you play Blackjack, the dealer has one of his two cards initially turned face up, and the rank of that card has a big impact on recommended strategy. Notice that the upcard ranks don't include Jack, Queen or King. That's because those cards all count as 10, so we group them and the Ten together and simplify the tables. To use the tables, first determine if you have a pair, soft hand, or hard hand. Then look in the appropriate table, with the correct dealer upcard column. The cell in the table will be "H" when the correct strategy is to hit, "S" when the correct strategy is to stand, "D" for double-down, and (in the pairs table only) "P" for split. A Word About This "Optimal" Strategy Before we go any further, it needs to be stated that this problem of finding an optimal Blackjack strategy has already been solved. Back in the 1960s, a mathematician named Edward O. Thorp authored a book called Beat the Dealer, which included charts showing the optimal "Basic" strategy. That strategy looks like this: !optimal blackjack strategy So we're solving a problem that has already been solved, but that's actually good. That means we can compare our results to the known best solution. For example, if our resulting strategy tells us to do anything but stand when holding a pair of Tens, Jacks, Queens or Kings, we know there's a problem.
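The evolutionary loop described above is easier to see in code. The original project is written in C#; the following is a small, self-contained Python illustration of the same ideas (tournament selection, single-point crossover, mutation, and elitism) applied to a toy problem rather than Blackjack.

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones.
TARGET_LEN, POP_SIZE, GENERATIONS = 60, 100, 200
MUTATION_RATE, ELITE_COUNT, TOURNEY_SIZE = 0.02, 2, 3

def fitness(candidate):
    return sum(candidate)                                # count of ones

def tournament(population):
    """Pick TOURNEY_SIZE random candidates and keep the fittest."""
    return max(random.sample(population, TOURNEY_SIZE), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)                # single-point crossover
    return a[:cut] + b[cut:]

def mutate(candidate):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    next_gen = population[:ELITE_COUNT]                  # elitism: keep the best as-is
    while len(next_gen) < POP_SIZE:
        child = crossover(tournament(population), tournament(population))
        next_gen.append(mutate(child))
    population = next_gen

print("best fitness:", fitness(max(population, key=fitness)))
```

Swapping the toy bit-string fitness for "play 100,000 simulated hands with this strategy and count the chips" gives you the shape of the actual project.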
There's one other thing to get out of the way before we go any further, and that's the idea of nondeterministic code. That means that if we run the same code twice in a row, we're likely to get two different results. That's something that happens with genetic algorithms due to their inherent randomness. There's no guarantee you'll find the absolute optimal solution, but it is assured that you will find an optimal or near-optimal solution. It's something that isn't typical when writing code, so it takes some adjustment for most programmers. Genetic Algorithms Now let's talk about the details of a genetic algorithm. Fitness Scores First of all, we need a way to evaluate candidates so we can compare them to each other. That means a numeric fitness score, which in this case is quite simple: you simulate playing a certain number of hands using the strategy, and then count the number of chips you have at the end. The big question is, how many hands should we test with? The challenge of trying to test a strategy is that due to the innate randomness of Blackjack, you could use the same strategy ten times and get ten completely different results. Obviously, the more hands you play, the more the randomness gets smoothed out, and the quality of the underlying strategy starts to emerge. If you doubt this, just think about flipping a coin. If you only flip it five times, there's certainly a possibility that it'll come up heads all five times (in fact, that happens just over 3% of the time). However, if you flip it 500 times, there's no way it's going to end up all heads - the odds of it happening are 0.5^500, which works out to be roughly once every 3 x 10^150 times you try it. After some testing and analysis, it was determined that a minimum of 100,000 hands per test is needed for a reasonable level of accuracy. There's still variance even at that number, but in order to cut the variance in half, you'd need to bump the number of hands to 500,000. One reason this accuracy is important is that in the later generations, the differences between candidates are very small. Evolution has caused the main parts of the strategy to converge on a particular approach, and towards the end all it's doing is refining the minor details. In those cases it's important to accurately determine the difference between two similar candidates. Representation Representation is simply the idea that we need to use a data structure for a candidate solution that can be combined via crossover, and possibly mutated. In this case, that's also quite simple because the way that human beings represent a Blackjack strategy is to use three tables, as we've seen. Representing those in code with three two-dimensional arrays is the obvious approach. Each cell in those three tables will have "Hit", "Stand", "Double-Down", or (only for pairs) "Split". By the way, since there are 160 cells in the hard hands table, and 80 cells in the soft hands table, and 100 cells in the pairs table, we can calculate exactly how many possible distinct strategies there are for Blackjack: 4^100 x 3^80 x 3^160 ≈ 5 x 10^174 possible Blackjack strategies That's a big number, which is obviously impossible to search using brute force. Genetic algorithms (GAs) are extremely helpful when trying to find an optimal solution from a very large set of possible solutions like this. Blackjack Rules and Strategies The rules of Blackjack are fairly simple. The dealer and the player both are dealt two cards.
The player sees both of their cards (they are usually dealt face up), and one of the dealer's cards is dealt face up. Each card has a value - for cards between 2 and 10, the value is the same as the card's rank (so an Eight of Spades counts as 8, for example). All face cards count as 10, and an Ace can either be 1 or 11 (it counts as 11 only when that does not result in a hand that exceeds 21). The suit of a card does not matter. After the cards are dealt, if the player has Blackjack (a total of 21) and the dealer does not, the player is immediately paid 1.5 times their original bet, and a new hand is dealt. If the player has 21 and the dealer does also, then it's a tie and the player gets their original bet back, and a new hand is dealt. If the player wasn't dealt a Blackjack, then play continues with the player deciding whether to Stand (not get any more cards), Hit (receive an additional card), Double-down (place an additional bet, and receive one and only one more card), or, in the case of holding a pair, splitting the hand, which means placing an additional bet and receiving two new cards, so the end result is that the player is now playing two (or, in the case of multiple splits, more than two) hands simultaneously. If the player hits or double-downs and has a resulting hand that exceeds 21, then they lose and play continues with the next hand. If not, then the dealer draws until their hand totals at least 17. If the dealer exceeds 21 at this point, the player receives a payment equal to twice their original bet. If the dealer doesn't exceed 21, then the hands are compared and the player with the highest total that doesn't exceed 21 wins. Because of these rules, certain effective strategies emerge. One common strategy is that if you hold a hard hand with a value of 20, 19 or 18, you should Stand, since you avoid busting by going over 21, and you have a nice hand total that might win in a showdown with the dealer. Another common strategy is to split a pair of Aces, since Aces are so powerful (due to the fact that count as 11 or 1, you can often Hit a hand with a soft Ace with no risk of busting). Likewise, splitting a pair of 8s is a good idea because with a hard total of 16, it's likely you will bust if you take a Hit (since so many cards count as 10). As a human being, all it takes is a little knowledge about the rules in order to construct a strategy. The GA program doesn't have that advantage, and operates completely without any pre-programmed knowledge of Blackjack. It simply uses the relative fitness scores and the mechanism of evolution to find the solution. GA Settings There are many variables or settings for a GA. You can adjust population size, how parent candidates are selected, how the resulting children may be mutated, and several other items. The following sections describe some of these settings: Setting: Selection Style Once we've solved representation and have a fitness function, the next step is to select two candidates for crossover during the process of building a new generation. There are three common styles for selection, and this program supports all of them. First, you can choose Roulette Wheel selection. It's named for a Roulette wheel because you can imagine each candidate's fitness score being a wedge in a pie chart, with a size proportionate to its relative fitness compared to the other candidates. (Of course, this assumes that all fitness scores are positive, which we will talk about shortly). 
The main benefit of Roulette Wheel selection is that selection is fitness-proportionate. Imagine if you had only three candidates, with fitness scores of 1, 3, and 8. The relative selection probabilities for those candidates will be 1/12, 3/12, and 8/12. The downside of Roulette Wheel selection is that it tends to be somewhat slow in terms of processing. The selection process is done by iterating through the candidates until a particular condition is matched - in other words, O(N) performance. Another potential problem with Roulette Wheel selection is that there may be situations where fitness scores vary widely, to such an extent that only certain candidates have any reasonable chance of being selected. This happens frequently in early generations, since the majority of candidates are mostly random. Although this might sound like a positive (since you ultimately want to select candidates with high fitness scores), it also results in a loss of genetic diversity. In other words, even though a particular candidate may have a low fitness score in an early generation, it may contain elements that are needed to find the ultimate solution in later generations. Ranked Selection is the solution to this problem. Instead of using raw fitness scores during the selection process, the candidates are sorted by fitness, with the worst candidate receiving a score of 0, the second worse receiving 1, and so forth, all the way to the best candidate, which has a score equal to the population size - 1. Ranked Selection is quite slow, since it combines the O(N) performance of Roulette Wheel, with the additional requirement that the candidates be sorted before selection. However, there may be circumstances where it performs better than other selection approaches. Finally, the fastest selection method of all is called Tournament Selection. This method simply selects N random candidates from the current generation, and then uses the one with the best fitness score. A tournament size of 2 means two random candidates are selected, and the best of those two is used. If you have a large tournament size (like 10), then 10 different candidates will be selected, with the best of those being the ultimate selection. That obviously tilts the balance between randomness and quality. Tournament selection works well in most cases, but it does require some experimentation to find the best tourney size. Setting: Elitism Elitism is a technique that helps ensure that the best candidates are always maintained. Since all selection methods are random to some degree, it is possible to completely lose the best candidates from one generation to another. By using Elitism, we automatically advance a certain percentage of the best candidates to the next generation. Elitism does have a negative impact on performance since all of the candidates must be sorted by fitness score. Typically Elitism is done before filling the rest of a new generation with new candidates created by crossover. Crossover Details Once two candidate solutions have been selected, the next step in building a new generation is to combine those two into a single new candidate, hopefully using the best of both parent strategies. There are a number of ways to do crossover, but the method used in this program is quite straightforward - the two fitness scores are compared, and crossover happens in a relatively proportionate way. 
If one candidate has a fitness of 10, and the other has a fitness of 5, then the one with fitness 10 contributes twice as much to the child as the parent with a fitness of 5. Since the fitness scores in this program are based on how much the strategy would win over thousands of hands, almost all fitness scores will be negative. (This is obviously because the rules are set up so the house always wins.) This makes it difficult to calculate relative fitnesses (how do you compare a positive number with a negative, and find relative proportions?), and also causes problems with selection methods like Roulette Wheel or Ranked. To solve this, we find the lowest fitness score of the generation and add that value to each candidate. This results in an adjusted fitness score of 0 for the very worse candidate, so it never gets selected. Mutation As has been mentioned a few times, maintaining genetic diversity in our population of candidate solutions is a good thing. It helps the GA ultimately find the very best solution, by occasionally altering a candidate in a positive direction. There are two settings for mutation. MutationRate controls what percentage of new candidates have mutation done on them. MutationImpact controls what percentage of their strategy is randomized. Population Size Population size has a significant impact on performance. The smaller the population size, the faster the GA will execute. On the other hand, if the size is too low the population may not have enough genetic diversity to find the ultimate solution. During testing, it looks like 700 to 1000 is a good balance between speed and correctness. Performance Notes This program consumes a lot of processing power. Running tests of hundreds of thousands of hands of Blackjack for hundreds or thousands of candidates consumes a lot of time. It's really imperative to write the code so that it works as efficiently as possible. If your CPU isn't consistently at or above 95% usage, there's still room for improvement. Multi-threading is a natural fit for genetic algorithms because we often want to perform the same action on each candidate. The best example of this is when we calculate fitness scores. This is often an operation that takes quite a bit of time. In our case, we're dealing out 100,000 hands, and each hand has to be played until the end. If we're single-threading that code, it's going to take a long time. Multi-threading is really the way to go. Luckily, there's a ridiculously simple way to efficiently use all of your processors for an operation like this. This code loops over all of the candidates in the currentGeneration list, calls the fitness function and sets the fitness property for each: Regardless of the number of items in the list or the number of processors on your machine, the code will efficiently run the code in a multi-threaded manner, and continue only when all of the threads are complete. One of the side effects of making this code multi-threaded is that all of the code relating to evaluating a candidate must be thread-safe, including any Singleton objects. When making code thread-safe, pay attention that you don't accidentally introduce code that will slow your program down unintentionally, because sometimes it can be quite subtle. Random numbers are central to how genetic algorithms work, so it's critical that they can be used correctly from a multithreaded environment. 
That means that each random number generator must be separate from the others, and it also means that each must produce a distinct series of random numbers. Random number generators use seed values which are usually time-based, like the number of milliseconds the computer has been turned on. Starting with that seed, subsequent calls will return a series of numbers that look random, but really aren't. If you start with the same seed, you get the same sequence. And that's a problem because if you create multiple random number generator objects in a loop using the default time-based seed, several of them will have the same time-based initial seed value, which will result in the same sequence of "random" numbers. That's a bug, because it can reduce the true randomness of the program a great deal, and that's vital to a genetic algorithm. There are a couple of ways to solve this problem. First, you can make the random object truly a singleton, and restrict access to it by using a C# lock statement. This makes all access serialized for any random number need, which reduces performance. Another approach is to make the variable static per thread. By declaring the variable as static and also marking it with the [ThreadStatic] attribute, the .NET runtime allocates one static variable per thread. That eliminates the locking/serialization, but also has performance issues. The approach used in this application is to use a non-default seed value. In this case we call Guid.NewGuid().GetHashCode(), which generates a new, unique GUID, then gets an integer hashcode value that should be unique, depending on how GetHashCode is implemented. While multithreading really helps performance, there are also other things we can do to improve performance. For example, when dealing with large populations, the hundreds or thousands of objects that will be generated each generation can quickly turn into a huge problem related to garbage collection. In the end, the easiest way to solve that is to look through the code and find objects being allocated inside a loop. It's better to declare the variable outside of the loop, and then clear it in the loop, rather than reallocate it. In a program like this one where you could be looping hundreds of thousands of times, this can result in a very significant performance boost. For example, in an early version of this code, a Deck object was created for each hand. Since there are hundreds of candidate solutions running hundreds of thousands of trial hands, this was a huge inefficiency. The code was changed to allocate one deck per test sequence. The deck was shuffled as needed, so it never needs to be reallocated. Beyond the cards in the deck, another object type that was repeatedly created and destroyed was the candidate strategies. To mitigate this problem, a StrategyPool class was created that handles allocation and deallocation. This means that strategy objects are reused, rather than dynamically created when needed. The pool class has to be thread-safe, so it does serialize access to its methods via a C# lock statement, but overall using the pool approach produced a good performance increase. Finally, a subtle form of object allocation is conversion. In an early version of the code, a utility card function used Convert.ToInt32(rankEnum). Obviously, the easiest way to convert from an enum to an int is simply to cast it, like (int)rankEnum.
But it's hard to know exactly what the difference is between that approach, int.Parse(), int.TryParse(), or Convert.ToInt32(), since they can all be used and are roughly equivalent. Perhaps the compiler was boxing the enum value before passing it to Convert.ToInt32(), because the profiler identified this as a function that had large amounts of thread contention waiting - and the problem got much, much worse as the generations passed. By rewriting the conversion to use a simple cast, the program performance increased threefold (3x). Contributing Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us. Author Greg Sommerville - Initial work* License This project is licensed under the Apache 2.0 License - see the LICENSE.md file for details

lecca-io
github
LLM Vibe Score0.531
Human Vibe Score0.004614254564337112
lecca-digitalMar 27, 2025

lecca-io

Lecca.io Lecca.io is an AI platform that allows you to configure and deploy Large Language Models (LLMs) equipped with powerful tools and workflows. Build, customize, and automate your AI agents with ease. 🚀 Quick Start Visit app.lecca.io to use the cloud version immediately. Add your API keys and start building intelligent agents for free. Want to self-host or contribute? Check out our development guide. ✨ Key Features Custom LLM Configuration: Choose from multiple AI providers and models Tool Integration: Equip your agents with powerful tools to interact with various services Workflow Builder: Create complex automation workflows similar to n8n, Make.com, or Zapier Built-in RAG: Enjoy basic built-in RAG features to easily upload and query data Build your own tools: Build custom apps, actions, and triggers using our docs Automate LLMs: Configure triggers that will enable your AI Agents to work autonomously. 🔧 Available Tools Visit our Tools page for a complete list 🤖 Supported AI Providers Visit our AI Providers page for a complete list 📖 Documentation Concepts Local Development Creating Custom Apps Adding AI Providers Running Ollama Locally 🤝 Contributing We welcome contributions! See our Development Docs for more details. 📄 License Lecca.io Community Edition is distributed under the Apache-2.0 License with Commons Clause. Enterprise features are available under the Commercial License. Built with ❤️ by Lecca Digital (Tony Ramirez)

yoha
github
LLM Vibe Score0.556
Human Vibe Score0.3408299306652369
handtracking-ioMar 27, 2025

yoha

Yoha A practical hand tracking engine. Note: Yoha is currently unmaintained. Quick Links: Demo (Code) Docs Website npm Installation npm install @handtracking.io/yoha Please note: You need to serve the files from node_modules/@handtracking.io/yoha since the library needs to download the model files from here. (Webpack Example) You need to serve your page with https for webcam access. (Webpack Example) You should use cross-origin isolation as it improves the engine's performance in certain scenarios. (Webpack Example) Description Yoha is a hand tracking engine that is built with the goal of being a versatile solution in practical scenarios where hand tracking is employed to add value to an application. While ultimately the goal is to be a general purpose hand tracking engine supporting any hand pose, the engine revolves around specific hand poses that users/developers find useful. These poses are detected by the engine, which allows you to build applications with meaningful interactions. See the demo for an example. Yoha is currently in beta. About the name: Yoha is short for ("Your Hand Tracking"). Language Support Yoha is currently available for the web via JavaScript. More languages will be added in the future. If you want to port Yoha to another language and need help, feel free to reach out. Technical Details Yoha was built from scratch. It uses a custom neural network trained using a custom dataset. The backbone for the inference in the browser is currently TensorFlow.js. Features: Detection of 21 2D-landmark coordinates (single hand). Hand presence detection. Hand orientation (left/right hand) detection. Inbuilt pose detection. Supported Hand Poses: Pinch (index finger and thumb touch) Fist Your desired pose is not on this list? Feel free to create an issue for it. Performance Yoha was built with performance in mind. It is able to provide a realtime user experience on a broad range of laptops and desktop devices. The performance on mobile devices is not great, which hopefully will change with the further development of inference frameworks like TensorFlow.js. Please note that native inference speed cannot be compared with the web inference speed. Differently put, if you were to run Yoha natively it would be much faster than via the web browser. Minimal Example Source Running locally: Drawing Demo Live Version Source Running locally:
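Yoha's API is JavaScript, but the pose logic it describes is easy to picture. The sketch below is not Yoha's code or API; it only illustrates the geometric idea behind a pinch detector, assuming you already have 21 normalized 2D landmarks and know which indices correspond to the thumb tip and index fingertip.

```python
import math

# Hypothetical landmark indices for thumb tip and index fingertip;
# the real indices depend on the engine's landmark ordering.
THUMB_TIP, INDEX_TIP = 4, 8

def is_pinching(landmarks: list[tuple[float, float]], threshold: float = 0.05) -> bool:
    """Return True when the thumb tip and index fingertip are close together.

    landmarks: 21 (x, y) coordinates normalized to the range 0..1.
    threshold: maximum distance (in normalized units) still counted as a pinch.
    """
    (x1, y1), (x2, y2) = landmarks[THUMB_TIP], landmarks[INDEX_TIP]
    return math.hypot(x2 - x1, y2 - y1) < threshold

# Toy example: two nearly touching fingertips among dummy landmarks.
dummy = [(0.5, 0.5)] * 21
dummy[THUMB_TIP], dummy[INDEX_TIP] = (0.40, 0.60), (0.42, 0.61)
print(is_pinching(dummy))  # True
```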

airplay2-receiver
github
LLM Vibe Score0.498
Human Vibe Score0.0426074723730768
openairplayMar 27, 2025

airplay2-receiver

Experimental Somewhat comprehensive python implementation of AP2 receiver using some multi-room features. For now it implements: HomeKit transient pairing (SRP/Curve25519/ChaCha20-Poly1305) - bit flag 48 HomeKit non-transient pairing Some refinements for HomeKit interaction (e.g. managed/active flags) Persist device name and some HomeKit properties across restarts (just use the -m flag again to set the device name anew) FairPlay (v3) authentication and decryption of AES keys - the first and only Python implementation. Credit to @systemcrash for implementation. Receiving of both REALTIME and BUFFERED Airplay2 audio streams Airplay2 Service publication Decoding of all Airplay2 supported CODECs: ALAC, AAC, OPUS, PCM. Ref: here and here Output latency compensation for sync with other Airplay receivers ANNOUNCE and RSA AES for unbuffered streaming from iTunes/Windows Spotify (via AirPlay2) and other live media streams with AES keys. RTCP RFC2198 RTP Redundancy handling (basic); enable bit flag 61 streamConnections; enable bit flag 59 For now it does not implement: FairPlay v2 Accurate audio sync (with help of PTP and/or NTP) It may never implement: MFi Authentication (requires MFi hardware module) This code is experimental, yet fully functional. It can act as a real receiver but does not implement all airplay protocols and related pairing/authentication methods. Next steps: PTP (Precision Time Protocol) Remove all os specific code (Soft Volume management) Sender (branch-sender) - Implementation Raspbian package DACP/(+MRP?) Support FairPlay v2 Support Multiple Connections Since multithreading is now enabled, this allows multiple concurrent connections. There are no safeguards built to prevent you playing multiple streams. Python multiprocessing makes this "DJ" mode a possibility but makes stream management and session management (global state data) nigh impossible. So threading is the right approach in the receiver. HomeKit and other AP senders can now connect concurrently to the receiver and perform operations. This opens the path to Remote Control functionality. mDNS/ZeroConf If you encounter strange errors like NonUniqueNameException, or Address already in use, and you run on macOS, you may have noticed that macOS and this app both try to send updates. Here is a possible workaround. Raspberry Pi 4 Install docker and then build the image: To run the receiver: Default network device is wlan0, you can change this with AP2IFACE env variable: Docker Compose Example Docker Compose Debian macOS Catalina To run the receiver please use Python 3 and do the following: Run the following commands Note: in recent macOS versions (e.g. Ventura), you must disable AirPlay Receiver: System Settings -> AirDrop & Handoff -> AirPlay Receiver: disable. Windows To run the receiver please use Python 3 and do the following: Run the following commands the AirPlay 2 receiver is announced as myap2. Tested on Python 3.7.5 / macOS 10.15.2 with iPhone X 13.3 and Raspberry Pi 4 Protocol notes https://emanuelecozzi.net/docs/airplay2
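Service publication over mDNS, mentioned above, is one of the more self-contained pieces of the puzzle. The snippet below is an illustrative sketch using the python-zeroconf library rather than this project's code, and the TXT properties shown are a tiny, made-up subset of what a real AirPlay 2 receiver advertises.

```python
import socket
from zeroconf import ServiceInfo, Zeroconf

# Placeholder name, address and TXT records; a real receiver advertises
# many more properties (features, device id, etc.).
info = ServiceInfo(
    "_airplay._tcp.local.",
    "MyReceiver._airplay._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.50")],
    port=7000,
    properties={"model": "Airplay2Receiver", "srcvers": "366.0"},
)

zeroconf = Zeroconf()
zeroconf.register_service(info)          # announce the receiver on the LAN
try:
    input("Receiver announced; press Enter to stop...")
finally:
    zeroconf.unregister_service(info)    # withdraw the announcement
    zeroconf.close()
```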

Godot4ThirdPersonCombatPrototype
github
LLM Vibe Score0.424
Human Vibe Score0.04749392650546089
SnaielMar 27, 2025

Godot4ThirdPersonCombatPrototype

Godot4ThirdPersonCombatPrototype https://github.com/user-attachments/assets/a080634b-b9f3-4a6d-abf5-c0003fe16b34 A base project for third-person combat. A feature-filled setup with core systems implemented for the player character, combat, and enemies. Downloading the Project This project uses Godot 4.3. You must have Blender installed and have Blender imports (https://docs.godotengine.org/en/stable/tutorials/assetspipeline/importingscenes.html#importing-blend-files-directly-within-godot) configured in your Godot editor. If not, you will get an error saying Scene file 'Main.tcsn' appears to be invalid/corrupt or Error while loading file 'Main.tcsn', caused by broken dependencies from the Blender files not being imported. Please have a look at https://github.com/Snaiel/Godot4ThirdPersonCombatPrototype/issues/3. Acknowledgements Sekiro: Shadows Die Twice for being the game with the best combat mechanics General Development https://www.youtube.com/watch?v=UpF7wm0186Q provided the base movement and camera controller https://www.youtube.com/watch?v=74y6zWZfQKk as an introduction to composition https://kenney.nl/assets/prototype-textures for the grid texture Models and Animation https://www.mixamo.com/ for the character models and animation https://www.youtube.com/watch?v=2gx1lfhqnFM as an introduction to blend trees https://www.youtube.com/watch?v=fq0hR2tIsRk showed how to enable root motion https://github.com/finepointcgi/Mixamo-Root Blender addon for adding a root bone to animations https://www.youtube.com/watch?v=A2JMYQBWeig for showing how to attach weapons to a character AI Behaviour https://www.youtube.com/watch?v=6VBCXvfNlCM behaviour tree introduction https://www.gamedeveloper.com/programming/behavior-trees-for-ai-how-they-work in-depth behaviour tree introduction https://github.com/bitbrain/beehave behaviour tree library for Godot https://www.youtube.com/watch?v=EOocBMBbL-E&t=4s for navmesh basics State Machines https://www.youtube.com/watch?v=ow_Lum-Agbs introduction to state machines https://medium.com/dotcrossdot/hierarchical-finite-state-machine-c9e3f4ce0d9e introduction to hierarchical finite state machines Audio https://www.audacityteam.org/ Audacity, a free audio editor https://www.kenney.nl/assets/category:Audio?sort=update sound packs from Kenney https://opengameart.org/content/crystal-cave-song18 ambient background music from Cynic Music https://opengameart.org/content/hyper-ultra-racing fast-paced music from Cynic Music Custom Resources https://docs.godotengine.org/en/stable/tutorials/scripting/resources.html wonderful documentation https://www.youtube.com/watch?v=vzRZjM9MTGw great explanation Attribution Giving credit is not necessary but much appreciated!

obsei
github
LLM Vibe Score0.545
Human Vibe Score0.10175553624190911
obseiMar 27, 2025

obsei

Note: Obsei is still in the alpha stage, so use it carefully in production. Also, as it is under constant development, the master branch may contain breaking changes. Please use a released version. Obsei (pronounced "Ob see" | /əb-'sē/) is an open-source, low-code, AI-powered automation tool. Obsei consists of - Observer: Collect unstructured data from various sources like tweets from Twitter, subreddit comments on Reddit, comments on page posts from Facebook, App Store reviews, Google reviews, Amazon reviews, news, websites, etc. Analyzer: Analyze the collected unstructured data with various AI tasks like classification, sentiment analysis, translation, PII, etc. Informer: Send analyzed data to various destinations like ticketing platforms, data storage, dataframes, etc., so that the user can take further actions and perform analysis on the data. All the Observers can store their state in databases (SQLite, Postgres, MySQL, etc.), making Obsei suitable for scheduled jobs or serverless applications. (Obsei diagram) Future direction - Text, image, audio, document and video oriented workflows Collect data from every possible private and public channel Add every possible workflow to an AI downstream application to automate manual cognitive workflows Use cases Obsei's use cases include, but are not limited to - Social listening: Listening to social media posts, comments, customer feedback, etc. Alerting/Notification: Get auto-alerts for events such as customer complaints, qualified sales leads, etc. Automatic customer issue creation based on customer complaints on social media, email, etc. Automatic assignment of proper tags to tickets based on the content of the customer complaint, for example login issue, sign-up issue, delivery issue, etc. Extraction of deeper insights from feedback on various platforms Market research Creation of datasets for various AI tasks Many more based on creativity 💡 Installation Prerequisite Install the following (if not present already) - Install Python 3.7+ Install PIP Install Obsei You can install Obsei either via PIP or Conda based on your preference. 
To install the latest released version - Install from the master branch (if you want to try the latest features) - Note: the all option will install all the dependencies, which might not be needed for your workflow; alternatively, the following options are available to install minimal dependencies as needed - pip install obsei[source]: To install dependencies related to all observers pip install obsei[sink]: To install dependencies related to all informers pip install obsei[analyzer]: To install dependencies related to all analyzers; it will install pytorch as well pip install obsei[twitter-api]: To install dependencies related to the Twitter observer pip install obsei[google-play-scraper]: To install dependencies related to the Play Store review scraper observer pip install obsei[google-play-api]: To install dependencies related to the official Google Play Store review API based observer pip install obsei[app-store-scraper]: To install dependencies related to the Apple App Store review scraper observer pip install obsei[reddit-scraper]: To install dependencies related to the Reddit post and comment scraper observer pip install obsei[reddit-api]: To install dependencies related to the official Reddit API based observer pip install obsei[pandas]: To install dependencies related to the TSV/CSV/Pandas based observer and informer pip install obsei[google-news-scraper]: To install dependencies related to the Google News scraper observer pip install obsei[facebook-api]: To install dependencies related to the official Facebook page post and comments API based observer pip install obsei[atlassian-api]: To install dependencies related to the official Jira API based informer pip install obsei[elasticsearch]: To install dependencies related to the Elasticsearch informer pip install obsei[slack-api]: To install dependencies related to the official Slack API based informer You can also mix multiple dependencies together in a single installation command. For example, to install the dependencies for the Twitter observer, all analyzers, and the Slack informer, use the following command - How to use Expand the following steps and create a workflow - Step 1: Configure Source/Observer Twitter Youtube Scrapper Facebook Email Google Maps Reviews Scrapper AppStore Reviews Scrapper Play Store Reviews Scrapper Reddit Reddit Scrapper Note: Reddit heavily rate-limits scrapers, so use it to fetch small amounts of data over a long period Google News Web Crawler Pandas DataFrame Step 2: Configure Analyzer Note: To run transformers in offline mode, check transformers offline mode. Some analyzers support GPU; to utilize it, pass the device parameter. List of possible values of the device parameter (default value auto): auto: GPU (cuda:0) will be used if available, otherwise CPU will be used cpu: CPU will be used cuda:{id} - GPU will be used with the provided CUDA device id Text Classification Text classification: Classify text into user-provided categories. Sentiment Analyzer Sentiment Analyzer: Detect the sentiment of the text. Text classification can also perform sentiment analysis, but if you don't want to use a heavy-duty NLP model, use the less resource-hungry, dictionary-based VADER sentiment detector. NER Analyzer NER (Named-Entity Recognition) Analyzer: Extract information and classify named entities mentioned in text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. Translator PII Anonymizer Dummy Analyzer Dummy Analyzer: Does nothing. 
It's simply used for transforming the input (TextPayload) to the output (TextPayload) and adding user-supplied dummy data. Step 3: Configure Sink/Informer Slack Zendesk Jira ElasticSearch Http Pandas DataFrame Logger This is useful for testing and dry-running the pipeline. Step 4: Join and create workflow The source will fetch data from the selected source and feed it to the analyzer for processing; the analyzer's output is then fed into a sink so that you get notified at that sink. Step 5: Execute workflow Copy the code snippets from Steps 1 to 4 into a Python file, for example example.py, and execute the following command - (a minimal end-to-end sketch of such a workflow file is shown after this section) Demo We have a minimal Streamlit-based UI that you can use to test Obsei. Watch UI demo video Check demo at (Note: Sometimes the Streamlit demo might not work due to rate limiting; use the Docker image (locally) in such cases.) To test locally, just run To run an Obsei workflow easily using GitHub Actions (no sign-ups or cloud hosting required), refer to this repo. Companies/Projects using Obsei Here are some companies/projects (alphabetical order) using Obsei. To add your company/project to the list, please raise a PR or contact us via email. Oraika: Contextually understand customer feedback 1Page: Giving a better context in meetings and calls Spacepulse: The operating system for spaces Superblog: A blazing fast alternative to WordPress and Medium Zolve: Creating a financial world beyond borders Utilize: No-code app builder for businesses with a deskless workforce Articles Sr. No. Title Author 1 AI based Comparative Customer Feedback Analysis Using Obsei Reena Bapna 2 LinkedIn App - User Feedback Analysis Himanshu Sharma Tutorials Sr. No. Workflow Colab Binder 1 Observe app reviews from the Google Play Store, analyze them by performing text classification, and then inform them on the console via logger PlayStore Reviews → Classification → Logger 2 Observe app reviews from the Google Play Store, preprocess text via various text cleaning functions, analyze them by performing text classification, inform them to a Pandas DataFrame, and store the resultant CSV to Google Drive PlayStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive 3 Observe app reviews from the Apple App Store, preprocess text via various text cleaning functions, analyze them by performing text classification, inform them to a Pandas DataFrame, and store the resultant CSV to Google Drive AppStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive 4 Observe news articles from Google News, preprocess text via various text cleaning functions, analyze them by performing text classification while splitting text into small chunks, and later compute the final inference using a given formula Google News → Text Cleaner → Text Splitter → Classification → Inference Aggregator 💡 Tips: Handle large text classification via Obsei Documentation For detailed installation instructions, usage and examples, refer to our documentation. Support and Release Matrix Linux Mac Windows Remark Tests ✅ ✅ ✅ Low coverage as it is difficult to test 3rd-party libs PIP ✅ ✅ ✅ Fully supported Conda ❌ ❌ ❌ Not supported Discussion forum Discussion about Obsei can be done at the community forum Changelogs Refer to releases for changelogs Security Issue For any security issue please contact us via email Stargazers over time Maintainers This project is maintained by Oraika Technologies. Lalit Pagaria and Girish Patel are the maintainers of this project. 
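Referring back to Steps 1 to 5 above, a hedged sketch of what such a workflow file can look like is shown below, wiring a Play Store review observer to a zero-shot classification analyzer and a logger informer. The import paths, class names, and configuration fields here are assumptions based on Obsei's documented observer → analyzer → informer pattern; check the project documentation for the exact API of the version you install.

```python
# Hedged sketch of an Obsei workflow: Play Store reviews -> classification -> logger.
# Import paths, class names, and config fields are assumptions and may differ by
# Obsei version; verify against the official documentation before running.
from obsei.source.playstore_scrapper import PlayStoreScrapperSource, PlayStoreScrapperConfig
from obsei.analyzer.classification_analyzer import (
    ZeroShotClassificationAnalyzer,
    ClassificationAnalyzerConfig,
)
from obsei.sink.logger_sink import LoggerSink, LoggerSinkConfig

# Observer: where the unstructured data comes from (field values are illustrative).
source = PlayStoreScrapperSource()
source_config = PlayStoreScrapperConfig(countries=["us"], package_name="com.example.app", max_count=10)

# Analyzer: which AI task runs on the collected data.
analyzer = ZeroShotClassificationAnalyzer(
    model_name_or_path="typeform/mobilebert-uncased-mnli", device="auto"
)
analyzer_config = ClassificationAnalyzerConfig(labels=["delivery issue", "login issue", "praise"])

# Informer: where the analyzed data goes (here, simply logged to the console).
sink = LoggerSink()
sink_config = LoggerSinkConfig()

# Join the three pieces: lookup -> analyze -> send.
source_response_list = source.lookup(source_config)
analyzer_response_list = analyzer.analyze_input(
    source_response_list=source_response_list, analyzer_config=analyzer_config
)
sink.send_data(analyzer_response_list, sink_config)
```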
License Copyright holder: Oraika Technologies Overall Apache 2.0; you can read the License file. Multiple other secondary permissive or weak copyleft licenses (LGPL, MIT, BSD, etc.) apply to third-party components; refer to Attribution. To make the project more commercially friendly, we avoid including third-party components which have strong copyleft licenses (GPL, AGPL, etc.) in the project. Attribution This could not have been possible without these open-source projects. Contribution First off, thank you for even considering contributing to this package; every contribution, big or small, is greatly appreciated. Please refer to our Contribution Guideline and Code of Conduct. Thanks so much to all our contributors

CollabAI
github
LLM Vibe Score0.449
Human Vibe Score0.07795191529604462
sjinnovationMar 27, 2025

CollabAI

CollabAI About Welcome to Collabai.software, where we've taken the world of AI to new heights. We've been working tirelessly to bring you the most advanced, user-friendly platform that seamlessly integrates with the powerful OpenAI API, Gemini, and Claude. Imagine running your own ChatGPT on your server, with the ability to manage access for your entire team. Picture creating custom AI assistants that cater to your unique needs, and organizing your employees into groups for streamlined collaboration. With Collabai.software, this is not just a dream, but a reality. Collabai.software Features: Self-Hosting on Your Cloud: Gain full control by hosting the platform on your private cloud. Ensure data privacy by using your API codes, allowing for secure data handling. Enhanced Team Management: Manage teams with private accounts and customizable access levels (Departments). Prompt Templates: Utilize generic templates to streamline team usage. Departmental Access & Assistant Assignment: Assign AI assistants to specific departments for shared team access. Customizable AI Assistants: Create personalized AI assistants for users or organizations. Tagging Feature in Chats: Organize and retrieve chat data efficiently with custom tags. Chat Storage and Retrieval: Save all chats and replies for future analysis, with an option to restore accidentally deleted chats from Trash. Optimized Performance: Experience our high-speed, efficient platform. Our clients have been using it for over a year, with some spending $1500-$2000 per month on the API. File Upload & GPT-4 Vision Integration: Enhance interactions by uploading files for analysis and sending pictures for AI description. OpenAI API, Gemini, and Claude Integration: Seamlessly integrate with the powerful OpenAI API, Gemini, and Claude for a comprehensive suite of AI capabilities. API-Based Function Calls: Execute custom functions and automate tasks directly through the API. Usage Monitoring: Track your daily and monthly API usage costs to optimize spending. Day and Night Mode: Switch between light and dark themes to enhance visual comfort. Additional Features: Private Accounts: Ensure the security and privacy of your team members' data. Customizable Access Levels: Tailor access permissions to meet the specific needs of your organization. Shared Team Access: Foster collaboration by assigning AI assistants to specific departments or teams. AI-Powered File Analysis: Gain insights and automate tasks by uploading files for AI analysis. AI-Generated Image Descriptions: Enhance communication and understanding by sending pictures for AI-powered descriptions. Folder Structure Client The client folder contains the React-based frontend code for the application. This includes JSX, CSS, and JavaScript files, as well as any additional assets such as images or fonts. Below is a brief overview of the main subdirectories within the client folder: src: This directory contains the React components, styles, and scripts for the frontend application. public: Static assets, such as images or favicon.ico, go here. This folder is served as-is and not processed by the build system. Server The server folder contains all the backend-related code for the application, following a Model-View-Controller (MVC) pattern. Here is a breakdown of the main subdirectories within the server folder: controllers: This directory holds the controller files responsible for handling requests, processing data, and interacting with models. 
models: Data models and database-related code are organized in this folder. config: Configuration files for the backend, such as database configuration or any other service configuration, are stored in this directory. Getting Started Follow the steps below to get the project up and running. Prerequisites Node.js (Version: >=20.x) MongoDB NPM Development Setup Clone the Repository cd client Install Dependencies cd ../server Install Backend Dependencies npm start To initialize the application data and create a superadmin user, you can use either cURL or Postman: Using cURL If you prefer command-line tools, you can use curl to make a POST request to the /init-setup endpoint. Open your terminal and run the following command: curl -X POST http://localhost:8011/api/init -H "Content-Type: application/json" -d '{ "fname": "Super", "lname": "Admin", "email": "superadmin@example.com", "password": "yourSecurePassword", "employeeCount": 100, "companyName": "INIT_COMPANY" }' Initializing Setup with Postman Open Postman: Launch the Postman application. Create a New Request: Click on the '+' or 'New' button to create a new request. Set HTTP Method to POST: Ensure that the HTTP method is set to POST. Enter URL: Enter the URL http://localhost:8011/api/init. Set Headers: Go to the 'Headers' tab. Set Content-Type to application/json. Set Request Body: Switch to the 'Body' tab. Select the 'raw' radio button. Enter the JSON data for your superadmin user: Send Request: Click the 'Send' button to make the request. This will send a POST request to http://localhost:8011/api/init with the provided JSON payload, creating a superadmin user with the specified details (a Python equivalent of this request is sketched after this section). Site Setup: Log in with the superadmin credentials and set up your site by adding configs from your settings page, e.g. API keys. Reference CollaborativeAI Reference Guide Contributing If you would like to contribute to the project, we welcome your contributions! Please follow the guidelines outlined in the CONTRIBUTING.md file. Feel free to raise issues, suggest new features, or send pull requests to help improve the project. Your involvement is greatly appreciated! Thank you for contributing to our project! License MIT
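If you prefer scripting the initialization from Python rather than using cURL or Postman, a minimal sketch using the requests library is shown below. It assumes the backend is running locally on port 8011 as described above and simply reproduces the documented /api/init payload; the exact response body depends on the backend.

```python
# Minimal sketch: create the CollabAI superadmin via the documented /api/init endpoint.
# Assumes the backend is running locally on port 8011; adjust the payload to your needs.
import requests

payload = {
    "fname": "Super",
    "lname": "Admin",
    "email": "superadmin@example.com",
    "password": "yourSecurePassword",
    "employeeCount": 100,
    "companyName": "INIT_COMPANY",
}

response = requests.post(
    "http://localhost:8011/api/init",
    json=payload,  # json= sets the Content-Type: application/json header automatically
    timeout=30,
)
response.raise_for_status()                 # raise if the backend returned an error status
print(response.status_code, response.text)  # response format depends on the backend
```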

bootcamp_machine-learning
github
LLM Vibe Score0.469
Human Vibe Score0.0690798818433794
42-AIMar 26, 2025

bootcamp_machine-learning

Bootcamp Machine Learning One week to learn the basics of Machine Learning! :robot: Table of Contents Download Curriculum Module05 - Stepping Into Machine Learning Module06 - Univariate Linear Regression Module07 - Multivariate Linear Regression Module08 - Logistic Regression Module09 - Regularization Acknowledgements Contributors Beta-testers This project is a Machine Learning bootcamp created by 42 AI. As the notions covered in this bootcamp can be complex, we strongly advise students to have previously completed the following bootcamp: Python 42 Artificial Intelligence is a student organization of the Paris campus of the school 42. Our purpose is to foster discussion, learning, and interest in the field of artificial intelligence, by organizing various activities such as lectures and workshops. Download The PDF files for each module can be downloaded from our release page: https://github.com/42-AI/bootcamp_machine-learning/releases Curriculum Module05 - Stepping Into Machine Learning Get started with some linear algebra and statistics Sum, mean, variance, standard deviation, vector and matrix operations. Hypothesis, model, regression, loss function. Module06 - Univariate Linear Regression Implement a method to improve your model's performance: gradient descent, and discover the notion of normalization Gradient descent, linear regression, normalization. Module07 - Multivariate Linear Regression Extend linear regression to handle more than one feature, build polynomial models and detect overfitting Multivariate linear hypothesis, multivariate linear gradient descent, polynomial models. Training and test sets, overfitting. Module08 - Logistic Regression Discover your first classification algorithm: logistic regression! Logistic hypothesis, logistic gradient descent, logistic regression, multiclass classification. Accuracy, precision, recall, F1-score, confusion matrix. Module09 - Regularization Fight overfitting! Regularization, overfitting. Regularized loss function, regularized gradient descent. Regularized linear regression. Regularized logistic regression. Acknowledgements Contributors Amric Trudel (amric@42ai.fr) Maxime Choulika (maxime@42ai.fr) Pierre Peigné (ppeigne@student.42.fr) Matthieu David (mdavid@student.42.fr) Benjamin Carlier (bcarlier@student.42.fr) Pablo Clement (pclement@student.42.fr) Amir Mahla (amahla@42ai.fr) Mathieu Perez (mathieu.perez@42ai.fr) Beta-testers Richard Blanc (riblanc@student.42.fr) Solveig Gaydon Ohl (sgaydon-@student.42.fr) Quentin Feuillade--Montixi (qfeuilla@student.42.fr)
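To give a flavour of what Modules 05 and 06 build up to, here is a tiny, self-contained NumPy illustration of univariate linear regression trained with batch gradient descent. It is not code from the bootcamp itself, just a hedged sketch of the technique those modules teach.

```python
# Illustrative only: univariate linear regression fitted with batch gradient descent.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # single feature
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])  # targets, roughly y = 2x

theta0, theta1 = 0.0, 0.0                 # intercept and slope
alpha, epochs = 0.01, 5000                # learning rate and number of iterations
m = len(x)

for _ in range(epochs):
    y_hat = theta0 + theta1 * x           # hypothesis h(x) = theta0 + theta1 * x
    grad0 = (y_hat - y).sum() / m         # gradient of the MSE loss w.r.t. theta0
    grad1 = ((y_hat - y) * x).sum() / m   # gradient of the MSE loss w.r.t. theta1
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(theta0, theta1)                     # slope should converge to roughly 2
```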

panda-etl
github
LLM Vibe Score0.548
Human Vibe Score0.003720964303080932
sinaptik-aiMar 25, 2025

panda-etl

🐼 PandaETL PandaETL is an open-source, no-code ETL (Extract, Transform, Load) tool designed to extract and parse data from various document types including PDFs, emails, websites, audio files, and more. With an intuitive interface and powerful backend, PandaETL simplifies the process of data extraction and transformation, making it accessible to users without programming skills. ✨ Features 📝 No-Code Interface: Easily set up and manage ETL processes without writing a single line of code. 📄 Multi-Document Support: Extract data from PDFs, emails, websites, audio files, and more. 🔧 Customizable Workflows: Create and customize extraction workflows to fit your specific needs (coming soon). 🔗 Extensive Integrations: Integrate with various data sources and destinations (coming soon). 💬 Chat with Documents: Chat with your documents to retrieve information and answer questions (coming soon). 🚀 Getting Started 📋 Prerequisites Node.js and npm (or yarn) Python 3.x Conda Poetry (Python package manager) 🖥️ Project Setup Clone the repository: Frontend Setup Navigate to the frontend directory: Install dependencies (including Husky): Create a .env file in the frontend directory with the following, or copy the .env.example file to .env: Run the development server: Open http://localhost:3000 with your browser to see the result. Backend Setup Navigate to the backend directory: Create and activate a Conda environment: Install Poetry within the Conda environment: Install dependencies using Poetry (including pre-commit): Set up pre-commit hooks: Create an environment file from the example: Apply database migrations: Start the backend server: 📚 Usage 🆕 Creating a New Project Navigate to the "Projects" page. Click on "New Project". Fill in the project details and click "Create". ⚙️ Setting Up an Extraction Process Open a project and navigate to the "Processes" tab. Click on "New Process". Follow the steps to configure your extraction process. 💬 Chat with Your Documents (Coming Soon) Stay tuned for our upcoming feature that allows you to chat with your documents, making data retrieval even more interactive and intuitive. 🤝 Contributing We welcome contributions from the community. To contribute: Fork the repository. Create a new branch for your feature or bugfix. Commit your changes and push to your fork. Create a pull request with a detailed description of your changes. 📜 License This project is licensed under the MIT Expat License. See the LICENSE file for details. 🙏 Acknowledgements We would like to thank all the contributors and the open-source community for their support. 📞 Contact For any questions or feedback, please open an issue on GitHub. Development Setup This project uses pre-commit hooks in the backend and Husky in the frontend to ensure code quality and consistency. Frontend (Husky) Husky is set up in the frontend to run linting checks before each commit. To manually run the frontend linting:

dennis.tim-gmail.com
github
LLM Vibe Score0.394
Human Vibe Score0.02196798710271764
carpentries-incubatorMar 25, 2025

dennis.tim-gmail.com

Intro to AI for GLAM Our aim with this lesson is to empower GLAM (Galleries, Libraries, Archives, and Museums) staff with the foundation to support, participate in, and begin to undertake in their own right machine learning-based research and projects with heritage collections. After following this lesson, learners will be able to: Explain and differentiate key terms, phrases, and concepts associated with AI and Machine Learning in GLAM Describe ways in which AI is being innovatively used in the cultural heritage context today Identify what kinds of tasks machine learning models excel at in GLAM applications Identify weaknesses in machine learning models Reflect on ethical implications of applying machine learning to cultural heritage collections and discuss potential mitigation strategies Summarise the practical, technical steps involved in undertaking machine learning projects Identify additional resources on AI and Machine Learning in GLAM Contributing We welcome all contributions to improve the lesson! Maintainers will do their best to help you if you have any questions, concerns, or experience any difficulties along the way. We'd like to ask you to familiarize yourself with our Contribution Guide and have a look at the [more detailed guidelines][lesson-example] on proper formatting, ways to render the lesson locally, and even how to write new episodes. Please see the current list of issues for ideas for contributing to this repository. For making your contribution, we use the GitHub flow, which is nicely explained in the chapter Contributing to a Project in Pro Git by Scott Chacon. Look for the good_first_issue tag. This indicates that the maintainers will welcome a pull request fixing this issue. Maintainer(s) Current maintainers of this lesson are Mark Bell Nora McGregor Daniel van Strien Mike Trizna Authors A list of contributors to the lesson can be found in Citation To cite this lesson, please consult with [lesson-example]: https://carpentries.github.io/lesson-example
