coursera-practical-data-science-specialization

honghanhh
October 9, 2024

Solutions for the Practical Data Science Specialization

Access all courses in the Coursera Practical Data Science Specialization offered by deeplearning.ai.

This repo contains the SOLUTIONS to the exercises/labs needed to achieve the badge.

Course keynotes and solutions to related quizzes and assignments

The Practical Data Science Specialization on Coursera contains three courses:

1. Course 1: Analyze Datasets and Train ML Models using AutoML

Week 1:

  1. Artificial Intelligence (AI) mimics human behavior.
  1. Machine Learning (ML) is a subset of AI that uses statistical methods and algorithms that are able to learn from data without being explicitly programmed.
  1. Deep learning (DL) is a subset of machine learning that uses artificial neural networks to learn from data.
  1. AWS SageMaker is AWS's fully managed service for building, training, and deploying ML models.

Week 2:

  1. Statistical Bias: Training data does not comprehensively represent the underlying problem space.
  1. Statistical Bias Causes: Activity Bias, Societal Bias, Selection Bias, Data Drift/Shift, ...
  1. Class Imbalance (CI) measures the imbalance in the number of members between different facet values (a minimal sketch of this metric follows this list).
  1. Statistical bias can be detected with AWS SageMaker Data Wrangler and AWS SageMaker Clarify.
  1. Feature Importance explains the features that make up the training data using a score that indicates how useful or valuable each feature is relative to the others.
  1. SHAP (SHapley Additive exPlanations) attributes a model's predictions to individual feature contributions based on Shapley values.
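
A minimal sketch of the Class Imbalance measure described above, using plain pandas; the `product_category` facet column and the toy data are assumptions for illustration, not taken from the course labs.

```python
import pandas as pd

# Hypothetical review dataset with a "product_category" facet column.
df = pd.DataFrame({
    "product_category": ["Gift Card"] * 80 + ["Software"] * 20,
    "star_rating": [5] * 50 + [4] * 30 + [1] * 20,
})

def class_imbalance(series: pd.Series, facet_a: str, facet_b: str) -> float:
    """CI = (n_a - n_b) / (n_a + n_b), in [-1, 1]; 0 means perfectly balanced."""
    n_a = (series == facet_a).sum()
    n_b = (series == facet_b).sum()
    return (n_a - n_b) / (n_a + n_b)

print(class_imbalance(df["product_category"], "Gift Card", "Software"))  # 0.6
```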

Week 3:

  1. Data Preparation includes Ingesting & Analyzing, Preparing & Transforming, Training & Tuning, and Deploying & Managing.
  1. AutoML aims to automate the process of building a model (see the sketch after this list).
  1. Model Hosting serves the trained model from an endpoint for inference.
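
A minimal sketch of launching an AutoML job with the SageMaker Python SDK (Autopilot), assuming an execution role, a SageMaker session, and a training CSV in S3 with a `star_rating` target column already exist; the parameter values are illustrative, not the course's exact lab code.

```python
from sagemaker.automl.automl import AutoML

# Assumed to exist: role (IAM execution role ARN), sess (sagemaker.Session()),
# and train_s3_uri pointing to a CSV with a "star_rating" target column.
automl = AutoML(
    role=role,
    target_attribute_name="star_rating",  # column Autopilot should predict
    max_candidates=3,                     # limit the number of candidate pipelines explored
    sagemaker_session=sess,
)

# Autopilot analyzes the data, generates candidate pipelines, then trains and tunes them.
automl.fit(inputs=train_s3_uri, wait=False)
```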

Week 4:

  1. Built-in Algorithms in AWS SageMaker support Classification, Regression, and Clustering problems.
  1. Text Analysis Evolution: Word2Vec (CBOW & Skip-gram), GloVe, FastText, Transformer, BlazingText, ELMo, GPT, BERT, ... (see the Word2Vec sketch below).
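
A minimal sketch of the CBOW vs. skip-gram distinction mentioned above, using gensim's Word2Vec rather than the SageMaker BlazingText built-in; the toy corpus is an assumption for illustration.

```python
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
sentences = [
    ["the", "gift", "card", "was", "great"],
    ["the", "software", "was", "slow"],
    ["great", "gift", "for", "friends"],
]

# sg=0 trains CBOW (predict a word from its context);
# sg=1 trains skip-gram (predict the context from a word).
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["gift"].shape)                     # (50,)
print(skipgram.wv.most_similar("gift", topn=2))  # nearest neighbours in the toy space
```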

2. Course 2: Build, Train, and Deploy ML Pipelines using BERT

Week 1

  1. Feature Engineering involves converting raw data from one or more sources into meaningful features that can be used for training machine learning models.
  1. The Feature Engineering step includes feature selection, creation, and transformation.
  1. BERT is a Transformer-based pretrained language model that successfully captures bidirectional context in word representations (see the tokenization sketch after this list).
  1. Feature Store: centralized, reusable, and discoverable.
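
A minimal sketch of turning raw review text into BERT-style input features with a Hugging Face tokenizer; the model checkpoint and the example review are assumptions for illustration and may not match the exact BERT variant used in the course labs.

```python
from transformers import AutoTokenizer

# Assumed checkpoint; the course labs may use a different BERT variant.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

review = "This gift card was perfect for my friend."

# Convert raw text into fixed-length input_ids / attention_mask features
# that can be written to a feature store and reused for training.
features = tokenizer(
    review,
    max_length=64,
    padding="max_length",
    truncation=True,
)

print(features["input_ids"][:10])
print(features["attention_mask"][:10])
```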

Week 2

Learn how to fine-tune a pretrained BERT (or one of its variants), and how to debug and profile the training job with AWS SageMaker (see the sketch below).
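
A minimal sketch of launching such a training job with the SageMaker Hugging Face estimator, with a Debugger rule and profiler attached; the entry-point script, framework versions, instance type, and hyperparameters are assumptions for illustration.

```python
from sagemaker.debugger import Rule, rule_configs, ProfilerConfig
from sagemaker.huggingface import HuggingFace

# Assumed to exist: role (IAM execution role) and train_s3_uri (processed features in S3).
estimator = HuggingFace(
    entry_point="train.py",          # assumed training script that fine-tunes BERT
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.26",     # illustrative framework version combination
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 3, "train_batch_size": 64},
    rules=[Rule.sagemaker(rule_configs.loss_not_decreasing())],   # Debugger rule
    profiler_config=ProfilerConfig(system_monitor_interval_millis=500),
)

estimator.fit({"train": train_s3_uri})
```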

Week 3

  1. MLOps builds on DevOps practices that encompass people, process, and technology, and adds considerations and practices that are unique to machine learning workloads, such as automated pipelines for data processing, training, and deployment (see the sketch below).
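
A minimal sketch of wiring processing and training into a SageMaker Pipeline, one common way such ML workflows are automated; the step names, the preprocessing script, and the `processor`/`estimator` objects are assumed to be defined elsewhere and are purely illustrative.

```python
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

# Assumed to exist: processor (e.g. an SKLearnProcessor), estimator (e.g. the
# Hugging Face estimator above), role, and train_s3_uri in S3.
process_step = ProcessingStep(
    name="PrepareFeatures",
    processor=processor,
    code="prepare_data.py",          # assumed preprocessing script
)

train_step = TrainingStep(
    name="TrainBERT",
    estimator=estimator,
    inputs={"train": train_s3_uri},  # could instead reference process_step outputs
)

pipeline = Pipeline(
    name="bert-review-pipeline",
    steps=[process_step, train_step],
)

pipeline.upsert(role_arn=role)       # create or update the pipeline definition
execution = pipeline.start()         # run it end to end
```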

3. Course 3: Optimize ML Models and Deploy Human-in-the-Loop Pipelines

Week 1

  1. Model Tuning aims to fit the model to the underlying patterns in your training data and find the best possible hyperparameters for your model.
  1. Automatic Model Tuning includes grid search, random search, Bayesian optimization, and Hyperband (see the sketch after this list).
  1. Challenges: checkpointing, distributed training strategies.
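
A minimal sketch of automatic model tuning with the SageMaker HyperparameterTuner; the estimator, metric definition, hyperparameter ranges, and strategy value are assumptions for illustration.

```python
from sagemaker.tuner import ContinuousParameter, IntegerParameter, HyperparameterTuner

# Assumed to exist: estimator (e.g. the Hugging Face estimator above) and train_s3_uri.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    metric_definitions=[{"Name": "validation:accuracy",
                         "Regex": "val_accuracy: ([0-9\\.]+)"}],  # assumed log format
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-5, 1e-3),
        "train_batch_size": IntegerParameter(16, 128),
    },
    strategy="Random",   # "Bayesian" is also supported; see the SDK docs for other strategies
    max_jobs=8,
    max_parallel_jobs=2,
)

tuner.fit({"train": train_s3_uri})
```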

Week 2

Week 3


Disclaimer

The solutions here are ONLY FOR REFERENCE, to guide you if you get stuck somewhere. It is highly recommended to try out the quizzes and assignments yourself first before referring to the solutions here.

Feel free to discuss further with me on LinkedIn: honghanhh.
