Data Science with Computer Vision

6 Months

  • Statistics
  • Data Visualization in Python
  • EDA
  • Regression
  • Supervised Machine Learning
  • Unsupervised Machine Learning
  • Ensemble Techniques
  • Association Rules
  • Recommendation Systems
  • Artificial Neural Network
  • CNN
  • Introduction to Computer Vision
  • Introduction to OpenCV
  • Computer Vision Techniques
  • Object Detection
  • Image Segmentation
  • Image colorization with OpenCV
  • Working with Video and Video Streams
  • Transfer Learning and Fine-Tuning
  • Generative Adversarial Networks
  • Autoencoders
  • Modern CNN Architectures including Vision Transformers
  • Image similarity
  • Facial Recognition
  • Deep Fake Generation
  • Video Classification
  • Optical Character Recognition
  • Image Captioning
  • Assignments for assessment
  • Projects
  • Internship

Course Outline

Statistical Foundations

In this module, you will learn the statistical methods used for decision making throughout this Data Science course.

  • Probability distributions – Work with the Binomial, Poisson, and Normal distributions in Python.
  • Bayes’ theorem – Bayes’ theorem, named after Thomas Bayes, is a formula for computing conditional probability: the probability of an event occurring given that another event has already occurred.
  • Central limit theorem – This module will teach you the Central Limit Theorem (CLT): why the distribution of sample means approaches a normal distribution as the sample size grows.
  • Hypothesis testing – This module will teach you about hypothesis testing in statistics, including the one-sample t-test, ANOVA, and the chi-square test (a short Python sketch follows this list).
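
As an illustration of the hypothesis-testing topics above, here is a minimal one-sample t-test using SciPy; the data and the 0.05 significance level are purely illustrative.

    import numpy as np
    from scipy import stats

    # Hypothetical sample of measurements
    sample = np.array([102, 98, 110, 105, 97, 101, 108, 99, 103, 106])

    # H0: the population mean is 100; H1: it is not
    t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

    alpha = 0.05  # significance level (illustrative)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    print("Reject H0" if p_value < alpha else "Fail to reject H0")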

Exploratory Data Analysis (EDA)

This module of the 6-month Data Science course covers Exploratory Data Analysis with Pandas, Seaborn, and Matplotlib, along with summary statistics; a short Python sketch follows the list below.

  • Pandas – Pandas is one of the most widely used Python libraries for analyzing and manipulating data. This module will give you a deep understanding of exploring data sets using Pandas.
  • Summary statistics (mean, median, mode, variance, standard deviation) – In this module, you will learn about various statistical formulas and implement them using Python.
  • Seaborn – Seaborn is a Matplotlib-based data visualization library and one of the most widely used plotting libraries in Python. This module will give you a deep understanding of exploring data sets using Seaborn.
  • Matplotlib – Matplotlib is a Python library for creating static, animated, and interactive visualizations. This module will give you a deep understanding of exploring data sets using Matplotlib.
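
A minimal EDA sketch combining the libraries above; it assumes Seaborn's bundled "tips" example dataset, which is downloaded on first use.

    import seaborn as sns
    import matplotlib.pyplot as plt

    # Seaborn's bundled "tips" dataset loads as a Pandas DataFrame
    df = sns.load_dataset("tips")

    print(df.head())                   # first rows
    print(df.describe())               # mean, std, quartiles for numeric columns
    print(df["day"].value_counts())    # frequencies of a categorical column

    # Quick visual checks
    sns.histplot(df["total_bill"], kde=True)        # distribution of one variable
    plt.show()
    sns.boxplot(x="day", y="total_bill", data=df)   # spread across categories
    plt.show()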

 

Regression – Linear Regression

This module will get you comfortable with the techniques used in Linear Regression (Logistic Regression follows in the next module); a short Python sketch follows the list below.

  • Multiple linear regression – Multiple Linear Regression is used to predict one dependent variable from several independent variables.
  • Fitted regression lines – A fitted regression line is the line given by the estimated regression equation, drawn over a scatter plot of your data.
  • AIC, BIC, Model Fitting, Training and Test Data – In this module, you will learn about model-selection criteria such as AIC and BIC, and about fitting models and splitting data into training and test sets.
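
A minimal sketch of fitting a multiple linear regression and reading off AIC and BIC, using statsmodels on synthetic data; the coefficients and noise level are illustrative.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic data: y depends on two independent variables (illustrative)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=100)

    X_design = sm.add_constant(X)        # add the intercept term
    model = sm.OLS(y, X_design).fit()    # ordinary least squares fit

    print(model.params)                           # coefficients of the fitted regression line
    print("AIC:", model.aic, "BIC:", model.bic)   # criteria for comparing candidate models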

 

Regression – Logistic Regression

  • Introduction to Logistic regression, interpretation, odds ratio – Logistic regression is a simple classification algorithm that predicts a categorical dependent variable from independent variables; its coefficients can be interpreted through odds ratios.
  • Misclassification, Probability, AUC, R-Square – This module will teach you how to work with misclassification, predicted probabilities, AUC, and R-Square (a short Python sketch follows this list).
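
A minimal scikit-learn sketch tying these ideas together on synthetic data: predicted probabilities, accuracy (the complement of misclassification), AUC, and odds ratios.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, roc_auc_score

    # Synthetic binary classification data (illustrative)
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression().fit(X_train, y_train)

    proba = clf.predict_proba(X_test)[:, 1]   # predicted probability of class 1
    pred = clf.predict(X_test)

    print("Accuracy:", accuracy_score(y_test, pred))   # 1 - misclassification rate
    print("AUC:", roc_auc_score(y_test, proba))        # area under the ROC curve
    print("Odds ratios:", np.exp(clf.coef_))           # exponentiated coefficients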

 

Supervised Machine Learning 

In this module, you will learn the Supervised Learning techniques used in Machine Learning; a short Python sketch follows the list below.

  • CART – CART (Classification and Regression Trees) is a predictive machine learning model that predicts an outcome variable’s value from the values of other variables.
  • KNN – KNN is one of the most straightforward machine learning algorithms for solving regression and classification problems.
  • Decision Trees – Decision Tree is a Supervised Machine Learning algorithm used for both classification and regression problems. It is a hierarchical structure where internal nodes indicate the dataset features, branches represent the decision rules, and each leaf node indicates the result.
  • Naive Bayes – The Naive Bayes algorithm is used to solve classification problems using Bayes’ Theorem.
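
A minimal scikit-learn comparison of the classifiers above on the built-in Iris dataset; the train/test split and hyperparameters are illustrative.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "Decision Tree (CART)": DecisionTreeClassifier(random_state=0),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "Naive Bayes": GaussianNB(),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, "accuracy:", model.score(X_test, y_test))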

 

Unsupervised Learning

In this module, you will learn the Unsupervised Learning techniques used in Machine Learning; a short Python sketch follows the list below.

  • Clustering – K-Means & Hierarchical – Clustering is an unsupervised learning technique involving the grouping of data. In this module, you will learn everything you need to know about the method and its types, like K-means clustering and hierarchical clustering.
  • Distance methods – This module will teach you how to work with distance measures such as Euclidean, Manhattan, and cosine distance.
  • Features of a Cluster – Labels, Centroids, Inertia – This module will drive you through the features of a cluster, such as labels, centroids, and inertia.
  • Eigenvectors and eigenvalues – In this module, you will learn how to compute the eigenvectors and eigenvalues of a matrix.
  • Principal component analysis – Principal Component Analysis is a technique for reducing the complexity of a model, for example by reducing the number of input variables for a predictive model to avoid overfitting.
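
A minimal sketch combining PCA and K-Means on the built-in Iris dataset; the number of components and clusters are illustrative.

    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    X, _ = load_iris(return_X_y=True)
    X_scaled = StandardScaler().fit_transform(X)

    # Reduce four features to two principal components
    pca = PCA(n_components=2)
    X_2d = pca.fit_transform(X_scaled)
    print("Explained variance ratio:", pca.explained_variance_ratio_)

    # Cluster the reduced data into three groups
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_2d)
    print("Labels:", kmeans.labels_[:10])
    print("Centroids:", kmeans.cluster_centers_)
    print("Inertia:", kmeans.inertia_)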

Ensemble Techniques

In this module, we discuss the shortcomings of standalone supervised models and learn ensemble techniques to overcome them; a short Python sketch follows the list below.

  • Bagging & Boosting – Bagging is a meta-algorithm in machine learning used to improve the stability and accuracy of algorithms for statistical classification and regression.
    Boosting is a meta-algorithm in machine learning that builds a strong classifier from several weak classifiers.
  • Random Forest – Random Forest trains several decision trees on different subsets of the provided dataset and then averages (or votes over) their predictions to improve predictive accuracy.
  • AdaBoost & Gradient boosting – Boosting can be further classified into gradient boosting and AdaBoost (adaptive boosting). This module will teach you about both.
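
A minimal scikit-learn comparison of the ensemble methods above on the built-in breast-cancer dataset; the estimator counts are illustrative.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import (
        RandomForestClassifier,       # bagging of decision trees
        AdaBoostClassifier,           # adaptive boosting
        GradientBoostingClassifier,   # gradient boosting
    )

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    for model in (
        RandomForestClassifier(n_estimators=200, random_state=0),
        AdaBoostClassifier(n_estimators=100, random_state=0),
        GradientBoostingClassifier(random_state=0),
    ):
        model.fit(X_train, y_train)
        print(type(model).__name__, "accuracy:", model.score(X_test, y_test))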

Association Rules Mining & Recommendation Systems

Association rule mining is the data mining process of finding rules that describe associations between sets of items, such as products that are frequently bought together.
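
A minimal association-rule sketch; it assumes the third-party mlxtend library and a small hypothetical set of one-hot encoded transactions.

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Hypothetical one-hot encoded transactions: True means the item was bought
    transactions = pd.DataFrame({
        "bread":  [True, True, False, True, True],
        "butter": [True, True, False, False, True],
        "milk":   [False, True, True, True, False],
    })

    frequent = apriori(transactions, min_support=0.4, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
    print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])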

Recommendation engines are a class of machine learning systems that deal with ranking or rating products and users. Loosely defined, a recommender system predicts the rating a user would give to a specific item; these predictions are then ranked and returned to the user.
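
A minimal item-based collaborative filtering sketch using cosine similarity on a hypothetical user-item rating matrix.

    import numpy as np

    # Hypothetical rating matrix: rows are users, columns are items, 0 = unrated
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    # Item-item cosine similarity
    norms = np.linalg.norm(ratings, axis=0)
    similarity = (ratings.T @ ratings) / np.outer(norms, norms)

    # Predict user 0's rating for item 2 as a similarity-weighted average of their ratings
    user = ratings[0]
    rated = user > 0
    prediction = similarity[2, rated] @ user[rated] / similarity[2, rated].sum()
    print("Predicted rating of user 0 for item 2:", round(prediction, 2))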

 

Introduction to Deep Learning – Single Layer Perceptron

Artificial neural networks, usually simply called neural networks or neural nets, are computing systems inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain.
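
A minimal single-layer perceptron written from scratch with NumPy and trained on the logical AND function; the learning rate and epoch count are illustrative.

    import numpy as np

    # Training data for the logical AND function
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)   # weights
    b = 0.0           # bias
    lr = 0.1          # learning rate

    for epoch in range(20):
        for xi, target in zip(X, y):
            output = 1 if xi @ w + b > 0 else 0   # step activation
            error = target - output
            w += lr * error * xi                  # perceptron update rule
            b += lr * error

    print("Weights:", w, "Bias:", b)
    print("Predictions:", [1 if xi @ w + b > 0 else 0 for xi in X])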

Convolutional Neural Network

A convolutional neural network is a feed-forward neural network that is generally used to analyze visual images by processing data with a grid-like topology. It is also known as a ConvNet. A convolutional neural network is used to detect and classify objects in an image.
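
A minimal ConvNet sketch in Keras for 28x28 grayscale images with 10 classes (e.g. MNIST); the layer sizes are illustrative.

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),   # learn local image features
        layers.MaxPooling2D((2, 2)),                    # downsample the feature maps
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),         # class probabilities
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()
    # Training would look like: model.fit(x_train, y_train, epochs=5, validation_split=0.1)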

 

  • Introduction to Computer Vision

Get a conceptual overview of image classification, object localization, object detection, and image segmentation. You will also be able to describe multi-label classification and distinguish between semantic segmentation and instance segmentation.

  • OpenCV Introduction

How Images are Stored & NumPy Introduction

Reading & Writing Images

Understanding Color Spaces

Using Different Color Spaces

Drawing in CV2

Callbacks & Trackbar in CV2
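
A minimal OpenCV sketch touching the topics above: reading an image as a NumPy array, converting color spaces, drawing, and writing the result; the file paths are illustrative.

    import cv2

    # Images are stored as NumPy arrays: height x width x channels (BGR order in OpenCV)
    img = cv2.imread("input.jpg")                  # illustrative path; returns None if missing
    print(img.shape, img.dtype)                    # e.g. (480, 640, 3) uint8

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # convert to a different color space
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Drawing directly onto the array
    cv2.rectangle(img, (50, 50), (200, 200), (0, 255, 0), 2)
    cv2.putText(img, "hello", (50, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)

    cv2.imwrite("output.jpg", img)                 # write the result back to disk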

  • Computer Vision Techniques

Thresholding – Thresholding is used to simplify visual data for further analysis.

Blurring and Smoothing images – Images may contain a lot of noise; blurring and smoothing techniques reduce this noise.

Color Filtering – When you need information about a specific color, you can filter out just the color range you want.

Edge detection – Edge detection highlights object boundaries, which makes image recognition easier.
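
A minimal OpenCV sketch of the four techniques above; the thresholds, kernel size, and HSV color range are illustrative.

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")                  # illustrative path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Thresholding: reduce the image to black and white
    _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Blurring and smoothing: reduce noise
    blurred = cv2.GaussianBlur(img, (5, 5), 0)

    # Color filtering: keep only pixels within an HSV range (a rough blue range here)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 50, 50]), np.array([130, 255, 255]))
    blue_only = cv2.bitwise_and(img, img, mask=mask)

    # Edge detection with the Canny detector
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)

    for name, out in [("thresh", thresh), ("blurred", blurred), ("blue", blue_only), ("edges", edges)]:
        cv2.imwrite(f"{name}.jpg", out)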

 

  • Object Detection

Get an overview of some popular object detection models, such as region-based CNNs (R-CNN) and detectors built on backbones like ResNet-50. You’ll use object detection models that you retrieve from TensorFlow Hub.
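
A minimal sketch of loading and running a pre-trained detector from TensorFlow Hub; the model handle is a placeholder to fill in from tfhub.dev, the image path is illustrative, and the exact output keys vary by model.

    import tensorflow as tf
    import tensorflow_hub as hub

    # Placeholder handle: substitute a real detection model handle from tfhub.dev
    MODEL_URL = "https://tfhub.dev/..."
    detector = hub.load(MODEL_URL)

    # Detection models on TF Hub typically expect a batch of uint8 images
    image = tf.io.decode_jpeg(tf.io.read_file("input.jpg"))   # illustrative path
    image = tf.expand_dims(image, axis=0)                     # add a batch dimension

    outputs = detector(image)
    # Typical keys (model-dependent): detection_boxes, detection_scores, detection_classes
    print(list(outputs.keys()))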

  • Image Segmentation

This module covers image segmentation using variations of the fully convolutional neural network. With these networks, you can assign class labels to each pixel and perform much more detailed identification of objects than with bounding boxes. You’ll build fully convolutional networks, U-Net, and Mask R-CNN to identify and detect objects.
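
As a minimal sketch of the fully convolutional idea (not a full U-Net or Mask R-CNN), here is a tiny Keras encoder-decoder that predicts a class for every pixel; the input size and class count are illustrative.

    from tensorflow.keras import layers, models

    NUM_CLASSES = 3   # illustrative

    inputs = layers.Input(shape=(128, 128, 3))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)                                  # downsample (encoder)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)  # upsample (decoder)
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)   # per-pixel class scores

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()   # output shape (None, 128, 128, NUM_CLASSES): one prediction per pixel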

 

  • OpenCV implementations of Neural Style Transfer, YOLOv3, SSDs and a black and white image colorizer. Image colorization is the process of taking an input grayscale (black and white) image and then producing an output colorized image that represents the semantic colors and tones of the input.
  • Working with Video and Video Streams – Learn to work with video files and live video streams in OpenCV, applying computer vision techniques based on image recognition and statistical analysis to perform tasks such as face recognition, detection of image patterns, and computer-human interaction.
  • CNNs – A detailed overview of CNN analysis, visualizing performance, and advanced CNN techniques.
  • Transfer Learning and Fine Tuning – Transfer learning is when a model developed for one task is reused to work on a second task. Fine-tuning is one approach to transfer learning in which you replace the model’s output layers to fit the new task, train only those new layers, and optionally unfreeze some of the pre-trained layers afterwards (a Keras sketch at the end of this list illustrates the idea).
  • Generative Adversarial Networks – A generative adversarial network (GAN) is a machine learning (ML) model in which two neural networks compete with each other to become more accurate in their predictions. Architectures covered include CycleGAN, ArcaneGAN, super-resolution GANs, and StyleGAN.
  • Autoencoders – An autoencoder is an unsupervised learning technique for neural networks that learns efficient data representations, for example by training the network to ignore signal noise. Autoencoders can be used for image denoising, image compression, and, in some cases, even generation of image data (a Keras sketch at the end of this list shows a small convolutional autoencoder).
  • Modern CNN Architectures including Vision Transformers (ResNets, DenseNets, MobileNet, VGG19, InceptionV3, EfficientNet and ViTs) – The Vision Transformer divides an image into fixed-size patches, linearly embeds each of them, adds positional embeddings, and feeds the resulting sequence to a transformer encoder.
  • Siamese Networks for image similarity – Image similarity is the measure of how similar two images are. In other words, it quantifies the degree of similarity between intensity patterns in two images.
  • Facial Recognition (Age, Gender, Emotion, Ethnicity) – Face recognition systems use computer algorithms to pick out specific, distinctive details about a person’s face.
  • Object Detection with YOLOv5 and v4, EfficientDet, SSDs, Faster R-CNNs – Object detection is a computer vision technique for locating instances of objects in images or videos. Object detection algorithms typically leverage machine learning or deep learning to produce meaningful results.
  • Deep Fake Generation – Deepfake refers to realistic, but fake images, sounds, and videos generated by artificial intelligence methods.
  • Video Classification – Video Classification is the task of producing a label that is relevant to the video given its frames. A good video level classifier is one that not only provides accurate frame labels, but also best describes the entire video given the features and the annotations of the various frames in the video.
  • Optical Character Recognition (OCR) – OCR is a technique for detecting printed or handwritten text characters inside digital images of paper files, such as scanning paper records.
  • Image Captioning – Image Captioning is the process of generating a textual description of an image. It uses both Natural Language Processing and Computer Vision to generate the captions.
  • Assignments for assessment 
  • Projects
  • Internship
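
Referenced from the Transfer Learning and Fine Tuning item above: a minimal Keras sketch using a pre-trained MobileNetV2 backbone; the 5-class task, dataset, and learning rates are illustrative, and the ImageNet weights are downloaded on first use.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Pre-trained backbone without its classification head (ImageNet weights)
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False                      # freeze the pre-trained layers

    # New output head for a hypothetical 5-class task
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # Step 1 (transfer learning): train only the new head, e.g. model.fit(train_ds, epochs=5)

    # Step 2 (fine-tuning): unfreeze the backbone and continue with a small learning rate
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, epochs=5)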
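
Referenced from the Autoencoders item above: a small convolutional autoencoder for 28x28 grayscale images; the layer sizes are illustrative, and for denoising you would train on noisy-input / clean-target pairs.

    from tensorflow.keras import layers, models

    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)                               # 14x14
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D()(x)                         # 7x7 bottleneck

    x = layers.Conv2DTranspose(8, 3, strides=2, activation="relu", padding="same")(encoded)
    x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
    decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

    autoencoder = models.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
    autoencoder.summary()
    # Denoising: autoencoder.fit(x_noisy, x_clean, epochs=10, batch_size=128)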