77 Crescent St Waltham MA | 469-664-3655 | ssk150530@gmail.com
Currently I am working on Long Short-Term Memory (LSTM) networks using TensorFlow on a distributed GPU AWS EC2 instance. I have experience solving problems with deep learning techniques. My interests include real-time signal processing, machine learning, and deep learning.
Prior to this, I worked as an intern at Deep Cognition Labs on the SqueezeNet and YOLO object detection algorithms for computer vision applications on mobile phones.
Apart from this, I write posts on machine learning and TensorFlow. I like to cook, I am a good badminton player, and I love hiking (I went to the Himalayas once).
Efficiently detects voice and noise signals. Created the model and tested it on the dataset using TensorFlow, training a 13-layer ConvNet architecture. Implementing the trained model on a smartphone, taking care of real-time constraints such as time complexity. Working on graph quantization for the memory constraint. Current accuracy is 90%; still working on feature engineering to improve it.
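The exact quantization tooling is not specified above; the following is only a minimal sketch, assuming TensorFlow Lite post-training quantization and a stand-in for the trained ConvNet (input shape and layers are assumptions):

```python
import tensorflow as tf

# Stand-in for the trained ConvNet (input shape and layers are hypothetical).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(98, 40, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # voice vs. noise
])

# Post-training weight quantization shrinks the graph for on-device memory limits.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("voice_noise_detector.tflite", "wb") as f:
    f.write(converter.convert())
```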
Generated my own dataset (e.g., cosine values). Implemented using a dynamic RNN with a cell size of 10. Used TensorBoard to visualize the graph structure, weights, and loss function. Working on an LSTM to improve results.
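A minimal sketch of the cosine-sequence setup, assuming TensorFlow 1.x (tf.nn.dynamic_rnn) and a hypothetical window length of 20 samples:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x assumed

SEQ_LEN, CELL_SIZE = 20, 10

# Dataset: sliding windows over a cosine wave, each predicting the next value.
t = np.arange(0, 100, 0.1)
wave = np.cos(t).astype(np.float32)
X = np.stack([wave[i:i + SEQ_LEN] for i in range(len(wave) - SEQ_LEN)])
y = wave[SEQ_LEN:]

inputs = tf.placeholder(tf.float32, [None, SEQ_LEN, 1])
targets = tf.placeholder(tf.float32, [None, 1])

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=CELL_SIZE)  # swap in an LSTMCell to improve results
outputs, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
pred = tf.layers.dense(outputs[:, -1, :], 1)              # regress from the last time step

loss = tf.reduce_mean(tf.square(pred - targets))
tf.summary.scalar("loss", loss)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("logs", sess.graph)    # graph structure for TensorBoard
    for step in range(200):
        _, summ = sess.run([train_op, merged],
                           {inputs: X[..., None], targets: y[:, None]})
        writer.add_summary(summ, step)                    # loss curve for TensorBoard
```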
Used Pandas to fill in missing values and standardize all the continuous inputs. Categorical features were binarized and labels were converted to one-hot encoding. Used an SVM with a kernel to fit the classifier, as training samples were limited (700 samples). Accuracy on the test data was 91%.
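A minimal sketch of the preprocessing and kernel-SVM fit, with hypothetical file and column names:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("data.csv")                       # hypothetical file
df = df.fillna(df.median(numeric_only=True))       # fill missing continuous values

continuous = ["age", "income"]                     # hypothetical continuous columns
df[continuous] = StandardScaler().fit_transform(df[continuous])

X = pd.get_dummies(df[continuous + ["color"]])     # binarize the categorical feature
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0)                     # kernel SVM suits the small sample size
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```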
Applied a supervised machine learning technique to two months of stock prices. Used the Support Vector Regression algorithm (scikit-learn) to predict the missing values. The major part of the work was spent on acquiring and cleaning the data.
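A minimal sketch of filling missing prices with scikit-learn's SVR, assuming a simple day-index-to-price regression (the actual features are not specified):

```python
import numpy as np
from sklearn.svm import SVR

days = np.arange(60, dtype=float)                      # two months of trading days (hypothetical)
prices = 100 + np.cumsum(np.random.randn(60))          # stand-in for the cleaned price series
prices[[10, 25, 40]] = np.nan                          # pretend these values are missing

known = ~np.isnan(prices)
reg = SVR(kernel="rbf", C=100, gamma=0.1)              # fit on the observed days
reg.fit(days[known].reshape(-1, 1), prices[known])

prices[~known] = reg.predict(days[~known].reshape(-1, 1))  # predict the missing values
```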
Done as part of Kaggle competitions. Implemented in Python (scikit-learn). Input samples were cleaned and missing values were filled in using Pandas. Both projects were implemented with a random forest classifier. The code is available on GitHub and can be run in an Anaconda Jupyter Notebook.
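A minimal sketch of the random-forest pipeline, assuming a Titanic-style Kaggle CSV; file and column names are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

features = ["Pclass", "Age", "Fare"]                       # hypothetical feature columns
train[features] = train[features].fillna(train[features].median())
test[features] = test[features].fillna(train[features].median())

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train[features], train["Survived"])

# Write a Kaggle-style submission file with the predictions.
pd.DataFrame({"PassengerId": test["PassengerId"],
              "Survived": clf.predict(test[features])}).to_csv("submission.csv", index=False)
```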
Baby names from the Social Security website for a period of 10 years were scraped using regular expressions (Python). Sorted and plotted the most and least frequently occurring names in each year. The results were interesting. This was part of the Google Python class project.
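A minimal sketch of the regex extraction, assuming the HTML row layout used in the Google Python class babynames exercise (rank, male name, female name per table row):

```python
import re

def extract_names(filename):
    text = open(filename).read()
    year = re.search(r'Popularity in (\d\d\d\d)', text).group(1)
    rows = re.findall(r'<td>(\d+)</td><td>(\w+)</td><td>(\w+)</td>', text)
    ranks = {}
    for rank, male, female in rows:
        ranks.setdefault(male, rank)        # keep the first (best) rank seen
        ranks.setdefault(female, rank)
    return year, ranks

# Hypothetical input file from the exercise data.
year, ranks = extract_names('baby2008.html')
most_common = min(ranks, key=lambda name: int(ranks[name]))   # rank 1 = most occurring
print(year, most_common)
```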
MNIST is a collection of handwritten digit images of dimension 28*28. Using it as the dataset, ran a ConvNet to classify each digit; used TensorFlow to implement the architecture. Accuracy: 99%.
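A minimal ConvNet sketch for MNIST in TensorFlow (tf.keras); the exact layer configuration used in the project is not stated, so this one is an assumption:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0    # 28x28 grayscale images, scaled to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # one class per digit
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```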
Implemented the item-item collaborative filtering algorithm. Used cosine similarity to recommend movies. Used the Pandas (data extraction) and scikit-learn (algorithm and performance metrics) modules to obtain the ROC curve.
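A minimal sketch of item-item filtering with cosine similarity, assuming a MovieLens-style ratings file (userId, movieId, rating); file and column names are hypothetical:

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

ratings = pd.read_csv("ratings.csv")
# Item-by-user rating matrix; unrated entries treated as 0.
matrix = ratings.pivot_table(index="movieId", columns="userId", values="rating").fillna(0)

# Item-item cosine similarity matrix.
sim = pd.DataFrame(cosine_similarity(matrix), index=matrix.index, columns=matrix.index)

def recommend(movie_id, k=5):
    # The k items most similar to the one the user just rated highly.
    return sim[movie_id].drop(movie_id).nlargest(k)

print(recommend(1))   # movie id 1 is a hypothetical example
```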
GridSearchCV: cross-validation. PCA: dimensionality reduction. Lambda functions: "just because I know how to use them" :P. KNN: classification algorithm. This project was mainly about understanding how cross-validation helps choose the best hyperparameter, in our case the number of neighbors.
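A minimal sketch of the PCA + KNN pipeline with GridSearchCV selecting the number of neighbors (the dataset and grid values here are assumptions):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)                # stand-in dataset

pipe = Pipeline([("pca", PCA(n_components=30)),    # dimensionality reduction
                 ("knn", KNeighborsClassifier())])

# Cross-validation picks the best number of neighbors from the grid.
grid = GridSearchCV(pipe, {"knn__n_neighbors": [1, 3, 5, 7, 9, 11]}, cv=5)
grid.fit(X, y)

print("best number of neighbors:", grid.best_params_["knn__n_neighbors"])
print("cross-validated accuracy:", grid.best_score_)
```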
Used the C language to implement on an ARM processor (MathCoder). Developed and optimized fixed-point and floating-point code. Implemented the Doppler effect, a DTMF decoder, real-time FIR filtering, adaptive filtering, and FFT using MATLAB.
Programmed and tested an electronic control module (DCM 2.5) for Hyundai Motors. Worked on the CAN communication protocol and features such as fuel injection in a Daimler project. Extensively used Polyspace (static analysis) and RTRT (dynamic testing) tools for unit testing embedded systems.
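The filtering work itself was done in MATLAB and C; the following is only a minimal NumPy/SciPy sketch of the block-wise real-time FIR filtering idea (filter design and block size are hypothetical):

```python
import numpy as np
from scipy.signal import firwin, lfilter, lfilter_zi

taps = firwin(numtaps=31, cutoff=0.2)          # low-pass FIR design (hypothetical cutoff)
state = lfilter_zi(taps, 1.0) * 0.0            # filter state carried across blocks

def process_block(block, state):
    # Filter one block of samples while preserving state, as a real-time loop would.
    out, state = lfilter(taps, 1.0, block, zi=state)
    return out, state

signal = np.random.randn(1024)                 # stand-in input stream
out1, state = process_block(signal[:512], state)
out2, state = process_block(signal[512:], state)
```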
Python, MATLAB, C++, C
Linux, Windows, Basics of Android
TensorFlow, MatConvNet, NumPy, SciPy, Matplotlib, scikit-learn, Pandas, PostgreSQL, PySpark (MapReduce), Anaconda (Jupyter Notebook), MongoDB, Android Studio, RTRT, Polyspace