A Data Scientist Blog, by Philippe Dagher

Classifying Bees with Google TensorFlow (Jan 19th, 2016, 2:52 pm)
There is often confusion about the differences between bumblebees and honeybees, and even some of our top media channels will publish pictures of bumblebees when they are discussing or writing about honeybees. These bees have different behaviors and appearances, but given the variety of backgrounds, positions and image resolutions, it can be a challenge for machines to tell them apart. Wild bees are important pollinators, and the spread of colony collapse disorder has only made their role more critical. In this post, we will build a basic TensorFlow algorithm to determine the genus, Apis (honey bee) or Bombus (bumble bee), based on photographs of the insects. The purpose is to test Google TensorFlow, not to reach the 99.56 accuracy obtained during the Metis challenge (a minimal classifier sketch appears after these post summaries). Read on →

Predicting Heart Disease with Hadoop, Spark and Python (Jan 2nd, 2016, 7:23 am)
This post is the last one of the series "How to install step by step a data lake". Our purpose here is to use the tools and logic that we set up in the first and second posts to process data received from several hospitals and analyze it, in order to predict heart disease. Read on →

How to Install Step by Step a Local Data Lake (2/3) (Dec 16th, 2015, 9:22 am)
This post is the second of a series on how to install step by step a local data lake. Before reading, I suggest following this tutorial, which will allow you to get the tools up and running on a hosted or virtual machine. You should now have the following architecture ready to receive data flows, crunch them and expose them to machine learning or business intelligence tools. The purpose of this post is to process the data received according to its type of flow, schema and format, and to save it in Parquet tables for later use by machine learning tools and in Hive tables for JDBC access by BI tools (a minimal Spark sketch of this step appears after these summaries). Read on →

Step by Step Installation of a Local Data Lake (1/3) (Dec 12th, 2015, 8:35 pm)
This post will guide you through a step by step installation and configuration of a local data lake on Ubuntu, with packages such as Hadoop, Hive, Spark, Thrift Server, Maven, Scala, Python, Jupyter and Zeppelin. It is the first of a series of 3 posts that will allow you to become familiar with state-of-the-art tools to practice data science on big data. In the first post we will set up the environment on Ubuntu using a cloud host or a virtual machine. In the second post we will crunch incoming data and expose it to data mining and machine learning tools. In the third post, we will apply machine learning and data science techniques to conventional business cases. Read on →

Metis Discourse (May 19th, 2015, 2:40 pm)
You will find below snapshots of what I learned and practiced during 12 weeks of the Metis data science bootcamp. I will soon be organizing afterwork sessions in Paris for people who are curious about data science and eager to learn without being afraid of getting their hands dirty. Read on →

Arabic Reputation (Apr 26th, 2015, 9:15 pm)
The purpose of this post is to test topic modeling techniques with Python on Arabic texts, in order to gauge the efficiency of the approach used in my previous work on a different language. Read on →

Behaviour Modeling (Apr 6th, 2015, 9:07 pm)
The objective of my final project at Metis, from weeks 9 to 12, is to categorize drivers based on their behaviour on the roads: their driving style and the type of roads that they follow. The challenge associated with this objective is to uniquely identify a driver (and hence his own "driving behaviour") based on the GPS log of a mobile phone located inside the car. My idea to solve this is to experiment with topic modeling techniques, especially latent semantic indexing/analysis (LSI/LSA) and latent Dirichlet allocation (LDA), and to explain the observed trips by the unobserved behaviour of the drivers. Read on →
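As a rough illustration of the LDA idea in the behaviour modeling post above, here is a minimal gensim sketch. It assumes each trip has already been encoded as a bag of discrete tokens; the tiny corpus, token names and parameters are invented for illustration and are not the project's data.

```python
# Minimal LDA sketch with gensim: each "document" is a trip encoded as a bag of
# discrete tokens (e.g. binned speed/turn features); topics then play the role
# of latent driving behaviours. The tiny corpus below is purely illustrative.
from gensim import corpora, models

trips = [
    ["slow", "sharp_turn", "city", "stop", "slow"],
    ["fast", "straight", "highway", "fast", "cruise"],
    ["slow", "stop", "city", "sharp_turn"],
    ["fast", "cruise", "highway", "straight"],
]

dictionary = corpora.Dictionary(trips)
corpus = [dictionary.doc2bow(trip) for trip in trips]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                      passes=10, random_state=42)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)

# Each trip can then be described by its topic mixture, i.e. an estimate of
# the unobserved "behaviour" profile of the driver.
print(lda.get_document_topics(corpus[0]))
```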
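For the bee-classification post above, here is a minimal sketch of a two-class image classifier. It uses today's tf.keras API rather than the low-level graph API available in early 2016, and the directory layout, image size and network architecture are assumptions, not the post's actual model.

```python
# A minimal sketch (not the post's actual model): a small convolutional network
# that maps bee photos to two classes, Apis (honey bee) vs Bombus (bumble bee).
# Paths and image size are placeholders.
import tensorflow as tf

IMG_SIZE = (64, 64)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(bombus) vs apis
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Assumes a hypothetical directory layout bees/{apis,bombus}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "bees", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```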
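For the data lake post (2/3) above, here is a minimal PySpark sketch of the Parquet and Hive step, assuming a hypothetical CSV flow; the paths, column names and the `datalake` Hive database are placeholders rather than the post's actual pipeline.

```python
# Minimal PySpark sketch of the Parquet/Hive step: read an incoming CSV flow,
# normalise the schema, write it to Parquet for ML jobs and register it as a
# Hive table for JDBC/BI access. All names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("datalake-ingest")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical incoming flow landed in the data lake's ingestion directory.
raw = spark.read.option("header", True).csv("/datalake/incoming/hospital_flow.csv")

clean = (raw
         .withColumn("event_date", F.to_date("event_date"))
         .dropna(subset=["patient_id"]))

# Columnar copy for machine learning tools.
clean.write.mode("overwrite").parquet("/datalake/parquet/hospital_flow")

# Hive table for BI tools connecting through the Thrift/JDBC server
# (assumes a "datalake" Hive database already exists).
clean.write.mode("overwrite").saveAsTable("datalake.hospital_flow")
```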
Unsupervised Learning (Mar 8th, 2015, 4:12 am)
Renault, a leading French car manufacturer, is currently launching the "Espace V". Let's discover with data science how the market is welcoming this new car model. For that purpose, I scraped more than 3000 forum messages from 3 major automotive websites in France (forum-auto, passion-espace and planetrenault) to analyze the sentiment of Renault enthusiasts about the Espace V. This post is related to the work done in weeks 7 and 8 at Metis New Economy Skills Training in New York, using the Natural Language Toolkit (NLTK), unsupervised learning techniques such as clustering algorithms (k-means, hierarchical clustering, DBSCAN, mean shift, etc.), dimensionality reduction, topic modeling with latent Dirichlet allocation, as well as nearest neighbor and approximate nearest neighbor algorithms (k-d trees, locality-sensitive hashing, etc.). I was able to identify the main topics that internet users are discussing (a minimal clustering sketch appears after these summaries). Read on →

Supervised Learning (Feb 22nd, 2015, 12:57 am)
Can we predict heart disease? Yes! Knowledge of the risk factors associated with heart disease helps health care professionals identify patients at high risk of having heart disease. The main objective of this project, which I led in weeks 4, 5 and 6 at Metis New Economy Skills Training in New York, is to develop an intelligent heart disease prediction system that uses the patient's diagnosis data to perform the prediction. The dataset I looked at is publicly available from the University of California; in particular, 4 databases coming from the Hungarian Institute of Cardiology in Budapest, the university hospitals of Zurich and Basel in Switzerland, as well as the V.A. Medical Center in Long Beach and the Cleveland Clinic Foundation in the USA. Risk factors associated with heart disease proved to be age, blood pressure, smoking habit, total cholesterol, diabetes, family history of heart disease, obesity, lack of physical activity, etc. The attributes from each patient that I considered are described in this file and will be detailed in the code section below. To build my prediction model, I used supervised machine learning classifiers such as logistic regression, k nearest neighbors, decision trees, random forests, various naive Bayes implementations, as well as support vector machines and generalized linear models (using Poisson and ordinal regressions). I also tried deep learning techniques such as neural networks and the restricted Boltzmann machine. In addition, I applied feature selection and feature extraction techniques in order to improve my model. The metrics that I wanted to optimize are precision and recall. The precision is the ratio of people that actually develop heart disease out of those the model says will; a precision of 50% means that only half of those the model says will develop heart disease actually develop it. We need a high precision in order to avoid predicting heart disease for healthy people! Read on →
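As a small illustration of the precision and recall discussion above, here is a scikit-learn sketch on synthetic data; the classifier and features merely stand in for the UCI heart disease attributes and are not the project's actual model.

```python
# A minimal precision/recall sketch with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

# Precision: of the patients the model flags as diseased, how many really are.
# Recall: of the truly diseased patients, how many the model manages to flag.
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```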
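For the "Unsupervised Learning" post above, here is a minimal scikit-learn sketch of the clustering step, using TF-IDF and k-means on invented messages rather than the scraped forum data.

```python
# A minimal clustering sketch: vectorise forum messages with TF-IDF and group
# them with k-means. The three example messages are made up for illustration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

messages = [
    "The new Espace V looks great but the price is too high",
    "I love the design of the Espace V, very comfortable seats",
    "Delivery delays are frustrating, still waiting for my car",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(messages)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster id per message
```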
Linear Regression (Feb 2nd, 2015, 4:26 am)
This post is about linear regression, or "Project Luther", which I led in weeks 2 and 3 at Metis New Economy Skills Training in New York. My client, Promocinéma, an advertising agency located in France, is in charge of promoting movies during their release on screens in France. Promocinéma works for major distributors that exclusively represent 14 US studios. They are paid a percentage of the revenues generated by these movies in the French market; hence, they need to minimize their advertising budget in order to maximize profit. My proposal is to predict the revenues generated by a movie, based on: 1) the rating issued by the French press when they preview the movie before its release, 2) the revenues generated by the movie in the US market during the first week of its release, and 3) the delta in release dates between the French and US markets, so that Promocinéma can adjust their budget accordingly… The conclusion of this analysis is twofold: 1) each star given by a journalist is worth 21948 ent
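Here is a minimal sketch of the regression set-up described above, with made-up numbers standing in for the three predictors; it is not Project Luther's data or model.

```python
# A minimal sketch: French revenue as a function of the press rating, the US
# first-week revenue and the release-date delta. All figures are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: press rating (stars), US first-week revenue (M$), release delta (days)
X = np.array([
    [4.0, 30.0, 14],
    [2.5, 12.0, 30],
    [3.5, 25.0, 7],
    [1.5, 5.0, 60],
    [4.5, 40.0, 0],
])
y = np.array([6.0, 2.0, 5.0, 0.8, 8.5])  # French revenue (M€), illustrative

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)   # marginal value of one star, one US $M, one day
print("intercept:   ", model.intercept_)
```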