# Baseline for MNIST Educational Challenge on AIcrowd
#### Author : Ayush Shivani
%% Cell type:markdown id: tags:
## To open this notebook on Google Colab, click below!
%% Cell type:markdown id: tags:
[](https://colab.research.google.com/github/ayushshivani/aicrowd_educational_baselines/blob/master/MNIST_baseline.ipynb)
%% Cell type:markdown id: tags:
## Download Necessary Packages
%% Cell type:code id: tags:
``` python
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
```
%% Cell type:markdown id: tags:
## Download data
The first step is to download our train and test data. We will train a classifier on the train data, make predictions on the test data, and submit those predictions.
We use the pandas library to load our data. Pandas loads it into dataframes, which makes the data easy to analyze. Learn more about it [here](https://www.tutorialspoint.com/python_data_science/python_pandas.htm).
%% Cell type:code id: tags:
``` python
train_data_path = "train.csv"  # path where the data is stored
```
%% Cell type:code id: tags:
``` python
import pandas as pd

train_data = pd.read_csv(train_data_path, header=None)  # load the data into a dataframe using pandas
```
%% Cell type:markdown id: tags:
## Visualise the Dataset
%% Cell type:code id: tags:
``` python
train_data.head()
```
%% Cell type:markdown id: tags:
You can see the columns go from 0 to 784: columns 1 to 784 hold pixel values, each between 0 and 255, and the first column (column 0) is the digit the image represents, between 0 and 9.
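%% Cell type:markdown id: tags:
To get a feel for the layout described above, each row can be reshaped into a 28x28 image. The snippet below is a minimal sketch using a synthetic row as a stand-in for `train_data.iloc[0].to_numpy()`:
%% Cell type:code id: tags:
``` python
import numpy as np

# Hypothetical single row standing in for train_data.iloc[0].to_numpy():
# column 0 is the label, columns 1-784 are the pixel values.
row = np.random.randint(0, 256, size=785)
label, pixels = row[0], row[1:]
image = pixels.reshape(28, 28)  # 784 pixel values form a 28x28 image
print(image.shape)  # → (28, 28)

# To actually display the digit (e.g. in Colab):
# import matplotlib.pyplot as plt
# plt.imshow(image, cmap="gray")
# plt.show()
```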
%% Cell type:markdown id: tags:
## Split Data into Train and Validation
Now we want to see how well our classifier is performing, but we don't have the test labels to check against. So we split our dataset into train and validation sets. The idea is to test the classifier on the validation set to get an idea of how well it works, and to ensure that we don't [overfit](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) on the train dataset. There are many ways to do validation, such as [k-fold](https://machinelearningmastery.com/k-fold-cross-validation/) and [leave one out](https://en.wikipedia.org/wiki/Cross-validation_(statistics)).
Here we have set the size of the validation data to 20% of the total data. You can change it and see what effect it has on the accuracies. To learn more about the train_test_split function [click here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
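%% Cell type:markdown id: tags:
The split described above might look like the following. This is a minimal sketch: the synthetic dataframe here is a stand-in for the `train_data` loaded earlier, and the variable names (`X_train`, `X_val`, etc.) are illustrative.
%% Cell type:code id: tags:
``` python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for train_data (normally loaded from train.csv):
# column 0 is the digit label, columns 1-784 are pixel values.
train_data = pd.DataFrame(np.hstack([
    np.random.randint(0, 10, size=(100, 1)),
    np.random.randint(0, 256, size=(100, 784)),
]))

X = train_data.iloc[:, 1:]  # pixel columns
y = train_data.iloc[:, 0]   # label column
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_val.shape)  # → (80, 784) (20, 784)
```
Setting `random_state` makes the split reproducible, which helps when comparing classifiers.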
%% Cell type:markdown id: tags:
Now that we have our data split into train and validation sets, we need to separate the labels from the data.
With the data prepared, we train a classifier. The classifier learns the function mapping inputs to their corresponding outputs. There are a ton of classifiers to choose from, such as [Logistic Regression](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc), [SVM](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Random Forests](https://en.wikipedia.org/wiki/Random_forest), and [Decision Trees](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052).
Tip: A good model doesn't depend solely on the classifier but on the features (columns) you choose. So make sure to play with your data and keep only what's important.
We have used [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression) as the classifier here and set a few of its parameters. One can set more parameters to increase the performance; to see the full list, visit [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html).
SVM and Decision Tree examples are also given. Check out SVM's parameters [here](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) and Decision Tree's [here](https://scikit-learn.org/stable/modules/tree.html).
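%% Cell type:markdown id: tags:
A minimal sketch of instantiating the classifier follows. The exact parameter values are illustrative, not the original settings, and the SVM and Decision Tree lines show the alternatives mentioned above:
%% Cell type:code id: tags:
``` python
from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression(solver="lbfgs", max_iter=100)

# Alternatives discussed above (uncomment one to use it instead):
# from sklearn.svm import SVC
# classifier = SVC(gamma="scale")
# from sklearn.tree import DecisionTreeClassifier
# classifier = DecisionTreeClassifier()
```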
%% Cell type:markdown id: tags:
We can also use other classifiers. To read more about sklearn classifiers, visit [here](https://scikit-learn.org/stable/supervised_learning.html). Try other classifiers and see how the performance of your model changes.
%% Cell type:markdown id: tags:
## Train the classifier
%% Cell type:code id: tags:
``` python
classifier.fit(X_train,y_train)
```
%% Cell type:markdown id: tags:
Got a warning? Don't worry, it's just because the number of iterations is very low (set in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes, but do remember that more iterations also means a longer running time. (Hint: max_iter=500)
%% Cell type:markdown id: tags:
## Predict on Validation
Now we run our trained classifier on the validation set and evaluate the model.
%% Cell type:code id: tags:
``` python
y_pred = classifier.predict(X_val)
```
%% Cell type:markdown id: tags:
## Evaluate the Performance
We use the same metrics that will be used on the test set.
[F1 score](https://en.wikipedia.org/wiki/F1_score) and [Log Loss](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html) are the metrics for this challenge.
Note: Do take a look at the submission format. The submission file should contain a header; here it is "label".
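%% Cell type:markdown id: tags:
The evaluation and submission steps might look like this. The small arrays below are hypothetical stand-ins: in the notebook, `y_val` comes from the split and `y_pred` from `classifier.predict(X_val)`.
%% Cell type:code id: tags:
``` python
import numpy as np
import pandas as pd
from sklearn.metrics import f1_score

# Hypothetical stand-ins for the real validation labels and predictions.
y_val = np.array([0, 1, 2, 1, 0])
y_pred = np.array([0, 1, 2, 2, 0])

print("F1 score (macro):", f1_score(y_val, y_pred, average="macro"))
# Log loss needs class probabilities rather than hard labels, e.g.:
# from sklearn.metrics import log_loss
# log_loss(y_val, classifier.predict_proba(X_val))

# Write the predictions with the required "label" header.
pd.DataFrame(y_pred, columns=["label"]).to_csv("/tmp/submission.csv", index=False)
```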
%% Cell type:markdown id: tags:
## To download the generated csv in Colab, run the command below
%% Cell type:code id: tags:
``` python
from google.colab import files
files.download('/tmp/submission.csv')
```
%% Cell type:markdown id: tags:
### Go to the [challenge page](https://www.aicrowd.com/challenges/mnist-recognise-handwritten-digits/), participate in the challenge, and submit the generated submission.csv.