Commit c56dc221 authored by ashivani

minor changes in baseline

parent 46a9fb26
%% Cell type:markdown id: tags:
# Baseline for DIBRD Challenge on AIcrowd
#### Author : Shubham Sharma
%% Cell type:markdown id: tags:
## To open this notebook on Google Colab, click below!
%% Cell type:markdown id: tags:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ayushshivani/aicrowd_educational_baselines/blob/master/PKHND_baseline.ipynb)
%% Cell type:markdown id: tags:
## Download Necessary Packages
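If the required libraries are not already available in your environment, they can be installed with pip. The cell below is a minimal sketch assuming pandas and scikit-learn are the packages needed; adjust it to your setup.
%% Cell type:code id: tags:
``` python
# Install the libraries used in this notebook (skip if already installed)
!pip install pandas
!pip install scikit-learn
```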
%% Cell type:markdown id: tags:
## Download dataset
%% Cell type:markdown id: tags:
The first step is to download our train and test data. We will train a classifier on the train data, make predictions on the test data, and then submit our predictions to AIcrowd.
%% Cell type:code id: tags:
``` python
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/dibrd/v0.1/train.csv -O data/train.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/dibrd/v0.1/test.csv -O data/test.csv
```
%% Cell type:markdown id: tags:
## Import packages
%% Cell type:code id: tags:
``` python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score
```
%% Cell type:markdown id: tags:
## Load Data
We use the pandas library to load our data. Pandas loads it into dataframes, which makes it easy to analyze. Learn more about it [here](https://www.tutorialspoint.com/python_data_science/python_pandas.htm).
%% Cell type:code id: tags:
``` python
train_data_path = "train.csv" #path where data is stored
train_data_path = "data/train.csv" #path where data is stored
```
%% Cell type:code id: tags:
``` python
train_data = pd.read_csv(train_data_path, header=None) # load the training data into a dataframe
train_data.head()
```
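%% Cell type:markdown id: tags:
Beyond `head()`, a couple of quick pandas calls can give a better feel for the data. This is an optional sketch, not part of the baseline itself:
%% Cell type:code id: tags:
``` python
# Number of rows and columns in the training data
print(train_data.shape)
# Summary statistics for each column
train_data.describe()
```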
%% Cell type:markdown id: tags:
You can see that the columns go from 0 to 19: nineteen of them hold features extracted from the image set, and the remaining column is the label, i.e. 1 if signs of Diabetic Retinopathy are present, else 0.
%% Cell type:markdown id: tags:
## Split Data into Train and Validation
Now we want to see how well our classifier is performing, but we don't have the test labels to check against. What do we do? We split our dataset into a train set and a validation set. The idea is to evaluate the classifier on the validation set to get an idea of how well it works, and this also helps ensure that we don't [overfit](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) on the train dataset. There are many ways to do validation, such as [k-fold](https://machinelearningmastery.com/k-fold-cross-validation/) and [leave one out](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) cross-validation.
%% Cell type:code id: tags:
``` python
X_train, X_val= train_test_split(train_data, test_size=0.2, random_state=42)
```
%% Cell type:markdown id: tags:
Here we have set aside 20% of the data as the validation set. You can change this fraction and see what effect it has on the scores. To learn more about the train_test_split function [click here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
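As mentioned above, k-fold cross-validation is an alternative to a single hold-out split. Below is a minimal sketch using scikit-learn's `cross_val_score`; the column indexing mirrors the cells that follow, and the fresh `LogisticRegression` here is used only for illustration:
%% Cell type:code id: tags:
``` python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# Separate features and labels, following the same column convention used below
X_all, y_all = train_data.iloc[:, 1:], train_data.iloc[:, 0]

# 5-fold cross-validation: trains and scores the model on 5 different splits
cv_scores = cross_val_score(LogisticRegression(solver='lbfgs', max_iter=500), X_all, y_all, cv=5)
print("Cross-validation accuracy per fold:", cv_scores)
print("Mean accuracy:", cv_scores.mean())
```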
%% Cell type:code id: tags:
``` python
X_train,y_train = X_train.iloc[:,1:],X_train.iloc[:,0]
X_val,y_val = X_val.iloc[:,1:],X_val.iloc[:,0]
```
%% Cell type:markdown id: tags:
## Define the Classifier
Now we come to the juicy part. We have prepared our data, and now we train a classifier. The classifier will learn the function mapping the inputs to the corresponding outputs. There are a ton of classifiers to choose from, such as [Logistic Regression](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc), [SVM](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Random Forests](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Decision Trees](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052), etc.
Tip: A good model doesn't depend solely on the classifier but also on the features (columns) you choose. So make sure to play with your data and keep only what's important.
%% Cell type:code id: tags:
``` python
classifier = LogisticRegression(solver = 'lbfgs',multi_class='auto',max_iter=10)
classifier.fit(X_train, y_train)
```
%% Cell type:markdown id: tags:
Got a warning! Don't worry, it is just because the number of iterations is very small (set by `max_iter` in the cell above). Increase the number of iterations, see if the warning vanishes, and check how the performance changes. Do remember that increasing the iterations also increases the running time. (Hint: max_iter=500)
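For example, acting on that hint might look like the sketch below, which simply re-creates the classifier with `max_iter=500` and refits it on the same training split:
%% Cell type:code id: tags:
``` python
# Re-create the classifier with more iterations and fit it again
classifier = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=500)
classifier.fit(X_train, y_train)
```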
%% Cell type:markdown id: tags:
## Predict on Validation
Now we use the trained classifier to make predictions on the validation set and evaluate our model.
%% Cell type:code id: tags:
``` python
y_pred = classifier.predict(X_val)
```
%% Cell type:markdown id: tags:
## Evaluate the Performance
We use the same metrics that will be used to evaluate the test set.
[F1 score](https://en.wikipedia.org/wiki/F1_score) is the metric for this challenge.
%% Cell type:code id: tags:
``` python
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
```
%% Cell type:code id: tags:
``` python
print("Accuracy of the model is:", accuracy)
print("Recall of the model is:", recall)
print("Precision of the model is:", precision)
print("F1 score of the model is:", f1)
```
%% Cell type:markdown id: tags:
# Prediction on Evaluation Set
%% Cell type:markdown id: tags:
## Load Test Set
Load the test data now.
%% Cell type:code id: tags:
``` python
final_test_path = "test.csv"
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path,header=None)
```
%% Cell type:markdown id: tags:
## Predict on Test Set
Time for the moment of truth! Predict on the test set and make the submission.
%% Cell type:code id: tags:
``` python
submission = classifier.predict(final_test)
```
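%% Cell type:markdown id: tags:
To turn the predictions into a submission file, one option is to write them to a CSV with pandas. The file name `submission.csv` and the single-column layout below are assumptions for illustration; check the challenge page for the exact format AIcrowd expects.
%% Cell type:code id: tags:
``` python
# Write the predictions to a CSV file (filename and format are assumptions)
pd.DataFrame(submission).to_csv('submission.csv', header=False, index=False)
```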