Commit f62c1277 authored by ashivani's avatar ashivani

minor changes

parent 9dd361cb
The following files can be found in the `resources` section:
- Sample submission format available at `sample_submission.csv`.
**Make your first submission [here](https://www.aicrowd.com/challenges/dibrd/submissions/new) 🚀 !!**
# 🖊 Evaluation Criteria
During evaluation, the [F1 score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) is used as the metric.
# 🔗 Links
* 💪 Challenge Page : [https://www.aicrowd.com/challenges/aicrowd-blitz-may-2020/problems/dibrd/](https://www.aicrowd.com/challenges/aicrowd-blitz-may-2020/problems/dibrd/)
* 🗣️ Discussion Forum : [https://www.aicrowd.com/challenges/aicrowd-blitz-may-2020/problems/dibrd/discussion](https://www.aicrowd.com/challenges/aicrowd-blitz-may-2020/problems/dibrd/discussion)
* 🏆 Leaderboard : [https://www.aicrowd.com/challenges/aicrowd-blitz-may-2020/problems/dibrd/leaderboards](https://www.aicrowd.com/challenges/aicrowd-blitz-may-2020/problems/dibrd/leaderboards)
# 📱 Contact
- [Shubham Sharma](mailto:shubham@ext.aicrowd.com)
%% Cell type:markdown id: tags:
## To open this notebook on Google Colab, click the badge below!
![AIcrowd-Logo](https://raw.githubusercontent.com/AIcrowd/AIcrowd/master/app/assets/images/misc/aicrowd-horizontal.png)
%% Cell type:markdown id: tags:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/aicrowd-bot/761175b71c995e5a44a0877daf9e0597)
# Baseline for [DIBRD](https://www.aicrowd.com/challenges/dibrd) Challenge on AIcrowd
#### Author : Shubham Sharma
%% Cell type:markdown id: tags:
## Download Necessary Packages
%% Cell type:code id: tags:
``` python
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
```
%% Cell type:markdown id: tags:
## Download dataset
%% Cell type:code id: tags:
``` python
!rm -rf data
!mkdir data
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/dibrd/v0.1/train.csv
!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/dibrd/v0.1/test.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv
```
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
## Import packages
%% Cell type:code id: tags:
``` python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score
```
%% Cell type:markdown id: tags:
## Load Data
- We use the pandas 🐼 library to load our data.
- Pandas loads the data into dataframes, which makes it easy to analyse.
- Learn more about it [here](https://www.tutorialspoint.com/python_data_science/python_pandas.htm) 🤓
%% Cell type:code id: tags:
``` python
train_data_path = "data/train.csv"  # path where the train data is stored
train_data = pd.read_csv(train_data_path)
train_data.head()
```
%% Cell type:markdown id: tags:
You can see the columns go from 0 to 19: columns 0 to 18 are features extracted from the image set, and the last column is the label, i.e. 1 if signs of Diabetic Retinopathy are present, else 0.
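The layout described above can be sketched on a tiny synthetic frame (a hypothetical stand-in for the real CSV, named `demo_data` to avoid clashing with `train_data`): 20 columns, the last one being the 0/1 label.

%% Cell type:code id: tags:
``` python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
demo_data = pd.DataFrame(rng.rand(5, 20))   # same 20-column layout as the real data
demo_data[19] = rng.randint(0, 2, 5)        # last column: label, 1 or 0

print(demo_data.shape)       # (rows, 20)
print(demo_data.iloc[:, -1])  # the label column
```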
%% Cell type:markdown id: tags:
## Split Data into Train and Validation 🔪
- The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels we need for verification.
- The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unforeseen data: we hold back a chunk of data while training and then use it purely for testing. It is also the standard way to fine-tune the hyperparameters of a model.
- There are multiple ways to split a dataset into validation and training sets. Two popular ones are [k-fold](https://machinelearningmastery.com/k-fold-cross-validation/) and [leave one out](https://en.wikipedia.org/wiki/Cross-validation_%28statistics%29). 🧐
- Validation sets also help to keep your model from [overfitting](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) on the train dataset.
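As a hedged illustration of the k-fold idea mentioned above (on toy data, not the challenge data — all names here are hypothetical):

%% Cell type:code id: tags:
``` python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# toy stand-ins for the real features and labels
X = np.random.RandomState(0).rand(20, 5)
y = np.random.RandomState(1).randint(0, 2, 20)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in kf.split(X):
    model = LogisticRegression(solver='lbfgs', max_iter=500)
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[val_idx], y[val_idx]))  # fold accuracy
print("mean CV accuracy:", sum(scores) / len(scores))
```

Every sample ends up in a validation fold exactly once, which gives a more stable estimate than a single split.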
%% Cell type:code id: tags:
``` python
X_train, X_val= train_test_split(train_data, test_size=0.2, random_state=42)
```
%% Cell type:markdown id: tags:
- We have split the data with 20% as validation and 80% as training.
- To learn more about the train_test_split function, [click here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). 🧐
- This is the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
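One small refinement worth knowing: `train_test_split` accepts a `stratify` argument that preserves the class balance in both halves. A minimal sketch on hypothetical imbalanced toy data:

%% Cell type:code id: tags:
``` python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
df = pd.DataFrame(rng.rand(100, 3))
df['label'] = [0] * 80 + [1] * 20   # imbalanced toy labels: 20% positive

tr, va = train_test_split(df, test_size=0.2, random_state=42,
                          stratify=df['label'])
print(va['label'].mean())  # positive rate of the validation split
```

With stratification, the validation split keeps roughly the same 20% positive rate as the full data.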
%% Cell type:markdown id: tags:
- Now that we have split our data into train and validation sets, we need to separate the corresponding labels from the features.
- With this step we are all set to move on with a prepared dataset.
%% Cell type:code id: tags:
``` python
X_train, y_train = X_train.iloc[:, :-1], X_train.iloc[:, -1]
X_val, y_val = X_val.iloc[:, :-1], X_val.iloc[:, -1]
```
%% Cell type:markdown id: tags:
# TRAINING PHASE 🏋️
%% Cell type:markdown id: tags:
## Define the Model
- We have prepared our data and are now ready to train our model.
- There are a ton of classifiers to choose from, some being [Logistic Regression](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc), [SVM](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Random Forests](https://en.wikipedia.org/wiki/Random_forest), [Decision Trees](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052), etc. 🧐
- Remember that there are no hard-laid rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.
- A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from [here](https://towardsdatascience.com/the-5-feature-selection-algorithms-every-data-scientist-need-to-know-3a6b566efd2). 🧐
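One way to start reasoning about which features matter (a sketch on synthetic data, not the baseline's code) is a tree ensemble's `feature_importances_`:

%% Cell type:code id: tags:
``` python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X[:, 2] > 0.5).astype(int)  # by construction, only feature 2 matters

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]  # most important first
print("most important feature:", ranking[0])
```

On the real data, the same ranking can suggest which columns to keep or drop.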
%% Cell type:code id: tags:
``` python
classifier = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=10)
```
%% Cell type:markdown id: tags:
We have used [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression) as the classifier here and set a few of the parameters. But one can set more parameters and increase the performance. To see the full list of parameters, visit [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html).
%% Cell type:markdown id: tags:
To read more about other sklearn classifiers, visit [here 🧐](https://scikit-learn.org/stable/supervised_learning.html). Try other classifiers, for example [MLP](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html), and compare how the performance changes.
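A minimal sketch of swapping in an `MLPClassifier`, as suggested above (toy random data stands in for the real 19 features; the hidden-layer size is an arbitrary choice):

%% Cell type:code id: tags:
``` python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 19)          # toy stand-in for the 19 image features
y = rng.randint(0, 2, 100)     # toy binary labels

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
mlp.fit(X, y)
preds = mlp.predict(X)
print(preds.shape)
```

The fit/predict interface is identical to LogisticRegression, so the rest of the notebook would stay unchanged.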
%% Cell type:markdown id: tags:
## Train the classifier
%% Cell type:code id: tags:
``` python
classifier.fit(X_train, y_train)
```
%% Cell type:markdown id: tags:
Got a warning! Don't worry, it's just because the number of iterations is very low (defined in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes, and also see how the performance changes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)
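The hinted fix can be sketched as follows (on toy data, using a separate `clf_demo` so the notebook's own `classifier` is untouched):

%% Cell type:code id: tags:
``` python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 19)
y = rng.randint(0, 2, 100)

clf_demo = LogisticRegression(solver='lbfgs', max_iter=500)
clf_demo.fit(X, y)                 # no ConvergenceWarning with the higher cap
print(clf_demo.n_iter_)            # iterations actually used
```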
%% Cell type:markdown id: tags:
# Validation Phase 🤔
Wondering how well your model learned? Let's check it.
%% Cell type:markdown id: tags:
## Predict on Validation
Now we predict with our trained model on the validation set we created and evaluate it on this unforeseen data.
%% Cell type:code id: tags:
``` python
y_pred = classifier.predict(X_val)
```
%% Cell type:markdown id: tags:
## Evaluate the Performance
- We have used basic metrics to quantify the performance of our model.
- This is a crucial step; you should reason about the metrics and take hints from them to improve aspects of your model.
- Do read up on the meaning and use of different metrics. There exist many more metrics and measures; you should learn to use them correctly with respect to the solution, dataset and other factors.
- [F1 score](https://en.wikipedia.org/wiki/F1_score) is the metric for this challenge.
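A small worked example of the F1 metric with different averaging modes, on a hand-made prediction vector (note that for binary labels, `average='micro'` coincides with plain accuracy):

%% Cell type:code id: tags:
``` python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

print(f1_score(y_true, y_pred, average='micro'))  # global counts over all labels
print(f1_score(y_true, y_pred, average='macro'))  # unweighted mean of per-class F1
```

When classes are imbalanced, micro and macro averages can differ a lot, so check which one the leaderboard uses.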
%% Cell type:code id: tags:
``` python
precision = precision_score(y_val, y_pred, average='micro')
f1 = f1_score(y_val, y_pred, average='micro')
print("F1 score of the model is:", f1)
```
%% Cell type:markdown id: tags:
# Testing Phase 😅
We are almost done. We trained and validated on the training data. Now it's time to predict on the test set and make a submission.
%% Cell type:markdown id: tags:
## Load Test Set
Load the test data now.
%% Cell type:code id: tags:
``` python
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)
submission = classifier.predict(final_test)
```
%% Cell type:code id: tags:
``` python
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['label'],index=False)
```
%% Cell type:markdown id: tags:
🚧 Note:
- Do take a look at the submission format.
- The submission file should contain a header. For example, here it is "label".
- Follow all submission guidelines strictly to avoid inconvenience.
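A quick optional sanity check before submitting: read the file back and confirm the header is exactly "label". The sketch below writes a tiny stand-in file (`demo_submission.csv`, a hypothetical name) so it doesn't touch your real submission.

%% Cell type:code id: tags:
``` python
import pandas as pd

# stand-in predictions written the same way as the real submission
pd.DataFrame([1, 0, 1]).to_csv('demo_submission.csv', header=['label'], index=False)

check = pd.read_csv('demo_submission.csv')
print(list(check.columns))  # should be ['label']
print(len(check))           # should equal the number of test rows
```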
%% Cell type:markdown id: tags:
## To download the generated CSV in Colab, run the command below
%% Cell type:code id: tags:
``` python
try:
    from google.colab import files
    files.download('submission.csv')
except ImportError:
    print('The files module is only available in Colab; download submission.csv manually.')
```
%% Cell type:markdown id: tags:
### Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the [challenge page](https://www.aicrowd.com/challenges/dibrd) and make one.
dibrd.jpg (19.7 KB)