{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![AIcrowd-Logo](https://raw.githubusercontent.com/AIcrowd/AIcrowd/master/app/assets/images/misc/aicrowd-horizontal.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Baseline for [CRDIO](https://www.aicrowd.com/challenges/crdio) Challenge on AIcrowd\n",
    "#### Author : Shubham Sharma"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download Necessary Packages 📚"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install numpy\n",
    "!pip install pandas\n",
    "!pip install scikit-learn"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download Data\n",
    "The first step is to download our train and test data. We will train a model on the train data, make predictions on the test data, and then submit those predictions.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!rm -rf data\n",
    "!mkdir data \n",
    "!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/crdio/v0.1/test.csv\n",
    "!wget https://datasets.aicrowd.com/default/aicrowd-practice-challenges/public/crdio/v0.1/train.csv\n",
    "!mv test.csv data/test.csv\n",
    "!mv train.csv data/train.csv\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Import packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.neural_network import MLPClassifier\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load Data\n",
    "- We use the pandas 🐼 library to load our data.   \n",
    "- Pandas loads the data into dataframes and makes it easy to analyse.   \n",
    "- Learn more about it [here](https://www.tutorialspoint.com/python_data_science/python_pandas.htm) 🤓"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_data_path = \"data/train.csv\" #path where data is stored"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize the data 👀"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dataset consists of `24` attributes, of which the first `23` describe the `CTG` features and the last attribute, called `NSP`, classifies each `CTG` according to the fetal state."
   ]
  },
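  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before modelling, it can also help to look at the class balance of the target. The cell below is a minimal, optional sketch; it assumes the label column is named `NSP`, as described above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: class balance of the target column.\n",
    "# Assumption: the label column is named 'NSP', as described above.\n",
    "all_data['NSP'].value_counts()"
   ]
  },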
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Split Data into Train and Validation 🔪\n",
    "- The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels needed to verify our predictions.\n",
    "- The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unforeseen data: we hold back a chunk of data while training and then use it for testing. It is also a standard way to fine-tune hyperparameters.\n",
    "- There are multiple ways to split a dataset into validation and training sets. Two popular approaches are [k-fold](https://machinelearningmastery.com/k-fold-cross-validation/) and [leave one out](https://en.wikipedia.org/wiki/Cross-validation_statistics). 🧐\n",
    "- Validation sets also help keep your model from [overfitting](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/) on the train dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_val= train_test_split(all_data, test_size=0.2, random_state=42) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- We have decided to split the data with 20% as validation and 80% as training.  \n",
    "- To learn more about the train_test_split function, [click here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). 🧐  \n",
    "- This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better; a minimal k-fold sketch follows below."
   ]
  },
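  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As mentioned above, k-fold cross-validation is a common alternative to a single hold-out split. Below is a minimal, optional sketch using scikit-learn's `cross_val_score`; the classifier and the number of folds are only example choices, not a recommendation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: a minimal k-fold cross-validation sketch (example settings only).\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.svm import SVC\n",
    "\n",
    "X_all, y_all = all_data.iloc[:, :-1], all_data.iloc[:, -1]\n",
    "cv_scores = cross_val_score(SVC(gamma='auto'), X_all, y_all, cv=5, scoring='f1_macro')\n",
    "print(\"5-fold macro F1 scores:\", cv_scores)\n",
    "print(\"Mean macro F1:\", cv_scores.mean())"
   ]
  },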
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Now that we have our data split into train and validation sets, we need to separate the corresponding labels from the features.   \n",
    "- With this step we are all set to move on with a prepared dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]\n",
    "X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TRAINING PHASE 🏋️"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define the Model\n",
    "\n",
    "- We have fixed our data and now we are ready to train our model.   \n",
    "\n",
    "- There are a ton of classifiers to choose from some being [Logistic Regression](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc), [SVM](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Random Forests](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47), [Decision Trees](https://towardsdatascience.com/decision-trees-in-machine-learning-641b9c4e8052), etc.🧐         \n",
    "\n",
    "- Remember that there are no hard-laid rules here. you can mix and match classifiers, it is advisable to read up on the numerous techniques and choose the best fit for your solution , experimentation is the key.     \n",
    "   \n",
    "- A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand.  you can gain important insight from [here](https://towardsdatascience.com/the-5-feature-selection-algorithms-every-data-scientist-need-to-know-3a6b566efd2).🧐         "
   ]
  },
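  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a pointer for the feature-analysis remark above, here is a minimal, optional sketch of univariate feature scoring with `SelectKBest`; the scoring function and `k` are only example choices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: univariate feature scoring as a starting point for feature selection.\n",
    "# The scoring function (f_classif) and k=10 are example choices, not a recommendation.\n",
    "from sklearn.feature_selection import SelectKBest, f_classif\n",
    "\n",
    "selector = SelectKBest(score_func=f_classif, k=10)\n",
    "selector.fit(X_train, y_train)\n",
    "feature_scores = pd.Series(selector.scores_, index=X_train.columns).sort_values(ascending=False)\n",
    "print(feature_scores.head(10))"
   ]
  },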
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "classifier = SVC(gamma='auto')\n",
    "\n",
    "# from sklearn.linear_model import LogisticRegression\n",
    "# classifier = LogisticRegression()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- To start you off, we have used a basic [Support Vector Machines](https://scikit-learn.org/stable/modules/svm.html#classification) classifier here.    \n",
    "- You can tune its parameters to increase the performance; see the list of parameters [here](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html). A minimal grid-search sketch follows below.   \n",
    "- Do keep in mind that there exist sophisticated techniques for everything; the key, as noted earlier, is to look them up and experiment to fit your implementation."
   ]
  },
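  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to tune the SVC parameters mentioned above, a small grid search is one way to start. The grid below is only an example, not an exhaustive or recommended set of values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: a minimal hyperparameter search for the SVC (example grid only).\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "param_grid = {'C': [0.1, 1, 10], 'gamma': ['scale', 'auto']}\n",
    "grid = GridSearchCV(SVC(), param_grid, cv=3, scoring='f1_macro')\n",
    "grid.fit(X_train, y_train)\n",
    "print(\"Best parameters:\", grid.best_params_)\n",
    "print(\"Best cross-validated macro F1:\", grid.best_score_)"
   ]
  },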
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "To read more about other sklearn classifiers, visit [here 🧐](https://scikit-learn.org/stable/supervised_learning.html). Try other classifiers, such as [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) or [MLP](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html), and compare how the performance changes; a small comparison sketch follows below."
   ]
  },
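  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To compare a few of the classifiers mentioned above on the validation split, a simple loop like the optional sketch below can be used; the chosen models and their settings are only examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: compare a few example classifiers on the validation split.\n",
    "for name, model in [(\"LogisticRegression\", LogisticRegression(max_iter=500)),\n",
    "                    (\"SVC\", SVC(gamma='auto')),\n",
    "                    (\"MLPClassifier\", MLPClassifier(max_iter=500))]:\n",
    "    model.fit(X_train, y_train)\n",
    "    score = f1_score(y_val, model.predict(X_val), average='macro')\n",
    "    print(name, \"validation macro F1:\", score)"
   ]
  },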
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train the classifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "classifier.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Got a warning! Dont worry, its just beacuse the number of iteration is very less(defined in the classifier in the above cell).Increase the number of iterations and see if the warning vanishes and also see how the performance changes.Do remember increasing iterations also increases the running time.( Hint: max_iter=500)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Validation Phase 🤔\n",
    "Wonder how well your model learned! Lets check it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Predict on Validation\n",
    "\n",
    "Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred = classifier.predict(X_val)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluate the Performance\n",
    "\n",
    "- We have used basic metrics to quantify the performance of our model.  \n",
    "- This is a crucial step: reason about the metrics and take hints from them to improve aspects of your model.\n",
    "- Do read up on the meaning and use of different metrics. There exist more metrics and measures, and you should learn to use them correctly with respect to the solution, dataset and other factors. A per-class breakdown sketch follows the metric cells below.\n",
    "- The [F1 score](https://en.wikipedia.org/wiki/F1_score) is the metric for this challenge."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "precision = precision_score(y_val,y_pred,average='micro')\n",
    "recall = recall_score(y_val,y_pred,average='micro')\n",
    "accuracy = accuracy_score(y_val,y_pred)\n",
    "f1 = f1_score(y_val,y_pred,average='macro')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "print(\"Accuracy of the model is :\" ,accuracy)\n",
    "print(\"Recall of the model is :\" ,recall)\n",
    "print(\"Precision of the model is :\" ,precision)\n",
    "print(\"F1 score of the model is :\" ,f1)"
   ]
  },
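  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a per-class view beyond the aggregate numbers above, scikit-learn's `classification_report` and `confusion_matrix` can be useful. The cell below is a minimal, optional sketch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: per-class metrics and the confusion matrix for the validation split.\n",
    "from sklearn.metrics import classification_report, confusion_matrix\n",
    "\n",
    "print(classification_report(y_val, y_pred))\n",
    "print(confusion_matrix(y_val, y_pred))"
   ]
  },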
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Testing Phase 😅\n",
    "\n",
    "We are almost done. We trained and validated on the training data. Now it's time to predict on the test set and make a submission."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load Test Set\n",
    "\n",
    "Load the test data on which final submission is to be made."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "final_test_path = \"data/test.csv\"\n",
    "final_test = pd.read_csv(final_test_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Predict Test Set\n",
    "Time for the moment of truth! Predict on test set and time to make the submission."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "submission = classifier.predict(final_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Save the prediction to csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#change the header according to the submission guidelines"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "submission = pd.DataFrame(submission)\n",
    "submission.to_csv('data/submission.csv',header=['NSP'],index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "🚧 Note :    \n",
    "- Do take a look at the submission format.   \n",
    "- The submission file should contain a header.   \n",
    "- Follow all submission guidelines strictly to avoid inconvenience."
   ]
  },
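  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, optional sanity check against the guidelines above, you can read the generated file back and confirm the header and the row count."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: the file should have an 'NSP' header and one row per test sample.\n",
    "check = pd.read_csv('data/submission.csv')\n",
    "print(check.columns.tolist())          # expected: ['NSP']\n",
    "print(len(check) == len(final_test))   # expected: True\n",
    "print(check.head())"
   ]
  },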
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## To download the generated csv in colab run the below command"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "  from google.colab import files\n",
    "  files.download('data/submission.csv')\n",
    "except ImportError:\n",
    "  print(\"Only available in Google Colab\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Well Done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the [challenge page](https://www.aicrowd.com/challenges/crdio) and make one."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}