task 1 first files
511	Carsten_Solutions/Exercise 1.ipynb	Normal file
| @@ -0,0 +1,511 @@ | ||||
| { | ||||
|  "cells": [ | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "# Exercise 1" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 6, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "import nltk\n", | ||||
|     "from nltk import word_tokenize, pos_tag" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Classifiers\n", | ||||
|     "note: for model1 and model3 you can try different classifiers: Hidden Markov Model, Logistic Regression, Maximum Entropy Markov Models, Decision Trees, Naive Bayes, etc.. __choose one!__" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 7, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "from sklearn.tree import DecisionTreeClassifier\n", | ||||
|     "from sklearn.feature_extraction import DictVectorizer\n", | ||||
|     "from sklearn.pipeline import Pipeline" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### 1. model1 = your POS tagger model (english)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 8, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "{'word': 'bims', 'length': 4, 'is_capitalized': False, 'prefix-1': 'b', 'suffix-1': 's', 'prev_word': 'i', 'next_word': 'der'}\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "def features(sentence, index):\n", | ||||
|     "    return {\n", | ||||
|     "        'word': sentence[index],\n", | ||||
|     "        'length': len(sentence[index]),\n", | ||||
|     "        'is_capitalized': sentence[index][0].upper() == sentence[index][0],\n", | ||||
|     "        'prefix-1': sentence[index][0],\n", | ||||
|     "        'suffix-1': sentence[index][-1],\n", | ||||
|     "        'prev_word': '' if index == 0 else sentence[index - 1],\n", | ||||
|     "        'next_word': '' if index == len(sentence) - 1 else sentence[index + 1]\n", | ||||
|     "    }\n", | ||||
|     "\n", | ||||
|     "print(features(\"halli hallo i bims der Programmierer\".strip().split(\" \"), 3))" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### 2. model2 = pre-trained POS tagger model using NLTK (maxentropy english)\n" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   }, | ||||
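|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "A minimal sketch of what model2 could look like, assuming NLTK's pre-trained English tagger is scored on the same treebank test split; `untag` and `test_sentences` are only defined further down in this notebook, so this cell would run after them:" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): evaluate nltk.pos_tag on the treebank test split\n", | ||||
|     "# nltk.pos_tag needs its default model, e.g. nltk.download('averaged_perceptron_tagger')\n", | ||||
|     "def score_nltk_tagger(tagged_sentences):\n", | ||||
|     "    correct, total = 0, 0\n", | ||||
|     "    for tagged in tagged_sentences:\n", | ||||
|     "        predicted = nltk.pos_tag(untag(tagged))  # pre-trained NLTK tagger\n", | ||||
|     "        correct += sum(1 for gold, pred in zip(tagged, predicted) if gold[1] == pred[1])\n", | ||||
|     "        total += len(tagged)\n", | ||||
|     "    return correct / total\n", | ||||
|     "\n", | ||||
|     "#performance1_2 = score_nltk_tagger(test_sentences)" | ||||
|    ] | ||||
|   }, | ||||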
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### 3. model3.x = rule-based classifiers (x = 1 to 5)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   }, | ||||
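|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "A sketch for the five rule-based taggers, assuming NLTK's DefaultTagger, RegexpTagger and the n-gram taggers are trained on `training_sentences` from the Task 1 section below:" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): five rule-based taggers as model3.1 .. model3.5\n", | ||||
|     "patterns = [(r'.*ing$', 'VBG'), (r'.*ed$', 'VBD'), (r'.*es$', 'VBZ'),\n", | ||||
|     "            (r'^-?[0-9]+(.[0-9]+)?$', 'CD'), (r'.*', 'NN')]\n", | ||||
|     "\n", | ||||
|     "#model3_1 = nltk.DefaultTagger('NN')\n", | ||||
|     "#model3_2 = nltk.RegexpTagger(patterns)\n", | ||||
|     "#model3_3 = nltk.UnigramTagger(training_sentences)\n", | ||||
|     "#model3_4 = nltk.BigramTagger(training_sentences)\n", | ||||
|     "#model3_5 = nltk.TrigramTagger(training_sentences)\n", | ||||
|     "#performance1_3 = [m.evaluate(test_sentences) for m in (model3_1, model3_2, model3_3, model3_4, model3_5)]" | ||||
|    ] | ||||
|   }, | ||||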
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### 4. model4 = your POS tagger model (not english)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   }, | ||||
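|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "For model4 the feature extraction and pipeline of model1 can be reused unchanged; a sketch, assuming a non-English tagged corpus `x3` is loaded in the Corpora section below:" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): retrain the model1 pipeline on non-English data\n", | ||||
|     "#from sklearn.base import clone\n", | ||||
|     "#annotated_sent_x3 = x3.tagged_sents()\n", | ||||
|     "#cutoff_x3 = int(.8 * len(annotated_sent_x3))\n", | ||||
|     "#X3_train, y3_train = transform_to_dataset(annotated_sent_x3[:cutoff_x3])\n", | ||||
|     "#model4 = clone(clf)\n", | ||||
|     "#model4.fit(X3_train[:10000], y3_train[:10000])" | ||||
|    ] | ||||
|   }, | ||||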
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### 5. model5 = pre-trained POS tagger model using RDRPOSTagger 1 or TreeTagger 2 (not english)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   }, | ||||
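|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "A sketch for model5, assuming the third-party `treetaggerwrapper` package and a local TreeTagger installation with a parameter file for the chosen language (German as an example):" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): pre-trained TreeTagger via the treetaggerwrapper package\n", | ||||
|     "#import treetaggerwrapper\n", | ||||
|     "#tagger = treetaggerwrapper.TreeTagger(TAGLANG='de')\n", | ||||
|     "#print(tagger.tag_text('Das ist ein kurzer Testsatz.'))" | ||||
|    ] | ||||
|   }, | ||||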
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Corpora\n", | ||||
|     "note: data split for training/test = 0.8/0.2 (sequencial)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "#### 1. X1 = nltk.corpus.treebank (english)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 9, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "[nltk_data] Downloading package treebank to\n", | ||||
|       "[nltk_data]     /Users/Carsten/nltk_data...\n", | ||||
|       "[nltk_data]   Package treebank is already up-to-date!\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "nltk.download('treebank')\n", | ||||
|     "x1 = nltk.corpus.treebank" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "#### 2. X2 = nltk.corpus.brown (english)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 10, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "[nltk_data] Downloading package brown to /Users/Carsten/nltk_data...\n", | ||||
|       "[nltk_data]   Package brown is already up-to-date!\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "nltk.download('brown')\n", | ||||
|     "x2 = nltk.corpus.brown" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "#### 3. X3 = other language (not english)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 11, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "#nltk.download('brown')\n", | ||||
|     "#x3 = other language" | ||||
|    ] | ||||
|   }, | ||||
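|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "If the non-English corpus comes in CoNLL format, it can be read with NLTK's ConllCorpusReader; a sketch with a hypothetical file name and column layout:" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): read a CoNLL-formatted corpus; file name and columns are placeholders\n", | ||||
|     "#from nltk.corpus.reader import ConllCorpusReader\n", | ||||
|     "#x3 = ConllCorpusReader('corpora/', 'german_corpus.conll', ['words', 'pos'])\n", | ||||
|     "#print(x3.tagged_sents()[0])" | ||||
|    ] | ||||
|   }, | ||||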
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### Task 1\n", | ||||
|     "* get results for english (plot a graph with all classifiers x results)\n", | ||||
|     "    * performance 1.1 = model1 in X1" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "##### Generate Training and Testdata\n", | ||||
|     "1. split annotaed sentences into training and testdata\n", | ||||
|     "2. split trainingdata into input data and teacherdata\n", | ||||
|     "    *input is the feature vector of each word\n", | ||||
|     "    *output is a list of POS tags for each word and sentences" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 12, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "got  3131  training sentences and  783  test sentences\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "#to generate trainingsdata, delete the assigned tags as a function\n", | ||||
|     "def untag(tagged_sentence):\n", | ||||
|     "    return [w for w, t in tagged_sentence]\n", | ||||
|     "\n", | ||||
|     "#object including the annotated sentences\n", | ||||
|     "annotated_sent = nltk.corpus.treebank.tagged_sents()\n", | ||||
|     "\n", | ||||
|     "#to split the data, calculate the borders for ratio\n", | ||||
|     "cutoff = int(.8 * len(annotated_sent))\n", | ||||
|     "training_sentences = annotated_sent[:cutoff]\n", | ||||
|     "test_sentences = annotated_sent[cutoff:]\n", | ||||
|     "\n", | ||||
|     "#show the amount of sentences\n", | ||||
|     "print(\"got \",len(training_sentences),\" training sentences and \", len(test_sentences), \" test sentences\")\n", | ||||
|     "\n", | ||||
|     "#for training split sentences with its tags into y (for a sentences its resulting tags for each word) and transform sentences and x as a list of the features extracet for echt word in the sentences\n", | ||||
|     "def transform_to_dataset(tagged_sentences):\n", | ||||
|     "    X, y = [], []\n", | ||||
|     "    for tagged_sentence in tagged_sentences:\n", | ||||
|     "        for index in range(len(tagged_sentence)):\n", | ||||
|     "            X.append(features(untag(tagged_sentence), index))\n", | ||||
|     "            y.append(tagged_sentence[index][1]) \n", | ||||
|     "    return X, y\n", | ||||
|     "\n", | ||||
|     "#trainings inputset X and training teacher set y\n", | ||||
|     "X, y = transform_to_dataset(training_sentences)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "source": [ | ||||
|     "#### Implementing a classifier\n", | ||||
|     "relevant imports\n", | ||||
|     "* decision tree as the AI for classfing\n", | ||||
|     "* dict vercorizer transforms the feature dictionary into a vector as the input for the tree" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 13, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "from sklearn.tree import DecisionTreeClassifier\n", | ||||
|     "from sklearn.feature_extraction import DictVectorizer\n", | ||||
|     "from sklearn.pipeline import Pipeline" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "Pipeline manages vectorizer and classifier" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 14, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "clf = Pipeline([\n", | ||||
|     "    ('vectorizer', DictVectorizer(sparse=False)),\n", | ||||
|     "    ('classifier', DecisionTreeClassifier(criterion='entropy'))\n", | ||||
|     "])" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "##### Calculate performance 1.1 \n", | ||||
|     "* fit the decision tree for a limited amount (size) of training \n", | ||||
|     "* test data and compare with score function on testdata" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 15, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "training OK\n", | ||||
|       "Accuracy: 0.880832376865\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "size=10000\n", | ||||
|     "clf.fit(X[:size], y[:size])\n", | ||||
|     " \n", | ||||
|     "print('training OK')\n", | ||||
|     " \n", | ||||
|     "X_test, y_test = transform_to_dataset(test_sentences)\n", | ||||
|     "\n", | ||||
|     "performance1_1 = clf.score(X_test, y_test)\n", | ||||
|     "\n", | ||||
|     "print(\"Accuracy:\", performance1_1)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "##### Calculate other performances" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 16, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "performance1_2 = 0\n", | ||||
|     "performance1_3 = 0\n", | ||||
|     "performance1_4 = 0\n", | ||||
|     "performance1_5 = 0\n", | ||||
|     "performance1_6 = 0" | ||||
|    ] | ||||
|   }, | ||||
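|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "These values are still placeholders; as one example of filling them in, performance 1.4 (model1 on X2) could be computed by rebuilding the data sets from the brown corpus, reusing the feature extraction above:" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): performance 1.4 = model1 retrained and scored on the brown corpus\n", | ||||
|     "#annotated_brown = x2.tagged_sents()\n", | ||||
|     "#cutoff_b = int(.8 * len(annotated_brown))\n", | ||||
|     "#Xb, yb = transform_to_dataset(annotated_brown[:cutoff_b])\n", | ||||
|     "#Xb_test, yb_test = transform_to_dataset(annotated_brown[cutoff_b:])\n", | ||||
|     "#clf.fit(Xb[:10000], yb[:10000])\n", | ||||
|     "#performance1_4 = clf.score(Xb_test, yb_test)" | ||||
|    ] | ||||
|   }, | ||||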
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "#### Using the classifier\n", | ||||
|     "for results the link of pos_tags:\n", | ||||
|     "https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 17, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "3.6.3\n", | ||||
|       "checking...\n", | ||||
|       "[('Hello', 'NNP'), ('world', 'VBD'), (',', ','), ('lets', 'NNS'), ('do', 'VB'), ('something', 'VBG'), ('awesome', 'NN'), ('today', 'NN'), ('!', 'CD')]\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "def pos_tag(sentence):\n", | ||||
|     "    print('checking...')\n", | ||||
|     "    tagged_sentence = []\n", | ||||
|     "    tags = clf.predict([features(sentence, index) for index in range(len(sentence))])\n", | ||||
|     "    return zip(sentence, tags)\n", | ||||
|     "\n", | ||||
|     "import platform\n", | ||||
|     "print(platform.python_version())\n", | ||||
|     "\n", | ||||
|     "print(list(pos_tag(word_tokenize('Hello world, lets do something awesome today!'))))" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "### Results for Task 1\n", | ||||
|     "* get results for english (plot a graph with all classifiers x results)\n", | ||||
|     "    * performance 1.1 = model1 in X1\n", | ||||
|     "    * performance 1.2 = model2 in X1\n", | ||||
|     "    * performance 1.3.x = model3.x in X1\n", | ||||
|     "    * performance 1.4 = model1 in X2\n", | ||||
|     "    * performance 1.5 = model2 in X2\n", | ||||
|     "    * performance 1.6.x = model3.x in X2" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 22, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "High five! You successfully sent some data to your account on plotly. View your plot in your browser at https://plot.ly/~carsten95/0 or inside your plot.ly account where it is named 'basic-bar'\n" | ||||
|      ] | ||||
|     }, | ||||
|     { | ||||
|      "data": { | ||||
|       "text/html": [ | ||||
|        "<iframe id=\"igraph\" scrolling=\"no\" style=\"border:none;\" seamless=\"seamless\" src=\"https://plot.ly/~carsten95/0.embed\" height=\"525px\" width=\"100%\"></iframe>" | ||||
|       ], | ||||
|       "text/plain": [ | ||||
|        "<plotly.tools.PlotlyDisplay object>" | ||||
|       ] | ||||
|      }, | ||||
|      "execution_count": 22, | ||||
|      "metadata": {}, | ||||
|      "output_type": "execute_result" | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "import plotly\n", | ||||
|     "plotly.tools.set_credentials_file(username='carsten95', api_key='vElf5IOxiFheQdjTxjXW')\n", | ||||
|     "plotly.__version__\n", | ||||
|     "import plotly.plotly as py\n", | ||||
|     "import plotly.graph_objs as go\n", | ||||
|     "\n", | ||||
|     "data = [go.Bar(\n", | ||||
|     "            x=['performance 1.1', 'performance 1.2', 'performance 1.3', 'performance 1.4', 'performance 1.5' , 'performance 1.6'],\n", | ||||
|     "            y=[performance1_1, performance1_2, performance1_3, performance1_4, performance1_5, performance1_6]\n", | ||||
|     "    )]\n", | ||||
|     "\n", | ||||
|     "py.iplot(data, filename='basic-bar')" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   } | ||||
|  ], | ||||
|  "metadata": { | ||||
|   "kernelspec": { | ||||
|    "display_name": "Python 3", | ||||
|    "language": "python", | ||||
|    "name": "python3" | ||||
|   }, | ||||
|   "language_info": { | ||||
|    "codemirror_mode": { | ||||
|     "name": "ipython", | ||||
|     "version": 3 | ||||
|    }, | ||||
|    "file_extension": ".py", | ||||
|    "mimetype": "text/x-python", | ||||
|    "name": "python", | ||||
|    "nbconvert_exporter": "python", | ||||
|    "pygments_lexer": "ipython3", | ||||
|    "version": "3.6.3" | ||||
|   } | ||||
|  }, | ||||
|  "nbformat": 4, | ||||
|  "nbformat_minor": 2 | ||||
| } | ||||
							
								
								
									
304	Carsten_Solutions/NLP - Test 01.ipynb	Normal file
File diff suppressed because one or more lines are too long
| @@ -0,0 +1,545 @@ | ||||
| { | ||||
|  "cells": [ | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "# NLP Lab Task 1 SoSe 18\n", | ||||
|     "\n", | ||||
|     "## POS Tagger\n", | ||||
|     "due to 08.05.2018\n" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 1, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "import nltk\n", | ||||
|     "from nltk import word_tokenize, pos_tag" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 2, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "[('Hi', 'NNP'), (',', ','), ('welcome', 'NN'), ('to', 'TO'), ('the', 'DT'), ('NLP', 'NNP'), ('lab', 'NN'), ('!', '.')]\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "tokens = word_tokenize(\"Hi, welcome to the NLP lab!\")\n", | ||||
|     "print(pos_tag(tokens))" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Exploring the Penn TreeBank (PTB) Corpus" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 3, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "[nltk_data] Downloading package treebank to\n", | ||||
|       "[nltk_data]     /Users/Carsten/nltk_data...\n", | ||||
|       "[nltk_data]   Package treebank is already up-to-date!\n", | ||||
|       "[('Pierre', 'NNP'), ('Vinken', 'NNP'), (',', ','), ('61', 'CD'), ('years', 'NNS'), ('old', 'JJ'), (',', ','), ('will', 'MD'), ('join', 'VB'), ('the', 'DT'), ('board', 'NN'), ('as', 'IN'), ('a', 'DT'), ('nonexecutive', 'JJ'), ('director', 'NN'), ('Nov.', 'NNP'), ('29', 'CD'), ('.', '.')]\n", | ||||
|       "Tagged sentences:  3914\n", | ||||
|       "Tagged words: 100676\n", | ||||
|       "[nltk_data] Downloading package tagsets to /Users/Carsten/nltk_data...\n", | ||||
|       "[nltk_data]   Package tagsets is already up-to-date!\n", | ||||
|       "$: dollar\n", | ||||
|       "    $ -$ --$ A$ C$ HK$ M$ NZ$ S$ U.S.$ US$\n", | ||||
|       "'': closing quotation mark\n", | ||||
|       "    ' ''\n", | ||||
|       "(: opening parenthesis\n", | ||||
|       "    ( [ {\n", | ||||
|       "): closing parenthesis\n", | ||||
|       "    ) ] }\n", | ||||
|       ",: comma\n", | ||||
|       "    ,\n", | ||||
|       "--: dash\n", | ||||
|       "    --\n", | ||||
|       ".: sentence terminator\n", | ||||
|       "    . ! ?\n", | ||||
|       ":: colon or ellipsis\n", | ||||
|       "    : ; ...\n", | ||||
|       "CC: conjunction, coordinating\n", | ||||
|       "    & 'n and both but either et for less minus neither nor or plus so\n", | ||||
|       "    therefore times v. versus vs. whether yet\n", | ||||
|       "CD: numeral, cardinal\n", | ||||
|       "    mid-1890 nine-thirty forty-two one-tenth ten million 0.5 one forty-\n", | ||||
|       "    seven 1987 twenty '79 zero two 78-degrees eighty-four IX '60s .025\n", | ||||
|       "    fifteen 271,124 dozen quintillion DM2,000 ...\n", | ||||
|       "DT: determiner\n", | ||||
|       "    all an another any both del each either every half la many much nary\n", | ||||
|       "    neither no some such that the them these this those\n", | ||||
|       "EX: existential there\n", | ||||
|       "    there\n", | ||||
|       "FW: foreign word\n", | ||||
|       "    gemeinschaft hund ich jeux habeas Haementeria Herr K'ang-si vous\n", | ||||
|       "    lutihaw alai je jour objets salutaris fille quibusdam pas trop Monte\n", | ||||
|       "    terram fiche oui corporis ...\n", | ||||
|       "IN: preposition or conjunction, subordinating\n", | ||||
|       "    astride among uppon whether out inside pro despite on by throughout\n", | ||||
|       "    below within for towards near behind atop around if like until below\n", | ||||
|       "    next into if beside ...\n", | ||||
|       "JJ: adjective or numeral, ordinal\n", | ||||
|       "    third ill-mannered pre-war regrettable oiled calamitous first separable\n", | ||||
|       "    ectoplasmic battery-powered participatory fourth still-to-be-named\n", | ||||
|       "    multilingual multi-disciplinary ...\n", | ||||
|       "JJR: adjective, comparative\n", | ||||
|       "    bleaker braver breezier briefer brighter brisker broader bumper busier\n", | ||||
|       "    calmer cheaper choosier cleaner clearer closer colder commoner costlier\n", | ||||
|       "    cozier creamier crunchier cuter ...\n", | ||||
|       "JJS: adjective, superlative\n", | ||||
|       "    calmest cheapest choicest classiest cleanest clearest closest commonest\n", | ||||
|       "    corniest costliest crassest creepiest crudest cutest darkest deadliest\n", | ||||
|       "    dearest deepest densest dinkiest ...\n", | ||||
|       "LS: list item marker\n", | ||||
|       "    A A. B B. C C. D E F First G H I J K One SP-44001 SP-44002 SP-44005\n", | ||||
|       "    SP-44007 Second Third Three Two * a b c d first five four one six three\n", | ||||
|       "    two\n", | ||||
|       "MD: modal auxiliary\n", | ||||
|       "    can cannot could couldn't dare may might must need ought shall should\n", | ||||
|       "    shouldn't will would\n", | ||||
|       "NN: noun, common, singular or mass\n", | ||||
|       "    common-carrier cabbage knuckle-duster Casino afghan shed thermostat\n", | ||||
|       "    investment slide humour falloff slick wind hyena override subhumanity\n", | ||||
|       "    machinist ...\n", | ||||
|       "NNP: noun, proper, singular\n", | ||||
|       "    Motown Venneboerger Czestochwa Ranzer Conchita Trumplane Christos\n", | ||||
|       "    Oceanside Escobar Kreisler Sawyer Cougar Yvette Ervin ODI Darryl CTCA\n", | ||||
|       "    Shannon A.K.C. Meltex Liverpool ...\n", | ||||
|       "NNPS: noun, proper, plural\n", | ||||
|       "    Americans Americas Amharas Amityvilles Amusements Anarcho-Syndicalists\n", | ||||
|       "    Andalusians Andes Andruses Angels Animals Anthony Antilles Antiques\n", | ||||
|       "    Apache Apaches Apocrypha ...\n", | ||||
|       "NNS: noun, common, plural\n", | ||||
|       "    undergraduates scotches bric-a-brac products bodyguards facets coasts\n", | ||||
|       "    divestitures storehouses designs clubs fragrances averages\n", | ||||
|       "    subjectivists apprehensions muses factory-jobs ...\n", | ||||
|       "PDT: pre-determiner\n", | ||||
|       "    all both half many quite such sure this\n", | ||||
|       "POS: genitive marker\n", | ||||
|       "    ' 's\n", | ||||
|       "PRP: pronoun, personal\n", | ||||
|       "    hers herself him himself hisself it itself me myself one oneself ours\n", | ||||
|       "    ourselves ownself self she thee theirs them themselves they thou thy us\n", | ||||
|       "PRP$: pronoun, possessive\n", | ||||
|       "    her his mine my our ours their thy your\n", | ||||
|       "RB: adverb\n", | ||||
|       "    occasionally unabatingly maddeningly adventurously professedly\n", | ||||
|       "    stirringly prominently technologically magisterially predominately\n", | ||||
|       "    swiftly fiscally pitilessly ...\n", | ||||
|       "RBR: adverb, comparative\n", | ||||
|       "    further gloomier grander graver greater grimmer harder harsher\n", | ||||
|       "    healthier heavier higher however larger later leaner lengthier less-\n", | ||||
|       "    perfectly lesser lonelier longer louder lower more ...\n", | ||||
|       "RBS: adverb, superlative\n", | ||||
|       "    best biggest bluntest earliest farthest first furthest hardest\n", | ||||
|       "    heartiest highest largest least less most nearest second tightest worst\n", | ||||
|       "RP: particle\n", | ||||
|       "    aboard about across along apart around aside at away back before behind\n", | ||||
|       "    by crop down ever fast for forth from go high i.e. in into just later\n", | ||||
|       "    low more off on open out over per pie raising start teeth that through\n", | ||||
|       "    under unto up up-pp upon whole with you\n", | ||||
|       "SYM: symbol\n", | ||||
|       "    % & ' '' ''. ) ). * + ,. < = > @ A[fj] U.S U.S.S.R * ** ***\n", | ||||
|       "TO: \"to\" as preposition or infinitive marker\n", | ||||
|       "    to\n", | ||||
|       "UH: interjection\n", | ||||
|       "    Goodbye Goody Gosh Wow Jeepers Jee-sus Hubba Hey Kee-reist Oops amen\n", | ||||
|       "    huh howdy uh dammit whammo shucks heck anyways whodunnit honey golly\n", | ||||
|       "    man baby diddle hush sonuvabitch ...\n", | ||||
|       "VB: verb, base form\n", | ||||
|       "    ask assemble assess assign assume atone attention avoid bake balkanize\n", | ||||
|       "    bank begin behold believe bend benefit bevel beware bless boil bomb\n", | ||||
|       "    boost brace break bring broil brush build ...\n", | ||||
|       "VBD: verb, past tense\n", | ||||
|       "    dipped pleaded swiped regummed soaked tidied convened halted registered\n", | ||||
|       "    cushioned exacted snubbed strode aimed adopted belied figgered\n", | ||||
|       "    speculated wore appreciated contemplated ...\n", | ||||
|       "VBG: verb, present participle or gerund\n", | ||||
|       "    telegraphing stirring focusing angering judging stalling lactating\n", | ||||
|       "    hankerin' alleging veering capping approaching traveling besieging\n", | ||||
|       "    encrypting interrupting erasing wincing ...\n", | ||||
|       "VBN: verb, past participle\n", | ||||
|       "    multihulled dilapidated aerosolized chaired languished panelized used\n", | ||||
|       "    experimented flourished imitated reunifed factored condensed sheared\n", | ||||
|       "    unsettled primed dubbed desired ...\n", | ||||
|       "VBP: verb, present tense, not 3rd person singular\n", | ||||
|       "    predominate wrap resort sue twist spill cure lengthen brush terminate\n", | ||||
|       "    appear tend stray glisten obtain comprise detest tease attract\n", | ||||
|       "    emphasize mold postpone sever return wag ...\n", | ||||
|       "VBZ: verb, present tense, 3rd person singular\n", | ||||
|       "    bases reconstructs marks mixes displeases seals carps weaves snatches\n", | ||||
|       "    slumps stretches authorizes smolders pictures emerges stockpiles\n", | ||||
|       "    seduces fizzes uses bolsters slaps speaks pleads ...\n", | ||||
|       "WDT: WH-determiner\n", | ||||
|       "    that what whatever which whichever\n", | ||||
|       "WP: WH-pronoun\n", | ||||
|       "    that what whatever whatsoever which who whom whosoever\n", | ||||
|       "WP$: WH-pronoun, possessive\n", | ||||
|       "    whose\n", | ||||
|       "WRB: Wh-adverb\n", | ||||
|       "    how however whence whenever where whereby whereever wherein whereof why\n", | ||||
|       "``: opening quotation mark\n", | ||||
|       "    ` ``\n", | ||||
|       "None\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "nltk.download('treebank')\n", | ||||
|     "annotated_sent = nltk.corpus.treebank.tagged_sents()\n", | ||||
|     " \n", | ||||
|     "print(annotated_sent[0])\n", | ||||
|     "print(\"Tagged sentences: \", len(annotated_sent))\n", | ||||
|     "print(\"Tagged words:\", len(nltk.corpus.treebank.tagged_words()))\n", | ||||
|     "\n", | ||||
|     "# tagsets\n", | ||||
|     "nltk.download('tagsets')\n", | ||||
|     "print(nltk.help.upenn_tagset())" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Training" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 4, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "{'is_capitalized': False,\n", | ||||
|       " 'next_word': 'sentence',\n", | ||||
|       " 'prefix-1': 'a',\n", | ||||
|       " 'prev_word': 'is',\n", | ||||
|       " 'suffix-1': 'a',\n", | ||||
|       " 'word': 'a'}\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "# TODO: improve this feature extraction function\n", | ||||
|     "    \n", | ||||
|     "def features(sentence, index):\n", | ||||
|     "    return {\n", | ||||
|     "        'word': sentence[index],\n", | ||||
|     "        'is_capitalized': sentence[index][0].upper() == sentence[index][0],\n", | ||||
|     "        'prefix-1': sentence[index][0],\n", | ||||
|     "        'suffix-1': sentence[index][-1],\n", | ||||
|     "        'prev_word': '' if index == 0 else sentence[index - 1],\n", | ||||
|     "        'next_word': '' if index == len(sentence) - 1 else sentence[index + 1]\n", | ||||
|     "    }\n", | ||||
|     "import pprint \n", | ||||
|     "pprint.pprint(features(['This', 'is', 'a', 'sentence'], 2))" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 5, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "def untag(tagged_sentence):\n", | ||||
|     "    return [w for w, t in tagged_sentence]" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 6, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "2935\n", | ||||
|       "979\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "cutoff = int(.75 * len(annotated_sent))\n", | ||||
|     "training_sentences = annotated_sent[:cutoff]\n", | ||||
|     "test_sentences = annotated_sent[cutoff:]\n", | ||||
|     " \n", | ||||
|     "print(len(training_sentences))\n", | ||||
|     "print(len(test_sentences))\n", | ||||
|     " \n", | ||||
|     "def transform_to_dataset(tagged_sentences):\n", | ||||
|     "    X, y = [], []\n", | ||||
|     "    for tagged in tagged_sentences:\n", | ||||
|     "        for index in range(len(tagged)):\n", | ||||
|     "            X.append(features(untag(tagged), index))\n", | ||||
|     "            y.append(tagged[index][1])\n", | ||||
|     " \n", | ||||
|     "    return X, y\n", | ||||
|     " \n", | ||||
|     "X, y = transform_to_dataset(training_sentences)" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Implementing a classifier" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 7, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "training OK\n", | ||||
|       "Accuracy: 0.878515185602\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "from sklearn.tree import DecisionTreeClassifier\n", | ||||
|     "from sklearn.feature_extraction import DictVectorizer\n", | ||||
|     "from sklearn.pipeline import Pipeline\n", | ||||
|     "\n", | ||||
|     "size=10000\n", | ||||
|     "\n", | ||||
|     "clf = Pipeline([\n", | ||||
|     "    ('vectorizer', DictVectorizer(sparse=False)),\n", | ||||
|     "    ('classifier', DecisionTreeClassifier(criterion='entropy'))\n", | ||||
|     "])\n", | ||||
|     "clf.fit(X[:size], y[:size])\n", | ||||
|     " \n", | ||||
|     "print('training OK')\n", | ||||
|     " \n", | ||||
|     "X_test, y_test = transform_to_dataset(test_sentences)\n", | ||||
|     " \n", | ||||
|     "print(\"Accuracy:\", clf.score(X_test, y_test))" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Using the classifier" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 8, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "name": "stdout", | ||||
|      "output_type": "stream", | ||||
|      "text": [ | ||||
|       "3.6.3\n", | ||||
|       "checking...\n", | ||||
|       "[('Hello', 'NN'), ('world', 'NN'), (',', ','), ('lets', 'NNS'), ('do', 'VB'), ('something', 'VBG'), ('awesome', 'NN'), ('today', 'NN'), ('!', 'NNP')]\n" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "def pos_tag(sentence):\n", | ||||
|     "    print('checking...')\n", | ||||
|     "    tagged_sentence = []\n", | ||||
|     "    tags = clf.predict([features(sentence, index) for index in range(len(sentence))])\n", | ||||
|     "    return zip(sentence, tags)\n", | ||||
|     "\n", | ||||
|     "import platform\n", | ||||
|     "print(platform.python_version())\n", | ||||
|     "\n", | ||||
|     "print(list(pos_tag(word_tokenize('Hello world, lets do something awesome today!'))))" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Rule-based POS taggers\n", | ||||
|     "1. DefaultTagger that simply tags everything with the same tag\n", | ||||
|     "2. RegexpTagger that applies tags according to a set of regular expressions\n", | ||||
|     "3. N-Gram (n-gram tagger is a generalization of a unigram tagger whose context is the current word together with the part-of-speech tags of the n-1 preceding token)\n", | ||||
|     "    * UnigramTagger\n", | ||||
|     "    * BigramTagger\n", | ||||
|     "    * TrigramTagger" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": 9, | ||||
|    "metadata": {}, | ||||
|    "outputs": [ | ||||
|     { | ||||
|      "ename": "NameError", | ||||
|      "evalue": "name 'brown_tagged_sents' is not defined", | ||||
|      "output_type": "error", | ||||
|      "traceback": [ | ||||
|       "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", | ||||
|       "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)", | ||||
|       "\u001b[0;32m<ipython-input-9-cac1441958dc>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      7\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mnltk\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mTrigramTagger\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mtg\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      8\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 9\u001b[0;31m \u001b[0msize\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbrown_tagged_sents\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m*\u001b[0m \u001b[0;36m0.9\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     10\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     11\u001b[0m patterns = [(r'.*ing$', 'VBG'), (r'.*ed$', 'VBD'), (r'.*es$', 'VBZ'), (r'.*ould$', 'MD'), (r'.*\\'s$', 'NN$'),               \n", | ||||
|       "\u001b[0;31mNameError\u001b[0m: name 'brown_tagged_sents' is not defined" | ||||
|      ] | ||||
|     } | ||||
|    ], | ||||
|    "source": [ | ||||
|     "#nltk.download('brown')\n", | ||||
|     "\n", | ||||
|     "from nltk.corpus import brown\n", | ||||
|     "from nltk import DefaultTagger as df\n", | ||||
|     "from nltk import UnigramTagger as ut\n", | ||||
|     "from nltk import BigramTagger as bt\n", | ||||
|     "from nltk import TrigramTagger as tg\n", | ||||
|     "\n", | ||||
|     "size = int(len(brown_tagged_sents) * 0.9)\n", | ||||
|     "\n", | ||||
|     "patterns = [(r'.*ing$', 'VBG'), (r'.*ed$', 'VBD'), (r'.*es$', 'VBZ'), (r'.*ould$', 'MD'), (r'.*\\'s$', 'NN$'),               \n", | ||||
|     "             (r'.*s$', 'NNS'), (r'^-?[0-9]+(.[0-9]+)?$', 'CD'), (r'.*', 'NN')]\n", | ||||
|     "\n", | ||||
|     "brown_tagged_sents = brown.tagged_sents(categories='news')\n", | ||||
|     "brown_sents = brown.sents(categories='news')\n", | ||||
|     "\n", | ||||
|     "train_sents = brown_tagged_sents[:size]\n", | ||||
|     "test_sents = brown_tagged_sents[size:]\n", | ||||
|     "\n", | ||||
|     "def_model = nltk.DefaultTagger('NN')\n", | ||||
|     "uni_model = nltk.UnigramTagger(train_sents)\n", | ||||
|     "bi_model = nltk.BigramTagger(train_sents)\n", | ||||
|     "tri_model = nltk.TrigramTagger(train_sents)\n", | ||||
|     "regexp_model = nltk.RegexpTagger(patterns)\n", | ||||
|     "\n", | ||||
|     "# performance of Default Tagger\n", | ||||
|     "print(def_model.evaluate(train_sents))\n", | ||||
|     "print(def_model.evaluate(test_sents))\n", | ||||
|     "print()\n", | ||||
|     "# performance of Unigram Tagger\n", | ||||
|     "print(uni_model.evaluate(train_sents))\n", | ||||
|     "print(uni_model.evaluate(test_sents))\n", | ||||
|     "print()\n", | ||||
|     "# performance of Bigram Tagger\n", | ||||
|     "print(bi_model.evaluate(train_sents))\n", | ||||
|     "print(bi_model.evaluate(test_sents))\n", | ||||
|     "print()\n", | ||||
|     "# performance of Trigram Tagger\n", | ||||
|     "print(tri_model.evaluate(train_sents))\n", | ||||
|     "print(tri_model.evaluate(test_sents))\n", | ||||
|     "print()\n", | ||||
|     "# performance of Regex Tagger\n", | ||||
|     "print(regexp_model.evaluate(train_sents))\n", | ||||
|     "print(regexp_model.evaluate(test_sents))\n", | ||||
|     "print()" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "## Exercise 01\n", | ||||
|     "### In this lab you will learn how to train your own POS tagger classifier and test it against some pre-trained models\n", | ||||
|     "__Pleases implement your code and upload it to git using (jupyter notebook format)__\n", | ||||
|     "#### Classifiers\n", | ||||
|     "1. model1 = your POS tagger model (english)\n", | ||||
|     "2. model2 = pre-trained POS tagger model using NLTK (maxentropy english)\n", | ||||
|     "3. model3.x = rule-based classifiers (x = 1 to 5)\n", | ||||
|     "4. model4 = your POS tagger model (not english)\n", | ||||
|     "5. model5 = pre-trained POS tagger model using RDRPOSTagger 1 or TreeTagger 2 (not english)\n", | ||||
|     "\n", | ||||
|     "note: for model1 and model3 you can try different classifiers: Hidden Markov Model, Logistic Regression, Maximum Entropy Markov Models, Decision Trees, Naive Bayes, etc..__choose one!__\n", | ||||
|     "#### Corpora\n", | ||||
|     "1. X1 = nltk.corpus.treebank (english)\n", | ||||
|     "2. X2 = nltk.corpus.brown (english)\n", | ||||
|     "3. X3 = other language (not english)\n", | ||||
|     "note: data split for training/test = 0.8/0.2 (sequencial)\n", | ||||
|     "#### Task 1\n", | ||||
|     "* get results for english (plot a graph with all classifiers x results)\n", | ||||
|     "    * performance 1.1 = model1 in X1\n", | ||||
|     "    * performance 1.2 = model2 in X1\n", | ||||
|     "    * performance 1.3.x = model3.x in X1\n", | ||||
|     "    * performance 1.4 = model1 in X2\n", | ||||
|     "    * performance 1.5 = model2 in X2\n", | ||||
|     "    * performance 1.6.x = model3.x in X2\n", | ||||
|     "#### Task 2\n", | ||||
|     "* train your model with standard features (plot a graph with all classifiers x results)\n", | ||||
|     "    * performance 2.1 = model4 in X3\n", | ||||
|     "    * performance 2.2 = model5 in X3\n", | ||||
|     "### notes:\n", | ||||
|     "1. you can save your trained models using pickle (import pickle)\n", | ||||
|     "2. please upload your jupyter file to git\n", | ||||
|     "3. this script just gives a general idea, please organize and comment your code accordingly\n", | ||||
|     "4. you have to make sure the language you choose is supported for one of the classifiers suggested (see above) AND you are able to find a corpus in that language (example: Tiger Corpus for German). You can also search the Web in order to try to find a pre-trained classifier in your language. If that is not possible, just choose one existing. Please also make sure the language you have choosen does not overlap with other students.\n", | ||||
|     "5. If you are able to find an annotated corpus and format is CoNLL, you can easly read it using the following method in NLTK:\n", | ||||
|     "corp = nltk.corpus.ConllCorpusReader()\n", | ||||
|     "6. a nice library to create charts: https://plot.ly/python/bar-charts/" | ||||
|    ] | ||||
|   }, | ||||
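|   { | ||||
|    "cell_type": "markdown", | ||||
|    "metadata": {}, | ||||
|    "source": [ | ||||
|     "As a small illustration of note 1, a trained model such as the `clf` pipeline above can be saved and restored with pickle (the file name is a placeholder):" | ||||
|    ] | ||||
|   }, | ||||
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [ | ||||
|     "# sketch (not run): persist a trained model with pickle\n", | ||||
|     "#import pickle\n", | ||||
|     "#with open('model1.pickle', 'wb') as f:\n", | ||||
|     "#    pickle.dump(clf, f)\n", | ||||
|     "#with open('model1.pickle', 'rb') as f:\n", | ||||
|     "#    clf_loaded = pickle.load(f)" | ||||
|    ] | ||||
|   }, | ||||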
|   { | ||||
|    "cell_type": "code", | ||||
|    "execution_count": null, | ||||
|    "metadata": { | ||||
|     "collapsed": true | ||||
|    }, | ||||
|    "outputs": [], | ||||
|    "source": [] | ||||
|   } | ||||
|  ], | ||||
|  "metadata": { | ||||
|   "kernelspec": { | ||||
|    "display_name": "Python 3", | ||||
|    "language": "python", | ||||
|    "name": "python3" | ||||
|   }, | ||||
|   "language_info": { | ||||
|    "codemirror_mode": { | ||||
|     "name": "ipython", | ||||
|     "version": 3 | ||||
|    }, | ||||
|    "file_extension": ".py", | ||||
|    "mimetype": "text/x-python", | ||||
|    "name": "python", | ||||
|    "nbconvert_exporter": "python", | ||||
|    "pygments_lexer": "ipython3", | ||||
|    "version": "3.6.3" | ||||
|   } | ||||
|  }, | ||||
|  "nbformat": 4, | ||||
|  "nbformat_minor": 2 | ||||
| } | ||||