### 6/21/2017

I took some time away from the TweetToSparseFeatureVector today and focused on getting scores for all of the different training data sets: Anger, Fear, Joy, and Sadness. I ran my MultiFilter on Fear, Joy, and Sadness, since I had already run it on Anger.

=== Anger Summary ===

```
Correlation coefficient          0.625
Kendall's tau                    0.4454
Spearman's rho                   0.6155
Mean absolute error              0.1069
Root mean squared error          0.1358
Relative absolute error         76.1346 %
Root relative squared error     79.0147 %
Total Number of Instances      760
```

=== Fear Summary ===

```
Correlation coefficient          0.6216
Kendall's tau                    0.4382
Spearman's rho                   0.6087
Mean absolute error              0.1275
Root mean squared error          0.1575
Relative absolute error         77.1929 %
Root relative squared error     78.3684 %
Total Number of Instances      995
```

=== Joy Summary ===

```
Correlation coefficient          0.636
Kendall's tau                    0.4603
Spearman's rho                   0.6435
Mean absolute error              0.1348
Root mean squared error          0.1688
Relative absolute error         73.702  %
Root relative squared error     77.5147 %
Total Number of Instances      714
```

=== Sadness Summary ===

```
Correlation coefficient          0.7094
Kendall's tau                    0.5229
Spearman's rho                   0.7116
Mean absolute error              0.1142
Root mean squared error          0.1431
Relative absolute error         67.2283 %
Root relative squared error     70.4165 %
Total Number of Instances      673
```
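To keep the Weka output above interpretable, here is a minimal sketch of how each of those regression metrics is computed. The data is made up purely for illustration (these are NOT the real model predictions); note that Weka's "relative" errors compare the model against the trivial baseline of always predicting the mean of the actual values.

```python
import math

# Hypothetical gold intensities and model predictions, for illustration only
actual = [0.60, 0.35, 0.80, 0.50, 0.25]
predicted = [0.55, 0.40, 0.70, 0.52, 0.30]

n = len(actual)
mean_actual = sum(actual) / n

# Mean absolute error and root mean squared error
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# The "relative" errors divide by the error of always predicting the mean
mae_baseline = sum(abs(a - mean_actual) for a in actual) / n
rmse_baseline = math.sqrt(sum((a - mean_actual) ** 2 for a in actual) / n)

relative_absolute_error = 100 * mae / mae_baseline
root_relative_squared_error = 100 * rmse / rmse_baseline

print(f"MAE  = {mae:.4f}")
print(f"RMSE = {rmse:.4f}")
print(f"RAE  = {relative_absolute_error:.2f} %")
print(f"RRSE = {root_relative_squared_error:.2f} %")
```

A relative error under 100 % just means the model beats the predict-the-mean baseline, which is why the scores in the summaries above (all in the 67–80 % range) indicate the filters are learning something useful.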

The next step was to compare this system to the Weka Baseline system created by the task creators. If my system is on par with (or better than) the Weka Baseline system, I can use it as the benchmark against which to compare my Linear Regression and, eventually, Deep Learning models.

System comparison:

| System | Avg. Pearson | Avg. Spearman | Anger Pearson | Anger Spearman | Fear Pearson | Fear Spearman | Joy Pearson | Joy Spearman | Sadness Pearson | Sadness Spearman |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Weka Baseline System | 0.648 | 0.641 | 0.639 | 0.615 | 0.652 | 0.635 | 0.654 | 0.662 | 0.648 | 0.651 |
| My System (4 filters) | 0.648 | 0.6448 | 0.625 | 0.6155 | 0.6216 | 0.6087 | 0.636 | 0.6435 | 0.7094 | 0.7116 |

As shown in the table, my system is not far off in any of the individual emotions (in fact, it performed better than the Weka Baseline system on Sadness); its average Pearson score matches the baseline and its average Spearman score is slightly higher. While I could already use this system as the benchmark for my future models, I want to try one more time to add the TweetToSparseFeatureVector, as I think it will benefit my system.
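For reference, the two scores in the table are Pearson's r on the raw intensity values and Spearman's rho, which is just Pearson's r computed on the ranks of the values. A small self-contained sketch, using toy data rather than the real predictions:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    """1-based ranks, averaging the rank over any group of tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson's r applied to the ranks."""
    return pearson(ranks(xs), ranks(ys))

# Hypothetical gold intensities and predictions, for illustration only
gold = [0.2, 0.5, 0.9, 0.4, 0.7]
pred = [0.25, 0.45, 0.80, 0.50, 0.65]

print(round(pearson(gold, pred), 4))
print(round(spearman(gold, pred), 4))
```

Because Spearman only looks at rank order, a model can score well on it even if its raw intensity values are systematically shifted, which is why the task reports both.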

Additionally, today I looked into some tools that I could use when building my future Sentiment Analysis model. I found a Python module called the Natural Language Toolkit (NLTK), which makes NLP in Python very simple. There are some other cool tools as well, such as H2O.ai and Turi's GraphLab Create (which I have used in the past).
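As a quick taste of NLTK (assuming it is installed via `pip install nltk`), its `TweetTokenizer` is designed for exactly this kind of data: it keeps hashtags, @-mentions, and emoticons intact as single tokens, with no extra corpus downloads needed. The example tweet below is made up.

```python
from nltk.tokenize import TweetTokenizer

# TweetTokenizer preserves tweet-specific tokens that a plain word
# tokenizer would split apart (hashtags, handles, emoticons)
tokenizer = TweetTokenizer()
tokens = tokenizer.tokenize("Feeling great today!!! :) #joy @friend")
print(tokens)
```

Tokens like `#joy` and `:)` surviving tokenization is what makes this tokenizer a good fit for building tweet feature vectors later on.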

By Friday, I hope to have the preliminary testing with Weka done, and then hopefully I can get started with my own model next week.