This blog post demonstrates the finbert-embedding PyPI package, which extracts token- and sentence-level embeddings from the FinBERT model (a BERT language model fine-tuned on financial news articles). The FinBERT model was trained and open-sourced by Dogu Tan Araci (University of Amsterdam).
BERT, published by Google, is conceptually simple and empirically powerful: it obtained state-of-the-art results on eleven natural language processing tasks.
The objective of this project is to obtain word or sentence embeddings from FinBERT, a pre-trained model by Dogu Tan Araci. FinBERT is a BERT language model further trained on financial news articles to adapt it to the financial domain. It achieved state-of-the-art results on the FiQA sentiment scoring and Financial PhraseBank datasets. Paper here.
Instead of building and fine-tuning an end-to-end NLP model, you can directly use word embeddings from FinBERT to build NLP models for various downstream tasks, e.g. text clustering, word clustering, or extractive summarization.
Features
- Provides an abstraction that hides the details of running inference with the pre-trained FinBERT model.
- Requires only two lines of code to get sentence- or token-level encodings for a text sentence.
- Handles OOVs (out-of-vocabulary words) inherently.
- Downloads and installs the FinBERT pre-trained model on first initialization (see the usage section below).
Install
(It is recommended to create a conda environment for isolation and to avoid dependency clashes.)
pip install finbert-embedding==0.1.4
Note: If you get an error while installing this package (a common error with TensorFlow):

Installing collected packages: wrapt, tensorflow
Found existing installation: wrapt 1.10.11
ERROR: Cannot uninstall 'wrapt'. It is a distutils installed project...

then just do this:

pip install wrapt --upgrade --ignore-installed
pip install finbert-embedding==0.1.4
Usage 1
Token/Sentence Embedding Extraction
The word embeddings generated are a list of 768-dimensional embeddings, one per token.
The sentence embedding generated is a single 768-dimensional vector, the average of the token embeddings.
One can also set the model_path parameter in the FinbertEmbedding() class to extract token/sentence embeddings from any other fine-tuned model, or even the original BERT model. The FinBERT model is automatically downloaded from Dropbox when the class is initialized, i.e. finbert = FinbertEmbedding()
from finbert_embedding.embedding import FinbertEmbedding

text = "Another PSU bank, Punjab National Bank which also reported numbers managed to see a slight improvement in asset quality."

# Class initialization (you can set 'model_path' to your fine-tuned BERT model path; default is None)
finbert = FinbertEmbedding()

word_embeddings = finbert.word_vector(text)
sentence_embedding = finbert.sentence_vector(text)

print("Text Tokens: ", finbert.tokens)
# Text Tokens: ['another', 'psu', 'bank', ',', 'punjab', 'national', 'bank', 'which', 'also', 'reported', 'numbers', 'managed', 'to', 'see', 'a', 'slight', 'improvement', 'in', 'asset', 'quality', '.']

print('Shape of Word Embeddings: %d x %d' % (len(word_embeddings), len(word_embeddings[0])))
# Shape of Word Embeddings: 21 x 768

print("Shape of Sentence Embedding = ", len(sentence_embedding))
# Shape of Sentence Embedding = 768
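The sentence vector above is just the element-wise average of the token vectors. That pooling step can be sketched in plain Python (the toy 4-dimensional vectors here are stand-ins for real 768-dimensional FinBERT token embeddings):

```python
# Toy token embeddings: 3 tokens x 4 dimensions (real FinBERT uses 768)
word_embeddings = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
    [0.9, 1.0, 1.1, 1.2],
]

# Sentence embedding = element-wise average over the token vectors
n = len(word_embeddings)
sentence_embedding = [sum(dim) / n for dim in zip(*word_embeddings)]

print(len(sentence_embedding))  # 4 (would be 768 for FinBERT)
```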
Usage 2
Similarity or Text Comparison
A decent representation for a downstream task doesn’t mean that it will be meaningful in terms of cosine distance, since cosine distance is a linear space where all dimensions are weighted equally. If you want to use cosine distance anyway, then please focus on the rank, not the absolute value.
Namely, do not use:
if cosine(A, B) > 0.9, then A and B are similar
Please consider the following instead:
if cosine(A, B) > cosine(A, C), then A is more similar to B than C.
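This rank-based rule can be illustrated with a small self-contained sketch (the vectors and the cosine_sim helper are toy stand-ins for real FinBERT embeddings and scipy’s cosine):

```python
import math

def cosine_sim(u, v):
    # Cosine similarity = dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings standing in for A, B, C
A = [1.0, 0.9, 0.1]
B = [1.0, 1.0, 0.0]  # points in nearly the same direction as A
C = [0.0, 0.1, 1.0]  # points in a different direction

# Compare ranks, not absolute thresholds
print(cosine_sim(A, B) > cosine_sim(A, C))  # True: A is more similar to B than to C
```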
from finbert_embedding.embedding import FinbertEmbedding
from scipy.spatial.distance import cosine

text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."

finbert = FinbertEmbedding()
word_embeddings = finbert.word_vector(text)

diff_bank = 1 - cosine(word_embeddings[9], word_embeddings[18])
same_bank = 1 - cosine(word_embeddings[9], word_embeddings[5])

print('Vector similarity for similar bank meanings (bank vault & bank robber): %.2f' % same_bank)
print('Vector similarity for different bank meanings (bank robber & river bank): %.2f' % diff_bank)
# Vector similarity for similar bank meanings (bank vault & bank robber): 0.92
# Vector similarity for different bank meanings (bank robber & river bank): 0.64
Warning
According to BERT author Jacob Devlin: I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors. It seems that this is doing average pooling over the word tokens to get a sentence vector, but we never suggested that this will generate meaningful sentence representations. And even if they are decent representations when fed into a DNN trained for a downstream task, it doesn't mean that they will be meaningful in terms of cosine distance. (Since cosine distance is a linear space where all dimensions are weighted equally).
However, the [CLS] token does become meaningful if the model has been fine-tuned, where the last hidden layer of this token is used as the “sentence vector” for downstream sequence classification tasks. This package encodes a sentence by average pooling over all word tokens. It would definitely be worth experimenting with these sentence embeddings for similarity purposes.
To Do (Next Version)
- Extend it to give word embeddings for a paragraph/document (currently, it takes one sentence as input). For now, chunk your paragraph or text document into sentences using spaCy or NLTK before using finbert_embedding.
- Add a batch-processing feature.
- Add more ways of handling OOVs (currently, the average of all sub-token embeddings of an OOV word is used).
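Until paragraph input is supported, the pre-chunking step mentioned above can be sketched with a naive stdlib splitter (naive_sentences is a hypothetical helper for illustration; spaCy or NLTK give far more robust results):

```python
import re

def naive_sentences(text):
    # Very rough splitter: break after ., ! or ? followed by whitespace.
    # Use spaCy or NLTK in practice (abbreviations, decimals, etc. break this).
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

doc = ("Another PSU bank reported numbers. Asset quality improved slightly! "
       "Each sentence can then be fed to FinbertEmbedding separately.")
for sent in naive_sentences(doc):
    print(sent)
```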
Future Goal
- Create a generic downstream framework using the FinBERT language model for any labelled financial text classification task, e.g. sentiment classification, financial news classification, financial document classification, etc.
One can check the package here at PyPI and the code for token/sentence embedding extraction at GitHub here.
Happy Deep Learning 🙂