ns_exponent (float, optional):The exponent used to shape the negative-sampling distribution. A value of 1.0 samples exactly in proportion to word frequencies, 0.0 samples all words equally, while a negative value samples low-frequency words more often than high-frequency words. The default of 0.75 was chosen by the original Word2Vec paper; more recent papers suggest that other values may perform better in applications such as recommender systems.
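The effect of ns_exponent can be sketched in plain Python (the word counts below are invented for illustration; this mirrors the weighting idea, not gensim's internal table-building code):

```python
# Illustrative sketch: how ns_exponent reshapes the negative-sampling
# distribution. Raising each word's count to ns_exponent and renormalizing
# flattens (exponent < 1) or preserves (exponent = 1) the frequency skew.
counts = {"the": 1000, "cat": 50, "sat": 20}  # hypothetical corpus counts

def ns_distribution(counts, ns_exponent):
    weights = {w: c ** ns_exponent for w, c in counts.items()}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

print(ns_distribution(counts, 1.0))   # proportional to raw frequency
print(ns_distribution(counts, 0.0))   # uniform: every word gets 1/3
print(ns_distribution(counts, 0.75))  # the default: skew is dampened
print(ns_distribution(counts, -0.5))  # rare words sampled more often
```

With exponent 1.0, "the" dominates; at 0.75 its share shrinks toward the rarer words, and a negative exponent inverts the ordering entirely.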
min_alpha (float, optional):The learning rate will linearly drop to min_alpha as training progresses.
seed (int, optional):Seed for the random number generator. Initial vectors for each word are seeded with a hash of the concatenation of word + str(seed). Note that for a fully deterministically-reproducible run, you must also limit the model to a single worker thread (workers=1), to eliminate ordering jitter from OS thread scheduling. (In Python 3, reproducibility between interpreter launches also requires use of the PYTHONHASHSEED environment variable to control hash randomization).
batch_words (int, optional):Target size (in words) for batches of examples passed to worker threads (and thus cython routines). (Larger batches will be passed if individual texts are longer than 10000 words, but the standard cython code truncates to that maximum.)
compute_loss (bool, optional):If True, computes and stores loss value which can be retrieved using get_latest_training_loss().
callbacks (iterable of CallbackAny2Vec, optional):Sequence of callbacks to be executed at specific stages during training.