faknow.run.content_based
- faknow.run.content_based.multimodal
- faknow.run.content_based.multimodal.run_algm
- faknow.run.content_based.multimodal.run_cafe
- faknow.run.content_based.multimodal.run_eann
- faknow.run.content_based.multimodal.run_hmcan
- faknow.run.content_based.multimodal.run_mcan
- faknow.run.content_based.multimodal.run_mfan
- faknow.run.content_based.multimodal.run_safe
- faknow.run.content_based.multimodal.run_spotfake
faknow.run.content_based.run_endef
- class faknow.run.content_based.run_endef.TokenizerENDEF(max_len=170, bert='hfl/chinese-roberta-wwm-ext')[source]
Bases: object
Tokenizer for ENDEF
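A minimal construction sketch using the documented defaults; manual construction is only needed for custom preprocessing pipelines:

```python
from faknow.run.content_based.run_endef import TokenizerENDEF

# construct the ENDEF tokenizer with its documented defaults
tokenizer = TokenizerENDEF(max_len=170, bert='hfl/chinese-roberta-wwm-ext')
```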
- faknow.run.content_based.run_endef.run_endef(train_path: str, base_model: AbstractModel | None = MDFEND('hfl/chinese-roberta-wwm-ext', domain_num=8), bert='hfl/chinese-roberta-wwm-ext', max_len=170, batch_size=64, num_epochs=50, lr=0.0005, weight_decay=5e-05, step_size=100, gamma=0.98, metrics: List | None = None, validate_path: str | None = None, test_path: str | None = None, device='cpu')[source]
Run ENDEF, including training, validation and testing. If validate_path and test_path are None, only training is performed. A usage sketch follows the parameter list.
- Parameters:
train_path (str) – path of training data
base_model (AbstractModel) – the base model of ENDEF. Default=MDFEND(‘hfl/chinese-roberta-wwm-ext’, domain_num=8)
bert (str) – bert model name, default=”hfl/chinese-roberta-wwm-ext”
max_len (int) – max length of input text, default=170
batch_size (int) – batch size, default=64
num_epochs (int) – number of epochs, default=50
lr (float) – learning rate, default=0.0005
weight_decay (float) – weight decay, default=5e-5
step_size (int) – step size of learning rate scheduler, default=100
gamma (float) – gamma of learning rate scheduler, default=0.98
metrics (List) – evaluation metrics, if None, [‘accuracy’, ‘precision’, ‘recall’, ‘f1’] is used, default=None
validate_path (str) – path of validation data, default=None
test_path (str) – path of testing data, default=None
device (str) – device to run model, default=’cpu’
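A minimal usage sketch; the JSON paths below are hypothetical placeholders for data in the format FaKnow expects, and all other arguments keep their documented defaults:

```python
from faknow.run.content_based.run_endef import run_endef

run_endef(
    train_path='data/endef/train.json',    # hypothetical path
    validate_path='data/endef/val.json',   # hypothetical path
    test_path='data/endef/test.json',      # hypothetical path
    device='cuda:0',                       # or 'cpu' (the default)
)
```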
faknow.run.content_based.run_m3fend
- faknow.run.content_based.run_m3fend.run_m3fend(dataset: str = 'ch', domain_num: int = 3, emb_dim: int = 768, mlp_dims: list = [384], batch_size: int = 64, num_workers: int = 4, max_len: int = 170, lr: float = 0.0001, dropout: float = 0.2, weight_decay: float = 5e-05, semantic_num: int = 7, emotion_num: int = 7, style_num: int = 2, lnn_dim: int = 50, early_stop: int = 3, epochs: int = 50, device: str = 'gpu', gpu: str = '', metrics: List | None = None)[source]
Train and evaluate the M3FEND model. A usage sketch follows the parameter list.
- Parameters:
dataset (str, optional) – Dataset name. Defaults to ‘ch’.
domain_num (int, optional) – Number of domains. Defaults to 3.
emb_dim (int, optional) – Dimension of the embeddings. Defaults to 768.
mlp_dims (list, optional) – List of dimensions for the MLP layers. Defaults to [384].
batch_size (int, optional) – Batch size. Defaults to 64.
num_workers (int, optional) – Number of workers for data loading. Defaults to 4.
max_len (int, optional) – Maximum sequence length. Defaults to 170.
lr (float, optional) – Learning rate. Defaults to 0.0001.
dropout (float, optional) – Dropout probability. Defaults to 0.2.
weight_decay (float, optional) – Weight decay for optimization. Defaults to 0.00005.
semantic_num (int, optional) – Number of semantic categories. Defaults to 7.
emotion_num (int, optional) – Number of emotion categories. Defaults to 7.
style_num (int, optional) – Number of style categories. Defaults to 2.
lnn_dim (int, optional) – Dimension of the latent narrative space. Defaults to 50.
early_stop (int, optional) – Number of epochs for early stopping. Defaults to 3.
epochs (int, optional) – Number of training epochs. Defaults to 50.
device (str, optional) – Device to run the training on (‘cpu’ or ‘gpu’). Defaults to ‘gpu’.
gpu (str, optional) – GPU device ID. Defaults to an empty string.
metrics (List, optional) – List of evaluation metrics. Defaults to None.
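A minimal usage sketch on the Chinese dataset with the documented defaults; the GPU id is a hypothetical example value:

```python
from faknow.run.content_based.run_m3fend import run_m3fend

# train and evaluate M3FEND on the 'ch' dataset
run_m3fend(dataset='ch', domain_num=3, device='gpu', gpu='0')
```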
faknow.run.content_based.run_mdfend
- faknow.run.content_based.run_mdfend.run_mdfend(train_path: str, bert='hfl/chinese-roberta-wwm-ext', max_len=170, domain_num=9, batch_size=64, num_epochs=50, lr=0.0005, weight_decay=5e-05, step_size=100, gamma=0.98, metrics: List | None = None, validate_path: str | None = None, test_path: str | None = None, device='cpu')[source]
Run MDFEND, including training, validation and testing. If validate_path and test_path are None, only training is performed. A usage sketch follows the parameter list.
- Parameters:
train_path (str) – path of training data
bert (str) – bert model name, default=”hfl/chinese-roberta-wwm-ext”
max_len (int) – max length of input text, default=170
domain_num (int) – number of domains, default=9
batch_size (int) – batch size, default=64
num_epochs (int) – number of epochs, default=50
lr (float) – learning rate, default=0.0005
weight_decay (float) – weight decay, default=5e-5
step_size (int) – step size of learning rate scheduler, default=100
gamma (float) – gamma of learning rate scheduler, default=0.98
metrics (List) – evaluation metrics, if None, [‘accuracy’, ‘precision’, ‘recall’, ‘f1’] is used, default=None
validate_path (str) – path of validation data, default=None
test_path (str) – path of testing data, default=None
device (str) – device to run model, default=’cpu’
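A minimal usage sketch; the paths are hypothetical placeholders for a multi-domain dataset with 9 domains (e.g. Weibo21-style data):

```python
from faknow.run.content_based.run_mdfend import run_mdfend

run_mdfend(
    train_path='data/weibo21/train.json',   # hypothetical path
    validate_path='data/weibo21/val.json',  # hypothetical path
    test_path='data/weibo21/test.json',     # hypothetical path
    domain_num=9,
    device='cuda:0',                        # or 'cpu' (the default)
)
```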
faknow.run.content_based.run_textcnn
- class faknow.run.content_based.run_textcnn.TokenizerTextCNN(vocab: Dict[str, int], max_len=255, stop_words: List[str] | None = None, language='zh')[source]
Bases: object
Tokenizer for TextCNN
- __init__(vocab: Dict[str, int], max_len=255, stop_words: List[str] | None = None, language='zh') → None[source]
- Parameters:
vocab (Dict[str, int]) – vocabulary of the corpus
max_len (int) – max length of the text, default=255
stop_words (List[str]) – stop words, default=None
language (str) – language of the corpus, ‘zh’ or ‘en’, default=’zh’
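A minimal construction sketch; the toy vocabulary below is a hypothetical stand-in for one built from the training corpus:

```python
from faknow.run.content_based.run_textcnn import TokenizerTextCNN

# hypothetical toy vocabulary mapping tokens to integer ids
vocab = {'<pad>': 0, '<unk>': 1, 'fake': 2, 'news': 3}
tokenizer = TokenizerTextCNN(vocab, max_len=255, stop_words=None, language='en')
```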
- faknow.run.content_based.run_textcnn.run_textcnn(train_path: str, vocab: Dict[str, int], stop_words: List[str], word_vectors: torch.Tensor, language='zh', max_len=255, filter_num=100, kernel_sizes: List[int] | None = None, activate_func: Callable | None = torch.nn.functional.relu, dropout=0.5, freeze=False, batch_size=50, lr=0.001, num_epochs=25, metrics: List | None = None, validate_path: str | None = None, test_path: str | None = None, device='cpu') → None[source]
Run TextCNN, including training, validation and testing. If validate_path and test_path are None, only training is performed. A usage sketch follows the parameter list.
- Parameters:
train_path (str) – path of the training set
vocab (Dict[str, int]) – vocabulary of the corpus
stop_words (List[str]) – stop words
word_vectors (torch.Tensor) – word vectors
language (str) – language of the corpus, ‘zh’ or ‘en’, default=’zh’
max_len (int) – max length of the text, default=255
filter_num (int) – number of filters, default=100
kernel_sizes (List[int]) – list of kernel sizes for TextCNNLayer, if None, [3, 4, 5] is used, default=None
activate_func (Callable) – activation function for TextCNNLayer, default=relu
dropout (float) – dropout rate of the fully connected layer, default=0.5
freeze (bool) – whether to freeze weights in the word embedding layer while training, default=False
batch_size (int) – batch size, default=50
lr (float) – learning rate, default=0.001
num_epochs (int) – number of epochs, default=25
metrics (List) – evaluation metrics, if None, [‘accuracy’, ‘precision’, ‘recall’, ‘f1’] is used, default=None
validate_path (str) – path of the validation set, default=None
test_path (str) – path of the test set, default=None
device (str) – device, default=’cpu’
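A minimal usage sketch; the vocabulary, stop words, word vectors and paths are hypothetical placeholders. In practice the vocabulary comes from the corpus and the word vectors from pre-trained embeddings (e.g. word2vec or GloVe), with a matching embedding dimension:

```python
import torch

from faknow.run.content_based.run_textcnn import run_textcnn

# hypothetical toy vocabulary and randomly initialised word vectors
vocab = {'<pad>': 0, '<unk>': 1, 'fake': 2, 'news': 3}
word_vectors = torch.randn(len(vocab), 300)  # 300-dim embeddings is an assumption

run_textcnn(
    train_path='data/textcnn/train.json',  # hypothetical path
    vocab=vocab,
    stop_words=[],                         # no stop words in this toy example
    word_vectors=word_vectors,
    language='en',
    device='cpu',
)
```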