
Artificial Intelligence

The Artificial Intelligence of the Fake Shop Detector

Many fake shop operators copy parts of their online presence in order to cause as much damage as possible before they are exposed. Even when fake shops look completely different at first glance, their programmers unconsciously leave a signature in the code: how often the same libraries are used, whether recurring structural patterns appear in the code, and even the absence of certain features can significantly influence the decision-making of the artificial intelligence.

The central task of the detector models is to learn distinctive patterns and fingerprints from thousands of features of a website and their interaction, in order to classify shops and distinguish fake shops from legitimate online retailers. The three detector models in use were trained on over 6,000 archived online shops. Measured against the scientific ground truth, the AI distinguishes fraudulent from legitimate merchants with an accuracy of 97%.

Evaluation of the AI in practical use has shown that it is precisely this multitude of over 22,000 learned features that makes the detector highly robust, with an accuracy of 90.38% in the field: the individual factors are insignificant on their own, and each contributes only little to the overall assessment. To learn new attack patterns, the model must be retrained continuously.

Procedure

The following features were identified as relevant in an evaluation by AIT's Data Science and Artificial Intelligence experts in the KIRAS study KOSOH: tokenised HTML, CSS and JS text, comments as well as individual tags, tag attribute value patterns and the tree structure of the archived HTML scripts. For feature analysis and data cleansing, t-SNE (t-Distributed Stochastic Neighbor Embedding) was used.

The resulting text data is converted into numerical values in order to train the ML models and evaluate their classification ability. The implementation of the analytics is based on the TfidfVectorizer, a function that converts text data into a matrix of tf-idf (Term Frequency - Inverse Document Frequency) features. Tf-idf creates vectors that encode how important a word is to a document within a document collection (here, the codebase of the analysed websites). The tf-idf value increases proportionally with the frequency of a word in the document and is offset against the number of documents in the corpus that contain the word. This helps the model compensate for the fact that certain words are generally more frequent.
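The tf-idf step described above can be sketched with scikit-learn's TfidfVectorizer (the library function the text refers to). The token streams below are hypothetical stand-ins for the real HTML/CSS/JS tokens extracted from archived shop pages; the key effect shown is that tokens shared by every page receive low idf weights, while rare tokens score high.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical token streams extracted from archived shop pages
# (tags, attribute-value patterns, footer text), as described above.
docs = [
    "div class=container script src=jquery.min.js footer copyright",
    "div class=container script src=jquery.min.js footer copyright",
    "table align=center font color=red marquee free shipping",
]

# Split on whitespace so attribute-value patterns stay intact as tokens.
vectorizer = TfidfVectorizer(lowercase=True, token_pattern=r"[^\s]+")
X = vectorizer.fit_transform(docs)  # sparse matrix: one row per page

# One column per distinct token across the corpus.
print(X.shape)
```

The resulting matrix rows are the numerical feature vectors on which the detector models can then be trained.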

Performance of the detector models

The following machine learning methods were tested for their suitability for classification and prediction: tree-based algorithms such as random forest and boosted trees, support vector machines (SVMs with linear and radial basis function kernels), Naive Bayes, neural networks and unsupervised clustering methods. Overall, tree-based algorithms showed the best performance across all metrics, especially XGBoost (eXtreme Gradient Boosting) with adapted parameterisation. XGBoost is a fast and particularly well optimised implementation of the general boosted tree algorithm that uses a set of features to predict a target value. Furthermore, it is designed in such a way that the feature-based decisions in the decision trees remain comprehensible, which enables a good degree of explainability of the classification. As the published results on selected data sets already showed, a significantly higher true positive rate can also be achieved with the fake shop detector in practice by using an aggregated overall model composed of equally weighted parts of the three trained machine learning models: XGBoost, Random Forest and a neural network.
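The equally weighted aggregation of the three model types can be sketched as a soft-voting ensemble. This is an illustrative sketch on synthetic data, not the production detector: GradientBoostingClassifier stands in for XGBoost to keep the example dependency-free, and all names and parameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real tf-idf feature matrix.
X, y = make_classification(n_samples=600, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Equally weighted soft-voting ensemble of the three model families
# named in the text (boosted trees, random forest, neural network).
ensemble = VotingClassifier(
    estimators=[
        ("boosted_trees", GradientBoostingClassifier(random_state=0)),
        ("random_forest", RandomForestClassifier(random_state=0)),
        ("neural_net", MLPClassifier(max_iter=500, random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)
ensemble.fit(X_train, y_train)
score = ensemble.score(X_test, y_test)
print(score)
```

Soft voting averages the three models' class probabilities, so a shop is flagged when the combined evidence is strong even if no single model is fully confident.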

Below we present the Receiver Operating Characteristic (ROC) curves, which contrast (1) the true-positive rate (TPR) with (2) the false-positive rate (FPR), and (3) the t-SNE embedding of each model, which allows high-dimensional data and its intrinsic clusters to be visualised. The true-positive rate measures the percentage of actual positives that are correctly identified. The following terms are used:

  • True positives: a shop is correctly classified as fake by the model.
  • False positives: a shop is incorrectly classified as fake by the model.
  • True negatives: a shop is correctly classified as legitimate by the model.
  • False negatives: a shop is incorrectly classified as legitimate by the model.
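These four counts determine the TPR and FPR plotted in the ROC curves. A minimal sketch with hypothetical labels (1 = fake shop, 0 = legitimate shop):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth labels and model predictions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)  # true-positive rate: share of fake shops caught
fpr = fp / (fp + tn)  # false-positive rate: share of legitimate shops flagged
print(tpr, fpr)
```

Varying the model's decision threshold moves both rates; the ROC curve traces this trade-off across all thresholds.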


Results of the trained MAL2 single detection models

Model          Accuracy   Precision   Recall   F1-Score
Random Forest  95%        97%         94%      96%
Neural Net     94%        99%         90%      94%
XGBoost        97%        97%         97%      97%

Cybercrime prevention tools

The Expert Analysis Dashboard offers experts a direct way to interact with the trained AI models. The integration of the explainability tools Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) provides detailed insights into the relationships learned by the model and into the influence and weighting of individual features in the predictions made.
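The core idea behind such explanations, measuring how much each feature influences a prediction, can be illustrated without the LIME or SHAP libraries using scikit-learn's model-agnostic permutation importance. This is a simpler global stand-in for the local, additive explanations LIME and SHAP produce; the data and model here are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; the real models consume tf-idf features.
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

LIME and SHAP refine this idea to the level of a single prediction, which is what lets dashboard users see why one particular shop was flagged.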