Parameters of the “Neuro Future” MT5 indicator with an in-depth description – My Trading – 5 September 2025


Contents:

I. Neuro Future indicator parameters, default values.

II. Detailed description of parameters.

III. Technical description of the types of networks used.

I. Neuro Future indicator parameters, default values:

  1. —— BASIC SETTINGS ——

  • Prediction number : 6 – Predicts N-1 bars for T, TBin, and N bars for TDif.
  • Network filename prefix : “net_” – Network file name prefix
  • Filename postfix type : T_ACT_SYMBOL_TF – File name postfix type (symbol + timeframe)
  • Use common folder for files : true – Use shared folder for files
  • Auto color by type : true – Automatic color by type
  • Manual color : clrGray – Manual color selection (if AutoColor is false)
  • —— NETWORK STRUCTURE ——

    • Network type : T1 – Network type
    • Activation preset : Auto – Preset of activation functions
    • Loss function type : MSE – Loss function
    • Input layer size : 40 – Input layer size
    • Hidden layer 1 size : 27 – Size of the first hidden layer
    • Hidden layer 2 size : 12 – Size of the second hidden layer (0 = disabled)
    • Output layer size : 6 – Output layer size (from 2; from 1 for TDif)
    • Network comment : “” – Comment on the network
  • —— MANUAL ACTIVATION SETTINGS ——

    • Hidden layer activation : Tanh – Hidden layer activation
    • Output layer activation : Tanh – Output layer activation
    • Input scaling : S11 – Input scaling
    • Output scaling : S11 – Output scaling
    • Gradient limiting : false – Gradient limit
    • Max gradient value : 0.1 – Maximum gradient value
  • —— TRAINING PARAMETERS ——

    • Start learning after loading : false – Start training after loading
    • Use training date : false – Use training date
    • Training date : DATE – Date of training
    • Max training epochs : 500 – Max number of training epochs
    • Max training samples : 100 – Max number of samples for training
    • Bars between samples : 1 – Minimum bars between samples
    • Use adaptive learning rate : true – Use adaptive learning rate
    • Learning rate : 0.1 – Learning speed
    • Target error : 0.0001 – Target error
  • —— VALIDATION SETTINGS ——

    • Validation period : 0 – Validation period (bars)
    • Use validation criteria : false – Use validation criteria for early stopping
    • Validation mode : Profit – Validation mode
    • Validate only selected bar : false – Validate only ‘Prediction number’
    • Validation threshold : 0.1 – Prediction threshold for validation
  • —— EARLY STOP ——

    • Early stopping patience : 1000 – Patience for early stopping
    • Min epochs for early stop : 10 – Min. epochs before early stopping
    • Save best weights : false – Keep best weights
  • —— ADAPTIVE LEARNING RATE ——

    • LR decrease factor : 0.5 – LR reduction factor
    • LR increase factor : 1.01 – LR increase factor
    • Error difference for LR decrease : 1.01 – Error difference for LR reduction
    • Minimum learning rate : 0.00001 – Minimum learning rate
    • Maximum learning rate : 10000 – Maximum learning rate
  • —— NOTIFICATIONS ——

    • Enable alerts : true – Enable notifications
    • Alert threshold : 0.1 – Alert threshold
    • Push notifications : false – Push notifications
    • Email alerts : false – Email notifications
    • Sound alerts : false – Sound notifications
  • —— LOGGING ——

    • Training logs : true – Training logs
    • Log frequency reduction : 10 – Logging frequency
    • Example logs : false – Example logs
    • Result logs : false – Result logs
    • Load logs : false – Loading logs
    • Save logs : false – Save logs
  • —— OTHER SETTINGS ——

    • General output scaling : true – General output scaling
    • Fix indicator min/max : true – Fix min/max of indicator window
    • Max bars in window : 5000 – Max bars in indicator window

    II. Detailed description of parameters:

    Prediction number (from 2 (1 for TDif) to Out):
    Predicts N-1 bars for T, TBin, and N bars for TDif. For most network types (T1, T2, T3, T4, T3Bin, T4Bin), the minimum value is 2. For difference types (T1Dif, T2Dif), the minimum value is 1.

    Network filename prefix:
    Prefix (initial part) of the file name that stores the weights of the trained neural network. Allows you to use different networks for different instruments or strategies while keeping names clear.

    Filename postfix type:
    The type of postfix (final part) of the file name. Determines what information will be automatically added to the file name to make it unique.

    • NO: No postfix is added.

    • SYMBOL_TF: Adds the symbol and timeframe (e.g., net_EURUSDH1).

    • T_SYMBOL_TF: Adds the network type, symbol, and timeframe (e.g., net_T1_EURUSDH1).

    • T_ACT_SYMBOL_TF: Adds the network type, activation type, symbol, and timeframe (e.g., net_T1_Tanh_EURUSDH1).
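
    As an illustration, the postfix logic above can be sketched like this (a hypothetical helper — the indicator's actual file-naming code is not published):

```python
def network_filename(prefix, postfix_type, net_type, activation, symbol, timeframe):
    """Build a weights-file name from the chosen postfix type (sketch)."""
    postfix = {
        "NO": "",
        "SYMBOL_TF": f"{symbol}{timeframe}",
        "T_SYMBOL_TF": f"{net_type}_{symbol}{timeframe}",
        "T_ACT_SYMBOL_TF": f"{net_type}_{activation}_{symbol}{timeframe}",
    }[postfix_type]
    return prefix + postfix

print(network_filename("net_", "T_ACT_SYMBOL_TF", "T1", "Tanh", "EURUSD", "H1"))
# net_T1_Tanh_EURUSDH1
```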

    Use common folder for files:
    Specifies the location of the files with the neural network weights. If the value is true, files will be saved to and loaded from the common folder of all terminals (Terminal\Common\Files). This is convenient for accessing the same network from different terminals. If false, a specific terminal's folder (MQL5\Files) is used.

    Auto color by type:
    Enables automatic selection of the indicator drawing color depending on the type of neural network forecast. If the value is true, the Manual color parameter is ignored.

    Manual color (if AutoColor is false):
    Sets a custom color for drawing the indicator. Active only if the Auto color by type parameter is disabled (false).

    —— NETWORK STRUCTURE ——

    Network type:
    The type of neural network architecture and its purpose. Each type is optimized for a specific market and analysis style. (Note: see III. Technical description of the types of networks used.)

    • T1 (Normalized Analysis): Forex (majors), indices, metals.

    • T1Dif (Difference Analysis): Cryptocurrencies, commodities, forex (minors).

    • T2 (Context-Aware): Forex (crosses), metals, indices.

    • T2Dif (Context-Aware Difference): Forex (crosses), metals, indices.

    • T3 (Trend Detection): Forex (majors), commodities, indices.

    • T3Bin (Binary Trend): All markets (training).

    • T4 (Momentum Detection): Cryptocurrencies, commodities, metals.

    • T4Bin (Binary Momentum): Cryptocurrencies, US stocks, commodities.

    Activation preset:
    A predefined set of activation functions and parameters for the network layers. Allows you to quickly select a proven configuration.

    • Auto: Automatic selection depending on the chosen network type (Network type).

    • Manual: Manually adjust the activation and scaling functions.

    • // Basic & Recommended:

    • Standard: Tanh-Tanh[-1,1] (T1,T2,T3,T2Dif) Gradient off

    • Classic: Sigm-Sigm[0,1] (T3,T3Bin,T4Bin) Gradient off

    • Difference: LRelu-Linear[-1,1] (T1Dif,T4,T2Dif) Gradient 0.1

    • BinaryMomentum: Relu-Sigm[-1,1]-[0,1] (T4Bin,T3Bin) Grad 0.08

    • // Advanced:

    • Asymmetric: Tanh-Tanh[-1,1]-[0,1] (T1,T2,T3) Grad off

    • ReLUNetwork: Relu-Relu[-1,1] (T4,T4Bin) Grad 0.1

    • Regression: Tanh-Linear[-1,1] (T1Dif,T2,T2Dif,T4) Grad off

    • MixedAsymmetric: Tanh-Sigm[-1,1]-[0,1] (T2,T3) Grad off

    • // Other:

    • Alternative: Tanh-Tanh[0,1] (T1,T2,T3) Grad off

    • ReLURegression: Relu-Linear[-1,1] (T1Dif,T4,T2Dif) Grad 0.12

    • LeakyReLU: LRelu-LRelu[-1,1] (T4,T4Bin,T1Dif) Grad 0.1

    • FullyLinear: Linear-Linear[-1,1] (Exp,T1Dif,T4) Grad 0.15

    • // Experimental:

    • Hybrid: Sigm-Tanh[0,1]-[-1,1] (T2,T3,T2Dif) Grad off

    • ReLUSigmoid: Relu-Sigmoid[0,1] (T4Bin,T3Bin) Grad 0.1

    • ComboMomentum: Relu-Tanh[-1,1] (T4,T1Dif) Grad 0.1

    • Experimental: Sigm-Linear[0,1]-[-1,1] (Any, untested) Grad off

    • ComboLeaky: LRelu-Tanh[-1,1] (T4,T1Dif,T2Dif) Grad 0.1

    • ComboMixed: Tanh-Sigm[-1,1]-[0,1] (T2,T3,T3Bin) Grad off

    Loss function type:
    The type of error function (loss function) that the neural network tries to minimize during the learning process.

    • MSE (Mean Squared Error): Mean squared error. Sensitive to outliers.

    • MAE (Mean Absolute Error): Mean absolute error. Less sensitive to outliers.

    • L_HUBER: Huber loss. A compromise between MSE and MAE.

    • BinCE (Binary Cross-Entropy): Binary cross-entropy. Designed for binary classification (output in the range [0,1]).
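
    The four losses can be sketched in a few lines. These are the textbook formulas; the Huber delta of 1.0 and the epsilon clip in the cross-entropy are assumptions, since the indicator's internals are not published:

```python
import math

def mse(y, t):
    # mean squared error: sensitive to outliers
    return sum((a - b) ** 2 for a, b in zip(y, t)) / len(y)

def mae(y, t):
    # mean absolute error: less sensitive to outliers
    return sum(abs(a - b) for a, b in zip(y, t)) / len(y)

def huber(y, t, delta=1.0):
    # quadratic near zero, linear in the tails: a compromise between MSE and MAE
    def h(e):
        return 0.5 * e * e if abs(e) <= delta else delta * (abs(e) - 0.5 * delta)
    return sum(h(a - b) for a, b in zip(y, t)) / len(y)

def bin_ce(y, t, eps=1e-12):
    # binary cross-entropy: targets t in {0,1}, predictions y in (0,1)
    return -sum(b * math.log(max(a, eps)) + (1 - b) * math.log(max(1 - a, eps))
                for a, b in zip(y, t)) / len(y)
```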

    Input layer size:
    The size of the input layer of the neural network. Determines the number of neurons that receive input data from the market.

    Hidden layer 1 size:
    The size of the first hidden layer of the neural network.

    Hidden layer 2 size (0 = disabled):
    Size of the second hidden layer. If set to 0, this layer is disabled and the network becomes a three-layer (input-hidden-output) network.

    Output layer size (from 2; from 1 for TDif):
    Output layer size. Determines the dimensionality of the neural network forecast. Usually 2 or more. For difference network types (T1Dif, T2Dif) it can be from 1.

    Network comment:
    An arbitrary text comment that will be saved together with the network. Can be used to make notes about the training goal, data, etc.

    —— MANUAL ACTIVATION SETTINGS ——

    Hidden layer activation:
    Activation function for all hidden layers of the network. Active only when the Manual preset is selected.

    • Sigm: Sigmoid. Output (0,1).

    • Relu: Rectified linear unit. Output max(0,x).

    • LRelu: Leaky ReLU. Allows small negative values to pass through.

    • Linear: Linear activation. Output x.

    • Tanh: Hyperbolic tangent. Output (-1,1).
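
    The five activations above are standard; a minimal sketch (the leaky-ReLU slope of 0.01 is an assumption, as the indicator does not document it):

```python
import math

def sigm(x):
    return 1.0 / (1.0 + math.exp(-x))   # output in (0,1)

def relu(x):
    return max(0.0, x)                  # output max(0,x)

def lrelu(x, a=0.01):
    return x if x > 0 else a * x        # small negative values pass through

def linear(x):
    return x                            # identity

def tanh(x):
    return math.tanh(x)                 # output in (-1,1)
```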

    Output layer activation:
    Activation function for the output layer of the network. Active only when the Manual preset is selected.

    Input scaling:
    A method for scaling input data before feeding it into the network.

    Output scaling:
    A method for scaling network output.
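
    Assuming S11 denotes min-max scaling into [-1,1] (the preset list also shows a [0,1] variant), the idea can be sketched as:

```python
def scale_s11(window):
    """Min-max scale a price window to [-1, 1].
    Sketch under the assumption that 'S11' means 'scale to [-1,1]'."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return [0.0] * len(window)   # flat window: map everything to the center
    return [2.0 * (x - lo) / (hi - lo) - 1.0 for x in window]
```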

    Gradient limiting:
    Enables gradient clipping. Helps combat the “exploding gradients” problem during training.

    Max gradient value:
    Maximum absolute value of the gradient. Used if Gradient limiting is enabled.
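
    Gradient clipping by absolute value reduces, in one line, to clamping each gradient into [-max, max] (a generic sketch, not the indicator's code):

```python
def clip_gradient(grad, max_grad=0.1):
    """Clamp a gradient to [-max_grad, max_grad] ('Max gradient value' sketch)."""
    return max(-max_grad, min(max_grad, grad))
```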

    —— TRAINING PARAMETERS ——

    Start learning after loading:
    If enabled (true), the training process will start immediately after the indicator is loaded onto the chart.

    Use training date:
    Whether to use a specific date for the training data cutoff. If off (false), training will end at the current date (minus the output layer).

    Training date:
    The date on which the training dataset ends. Any bars after this date will not be used for training.

    Max training epochs:
    Maximum number of training epochs. One epoch is a full pass through the entire training dataset.

    Max training samples:
    The maximum number of examples (samples) that will be used for training from the entire available history. For the T4 and T4Bin types, only the examples that satisfy the conditions will be selected from this number.

    Bars between samples:
    The minimum number of bars between two consecutive samples in a data set. Helps increase the diversity of the data.
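
    Taken together, Max training samples and Bars between samples bound how training examples are drawn from history. A possible selection loop (assumed logic — the indicator's actual sampling code is not published):

```python
def sample_start_indices(total_bars, max_samples=100, bars_between=1):
    """Pick training-sample start bars with a minimum stride between them
    (sketch of 'Max training samples' + 'Bars between samples')."""
    indices, bar = [], 0
    while bar < total_bars and len(indices) < max_samples:
        indices.append(bar)
        bar += bars_between   # enforce the minimum spacing between samples
    return indices
```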

    Use adaptive learning rate:
    Enables adaptive learning rate changes. The algorithm will automatically increase or decrease the learning rate during training.

    Learning rate:
    Initial learning rate. Determines the “step” with which the neural network adjusts its weights during training.

    Target error:
    Target error value. If the error on the training sample reaches this value, training stops.

    —— VALIDATION SETTINGS ——

    Validation period (bars):
    The validation period size in bars. Data for this period is not used in training; it is used to check the quality of the network. If 0, validation is disabled.

    Use validation criteria for early stopping (and saving best weights):
    Whether to use a criterion on the validation set to stop training early and keep the best weights.

    Validation mode:
    The criterion by which the quality of a model on the validation sample is determined.

    • Binar: Binary accuracy of the direction forecast.

    • Profit: A hypothetical profit calculated as the difference between bars in points of the current instrument (excluding spread, commissions, and other market conditions).

    • NetError: Network error (MSE, MAE, etc.) on the validation sample.

    Validate only ‘Prediction number’ (for Binar, Profit):
    If enabled, validation by the Binar and Profit criteria will be performed only for the bar specified in Prediction number.

    Validation prediction threshold:
    Threshold value for the forecast. Used in the Binar and Profit modes to determine whether a prediction is considered positive or negative.
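
    A possible reading of the Binar mode with the threshold (assumed logic: forecasts below the threshold magnitude are treated as "no signal" and skipped):

```python
def binary_accuracy(predictions, actual_moves, threshold=0.1):
    """Direction-accuracy sketch for the 'Binar' validation mode."""
    hits = total = 0
    for p, a in zip(predictions, actual_moves):
        if abs(p) < threshold:      # below the threshold: no signal, skip
            continue
        total += 1
        if (p > 0) == (a > 0):      # predicted and actual direction agree
            hits += 1
    return hits / total if total else 0.0
```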

    —— EARLY STOPPING ——

    Early stopping patience:
    The number of epochs during which the error on the validation set may fail to improve before early stopping is triggered.

    Min epochs before early stopping:
    The minimum number of training epochs that must be completed before the early stopping mechanism kicks in.

    Save best weights:
    Whether to keep the weights of the best performing model rather than the last weights after training.
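
    The patience mechanism can be sketched as follows (assumed logic — the indicator's internals are not published):

```python
def should_stop_early(val_errors, patience=1000, min_epochs=10):
    """Stop once the best validation error has not improved for `patience`
    epochs, but never before `min_epochs` epochs have run (sketch)."""
    epoch = len(val_errors)
    if epoch < min_epochs:
        return False
    best_epoch = min(range(epoch), key=lambda i: val_errors[i])
    return epoch - 1 - best_epoch >= patience
```

    With Save best weights enabled, the weights from `best_epoch` would be kept instead of those from the final epoch.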

    —— ADAPTIVE LEARNING RATE ——

    LR decrease factor:
    A multiplier used to reduce the learning rate if the error has stopped decreasing significantly.

    LR increase factor:
    A multiplier used to increase the learning rate if the error is steadily decreasing.

    Error difference for LR decrease:
    The threshold for error growth that initiates a decrease in the learning rate.

    Minimum learning rate:
    The lower bound on the learning rate, below which it cannot be reduced.

    Maximum learning rate:
    An upper limit on the learning rate, beyond which it cannot be increased.
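
    One way these five parameters could interact, using the defaults listed above (assumed logic: shrink the rate when the error grows past the threshold ratio, otherwise grow it, then clamp):

```python
def adapt_learning_rate(lr, prev_error, error,
                        decrease=0.5, increase=1.01, worsen_ratio=1.01,
                        lr_min=1e-5, lr_max=1e4):
    """Adaptive learning-rate update sketch with the indicator's default values."""
    if error > prev_error * worsen_ratio:   # error got noticeably worse
        lr *= decrease
    else:                                   # error flat or improving
        lr *= increase
    return max(lr_min, min(lr_max, lr))    # clamp to [Minimum LR, Maximum LR]
```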

    —— NOTIFICATIONS ——

    Enable alerts:
    Enables all notifications.

    Alert threshold:
    The forecast threshold value that, if exceeded, generates an alert.

    Push notifications:
    Send push notifications to the MetaTrader mobile application.

    Email alerts:
    Send email alerts.

    Sound alerts:
    Play a sound notification.

    —— LOGGING ——

    Training logs:
    Enables logging of the training process.

    Log frequency reduction:
    The factor by which the logging frequency is reduced. For example, a value of 10 means that logging occurs every tenth epoch.

    Example logs:
    Keep detailed logs for each training example.

    Result logs:
    Keep logs of the final results.

    Load logs:
    Keep logs of the process of loading the network from a file.

    Save logs:
    Keep logs of the process of saving the network to a file.

    —— OTHER INDICATOR SETTINGS ——

    General output scaling:
    Applies general scaling to the output data displayed in the indicator window. The Alert threshold is also applied on this general scale.

    Fix indicator min/max:
    Fixes the minimum and maximum values of the indicator scale to improve visualization.

    Max bars in indicator window:
    The maximum number of history bars for which the indicator will calculate and display its values.

    III. Technical description of the types of networks used:

    1. T1 – Normalized analysis:
    • Input: Normalized window of L1 opening prices.
    • Output: Normalized window of L4 predicted opening prices.
    • The gist: The neural network learns to directly predict future prices based on historical ones.
    2. T2 – Contextual forecast:
    • Input: Normalized window of L1 opening prices.
    • Output: Predicted opening prices normalized to the range of the input data.
    • The point: The forecast is scaled relative to current volatility, which can give more accurate results in flat conditions.
    3. T1Dif / T2Dif – Forecast of changes:
    • Input: Differences between future prices and the current opening price, normalized to preserve the sign.
    • Output (T1Dif): Predicted price differences, normalized to preserve the sign.
    • Output (T2Dif): Predicted price differences normalized by the scale of the input differences.
    • Essence: More information about direction; the sign of entries and exits is preserved. T1Dif is for general changes, T2Dif for more precise changes.
    4. T3 – Trend filter:
    • Input: Normalized opening price window.
    • Output: If all L4 future bars are above (or below) the current price, their values are normalized and fed to the output. Otherwise, the output values are filled with 0 (on the [-1,1] scale).
    • The gist: The network learns to detect moments when the movement is stable and unidirectional.
    5. T4 – Momentum filter:
    • Input: Normalized opening price window.
    • Output: Similar to T3, but learning occurs only on clearly defined moves. Examples that do not satisfy the conditions are skipped and do not participate in learning.
    • The gist: More rigorous selection than T3. The network focuses on finding strong momentum moves.
    6. T3Bin / T4Bin – Binary classification:
    • Input: Similar to T3 and T4.
    • Output: If the trend/momentum condition is met, all future bars that are higher are assigned a value of 1, and those that are lower a value of 0 (or -1.0 on the [-1,1] scale).
    • The idea: Simplify the problem to binary classification to obtain clearer signals.
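
    To make the binary variants concrete, here is a sketch of a T3Bin-style target on the [-1,1] scale described above (assumed logic — the indicator's code is not published):

```python
def t3bin_target(current_price, future_prices):
    """Binary trend target sketch: if all future bars lie on one side of the
    current price (the trend condition), label higher bars +1 and lower bars
    -1; otherwise emit zeros (no stable trend)."""
    above = all(p > current_price for p in future_prices)
    below = all(p < current_price for p in future_prices)
    if not (above or below):
        return [0.0] * len(future_prices)   # condition not met: zero output
    return [1.0 if p > current_price else -1.0 for p in future_prices]
```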
