A&A, Volume 643, November 2020
Article Number A153
Number of pages 10
Section: The Sun and the Heliosphere
DOI: https://doi.org/10.1051/0004-6361/202038921
Published online: 18 November 2020

© ESO 2020

1. Introduction

Based on the Gauss linking number, magnetic helicity is a measure of the level of entanglement of magnetic field lines within a magnetized plasma (Moffatt 1969). It is strictly conserved within the ideal magneto-hydrodynamic paradigm (Woltjer 1958), and its dissipation remains relatively weak in non-ideal magneto-hydrodynamics (Taylor 1974), even in the presence of strong non-ideal effects (Pariat et al. 2015). In the context of solar eruptivity, this conservation property can explain the existence of plasma ejecta as a means to prevent the indefinite accumulation of helicity within the solar atmosphere (Rust 1994; Low 1996).

The basic formulation of magnetic helicity is not gauge invariant for magnetically open systems, i.e., it is not directly applicable to studies of the solar corona, since magnetic flux continuously penetrates the coronal volume through the solar surface. To circumvent this limitation, Berger & Field (1984) and Finn & Antonsen (1984) defined the so-called relative magnetic helicity as

H𝒱 = ∫𝒱 (A + A0) ⋅ (B − B0) dV, (1)

a gauge-invariant quantity related to the magnetic helicity within a volume, 𝒱, bounded by a surface, ∂𝒱. Here B and B0 are the 3D magnetic field under study and a reference field, respectively, while A and A0 are the vector potentials satisfying B = ∇ × A and B0 = ∇ × A0, respectively.

As its name implies, the relative helicity expresses the helicity of a magnetic field with respect to a reference field, B0, which shares the normal component of the studied field B on ∂𝒱. Generally, B0 is chosen to be a potential (current-free) field (see Prior & Yeates 2014 for an alternative choice). For practical cases, Valori et al. (2012) demonstrated the validity and physical meaningfulness of computing (and tracking in time) H𝒱 by evaluating Eq. (1) in order to characterize (the evolution of) a magnetic system.

Berger (1999) decomposed H𝒱 into two separately gauge-invariant quantities

HJ = ∫𝒱 (A − A0) ⋅ (B − B0) dV, (2)

HPJ = 2 ∫𝒱 A0 ⋅ (B − B0) dV, (3)

so that H𝒱 = HJ + HPJ. Here HJ is the magnetic helicity in the volume associated with the electric current, and HPJ is the helicity associated with the component of the field that is threading 𝒱. Since B0 is designed to share the normal component with B on ∂𝒱, H𝒱, HJ, and HPJ are all independently gauge invariant. For a pilot study of the time evolution of these quantities in the solar context see Moraitis et al. (2014). For completeness we note that, unlike H𝒱 in Eq. (1), HJ and HPJ are not conserved quantities, as a gauge-invariant transfer term between them dominates their dynamics (Linan et al. 2018).
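For illustration, the following minimal numpy sketch (not the actual FV code of Thalmann et al. 2011 used later in this work) evaluates Eqs. (1)–(3) by simple Riemann summation on a uniform grid, assuming the vector potentials A and A0 have already been computed in a common gauge; the function and argument names are illustrative only.

```python
import numpy as np

def relative_helicities(A, A0, B, B0, dV):
    """Discrete evaluation of Eqs. (1)-(3) on a uniform grid.

    A, A0, B, B0 : arrays of shape (3, nx, ny, nz) holding the vector
    potentials and magnetic fields (reference quantities carry the index 0).
    dV : volume of one grid cell (dx * dy * dz).
    """
    dB = B - B0                          # B and B0 share the normal component on the boundary
    H_V = np.sum((A + A0) * dB) * dV     # Eq. (1): relative helicity
    H_J = np.sum((A - A0) * dB) * dV     # Eq. (2): helicity of the current-carrying field
    H_PJ = 2.0 * np.sum(A0 * dB) * dV    # Eq. (3): volume-threading helicity
    return H_V, H_J, H_PJ                # H_V = H_J + H_PJ up to round-off
```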

In particular, HJ in Eq. (2) attracts attention as it provides additional information compared to H𝒱. More precisely, the so-called helicity ratio, |HJ|/|H𝒱|, has emerged as a promising candidate for characterizing the eruptive potential of the underlying magnetic structure. This was noted not only in numerical simulations (e.g., Pariat et al. 2017; Zuccarello et al. 2018; Linan et al. 2018), but also in applications to solar observations (James et al. 2018; Moraitis et al. 2019; Thalmann et al. 2019a).

Magnetic helicity studies of solar observations are often performed based on nonlinear force-free (NLFF) coronal magnetic field extrapolations, i.e., the numerical solution of

(∇ × B) × B = 0, (4)

∇ ⋅ B = 0, (5)

where B represents the 3D coronal magnetic field, subject to the measured surface magnetic field as a boundary condition (for a review see, e.g., Wiegelmann & Sakurai 2012). For instance, James et al. (2018) used a magneto-frictional method to solve Eqs. (4) and (5), while the works of Moraitis et al. (2019) and Thalmann et al. (2019a) were based on an optimization approach. Whatever method is used, unavoidable numerical errors prevent the exact fulfillment of Eqs. (4) and (5). In particular, the level of solenoidality of the obtained NLFF solution is highly important for relative helicity computations (Valori et al. 2013; see Sect. 2.3 for details).

Recently, Moraitis et al. (2019, hereafter M19) studied the eruptivity of NOAA 12673, which produced the two strongest flares of Solar Cycle 24 on 6 September 2017. A confined X2.2 flare started at 08:57 UT (SOL2017-09-06T08:57), and an eruptive X9.3 flare followed three hours later (start time 11:53 UT; SOL2017-09-06T11:53). Their study used a ten-hour time interval centered on the X2.2 flare of 6 September that also includes the X9.3 flare. Their analysis used a mix of NLFF solutions, based on the optimization method of Wiegelmann et al. (2012, hereafter W12), a specialization of the method originally described in Wiegelmann & Inhester (2010) for the application to Solar Dynamics Observatory Helioseismic and Magnetic Imager data, and on its original predecessor (Wiegelmann 2004, hereafter W04) using standard (free) model parameter settings. They argued that employing NLFF models based on different code versions optimizes the final NLFF time series used to compute the coronal relative helicity, when only those that perform best in terms of solenoidality are retained. Thus, at each time instant within the studied time series, they checked the solenoidal quality of the W04 and W12 solutions, and used the particular NLFF solution of highest solenoidal quality, i.e., that with the smallest value of ∇ ⋅ B. Time instances where none of the employed models provided an acceptably small level of solenoidality were discarded entirely. The time evolution of |HJ|/|H𝒱| based on this pre-selection of NLFF models depicts an increase in |HJ|/|H𝒱| to values > 0.15 prior to the X-class flares, as well as corresponding decreases in the course of the flares (see their Fig. 7).

DeRosa et al. (2015) delivered, albeit as a secondary result, the first comparative analysis of relative helicity computations based on different NLFF methods, picturing consistent relative helicity estimates as a challenging yet achievable task. In that work the W12-based NLFF solutions were found to deliver distinctly different values for the relative helicity in comparison to those deduced from other NLFF methods (a magneto-frictional and three Grad-Rubin methods), which was explained by the insufficient solenoidal quality of the NLFF model. This issue was found to be linked to the use of standard choices of (free) model parameters, as suggested in W12, which resulted in NLFF solutions with non-solenoidal errors on the order of the inherent free magnetic energy (see their Fig. 7b). It was also shown, however, that an alternative W12-based NLFF solution based on an adjusted set of model parameters resulted in a significant improvement of the solenoidal quality, and hence of the corresponding relative helicity computation (see the appendix of their work).

The above works suggest that there is great potential for improving the accuracy of relative helicity estimates through different model parameter choices and/or particular versions of the optimization approach. Since it has been shown that the W12 method delivers NLFF solutions with a higher degree of force-freeness and a lower level of solenoidal error in comparison to the W04 method (see Table 2 in Wiegelmann et al. 2012), we restrict ourselves to the W12 method and employ a number of NLFF solutions based on different choices for the adjustable (free) model parameters. In order to make our results comparable to the previous study of M19, we use the same vector magnetic field data as input to the NLFF modeling. We assess the resulting spread of the relative helicity and related quantities, most importantly that of |HJ|/|H𝒱|, for this particular AR and time interval in relation to the particular NLFF model parameters used. On that basis, we aim to provide a recipe for a realistic computation of the relative helicity, including an appropriate estimation of the related uncertainties.

2. Data and methods

2.1. Vector magnetic field data

In our study we use the data set originally designed to study the eruptivity of NOAA AR 12673 in M19, based on the 12-min cadence HMI.SHARP_720S data product, constructed from polarization measurements of the Solar Dynamics Observatory Helioseismic and Magnetic Imager (SDO/HMI; Scherrer et al. 2012). A field of view covering 320 × 320 pixels was extracted from the full-disk HMI.SHARP_720S data, centered at the Carrington coordinates (118.4°, −9.2°). A cylindrical equal area (CEA) projection was applied to the chosen subfield of the HMI.SHARP_720S vector field data, following the description in Sun (2013). The resulting CEA-remapped field vector (Br, Bθ, Bϕ) was binned by a factor of two to a resolution of 0.06° (∼720 km at disk center). In this way a total of 17 CEA vector magnetic field maps were constructed, covering the time span 2017-Sep-06 04:00 UT–13:00 UT, around two major flares hosted by NOAA 12673 (a confined X2.2 flare that peaked at 09:10 UT, and an eruptive X9.3 flare that peaked at 12:02 UT).
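For readers who wish to retrieve comparable data, the sketch below queries the JSOC-provided, CEA-remapped SHARP series with the drms Python client. We note that the present study instead performs its own remapping of the HMI.SHARP_720S data following Sun (2013), and that the HARP number 7115 used here for NOAA AR 12673 is an assumption not taken from the text.

```python
import drms

# Quick-look retrieval of the JSOC-provided, CEA-remapped SHARP series; this
# study instead remaps HMI.SHARP_720S itself, following Sun (2013).
# HARP number 7115 for NOAA AR 12673 is an assumption, not taken from the text.
client = drms.Client()
ds = 'hmi.sharp_cea_720s[7115][2017.09.06_04:00_TAI/9h@12m]'
keys, segments = client.query(ds, key='T_REC, NOAA_AR, CRLN_OBS, CRLT_OBS',
                              seg='Br, Bt, Bp')
print(keys['T_REC'])                                   # the 12-min cadence records
print('http://jsoc.stanford.edu' + segments['Br'][0])  # URL of the first Br FITS segment
```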

2.2. NLFF modeling

Based on the vector magnetic field data described in Sect. 2.1, we compute a series of NLFF equilibria for each of the 17 time instances. NLFF modeling involves at least one computational task (optimization; see Sect. 2.2.2), and at most two (a preprocessing step, see Sect. 2.2.1, followed by the subsequent optimization). Preprocessing is necessary because the vector magnetic field data deduced from polarization measurements at photospheric levels do not conform to the force-free criteria (Aly 1989).

2.2.1. Preprocessing

During preprocessing, the measured 2D vector magnetic field data is modified in order to obtain a vector field that is (more) force-free consistent. The preprocessing method of Wiegelmann et al. (2006) minimizes a function of the form

Lpp = μ1 Lpp, 1 + μ2 Lpp, 2 + μ3 Lpp, 3 + μ4 Lpp, 4, (6)

where the individual contributions Lpp, i are summed over all grid points of the 2D photospheric grid, and are weighted individually by the corresponding pre-factors μi. In discretized form, Lpp, 1 is the square of the total magnetic force, and Lpp, 2 the square of the total magnetic torque; Lpp, 3 measures the difference between the preprocessed and original input field, and Lpp, 4 reduces small-scale variations in the measured field (applying a Laplacian smoothing).
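A schematic numpy sketch of how the four terms might be assembled, following their verbal description above (net force and net torque after Aly 1989, distance to the input data, and Laplacian smoothing), is given below; the exact normalizations of Wiegelmann et al. (2006) are not reproduced.

```python
import numpy as np
from scipy.ndimage import laplace

def preprocessing_functional(Bx, By, Bz, Bx0, By0, Bz0,
                             mu=(1.0, 1.0, 1e-3, 1e-2)):
    """Schematic form of Eq. (6); the normalizations of Wiegelmann et al.
    (2006) are omitted. (Bx0, By0, Bz0) is the original (measured) magnetogram,
    (Bx, By, Bz) the trial preprocessed field."""
    ny, nx = Bz.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))

    # Lpp,1: square of the net magnetic force (Aly 1989 surface integrals)
    L1 = (np.sum(Bx * Bz) ** 2 + np.sum(By * Bz) ** 2
          + np.sum(Bz ** 2 - Bx ** 2 - By ** 2) ** 2)
    # Lpp,2: square of the net magnetic torque
    L2 = (np.sum(x * (Bz ** 2 - Bx ** 2 - By ** 2)) ** 2
          + np.sum(y * (Bz ** 2 - Bx ** 2 - By ** 2)) ** 2
          + np.sum(y * Bx * Bz - x * By * Bz) ** 2)
    # Lpp,3: distance of the preprocessed field from the measured data
    L3 = np.sum((Bx - Bx0) ** 2 + (By - By0) ** 2 + (Bz - Bz0) ** 2)
    # Lpp,4: smoothness (Laplacian) penalty
    L4 = np.sum(laplace(Bx) ** 2 + laplace(By) ** 2 + laplace(Bz) ** 2)

    return mu[0] * L1 + mu[1] * L2 + mu[2] * L3 + mu[3] * L4
```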

In the solar context, preprocessing aims to approximate the chromospheric magnetic field, which is assumed to be more force-free consistent than the photospheric magnetic field. In Wiegelmann et al. (2008) a realistic model active region was used to test the effect of preprocessing. Beyond the original scope of that study, preprocessing was shown to remove non-magnetic forces in the model photosphere and to yield a chromospheric-like model field (see their Table 1 and Fig. 2). In particular, the smoothing term, Lpp, 4, in Eq. (6) is physically motivated by the desire to approximate the characteristic spatial scales at a chromospheric level, i.e., to remove all scales below the supergranular diameter (Wiegelmann et al. 2006). The smoothing term naturally competes with the changes to the data due to the terms assigned to enforce force-free compatibility (Lpp, 1 and Lpp, 2). Since Lpp, 1 and Lpp, 2 are weighted distinctly more strongly (usually μ1 = μ2 = 1 and μ4 ≪ μ1), Lpp, 4 may have only a limited effect. As a consequence, small spatial scales might actually be enhanced (see the corresponding remarks in Sect. 8.1 of Valori et al. 2013).

The application of preprocessing prior to optimization has the desired effect of delivering NLFF solutions of higher quality, independent of the particular NLFF method used, and appears especially advantageous for NLFF methods that rely on numerical differentiation (see Sect. 7.3 of Metcalf et al. 2008). It improves the final NLFF solution, both in terms of force- and divergence-freeness, naturally because of the more force-free consistent nature of the preprocessed data (see Table 2 in W12 and also Table 2 in Wiegelmann et al. 2008).

The recommended (standard) relative weightings suggested in W12 are (μ1, μ2, μ3, μ4) = (1, 1, 10−3, 10−2), with the weightings μ1 and μ2 set several orders of magnitude larger than μ3 and μ4. This is because meeting the (nearly) vanishing total magnetic force and torque is essential, while nearness to the actually observed data and its smoothness are desired (but secondary) requirements. In our work, we use the suggested setting μ1 = μ2 = 1, and inspect separately the effect of enforcing nearness to the observed data (μ3 = 10−3 versus μ3 = 0) and that of smoothing (μ4 = 10−2 versus μ4 = 0). Though the advantageous effect of smoothing on the quality of the resulting NLFF solution is known, its impact on the relative helicity computation is still unclear. Using these extreme choices of the corresponding relative weightings, we are able to clarify the relative influences of the actually observed data and of smoothing.

2.2.2. Optimization

In order to perform the NLFF optimization, we apply the method of W12, i.e., we combine the improved optimization scheme of Wiegelmann & Inhester (2010) and a multiscale approach (Wiegelmann 2008). In our work we apply a three-level multiscale approach to the (non-)preprocessed vector magnetic field data. The optimization approach is designed such that the function

L = ∫𝒱 [ wf |(∇ × B) × B|²/B² + wd |∇ ⋅ B|² ] dV + ν ∫S (B − Bin) ⋅ W ⋅ (B − Bin) dS (7)

is minimized, so that the volume-integrated Lorentz force and divergence become small.

The surface term in Eq. (7) allows for deviations between the NLFF solution, B, and the magnetic field information at the lower boundary, Bin, in order to find a more force-free solution. The deviation from Bin is controlled by the diagonal error matrix (i.e., the non-diagonal elements are zero), W, which allows one to account for the uncertainties of each component of the magnetic field, and of each pixel, separately. Here Bin may be either a directly measured (and force-free consistent) vector magnetic field or a preprocessed one. The following model parameters can be freely assigned in Eq. (7):

– Separate weightings of the volume-integrated force (wf) and divergence (wd). In the original notation of W12 they are set as wf = wd = 1.

– The injection speed of the lower boundary, i.e., the relative importance of the surface term in Eq. (7), is controlled by ν. W12 tested ν in the range 10−4–10−1 and found ν = 10−3 to represent the most suitable choice for the application to HMI data. This is because higher values yield a lower force-free and solenoidal quality of the resulting NLFF solution, while lower values yield little corresponding improvement, despite drastically increased computation times. Therefore, we use a value of ν = 10−3 in our study, as was also used in the work of M19.

– The diagonal elements of the error matrix, W, namely wh (controlling the weighting of the horizontal field components Bx and By) and wv (controlling the weighting of the vertical field component Bz), can be defined in different ways; in the most sophisticated of these, each individual pixel is weighted based on the actual HMI measurement uncertainties. Only recently has such an attempt been presented in M19, who chose

(8)

where B denotes the magnetic field strength and σB is the total magnetic field variance from the inversion fitting. The authors assumed a typical noise threshold of 200 G and a typical value of 0.03 for σB/B. This particular weighting was designed to compensate for low-quality inversion results, which cover regions of strong magnetic field and spread out in the later frames of the time series. Hereafter we refer to this choice of wh and wv as WHMI.

When measurement uncertainties are not known, a reasonable choice is to set

wv = 1 and wh = Bh/max(Bh), with Bh² = Bx² + By², (9)

for each pixel separately. This choice was put forward by W12, based on a comparison of different definitions of W. With this particular choice it is assumed that the vertical field is measured at the highest accuracy level, and that the accuracy of the measured horizontal field increases with its strength (see the sketch below). Hereafter we refer to this choice of wh and wv as WEMP. To date, this setting has typically been applied when performing coronal NLFF modeling with the W12 method, with M19 being the sole exception. This motivates us to test the performance of this type of error matrix, by comparison to the corresponding application of WHMI.
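A minimal sketch of this empirical weighting, following the reconstruction of Eq. (9) given above (vertical component fully weighted, horizontal weight proportional to the normalized horizontal field strength), reads:

```python
import numpy as np

def w_emp(Bx, By):
    """Empirical error-matrix weights in the spirit of Eq. (9) as
    reconstructed above: full weight for the vertical component, horizontal
    weight proportional to the normalized horizontal field strength."""
    Bh = np.sqrt(Bx ** 2 + By ** 2)
    wh = Bh / Bh.max()          # strong horizontal fields are trusted most
    wv = np.ones_like(Bh)       # vertical field weighted fully everywhere
    return wh, wv
```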

Successful NLFF modeling involves finding a combination of the free model parameters for the optimization function L and, if applied, for the preprocessing step (μ3, μ4) that delivers optimized results in terms of force- and divergence-freeness. In order to quantify the force-free consistency of the obtained NLFF solutions for a certain choice of the free model parameters, a frequently used metric is the current-weighted angle between the modeled magnetic field and the electric current density, θJ (Schrijver et al. 2006). Ideally, for an entirely force-free solution we would find θJ = 0°.
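A possible discrete evaluation of θJ from a gridded NLFF solution, using second-order finite differences and the current-weighted average of the sine of the angle between J and B as described above, is sketched below (function names are illustrative).

```python
import numpy as np

def theta_J(B, dx=1.0):
    """Current-weighted angle between B and J (cf. Schrijver et al. 2006),
    from second-order finite differences on a uniform grid.
    B has shape (3, nx, ny, nz)."""
    Bx, By, Bz = B
    # J proportional to curl(B); the constant factor cancels in the angle
    Jx = np.gradient(Bz, dx, axis=1) - np.gradient(By, dx, axis=2)
    Jy = np.gradient(Bx, dx, axis=2) - np.gradient(Bz, dx, axis=0)
    Jz = np.gradient(By, dx, axis=0) - np.gradient(Bx, dx, axis=1)
    J = np.stack([Jx, Jy, Jz])

    Jmag = np.sqrt(np.sum(J ** 2, axis=0))
    Bmag = np.sqrt(np.sum(B ** 2, axis=0))
    sin_theta = np.sqrt(np.sum(np.cross(J, B, axis=0) ** 2, axis=0)) / (Jmag * Bmag + 1e-30)

    sigma_J = np.sum(Jmag * sin_theta) / np.sum(Jmag)   # current-weighted <sin(theta)>
    return np.degrees(np.arcsin(sigma_J))               # 0 deg for a perfectly force-free field
```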

As noted by Wiegelmann et al. (2012), for the application to long-term HMI data series it is not practical to carry out NLFF modeling based on several different model parameter sets. For a short time span, as in our analysis, it is doable, and may be used to study the uncertainty of physical quantities based on the different model parameter choices.

2.3. Helicity computation

We use the finite-volume (FV) method of Thalmann et al. (2011) to compute the relative helicity based on Eqs. (1)–(3). It solves systems of partial differential equations to obtain the vector potentials A and A0, using the Coulomb gauge, ∇ ⋅ A = ∇ ⋅ A0 = 0. The method defines the reference field as B0 = ∇ϕ, with ϕ being the scalar potential, subject to the constraint ∂ϕ/∂n = Bn on ∂𝒱, where n denotes the direction normal to the boundaries of 𝒱.

The method has been tested in the framework of an extended proof-of-concept study on FV helicity computation methods (Valori et al. 2016), where it was shown that for various test setups the methods deliver helicity values in line with each other, differing by only a few percent. It has also been used in Thalmann et al. (2019b) to show that the computed helicity depends strongly on the level to which the underlying NLFF magnetic field solution satisfies the divergence-free condition. A metric for quantifying the divergence-free consistency of an obtained NLFF solution, introduced by Wheatland et al. (2000) and often used in the literature, is the fractional flux, ⟨|fi|⟩ (for a recent in-depth analysis of this measure see Gilchrist et al. 2020). Though not shown explicitly in our work, we note for completeness that for all studied NLFF models we find ⟨|fi|⟩ ≲ 4 × 10−4.
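For illustration, a minimal sketch of ⟨|fi|⟩, assuming the common cell-based approximation fi ≈ (∇ ⋅ B)i Δ/(6|B|i), is given below; the exact discretization of Wheatland et al. (2000) may differ in detail.

```python
import numpy as np

def mean_fractional_flux(B, dx=1.0):
    """<|f_i|> of Wheatland et al. (2000): per-cell net flux through the cell
    surface normalized by the average flux magnitude, in the common
    approximation f_i = (div B)_i * dx / (6 |B|_i). B has shape (3, nx, ny, nz)."""
    Bx, By, Bz = B
    divB = (np.gradient(Bx, dx, axis=0)
            + np.gradient(By, dx, axis=1)
            + np.gradient(Bz, dx, axis=2))
    Bmag = np.sqrt(Bx ** 2 + By ** 2 + Bz ** 2) + 1e-30
    return np.mean(np.abs(divB * dx / (6.0 * Bmag)))
```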

Alternatively, in order to test the level of solenoidality of the magnetic field used as an input for helicity computation, the ratio Ediv/E has been put forward by Valori et al. (2013) as a useful criterion. The value of Ediv/E expresses the non-solenoidal fraction of the total (NLFF) energy. The quantity Ediv is derived from the solenoidal and non-solenoidal parts of the potential field (B0 = B0,s + B0,ns) and current-carrying field (BJ = BJ, s + BJ, ns), which stem from the initial decomposition of the (NLFF) magnetic field into its potential (B0) and current-carrying (BJ) component. Then the total energy of a given magnetic field may be written as the sum of the corresponding energy budgets in the form

E = E0,s + EJ, s + E0,ns + EJ, ns + Emix, (10)

with Emix being the energy corresponding to all cross terms (see Eq. (8) in Valori et al. 2013, for details), and EJ, s being a measure for the free energy. All contributions to E in Eq. (10), except for Emix, are positive definite. In the case that the input field is perfectly solenoidal, we would find E0,ns = EJ, ns = Emix = 0, thus Ediv = 0. The energy associated with all non-solenoidal components can then be defined as

Ediv = E0,ns + EJ, ns + |Emix|, (11)

which represents an upper limit, as the absolute value of Emix is involved. Usually, E0,s >  EJ, s >  Emix >  EJ, ns >  E0,ns (see, e.g., DeRosa et al. 2015).
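Assuming that a Helmholtz-type decomposition routine (as in Valori et al. 2013) has already produced the four field components, the energy budgets of Eqs. (10) and (11) can be assembled as in the following sketch (names illustrative; the constant prefactor is irrelevant since only energy ratios are used).

```python
import numpy as np

def energy_budgets(B0s, B0ns, BJs, BJns, dV):
    """Energy terms of Eq. (10) and Ediv of Eq. (11), given the solenoidal (s)
    and non-solenoidal (ns) parts of the potential (0) and current-carrying (J)
    fields, each of shape (3, nx, ny, nz). The constant prefactor (1/8pi in
    Gaussian units) is dropped since only energy ratios are used."""
    def energy(F):
        return np.sum(F * F) * dV

    E0s, E0ns = energy(B0s), energy(B0ns)
    EJs, EJns = energy(BJs), energy(BJns)
    B = B0s + B0ns + BJs + BJns               # total (NLFF) field
    Etot = energy(B)
    Emix = Etot - (E0s + E0ns + EJs + EJns)   # sum of all cross terms, Eq. (10)
    Ediv = E0ns + EJns + abs(Emix)            # Eq. (11), an upper limit
    return {'E': Etot, 'E0s': E0s, 'EJs': EJs, 'E0ns': E0ns,
            'EJns': EJns, 'Emix': Emix, 'Ediv': Ediv}
```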

In the proof-of-concept study by Valori et al. (2016), based on solar-like numerical experiments, it was suggested that Ediv/E ≲ 0.08 is required for a reliable helicity computation. In a follow-up study, Thalmann et al. (2019b) suggested an even lower threshold (Ediv/E ≲ 0.05) for solar applications.

Based on the energy decomposition above, we can also use the non-solenoidal contribution to the free energy, |Emix|/EJ, s, to refine the quantification of the acceptable degree of non-solenoidality in an underlying NLFF model field. As shown in the comparative study of DeRosa et al. (2015), the application of the W12 method, using standard choices for the (free) model parameters, may result in NLFF solutions with non-solenoidal errors on the order of the inherent free magnetic energy (|Emix| ≃ EJ, s; see their Fig. 7b). It was also shown, however, that an alternative W12-based NLFF solution based on an adjusted set of model parameters (more precisely, setting wd >  wf; see Sect. 2.2.2) resulted in a significant improvement of the solenoidal quality (see the appendix of their work). A refined quantification of the solenoidal quality of the NLFF magnetic fields in the context of relative helicity computation, based on |Emix|/EJ, s, has not been attempted so far.

2.4. Choice of free model parameters

The solenoidality of a NLFF solution obtained by minimizing L in Eq. (7) naturally depends on Bin (thus, the free parameter choices of μ3 and μ4 in the preprocessing step, if applied), and on the choice of the diagonal elements of W (either WHMI or WEMP in the present study).

For any combination of the above quantities, the divergence-freeness of the obtained NLFF solution can be enhanced by assigning a stronger relative importance to the divergence term, i.e., by choosing wd >  wf (see the explanation of the free model parameters of L in Sect. 2.2.2 for details). As a consequence, an NLFF solution based on a certain choice of the other free parameters may not qualify as an input to the helicity computation when choosing wd = 1, but may do so when choosing an enhanced weight wd >  wf. This was found in DeRosa et al. (2015), where the application of the standard setting (wf, wd) = (1, 1) was found to deliver an NLFF solution, but without the solenoidal quality required for relative helicity computation, as the relative contribution of the mixed terms was comparable to that of the free energy (|Emix| ≈ EJ, s; see Fig. 7b in their work). The NLFF model solution based on the choice (wf, wd) = (1, 1.5), however, resulted in a significant decrease in the contribution of Emix to the total energy, and thus represented a valid input for relative helicity computation (with |Emix| <  EJ, s; see Fig. 11 in the appendix of their work). In order to test the effect of an improved solenoidal quality of the NLFF model on the relative helicity computation, we therefore use the choices (wf, wd) = (1, 1) and (wf, wd) = (1, 2) in our work.

Table 1 summarizes the tested NLFF time series regarding the specific parameter choices used for their realization, their appearance throughout the manuscript, and the corresponding plotting symbols used. The influence of the distinct choice of preprocessing parameters has not been tested before. Therefore, we use the specific combinations (μ3, μ4) = (0, 0), (μ3, μ4) = (10−3, 0), (μ3, μ4) = (0, 10−2), and (μ3, μ4) = (10−3, 10−2), the last representing the preferred standard choice suggested by W12 (“standard preprocessing cases”). In combination with the different optimization parameters wd = (1, 2) and W = (WHMI, WEMP), 16 NLFF model time series can be employed in total. Four additional NLFF time series are employed based on the non-preprocessed (original) data. This is motivated by the fact that the minimization of the surface term in Eq. (7) might still yield a high-quality NLFF solution, despite the force-free incompatibility of the measured vector magnetic field. In this context, it should also be noted that the input magnetic field data (preprocessed or not) necessarily differ from the final NLFF lower boundary data due to the effect of the surface term. We note here that we do not explicitly show the results of all of the “special cases” NLFF time series, as their relative performance is comparable to that of the respective exemplary cases presented hereafter (see the comments in the last column of Table 1). Furthermore, we exclude all SP3 cases from the analysis, due to their insufficient solenoidal quality (Ediv/E >  0.1).

Table 1.

Synoptic view of the model parameters for the employed NLFF models and their appearance in the manuscript (plot symbol and figure of appearance), if applicable.

As in M19, all NLFF computations were carried out for a computational volume of dimensions 320 × 320 × 320 pixels, containing a buffer layer towards the lateral and top boundaries of nd = 32 pixels, within which the NLFF solution drops in the form of a cos-profile to the field prescribed on the boundaries (for details see W04). For relative helicity computation only the inner physical volume (excluding the buffer layer) was kept, and further cut in height at roughly two-thirds of the total height, yielding a size of the finally analyzed model coronal field of 256 × 256 × 203 pixels.
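In terms of array indexing, the cropping can be sketched as follows (the axis ordering and the exact height cut are illustrative assumptions).

```python
import numpy as np

# Illustrative cropping of a 320^3 NLFF cube (axis order x, y, z assumed):
# drop the nd = 32 pixel lateral buffer and keep 203 pixels in height,
# leaving the 256 x 256 x 203 pixel volume that enters the helicity computation.
nd, nz_keep = 32, 203
B_full = np.zeros((3, 320, 320, 320))           # placeholder for one NLFF solution
B_phys = B_full[:, nd:-nd, nd:-nd, :nz_keep]
print(B_phys.shape)                             # -> (3, 256, 256, 203)
```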

3. Results

3.1. Effect of preprocessing

In order to test the effect of preprocessing, we minimize Eq. (6) once using the standard relative weighting (μ3 = 10−3, μ4 = 10−2; hereafter “standard preprocessing”), once omitting smoothing (μ3 = 10−3, μ4 = 0), and once neglecting both (μ3 = μ4 = 0). For the subsequent minimization of Eq. (7), we apply the error matrix for optimization of the lower boundary as used in M19 (WHMI) and use standard settings for the remaining model parameters, as suggested in W12 (see Sect. 2.2.2). As a kind of non-ideal reference we also run the optimization on the non-preprocessed data and compare the resulting NLFF time series in the following.

From Figs. 1a,b it can be seen that the application of preprocessing clearly lowers the contribution of solenoidal errors. For the NLFF models based on non-preprocessed data (light blue circles) Ediv/E and |Emix|/EJ, s are the highest at all considered times. Corresponding values are, on average, lowest for the NLFF solutions based on standard preprocessing (violet bullets). The effect of smoothing can be seen by comparison to the corresponding values of the “no smoothing” cases (dark and light blue bullets versus violet bullets). Overall, the application of smoothing causes a decrease in both Ediv/E and |Emix|/EJ, s, though more pronounced at later instances of the considered time period. It also appears that there is no distinct difference for the non-smoothed cases, whether or not enforcing a certain degree of nearness to the actually observed data (compare light and dark blue bullets).

Fig. 1.

Evolution of (a) Ediv/E, (b) |Emix|/EJ, s, and (c) ϵforce for different NLFF models based on different input data: not preprocessed (light blue circles), standard preprocessed (violet bullets), preprocessed without smoothing (dark blue bullets), and preprocessed without smoothing and with no nearness to the observed data enforced (light blue bullets). (d) Corresponding time evolution of |HJ|/|H𝒱|. The horizontal dashed line in (a) indicates the nominal threshold of Ediv/E = 0.08, an upper limit for the accepted solenoidality of the input magnetic field. The vertical bars show the time span between the nominal GOES start and peak times of the X2.2 flare (start 08:57 UT, peak 09:10 UT) and the X9.3 flare (start 11:53 UT, peak 12:02 UT).

It is noteworthy that all NLFF time series show a deteriorating quality as a function of time, i.e., the values of Ediv/E and |Emix|/EJ, s increase, supposedly due to the corresponding decrease in the inversion quality of the underlying vector magnetic field measurements (see Eq. (8) and the corresponding notes). This deterioration is least pronounced for the NLFF solutions based on the standard preprocessed input. It is also noteworthy that all except those solutions exhibit values Ediv/E ≳ 0.08 for the time instances after 08:48 UT.

In terms of θJ, the volumetric parameter usually used to quantify the force-free consistency of an NLFF model, there is no distinct difference between the cases with and without preprocessing. For completeness we note that θJ is below ≈7° prior to the X2.2 flare and 7° ≲ θJ ≲ 12° afterwards. In order to picture the effect of preprocessing more clearly, we therefore show the force-balance parameter, ϵforce, in Fig. 1c, which is normally used to quantify the force-free consistency of a given vector magnetogram prior to NLFF modeling (see the explanation in Sect. 2 of Wiegelmann et al. 2006). Here we use ϵforce not only to quantify how force-free consistent the input data are, but also how force-free the final 2D NLFF lower boundary is. We know from previous studies that non-preprocessed vector magnetograph data are inconsistent with a force-free approach (ϵforce ≳ 0.1; gray circles) and should not be used for force-free modeling. If used nevertheless, the optimization procedure will still deliver an NLFF solution, with its 2D lower boundary being more force-free consistent (ϵforce ≳ 0.08; light blue circles). The application to preprocessed data clearly improves the NLFF model results (ϵforce ≲ 0.08; bullets), without any obvious dependence on the particular parameter setting of the preprocessing.
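A sketch of ϵforce as applied here to a 2D boundary field, following the surface integrals of Aly (1989) as used by Wiegelmann et al. (2006) (exact prefactors assumed), reads:

```python
import numpy as np

def epsilon_force(Bx, By, Bz):
    """Force-balance parameter of a 2D vector magnetogram (cf. Wiegelmann
    et al. 2006): net Lorentz force implied by the Aly (1989) surface
    integrals, normalized by the total magnetic pressure; exact prefactors
    are assumed. Values near zero indicate a force-free-consistent boundary."""
    norm = np.sum(Bx ** 2 + By ** 2 + Bz ** 2)
    Fx = np.sum(Bx * Bz)
    Fy = np.sum(By * Bz)
    Fz = np.sum(Bz ** 2 - Bx ** 2 - By ** 2)
    return (abs(Fx) + abs(Fy) + abs(Fz)) / norm
```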

The corresponding trends of |HJ|/|H𝒱| (Fig. 1d) suggest a clear segregation between NLFF time series based on smoothed (violet bullets) or non-smoothed (other symbols) input data. For completeness, we note that for all of the considered cases the relative helicities, H𝒱, based on smoothed data are systematically higher, as are their individual contributions (more pronounced in HJ than in HPJ). We discuss the possible reasons in Sect. 4.

For completeness we note that the results presented above also hold when using WEMP instead of WHMI. For simplicity, and motivated by the similarity of performance of the special cases (μ3,μ4) = (0,0) and (μ3,μ4) = (10−3,0), respectively labeled Sp2i and Sp4i in Table 1, we do not explicitly show the Sp2 cases in the remaining analysis.

3.2. Effect of preferring solenoidality over force-freeness

As explained in Sect. 2.4, for any combination of the other model parameters the divergence-freeness of the obtained NLFF solution may be enhanced by assigning a stronger relative importance to the divergence term, i.e., by choosing wd >  wf. For simplicity, here we restrict ourselves to analyzing the corresponding effect based on standard preprocessed data (using μ3 = 10−3 and μ4 = 10−2 in Eq. (6)). For completeness, we note that the results presented in the following are similar for the application to non-smoothed data (μ4 = 0; compare Sect. 3.3 and Fig. 3), and also in the case that the empirical error matrix, WEMP, is used (see Sect. 3.4 and Fig. 4).

In Fig. 2 we compare the results based on the standard setting, where the Lorentz force and divergence terms in Eq. (7) are weighted equally (wf = wd = 1; bullets), to those with an enhanced enforcement of solenoidality (wd = 2; squares). The stronger enforcement of solenoidality leads to lower values of Ediv/E and |Emix|/EJ, s (Figs. 2a,b, respectively), at the expense of the force-freeness of the obtained NLFF solutions (compare θJ in Fig. 2c): while for the standard weighting θJ ≲ 10° for the entire time series, the values are about a factor of two higher if wd = 2 is applied.

Fig. 2.

Evolution of (a) Ediv/E and (b) |Emix|/EJ, s and (c) θJ for different NLFF models, based on standard preprocessed input data, with different weighting of the volume-integrated divergence: wd = 1 (bullets), and wd = 2 (squares); (d) corresponding time evolution of |HJ|/|H𝒱|. Vertical bars as in Fig. 1.

Both NLFF time series satisfy Ediv/E <  0.08 (Fig. 2a), i.e., qualify for subsequent relative helicity computation. Though the obtained trend of |HJ|/|H𝒱| in Fig. 2d is similar for most of the time instances, the NLFF series based on the standard setting (wd = 1; bullets) depicts a decrease in |HJ|/|H𝒱| prior to and an increase during the confined X2.2 flare, while the solutions based on wd = 2 (squares) indicate a pre-flare increase and subsequent helicity relaxation. Both time series agree on a helicity accumulation to values |HJ|/|H𝒱| ≳ 0.15 prior to the eruptive X9.3 flare, and a pronounced helicity relaxation in correspondence to the eruptive flare.

3.3. Combined effects

There is an interplay between particular choices of model parameters for the preprocessing and optimization, as individually discussed in Sects. 3.1 and 3.2, respectively. Therefore, we describe combined effects for the WHMI-based models in the following, and those for the WEMP-based models in Sect. 3.4.

The choice wd = 2 (squares) during optimization has a similar effect on the final NLFF solution regardless of whether smoothed (violet symbols) or non-smoothed (blue symbols) input data are used. On average, this choice lowers the non-solenoidal energy contributions (Figs. 3a,b) and simultaneously increases θJ to a comparable level (compare Fig. 3c). The effective increase in solenoidality, however, is stronger for the NLFF models based on non-smoothed data (blue symbols), so that the improved NLFF series fulfill Ediv/E <  0.08 at all times (blue squares). It also appears that NLFF solutions with a smaller value of Ediv/E exhibit overall smaller values of |Emix|/EJ, s.

Fig. 3.

Evolution of (a) Ediv/E, (b) |Emix|/EJ, s, and (c) θJ for different NLFF models, using the error matrix WHMI, based on differently preprocessed input data (including smoothing: violet symbols; omitting smoothing: blue symbols), and with different weighting of the volume-integrated divergence (wd = 1: bullets; wd = 2: squares); (d) corresponding time evolution of |HJ|/|H𝒱|. The horizontal dashed line in (a) indicates the nominal threshold of Ediv/E = 0.08, an upper limit for the accepted solenoidality of the input magnetic field. The horizontal dashed line in (b) indicates a suggested upper limit for the acceptable non-solenoidal error relative to the free magnetic energy (|Emix|/EJ, s = 0.35). Vertical bars as in Fig. 1.

It is also evident that NLFF series of comparable solenoidal quality do not necessarily deliver similar helicity ratios (compare the blue and violet squares in Figs. 3b,d). Instead, the values of |HJ|/|H𝒱| retrieved from non-smoothed boundaries (blue symbols) are found to be systematically lower for most time instances. Even so, all of the tested solutions depict a decrease in |HJ|/|H𝒱| during both flares, with the sole exception of the NLFF solutions based on the standard preprocessed data (violet bullets), which suggest an increase in |HJ|/|H𝒱| during the preceding confined X2.2 flare. All of the tested solutions show a rise in |HJ|/|H𝒱| prior to the X9.3 flare to values close to those prior to the X2.2 flare.

3.4. Effect of the particular choice of error matrix W

As noted by W12, a reasonable approximation of the accuracy of the measured vector magnetogram data is to weight the vertical magnetic field measurement most strongly (based on its empirically known highest measurement accuracy), followed by the strong horizontal field, with weak horizontal fields being weighted least (see WEMP as defined in Eq. (9) in Sect. 2.2.2). In the following we test the performance of this empirical weighting and repeat the model experiments that were applied to the measurement-based error matrix WHMI in Sect. 3.3.

Trends common to those presented in Sect. 3.3 for the WHMI-based models include that the choice wd = 2 (squares) during optimization lowers the non-solenoidal energy contributions (Figs. 4a,b), while θJ is systematically higher (Fig. 4c). In addition, systematically lower values of |HJ|/|H𝒱| are found at all time instances for the NLFF models that are based on non-smoothed input data (orange symbols). For the WEMP-based models the individual relative helicity contributions, HPJ and especially HJ, are also systematically higher when based on smoothed input data, and more pronouncedly so than for the WHMI-based models. The increase in |HJ|/|H𝒱| between the two consecutive flares is less pronounced than for the WHMI-based models. Finally, all of the tested WEMP-based models consistently picture a decrease in |HJ|/|H𝒱| during both X-class flares, as well as periods of helicity accumulation prior to both flares (though rather weak for most of the NLFF series).

Fig. 4.

As in Fig. 3, but using the empirical error matrix WEMP.

Other findings are different from those of the WHMI-based models. For instance, the WEMP-based solutions show a deteriorating quality as a function of time, more pronounced than the WHMI-based solutions (compare Figs. 3a,b and 4a,b). Furthermore, a lower value of Ediv/E does not necessarily imply a lower value of |Emix|/EJ, s for the WEMP-based models, and thus NLFF solutions that satisfy Ediv/E <  0.08 may exhibit large non-solenoidal contributions to the free magnetic energy.

3.5. Putting everything together: a recipe

Not all of the analyzed NLFF models presented in Sects. 3.3 and 3.4 satisfy the nominal threshold of Ediv/E <  0.08 (as suggested by Valori et al. 2016) at all considered times, i.e., not all qualify for relative helicity computation. Moreover, based on the time evolution of the relative helicity in NOAA 11158, computed with different FV helicity computation methods, Thalmann et al. (2019b) argued for an even more restrictive threshold in solar applications (Ediv/E ≲ 0.05). This is approximately fulfilled by the WHMI-based models that satisfy the nominal threshold Ediv/E <  0.08, which inherently include only a small non-solenoidal contribution to the free magnetic energy (|Emix|/EJ, s ≲ 0.25).

For the WEMP-based models, however, values of Ediv/E <  0.08 do not necessarily imply small values of |Emix|/EJ, s (see the orange and red squares at the last two time instances in Fig. 4b, for which |Emix|/EJ, s ≳ 0.5). Thus, NLFF solutions with high levels of non-solenoidal energy compared to their free energy would enter the relative helicity computation. As mentioned before, it appears crucial for applications of the W12 method to minimize the non-solenoidal contribution to the free magnetic energy (see the corresponding remarks in Sect. 2.3). We thus suggest, in addition to respecting the nominal threshold of Ediv/E = 0.08, keeping only the best-performing snapshots of each NLFF time series in terms of the smallest non-solenoidal contribution to the free magnetic energy, namely those for which |Emix|/EJ, s <  0.35.

In order to place the contribution of Emix into context, we note here that the free magnetic energy during the analyzed time interval amounts to ≳20% of the total magnetic energy (i.e., EJ, s/E ≳ 0.2). Since Emix comprises only a small percentage of E (≲10%), we may safely assume that it is of numerical origin, and that a corresponding thresholding has the desired effect of sorting out NLFF solutions with an undesirably high related contribution.

Based on the above reasoning, we can then compute a mean value, ⟨|HJ|/|H𝒱|⟩, from all of the accepted NLFF solutions at each time instant, and also deduce an uncertainty estimate based on the spread of the contributing solutions. For time instances when only one contributing NLFF solution remains based on the above selection criteria, no mean value can be retrieved and the respective value of |HJ|/|H𝒱| has to be assumed as indicative of the true coronal relative helicity.
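The selection and averaging step can be summarized in a short sketch (the data structure and names are illustrative); at each time instant only models passing both thresholds contribute.

```python
import numpy as np

def mean_helicity_ratio(models, ediv_max=0.08, emix_max=0.35):
    """Recipe of Sect. 3.5 (names illustrative): at a single time instant,
    average |HJ|/|HV| over all NLFF models passing both solenoidality
    thresholds. `models` is a list of dicts with keys 'Ediv_E', 'Emix_EJs',
    and 'hj_over_hv'. Returns (mean, std, n_used); std is NaN if fewer than
    two models qualify."""
    ok = [m['hj_over_hv'] for m in models
          if m['Ediv_E'] < ediv_max and m['Emix_EJs'] < emix_max]
    if not ok:
        return np.nan, np.nan, 0
    mean = float(np.mean(ok))
    std = float(np.std(ok)) if len(ok) > 1 else np.nan
    return mean, std, len(ok)
```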

Figure 5a shows the time evolutions of |HJ|/|H𝒱|, computed from all qualifying WHMI-based NLFF models (colored symbols), together with the mean value ⟨|HJ|/|H𝒱|⟩ (black solid line) and the standard deviation as an indication of the corresponding uncertainty (gray shaded area). The corresponding time evolutions for the WEMP-based NLFF models are shown in Fig. 5b. Overall, the WHMI-based estimates show less variation of ⟨|HJ|/|H𝒱|⟩ as a function of time, though the trend is very similar to that of the WEMP-based estimates. A period of rather monotonic helicity accumulation prior to the confined X2.2 flare terminates in values of ⟨|HJ|/|H𝒱|⟩ ≃ 0.13. The subsequent flare-related helicity relaxation (apparently more pronounced in the WEMP-based estimates) is followed by a period of relative helicity replenishment between ∼10:00 UT and 11:30 UT, peaking shortly before the eruptive X9.3 flare (⟨|HJ|/|H𝒱|⟩ ≳ 0.12). Finally, the X9.3 flare-related helicity relaxation shows a decrease to values of ⟨|HJ|/|H𝒱|⟩ ≲ 0.1.

Fig. 5.

Time evolution of |HJ|/|H𝒱|, computed from the best-performing NLFF models using the error matrix (a) WHMI or (b) WEMP. At each time instance, only contributions from NLFF solutions that satisfy |Emix|/EJ, s <  0.35 are shown. The black solid line represents the mean value, computed from all qualifying solutions at each time instant. The gray shaded area represents the respective standard deviation. Vertical bars as in Fig. 1.

4. Summary

We studied the coronal magnetic field and helicity of AR 12673 during a ten-hour time interval centered on a confined X2.2 flare (SOL2017-09-06T08:57) and also covering the eruptive X9.3 flare that occurred three hours later (SOL2017-09-06T11:53). Our aim was to assess the spread of the relative helicities computed from Eqs. (1)–(3), using the finite-volume (FV) method of Thalmann et al. (2011), when based on different time series of NLFF coronal magnetic fields. The corresponding NLFF coronal magnetic fields were modeled using an optimization approach (Wiegelmann et al. 2012, hereafter W12) based on different choices of (free) model parameters, and consequently differ in their solenoidal quality. The solenoidal quality of an underlying NLFF model, however, is a highly important factor if one attempts a reliable relative helicity computation, and different thresholds have been suggested in the past (Valori et al. 2016; Thalmann et al. 2019b).

The aim of our study was to gain insight into the effects of particular choices of (free) model parameters on the final W12 NLFF solutions and subsequent relative helicity computation. Based on the in-depth analysis of the solenoidal quality of the underlying NLFF solutions, in context with the subsequently computed relative helicity ratio, |HJ|/|H𝒱| (a promising indicator for the eruptivity of solar ARs; see Pariat et al. 2017 for a pioneering work), our goal was to provide a recipe for successful and reliable relative helicity computation (including uncertainties).

The W12 method involves two computational tasks. In the first step (preprocessing), individual weights can be assigned to the nearness to the actually observed data and to the degree of smoothing applied to the 2D vector magnetic field data (controlled by μ3 and μ4, respectively; see Sect. 2.2.1 for details). In our work we tested the standard choices μ3 = 10−3 and μ4 = 10−2, as well as the limiting values μ3 = 0 and μ4 = 0. In the subsequent optimization step (see Sect. 2.2.2 for details), the volume-integrated Lorentz force and divergence can be weighted individually (via wf and wd, respectively), and the handling of the (preprocessed) vector magnetic field data can be defined. The latter is realized by a diagonal error matrix, of which we tested two options: first, an error matrix originally defined in the study of Moraitis et al. (2019, hereafter M19), based on the actual measurement uncertainties of HMI (WHMI as defined in Eq. (8)), and second, a commonly used empirical one (WEMP as defined in Eq. (9); see also W12) which assumes that vertical fields are measured with the highest accuracy, and that the reliability of the measured horizontal field decreases with decreasing strength.

The volumetric force-freeness of the realized NLFF models was estimated using the current-weighted angle between the modeled magnetic field and the electric current density, θJ (Schrijver et al. 2006). For the quantification of their solenoidal quality, we used the normalized non-solenoidal energy ratio Ediv/E, as suggested by Valori et al. (2013). Moreover, since it appears crucial to minimize the non-solenoidal contribution to the free magnetic energy (see the corresponding remarks in Sect. 2.3), we also analyzed the energy ratio |Emix|/EJ, s in detail, which has not been done before.

Regarding the impact of particular choices of (free) model parameters on the solenoidal quality of the final NLFF solutions, independent of the particular error matrix used (WHMI or WEMP), we found the following:

– The application of preprocessing prior to optimization considerably lowers the non-solenoidal contributions in the final NLFF solution, with the NLFF solution being also more force-free (Figs. 1a–c). A crucial ingredient in lowering the solenoidal errors appears to be the application of smoothing (μ4 ≠ 0). Thus, the solenoidal quality of NLFF solutions based on the preprocessing as suggested in W12 (μ1 = μ2 = 1, μ3 = 10−3, μ4 = 10−2) is highest overall and can be safely recommended as a standard setting.

– The enhanced weighting of the volume-integrated divergence over the force-freeness (wd >  wf) also lowers the non-solenoidal contributions in the final NLFF solution (Figs. 2a,b), though at the expense of force-freeness (compare Fig. 2c). The effective increase in solenoidal quality is more drastic for the NLFF models based on non-smoothed data (blue and orange symbols in Figs. 3a,b and 4a,b, respectively), underlining the corresponding desired effect of using smoothed data as input to NLFF modeling.

Different choices of the (free) model parameters during preprocessing and optimization allow the computation of multiple values for the relative helicities (H𝒱, HPJ, HJ), and consequently for the helicity ratio, |HJ|/|H𝒱|, at a certain time instant for a particular error matrix (for WHMI- and WEMP-based models see Figs. 3d and 4d, respectively). In this context, we found the following causal impacts:

– Using smoothed data as input to NLFF modeling yields systematically higher values of |HJ|/|H𝒱| (compared to the blue and orange symbols in Figs. 3d and 4d, respectively), and similarly for the individual contributions, HPJ and HJ (independent of the error matrix used). Although HJ has a clear physical meaning, namely the linking of the current-carrying field with itself, an enhanced level of HJ does not necessarily imply the presence of systematically stronger electric currents (Régnier 2009). Indeed, we do not find a systematically higher total unsigned current in the NLFF models based on smoothed data. Instead, we find higher total magnetic energies, E, and lower potential field energies, E0, in those models. Thus, we suspect that the origin of the overall higher helicities of the NLFF models based on smoothed data lies in the systematically enhanced current-carrying magnetic field.

– For those NLFF models that satisfy the originally suggested threshold of Ediv/E <  0.08, the non-solenoidal contributions to the free energy, |Emix|/EJ, s, are distinctly different. While the WHMI-based NLFF solutions satisfying Ediv/E ≲ 0.05 (a refined threshold for solar applications suggested by Thalmann et al. 2019b) also satisfy |Emix|/EJ, s ≲ 0.25, this is not true for the WEMP-based models. Therefore, and motivated by minimizing non-solenoidal errors in the free magnetic energy, an additional threshold based on |Emix|/EJ, s appears useful in order to (dis-)qualify NLFF solutions for subsequent relative helicity computation.

– Using an upper limit of |Emix|/EJ, s = 0.35, we obtain similar trends for the mean time evolution, ⟨|HJ|/|H𝒱|⟩, from the two types of NLFF series (based on either WHMI or WEMP; see Figs. 5a,b, respectively). Thus, the empirical error matrix WEMP may be used as a valid alternative to a measurement-based definition (such as WHMI).

Based on the above findings, we are able to provide a recipe to obtain a reliable estimate of the coronal relative helicity together with a corresponding uncertainty estimate. In particular, we recommend employing a mean estimate of the relative helicity (and of any related quantity such as ⟨|HJ|/|H𝒱|⟩) at any particular time instant, computed from a number of NLFF models based on different (free) model parameter choices that individually satisfy Ediv/E <  0.08 and |Emix|/EJ, s <  0.35. Using this approach we found a consistent estimate of ⟨|HJ|/|H𝒱|⟩ from the two types of NLFF model series (WEMP- and WHMI-based). This includes an increase in ⟨|HJ|/|H𝒱|⟩ prior to the confined X2.2 flare and between the preceding X2.2 and following X9.3 flare, together with helicity (ratio) relaxation in correspondence to the occurrences of the flares.

However, the spread of the contributing values of |HJ|/|H𝒱| is quite variable over the time series, about ≲0.04 before the occurrence of the X2.2 flare and ≳0.06 prior to the eruptive flare. Overall it appears that the spread of |HJ|/|H𝒱| scales with the quality of the underlying NLFF time series. We recall that all of the employed NLFF time series show a deteriorating quality, i.e., the values of Ediv/E and |Emix|/EJ, s increase with time, supposedly due to the corresponding decrease in the inversion quality of the underlying vector magnetic field measurement (see corresponding notes in Sect. 2.2.2).

5. Discussion

Multiple attempts have been made to model and interpret the coronal magnetic field configuration of AR 12673, often focusing on approximating the self-helicity of a coronal model flux rope recovered from NLFF modeling. We note here for completeness that in all of the finally qualifying NLFF time series, a magnetic flux rope is present prior to the confined X2.2 flare, of differing morphology but in overall agreement with earlier model attempts. Therefore, we assume that our NLFF model fields realistically represent the active-region corona of AR 12673. An in-depth comparison of the distinct model magnetic field configurations, including the extent to which a possibly existing “double-decker” system is recovered (see the discussion below), is left for future work.

Based on a magneto-hydrodynamic relaxation method, Zou et al. (2020) pictured the formation and gradual growth of a magnetic flux rope prior to the confined X2.2 flare, covering the time span 00:00 UT to 11:48 UT on 2017 September 6. The existing magnetic flux rope was found to grow in an accelerated manner after the occurrence of the confined flare, along with a (mild) increase in the flux rope twist (an approximation of its self-helicity). In agreement, though not explicitly shown, we note that all of our tested NLFF model series depict a rather monotonically increasing relative helicity H𝒱 (and of its individual contributions HPJ and HJ) before the X2.2 flare, and show another increase between the preceding X2.2 and following X9.3 flare. Based on a series of optimization-based NLFF models, Liu et al. (2018) pictured the pre-X2.2-flare coronal magnetic field configuration in the form of a system of multiple flux ropes, overlying each other and composed of field of opposite handedness (a double-decker configuration; see also Hou et al. 2018). In particular, using the twist number method, they found a considerable increase in the flux rope twist during the confined flare. Based on the same method, Zou et al. (2020) found a rather weakly increasing self-helicity during the X2.2 flare (see their Fig. 4c). In this context we note that only one of our tested NLFF model series depicts a weak increase in |HJ|/|H𝒱| during the X2.2 flare (violet bullets in Fig. 2d). All the other NLFF series, and consequently ⟨|HJ|/|H𝒱|⟩, depict a corresponding relative helicity relaxation. This does not necessarily conflict with the flare’s confined nature, since a corresponding variation in HJ may simply be due to the exchange with HPJ (Linan et al. 2018). Finally, all of our tested NLFF series suggest a relative helicity relaxation (and also of ⟨|HJ|/|H𝒱|⟩) during the eruptive X9.3 flare. In agreement with Liu et al. (2018), among others, this is expected since the current-carrying magnetic structure (i.e., the coronal flux rope) was physically ejected from the corona.

In M19 the relative helicity of NOAA 12673 was studied in detail, based on a mix of NLFF models computed using either the W04 or the W12 method (with standard model parameter choices), depending on which of the NLFF fields had the lower value of Ediv/E, with the requirement Ediv/E <  0.08 (see their Fig. 4). In particular, the W12 models at 08:36 UT and 08:48 UT were of lower solenoidal quality than the corresponding W04 solutions and were thus dropped from the analysis. For the remaining time instances the W12 solutions were retained due to their relatively lower solenoidal errors. The resulting time evolution of |HJ|/|H𝒱| depicted an increase in |HJ|/|H𝒱| to values ≳0.15 prior to the X-class flares, corresponding decreases in the course of the flares, and a replenishment of the relative helicity ratio before the X9.3 flare (see their Fig. 7). In comparison, we find a similar time evolution of |HJ|/|H𝒱|, though indicating lower characteristic pre-flare values of ≃0.13 (violet bullets in Fig. 2d). Only the pronounced pre-X2.2-flare peak of |HJ|/|H𝒱| >  0.15 found in M19 is not recovered in our NLFF solutions. It should be noted that their estimates at 08:36 UT and 08:48 UT were based on two W04-based solutions with values of Ediv/E marginally below the nominal threshold of Ediv/E = 0.08. For NLFF models with Ediv/E ≳ 0.05, however, estimates of |HJ|/|H𝒱| may vary considerably among different helicity computation methods, even when based on the same sequence of NLFF models (compare Figs. 2c and 4c in Thalmann et al. 2019b). We may therefore explain our lower pre-X2.2-flare values as the inherent uncertainty of relative helicity estimates for a solenoidal quality of the underlying NLFF models in the regime 0.05 ≲ Ediv/E ≲ 0.08.

6. Conclusion

In conclusion, reliable estimations of the relative helicity budget (and that of related quantities) based on NLFF coronal magnetic field models remain a challenging task. The extended analysis of the various NLFF model parameters in this work and the comparison with the analysis presented in M19 showed that finite-volume relative helicity computation is highly sensitive to the details of the underlying magnetic field modeling.

A way to compensate for these issues is to employ multiple NLFF time series based on different (free) model parameter choices and to use mean estimates based on the subset of NLFF models that satisfy Ediv/E <  0.08 and |Emix|/EJ, s <  0.35 at a particular time instant. In this way, it is possible to obtain reliable estimates of the relative helicity (and related quantities) along with corresponding uncertainty estimates. This involves considerable computational effort and time, but it substantially increases the understanding and the reliability of the obtained results. As noted by W12, this might not be doable for long time series, but it might be a favorable approach around the times of the flares of interest.

Acknowledgments

We thank the anonymous referee for careful consideration of the manuscript and insightful comments. J. K. T. and M. G. were supported by the Austrian Science Fund (FWF): P31413-N27. X. S. is partially supported by NSF awards #1848250, #1854760, and NASA award #80NSSC190263. K. M. acknowledges support of the French Agence Nationale pour la Recherche through the HELISOL project ANR-15-CE31-0001. SDO data are courtesy of the NASA/SDO AIA and HMI science teams. This article profited from discussions during the meetings of the ISSI International Team Magnetic Helicity in Astrophysical Plasmas.

References

  1. Aly, J. J. 1989, Sol. Phys., 120, 19
  2. Berger, M. A. 1999, Plasma Phys. Control. Fusion, 41, B167
  3. Berger, M. A., & Field, G. B. 1984, J. Fluid Mech., 147, 133
  4. DeRosa, M. L., Wheatland, M. S., Leka, K. D., et al. 2015, ApJ, 811, 107
  5. Finn, J., & Antonsen, T. J. 1984, Comments Plasma Phys. Control. Fusion, 9, 111
  6. Gilchrist, S. A., Leka, K. D., Barnes, G., Wheatland, M. S., & DeRosa, M. L. 2020, ApJ, 900, 136
  7. Hou, Y. J., Zhang, J., Li, T., Yang, S. H., & Li, X. H. 2018, A&A, 619, A100
  8. James, A. W., Valori, G., Green, L. M., et al. 2018, ApJ, 855, L16
  9. Linan, L., Pariat, É., Moraitis, K., Valori, G., & Leake, J. 2018, ApJ, 865, 52
  10. Liu, L., Cheng, X., Wang, Y., et al. 2018, ApJ, 867, L5
  11. Low, B. C. 1996, Sol. Phys., 167, 217
  12. Metcalf, T. R., De Rosa, M. L., Schrijver, C. J., et al. 2008, Sol. Phys., 247, 269
  13. Moffatt, H. K. 1969, J. Fluid Mech., 35, 117
  14. Moraitis, K., Tziotziou, K., Georgoulis, M. K., & Archontis, V. 2014, Sol. Phys., 289, 4453
  15. Moraitis, K., Sun, X., Pariat, É., & Linan, L. 2019, A&A, 628, A50
  16. Pariat, E., Valori, G., Démoulin, P., & Dalmasse, K. 2015, A&A, 580, A128
  17. Pariat, E., Leake, J. E., Valori, G., et al. 2017, A&A, 601, A125
  18. Prior, C., & Yeates, A. R. 2014, ApJ, 787, 100
  19. Régnier, S. 2009, A&A, 497, L17
  20. Rust, D. M. 1994, Geophys. Res. Lett., 21, 241
  21. Scherrer, P. H., Schou, J., Bush, R. I., et al. 2012, Sol. Phys., 275, 207
  22. Schrijver, C. J., De Rosa, M. L., Metcalf, T. R., et al. 2006, Sol. Phys., 235, 161
  23. Sun, X. 2013, ArXiv e-prints [arXiv:1309.2392]
  24. Taylor, J. B. 1974, Phys. Rev. Lett., 33, 1139
  25. Thalmann, J. K., Inhester, B., & Wiegelmann, T. 2011, Sol. Phys., 272, 243
  26. Thalmann, J. K., Moraitis, K., Linan, L., et al. 2019a, ApJ, 887, 64
  27. Thalmann, J. K., Linan, L., Pariat, E., & Valori, G. 2019b, ApJ, 880, L6
  28. Valori, G., Démoulin, P., & Pariat, E. 2012, Sol. Phys., 278, 347
  29. Valori, G., Démoulin, P., Pariat, E., & Masson, S. 2013, A&A, 553, A38
  30. Valori, G., Pariat, E., Anfinogentov, S., et al. 2016, Space Sci. Rev., 201, 147
  31. Wheatland, M. S., Sturrock, P. A., & Roumeliotis, G. 2000, ApJ, 540, 1150
  32. Wiegelmann, T. 2004, Sol. Phys., 219, 87
  33. Wiegelmann, T. 2008, J. Geophys. Res.: Space Phys., 113, A03S02
  34. Wiegelmann, T., & Inhester, B. 2010, A&A, 516, A107
  35. Wiegelmann, T., & Sakurai, T. 2012, Liv. Rev. Sol. Phys., 9, 5
  36. Wiegelmann, T., Inhester, B., & Sakurai, T. 2006, Sol. Phys., 233, 215
  37. Wiegelmann, T., Thalmann, J. K., Schrijver, C. J., De Rosa, M. L., & Metcalf, T. R. 2008, Sol. Phys., 247, 249
  38. Wiegelmann, T., Thalmann, J. K., Inhester, B., et al. 2012, Sol. Phys., 281, 37
  39. Woltjer, L. 1958, Proc. Natl. Acad. Sci., 44, 833
  40. Zou, P., Jiang, C., Wei, F., et al. 2020, ApJ, 890, 10
  41. Zuccarello, F. P., Pariat, E., Valori, G., & Linan, L. 2018, ApJ, 863, 41
