Dataset Viewer (auto-converted to Parquet)
Columns: context (string, 100–14.5k chars), A (string, 100–4.09k), B (string, 100–3.15k), C (string, 100–3.91k), D (string, 100–4.49k), label (string, 4 classes)
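The schema above can be worked with directly in pandas. The sketch below is illustrative only: the row contents and the Parquet path are hypothetical placeholders, not values from this dataset; only the column names and the four label classes come from the schema.

```python
# Minimal sketch of the row shape implied by the schema above.
# The cell texts and the Parquet path are hypothetical placeholders.
import pandas as pd

# One illustrative row: a context passage, four candidate
# continuations (A-D), and the gold label (one of 4 classes).
df = pd.DataFrame(
    {
        "context": ["A special example of p(.)-convex risk measures ..."],
        "A": ["option text A"],
        "B": ["option text B"],
        "C": ["option text C"],
        "D": ["option text D"],
        "label": ["A"],
    }
)

# The label column is categorical over exactly four classes.
assert set(df["label"]).issubset({"A", "B", "C", "D"})

# To read the real auto-converted Parquet file (path is an assumption):
# df = pd.read_parquet("data/train-00000-of-00001.parquet")

# Look up the text of the gold option for row 0.
print(df.loc[0, df.loc[0, "label"]])  # prints "option text A"
```

Indexing the option column by the row's own label value is a convenient way to recover the gold continuation when scoring a model against this layout.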
A special example of $p(\cdot)$-convex risk measures, the so-called OCE, is discussed in the next section. Finally, in Sect. 5, the $p(\cdot)$-convex risk measures are used to study the dual representation of the $p(\cdot)$-dynamic risk measures.
4 Optimized Certainty Equivalent on $\mathbf{L^{p(\cdot)}}$
3 Convex risk measures on $\mathbf{L^{p(\cdot)}}$
In this section, a special class of $p(\cdot)$-convex risk measures, namely the Optimized Certainty Equivalent (OCE), is studied; it will be used as an example of dynamic risk measures in Sect. 5.
5 Dynamic risk measures on $\mathbf{L^{p(\cdot)}}$
A
As we approximated $u$ by $\tilde{u}$ and $\hat{u}$ in Lemma 4.2, we would approximate a process $\frac{2Z_{\cdot}^{2}}{\sigma(\cdot,S_{\cdot})S_{\cdot}}$ ...
Under Assumption 1, for any $p>0$, as $t\rightarrow 0\,,$ we have
Under Assumption 1, for any $p>0$, as $t\rightarrow 0$, we have
Under Assumption 1, for any $p>0$, as $T\rightarrow 0$, we have
Under Assumption 1, for any $p>0$, there exists a positive constant $D_{p}$ depending only on $p$ such that the following inequalities hold.
B
Moreover, even though our GSA methodology was originally developed for simulation models, one could consider applying it to Machine Learning-oriented methods for functional data [37]. Its role in this context would be to provide a simple yet probabilistically sound way to perform significan...
Figure 5: P-values for the SSP2 - SSP3 Transition. In all the panels, the x axis represents time (from 2020 to 2090), while the y axis shows the values of the adjusted (full line) and unadjusted (dotted line) p-value functions, from 0 to 1. Rows and colors denote different drivers, while the two columns are for Individual and I...
Looking at sensitivity indices, as in the previous case the impacts of income (GDPPC) and energy intensity (END) are the most evident. In the SSP2 to SSP3 case we also observe probably significant time dynamics for the fossil fuel availability (FF) variable. Unlike the previous case, we also observe th...
A fundamental tool to understand and explore the complex dynamics that regulate this phenomenon is the use of computer models. In particular, the scientific community has oriented itself towards the use of coupled climate-energy-economy models, also known as Integrated Assessment Models (IAM). These are pieces of soft...
Matteo Fontana acknowledges financial support from the European Research Council, ERC grant agreement no. 336155 - project COBHAM ’The role of consumer behaviour and heterogeneity in the integrated assessment of energy and climate policies’. Massimo Tavoni acknowledges financial support from the European Research Counc...
D
The equivalence between the absence of arbitrage opportunities and the existence of a martingale measure, known as the fundamental theorem of asset pricing (FTAP for short), is a core topic in mathematical finance. FTAP results are discussed in classical models under the assumption that the dynamics of risky assets are known ...
The pathwise approach, pioneered by [36], makes no assumptions on the dynamics of the underlying assets. Instead, the set of all models which are consistent with the prices of observed vanilla options was investigated and bounds on the prices of exotic derivatives were derived. The approach was applied to barrier optio...
of all probability measures on $\Omega_{t}$. In [1], a pathwise version of the first FTAP was given, under the existence of a superlinearly growing option. This condition ensures the compactness of the set of martingale measures compatible with option pr...
The three approaches also differ from a technical point of view. The pathwise approach assumes that there are some traded vanilla options from which marginal distributions of the underlying assets are deduced. Techniques from martingale optimal transport are employed to derive robust bounds for other exotic opti...
From the modelling point of view, the parametrization framework differs from the pathwise and the quasi-sure approaches in several ways. In the pathwise approach, randomness and filtrations are generated by the canonical process. The quasi-sure approach works with Polish spaces and filtrations come from universal com...
A
Sadler (2015) note that cascade and diffusion utilities coincide in their binary-state binary-action setting.
Ozdaglar (2011) and others. The foundation in our general setting is a novel compactness-continuity argument.
Ozdaglar (2011), owes to certain monotonicity that does not extend beyond their binary-binary setting.
Ozdaglar (2011) provide a general treatment of observational networks in an otherwise classical setting. But they only allow for binary states and binary actions. They introduce the condition of expanding observations, explaining that this property of the network is necessary for learning. They establish that it is als...
Sadler (2015) note that cascade and diffusion utilities coincide in their binary-state binary-action setting.
B
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$...
Romano (2005b). We show that any globally most powerful protocol is also locally most powerful (and thus admissible if $\lambda=0$) under linearity and normality.
Here, we consider the general case where $\lambda\geq 0$ and show that when $\lambda>0$, the planner’s subjective utility from research implies a notion of power. Globally optimal protocols generally depend on both $\lambda$ and the planner’s prior $\pi$. We restrict ou...
We consider two notions of optimality: maximin optimality (corresponding to the case where $\lambda=0$) and global optimality (corresponding to the more general case where $\lambda\geq 0$). Accordingly, we say that $r^{*}$ ...
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$...
A
Compared with these previous studies, this study identifies the exogenous shocks that transform the economy from stagnation to growth based on economic history studies and quantitatively examines the magnitude of the shocks.
We can incorporate into the model the elements of endogenous growth models, wherein scientists engage in the R&D of manufacturing goods in the non-Malthusian state, and the basic properties of the model would not change.
This section analytically investigates the properties of the model, particularly the population dynamics of the Malthusian state and the effect of a sudden increase in land supply.
The remainder of this paper is organized as follows: Section 2 introduces the model. Section 3 discusses the analytical properties of the proposed model.
As explained in Section 3.2, I model the relief of land constraints, which Pomeranz argues was the cause of the Great Divergence and the Industrial Revolution in Britain, as a sudden increase in $Z$.
C
For the experiment, we turn our focus to the purely congestive case, using the number of free-riders in a group to describe an efficient structure.
The treatment variation was implemented in the second part. In three baseline sessions, consisting of a total of 72 subjects in 18 groups, subjects were told that the second part of the experiment would be exactly the same as the first part, except that subject IDs would be randomly reassigned. In five treatment sessio...
We estimate this model using the data from our laboratory experiment and present the results of these estimations in Table 2.
In this section, we describe the design and procedures of the laboratory experiment in greater detail. The experiment was conducted using undergraduate students in the XS/FS Experimental Social Sciences Laboratory at Florida State University. We collected data from a total of 184 subjects across eight sessions. Subject...
In Section 2, we lay out a simple theoretical framework for the collaborative sharing environment. Section 3 describes the design and procedures for the laboratory experiment testing the effects of different information structures on collaboration patterns. We present and discuss the reduced-form results of the experime...
C
$\big\{\,f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X})):\xi\mapsto\int_{\mathbb{X}}f(x)\,\xi(\mathrm{d}x)\text{ is }\mathcal{B}(\mathbb{R})\text{ measurable}\,\big\}$
$\big\{\,f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X})):\xi\mapsto\int_{\mathbb{X}}f(x)\,\xi(\mathrm{d}x)\text{ is }\mathcal{B}(\mathbb{R})\text{ measurable}\,\big\}$
$\left\{(x,a)\in\mathbb{X}\times\mathbb{A}:P(t,x,a,\cdot)\in\left\{\xi\in\Xi:\xi(A)\in B\right\}\right\}$
In view of Lemma B.9, we have $\mathcal{B}(\Xi)=\mathcal{E}(\Xi)$.
is the $\sigma$-algebra containing sets of the form $\{\xi\in\Xi:\xi(A)\in B\}$ with $A\in\mathcal{B}(\mathbb{X})$ and $B\in\mathcal{B}([0,1])$...
C
Table 3: Analysis of the expected value of including uncertainty (NV: newsvendor model; PF: point forecasts).
Table 10 in the Appendix provides additional results on combinations where we apply distributional information for two sources of uncertainty while relying on the expected value for the third source. We find that the value of including uncertainty varies between the different model components, while also the sequence of...
While the application of the lookahead policy allows the retailer to account for uncertainty in the stochastic variables demand, supply, and spoilage in a multi-period setting where we assume underlying parameters for the probability distributions to be known, in practice, retailers need to adequately estimate these di...
In our analysis, for each information scenario, the retailer optimises the replenishment order quantity in each demand period according to the information available (i.e. expected values or distributions). This allows us to estimate the EVIU, i.e. cost reductions gained from precise distributional information, for each...
Our simulation study in Chapter 4 suggests that retailers are already able to reduce costs substantially even when accounting only for demand uncertainty. Therefore, we further compare average costs when using the lookahead policy incorporating only information on the demand distribution with the benchmark policy for t...
A
Notes: The table reports FE and IV estimates with robust standard errors (in parentheses), including time and country fixed effects. In columns 1 and 2, the dependent variable is the log of CO2 emissions (thousand metric tons of CO2), whereas in column 3 it is external debt. The external debt is instrumented by the exposu...
Our main research question is: what is the effect of external debt on GHG emissions? We only found a few papers that address this relationship, most of which deal with a single country (Katircioglu and Celebi, 2018; Beşe et al., 2021b; Beşe et al., 2021a; Beşe and Friday, 2022; Bachegour and Qafas, 2023). As r...
We find a positive and statistically significant effect of external debt on GHG emissions when we take into account the potential endogeneity problems. A 1 pp. rise in external debt causes, on average, a 0.5% increase in GHG emissions.
We contribute to the recent study of the relationship between external debt and GHG emissions with causal evidence in a wide panel of countries. We estimate the impact of external debt on GHG emissions in a panel of 78 EMDEs from 1990 to 2015 and, unlike previous literature, we use external instruments to address potent...
We contribute to the recent study of the relationship between external debt and GHG emissions with causal evidence in a wide panel of countries. We estimate the impact of external debt on GHG emissions in a panel of 78 EMDEs from 1990 to 2015 and, unlike previous literature, we use external instruments to address potent...
C
A regime-switching model is very natural given that the history of inflation is a succession of periods of low and high inflation of varying lengths. The idea of using such models is not new; it was first proposed for US inflation by Evans and Wachtel (1993). We follow Amisano and Fagan (2013) in the use of a regime-s...
Table 6: Parameters calibrated on the log-returns of the CPI-U. We refer to Section 3.2.4 for the definitions of these parameters.
Table 2: Parameters of the regime-switching $AR(1)$ process and the Gamma random walk. The parameters of the former are inspired by parameters calibrated on real inflation data that we present later, while the parameters of the latter are obtained by moment-matching as described above.
Similarly to the previous section, we simulate 10,000 one-year paths with a monthly frequency for both calibrated models and we check that the distributions of the annual log-returns (i.e. the annual inflation rates) are close to the historical ones. The comparison of the empirical densities (see Fig. 14(b)) does not reveal ...
The GRM is calibrated by matching the first three moments of the historical annual log-returns, while the RSAR(1) process is calibrated by log-likelihood maximization. The calibrated parameters are reported in Table 6(b).
D
Finally, each respondent is asked to indicate what they consider a fair wage for that job description. Their answer $R$ is recorded, with values ranging between $r=0$ and $r=50$. Improper answers were excluded from the survey and constituted less than 0.5% of the responses.
Table 1: Humans vs. AI. These results are reported in graphical form and discussed in Fig. 1. Notice a similar trend but with a downward offset of about $5 for the AI. For the anchors of $50 and $100, the histogram splits into two modes, rendering the mean, median, and standard deviations not representative. The modal ...
A trained transformer is a deterministic map, so the collection of tokens in response to a certain input string is unchanged if I apply the string repeatedly. Each output token represents a logit vector with as many components as there are words or characters in the set of tokens (GPT-3 uses sub-word tokenization). The...
I demonstrate that the minimum wage functions as an anchor for what Prolific workers consider a fair wage: for numerical values of the minimum wage ranging from $5 to $15, the perceived fair wage shifts towards the minimum wage, thus establishing its role as an anchor (Fig. 1 and Table 1). I replicate this result for a...
A summary of the results for realistic values of the anchor is shown in Fig. 1. The full range is in Fig. 2. I aggregate data into histograms approximating the probabilities of a certain wage $P$ for each job description. For each job description, I compute:
D
$\hat{\omega}_{i}^{\square}=\omega_{i}^{\square}\,/\sum_{\mathcal{P}\in\mathcal{M}^{\square}}\omega,\quad\text{s.t.}\ \ \square\in\{1,2\}$
Then we perform Top-$K$ filtering guided by $\omega$, i.e., sampling the refined super metapaths with importance factors in the top $K$ for subsequent feature aggregation.
After grouping the super metapaths, we update the features of the target CA by aggregating the features of other nodes in the metapath, and the final target CA feature is obtained by processing multiple super metapaths of the same group, as illustrated in Fig. 5(c).
In order to alleviate information redundancy and feature explosion during metapath feature aggregation, we adjust the importance factor of super metapaths before doing so.
After adjusting the importance factor of all the super metapaths, we then perform feature aggregation to update the CA features.
D
We also assume that, on average, there is no negative selection between discontinued drugs and their (ex-ante) profitability. This assumption is reasonable because the primary reason for discontinuations is negative clinical trial results; see, for example, DiMasi (2013) and Khmelnitskaya (2022).
Let us consider implementing the drug buyout scheme at the start of the discovery stage. This policy intervention faces different tradeoffs compared to the intervention after FDA approval. The main difference is that, at the discovery stage, the uncertainty associated with drug development has yet to be resolved, and t...
Panel (a) shows the mean of the expected cost of clinical trials and the FDA application and review process (in millions of U.S. dollars) at the time of discovery. The row “All Drugs” refers to all the drugs in our sample, and, “Drugs with Complete Path” refers to the sample of drugs for which we observe discovery, FDA...
These milestones inform us of the time it takes for a drug to reach the market from its initial discovery. In some cases, we also have sales data available, which allows us to evaluate the accuracy of our estimates of the drugs’ values. In the rest of the paper, we first summarize the institutional details and the data...
Even though most scientific experiments are completed at the time of application, additional expenses are still involved in setting up manufacturing capacity, as well as legal and administrative fees.[11] The FDA has prepared a set of instructions for drugs to receive approval, which clarifies that the “FDA may approv...
D
15 voters in all, with 3 experts: $N=15$, $K=3$. The two treatments
With $p=0.7$ and $q$ uniform over $[0.5,0.7]$, we have verified
In all experiments, we set $\pi=0.5$, $p=0.7$, and $F(q)$ Uniform
Table 1: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
Table 2: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
C
The applications of quantum algorithms in finance include portfolio optimization [rebentrost2018quantum],
We use the following definition to describe the quantum measurement of any arbitrary normalized state characterized by $n$ qubits. For further details, we refer to, e.g., [marinescu2011classical, Chapter 2.5].
[chakrabarti2021threshold, doriguello2022quantum, fontanela2021quantum, kubo2022pricing, ramos2021quantum, QC5_Patrick, rebentrost2018quantum, QC4_optionpricing]. We also refer to the monograph [jacquier2022quantum] and surveys [egger2020quantum, jacquieroverview2023, orus2019quantum] for (further) applications of quan...
In this paper, we propose a quantum Monte Carlo algorithm to solve high-dimensional Black-Scholes PDEs with correlation and a general payoff function which is continuous and piece-wise affine (CPWA), enabling the pricing of the most relevant payoff functions used in finance (see also Section 2.1.2). Our algorithm follows the idea ...
The applications of quantum algorithms in finance include portfolio optimization [rebentrost2018quantum],
B
Here, $\tau_{r}$ is defined in Lemma 3.4, and $\zeta_{h}:=\inf\{s\geq 0;~\sigma_{B}B^{3}_{s}+\mu_{B}s=h\}$ ...
where the function $\varphi(r,h)\in C^{2}(\mathbb{R}_{+}^{2})$ is given by (3...
Then, the function $l(r,z)$ is a classical solution to the following Neumann problem with Neumann boundary condition at $r=0$:
By applying Lemma 3.2 and Proposition 3.5, the function $v(r,h,z)$ defined by (3) is a classical solution to the following Neumann problem:
Then, the function $u(x,h,z)$ is a classical solution to the following HJB equation with Neumann boundary conditions:
C
It is clearly visible that closing the first firm already saves 7% of emissions and that one needs to close 7 companies to reach the emissions reduction target of 20%. The expected job loss curve (blue) and the expected output loss curve (green) show large jumps when the third firm is removed, followed by a slowly i...
To empirically test our framework, we approximate hypothetical decarbonization efforts with the removal of firms from the Hungarian production network. A firm that is removed from the production network no longer supplies its customers nor does it place demand to its (former) suppliers in the subsequent time step. It a...
The ‘Remove least-employees firms first’ strategy, which aims at minimizing job loss at each individual firm (shown in Fig. 3B), manages to keep expected job and output loss at low levels for the initially removed firms. But since this strategy focuses on job loss at the individual firm level, it fails to anticipate a high...
This results in only a gradual increase of expected job and output loss in the beginning, but fails to anticipate the effects of a systemically very important firm which triggers widespread job and output losses. 102 firms need to be closed in this strategy to reach the benchmark.
‘Remove least-employees firms first’ strategy that aims at minimum job loss on the individual firm level,
B
Regarding the representation of deregulation in the power sector, i.e., decoupling transmission and generation expansion decisions, one can pinpoint two generalised strategies in the literature. The first spans investigations aimed at developing an optimal transmission network expansion strategy that would account for ...
Regarding the representation of deregulation in the power sector, i.e., decoupling transmission and generation expansion decisions, one can pinpoint two generalised strategies in the literature. The first spans investigations aimed at developing an optimal transmission network expansion strategy that would account for ...
In this paper, we study the impact of the TSO infrastructure expansion decisions in combination with carbon taxes and renewable-driven investment incentives on the optimal generation mix. To examine the impact of renewables-driven policies we propose a novel bi-level modelling assessment to plan optimal transmission in...
Another strategy attempts to develop efficient modelling tools to consider the planning of the transmission and generation infrastructure expansion in a coordinated manner. For example, this coordinated modelling approach has been considered in (Moreira et al., 2017; Tian et al., 2020; Zhang et al., 2020). For the mode...
The proposed model assumes that the TSO takes a leading position and anticipates the generation capacity investment decisions influenced by its transmission system expansion. This assumption leads to the bi-level structure of the proposed model. Such a modelling approach is widely used in energy market planning. As an exam...
C
$C_{\alpha}=\begin{cases}\frac{1-\alpha}{\Gamma(2-\alpha)\cos\left(\frac{\pi\alpha}{2}\right)}&,\ \alpha\neq 1,\\ \frac{2}{\pi}&,\ \alpha=1.\end{cases}$
be the cumulative distribution function for the stable density $f_{\text{Stable}}$.
The density $f_{\text{Stable}}\in C_{b}^{\infty}(\mathbb{R})$ of
For stable densities we therefore suggest setting $C_{3}$ in Theorem
of the density is known precisely, i.e., we have to know $C_{3}$
C
$\pi^{i}=\varphi^{i,*}(\tilde{\mu}^{-i})+\frac{\theta_{i}}{\cdots}\,\pi^{j},\ i=1,\ldots,n.$ ...
$\mu=0.03$, $\sigma=0.2$, $\delta_{1}=1$, $\delta_{2}=2$, $\theta_{1}=0.5$, $\theta_{2}=0.7$, and $\alpha=0.01$. For the spe...
In order to solve the best response problem (3.3), we fix some investor $i$ and assume that the strategies $\pi^{j}$, $j\neq i$, of the other agents are given. Under these conditions we can rewrite the optimizatio...
This paper is organized as follows. In the next section, we introduce the linear price impact financial market. In Section 3, we explicitly solve the problem of maximizing expected exponential utility which results in the unique constant Nash equilibrium. The argument of the utility function consists of the difference ...
Note that we can find a unique Nash equilibrium if and only if problem (3.7) and the fixed point problem for $\pi^{i}$, given in terms of the system of equations (3.8), are uniquely solvable.
D
This, in turn, means that there is currently no methodology that can adequately reflect the P flows that are necessary before biomass production. In today’s world of unprecedented geopolitical power shifts and increasingly monopolistic commodity supply structures, it is in the vital interest of any country or economy t...
Trade data is not a useful measure for the flow of P per se. Measurements are usually taken as a USD value, but not in a meaningful unit that would provide information on the material P content of a traded good.[1] Since 2006, quantity data (mostly tonnage) has been available for some items in the used trade statistics. For ou...
Our approach to P flows therefore aims to use much more detailed trade data as the basis of the analysis \citep[see also][]{chen_p_net}. The novelty of our approach is that we transform and connect these data to other sources in such a way that we obtain results that can again be interpreted in terms of the material flo...
or as country-wise exceedance footprints \citep{p_exceed}. With these approaches it is possible to cover most of the countries in the world; however, for the analysis of flows that happen before the production of biomass, the resolution of input-output data cannot deliver satisfactory results, since mineral resources, fe...
We show that trade data can be used to approximate the flow of mineral resources in a meaningful way when combined with other data sources. Our flow analysis provides a useful foundation for the analysis of global P flows in terms of phosphate rock, fertilizers and related goods before biomass production. As such, it a...
B
$\psi^{f,4}(R)=f_{2}(R-g_{1})\mathds{1}_{\{R\in(g_{1},g_{2}]\}}+f_{2}(g_{2}-g_{1})\mathds{1}_{\{R\in(g_{2},\infty)\}}.$
We can see the impact of different characteristics of an EPS, such as participation rate, leg setting, and maturity, on fair premiums. It is widely acknowledged that a typical investor would not be willing to pay an upfront premium, especially when it is substantial. Therefore, we propose to focus on EPS products with ...
The most typical specifications of the protection leg are analogous to those of the fee leg. The selection of a particular protection leg depends on the buyer’s preferences, and thus it would be natural to expect that a broad spectrum of products should be offered by EPS providers.
Let us introduce the two most practically relevant forms of an EPS, which are called the buffer EPS and the floor EPS. Notice that the proposed terminology for a generic EPS refers directly to the protection leg, rather than the fee leg for which the choice of a buffer
Assuming that a perfect hedge of an EPS is feasible, the provider would be indifferent with respect to the buyer’s choice of the structure of the protection leg. However, in reality only partial hedging can be attained for more complex cross-currency products and thus some forms of the protection leg are likely to be...
B
However, defaulters on larger amounts or with a subsequent harsh default face substantially higher penalties in terms of income and location (see Figures 5 and 8); they move to areas with lower median home values and to zip codes with lower average wages and higher shares of minorities (see Appendix J).
What seems to be happening is that there are individuals who are delinquent on smaller amounts, possibly because of uninsurable shocks, who suffer the consequences of such defaults, but substantially less than those who default on larger amounts and seek bankruptcy and other legal reliefs. The latter appear to have ove...
What seems to be happening is that there are consumers who are delinquent on smaller amounts, possibly because of uninsurable shocks, who suffer the consequences of such defaults, but substantially less than those who default on larger amounts and seek bankruptcy and other legal reliefs. The latter appear to have overe...
We find that the defaulters on larger amounts or with a subsequent harsh default have substantially higher penalties in terms of income and location: they move to areas with lower median home values and to zip codes of lower economic activity.
We show that the recovery is slow, painful, and in many respects only partial. In particular, after several years (up to 10), credit scores are still lower by 16 points, incomes never recover and appear to be substantially lower (by about 7,000 USD, or 14% of the 2010 mean), and the defaulters live in lower “quality” neighbor...
B
$\sum_{j}\left(q\cdot s_{j}f_{j}+\sum_{k}\operatorname{sign}(s_{j}-s_{\ldots})_{jk}\right)$
to Minimax. As with IRV, each ballot is a ranking of some or all of the candidates.[10] While it is often recommended that equal rankings be allowed under
Minimax: Vote sincerely.[22] While a viability-aware strategy was included for Minimax in Wolk et al. (2023),
Block Approval: Voters vote for any number of candidates.[27] We use the same sincere strategy as for single-winner Approval Voting.
Approval: Vote for all candidates with $u_{j}\geq EV$.
B
The two models are then calibrated to three different data sets, the 2018-21 data, the 2021-23 data and the whole 2018-23 data using Markov Chain Monte Carlo methods. This Bayesian approach to calibration allows a joint estimation of latent factors, taking into account possible interdependencies and also avoids the nee...
We calibrate the 3-factor model and the 4-factor model to the spot-price data in the time interval 2018-2021. We start with an overview of the posterior properties of the model parameters obtained from the MCMC procedure described in Section 3.5. Later in this section, we present a more detailed analysis of our calibra...
The paper is structured in the following way: In Section 1 we give a non-exhaustive overview of the literature on electricity spot price models and their calibration. Table 1 provides a direct comparison of the characteristics for some of these models. Section 2 introduces the 4-factor model, which is an extension o...
We calibrate the 3-factor model and the 4-factor model to the spot-price data in the time interval 2021-2023. We start with an overview of the posterior properties of the model parameters obtained from the MCMC procedure described in Section 3.5. Later in this section, we present a more detailed analysis of our calibra...
We calibrate the 3-factor model and the 4-factor model with changepoint to the spot-price data in the whole time interval 2018-2023. We start with an overview of the posterior properties of the model parameters obtained from the MCMC procedure described in Section 3.5. Later in this section, we present a more detailed ...
B
$\{\hat{\mathbf{Y}}^{(t)}\}_{t=kr+1}^{(k+1)r}$
Figure 4. Overview of DoubleAdapt with a data adapter $DA$ and a model adapter $MA$. The parameters are shown in red.
7: Update data adapter $DA$ and model adapter $MA$:
Figure 4 depicts the overview of our DoubleAdapt framework, which consists of three key components: the forecast model $F$ with parameters $\theta$, the model adapter $MA$ with parameters $\phi$, and the data adapter $DA$ with parameters $\psi$...
$+MA$ $+H$ $+H^{-1}$
B
Table 4 repeats the analysis presented in Table 3 Column (3) and Column (6), but with emotions assessed separately for messages containing earnings or trading-related information (“Finance”) (Columns 2 and 6) and those conveying other information (“Chat”) (Columns 1 and 5). Next, I also contrast messages containing ori...
Notes: This table presents the relationship between investor enthusiasm and two stylized facts regarding initial public offering (IPO) returns. Columns 1-4 depict the first day return, calculated as the difference between the closing and the IPO price, divided by the IPO price. Columns 5-7 illustrate the 12-month indus...
Notes: This table presents the relationship between investor emotions, investor types and two stylized facts regarding initial public offering (IPO) returns. In Panel (a) the dependent variable is the first day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. In P...
Notes: This table presents the relationship between investor emotions, information content and two stylized facts regarding initial public offering (IPO) returns. The first dependent variable is the first day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. Th...
Notes: This table presents the correlation between investor emotions, information content and two stylized facts regarding initial public offering (IPO) returns. The first dependent variable is the first day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. Thi...
C
In the first use case, we aim to improve the performance of Random Forest methods for churn prediction. We introduce quantum algorithms for Determinantal Point Processes (DPP) sampling [16], and develop a method of DPP sampling to enhance Random Forest models. We evaluate our model on the churn dataset using classical ...
In our work, we use quantum neural networks with orthogonal and compound layers. Although these neural networks roughly match the general VQC construction, they produce well-defined linear algebraic operations, which not only makes them much more interpretable but gives us the ability to analyze their complexity and sc...
In this work, we have explored the potential of quantum machine learning methods in improving forecasting in finance, with a focus on two specific use cases within the Itaú business: churn prediction and credit risk assessment. Our results demonstrate that the proposed algorithms, which leverage quantum ideas, can effe...
In the second use case, we aim to explore the performance of neural network models for credit risk assessment by incorporating ideas from quantum compound neural networks [17]. We start by using quantum orthogonal neural networks [17], which add the property of orthogonality for the trained model weights to avoid redun...
In [17], an improved method of constructing orthogonal neural networks using quantum ideas was developed. We describe it below in brief.
C
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, it has been habitual that large gains and losses occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is wel...
For large $n$ we also observe that mGB approximates the tail end better than GB2 – consistent with smaller KS values in Fig. 15 and a smaller number of nDK. However, neither approximates the preceding portion of the tail well, as indicated by the “potential” DK. This has to do with the fact that neither of the di...
With the above in mind, we first address Figs. 4 – 13. According to Figs. 4 and 5, daily RV appears to be the closest to being commensurate with the Black Swan behavior, as both LF and GB2 approximate the tail of the distribution better than mGB and LF does not point to the existence of either DK, $p<0.05$...
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, it has been habitual that large gains and losses occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is wel...
The main result of this paper is that the largest values of RV are in fact nDK. We find that daily returns are the closest to the BS behavior. However, as $n$ increases we observe the development of “potential” DK with statistically significant deviations upward from the straight line. This trend termi...
D
Feng et al. (2021) establish the convergence to equilibrium of learning algorithms in first- and second-price auctions, as well as multi-slot VCG mechanisms. Our results in §4.1 provide an empirical counterpart to their theoretical results, but also add nuance as to the speed of convergence of different algorithms in r...
The second branch instead takes the perspective of a single bidder who uses learning algorithms to guide her bidding process. Weed et al. (2016) focus on second-price auctions for a single good, and assume that the valuation can vary either stochastically or adversarially in each auction. In a similar environment, Bals...
Nekipelov et al. (2015) propose techniques for estimating agents’ valuations in generalized second-price auctions, which stands in contrast to our method, which directly utilizes agents’ learning algorithms and is independent of the specific auction format. In a different direction,
Feng et al. (2021) establish the convergence to equilibrium of learning algorithms in first- and second-price auctions, as well as multi-slot VCG mechanisms. Our results in §4.1 provide an empirical counterpart to their theoretical results, but also add nuance as to the speed of convergence of different algorithms in r...
To the best of our knowledge, the closest papers to our own are Kanmaz and Surer (2020), Elzayn et al. (2022), Banchio and Skrzypacz (2022), and Jeunen et al. (2022). The first reports on experiments using a multi-agent reinforcement-learning model in simple sequential (English) auctions for a single object, with a res...
B
Our work is related to and extends various strands of the literature, which we briefly summarise below. Prior to G&M’s research, the timing of contributions and the level of funds raised had received considerable attention in the theoretical literature. Varian (1994) shows that, under appropriate assumptions, a sequent...
Andreoni et al. 2002; Coats et al. 2009; Gächter et al. 2010; Figuieres et al. 2012; Teyssier 2012 in public goods games without a threshold). The vast majority of the aforementioned studies conclude that the sequential protocol is significantly more effective in solving the public goods problem, compared to the simult...
In this model, individuals have to make decisions sequentially, without knowing their position in the sequence (position uncertainty), but are aware of the decisions of some of their predecessors by observing a sample of past play. In the presence of position certainty, those placed in the early positions of the sequen...
A similar result regarding the superiority of the sequential mechanism has also been established in the literature on general public goods, particularly in the context of common pool resource games.[4] Sequential mechanisms have also been analysed in give-some and take-some social dilemma games, for example, see Tung an...
The early and recent experimental literature has provided substantial evidence on the superiority of a sequential contribution mechanism compared to a simultaneous one (see Erev and Rapoport 1990; Rapoport and Erev 1994; Rapoport 1997, in step-level public goods games; and
D
The financial market is marked by the participation of a diverse range of investors, each with their unique attitudes and investment strategies. In order to capture this diversity, we build upon existing concepts and introduce the multi-SSQW that corresponds to a multitude of investors. This approach allows us to desig...
meaningful results. Using a multi-SSQW quantum circuit to simulate financial stock distributions can be seen as employing well-orchestrated circuits. The multi-SSQW approach is designed to navigate complex quantum state spaces efficiently, aiming for rapid exploration and convergence to a desirable state or solution th...
The probability distributions in position space of DTQW [33, 34, 35], shown in Fig. 2, do not resemble the probability distributions encountered in everyday life. In the marketplace, prices are typically determined by the interaction of buyers and sellers. The price of a good or service in the market is established through the ag...
In this section, we demonstrate the ability of multi-SSQW to function as an effective financial simulator. It is capable of accurately modeling intricate financial systems and providing reliable simulations. One of the highlights of this approach is its inherent capability to exhibit convergence and provide rapid resul...
We have extended the concept of SSQW to multi-SSQW, employing multiple walkers to represent investors with diverse investment strategies in the market. In modeling the intrinsic uncertainty in financial markets, we showcase the efficacy of our purpose-built multi-SSQW quantum algorithm and circuitry through its applica...
D
More generally, chasing past performance in financial decisions is a form of success-based imitation.
In contrast to our five experts, in Apesteguia et al. (2020) subjects could choose among 80 leaders and in Holzmeister et al. (2022) there was no choice.
The present study extends the design of Apesteguia et al. (2020) by varying the complexity of the underlying task and the information investors receive about the experts. When our investors do not have access to information on experts’ decision quality, we confirm that a substantial fraction of subjects chooses to dele...
Of course, depending on the link between an action’s current earnings and future earnings, such imitation may actually decrease payoffs (see e.g. Vega-Redondo, 1997; Huck et al., 1999; Offerman et al., 2002) as well as possibly increase them (see e.g. Schlag, 1998; Apesteguia et al., 2018).
See e.g. Pelster and Hofmann (2018) and Apesteguia et al. (2020) for discussion of the scope and operational details of such platforms.
C
We are ready to state our main result in the second part of the paper. Here, to simplify the exposition and to obtain sharp numerical results, we fix $(\alpha,\beta)=(0.75,0.5)$, $(\bar{x},\bar{y})=(4,2)$...
For $\lambda$ values as in Theorem 6.2 (satisfying the SC), we obtain a pretty smooth relation between $\lambda$ and the ergodic sums of $f$ as in Figure 15 (using 5000 terms to estimate the ergodic sums). Extending Theorem 6.2 (and Figure 15) using the naive estimates of the ergo...
For $2.75<\lambda<4$, except $\lambda$ values corresponding to the few windows in Figure 12 (and possibly except some $\lambda$ values whose total Lebesgue measure is $0$, see Proposition 6.3 below), there exists a unique acim for $f$. Moreover, for these $\lambda$...
We need “Lebesgue almost” (or “except a set of measure zero”) in Theorems 1.4, 6.2, and Proposition 6.3 since the following (anomalous) examples are known, see [Hofbauer and Keller, 1990] and [Johnson, 1987]: for a quadratic map $T_{\lambda}(x)=\lambda x(1-x)$...
For $1<\lambda<4$, the ergodic sums of $f$ are as in Figure 1 (possibly except some $\lambda$ values whose total Lebesgue measure is $0$).
D
When transacting on the Uniswap Labs interface, users are shown a quoted output amount (resp. input amount) for the input amount (resp. output amount) that they entered into the interface, in the form of a quoted average execution price. After seeing the quoted price, users can then decide whether to sign and broadcast...
the minimum amount out or the maximum amount in, which is the worst-case amount of the output/input asset that the user is willing to receive/spend; and a deadline, specifying the time by which the swap must be completed, after which the swap is invalid. Even if a transaction is finalized on the blockchain, the under...
where $\mathsf{realizedPrice}_{i}$ is the average realized execution price of the swap (the amount of the input asset spent, over the amount of the output asset received), and $\mathsf{quoted}$...
The slippage tolerance of a swap is defined as the ratio of the quoted amount out over the minimum amount out (resp. the quoted amount in over the maximum amount in), minus 1, expressed in basis points (bps). The price impact of a swap is defined as the ratio of the quoted price over the market mid price minus 1, expre...
The price impact of a swap is defined as the ratio of the quoted price over the market mid price minus 1, expressed in bps, and directly measures market depth. We assume that the quoted price incorporates LP fees and the expected liquidity consumption of the swap, as in the case with the Uniswap Labs interface.
C
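The slippage-tolerance and price-impact definitions in the row above reduce to simple ratio arithmetic. A minimal sketch, with made-up amounts and a hypothetical mid price (none of these values come from the paper):

```python
# Hypothetical sketch of the slippage-tolerance and price-impact definitions
# above; the amounts and mid price are invented example values.

def bps(ratio: float) -> float:
    """Express (ratio - 1) in basis points."""
    return (ratio - 1.0) * 10_000

def slippage_tolerance_bps(quoted_amount_out: float, min_amount_out: float) -> float:
    # Ratio of the quoted amount out over the minimum amount out, minus 1, in bps.
    return bps(quoted_amount_out / min_amount_out)

def price_impact_bps(quoted_price: float, mid_price: float) -> float:
    # Ratio of the quoted price over the market mid price, minus 1, in bps.
    return bps(quoted_price / mid_price)

# Example: a swap quoted at 100.0 out with a floor of 99.5 has ~50 bps tolerance.
print(round(slippage_tolerance_bps(100.0, 99.5), 1))  # → 50.3
print(round(price_impact_bps(quoted_price=1.002, mid_price=1.000), 1))  # → 20.0
```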
In addition to FMA data, we collect additional public information documented on their websites. Our aim is to categorize VASPs by their service offering. We construct categorical variables that indicate whether the VASP offers custody services, facilitates payments, allows users to exchange cryptoassets, implements a t...
Whilst the sample is small and the features are few, to ensure consistency and objectivity in categorizing VASPs we exploit an unsupervised learning method.
We implement two approaches to extract on-chain VASP-related information for the UTXO-based and the account-based DLTs. The entities that operate on the Bitcoin blockchain interact with each other as a set of pseudo-anonymous addresses. We exploit known address clustering heuristics (Androulaki et al., 2013; Ron & Sha...
Similarly to VASP-2, on-chain activity is higher than the value reported on the balance sheet after 2020. As expected, the amount of cryptoasset holdings is small, as the VASP is non-custodial, and exceeds 100K EUR only after 2021. All reported assets are ether: the absence of stablecoins is expected, as this VASP trad...
VASP-5 is the last we analyze; values are shown in Figure 9. This VASP bases its services on the purchase and sale of bitcoins. For this VASP, using both attribution tags in the TagPack database mentioned above and re-identification strategies, we could only gather information for a few months in between 2014 and 2017 ...
A
In convex multi-objective optimization one usually focuses on weakly Pareto optimal points (or weakly $\epsilon$-Pareto optimal points) since they can equivalently be characterized as solutions to the weighted sum scalarization. That is, a point $x^{*}\in\mathbb{X}$...
However, despite this issue we will be able to compute a set which contains the set of all Pareto optimal points of the convex problem (3) (and is included in the set of all $\epsilon$-Pareto optimal points) if we make additional assumptions on the structure of the constraint set $\mathbb{X}$.
The goal of this paper is to introduce a method which approximates the set of all Nash equilibria of a convex game. Hence, $\epsilon$-approximate solution concepts are considered for both Nash equilibria and Pareto optimality. Similar to the characterizations proven in [6], the set of $\epsilon$...
As mentioned above in issue (i), we try to cover the set of all Pareto optimal points and therefore make additional assumptions on the structure of the constraint set $\mathbb{X}$. In the following, we will consider constraint sets $\mathbb{X}$ that are polytopes, whereas in Remark 4.13, we...
In the case of linear games the set of all Pareto optimal points of problem (3) can be computed exactly and Theorem 3.1 can be used to numerically compute the set of all Nash equilibria of such games, see [6]. If the game is not linear, approximations need to be considered. In the following, we will therefore relate th...
A
Different answers have been given to these limitations. Some authors suggest introducing price spikes via jump-diffusion processes [28] [26] [33], while others explore multi-factor jump-diffusion models [46] or alternative distributions for the residuals [16]. The next sections will concentrate on these proposals and...
We begin our analysis by exploring various marginal models for spot energy price and daily temperature. In particular, we dive deep into a large literature on energy and commodity modeling [29] [58]. First, we examine mean-reverting diffusion models. Pioneering models by Gibson and Schwartz [34], Schwartz [53], and Luc...
Several papers have studied the possibility to consider non-Gaussian increments. A particularly popular one is to combine Brownian motion with a compound Poisson process [28] that would capture the price spikes usually observed in energy prices, extending (5) as follows
Taking inspiration from Schwartz’s models, several papers have explored the possibility of combining multi-factor models with Lévy processes [18] [46]. The adaptation of (5) to a multi-factor model of $n$ factors takes the following form:
Multi-factor models with non-Gaussian increments represent another popular alternative for modeling erratic dynamics. Two-factor and three-factor models with Gaussian increments were developed by Schwartz through different collaborations [53] [52] [34] [45]. The underlying idea is that the spot prices could be driven by a lo...
B
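The combination of a mean-reverting diffusion with a compound Poisson jump component, as discussed in the row above, can be sketched with a plain Euler scheme. All parameter values below are invented for illustration; this is not any of the cited calibrated models:

```python
import numpy as np

# Illustrative Euler simulation of a mean-reverting jump-diffusion for a
# log spot price x:
#   dX_t = kappa * (mu - X_t) dt + sigma dW_t + dJ_t,
# where J is compound Poisson with rate lam and Normal(jump_mu, jump_sigma)
# jump sizes. All parameters are made up for illustration.

rng = np.random.default_rng(0)
kappa, mu, sigma = 5.0, 3.5, 0.8           # mean-reversion speed, level, volatility
lam, jump_mu, jump_sigma = 10.0, 0.6, 0.2  # jump intensity and jump-size law
T, n = 1.0, 2_000
dt = T / n

x = np.empty(n + 1)
x[0] = mu
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
    n_jumps = rng.poisson(lam * dt)                       # jumps in this step
    dJ = rng.normal(jump_mu, jump_sigma, size=n_jumps).sum()
    x[i + 1] = x[i] + kappa * (mu - x[i]) * dt + sigma * dW + dJ

spot = np.exp(x)  # spot price when x is interpreted as the log price
print(spot.shape)
```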
The other possibility is that trading volume is accurate but the reported open interest is incorrect. Market participants do observe the changes in open interest and attempt to infer how informed investors are being positioned in the market. The general heuristic is that if open interest is rising and the price is incr...
In this work we consider the most liquid Bitcoin perpetual swaps on seven of the top cryptocurrency exchanges.
We find that trading volume cannot be reconciled with the reported changes in open interest for the majority of these exchanges. It is unclear whether this is due to delayed or unreported trading volume or due to incorrectly reported open interest. In our view, the most likely scenario is that both are true, perhaps, h...
Perpetual swaps were introduced by BitMEX in 2016 [Hayes]. They are futures contracts with no expiry. These contracts allow for high leverage, with most cryptocurrency exchanges offering leverage in the range of 100x–125x and some recent platforms allowing up to 1000x(!) levera...
Our datasets are comprised of tick-by-tick trades, block trades, liquidations, and open interest as reported by the APIs of the respective exchanges mentioned in Table 1. We limit our attention to Bitcoin linear perpetuals quoted in USDT (https://tether.to/en/) and inverse perpetuals quoted in USD, as these are the mos...
A
$(F(w),L(w))=\left(\frac{\int_{0}^{w}\mathrm{d}y\,\rho(y)}{\int_{0}^{\infty}\ldots}\right)$
The Gini coefficient is $0$ for a density concentrated on mean wealth (that is, for a wealth-egalitarian society), whereas it approaches its upper limit of $1$ as the wealth is concentrated into an ever-vanishing proportion of the population. See [5, 19] for a discussion of the nonstandard properties of wealth distribut...
The Gini coefficient also has a geometric interpretation when the Lorenz curve is used to represent a distribution of wealth. If the population were to all have the mean wealth, then the Lorenz curve would be the identity and correspond to an egalitarian society. As a population moves toward total oligarchy, the Lorenz...
We introduced a variant of the Yard-Sale Model for which the Gini coefficient of economic inequality monotonically increases under the resulting continuum dynamics yet the rate of change in time of the Gini coefficient permits an upper bound. The way in which this bound holds is similar to the entropy – entropy product...
A Lorenz curve represents a distribution of wealth in the unit square, $[0,1]\times[0,1]$, by plotting on the abscissa the fraction of a population with wealth less than $w$ and the fraction of total wealth held by this subset of the population on the ordinate. More precisely,...
B
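The Gini coefficient and Lorenz-curve definitions in the row above can be checked numerically. A minimal sketch using an empirical Lorenz curve on made-up wealth samples (this is not the Yard-Sale Model dynamics themselves, only the inequality measure they track):

```python
import numpy as np

# Gini coefficient via an empirical Lorenz curve; Gini = 1 - 2 * (area under
# the Lorenz curve). Wealth samples below are invented for illustration.

def gini(wealth):
    w = np.sort(np.asarray(wealth, dtype=float))
    n = w.size
    lorenz = np.cumsum(w) / w.sum()                # cumulative share of total wealth
    pts = np.concatenate(([0.0], lorenz))          # Lorenz curve sampled at i/n
    area = ((pts[:-1] + pts[1:]) / 2.0).sum() / n  # trapezoidal area under the curve
    return 1.0 - 2.0 * area

print(gini([1.0, 1.0, 1.0, 1.0]))  # egalitarian society → 0.0
print(gini([0.0, 0.0, 0.0, 1.0]))  # one agent holds everything → (n-1)/n = 0.75
```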
Table 4 and Figure 8 display the pricing results up to 100 dimensions. It is clear that the LSM doesn’t perform as well as DKLs in high-dimensional MJD. When $d\geq 60$, the pricing errors of LSM are greater than $5\%$ and even reach $20\%$ in 100 dimensions. In contrast, the ma...
A common approach to mitigate the curse of dimensionality is the regression-based Monte Carlo method, which involves simulating numerous paths and then estimating the continuation value through cross-sectional regression to obtain optimal stopping rules. [1] first used spline regression to estimate the continuation val...
Figure 5 illustrates the pricing error and computational time of DKL methods with various numbers of inducing points in 2-dimensional and 50-dimensional cases. It is noteworthy that there are no significant increases in computation time as the dimensions increase, leading to the conclusion that DKL models are not susce...
In this work, we will apply a deep learning approach based on Gaussian process regression (GPR) to the high-dimensional American option pricing problem. The GPR is a non-parametric Bayesian machine learning method that provides a flexible solution to regression problems. Previous studies have applied GPR to directly le...
Valuing an American option involves an optimal stopping problem, typically addressed through backward dynamic programming. A key idea is the estimation of the continuation value of the option at each step. While least-squares regression is commonly employed for this purpose, it encounters challenges in high-dimensions,...
D
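The regression-based Monte Carlo idea discussed in the row above (least-squares estimation of the continuation value inside backward induction) can be sketched as a plain Longstaff-Schwartz pricer for a Bermudan put under GBM. Parameters are made up, and this is not the paper's deep-kernel-learning estimator:

```python
import numpy as np

# Longstaff-Schwartz sketch: Bermudan put under GBM with a quadratic
# polynomial basis for the continuation value. All parameters are invented.

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths (n_paths x (n_steps + 1)).
z = rng.standard_normal((n_paths, n_steps))
log_s = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(np.concatenate([np.zeros((n_paths, 1)), log_s], axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)  # cashflow if never exercised early
for t in range(n_steps - 1, 0, -1):
    payoff *= disc                      # discount cashflows one step back
    itm = K - S[:, t] > 0               # regress only on in-the-money paths
    if itm.sum() > 10:
        x = S[itm, t]
        beta = np.polyfit(x, payoff[itm], 2)   # continuation value regression
        cont = np.polyval(beta, x)
        exercise = (K - x) > cont              # exercise when intrinsic > continuation
        payoff[itm] = np.where(exercise, K - x, payoff[itm])

price = disc * payoff.mean()            # Monte Carlo estimate of the option price
print(round(price, 2))
```

With these parameters the estimate lands near the known American put value of roughly six; the quadratic basis is the simplest choice and is exactly where high dimensions make LSM struggle, motivating the DKL alternative discussed above.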
$S(z,y)=\int_{0}^{y-z}\ell(s)\,\mathrm{d}s\,.$
The optimal coupling of the MK minimisation problem induced by the scoring function given in (14) is the comonotonic coupling.
The optimal coupling of the MK minimisation problem induced by the score given in (11) is the comonotonic coupling.
The optimal coupling of the MK minimisation problem induced by any consistent generalised piecewise linear score is the comonotonic coupling.
The optimal coupling of the MK minimisation problem induced by any consistent scoring function for the entropic risk measure is the comonotonic coupling.
A
$U_{i}\bigl(x^{i},\overline{x}^{(i)}\bigr):=-\exp\Bigl\{-\frac{1}{\delta_{i}}\,\ldots\Bigr\}.$
In this section, we consider the $n$-agent games. The market model is the same as in [23] and each agent invests in their own specific stock or in a common riskless
For the $n$-agent games, we define, for each agent $i=1,\cdots,n$, the type vector
Now we formulate the representative agent’s optimization problem. Note that this is a mean field game with common noise $B$, so conditional expectations given $B$ will be involved. As argued in [11, 22], conditionally on the Brownian motion $B$, we can get some kind of law of large numbers and asymptoti...
Each agent derives a reward from their discounted inter-temporal consumption and final wealth; to be specific, for agent $i$, the expected payoff is
D
The fees paid by non-atomic arbitrage transactions repeatedly exceed current block rewards on the Ethereum PoS consensus layer. For instance, their value exceeds the current consensus layer block reward by more than a factor of 10 in 15,360 blocks during our data collection period, and we further note that their value m...
Non-atomic arbitrage opportunities existed ever since the launch of DEXes, as these naturally arise when you have two markets quoting prices for the same assets. However, Ethereum’s transition from Proof-of-Work (PoW) to Proof-of-Stake (PoS) in September 2022 marked a watershed moment, due to the changes in block build...
To provide a better understanding of non-atomic arbitrage, we go through a case study of block 18,360,789 – a block with a significant price change in the lead-up to the block, i.e., the time between the previous block proposal and the block proposal itself. In Figure 4, we plot the time and the value of bids from the ...
Coming back to Figure 4, we can observe that around five seconds after the price starts to change on Binance.com the bids start to increase. At this point, the price difference appears to be big enough for non-atomic arbitrage to be profitable. Further, we find that bids from the builders that are associated with non-a...
Previous works [6, 18] have demonstrated that MEV (i.e., high-value transactions) presents a risk to the consensus layer in PoW. To be exact, the consensus is vulnerable to time-bandit attacks, as it can be rational for the block proposer to fork the blockchain to exploit MEV in previous blocks themselves. Re-orgs, req...
D
In summary, current research on LLMs in financial applications aligns with and reinforces the methodologies underpinning each component of the proposed system. However, to the best of our knowledge, the presented approach is distinct both in its design and evaluation methodology as it leverages multi-modal financial da...
The model’s output, structured in a concise format, includes a decision (“buy”, “sell”, or “hold”) along with a clear, step-by-step explanation of the reasoning behind this choice. The terms “buy” and “sell” are defined within the context of portfolio positioning (long and short positions, respectively), while “hold” i...
MarketSenseAI’s architectural framework, depicted in Figure 1, merges four core components responsible for data inputs with a fifth component to facilitate the final recommendation (i.e., buy, hold, or sell). This component synthesizes all the information and provides a concise explanation for the respective decision. ...
The implementation of MarketSenseAI was executed using Python 3.11, leveraging the LangChain framework (Chase, 2022) for prompt construction and utilizing OpenAI’s API for accessing the GPT-4 model. Each component of MarketSenseAI, as outlined in Section 3, functions independently, running as a standalone script. The o...
The signal generation component, as the final stage in the MarketSenseAI pipeline (Figure 1), integrates the textual outputs from the news, fundamentals, price dynamics, and macroeconomic analysis components. This process results in a comprehensive investment recommendation for a specific stock, paired with a detailed ...
B
The proof of this quadrature discretisation result follows from an application of Theorem 2.1 in Grzelak (2022a). In B, we analogously obtain the pdf of the randomised component process $Y^{\vartheta}_{j}(t)$...
To address the limitation highlighted earlier, we now introduce a local volatility model for the composite process that is fully defined within the well-established framework of stochastic processes with deterministic parameters, while maintaining the marginal distributions obtained from the quadrature discretization o...
that has no added layer of randomisation. We show that the solution of this SDE has a probability density function of the same shape as is obtained from the randomised composite process $X^{\bm{\vartheta}}(t)$, ...
The discretization achieved through Gauss quadrature is significant for the applicability of this technique, as it links the random composite process to a finite number of concrete conditional component processes. This is valuable in applications like the process calibration, where each conditional process may benefit ...
The main result of this section is the stochastic switching equivalent to 3.6, an SDE of local-volatility type which may be defined in the framework of the probability space $(\bar{\Omega},\bar{\mathcal{F}},\bar{\mathbb{P}})$...
C
The dashed lines correspond to the convexity calculated in the Hull-White model using Eqs. (E.1) and (E.2). This model is calibrated at the ATM implied volatility of a caplet with the same contract duration.
We also compare the 3M SOFR futures convexity of Eq. (5.8) with the 3M Eurodollar convexity, where both include the effects of option smile and skew. For more details about the calculation of the Eurodollar convexity, see Appendix D.
Fig. 2 shows that the impact of not modelling the option smile and skew correctly on the futures convexity may be around 20% for short maturities.
the impact of correctly capturing market skew and smile in convexity by comparing with the convexity extracted using a Hull-White model calibrated to the at-the-money
Finally, we show the difference between the convexity of 3M SOFR futures and the convexity of the Eurodollar future in Fig. 3.
B
Not_Invested × High-Ability × $\mathbf{P^{e}}$
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer experiences undertreatment.
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer switched to a new expert in the current round. Un...
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer experiences overtreatment.
This table reports results of panel ordered logistic regressions using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is an ordinal variable that captures the number of consumers (0 - 3) who approached the expert in t...
D
Trader: Individuals or entities engaged in the purchase and sale of perpetual contracts. These traders furnish collateral to maintain and manage their positions through the trading of such contracts.
Matching Module: This module is entrusted with storing, correlating, and executing purchase and sale orders of contracts.
Risk Control Module: This module is vital for assessing and supervising the position of every trader account, contingent on the orders that have been executed. Its role is pivotal in ascertaining that the provided collateral is sufficient to offset potential deficits. Furthermore, in specific scenarios, it assumes cont...
Custody Module: This module is responsible for ensuring the security of assets across all trader accounts. It consistently updates and retains the latest balance details and facilitates both deposit and withdrawal operations initiated by traders.
The Oracle Pricing Model (Fig. 6) and VAMM Model (Fig. 7) gravitate toward a greater degree of decentralization. Both models leverage smart contracts for order matching, with Liquidity Providers assuming the role of direct counterparties to traders, engendering an indirect mode of trade execution among traders. Neverthe...
C
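The Risk Control Module described above checks that posted collateral can absorb potential losses. A minimal sketch of such a margin check follows; the class names, field layout, and the 5% maintenance-margin rule are illustrative assumptions, not taken from any specific protocol.

```python
from dataclasses import dataclass

# Hypothetical sketch of a perpetual-exchange risk-control check: equity
# (collateral + unrealized PnL) must cover a maintenance-margin requirement.

@dataclass
class Position:
    size: float         # contracts held (positive = long, negative = short)
    entry_price: float  # average entry price
    collateral: float   # collateral posted by the trader

def unrealized_pnl(pos: Position, mark_price: float) -> float:
    """Profit or loss of the position at the current mark price."""
    return pos.size * (mark_price - pos.entry_price)

def is_liquidatable(pos: Position, mark_price: float,
                    maintenance_margin: float = 0.05) -> bool:
    """Flag the account when equity falls below the maintenance requirement."""
    equity = pos.collateral + unrealized_pnl(pos, mark_price)
    required = abs(pos.size) * mark_price * maintenance_margin
    return equity < required

pos = Position(size=10.0, entry_price=100.0, collateral=80.0)
print(is_liquidatable(pos, mark_price=100.0))  # healthy: equity 80 >= 50
print(is_liquidatable(pos, mark_price=96.0))   # equity 40 < 48 -> liquidate
```

A real module would also handle the forced-closure step once this flag is raised, which the excerpt only alludes to.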
(iii) The Pensions Regulator’s interventions, arguing for higher levels of prudence, have specifically referenced the high SfS and TP liabilities, as reported by USS, see Section 5.1, suggesting that these invite regulatory concern.
We turn next to the USS 2023 discussion of the funding ratio condition as reproduced in Appendix A.2. USS makes clear for the first time in any consultation material that their SfS modelling ‘comfortably passes’ the benefit payment condition but that the funding ratio condition is not quite passed. Any reasonable readi...
USS public definitions of SfS made no, or minimal, reference to the funding ratio condition until 2023. Stakeholders that did reference SfS (including JEP, UUK and UCL) did not mention the funding ratio condition, see Appendix A.2.
The funding ratio condition is shown in general to dominate over the benefit payment condition, and by a significant margin. This strongly indicates that the funding ratio condition is setting the SfS liabilities (footnote 12: USS indicate the funding ratio also dominates the 2023 valuation, Sec. 3.3 and App. A.2).
The funding ratio condition as described in Section 3.3 does not measure the ability to pay pension benefits. It is also clear from Section 3.2 that the funding ratio condition dominates in setting the SfS liabilities. This means the funding ratio condition obscures the other SfS condition, the benefit payment conditio...
B
We calculate the GA at a confidence level of q = 99.9% over a time horizon of T = 1 year.
For each of the inputs, we compute the GA according to each of the three approaches for the confidence level q = 0.999. Results are summarized in Table 4.3 and Figure 4.2. As can be observed, the prediction error for the first order GA approximation from (A.3) is on average about twice as la...
We compute the NN GA and the analytic approximation GA in both the actuarial CreditRisk+ model and the MtM approach and calculate the percentage error with respect to the exact GA obtained by MC simulations with IS.
First we investigate the effect of reducing the number of obligors by gradually deleting obligors from the originally sampled portfolio, depicted in Figure 5.1 (a) for the actuarial and in Figure 5.2 (a) for the MtM approach. We observe that with an increasing number of obligors the GA tends to decrease while this rela...
The GAs for the described MDB portfolios are reported in Tables 5.2 and 5.3. Our results for the percentage error between the NN GA and the exact MC GA show that the NN approach is highly accurate for both the actuarial CreditRisk+ model and the MtM approach. Comparing the results to the percentage error of the approxi...
B
Another notable advance in LLMs is the development of a multi-agent framework. Park et al. (2023)[3] suggest a novel mode of interaction among LLMs that mimics human collaborative dynamics. This framework allows individual LLMs to specialise in distinct areas of expertise, enabling them to work in concert towards a com...
This proposed multi-agent AI framework offers a comprehensive solution that automates the process of anomaly detection in tabular data, follow-up analysis and reporting. This workflow can not only improve efficiency but also enhance the accuracy and reliability of financial market analysis. By reducing the reliance on ...
The demonstration of AI in financial market analysis through a multi-agent workflow showcases the potential of emerging technologies to improve data monitoring and anomaly detection. Integrating LLMs with traditional analysis methods could significantly enhance the precision and efficiency of market oversight and decis...
These recent technology advances offer a pathway to significantly streamline, and potentially automate, the labour-intensive processes of traditional financial market data analysis. This paper introduces a framework designed to replicate and enhance the financial market data validation workflow. By employing a multi-ag...
This advance in AI-driven financial market data analysis suggests a reconfiguration of the data analysis and decision-making landscape. With ongoing advances in AI technology, the future envisages a framework capable of autonomously executing increasingly complex analytical tasks, diminishing the need for human oversig...
C
Group C - Untrained Group. The third group serves as a control to understand the performance improvement obtained from training. The agents in this group load the random initialized parameters and run simulations without training.
Group C - Untrained Group. The third group serves as a control to understand the performance improvement obtained from training. The agents in this group load the random initialized parameters and run simulations without training.
Group B - Testing Group. The agents in this group are pre-trained for 10 hours and are used in the simulation without continuing training.
For each random seed, we generate the parameters of the neural networks for the Group C agents directly. Each agent in Group C is trained for 10 hours and their parameters become the parameters used for each agent in Group B. The same parameters are used to initialize the agents in Group A. We are describing the proces...
In this experiment, we only use continual learning Group A MM agents, as they need to adapt to changing market conditions. The LT agents are also from Group A but their reward function is changed through the evolution of the target buy/sell parameters. Specifically, Figure 9 shows the price process resulting from the a...
C
However, in the case of image encoders the classification categories are largely non-overlapping. This is perhaps surprising, but the number of ‘things’ in the world is obviously unfathomably large so it wouldn’t be feasible to assign a probability to each of them. Thus, one encoder might have ‘palace’ as a category an...
Given the importance of visual aspects of properties in determining real estate values, housing has been used as an application in the computer science literature to test how deep learning can be used to conduct visual content analysis and scene understanding. Law et al. (2019) develop and train two separate architectu...
Given the importance of visual features of housing in purchasing decisions, we assess whether image data can improve housing price predictions that come from standard hedonic analyses. Hedonic pricing models offer a relatively straightforward approach to estimate housing prices using observable characteristics about a ...
These findings demonstrate how deep learning can be used to extract information from images that is unobservable in traditional data. We apply this to housing, where visual details are particularly relevant since these details influence buyers’ perception of a property. However, these subtle visual details are not captured by ...
While two houses may be similar across these features, many other characteristics influence the perceived worth of each property. These kinds of differences are observable when potential buyers look at photos of, or visit, the property but are not captured in structured housing data.
C
In our methodology, we employed a two-step process to analyze the impact of COVID-19 on various industries.
The graphs outline the variability of Personal Income (PI) across different industry sectors over time, from the first quarter of 2020 to the second quarter of 2023, in response to the Covid-19 shock. The impacts are measured as deviations from the forecasted PI without the pandemic. The key findings include:
This study employed time-series analysis to examine the variability in Personal Income (PI) across various industry sectors during Covid-19. To capture the pre-pandemic trends and isolate the impact of COVID-19, ARIMA models were fitted to data up to 2019 Q4 (end of the pre-COVID period) for each industry. This approac...
Next, we calculated the impact of the pandemic on each industry by comparing these forecasted values with the actual data from 2020 Q1 to 2023 Q2. This comparison was made for each quarter, allowing us to assess both the immediate and extended impacts of the pandemic. The impact was quantified as the difference between...
First, we forecasted Personal Income (PI) for each industry using ARIMA models, projecting 14 quarters ahead from the first quarter of 2020, based on data up to the end of 2019. This forecasting created a baseline to compare against the actual PI observed during the pandemic. We selected a specific window of the time s...
D
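The impact computation described above can be sketched in a few lines. The ARIMA baseline itself would come from a library such as statsmodels; here we assume the 14-quarter no-pandemic forecast is already in hand and only show the per-quarter comparison against actual PI. All numbers are illustrative.

```python
# Sketch of the impact step: impact = actual PI minus the forecast baseline,
# computed quarter by quarter. The PI values below are made up for illustration.

def pandemic_impact(forecast, actual):
    """Per-quarter impact = actual PI minus the no-pandemic baseline."""
    assert len(forecast) == len(actual)
    return [a - f for f, a in zip(forecast, actual)]

baseline = [100.0, 101.0, 102.0, 103.0]  # ARIMA forecast from pre-2020 data
observed = [92.0, 97.0, 103.5, 104.0]    # actual PI during the pandemic
impact = pandemic_impact(baseline, observed)
print(impact)  # -> [-8.0, -4.0, 1.5, 1.0]
```

Negative values correspond to quarters where the pandemic depressed PI below trend; the later positive values mirror the recovery pattern the study tracks.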
Suppose that there is a pool of identically distributed extremely heavy-tailed losses (i.e., infinite mean), possibly statistically dependent.
Each agent (e.g., a reinsurance provider) needs to decide whether and how to diversify in this pool.
Moreover, if $u^{*}<a/2k$, i.e., the optimal position of each external agent is very small compared with the total position of each loss in the market, the loss $X_{i}$...
This is related to the question raised in the Introduction: By Proposition 6, as long as the agent’s risk preference is monotone, an agent should not diversify, under the setting of this section.
For instance, in the context of reinsurance, $h(x)=x\wedge c$ for some threshold $c\in\mathbb{R}$ corresponds to an excess-of-loss reinsurance coverage; see e.g., OECD (2018).
A
We take the average of these category-specific stress indicators to compile a comprehensive stress index that reflects the overall market conditions.
Finally, we scale the resulting average to fall between 0 and 1 by applying the cumulative distribution function (norm.cdf) to the computed stress index, which normalizes our final index value.
We then apply a statistical method called z-scoring to this 10-day sentiment average, which helps us understand how strongly the news is leaning compared to the norm.
If we focus our analysis on the Sharpe ratio, we can also notice that the strategy based on the Stress index alone always comes second, indicating that the signals emitted by the stress index seem quite robust and more effective than the ones using the VIX index. Regarding turnover, which measures the frequency of trad...
Because the stress index final result is a number between 0 and 1 thanks to the cumulative distribution function of the normal distribution, we directly get a stress index signal.
A
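The two scaling steps described above (z-scoring an averaged indicator, then mapping it into [0, 1] with the normal CDF, i.e. the norm.cdf step) can be sketched with the standard library alone. The lookback window and indicator values below are illustrative assumptions.

```python
import math
from statistics import mean, stdev

# Sketch: z-score the latest reading of a stress indicator against its own
# history, then squash it to [0, 1] with the standard normal CDF.

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stress_signal(history, latest):
    """Z-score `latest` against `history`, then map into [0, 1]."""
    z = (latest - mean(history)) / stdev(history)
    return norm_cdf(z)

history = [0.1, 0.2, 0.15, 0.25, 0.3]  # past values of the averaged indicator
print(round(stress_signal(history, 0.2), 3))  # value at the mean -> 0.5
```

A reading at the historical mean maps to 0.5, while unusually high stress readings push the signal toward 1, matching the bounded index the excerpt describes.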
Fractional Hot Deck Imputation (FHDI): In this work (Song et al., 2020), each missing value is replaced with a set of weighted imputed values; a missing value of the recipient unit is replaced by similar values of the donor unit. The values of the donor unit are assigned fractional weights in this ...
Machine Learning models were deployed by many researchers, including (Leo et al., 2019), for banking risk management. The authors of (Mai et al., 2019) use deep learning models for the same purpose. Other authors (Smiti and Soui, 2020) use deep learning with borderline SMOTE, focusing on imperfect classifica...
Fig. 5 shows the error in the prediction of individual values with different methods. As can be observed from the figure, the proposed granular prediction method results in low error consistently over all the years. The performance of FHDI and Autoencoder is equally good in most of the cases. Please note that FHDI...
The overall method defined here for bankruptcy prediction has been proven to be effective over all the five years Polish dataset. The newly formulated data imputation technique with contextual granule has been compared with three other popular methods, and resulted in higher or almost equal accuracy even compared to au...
Autoencoder: Autoencoders have become popular nowadays for missing value imputation (Gjorshoska et al., 2022). Here the autoencoder approximates the values by learning a higher-level representation of its input.
D
The main fields that can be found in a transaction are: bank, account, transaction date, amount, relative balance, and several text fields (description, reference, payer and payee).
The information used to build the training dataset consists of the 3 months of banking transactions prior to the signature date of 4763 loans given between 2017 and 2023, together with daily account balance and financial product information for the same period. With this information, more than 350 variables are generat...
The loan application process begins with the customer specifying the characteristics of his/her desired loan, and continues with the declaration of certain personal data, including both socio-demographic and professional information. Then, the customer is required to aggregate their bank accounts. This provides bank moveme...
The risk model of Wanna, Fintonic’s financial institution, is a binary classification model trained to predict a customer’s probability of default in the next 12 months based on their last 3 months of aggregated banking information. To train the model, information from the history of loans given by Wanna has been used ...
It is important to note that the client can add bank accounts of which he/she is not necessarily the account holder, which is a problem since the loan sanction should be made only on those accounts on which he/she is the account holder. For this reason, we have a service to check this beforehand, obtaining the features...
B
In conclusion, we find that the graph embedding method works better in separating building blocks associated with the same protocol in comparison with FFC.
Table 2: Clustering results on building blocks with combinations of node features and building block labels. The best results for each target label are highlighted through gray shading, indicating that the Signature Group node feature produced the optimal clustering outcome evaluated by both target labels.
We evaluate the clustering performance by computing the homogeneity, completeness, V-measure, and purity over the two target labels and the four node features defined in Section 3.
higher values for the clustering evaluated on the protocol target labels, compared to the financial functionalities.
We note that the information used for the building block target label Financial Functionality Category differs from that used as node feature for the Signatures Selectors and the Signatures Group; indeed, the former uses information from the name of the function invoked only, while the latter two use data of all functi...
A
$\tilde{\mathbf{h}}_{i}^{(l)}(t)=\mathrm{ReLU}\Bigl(\sum_{j\in n_{i}([0,t])}\bigl(\cdots\,\big\|\,\boldsymbol{\phi}(t-t_{j})\bigr)\Bigr)$ ...
for a downstream task, node classification. We build our model within the TGN framework excluding the memory module. This was due to the very large size of the data used in our experiments, leading to out-of-memory errors. In the graph embedding module, temporal embeddings for a dynamic graph are generated, specifically cr...
AllSetTransformer (Chien et al. 2021) comprises two multiset functions with SetTransformer (Lee et al. 2019) for aggregating node or hyperedge embeddings.
In addition to the two graph embedding methods proposed in (Rossi et al. 2020), we further experimented with two additional methods to explore the effectiveness of various graph embedding techniques.
Furthermore, we investigate various graph embedding modules within the TGN framework. While variants exist within TGN, the results consistently affirm the model’s ability to achieve remarkable performance in the anomaly detection task. This contributes to a deeper understanding of the factors influencing TGNs’ effectiv...
C
The self-stated purpose of much of this literature is to offer managerial insight, yet identifying antecedents to turnover is only the first step in designing programs to reduce turnover. Our study is the first to examine an explicit retention program, and is able to leverage a comparatively rich dataset on drivers’ se...
To understand the interesting pattern, we borrow the unfolding model of labor turnover (Lee & Mitchell, 1994), where “shocks” cause employees to re-examine their current employment relationships. From the perspective of the employer, these shocks can be “positive” (reducing turnover), or “negative” (increasing turnover...
With truck drivers less tied to their job through relational linkages, they are more sensitive to job “shocks”–random and often unexpected events that cause employees to re-examine their current employment relationship (Lee & Mitchell, 1994; Mitchell et al., 2001; Lee et al., 2004). Additionally, deprived of relational...
Truckers experience a variety of shocks in the course of their duties which may trigger reassessment of current employment including traffic congestion, equipment failures, detention during loading and unloading, variation in pay, and so on. The effect of the shock, however, is moderated by embeddedness, and, in particu...
We use the unfolding model of labor turnover (Lee et al., 2004, 1996) as our primary theoretical scaffolding for hypothesis development. In this framework, labor turnover proceeds along four possible pathways, which for density of exposition, we present out of numeric order. It can occur because of evolving job dissati...
D
We do not draw the graphs in this case as they are precisely the same as in Figure 2, except that now each edge or loop carries a possibly different weight (not represented in Figure 2).
Table 4 depicts the top twenty portfolios ranked by their annualised expected return (ER) for the A2 cases (positive part of the correlation matrix), Table 6 for the A3 cases (subplus function of the negative elements of the correlation matrix) and Table 8 for the
In Tables 5, 7 and 9 we see a similar picture as in Table 3. Table 5 depicts the top twenty portfolios ranked by their annualised Sharpe Ratios for the A2 cases (positive part of the correlation matrix), Table 7 for the A3 cases (subplus function of the negative elements of the correlation matrix) and Table 8 for the
For this method, we filter important information by constructing a minimal spanning tree (MST), starting from the correlation matrix. MSTs are a class of graphs that connect all vertices by placing an edge among the most correlated pairs without forming any cycles. MSTs tend to retain only significant correlations. Ana...
A possible next preprocessing step, that we take as optional in order to analyse its effect, is what is called shrinkage [12] of the correlation matrix C𝐶Citalic_C. This consists in constructing the covariance matrix from the correlation matrix; then, a linear combination of the covariance matrix and a matrix coming f...
C
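The MST filtering step described above can be sketched with Kruskal's algorithm and a union-find. Converting correlations to the distance $d_{ij}=\sqrt{2(1-\rho_{ij})}$ is a common choice (so the most correlated pairs get the shortest edges); the tiny 4-asset correlation matrix below is illustrative.

```python
import math

# Sketch: build a minimal spanning tree from a correlation matrix, so that
# only the strongest correlations survive and no cycles are formed.

def mst_edges(corr):
    n = len(corr)
    # All pairwise edges, sorted by distance (highest correlation first).
    edges = sorted(
        (math.sqrt(2.0 * (1.0 - corr[i][j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    tree = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree               # n - 1 edges connecting all vertices

corr = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
]
print(mst_edges(corr))  # -> [(0, 1), (2, 3), (1, 2)]
```

The two highly correlated pairs (0, 1) and (2, 3) enter the tree first, and a single bridging edge connects the clusters, which is exactly the "retain only significant correlations" behaviour the excerpt attributes to MSTs.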
Table 12: Macro performance and training and testing times using selected textual features and most relevant temporal features from the combinatorial analysis.
The effect of numerical and temporal features became more apparent when we checked the behaviour by class. Table 10 shows the results of the first experiment in that case. Note that precision and recall were very asymmetric between past and future (∼10% precision asymmetry with the svc classifier, ∼...
Table 2 shows the training and testing complexity of the Machine Learning algorithms we used in our analysis for $c$ target classes, $f$ features, $i$ algorithm instances (where applicable) and $s$ dataset samples. For the specific case of the nn algorithm, $m$ represents th...
In this field, nlp techniques have been successfully applied to noise removal and feature extraction (Sun et al., 2014; Liu, 2015; Fisher et al., 2016; Xing et al., 2018) from financial reports such as news (Zhang & Skiena, 2010; Alanyali et al., 2013; Atkins et al., 2018), micro-blogging comments (Sun et al., 2014; Fi...
Table 12 shows that, with this second selection, we attained well over 80% precision and recall performance with the svc classifier, which takes considerably less time to train than the nn. Furthermore, Table 13 shows the precision and recall of the svc classifier by class. Note that all metrics exceeded 80% as pursued...
D