Replies: 18 comments 20 replies
-
Thanks, we can clarify that. Indeed, the correct way of declaring deterministic relationships that involve random variables at parse time is:

```julia
tmp ~ f(x) # f can be either deterministic or stochastic
y ~ Normal(tmp, 1)
```

We can improve the wording in the documentation and make it clearer.
-
Thanks Dmitri. I've been trying the use of `:=` but I still have issues. Here's my code:

```julia
@model function investment_agent_model(price_AAA, price_BBB,
                                       T,
                                       initial_cash,      ## initial_cash=100_000.0,
                                       initial_aaa,       ## initial_aaa=100,
                                       initial_bbb,       ## initial_bbb=100,
                                       initial_price_aaa, ## initial_price_aaa=50.0,
                                       initial_price_bbb  ## initial_price_bbb=75.0
                                       )
    ## Priors for initial state
    R_AAA_0 ~ Poisson(initial_aaa)
    R_BBB_0 ~ Poisson(initial_bbb)
    R_0_0 ~ NormalMeanVariance(initial_cash, 1_000.0)
    p_AAA_0 ~ LogNormal(log(initial_price_aaa), 0.1)
    p_BBB_0 ~ LogNormal(log(initial_price_bbb), 0.1)

    ## Threshold priors (initial estimates)
    θ_lo_AAA_0 ~ NormalMeanVariance(price_AAA*0.9, 10.0) ## 10% below initial price
    θ_hi_AAA_0 ~ NormalMeanVariance(price_AAA*1.1, 10.0) ## 10% above initial price
    θ_lo_BBB_0 ~ NormalMeanVariance(price_BBB*0.9, 10.0)
    θ_hi_BBB_0 ~ NormalMeanVariance(price_BBB*1.1, 10.0)

    p_AAA[1] ~ LogNormal(log(initial_price_aaa), 0.1)
    p_BBB[1] ~ LogNormal(log(initial_price_bbb), 0.1)
    θ_lo_AAA[1] ~ NormalMeanVariance(price_AAA*0.9, 10.0)
    θ_hi_AAA[1] ~ NormalMeanVariance(price_AAA*1.1, 10.0)
    θ_lo_BBB[1] ~ NormalMeanVariance(price_BBB*0.9, 10.0)
    θ_hi_BBB[1] ~ NormalMeanVariance(price_BBB*1.1, 10.0)

    ## State transition model
    for k in 2:T
        ## Price dynamics (geometric Brownian motion)
        p_AAA[k] ~ LogNormal(log(p_AAA[k-1]), 0.05)
        p_BBB[k] ~ LogNormal(log(p_BBB[k-1]), 0.05)

        ## Threshold adaptation (slowly varying)
        θ_lo_AAA[k] ~ NormalMeanVariance(θ_lo_AAA[k-1], 1.0) ## Add small noise
        θ_hi_AAA[k] ~ NormalMeanVariance(θ_hi_AAA[k-1], 1.0)
        θ_lo_BBB[k] ~ NormalMeanVariance(θ_lo_BBB[k-1], 1.0)
        θ_hi_BBB[k] ~ NormalMeanVariance(θ_hi_BBB[k-1], 1.0)

        ## Action determination based on thresholds
        a_AAA[k] ~ (
            p_AAA[k] < θ_lo_AAA[k] ? NormalMeanVariance(5.0, 2.0) :  ## Buy signal
            p_AAA[k] > θ_hi_AAA[k] ? NormalMeanVariance(-5.0, 2.0) : ## Sell signal
            Normal(0.0, 0.1)                                         ## Hold
        )
        a_BBB[k] ~ (
            p_BBB[k] < θ_lo_BBB[k] ? NormalMeanVariance(5.0, 2.0) :  ## Buy signal
            p_BBB[k] > θ_hi_BBB[k] ? NormalMeanVariance(-5.0, 2.0) : ## Sell signal
            Normal(0.0, 0.1)                                         ## Hold
        )
        # action_AAA = round(Int64, rand(a_AAA[k]))
        # action_BBB = round(Int64, rand(a_BBB[k]))

        ## Portfolio position updates with transaction costs (0.1% of transaction value)
        ## transaction_cost = (abs(a_AAA[k])*p_AAA[k] + abs(a_BBB[k])*p_BBB[k])*0.001
        # transaction_cost = (abs(action_AAA)*p_AAA[k] + abs(action_BBB)*p_BBB[k])*0.001
        # transaction_cost ~ Deterministic( (abs(a_AAA[k])*p_AAA[k] + abs(a_BBB[k])*p_BBB[k])*0.001 )
        # transaction_cost ~ (Deterministic(abs(a_AAA[k]))*p_AAA[k] + abs(a_BBB[k])*p_BBB[k])*0.001
        # transaction_cost ~ (abs(Deterministic(a_AAA[k])*p_AAA[k]) + abs(Deterministic(a_BBB[k])*p_BBB[k]))*0.001
        transaction_cost := (abs(a_AAA[k])*p_AAA[k] + abs(a_BBB[k])*p_BBB[k])*0.001

        ## Cash position transition (negative when buying, positive when selling)
        ## cash_flow = (-a_AAA[k]*p_AAA[k] - a_BBB[k]*p_BBB[k]) - transaction_cost
        # cash_flow = (-action_AAA*p_AAA[k] - action_BBB*p_BBB[k]) - transaction_cost
        # cash_flow ~ Deterministic( (-a_AAA[k]*p_AAA[k] - a_BBB[k]*p_BBB[k]) - transaction_cost )
        # cash_flow ~ Deterministic( (0 - a_AAA[k]*p_AAA[k] - a_BBB[k]*p_BBB[k]) - transaction_cost )
        # cash_flow ~ (0 - Deterministic(a_AAA[k])*p_AAA[k] - a_BBB[k]*p_BBB[k]) - transaction_cost
        # cash_flow ~ (0 - Deterministic(a_AAA[k]*p_AAA[k]) - Deterministic(a_BBB[k]*p_BBB[k])) - transaction_cost
        cash_flow := (0 - a_AAA[k]*p_AAA[k] - a_BBB[k]*p_BBB[k]) - transaction_cost

        ## R_AAA[k] ~ R_AAA[k-1] + a_AAA[k]
        # R_AAA[k] ~ R_AAA[k-1] + Deterministic(a_AAA[k])
        # R_AAA[k] := R_AAA[k-1] + a_AAA[k]
        R_AAA[k] ~ R_AAA[k-1] + a_AAA[k]
        R_BBB[k] ~ R_BBB[k-1] + a_BBB[k]
        R_0[k] ~ R_0[k-1] + cash_flow

        ## Add constraints (no negative positions or cash)
        R_AAA[k] >= 0
        R_BBB[k] >= 0
        R_0[k] >= 0
    end
    return R_AAA, R_BBB, R_0, p_AAA, p_BBB, θ_lo_AAA, θ_hi_AAA, θ_lo_BBB, θ_hi_BBB
end
```

```julia
function run_inference(n_steps=10)
    ## Specify priors and constraints
    constraints = @constraints begin
        q(R_AAA, R_BBB, R_0, p_AAA, p_BBB, θ_lo_AAA, θ_hi_AAA, θ_lo_BBB, θ_hi_BBB) =
            q(R_AAA)q(R_BBB)q(R_0)q(p_AAA)q(p_BBB)q(θ_lo_AAA)q(θ_hi_AAA)q(θ_lo_BBB)q(θ_hi_BBB)
    end

    ## Observables (price data would normally come from market feed)
    price_data_aaa = [100.0, 102.0, 98.5, 105.0, 103.5, 107.0, 95.0, 97.5, 102.5, 110.0]
    price_data_bbb = [75.0, 73.5, 72.0, 76.5, 74.0, 77.0, 71.0, 70.5, 73.0, 78.0]

    ## Run inference
    result = infer(
        model = investment_agent_model(
            T = _T,
            initial_cash = _initial_cash,           ## initial_cash=100_000.0,
            initial_aaa = _initial_aaa,             ## initial_aaa=100,
            initial_bbb = _initial_bbb,             ## initial_bbb=100,
            initial_price_aaa = _initial_price_aaa, ## initial_price_aaa=50.0,
            initial_price_bbb = _initial_price_bbb  ## initial_price_bbb=75.0
        ),
        data = (price_AAA = price_data_aaa, price_BBB = price_data_bbb,),
        constraints = constraints,
        ## initmessages = initvars(),
        iterations = 15,
        free_energy = true
    )
    return result
end

_T = 3
_initial_cash = 100_000.0
_initial_aaa = 100
_initial_bbb = 100
_initial_price_aaa = 50.0
_initial_price_bbb = 75.0

investment_result = run_inference()
investment_result
```
-
@kobus78 Thanks for sharing the model. There are a few problems with this specification in RxInfer. I will elaborate on this once I get a better grasp of what you are trying to achieve.
-
Hey Kobus, the error you are getting right now has to do with the fact that you access
-
We also do not support conditionals involving random variables; this might lead to weird behaviour, e.g. when branching on a latent variable inside a model.
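A hypothetical plain-Julia sketch of why such conditionals cannot work: inside `@model`, a variable bound by `~` is a node in the factor graph rather than a number, so Julia cannot evaluate a comparison on it while the graph is being built. The `LatentVariable` struct below is a made-up stand-in, not RxInfer's actual node type:

```julia
# Hypothetical stand-in for a latent graph node; RxInfer's real node type
# is different, this only illustrates the failure mode.
struct LatentVariable
    name::Symbol
end

x = LatentVariable(:x)

# `x < 0.0` throws a MethodError: `isless` is not defined between a
# latent node and a number, so `x < 0.0 ? a : b` cannot branch on it.
caught = try
    x < 0.0
    false
catch err
    err isa MethodError
end
println(caught)  # true
```

At graph-construction time the branch condition simply has no Bool value, which is why RxInfer cannot support it.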
-
I don't think we documented that anywhere. Where do you want to put it?
-
I think it should be mentioned in both GraphPPL and RxInfer because it is quite important :D
-
Thank you all for helping. I really appreciate it! This is a really simple problem, and I'm keen to find out whether I can proceed with it using RxInfer. Here is what I'm after: the agent controls a portfolio consisting of two stocks, AAA and BBB. It simply has to learn the best hi and lo levels for each stock. The hi level is the price above which the stock should be sold, and the lo level is the price below which the stock should be bought. I think this could be a very useful example for our portfolio of examples in RxInfer; I'm not aware of any investment-related example yet. Here is a graphic:
-
Thanks @wouterwln. It was a silly mistake on my part. I'm looking forward to guidance on how to do the above implementation. Hopefully the graphic explains what I'm after, as @albertpod asked.
-
Hi @kobus78, I have a hard time deciphering your model. I see you define the priors, but I would suggest breaking this model down to a minimum example that involves all the distributions that you require: I see there's Poisson, and I also see you want to use LogNormal. There are many moving parts that are better addressed in steps. So my question is: which nodes do you want to use, and what is the prior/likelihood relationship? Based on your graphic, I understand you want an agent that learns to buy below a low threshold and sell above a high threshold. Let me know what core functionality you need in the simplest form, and we can build from there while making sure we follow RxInfer's best practices for deterministic relationships and avoid conditionals with random variables.
-
Hi @albertpod. Thanks for looking at this. I've simplified it as much as possible. It now only has a single stock called AAA. Julia Version 1.11.3. (I apologize for the poor formatting of the following code; I have not discovered the secret yet.)

```julia
## Only fragments of this listing survived the formatting:
end ## Inference configuration
end ## Execute the inference
_T = 3
_initial_cash = 100_000.0
investment_result = run_inference()
```
-
Here is an even simpler version, @albertpod. I get this error:

    Comparing Factor Graph variable

Not sure how to handle this. I think @wouterwln mentioned using mixtures. Any idea how to code this? Many thanks. Julia Version 1.11.3.

```julia
@model function investment_agent_model(price_AAA, T)
    θLo_AAA ~ NormalMeanVariance(0.9*price_AAA[1], 10.0)
    θHi_AAA ~ NormalMeanVariance(1.1*price_AAA[1], 10.0)
    R_AAA₀ ~ Poisson(100)                        ## number of shares of stock
    R0₀ ~ NormalMeanVariance(100_000.0, 1_000.0) ## cash in portfolio in dollars
    R_AAAₖ₋₁ = R_AAA₀
    R0ₖ₋₁ = R0₀
    for k in 1:T
        θLo := θLo_AAA
        θHi := θHi_AAA
        action_AAA := (
            price_AAA[k] < θLo ? NormalMeanVariance(5.0, 2.0) :  ## Buy signal
            price_AAA[k] > θHi ? NormalMeanVariance(-5.0, 2.0) : ## Sell signal
            Normal(0.0, 0.1)                                     ## Hold
        )
        transaction_cost := (abs(action_AAA)*price_AAA[k])*0.001
        cash_flow := (0 - action_AAA*price_AAA[k]) - transaction_cost
        R_AAA[k] ~ R_AAAₖ₋₁ + action_AAA
        R0[k] ~ R0ₖ₋₁ + cash_flow
        R_AAAₖ₋₁ = R_AAA[k]
        R0ₖ₋₁ = R0[k]
    end
end

@constraints function investment_agent_model_constraints()
    q(R_AAA₀, R0₀, R_AAA, R0, θLo_AAA, θHi_AAA) =
        q(R_AAA₀, R0₀, R_AAA, R0)q(θLo_AAA)q(θHi_AAA)
end

price_data_aaa = [100.0, 102.0, 98.5, 105.0, 103.5, 107.0, 95.0, 97.5, 102.5, 110.0]

result = infer(
    model = investment_agent_model(T = 10),
    data = (price_AAA = price_data_aaa, ),
    constraints = investment_agent_model_constraints(),
    # initialization=imarginals,
    iterations = 15,
    free_energy = true
)
```
-
Not even this extremely simple model works:

```julia
@model function comparison_model()
    θLo_AAA ~ NormalMeanVariance(90.0, 10.0)
    price ~ Uninformative() # Define price as an observable
    comparison := (price < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    result ~ Bernoulli(comparison)
end

result = infer(
    model = comparison_model(),
    data = (price = 1.0, ),
    iterations = 15
)
```

It gives the error:

    Half-edge has been found: result_8. To terminate half-edges 'Uninformative' node can be used.

After struggling for a week, I could not find a way to do a simple comparison in a `@model`. I'm starting to think it might be better to use a sampled approach for this instead of message passing (something like NumPyro).
-
Here is the latest, also showing the graph:

```julia
@model function comparison_model(price)
    θLo_AAA ~ NormalMeanVariance(90.0, 10.0)
    price_less ~ Uninformative()
    ## comparison := (price < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    comparison ~ (price < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    # comparison := (NormalMeanVariance(price,0.1) < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    price_less ~ Bernoulli(comparison)
end
```

```julia
Pkg.add("MetaGraphsNext")
Pkg.add("GraphPlot")
using MetaGraphsNext
using GraphPlot

function print_meta_graph(meta_graph) ## just for info
    println("============== NODES ==============")
    for node in MetaGraphsNext.vertices(meta_graph)
        label = MetaGraphsNext.label_for(meta_graph, node)
        println("$node: $label")
    end
    println("\n============== LINKS ==============")
    for link in MetaGraphsNext.edges(meta_graph)
        source_node = MetaGraphsNext.label_for(meta_graph, link.src)
        dest_node = MetaGraphsNext.label_for(meta_graph, link.dst)
        println("$(source_node) -> $(dest_node)")
    end
end

comparison_conditioned = comparison_model() | (price = [ 1.0 ], )
comparison_rxi_model = RxInfer.create_model(comparison_conditioned)
comparison_gppl_model = RxInfer.getmodel(comparison_rxi_model)
comparison_meta_graph = comparison_gppl_model.graph
print_meta_graph(comparison_meta_graph)
```
This prints:

```
============== NODES ==============
1: θLo_AAA_1
2: constvar_2
3: constvar_3
4: NormalMeanVariance_4
5: price_less_5
6: Uninformative_6
7: comparison_7
8: <_8
9: price_9
10: Bernoulli_10

============== LINKS ==============
θLo_AAA_1 -> NormalMeanVariance_4
θLo_AAA_1 -> <_8
constvar_2 -> NormalMeanVariance_4
constvar_3 -> NormalMeanVariance_4
price_less_5 -> Uninformative_6
price_less_5 -> Bernoulli_10
comparison_7 -> <_8
comparison_7 -> Bernoulli_10
<_8 -> price_9
```

```julia
meta_graph = comparison_meta_graph

## Shorten some labels to make graph more readable
# str_labels = [string(lab) for lab in labels(meta_graph)]
# replacements =
#     Pair("MvNormalMeanCovariance", "Nmc"),
#     Pair("MvNormalMeanPrecision", "Nmp"),
#     Pair("constvar", "cv"),
#     Pair("x", "XXXXX") ## make more obvious
# short_labels = [replace(s, replacements...) for s in str_labels]

GraphPlot.gplot( ## existing plotting functionality
    meta_graph,
    layout=spring_layout,
    nodelabel=collect(labels(meta_graph)),
    ## nodelabel=short_labels,
    nodelabelsize=0.1,
    # NODESIZE=0.02, ## diameter of the nodes
    NODESIZE=0.10,   ## diameter of the nodes
    # NODELABELSIZE=1.5,
    NODELABELSIZE=3.0,
    # nodelabelc="white",
    nodelabelc="green",
    nodelabeldist=0.0,
    nodefillc=nothing, ## "cyan"
    edgestrokec="red",
    ## ImageSize = (800, 800) ##- does not work
)

result = infer(
    model = comparison_model(),
    data = (price=1.0, ),
    showprogress = true,
    iterations = 15
)
```
```
┌ Error: We encountered an error during inference, here are some helpful resources to get you back on track:
│
│ 1. Check our Sharp bits documentation which covers common issues:
│    https://docs.rxinfer.ml/stable/manuals/sharpbits/overview/
│ 2. Browse our existing discussions - your question may already be answered:
│    https://github.com/ReactiveBayes/RxInfer.jl/discussions
│ 3. Take inspiration from our set of examples:
│    https://examples.rxinfer.ml/
│
│ Still stuck? We'd love to help! You can:
│ - Start a discussion for questions and help. Feedback and questions from new users is also welcome! If you are stuck, please reach out and we will solve it together.
│   https://github.com/ReactiveBayes/RxInfer.jl/discussions
│ - Report a bug or request a feature:
│   https://github.com/ReactiveBayes/RxInfer.jl/issues
│ - (Optional) Share your session data with `RxInfer.share_session_data()` to help us better understand the issue
│   https://docs.rxinfer.ml/stable/manuals/telemetry/
│
│ Note that we use GitHub discussions not just for technical questions! We welcome all kinds of discussions,
│ whether you're new to Bayesian inference, have questions about use cases, or just want to share your experience.
│
│ To help us help you, please include:
│ - A minimal example that reproduces the issue
│ - The complete error message and stack trace
│ - (Optional) If you shared your session data, please include the session ID in the issue
│
│ Use `RxInfer.disable_inference_error_hint!()` to disable this message.
└ @ RxInfer /home/vscode/.julia/packages/RxInfer/w7xtP/src/inference/inference.jl:263
```
```
RuleMethodError: no method matching rule for the given arguments

Possible fix, define:

@rule Bernoulli(:p, Marginalisation) (m_out::Uninformative, ) = begin
    return ...
end

Alternatively, consider re-specifying model using an existing rule:

Bernoulli(m_p::PointMass)
Bernoulli(m_p::Beta)
Bernoulli(m_out::PointMass)
Bernoulli(q_out::Categorical{P} where P<:Real)
Bernoulli(q_out::Bernoulli)
Bernoulli(q_out::PointMass)
Bernoulli(q_p::Any)
Bernoulli(q_p::PointMass)

Note that for marginal rules (i.e., involving q_*), the order of input types matters.

Stacktrace:
...
```
-
@bvdmitri, @Nimrais, I'm citing you so we can make this a collective effort to help out @kobus78.

@kobus78, the latest model shared is more parsable, i.e.,

```julia
@model function comparison_model(price)
    θLo_AAA ~ NormalMeanVariance(90.0, 10.0)
    price_less ~ Uninformative()
    comparison ~ (price < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    # comparison := (NormalMeanVariance(price,0.1) < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    price_less ~ Bernoulli(comparison)
end
```

But there are, of course, a few issues with that. First of all, the conditioning is not a good idea because a hard comparison is not nicely differentiable. So I suggest using an alternative that enables gradient-based inference methods to work properly, such as:

```julia
function sigmoid_comparison(price, threshold; steepness=1.0)
    return 1.0 / (1.0 + exp(steepness * (price - threshold)))
end
```

Hence:

```julia
@model function comparison_model(price)
    # Prior for the threshold
    θLo_AAA ~ NormalMeanVariance(90.0, 10.0)
    price_less ~ Uninformative()
    # Use the externally defined sigmoid function
    sigmoid_prob := sigmoid_comparison(price, θLo_AAA)
    # Use the sigmoid probability for the Bernoulli distribution
    price_less ~ Bernoulli(sigmoid_prob)
end
```

Now, with this model, you won't be able to use Linearization, as it works only with Gaussians; in this particular model, you'd have to make use of both. As a side note, @kobus78, in this particular model, what is the posterior you are interested in? Is it θLo_AAA and price_less? If so, then I would suggest taking a look at the fusion example, but it might bite later.
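To build intuition for the `sigmoid_comparison` soft comparison above, here is a small plain-Julia check of how `steepness` shapes the transition (no RxInfer needed; the variable names are just for illustration):

```julia
# Soft "price < threshold": approaches 1 well below the threshold,
# 0 well above it, and equals 0.5 exactly at the threshold.
sigmoid_comparison(price, threshold; steepness=1.0) =
    1.0 / (1.0 + exp(steepness * (price - threshold)))

p_at    = sigmoid_comparison(90.0, 90.0)                   # 0.5 exactly at the threshold
p_below = sigmoid_comparison(89.0, 90.0)                   # ≈ 0.73, one unit below
p_sharp = sigmoid_comparison(89.0, 90.0; steepness=10.0)   # ≈ 0.99995, near-hard comparison
println((p_at, p_below, p_sharp))
```

Larger `steepness` makes the soft comparison approach the hard `<`, at the cost of steeper (harder-to-handle) gradients.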
-
Here are the results after implementing @albertpod's suggestions:

```julia
## steepness=1.0: smooth transition (0.5 at threshold, 0.73 at threshold-1, 0.27 at threshold+1)
## steepness=10.0: sharper transition (0.5 at threshold, 0.9999 at threshold-1, 0.0001 at threshold+1)
## steepness=100.0: very close to hard comparison but still differentiable
function sigmoid_comparison(price, threshold; steepness=1.0)
    return 1.0 / (1.0 + exp(steepness * (price - threshold)))
end

@model function comparison_model(price)
    ## Prior for the threshold
    θLo_AAA ~ NormalMeanVariance(90.0, 10.0)
    price_less ~ Uninformative()
    ## comparison := (price < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    ## comparison ~ (price < θLo_AAA) where { meta = DeltaMeta(method = Linearization()) }
    ## sigmoid_prob := sigmoid_comparison(price, θLo_AAA)
    # comparison_prob := sigmoid_comparison(price, θLo_AAA)
    comparison_prob := sigmoid_comparison(price, θLo_AAA) where { meta = DeltaMeta(method = CVIProjection()) }
    price_less ~ Bernoulli(comparison_prob)
end

result = infer(
    model = comparison_model(),
    data = (price=1.0, ),
    showprogress = true,
    iterations = 15
)
```
-
Hi @kobus78! This seems to run now. I will provide some commentary later on why it works like this, but you can also check this one:

```julia
using RxInfer
using ExponentialFamilyProjection

## steepness=1.0: smooth transition (0.5 at threshold, 0.73 at threshold-1, 0.27 at threshold+1)
## steepness=10.0: sharper transition (0.5 at threshold, 0.9999 at threshold-1, 0.0001 at threshold+1)
## steepness=100.0: very close to hard comparison but still differentiable
function sigmoid_comparison(price, threshold; steepness=1.0)
    return 1.0 / (1.0 + exp(steepness * (price - threshold)))
end

@model function comparison_model(price, price_less)
    ## Prior for the threshold
    θLo_AAA ~ NormalMeanVariance(90.0, 10.0)
    comparison_prob := sigmoid_comparison(price, θLo_AAA)
    price_less ~ Probit(comparison_prob)
end

comparison_meta = @meta begin
    sigmoid_comparison() -> Unscented()
end

init_messages = @initialization begin
    q(comparison_prob) = Beta(1.0, 1.0)
end

result = infer(
    model = comparison_model(),
    data = (price=1.0, price_less=UnfactorizedData(missing)),
    meta = comparison_meta,
    showprogress = true,
    iterations = 5
)

result.posteriors[:comparison_prob]
result.predictions[:price_less]
```
-
`result.predictions[:price_less]` gives:

    6-element Vector{Bernoulli{Float64}}:

My interpretation of these results: the probability that the price (part of the hidden state) is less than the (learned) threshold is 0.8413. Would you agree with this interpretation? If so, it seems to me that in order to generate a BUY signal, I simply have to sample from `Bernoulli{Float64}(p=0.841344746068543)`. If I get a '1'/success, I issue a BUY action; if I get a '0', I do NOT issue a BUY action. Would you agree with this plan of action, @albertpod?
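The plan above can be sketched in plain Julia with no packages beyond the standard library (`p_buy` is the posterior probability quoted above; `issue_buy` is a hypothetical helper name, not an RxInfer API):

```julia
using Random

p_buy = 0.841344746068543   # posterior P(price < threshold) from the predictions above

# Sample one Bernoulli(p) trial via a uniform draw: true -> issue BUY, false -> do nothing
issue_buy(p; rng = Random.default_rng()) = rand(rng) < p

# Sanity check: over many repetitions the BUY frequency approaches p_buy
rng = Random.MersenneTwister(42)
freq = count(_ -> issue_buy(p_buy; rng = rng), 1:100_000) / 100_000
println(freq)  # ≈ 0.84
```

Whether sampling the action (rather than, say, thresholding the probability or acting on the expected value) is the right decision rule is a separate modelling choice.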
-
Referring to the page: https://docs.rxinfer.ml/stable/manuals/model-specification/

Statement 1:

> Note: The RxInfer.jl package uses the `~` operator for modelling both stochastic and deterministic relationships between random variables. However, GraphPPL.jl also allows to use `:=` operator for deterministic relationships.

Statement 2:

> In contrast to other probabilistic programming languages in Julia, RxInfer does not allow use of `=` operator for creating deterministic relationships between (latent) variables. Instead, we can use `:=` operator for this purpose. For example:

```julia
t ~ Normal(mean = 0.0, variance = 1.0)
x := exp(t) # x is linked deterministically to t
y ~ Normal(mean = x, variance = 1.0)
```

Questions: What is the best practice when defining deterministic factor nodes? It seems wrong to use `=` (but I have used code in the past that worked when I used `=`). Is the best practice to use `:=`, or `~` as well? It seems to me that one wants to make it really obvious when we have a deterministic factor node. Why not enforce the use of `:=`?

Thanks for helping.
Kobus