Max Noichl, Gareth R. Pearce & Charles H. Pence
2024-08-22
How to represent texts? – Topic modelling runs into all kinds of problems!
A lot of older texts are messy in pretty unforeseeable ways. How to do cleanup?
How to identify philosophy of math? Little internal cohesion, and similar but distinct neighbouring groups (philosophy of physics, philosophical logic)
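For illustration only, a minimal sketch of the kind of topic-model representation at issue, using scikit-learn; the toy corpus, the library choice, and all parameter values are assumptions, not the pipeline behind the poster.

```python
# Minimal topic-model sketch: documents become distributions over topics.
# Corpus, library, and parameters are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the axiom of choice and set theory",
    "quantum field theory and spacetime",
    "modal logic and possible worlds",
]

# Bag-of-words counts; messy historical texts would need cleanup first.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit LDA and represent each document as a topic mixture.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)
print(doc_topics.round(2))
```

Distinguishing philosophy of math from its neighbours then becomes a question of how well such topic mixtures separate the groups.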
“Even beyond the problem of maintaining the division of cognitive labor, this model suggests that in some circumstances there is an unintended benefit from scientists being uninformed about experimental results in their field. This is not universally beneficial, however. In circumstances where speed is very important or where we think that our initial estimates are likely very close to the truth, connected groups of scientists will be more reliable. On the other hand, when we want accuracy above all else, we should prefer communities made up of more isolated individuals.” – Zollman (2007)
Main network types used in Zollman (2007)
Convergence as a function of network size – Rosenstock, Bruner, and O’Connor (2017)
“As a result, we cannot say with confidence that we expect real world epistemic communities to generally fall under the area of parameter space where the Zollman effect occurs. We are unsure whether they correspond to this area of parameter space, or some other area, or some other models with different assumptions.” – Rosenstock, Bruner, and O’Connor (2017)
But what are the appropriate networks to test for the effect?
Approach 1: Artificial Networks
Some candidates for more realistic networks
But how to evaluate the influence of specific network structures?
Rewiring!
Changing network structure through randomization (e.g., the Watts–Strogatz graph)
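A minimal sketch of the rewiring idea with networkx, assuming Watts–Strogatz-style edge rewiring; the function name and parameter values are ours.

```python
# Watts–Strogatz-style rewiring sketch: with probability p, each edge keeps
# one endpoint and is reconnected to a uniformly random node. This
# randomizes structure while keeping the number of edges fixed.
import random
import networkx as nx

def rewire(G: nx.Graph, p: float, seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes)
    for u, v in list(H.edges):
        if rng.random() < p:
            w = rng.choice(nodes)
            # Skip choices that would create self-loops or parallel edges.
            if w != u and not H.has_edge(u, w):
                H.remove_edge(u, v)
                H.add_edge(u, w)
    return H

G = nx.watts_strogatz_graph(n=50, k=4, p=0.0)    # start from a ring lattice
H = rewire(G, p=0.2)
print(G.number_of_edges(), H.number_of_edges())  # edge count is preserved
```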
| Parameter | Type/Range | Description |
|---|---|---|
| Number of agents | 11 to 200 | Number of agents in the network |
| BA-degree | 2 to 10 | Degree for the Barabási–Albert (BA) model |
| ER-probability | 0 to 0.25 | Probability of edge creation in the Erdős–Rényi (ER) graph model |
| Rewiring probability | 0 to 1 | Probability of rewiring in the generated network |
| Uncertainty | 0.001 to 0.01 | Probability difference between the theories (smaller: harder problem) |
| n-experiments | 10 to 100 | Number of experiments run each round (smaller: less information collected) |
| Network type | ‘ba’, ‘sf’, ‘ws’ | Type of network (‘ba’: Barabási–Albert, ‘sf’: directed scale-free, ‘ws’: Watts–Strogatz) |
| Agent type | ‘bayesian’, ‘beta’ | We currently implement two agent types: the original Bayesian one and a beta-distribution-based Thompson sampler |
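To make the parameters concrete, a hedged sketch of how they could map onto standard networkx generators, with a beta-distribution Thompson sampler alongside; all names are ours, not the authors’ implementation.

```python
# Illustrative mapping of the table's parameters onto networkx generators,
# plus a beta-distribution Thompson-sampling agent. All names are ours.
import networkx as nx
import numpy as np

def make_network(kind, n_agents, ba_degree=3, ws_degree=4, rewiring_p=0.1):
    if kind == "ba":   # Barabási–Albert preferential attachment
        return nx.barabasi_albert_graph(n_agents, ba_degree)
    if kind == "sf":   # directed scale-free, collapsed to a simple graph here
        return nx.Graph(nx.scale_free_graph(n_agents))
    if kind == "ws":   # Watts–Strogatz small world
        return nx.watts_strogatz_graph(n_agents, ws_degree, rewiring_p)
    raise ValueError(kind)

class BetaAgent:
    """Thompson sampler with Beta(1, 1) priors over the two theories."""
    def __init__(self):
        self.successes = np.ones(2)
        self.failures = np.ones(2)

    def choose(self, rng):
        # Sample a success rate for each theory; act on the larger draw.
        return int(rng.beta(self.successes, self.failures).argmax())

    def update(self, theory, successes, n_experiments):
        self.successes[theory] += successes
        self.failures[theory] += n_experiments - successes

rng = np.random.default_rng(0)
G = make_network("ba", n_agents=50, ba_degree=3)
agents = {node: BetaAgent() for node in G}

# One round: each agent runs n experiments on its chosen theory; the better
# theory succeeds with probability 0.5 + uncertainty. Results are shared
# with network neighbours, in the style of Zollman's model.
uncertainty, n_experiments = 0.005, 50
p = np.array([0.5, 0.5 + uncertainty])
for node, agent in agents.items():
    theory = agent.choose(rng)
    successes = rng.binomial(n_experiments, p[theory])
    agent.update(theory, successes, n_experiments)
    for neighbour in G.neighbors(node):
        agents[neighbour].update(theory, successes, n_experiments)
```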
Results: Bayesian learner & Barabási–Albert: in nearly all simulations, essentially all agents learn the correct theory.
=============================================== ==================================================
Distribution:                        NormalDist Effective DoF:                             19.5321
Link Function:                     IdentityLink Log Likelihood:                       -298574.6004
Number of Samples:                          990 AIC:                                    597190.265
                                                AICc:                                   597191.178
                                                GCV:                                        0.0017
                                                Scale:                                      0.0016
                                                Pseudo R-Squared:                            0.145
==================================================================================================
Feature Function                  Lambda               Rank         EDoF         P > x        Sig. Code
================================= ==================== ============ ============ ============ ============
Probability of Rewiring           [0.6]                6            4.2          7.09e-01
Uncertainty Level                 [0.6]                6            3.2          2.88e-02     *
Number of Experiments             [0.6]                6            3.2          6.98e-01
Mean Degree                       [0.6]                6            3.3          1.11e-16     ***
BA-Degree                         [0.6]                6            2.6          1.11e-16     ***
Number of Agents                  [0.6]                6            3.0          1.50e-05     ***
Intercept                                              1            0.0          1.11e-16     ***
==================================================================================================
Significance codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Predicting Quality: varying how hierarchical the network is (probability of rewiring) doesn’t seem to make much difference when predicting the share of correct agents at convergence.
Predicting Speed: the probability of rewiring also doesn’t influence convergence time, which is determined by the usual suspects (problem difficulty, number of experiments, degree).
These results are very similar for all tested network types!
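The summary above matches the format of pyGAM’s LinearGAM output. A sketch of how such a fit could be produced, on synthetic stand-in data, with lam = 0.6 read off the Lambda column:

```python
# Sketch of a fit that prints a summary in the format above, using pyGAM.
# The data here is a synthetic stand-in for the simulation results, and
# lam=0.6 mirrors the Lambda column of the table.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 990  # "Number of Samples" in the summary

# Columns in the order of the feature table: rewiring probability,
# uncertainty, number of experiments, mean degree, BA-degree, agents.
X = np.column_stack([
    rng.uniform(0, 1, n),
    rng.uniform(0.001, 0.01, n),
    rng.integers(10, 101, n),
    rng.uniform(2, 20, n),
    rng.integers(2, 11, n),
    rng.integers(11, 201, n),
])
# Toy outcome: share of correct agents, driven mostly by mean degree.
y = 0.9 + 0.005 * X[:, 3] + rng.normal(0, 0.04, n)

gam = LinearGAM(
    s(0, lam=0.6) + s(1, lam=0.6) + s(2, lam=0.6) +
    s(3, lam=0.6) + s(4, lam=0.6) + s(5, lam=0.6)
).fit(X, y)
gam.summary()  # prints a table in the format shown above
```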
We don’t find a reliability advantage for sparser networks.
We don’t find an influence of rewiring the network structure.
Approach 2: Real Networks
Degree distribution of the perceptron network (n = 3519).
Degree distribution after random rewiring (p = 0.2), moving towards a normal degree distribution. Rewiring does not change the mean degree.
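A quick sanity check of that invariance, under the same edge-rewiring scheme sketched earlier; a Barabási–Albert graph stands in for the perceptron network, which is not reproduced here.

```python
# Sanity check: random rewiring preserves the edge count, hence the mean
# degree, while pulling the degree distribution toward a normal shape.
# A Barabási–Albert graph stands in for the perceptron network here.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(3519, 3)  # heavy-tailed stand-in, n=3519
H = G.copy()
nodes = list(H.nodes)
for u, v in list(H.edges):
    if rng.random() < 0.2:             # p = 0.2, as in the figure
        w = nodes[rng.integers(len(nodes))]
        if w != u and not H.has_edge(u, w):
            H.remove_edge(u, v)
            H.add_edge(u, w)

def mean_degree(g):
    return 2 * g.number_of_edges() / g.number_of_nodes()

print(mean_degree(G), mean_degree(H))                            # identical
print(max(d for _, d in G.degree), max(d for _, d in H.degree))  # tail shrinks
```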
Share of correctly informed (Bayesian) agents at convergence, depending on the varied parameters.
Isolated dependencies of agent correctness on the varied parameters. The probability of rewiring seems to strongly drive outcomes!
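The “isolated dependencies” read like partial-dependence curves; with a fitted pyGAM model, one such curve can be drawn per smooth term. A sketch on synthetic data:

```python
# "Isolated dependencies" drawn as partial-dependence curves with pyGAM;
# the data and the two features here are synthetic stand-ins.
import numpy as np
import matplotlib.pyplot as plt
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (500, 2))  # e.g. rewiring probability, uncertainty
y = 0.6 + 0.3 * X[:, 0] ** 2 + rng.normal(0, 0.05, 500)

gam = LinearGAM(s(0, lam=0.6) + s(1, lam=0.6)).fit(X, y)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for i, ax in enumerate(axes):
    XX = gam.generate_X_grid(term=i)  # grid that varies only feature i
    ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX))
    ax.set_title(f"term {i}")
plt.show()
```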
Tentative Results
Using more sophisticated network models doesn’t end the original robustness worries.
But network structure clearly does matter, as we find real networks that are suboptimal!