Page 2 of 3
Results 31 to 60 of 75

Thread: Which one of these 2 mainstream models gets it right?

  1. #31
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by parejkoj View Post
    So, I feel like I need to clear up some confusion on the part of several people here about what Springel et al. (and most modern cosmological simulations) did.

    A semi-analytic model does not "produce galaxies" in the sense of generating them ab initio from the gravitational collapse of gas, but rather assigns galaxies to dark matter halos following some prescription. The overall matter distribution is set by the cosmological parameters and however they generate the initial Gaussian random field. The small to medium scale galaxy clustering (correlation function or power spectrum) at a given redshift is thus determined by both how they assign galaxies to halos and the clustering of the dark matter particles (and thus halos) at that redshift. Because of the nature of the gravitational collapse of structure, the clustering of matter on large scales (~>70Mpc) is also determined by both, because of effects like the Baryon Acoustic Oscillation (BAO) which shows up at scales around 150Mpc.
    Thanks for the clarification.

    A semi-analytic model... assigns galaxies to dark matter halos following some prescription.

    The small to medium scale galaxy clustering (correlation function or power spectrum) at a given redshift is thus determined by both how they assign galaxies to halos and the clustering of the dark matter particles (and thus halos) at that redshift.


    So the point is about the galaxy clustering and redshift information that the semi-analytic model introduces into their CDM model... which still means that the model knew about galaxies, in terms of data gathered from observational surveys of galaxy clustering and redshift.

  2. #32
    Quote Originally Posted by Don J View Post
    Thanks for the clarification.
    So the point is about the galaxy clustering and redshift information that the semi-analytic model introduces into their CDM model... which still means that the model knew about galaxies, in terms of data gathered from observational surveys of galaxy clustering and redshift.
    No, I think you're still misunderstanding. The prescription for how to populate halos with galaxies doesn't know about "galaxy clustering and redshift". It knows about baryon physics, and something about how baryons are distributed relative to the dark matter. That's it. The tuning that happens is required because some aspects of the baryon physics are very complicated and not that well constrained, but the model is not tuned to match observations of galaxy clustering. The clustering is a direct prediction of the model.
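    To make "prescription" concrete, here is a toy sketch of the kind of halo-occupation rule being described. It is illustrative only — the function name, threshold masses, and power-law form are assumptions for the sketch, not Springel et al.'s actual recipe. Note that galaxies are assigned as a function of halo mass alone; observed clustering never enters:

```python
import math
import random

def populate_halo(m_halo, m_min=1e11, m1=1e13, alpha=1.0, rng=random):
    """Toy occupation rule: halos below m_min host no galaxy; above it,
    one central galaxy plus a Poisson number of satellites with mean
    (m_halo / m1)**alpha.  Nothing here refers to observed clustering."""
    if m_halo < m_min:
        return 0
    mean_sats = (m_halo / m1) ** alpha
    # Poisson draw via Knuth's multiplication method (fine for small means)
    limit, k, p = math.exp(-mean_sats), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return 1 + k        # central galaxy + k satellites
        k += 1

# A halo catalogue (here just random masses) gets galaxies painted on:
rng = random.Random(42)
halos = [10 ** rng.uniform(10, 15) for _ in range(1000)]
galaxies = [populate_halo(m, rng=rng) for m in halos]
```

    The predicted galaxy clustering then follows from where the halos themselves sit, which the prescription never sees.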

  3. #33
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by parejkoj View Post
    No, I think you're still misunderstanding. The prescription for how to populate halos with galaxies doesn't know about "galaxy clustering and redshift". It knows about baryon physics, and something about how baryons are distributed relative to the dark matter. That's it. The tuning that happens is required because some aspects of the baryon physics are very complicated and not that well constrained, but the model is not tuned to match observations of galaxy clustering. The clustering is a direct prediction of the model.
    But they specifically claim on page 19: "our adopted parameter values are consistent with a combined analysis of the 2dFGRS surveys and first WMAP data."
    http://arxiv.org/abs/astro-ph/0504097
    So it seems that they plugged in the results of the combined analysis of the 2dFGRS surveys and first WMAP data to run the program... Right?

    ETA:
    On page 18, in the Methods section, they describe the code used for the simulation, called a "hierarchical multipole expansion method", or "tree" algorithm.
    I think it is a similar version or a modification of this one:
    http://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf
    Last edited by Don J; 2012-May-11 at 06:07 AM.

  4. #34
    Join Date
    Mar 2004
    Location
    Ocean Shores, Wa
    Posts
    5,151
    Models always assume round cows. No one has ever made a model based upon first principles that creates the universe as we know it. As more knowledge of what the universe is has rolled into our path, the more complex the models must be to handle the details.

    The danger in this approach is that by adding more assumptions and parameters, more confidence ends up going into the model than is warranted. A good example is the University of Colorado 'hurricane predictions' posted in late March or early April. Over the last five years, these models demonstrated zero predictive power and had to be scrapped. There was nothing wrong with the physics in the models - the problem was the inability of the model to adapt to rapid climate change. Cause-and-effect assumptions were just plain wrong.

    We should enjoy the fact that we have toy models that roll out a universe similar to what we see. But these models are toys - not hard physical solutions, and the underlying danger is that more confidence is placed in 'established scientific principles' than should be. In any and all cases, we are infants sharing ideas about something we are trying to wrap our collective arms around. We should welcome as many models as possible, and scrutinize the assumptions that seem to have the best predictive power - both into the past and into the future. There is much to learn.

    Meanwhile, back at this thread - it seems to me that a model based upon both gravitation and electrodynamics has a much better chance of surviving into the future than a dark matter model. NONE of the predicted attributes of dark matter have shown their signature. Dark matter is a vacuous, unsubstantiated physical assumption, and the sooner we find a way to discard it, the better.

  5. #35
    Join Date
    Nov 2002
    Posts
    6,238
    Quote Originally Posted by Don J View Post
    But they specifically claim on page 19: "our adopted parameter values are consistent with a combined analysis of the 2dFGRS surveys and first WMAP data."
    http://arxiv.org/abs/astro-ph/0504097
    So it seems that they plugged in the results of the combined analysis of the 2dFGRS surveys and first WMAP data to run the program... Right?
    Don, any astrophysical paper will specify the assumed cosmological values. Those values would be different if you assumed MOND, but there would still be values that have to be plugged in. Lookback time, comoving distance, etc. are all different if the parameters are different; that's why they have to specify which parameters they are using. For instance, this paper, at the bottom right of page 2, lists the cosmological parameters used in the paper. That was a random paper I found by typing Gunn-Peterson into Google. It was the fifth item on the page.

    And, just as a note, they are trying to model the evolution of galaxies, quasars and their distribution. They plug in the values that have been measured now, and let the thing run from the last scattering. If the end result of the simulation isn't what we see, then there may be a problem with our current model. If the end of the simulation matches what we currently see, then our current model may be on the right path. This isn't to say that if it doesn't match there's something wrong or if it does, it definitely confirms it. Just that it's a piece of evidence either for (it matches) or against (it doesn't match)

    Quote Originally Posted by Don J View Post
    ETA:
    On page 18, in the Methods section, they describe the code used for the simulation, called a "hierarchical multipole expansion method", or "tree" algorithm.
    I think it is a similar version or a modification of this one:
    http://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf
    Actually, no, it's not. That particular one is used in artificial intelligence and information theory, not in N-body simulations. The code that Springel et al. used is a combination of Particle Mesh (PM) and TREE codes. This paper describes the merging of PM and TREE codes. The TREE codes came out of work by Josh Barnes and Piet Hut.
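    For readers unfamiliar with tree codes, here is a minimal sketch of the Barnes-Hut idea in 2-D — a simplified illustration, not Springel et al.'s actual GADGET code, and the class/function names and parameter values are choices made for this sketch. Distant groups of particles are replaced by their total mass at their centre of mass, controlled by an opening angle theta; theta = 0 degenerates to exact direct summation:

```python
import math
import random

class Node:
    """Square region of space holding total mass and centre of mass."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # centre, half-width
        self.mass = 0.0
        self.mx = self.my = 0.0                      # mass-weighted sums
        self.children = None                         # four quadrants, or None
        self.body = None                             # particle held by a leaf

    def insert(self, x, y, m):
        if self.mass == 0.0:                         # empty leaf
            self.body = (x, y, m)
        elif self.children is None:                  # occupied leaf: split it
            self.children = [
                Node(self.cx + dx * self.half / 2,
                     self.cy + dy * self.half / 2, self.half / 2)
                for dx in (-1, 1) for dy in (-1, 1)]
            bx, by, bm = self.body
            self.body = None
            self._quadrant(bx, by).insert(bx, by, bm)
            self._quadrant(x, y).insert(x, y, m)
        else:
            self._quadrant(x, y).insert(x, y, m)
        self.mass += m
        self.mx += m * x
        self.my += m * y

    def _quadrant(self, x, y):
        # children are ordered (-1,-1), (-1,+1), (+1,-1), (+1,+1)
        return self.children[(2 if x >= self.cx else 0) +
                             (1 if y >= self.cy else 0)]

def accel(node, x, y, theta=0.5, eps=1e-3):
    """Acceleration at (x, y) from all mass in `node` (G = 1, softened)."""
    if node.mass == 0.0:
        return 0.0, 0.0
    dx = node.mx / node.mass - x
    dy = node.my / node.mass - y
    r = math.sqrt(dx * dx + dy * dy) + eps
    # Accept the node whole if it is a leaf, or small enough (opening angle)
    if node.children is None or (2 * node.half) / r < theta:
        if node.children is None and node.body[0] == x and node.body[1] == y:
            return 0.0, 0.0                          # skip self-interaction
        a = node.mass / r ** 3
        return a * dx, a * dy
    ax = ay = 0.0
    for child in node.children:
        cx_, cy_ = accel(child, x, y, theta, eps)
        ax += cx_
        ay += cy_
    return ax, ay
```

    Real tree codes add higher multipole corrections and, in the TreePM variant Springel used, a particle-mesh force for the long-range part.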

  6. #36
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Tensor View Post
    And, just as a note, they are trying to model the evolution of galaxies, quasars and their distribution. They plug in the values that have been measured now, and let the thing run from the last scattering. If the end result of the simulation isn't what we see, then there may be a problem with our current model. If the end of the simulation matches what we currently see, then our current model may be on the right path. This isn't to say that if it doesn't match there's something wrong or if it does, it definitely confirms it. Just that it's a piece of evidence either for (it matches) or against (it doesn't match)
    That is exactly what I have said since the beginning: the model knew about all these things (large-scale structure, the evolution of galaxies, quasars, and their distribution), so it is not surprising that it produced them.

    That is what I tried to explain to Shaula, who argued in post 13, via his demonstrative example, that "The model did not know about galaxies but it produced them."

    Or to Reality Check, who argued in post 26 about large-scale structures:
    "The evolution of the large scale structures in billions of years is an output of the model. It is not part of the input. It is a prediction of the model."
    and in post 27
    "He is confirming what anyone who knows about the simulation knows: That the model did not know about galaxies but it produced them."

    ETA:
    Now, about the observation and "prediction" that the model provided.

    Observation, on page 15, following a demonstration:
    "...are the baryon wiggles also present in the galaxy distribution? ...Fig. 6 shows that the answer to that important question is yes."

    Prediction, on page 18, near the bottom:
    "If future surveys improve on this (see text)... then precision measurements of galaxy clustering will shed light on the most puzzling component of the Universe: the elusive dark energy field."
    Last edited by Don J; 2012-May-13 at 04:10 AM.

  7. #37
    Join Date
    Nov 2002
    Posts
    6,238
    Quote Originally Posted by Don J View Post
    That is exactly what I have said since the beginning: the model knew about all these things (large-scale structure, the evolution of galaxies, quasars, and their distribution), so it is not surprising that it produced them.
    No, it doesn't know about galaxies. Those parameters are not large-scale structure, galaxies, quasars, or their distribution. They are simply numbers that are based on observations. Those numbers do nothing else but give the model its initial conditions. The model then proceeds from there.

    Quote Originally Posted by Don J View Post
    That is what I tried to explain to Shaula, who argued in post 13, via his demonstrative example, that "The model did not know about galaxies but it produced them."
    That is exactly what the model does. It starts with no galaxies. How does it know about galaxies, when it starts without them?

    Quote Originally Posted by Don J View Post
    Or to Reality Check, who argued in post 26 about large-scale structures:
    "The evolution of the large scale structures in billions of years is an output of the model. It is not part of the input. It is a prediction of the model."
    and in post 27
    "He is confirming what anyone who knows about the simulation knows: That the model did not know about galaxies but it produced them."
    He's right. They give their paper's initial conditions. A MOND model would have different parameters, BUT THE MOND MODEL WOULD STILL HAVE TO HAVE THOSE PARAMETERS TO INITIALIZE THE MODEL. Would you claim the MOND model "knows" about large-scale structure, galaxies, quasars, and their distribution because of MOND's required initial parameters? Even a plasma universe model would require initial parameters to run. Every model has initial conditions that have to be input into the model to allow it to run. If you are arguing that inputting initial conditions (that have been measured) somehow causes the model to output the answer you want, it appears that you don't know what initial conditions or initial parameters are, or their importance in modeling.


    Modeling works in the following way:
    Develop the model.
    Write the code that represents the model.
    Enter the initial conditions.
    Run the model.
    Observe the data during the run and at the end of the run.

    Where in there does entering numbers before running the model allow the model (which was written without knowing the actual numeric initial conditions) to "know" about Galaxies, Quasars, large scale structure, and distribution?

    The following are the parameters that were used in the model.

    Ωm = Ωdm + Ωb = 0.25,
    Ωb = 0.045,
    h = 0.73,
    ΩΛ = 0.75,
    n = 1,
    σ8 = 0.9.

    ρcrit = 3H₀²/(8πG)
    H₀ = 100 h km s⁻¹ Mpc⁻¹
    σ8 is the rms linear mass fluctuation within a sphere of radius 8 h⁻¹ Mpc, extrapolated to z = 0

    What changes should be made in those values, so the model doesn't "know" about Large scale structures, evolution of galaxies, quasars, and their distribution, and why do those changes take away the knowledge from the model?
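    To underline that these parameters really are "simply numbers", here is a short sketch turning the values quoted above into the physical densities a simulation actually initialises with, using the standard definitions (the constants and unit conversions are textbook values, not taken from the paper):

```python
import math

# Parameters as listed in the thread
h, omega_m, omega_b, omega_lambda = 0.73, 0.25, 0.045, 0.75

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22                     # metres per megaparsec
H0 = 100.0 * h * 1000.0 / Mpc       # 100 h km/s/Mpc converted to 1/s

rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)    # critical density, kg/m^3
rho_m = omega_m * rho_crit                        # total matter density
rho_dm = (omega_m - omega_b) * rho_crit           # cold dark matter alone
rho_lambda = omega_lambda * rho_crit              # dark energy density
```

    Nothing about galaxies or clustering appears anywhere: the model is handed a handful of densities and evolves forward from there.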

  8. #38
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Tensor View Post
    The following are the parameters that were used in the model.

    Ωm = Ωdm + Ωb = 0.25,
    Ωb = 0.045,
    h = 0.73,
    ΩΛ = 0.75,
    n = 1,
    σ8 = 0.9.

    ρcrit = 3H₀²/(8πG)
    H₀ = 100 h km s⁻¹ Mpc⁻¹
    σ8 is the rms linear mass fluctuation within a sphere of radius 8 h⁻¹ Mpc, extrapolated to z = 0

    Where in there does entering numbers before running the model allow the model (which was written without knowing the actual numeric initial conditions) to "know" about Galaxies, Quasars, large scale structure, and distribution?

    Those parameters are not large-scale structure, galaxies, quasars, or their distribution. They are simply numbers that are based on observations. Those numbers do nothing else but give the model its initial conditions. The model then proceeds from there.
    But the code algorithm (the program) is the key: it is creating the large-scale structures... via a pre-established pattern.

    Quote Originally Posted by Tensor View Post
    The code that Springel et al. used is a combination of Particle Mesh (PM) and TREE codes.
    http://en.wikipedia.org/wiki/Barnes%...Hut_simulation
    and this paper describes the merging of PM and TREE codes.
    http://arxiv.org/pdf/astro-ph/9409021v1.pdf
    Last edited by Don J; 2012-May-13 at 05:32 AM.

  9. #39
    Join Date
    Nov 2002
    Posts
    6,238
    Quote Originally Posted by Don J View Post
    But the code algorithm (the program) is the key: it is creating the large-scale structures... via a pre-established pattern.
    Yeah, so? Gravity and EM act a certain way; you can't change that. That is what the program does: it takes the initial conditions and then takes into account the effects of gravity and EM as the model runs. Or are you accusing them of specifically making up code to get large-scale structures, galaxies, and quasars?

  10. #40
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Tensor View Post
    Yeah, so? Gravity and EM act a certain way; you can't change that. That is what the program does: it takes the initial conditions and then takes into account the effects of gravity and EM as the model runs.
    Right... however, are you sure this model takes EM effects into account?
    Quote Originally Posted by Tensor View Post
    Or are you accusing them of specifically making up code to get large-scale structures, galaxies, and quasars?
    No. What I said is that the simulation program used (which obviously must take into account the initial conditions)...
    http://en.wikipedia.org/wiki/Barnes%...Hut_simulation

    ...creates what it is supposed to create (nodes at the junctions of two or more intersecting filaments)...
    http://en.wikipedia.org/wiki/Node_%28graph_theory%29

    In graph theory, a vertex (plural vertices) or node is the fundamental unit out of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects.

    The two vertices forming an edge are said to be the endpoints of this, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v,w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v.

    Taking into account the universe's expansion factor.
    Multipole expansion:
    http://en.wikipedia.org/wiki/Multipole_expansion
    Last edited by Don J; 2012-May-13 at 07:23 AM.

  11. #41
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    4,171
    ....via a pre-established pattern.
    What exactly do you mean by that? Please explain in suitable detail where the template you seem to be alluding to is, what it consists of and how it invalidates the results.

  12. #42
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Shaula View Post
    ....via a pre-established pattern.
    What exactly do you mean by that? Please explain in suitable detail where the template you seem to be alluding to is, what it consists of
    See post 40.
    Quote Originally Posted by Shaula View Post
    and how it invalidates the results.
    I don't claim it invalidates the results.

  13. #43
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    4,171
    See post 40.
    That really does not answer what I asked, even with the edits. You seem to be making an assumption that somehow the way the algorithm divides the simulation space up creates large scale structure? Is your argument that any simulation, no matter the starting conditions, that uses this scheme will generate results similar to the filamentary structures we observe? Please clarify your stance on this as I am not sure I agree with you.

  14. #44
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Shaula View Post
    That really does not answer what I asked, even with the edits. You seem to be making an assumption that somehow the way the algorithm divides the simulation space up creates large scale structure?
    No, it's not about the way the algorithm divides the simulation space up. Remember, the code that Springel et al. used is a combination of the Particle Mesh and TREE codes.

    Quote Originally Posted by Shaula View Post
    Is your argument that any simulation, no matter the starting conditions, that uses this scheme will generate results similar to the filamentary structures we observe?
    Not exactly... The TREE code will still produce filamentary structures, but obviously not similar to what we observe today. But I wonder what would happen if you used a value 50% lower for dark matter and 50% higher for baryonic matter, without changing the other values?


    The following are the parameters that were used in the model.

    Ωm = Ωdm + Ωb = 0.25,
    Ωb = 0.045,
    h = 0.73,
    ΩΛ = 0.75,
    n = 1,
    σ8 = 0.9.

    ρcrit = 3H₀²/(8πG)
    H₀ = 100 h km s⁻¹ Mpc⁻¹
    σ8 is the rms linear mass fluctuation within a sphere of radius 8 h⁻¹ Mpc, extrapolated to z = 0

  15. #45
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    4,171
    So your point is that because astrophysical simulations produce filamentary structures by simulating gravitational effects it is cheating and non-predictive to use astrophysical models in astrophysics?

    What exactly is your chain of logic here? Please take the time to spell it out a bit more completely than you have been doing. Your posts feel like a series of retorts - it would be useful to have it laid out clearly in one place.

  16. #46
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Shaula View Post
    So your point is that because astrophysical simulations produce filamentary structures by simulating gravitational effects it is cheating and non-predictive to use astrophysical models in astrophysics?
    I have never said that.
    In post 36 I even pointed to an observation and a prediction this LCDM model makes. This model answers an important question and makes a big prediction...

    http://arxiv.org/abs/astro-ph/0504097

    Observation made by this LCDM model, on page 15, following a demonstration:
    "...are the baryon wiggles also present in the galaxy distribution? ...Fig. 6 shows that the answer to that important question is yes."

    Prediction made by this LCDM model, on page 18, near the bottom:
    "If future surveys improve on this (see text)... then precision measurements of galaxy clustering will shed light on the most puzzling component of the Universe: the elusive dark energy field."



    But as I pointed out in the OP, there is another mainstream model, based on magnetic fields and gravity, which also makes predictions about large-scale structures.

    http://adsabs.harvard.edu/cgi-bin/np...13B&db_key=AST
    Last edited by Don J; 2012-May-13 at 08:27 AM.

  17. #47
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    4,171
    But the code algorithm (the program) is the key: it is creating the large-scale structures... via a pre-established pattern.
    This is the statement I am trying to get to the bottom of. You seem to be implying that there is something going on, in the case of the paper you clearly do not favour, that makes its predictions less valuable than they appear at first glance. Please can you elaborate?

  18. #48
    Quote Originally Posted by Don J View Post
    Not exactly... The TREE code will still produce filamentary structures, but obviously not similar to what we observe today. But I wonder what would happen if you used a value 50% lower for dark matter and 50% higher for baryonic matter, without changing the other values?
    If you do that, you'll get the matter correlation function wrong (among other things). Read some of the references in Springel et al. for examples of this.

    The filamentary structure comes almost exclusively from the gravitational collapse of Gaussian-random perturbations of the initial density field. The specific code they use to perform the simulation doesn't really matter: it's just one choice to speed up calculations.
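    A toy illustration of that point: seeding a (here one-dimensional) Gaussian random density field by summing Fourier modes with random phases, with mode variances set by an assumed power-law power spectrum. Every detail below (grid size, mode count, spectral index, function name) is an arbitrary choice for the sketch; the point is that the only "pattern" put in is statistical:

```python
import math
import random

def gaussian_random_field(n_grid=256, n_modes=32, spectral_index=1.0, seed=0):
    """Sum cosine modes with Gaussian random amplitudes (variance ~ P(k))
    and uniform random phases -- the standard statistical recipe for
    N-body initial conditions, reduced to one dimension."""
    rng = random.Random(seed)
    field = [0.0] * n_grid
    for mode in range(1, n_modes + 1):
        k = 2.0 * math.pi * mode / n_grid
        amp = rng.gauss(0.0, math.sqrt(k ** spectral_index))  # P(k) ~ k^n
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for i in range(n_grid):
            field[i] += amp * math.cos(k * i + phase)
    return field

delta = gaussian_random_field()
```

    Gravity then amplifies the overdense regions of such a field into filaments and nodes; the filamentary look is an outcome, not a template.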

  19. #49
    Quote Originally Posted by Don J View Post
    But as I pointed out in the OP, there is another mainstream model, based on magnetic fields and gravity, which also makes predictions about large-scale structures.

    http://adsabs.harvard.edu/cgi-bin/np...13B&db_key=AST
    For the Nth time, this is not a mainstream model, and its predictions cannot be easily compared with observations, nor does that paper attempt to do so. If you want to suggest that that model is correct, you have to either find a paper by that group that does compare their results with modern observations, or make that comparison yourself.

  20. #50
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by parejkoj View Post
    For the Nth time, this is not a mainstream model, and its predictions cannot be easily compared with observations, nor does that paper attempt to do so. If you want to suggest that that model is correct, you have to either find a paper by that group that does compare their results with modern observations, or make that comparison yourself.
    I see; only models based on LCDM, with gravity as the only driving force, are considered mainstream. Why does the mainstream choose to ignore the EM force in its models?

    Quote Originally Posted by parejkoj View Post
    For the Nth time, this is not a mainstream model,
    But the model works nonetheless... right?
    http://adsabs.harvard.edu/cgi-bin/np...13B&db_key=AST

  21. #51
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by parejkoj View Post
    But I wonder what would happen if you used a value 50% lower for dark matter and 50% higher for baryonic matter, without changing the other values?
    If you do that, you'll get the matter correlation function wrong (among other things). Read some of the references in Springel et al. for examples of this.
    That is what I had in mind. So the model works only under the hypothetical assumptions of the cold dark matter ratio and the dark energy ratio...

    Quote Originally Posted by parejkoj View Post
    The filamentary structure comes almost exclusively from the gravitational collapse of Gaussian-random perturbations of the initial density field. The specific code they use to perform the simulation doesn't really matter: it's just one choice to speed up calculations.
    I understand that...

  22. #52
    Quote Originally Posted by Don J View Post
    But the model works nonetheless... right?
    http://adsabs.harvard.edu/cgi-bin/np...13B&db_key=AST
    Battaner's model? I have no idea. As I said, there are no comparisons with observations in that paper, so there's no way to know.

  23. #53
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by parejkoj View Post
    Battaner's model? I have no idea. As I said, there are no comparisons with observations in that paper, so there's no way to know.
    There is a comparison with observations here:

    http://adsabs.harvard.edu/abs/1998A%26A...338..383B

  24. #54
    Quote Originally Posted by Don J View Post
    There is a comparison with observations here:

    http://adsabs.harvard.edu/abs/1998A%26A...338..383B
    Wow... Seriously? Some sketches of "octahedral structure" and they call it good? Besides the fact that in the past decade we've gotten several orders of magnitude more data that maps the large scale structure in a large volume out to z~0.7, that paper is a joke! There's absolutely no statistical analysis, no comparison with the galaxy correlation function or power spectrum (as was found in Springel et al.), and no mention of how they convert redshifts to distances, which is kind of important for this sort of thing.

    Anyone who points at a contour map, draws some straight lines on it, and says "look at these obvious structures" is trying to sell you a bill of goods.

    I particularly like the line on the first page where they say "... the recognition of the octahedral network was noticeably easy, rendering a full statistical analysis unnecessary." 'nuff said.

  25. #55
    Join Date
    Nov 2002
    Posts
    6,238
    Quote Originally Posted by Don J View Post
    Right... however, are you sure this model takes EM effects into account?
    Yeah, they talk about reionization and radio mode cooling. Of course, you may want more....

    Quote Originally Posted by Don J View Post
    No, what I said is that the simulation program used ..

    snip....

    formed by all vertices adjacent to v.
    What is the problem here? You're complaining about how computers do calculations, how operations are done, or how the computer decides which operations to do. It still appears that you think the programmers of the model are somehow specifically programming the model to produce what they want it to produce. You haven't provided any specific example. Everything you've complained about is nothing more than standard procedure for running computer models. The same methods are used in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.

    Can you show exactly how this would lead the code to "know" about galaxies, quasars, large scale structure, and connections? If you can't, it would seem to be nothing more than a case of you don't like it, but really can't show anything wrong with it.


    Quote Originally Posted by Don J View Post
    Taking into account the Universe expansion factor .
    Multipole expansion
    http://en.wikipedia.org/wiki/Multipole_expansion
    You do know that the multipole expansion is a mathematical operation (under most circumstances it is the Laplace expansion), and that the universe's expansion factor is nothing more than the Hubble parameter?
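    For reference, the Laplace expansion being referred to — the textbook identity that "tree" gravity codes truncate to approximate the potential of a distant group of particles, valid for $r > r'$:

```latex
\frac{1}{|\mathbf{r} - \mathbf{r}'|}
  = \sum_{\ell=0}^{\infty} \frac{r'^{\ell}}{r^{\ell+1}}\, P_\ell(\cos\gamma),
\qquad r > r',
```

    where $\gamma$ is the angle between $\mathbf{r}$ and $\mathbf{r}'$ and $P_\ell$ are the Legendre polynomials. Keeping only the $\ell = 0$ (monopole) term is exactly the "replace a distant cell by its total mass at its centre of mass" step of a tree code.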

  26. #56
    Join Date
    Nov 2002
    Posts
    6,238
    Quote Originally Posted by Don J View Post
    I see; only models based on LCDM, with gravity as the only driving force, are considered mainstream. Why does the mainstream choose to ignore the EM force in its models?

    But the model works nonetheless... right?
    http://adsabs.harvard.edu/cgi-bin/np...13B&db_key=AST
    Define "works". There is the following nugget in that paper (Part 1, page 2, top of the second column):

    "We have not included either protons or electrons in the system of equations."

    Really? How accurate do you think this will be? At least Springel et al.'s model uses matter. Not to mention that if there were as much of a magnetic field as Battaner claims, there would be effects seen in the CMB. We don't see those effects, so there couldn't have been that much of a magnetic field. In Battaner's defense, his paper predates the precision WMAP data. It could be that EM is ignored in most models because we don't see its effects in any large way.

  27. #57
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Tensor View Post
    What is the problem here? You're complaining about how computers do calculations, how operations are done, or how the computer decides which operations to do. It still appears that you think the programmers of the model are somehow specifically programming the model to produce what they want it to produce. You haven't provided any specific example. Everything you've complained about is nothing more than standard procedure for running computer models. The same methods are used in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.
    Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

    Quote Originally Posted by Tensor View Post
    Can you show exactly how this would lead the code to "know" about galaxies, quasars, large scale structure, and connections? If you can't, it would seem to be nothing more than a case of you don't like it, but really can't show anything wrong with it.
    Well, I think the term "know" about galaxies was rather inappropriate... as parejkoj pointed out:

    "it rather assigns galaxies to dark matter halos following some prescription."

    So what are those prescriptions?

    Quote Originally Posted by Tensor View Post
    You do know that the multipole expansion is a mathematical operation, (under most circumstances it is the Laplace Expansion ). The Universe Expansion Factor is nothing more than the Hubble parameter.?
    No problem with that! Does the Hubble parameter determine the value of the dark energy?
    h = 0.73,
    ΩΛ = 0.75,
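    For reference, the relation between h, the critical density, and ΩΛ, using the definitions quoted earlier in the thread:

```latex
\rho_{\mathrm{crit}} = \frac{3 H_0^2}{8\pi G}, \qquad
H_0 = 100\,h\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad
\rho_\Lambda = \Omega_\Lambda\,\rho_{\mathrm{crit}} .
```

    So h fixes the physical value of $\rho_{\mathrm{crit}}$ (and hence of $\rho_\Lambda$), but the dimensionless ratio $\Omega_\Lambda$ is measured independently (e.g. from supernovae and the CMB); the Hubble parameter does not determine it.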

  28. #58
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    4,171
    Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?
    The main model doesn't even try to produce galaxies - it focuses on very large scale structure, the distribution of matter over vast distances. The parameters from this are then fed into a different, as you have said, semi-analytical model to be populated with galaxies. The changes in the parameters and this model are used to evolve the objects embedded in the halos. The model does not attempt to deal with galactic formation - it is about the interactions between the large scale structure and the objects in it.

  29. #59
    Join Date
    Oct 2009
    Posts
    1,360
    Quote Originally Posted by Shaula View Post
    Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

    The main model doesn't even try to produce galaxies - it focuses on very large scale structure, the distribution of matter over vast distances. The parameters from this are then fed into a different, as you have said, semi-analytical model to be populated with galaxies. The changes in the parameters and this model are used to evolve the objects embedded in the halos. The model does not attempt to deal with galactic formation - it is about the interactions between the large scale structure and the objects in it.
    My reply was in response to Tensor, who said... (see post 57 for context):
    They use the same methods in modeling nuclear weapons, supernova explosions, hydrodynamical models, etc.
    I wanted to know if all the computing power was used only for that specific task, using the specific code algorithm used for the Millennium Simulation.

    Can you use that program to create spiral galaxies, for example, in the sense of generating them ab initio from the gravitational collapse of gas?

  30. #60
    Join Date
    Mar 2010
    Location
    United Kingdom
    Posts
    4,171
    What do you mean by 'that program'? Do you mean the code? No, it is a large-scale structure simulation. If you mean the algorithm - you could, but you would need to include other effects to get anything realistic. Galaxies are complex beasts. Galactic formation models have to take into account the evolution of the objects in them; they are inherently more complex than dark matter. The bulk of the computing power was used to generate larger-than-ever data sets: high-resolution data over huge volumes. It was not used to generate spiral galaxies from scratch.

