Table of Contents
Wednesday, October 24, 2012
Paper - Magnetic Fields in Star Formation
A star’s evolution begins and ends in the interstellar medium, the dust and gas that exists between the stars. This cyclic process of star formation begins when dust and gas collapse and compress together due to gravity. However, if gravity always draws things together, why isn’t everything a star? Why isn’t the universe mostly stars and planets rather than dust and empty space? The answer is that on average the distance between two particles in the interstellar medium is too large; the dust is too diffuse for star formation to occur. Only through turbulence and random chance can the dust group up enough to begin coagulating into a star.
The first requirement for star formation is that there is enough mass in a given region to form a star. This minimum mass for a given region is called the Jeans criterion, which models the interstellar medium as a uniform, stationary cloud of gas that acts under only gravity and ideal gas pressure. Given these assumptions we can use the virial theorem to model the collapse of the gas. The virial theorem tells us that for a closed system in equilibrium,
2⟨T⟩ + ⟨U⟩ = 0.
So we can see that on average the magnitude of the total potential energy of the system is twice the total kinetic energy. From this stability condition we can conclude that if the magnitude of the potential energy exceeds twice the kinetic energy, the cloud will begin to collapse in on itself as gravity links more and more mass together. With a bit of substitution (using ⟨T⟩ = (3/2)NkT for an ideal gas and ⟨U⟩ ≈ −(3/5)GM²/R for a uniform sphere) we get the equations
M_J ≈ (5kT / (GµmH))^(3/2) (3 / (4πρ₀))^(1/2)
R_J ≈ (15kT / (4πGµmH ρ₀))^(1/2)
M_J is the minimum mass to trigger collapse, and R_J is the radius within which that mass must be contained (here µ is the mean molecular weight, mH the mass of hydrogen, and ρ₀ the density of the cloud). We can envision a new process just from thinking about this equation: as the density increases, the Jeans mass decreases, which means that as the object collapses, less mass is needed to keep it collapsing, while greater density increases the gravitational potential energy. This gives rise to the process of fragmentation, where the collapse of a dust cloud is a runaway process in which denser regions of gas collapse faster. This means that not all the material in a region gets used; the more dense and massive a cloud, the smaller the fragments, and thus smaller stars are formed from the gas cloud.
Another property we can deduce from the above equations is the average velocity of the particles in the cloud. ⟨T⟩ represents the average kinetic energy, which is equal to (1/2)m⟨v²⟩. This gives us the root-mean-square speed of a particle,
v_rms = (3kT / (µmH))^(1/2),
which comes out to an average speed of about 500 m/s for a collapsible gas cloud. So just by making the simplest of assumptions we already have a lot of information and approximations for star formation. We have a rough idea of the density and size necessary to form a star, and we have some idea of its structure by knowing the velocity of the particles within it. However, this basic model obviously lacks many other important aspects that contribute significantly and change its criteria, including: the initial velocity of the cloud, radiation transport through the cloud, ionization, rotation, and magnetic effects. In particular, the magnetic field plays a very important role in preventing or triggering collapse.
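These numbers are easy to reproduce. Below is a minimal sketch in Python, assuming illustrative cloud parameters (T = 20 K, molecular hydrogen, and a number density of 10^9 particles per cubic meter) rather than values from any particular observation:

import math

# Physical constants (SI)
k_B = 1.381e-23      # Boltzmann constant [J/K]
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
m_H = 1.674e-27      # mass of hydrogen [kg]
M_sun = 1.989e30     # solar mass [kg]

# Assumed cloud parameters (illustrative values for a molecular cloud)
T = 20.0             # temperature [K]
mu = 2.0             # mean molecular weight (molecular hydrogen)
n = 1e9              # number density [m^-3]
rho = mu * m_H * n   # mass density [kg/m^3]

# Jeans mass: M_J = (5kT / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
M_J = (5 * k_B * T / (G * mu * m_H))**1.5 * math.sqrt(3 / (4 * math.pi * rho))

# Root-mean-square particle speed: v_rms = (3kT / (mu m_H))^(1/2)
v_rms = math.sqrt(3 * k_B * T / (mu * m_H))

print(f"Jeans mass: {M_J / M_sun:.0f} solar masses")   # a few tens of M_sun
print(f"v_rms:      {v_rms:.0f} m/s")                  # ~500 m/s

With these inputs the rms speed comes out near the 500 m/s quoted above, and the Jeans mass is a few tens of solar masses.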
Troland and Heiles (1986) found that the magnetic field energy in the beginning stages of collapse is on the same order as the gravitational and kinetic energy of the molecular cloud. Although we might guess that the field has a strong effect because electromagnetic forces are intrinsically stronger than gravitational ones, electromagnetic field strength falls off with distance (as 1/r² for a point charge), which should be extremely significant given the large distances we deal with in molecular clouds. So it is not inherently obvious that magnetic fields can play such a huge role. However, they do, and we know this because of the Zeeman effect, which allows us to detect the strength of magnetic fields from a distance.
The Zeeman effect is the splitting of electron energy levels in an atom due to the presence of a magnetic field. This separates a single emission line into three distinct lines, and the spacing between these lines is proportional to the strength of the field. In particular, the 21 cm line produced by hydrogen and the 18 cm line produced by OH are often measured, because their wavelengths are large enough to penetrate out through the clouds of interstellar dust. Troland and Heiles measured these lines using a method called stellar spectropolarimetry, which takes advantage of the polarization of electromagnetic waves in order to measure the strength of the magnetic field. These measurements led to the determination of the magnetic field strength in molecular clouds.
Now that we know the strength of the field, we can try to characterize what it does. It serves two main functions: the first is aiding in supporting or collapsing the cloud, the second is changing the distribution of mass in the cloud. Let’s start by analyzing the first function. We can think of the magnetic field lines as distributed among the particles in the gas. Since the fields are linked to the particles, when the gas tries to collapse in on itself, the field lines come together. We know from Lenz’s law that electromagnetic systems resist changes in flux, so this increases the magnetic pressure in the system, which resists compression due to gravity. If we include this in the Jeans criterion, the critical mass rises to a magnetic critical mass set by the cloud’s mass-to-flux ratio, which scales as
M_B ∝ B R² / √G.
However, this idea has some issues. If we use it to predict star formation, the initial mass function would be much higher than is observed. The initial mass function is the distribution of masses of newly formed stars, and since the magnetic field here is treated in a way where it only resists collapse, the predicted mass function rises. This means we most likely have a problem with our assumption that the magnetic field lines are linked to the positions of the particles in the cloud. This is supported by the fact that newly formed stars would otherwise contain a very high magnetic field as the contributions add up. However, recent theories indicate that there is a way to account for this magnetic field loss, so the assumption may not be entirely untrue.
One of these theories is ambipolar diffusion. The logic behind it is that although the interstellar medium is mostly neutral, there are still ions, and these ions must travel along the magnetic field lines in the ISM. The ions can therefore be thought of as a nearly stationary grid that slows down the neutral particles colliding with it: instead of acting as elastic collisions, the encounters transfer energy to the ions, and from them to the magnetic field. This means that more mass will coagulate around the magnetic field lines.
In addition, since the positions of the ions are tied to the field, even though the neutral mass is collapsing down, the ions stay relatively close to where they are and remain linked to the magnetic field. This means the magnetic field doesn’t get drawn in nearly as much as in the original model, where the field gets dragged in uniformly with the collapse of the dust cloud. This accounts for the weaker magnetic fields of new stars, since the density can increase without drastically increasing the magnetic flux. The effect also helps explain the high field strength in the interstellar medium: the field is carried by all the ions left over after star formation, so star formation doesn’t reduce the magnetic field strength in the surrounding area.
So now we have a good explanation, right? Unfortunately it still has some issues. One of the major problems is that the effectiveness of this process would depend on the metallicity of the medium, since the process depends on the number of ions that can link up to the magnetic field. This does not match observations, which appear to be independent of metallicity. It was also shown by Troland and Heiles in 1986 that ambipolar diffusion is too slow a process to remove the magnetic field from a diffuse gas, and Shu et al. (2006) showed that ambipolar diffusion is too slow for dense gases, so it accounts for neither extreme, only mid-range cases.
However, since the results match for many general cases, ambipolar diffusion is still a reasonable theory to follow. It could be the main process for mid-range molecular clouds, with the model needing changes only for fringe cases, or the data could be matching by chance, in which case we require a different explanation.
Another explanation that has become more popular recently is magnetic reconnection. In this model we take into account that the gas is turbulent, so the magnetic field lines are not stationary but are warping and twisting. Field lines are simply an indication of flux, but it is useful to think of them as actual lines in this case, because magnetic reconnection occurs when field lines “touch”: large amounts of magnetic flux pass through the same area, which results in a huge release of energy. This is the same process that happens in a more familiar event, the solar flare. In either case, magnetic reconnection that happens in a short time frame can quickly release a lot of energy from the magnetic field associated with those field lines, which can account for the field loss rate, depending on the rate of reconnections that occur due to turbulence.
Lazarian and Vishniac calculated the timescale for this process by solving the incompressible magnetohydrodynamic equations using Fourier methods. They obtained the solution for the magnetic field by solving for the field in a box sized to the fundamental wavelength of the B-field, then repeating that over all space while meeting boundary conditions. The equations are as follows.
∂v/∂t = (∇ × v) × v − (∇ × B) × B + ν∇²v + f + ∇P′ (1)
∂B/∂t = ∇ × (v × B) + η∇²B (2)
∇ · v = ∇ · B = 0 (3)
where f is the driving force, P′ ≡ P/ρ + v · v/2, v is the velocity in units of the r.m.s. velocity, and B is the Alfvén speed in units of the r.m.s. velocity. Time t is in units of the large-eddy turnover time (∼ L/v) and length is in units of L, the inverse wavenumber of the fundamental box mode.
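To make these code units concrete, here is a small Python sketch that computes the normalization quantities (the Alfvén speed and the large-eddy turnover time) from physical cloud parameters. The field strength, density, outer scale, and rms speed below are illustrative assumptions, not values from Lazarian and Vishniac:

import math

mu_0 = 4 * math.pi * 1e-7   # vacuum permeability [H/m]
m_H = 1.674e-27             # hydrogen mass [kg]

# Assumed cloud parameters (illustrative)
B = 1e-9                    # field strength [T] (~10 microgauss)
n = 1e9                     # number density [m^-3]
rho = 2.0 * m_H * n         # mass density, molecular hydrogen [kg/m^3]
v_rms = 500.0               # r.m.s. turbulent speed [m/s]
L = 3.086e16                # outer scale, ~1 pc [m]

v_A = B / math.sqrt(mu_0 * rho)   # Alfven speed
t_eddy = L / v_rms                # large-eddy turnover time (code time unit)

print(f"Alfven speed:       {v_A:.0f} m/s")
print(f"dimensionless B:    {v_A / v_rms:.2f}")   # the 'B' in eqs. (1)-(3)
print(f"eddy turnover time: {t_eddy / 3.156e7:.2e} yr")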
The magnetic reconnection model better accounts for the loss of magnetic field in a molecular cloud, mainly because it is much faster in those fringe cases of high and low density, and it depends not on metallicity but on turbulence, and there is enough random motion in the ISM to account for the number of reconnections necessary. This gives us a more accurate initial mass function. However, the model has a major problem in that it predicts tangled field lines in the cores of molecular clouds, which contradicts observations showing that the B field in molecular clouds is regular; that regularity is what made the polarization measurements of the field possible in the first place.
There now exists research into models that combine the two effects, in which both ambipolar diffusion and magnetic reconnection are accounted for. However, these models are difficult to create, because magnetic reconnection by itself already accounts for the reduction rate of the magnetic field in this stage of stellar evolution. Also, by definition these two processes counteract one another: the stronger the ambipolar diffusion effect, the less turbulence can occur, since the ionic grid slows down the particles, which makes the combination difficult to model due to feedback loops.
In conclusion, more research into this field is necessary in order to create a model that matches observations. Ambipolar diffusion explains many cases, and matches observations of regular field lines. Magnetic reconnection models match initial mass functions more accurately and over a wider range of star formation scenarios. Some new formulation, either involving both processes or a new process altogether must be found in order to accurately match all the data gathered.
Tuesday, October 23, 2012
Paper - Pulsar Formation and Structure
How does degeneracy prevent a white dwarf from collapsing under its own mass? Degeneracy pressure is caused by the Pauli exclusion principle, which says that two fermions cannot occupy the same quantum state, meaning they cannot share the same set of quantum numbers. Some examples of fermions, which have antisymmetric wavefunctions, are protons, neutrons, and electrons. This property limits the number of particles you can stuff into a given volume, because they cannot share an energy state, imposing a limit on the proximity of the particles. This means that white dwarfs and other objects in a degenerate state cannot be compacted down any further.
This paper will describe the formation of another type of degenerate object: pulsars, which are a type of neutron star. As the name suggests, neutron stars are neutron degenerate objects; instead of being limited by how tightly the electrons can be packed, they are limited by the proximity of their neutrons. These stars can be formed by certain types of supernova.
The first way a supernova can form a neutron star is by having a white dwarf find a companion star that is less dense than itself, such as a main sequence star. Due to its greater density, the white dwarf can draw mass away from the less dense companion. We can explain this ability to remove mass from another object with both Newtonian and relativistic physics: either the greater density more sharply curves spacetime around the white dwarf, or fringe material on the companion star escapes its original gravitational pull.
However, despite giving us an acceptable mechanism, Newtonian physics fails to give us the appropriate rate of mass loss and gain from the companion star to the white dwarf. This process is called mass accretion, and it allows a white dwarf to destabilize itself by exceeding the Chandrasekhar mass limit set by
M_Ch = (ω₃⁰ √(3π) / 2) (ħc/G)^(3/2) / (µe mH)² ≈ (ω₃⁰ √(3π) / 2) mP³ / (µe mH)²,
where ω₃⁰ is a constant derived from the Lane-Emden equation, µe is the mean molecular weight per electron in the star, mH is the mass of hydrogen, and mP is the Planck mass. This formula gives a maximum mass of approximately 1.4 times the mass of the Sun, which is the upper stability limit for a white dwarf. Normally a white dwarf has no ongoing process that increases its mass, so it cannot destabilize itself. However, if a companion star is present, this no longer holds, and the white dwarf can collapse. Electron degeneracy pressure will no longer be able to support it, and the star essentially implodes as gravity wins out over its outward pressure. This happens so fast that the mass gets superheated and undergoes runaway nuclear fusion, resulting in a supernova.
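Plugging in constants reproduces the 1.4 solar mass figure. A quick sketch in Python, assuming the standard textbook form of the expression with ω₃⁰ ≈ 2.018 (the Lane-Emden constant for an n = 3 polytrope) and µe = 2 for a carbon-oxygen composition:

import math

# Physical constants (SI)
hbar = 1.055e-34     # reduced Planck constant [J s]
c = 2.998e8          # speed of light [m/s]
G = 6.674e-11        # gravitational constant
m_H = 1.674e-27      # hydrogen mass [kg]
M_sun = 1.989e30     # solar mass [kg]

omega_3 = 2.018      # Lane-Emden constant for an n = 3 polytrope (assumed)
mu_e = 2.0           # mean molecular weight per electron (C/O composition)

# M_Ch = (omega_3 * sqrt(3 pi) / 2) * (hbar c / G)^(3/2) / (mu_e m_H)^2
M_Ch = (omega_3 * math.sqrt(3 * math.pi) / 2) \
       * (hbar * c / G)**1.5 / (mu_e * m_H)**2

print(f"Chandrasekhar mass: {M_Ch / M_sun:.2f} solar masses")   # ~1.4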
Another way a supernova can occur is if the star is simply extremely massive. High mass stars burn hotter and can progress through enough cycles of stellar evolution to fuse much heavier elements. This allows them to naturally exceed the Chandrasekhar limit at the end of their stellar lifetimes, instead of settling into a proper white dwarf phase, and to undergo a supernova without the aid of a companion star.
Either way, a supernova has the potential to form a new celestial body. The leftover matter at the center of these explosions is extremely condensed, and it can form a neutron star. It does so only if its mass lies between the Chandrasekhar mass and the Tolman-Oppenheimer-Volkoff mass limit, that is, between the upper bound for a white dwarf and the lower bound for a black hole.
Now we shall delve into the structure of these interesting stars, starting with the equation of state, which tells us the pressure, density, and temperature of the star as a function of radius. Currently there is no consistent formulation of the equation of state; models include APR EOS, UU, EOS FPS, and L, all of which come up with different mass predictions. Whatever the equation of state, the pressure must satisfy general relativistic hydrostatic equilibrium, which takes the Tolman-Oppenheimer-Volkoff form
dP/dr = −(Gmρ/r²) (1 + P/ρc²) (1 + 4πr³P/mc²) (1 − 2Gm/rc²)⁻¹, with m(r) = ∫₀ʳ 4πr′²ρ dr′.
The integral over the mass is a volume integral of the density ρ, giving the mass enclosed as a function of radius. The factor (1 − 2Gm/rc²) shows that this is a general relativistic equation, as the quantity 2Gm/c² is the Schwarzschild radius, the radius of a black hole of mass m.
This term is commonly used in general relativity, and we can use the Schwarzschild metric here to show some interesting properties that occur at the surface of a neutron star. The Schwarzschild metric is
ds² = −(1 − rs/r) c²dt² + (1 − rs/r)⁻¹ dr² + r²(dθ² + sin²θ dφ²).
By taking some average values for a neutron star, 1.5 solar masses and a 10 km radius, we get a Schwarzschild radius of roughly 4.4 km. This makes the factor 1 − rs/r roughly 0.56, which will significantly warp the space around the neutron star. At the surface of the neutron star, the actual radial distance is about 1.3 times the coordinate distance! This means that if you were able to walk on the surface of a neutron star, then in order to cover 10 meters in an outside observer’s reference frame, such as a scientist floating in a spaceship, the walker would need to walk about 13 meters in his own frame!
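This arithmetic is easy to verify. A minimal sketch in Python, using the same 1.5 solar masses and 10 km radius assumed above:

import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30     # solar mass [kg]

M = 1.5 * M_sun      # neutron star mass (assumed typical value)
R = 1.0e4            # neutron star radius, 10 km [m]

r_s = 2 * G * M / c**2            # Schwarzschild radius
factor = 1 - r_s / R              # metric factor at the surface
stretch = 1 / math.sqrt(factor)   # proper distance per coordinate distance

print(f"Schwarzschild radius: {r_s / 1e3:.1f} km")    # ~4.4 km
print(f"1 - r_s/R:            {factor:.2f}")          # ~0.56
print(f"proper/coordinate:    {stretch:.2f}")         # ~1.34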
The exact way in which to incorporate this spacetime curvature into an equation of state for a neutron star is currently unknown and warrants further research. The main contention between the different models is the application of general relativistic corrections, so it’s clear that we can gain a greater understanding of quantum gravity from the creation of more accurate neutron star models through data gathering and analysis.
Another structural feature to look at is the neutron star’s temperature; in particular it is important to model the cooling process. Immediately after a supernova, a neutron star must be immensely hot, on the order of 10^11 kelvin. We know from observation that it can’t dissipate this heat through simple radiation alone, so there need to be mechanisms that quicken the rate of heat loss. In addition, neutron stars often undergo periods of quiescence in which their x-ray emission turns on and off, and the speed at which it turns off also requires explanation. One of the most common explanations is the direct URCA process, in which the star emits neutrinos via the processes
n → p + e⁻ + ν̄e and p + e⁻ → n + νe.
This cools the star because neutrinos don’t interact strongly with matter; they can simply pass through and leave the core of the neutron star, carrying away large amounts of energy very quickly. This process occurs at a rate many orders of magnitude faster than cooling by radiation. It means that a neutron star actually cools from the inside out rather than from the outside in, which is a very interesting side effect of this process. Another cooling method is the modified URCA process, in which a bystander nucleon participates to absorb the recoil momentum:
n + n → n + p + e⁻ + ν̄e and n + p + e⁻ → n + n + νe.
For now the most accepted structure of a neutron star is as follows. The surface of the star is a solid lattice formed of degenerate electrons and heavy nuclei such as iron. Below this is an inner crust that contains a lattice of heavier nuclei such as krypton, along with superfluid neutrons and electrons. It is around here that neutron drip occurs. Neutron drip is when neutrons tunnel out of the nuclei to become free neutrons. Normally a free neutron would beta-decay into a proton, an electron, and an antineutrino; however, since the star is neutron degenerate, there is no lower energy state for the decay products to occupy, so the neutron stays a neutron and tunnels out of the energy well, becoming a free-floating neutron, some of which then decay. This forms a layer of free neutrons and protons located below the inner crust, and below this is the unknown core.
That Fermi sea causes many of the interesting properties associated with neutron stars, including the ability to become a pulsar. Not only are the particles in that region a superfluid, they are also superconducting. Superfluidity is a state of matter in which a liquid behaves as though it has no viscosity and has infinite thermal conductivity. Due to the infinite thermal conductivity, a superfluid is also isothermal: since it has an unlimited ability to distribute heat, it is not possible to locally heat a superfluid, so it always has the same temperature throughout the entire liquid. Superfluidity also allows a liquid to maintain equilibrium in containers regardless of gravity, allowing fluids to crawl up or down the outside surfaces of their container. Also, self-contained turbulence, like a vortex, lasts indefinitely inside a superfluid since there is no friction.
The neutron star’s sea of protons, neutrons, and electrons behaves like a superfluid because the degenerate neutrons can pair together to form bosons. Bosons do not have to obey the Pauli exclusion principle and can be in the same state at the same time, which allows them to pass freely through one another. This gives the fluid even more freedom than a gas, as it has a viscosity of zero, but it is still considered a liquid due to the distances and bonds between the particles as a whole.
In addition, a neutron star’s sea of particles is also superconducting, which means it has no electrical resistance and can maintain currents indefinitely; only Lenz’s law can slow down the current in a superconducting material.
These properties heavily affect the structure of the neutron star. Since the sea of particles carries charge due to its protons and electrons, once the movement of this sea becomes uniform, the magnetic field lines it generates become constant, which helps fix the charged surface as a solid. The particles on the surface are degenerate electrons and heavy nuclei, which means the surface is purely ionic and its position is heavily fixed by the magnetic field lines created by the rotating superfluid.
This effect seems akin to ambipolar diffusion in star formation: there, the ionic particles in the interstellar medium couple to the magnetic field lines of a molecular cloud, which slows down the particles’ movement and helps accrete mass around those field lines. This lends credence to the idea of a solid surface on a neutron star, because the superfluid produces a magnetic field many times stronger than the fields involved in ambipolar diffusion, so the increase in density and slowing of movement should be correspondingly greater.
When these fields are strong enough, they can force the electromagnetic radiation of the star to exit only at the poles, resulting in a pulsar. The only way for them to get this strong is if the superfluid inherits enough angular momentum during formation, but where does that angular momentum come from?
It relates all the way back to the giant molecular cloud that formed the original star. In order to shrink some of that gas down gravitationally, assuming there was some initial rotation or turbulence in the cloud, angular momentum has to be conserved. This is why stars such as our Sun rotate, and why they have a magnetic field. The ambipolar diffusion previously mentioned is a process that explains how the rotation can be slowed by the clumping of material and the removal of momentum through magnetic fields, though again ambipolar diffusion is its own topic with its own controversies.
However, what holds true is that as you decrease the size of the object and pull more particles together, the rate at which it spins has to go up. Once the rest of the star is blown away and only the compacted matter is left, the rotation is enormously fast, because the object went from a huge cloud of dust to a few kilometers across, and the moment of inertia scales as r². Neutron stars have been recorded with rotation periods on the order of a millisecond. Not only that, they maintain this rotation very accurately due to the superfluid nature beneath the crust: there is no friction to slow down the process.
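A back-of-the-envelope conservation-of-angular-momentum estimate shows how dramatic this spin-up is. Treating both the progenitor core and the neutron star as uniform spheres of fixed mass, so that I ∝ MR² and P₂ = P₁(R₂/R₁)², and taking an assumed, illustrative starting radius and period:

# Angular momentum conservation: I1*w1 = I2*w2, with I ~ M R^2 for a
# uniform sphere, gives P2 = P1 * (R2 / R1)^2 at fixed mass.

R1 = 7.0e8           # progenitor core radius, ~1 solar radius [m] (assumed)
P1 = 30 * 86400.0    # initial rotation period, ~30 days [s] (assumed)
R2 = 1.0e4           # neutron star radius, 10 km [m]

P2 = P1 * (R2 / R1)**2
print(f"Final period: {P2 * 1e3:.2f} ms")   # ~0.5 ms

Even with a leisurely month-long starting period, collapsing from a solar radius down to 10 km lands in the millisecond regime quoted above.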
However, there are some strange events that can disrupt this regularity: sometimes a pulsar will “glitch” or “kick”. Some speculate that this is caused by an asymmetric supernova; others think there are starquakes, in which the solid crust compacts, whether from deformities caused by infalling meteorites or from the crust settling as pressure toward the surface drops, causing quakes on the surface of the star. Such a quake can then severely disrupt the star’s rotation period, and the disturbance persists in the superfluid: with no friction to damp the motion, even the smallest events can have major consequences.
In conclusion, there are still a lot of unknowns pertaining to pulsars. In particular, we don’t know the composition of the core, we lack a definitive explanation for anomalies or “kicks” in pulsar timing, and we lack a reliable equation of state for a neutron star, which makes it very difficult to model. However, further research in this area will help further our understanding of both general relativity and quantum mechanics, and possibly how to reconcile the two.
Wednesday, July 25, 2012
Trinary Dwarfs-Data Reduction Summary
The goal of the data reduction was to take radial velocity measurements of certain observed low mass stars. In order to accomplish this we used the redspec data reduction package.
The first step in a reduction is combining the flat and dark files. In later steps this lets us remove influences from background sources and correct for the dark current in the detector. We then begin the reduction by setting up how files will be used in parfile. In this step we define the file paths to the spec files that denote the target star and the calibration star. The calibration star is used later in the reduction to eliminate telluric absorption and other features associated with the location of the target star. In this case, as we are working with high resolution spectra, we use an A type star as the calibration source, because such stars have few intrinsic lines in their spectra.
Spatial rectification with redspec is accomplished by summing an A+B pair of a calibration star to produce an image with two spectra. By finding polynomial fits to each spectral trace and calculating Gaussian centroids to define the separation between the two, a curved or distorted spectrum can be remapped to produce straight spectral traces with respect to the rows of the detector. Once this is done, the spectral rectification step follows, in which we use a polynomial fit of order 1-4 to fit the spectra to known arc lines or OH lines. This step maps the pixels of the image to fitted wavelengths at regular intervals.
The last step is the main portion of redspec. This is the step where the target and the calibrator each get divided by the dark-subtracted flat to remove the background sources for both the calibration star and the target star. The divided spectrum is then multiplied by a normalized blackbody function corresponding to the spectral type of the calibration star to provide a relative flux calibration.
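As a rough numerical sketch of this last step, the snippet below divides a target spectrum by the calibrator and multiplies by a normalized Planck function at the A0 V calibrator's effective temperature (the 9480 K entered in parfile). The wavelength grid and placeholder spectrum arrays are hypothetical; they stand in for the rectified, dark- and flat-corrected spectra that redspec produces:

import numpy as np

h = 6.626e-34        # Planck constant [J s]
c = 2.998e8          # speed of light [m/s]
k_B = 1.381e-23      # Boltzmann constant [J/K]

def planck(wl, T):
    """Blackbody spectral radiance B_lambda(T) at wavelength wl [m]."""
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k_B * T))

# Hypothetical inputs: 1-D spectra already rectified and dark/flat corrected
wl = np.linspace(2.28e-6, 2.32e-6, 1024)   # wavelength grid [m] (assumed)
target = np.ones(1024)                      # placeholder target spectrum
calibrator = np.ones(1024)                  # placeholder A0 V spectrum

# Divide out telluric/instrumental features, then restore the calibrator's
# intrinsic shape with a normalized blackbody at its effective temperature.
bb = planck(wl, 9480.0)
calibrated = (target / calibrator) * (bb / bb.max())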
The observations for this project were made using filters N7 and N3. For the N7 observations the echelle setting was 63 with a cross dispersion of 35.52, and the N3 observations were made with an echelle setting of 62.95 and a cross dispersion of 34.08. The reductions involved both arc lines and OH lines for the spectral mapping portion of the reduction. The values for the OH lines came from a paper written by Emily Rice rather than the standard lines used in the echelle format simulator, simply for greater accuracy. The N7 reduction used arc lines for the order we were interested in, order 33, due to its strong methane absorption features. In the spectral mapping portion of the reduction, the fit to each individual arc line was of order 1 or 2, while the overall lambda fit used to map the arc lines to pixels was of order 1. This is because there were only 3 arc lines total on that particular echelle order when using the xenon, krypton, argon, and neon line lists from Keck, and due to the nature of polynomial fits we can't use higher order fits without introducing large errors; for example, a 2nd order fit would match any 3 points with 0 error. The N3 reduction used OH lines, and we were primarily interested in order 59. For this fit, although there were 10 possible lines to fit, the most accurate reduction was found using 6 of those 10 lines, with an order 3 fit for the OH lines and an order 2 fit for the overall lambda fit. Adding in the additional lines caused significantly greater error even with a higher order lambda fit, and those lines were not only very weak in signal but often shifted around greatly from observation to observation relative to the 6 lines that were used in the fit.
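The point about a 2nd order fit matching any 3 points with 0 error is just a degrees-of-freedom statement, and it can be demonstrated directly with numpy; the pixel and wavelength values below are arbitrary illustrative numbers:

import numpy as np

# Three arbitrary (pixel, wavelength) pairs -- any three points at all
pixels = np.array([100.0, 480.0, 900.0])
wavelengths = np.array([2.281, 2.296, 2.318])   # arbitrary values [microns]

# A quadratic has 3 free parameters, so it passes through 3 points exactly
coeffs = np.polyfit(pixels, wavelengths, 2)
residuals = wavelengths - np.polyval(coeffs, pixels)

print(residuals)   # ~[0, 0, 0]: "zero error", yet no real test of fit quality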
Trinary Dwarfs-Nirspec Data Reduction Walkthrough
Directory structure:
Usually directories with raw data for spectra are categorized
by date; create a reduction folder there to store your reductions.
Then, within reduction, create a folder for the source name, a
folder within that for the order number, and, if more than one
pair of exposures was taken, a folder within that for the nod set, e.g.
/data/nirspec/110715/reduction/2026-2943/order33/nod1/
/data/nirspec/110715/reduction/2026-2943/order33/nod2/
etc...
Create or copy in a spec1.in file, which is a text file that
contains the linelist information used in the later reduction
steps. This can be created using the EFS simulator.
cd into the reduction directory and start idl by entering
idl as a command into the terminal.
nirspecfd:
from the reduction directory, after starting idl, type in
nirspecfd. This is a script that creates the combined flat
and dark files. Other scripts can be used or created on your
own, but the idea is to combine the images and average them.
For example, for a night whose files are named like
aug19s0019.fits, you would do:
IDL> nirspecfd
raw='../spec'
out='./'
prefix='aug19s'
darks='20-34'
flats='5-19'
The prefix is the date signifier in the name of the data
fits files, the darks variable is set to the range of file
numbers for your darks, and the flats variable is set to the
range of file numbers for your flat fields. Run this for
each set of flats and darks you've generated for each filter
/slit combination used during the night.
parfile:
from the source/order/nod directory, start idl and type:
IDL> parfile
on left side:
keep spatial map and spectral map as they are; they
are usually set to spat.map and spec.map in order to
facilitate the later steps. Set flat and dark to the
files created in nirspecfd (in your reduction folder).
Set arc1 to the arc file taken closest in time to your source,
or if using oh lines set the arc to the A frame of the
target. Leave arc 2, reference arc and ref spec map
blank. Set nod 1 and nod 2 to the AB frames of your
source.
on right side:
leave the entries blank; spat.map.cal and spec.map.cal
are useful in a different context and not in the
first reduction for a given set of data.
set nod 1 and nod 2 to the AB frames of your A0 V star
set T_eff to 9480
spatmap:
First ascertain which orders you are looking at by using the
efs to simulate the same slit, filter and x-dispersion. Select
the echelle order by clicking above and below the tracing
standard, and keep the default options. It is less important in
this step to ensure accuracy in selection, and more important
to select all of the order's information.
Then the next step is to create a fit to the data, which
is done by clicking on the traces and adjusting the fit.
specmap:
These options are less set in stone and more guidelines.
The line fit order can vary between 1-3. First, click on display
arc lamp fit. Then, after clicking on a line to fit, judge from the
plot that shows up which line fit order is best to use. Usually
there is some curvature, so line fit order 2-3 is often used.
Set fit height to 6 and fit width = 13. In general for this type
of reduction in N3/N7 the fit height is not often varied.
Fit width is often changed based on two criteria: how well
the arc lamp fit graph aligns, and selecting the proper
portion of the line. If a line is too thin and close to a
secondary line, it can be difficult to select the correct one.
In this case lower the fit width, and click on the line at
different heights. Lambda fit order is set to the lowest error
that can be achieved without going so low as to zero out all
the error, for example by choosing a 2nd order fit when there
are only 3 points; the low error there comes from needing 3
points just to define a 2nd order fit, not from an accurate
fit. The goal here is to match as many oh or arc lamp lines
as possible without compromising the error. Usually including
the maximum number of lines is optimal, but in some cases it
is better to use fewer lines for a real gain in accuracy;
still, the more lines you fit, the more of the available data
you are using.
redspec:
standard extraction - increase the contrast with the sliders;
maximum contrast usually allows you to select more accurately.
Center the lines on the standard trace, then reduce the clip
height until the dashed lines just cover the trace. It's okay
at maximum contrast to bring the clip height slightly within
the colored areas. The centering and keeping the clip height
low are the two most important parts. source extraction -
same, but you can make the clip height smaller (but no
smaller than ~5). The resulting data file is tar.dat.
tricks:
- Take screenshots when using the efs. This lets you more accurately
place the arc lines in specmap.
- when reducing a second nod pair, simply copy the files from one
folder to the next and change just the relevant data files in
parfile. As long as you are using the same arc and standard, you
can skip spatmap and specmap.
Trinary Dwarfs-EFS Echelle Format Simulator
The Echelle Format Simulator is very useful for planning and operation, and
is also useful for data reduction. It can be downloaded at:
For data reduction, it's useful to use the echelle format simulator to figure
out where in the echelle order the arc lines are. In order to open the
EFS, cd into the nirspec simulate folder after downloading the files,
then use the command idl simstartup to start the EFS. In the
simulator, set the filter, echelle, and cross dispersion.
Then, to show the arc lines, click on overlays and select OH Lines or
arc lines.
To prepare for the specmap portion of the data reduction it is a
good idea to take a screenshot of the EFS, using Print Screen on
Windows or Command-Shift-4 on a Mac for a cropped screenshot.
The following is a screenshot of the arc lines for neon, argon,
xenon, and krypton as viewed in the simulator for echelle 63
and cross dispersion 35.53.
The following screenshot shows the oh lines as viewed in the
simulator for echelle 63.
Trinary Dwarfs-Using VI
The following are some useful commands to use in the text editor vi.
Note: editor commands in vi are case sensitive, and the editor removes spaces in text filenames.
vi - opens vi text editor
vi filename - opens a file or creates a new file
esc - is used to switch from text insertion mode to command mode
Command mode:
:q - is the quit command
:w - is the write/save command; :w filename writes to that filename
:wq - writes and quits from vi
:q! - the exclamation point forces the quit command
:w filename - is used to write to another file, a save-as function
ctrl u / ctrl d - page up / page down; a file may open at the bottom of the page
:11 - goes to text line 11
/word - searches for the character string word; while in this mode, N goes to the closest match above the current line and n goes to the closest match below the current line
Cursor control:
arrow keys or h j k l can be used to move the cursor
0 beginning of line (zero)
$ end of line
W w word right
B b word left
E e end of word right
a goes into insert mode one space right of cursor
A is end line insert
:r filename inserts a file directly below current line
dw deletes forward word
db deletes back word
dd deletes line
Replacement
:%s/\r// - removes carriage return (^M) characters from the file
:%s/oldText/newText/g - replaces all instances of one text string with another
Trinary Dwarfs-Setting up Directories
Introduction to Unix
Open up a terminal window; how to do this will vary depending on
what system you are on, X11 being common for modern Macs. From
here you are able to navigate your own personal files and the
server you're on. The following are some common commands that
will make it easy to set up your files.
Of particular use is tabbing. Pressing tab can be used to
autocomplete an entry. For example, given a directory named
echelle_33, typing e then tab will autocomplete the command to
echelle_33 if that is the only file in your directory that
starts with an e. Autocompletion makes moving between
directories much easier.
ls --- lists files in directory
ls -l --- lists your files with additional information:
the size of the file, who owns the file and who has the right to
look at it and/or modify it, and when it was last modified.
The categories in which rights are given are the file's owner,
the file's group, and everyone else.
ls -a --- lists all files, including the ones whose filenames
begin with a dot, which you do not always want to see. In
particular, swap files designate files which are in use; they
are temporary storage for data. When files are closed incorrectly,
these files can sometimes be used to retrieve data, but they
can also cause some issues when opening files.
emacs filename --- this is a common text editor that lets you
create files; filename can include suffixes such as .txt, .in,
etc. Your system may or may not include emacs, but most likely will.
mv filename_1 filename_2 --- this moves a file into a different
directory or can be used to rename a file. This is especially
useful for changing a file's extension.
rm filename --- this is used to remove a file; there are options
you can add to this, in particular rm -f, which is a force remove.
rmdir directoryname --- this can be used to remove an empty
directory. To remove a directory and its contents, use rm -rf,
which recursively force-removes the directory.
diff filename_1 filename_2 --- this is used to find the
differences in the text of two files.
cd directoryname --- this allows you to go to a directory within
the one that you are in. cd .. goes up one directory, and
cd /filepath goes to a completely new directory.
mkdir directoryname --- this allows you to create a new
directory/folder with the given name.
vi --- this is another common text editor, present on most
systems, and can be used as an alternative to emacs. vim is a
more powerful version of vi. vi filename creates a text file
with the name filename.
tkdiff --- this is another common difference checker; it is a
gui that actively shows and highlights the differences between
two text files, side by side, matched by lines.
cat filename filename filename ... > newfile --- a very useful
command that combines multiple text files into one.
find . -name '*.txt' > list.txt --- prints the paths of all .txt
files into the file list.txt.
find . -name '*' -print | xargs grep 'text to search' --- finds the files that contain the text you want to find.