For millennia, humanity has looked up to engage with the stars. During much of that time, human eyes and hands were the main tools available to record or interpret such celestial information, whether through a petroglyph (rock carving) of an ancient sighting of an exploded star (Than, 2006), or a modern painting of a starry night (Temkin, n.d.).
Figure x.x Left: A petroglyph, or rock carving, in White Tanks Regional Park, Phoenix, AZ, thought to show Supernova 1006 (star symbol to the right of center) as well as the constellation Scorpius (scorpion symbol to the left of center). Credit: John Barentine, Apache Point Observatory (Than, 2006). Right: Vincent van Gogh, a Dutch post-impressionist artist who lived from 1853 to 1890, painted “The Starry Night” in oil on canvas in 1889, from his direct observations as well as his imagination (Temkin, n.d.). Credit: Wikimedia Commons/public domain https://commons.wikimedia.org/w/index.php?search=starry+night+van+gogh&title=Special:Search&go=Go&searchToken=4lob9pltt57eaqum5u6oy1yn4#/media/File:VanGogh-starry_night_ballance1.jpg
The invention of the telescope about 400 years ago became a preliminary step toward the distancing of the human eye and hand from the recording of astronomical information (Van Helden, 1977). Among the first users of that first generation of telescopes was Galileo Galilei, who made hand-drawn sketches of his observations that revealed details of our Moon and noted the existence of satellites around Jupiter (Edgerton, 1984; Whitaker, 1978). Fast forward a few hundred years, however, and the technology of the modern telescope had grown exponentially (Rector, Arcand, & Watzke, 2015). In just the past few decades, the tools available to create new images of objects in our night sky have stretched far beyond the mechanics of human eyes and human hands. There is now highly specialized equipment, with detectors offering superhuman vision, exceeding what humans can access from an Earth with ever-increasing light pollution (Globeatnight.org, n.d.), and with eyes sensitive only to visible or “optical” light (Tucker, 2017, p.1; Arcand & Watzke, 2015, p.10-17).
Each band of the electromagnetic spectrum – the full range of light from radio waves to gamma rays – provides different information and insight about objects in space (Meyers, 2013), most of which was unknowable to humans until work began on electromagnetism in the mid-nineteenth century (Arcand & Watzke, 2015). Since this unveiling of the different kinds of light, numerous tools and technologies have been created to make visible the invisible. For example, NASA’s Chandra X-ray Observatory, launched in 1999 (“Chandra: About Chandra”, n.d.), explores a high-energy Universe of objects ranging from exploding stars to black holes and colliding galaxies (Tucker, 2017). Chandra is one of the key tools used to explore parts of the multiwavelength Universe that go beyond the human senses (Tucker & Tucker, 2001). Output from Chandra, as well as from the iconic Hubble Space Telescope and every other ground- and space-based observatory, ultimately resides in a data archive, where the data are stored for retrieval and analysis (White et al., n.d.).
As stated above, the modern telescope not only magnifies, but also amplifies and makes observable information beyond the visible spectrum of light. A translation is therefore required to move from the raw telescopic data to visual representations of the objects in a form that humans can view (Arcand, Watzke, Rector, Levay, DePasquale, & Smarr, 2013). That translation process starts with the information obtained by the spacecraft detectors and moves down the data processing pipeline, through layers of analysis and software, to the final visual output (DePasquale, Arcand, & Edmonds, 2015; Rector, Arcand, & Watzke, 2015). This is discussed in the next section.
Translations: From Binary Code to Binary Stars
When a satellite points at a celestial object, such as a binary star system (a pair of stars that orbit each other), the spacecraft’s camera records the photons – the packets of energy – that have been traveling for billions (or trillions or more) of miles, and transmits them back to Earth via NASA’s Deep Space Network, encoded in the form of 1s and 0s (binary code). Scientists then use specialized astronomical software to translate the binary code into a table that records the time, energy, and position of each photon that struck the detector during the observation. The data are then further processed with scientific software to form the visual representation of the object. From observation of light, to communication of the recorded data, to transformation of that information into various outputs, people are involved in each step of the creation (Arcand, Watzke, Rector, Levay, DePasquale, & Smarr, 2013).
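To make this translation concrete, the sketch below shows, in Python with NumPy, how a photon event table of the kind just described – one row per detected photon, with its time, energy, and detector position – can be binned into a 2D counts image. The column layout, pixel grid, and energy band are illustrative assumptions, not the format of any particular mission’s pipeline.

```python
import numpy as np

# Hypothetical event table: one row per detected photon, recording the
# time, energy, and detector position described above.
events = np.array([
    # time (s), energy (eV), x (pixel), y (pixel)
    [1.02, 1200.0, 310.4, 415.9],
    [1.31, 6700.0, 312.1, 414.2],
    [2.75,  850.0, 309.8, 416.5],
])

time, energy, x, y = events.T

# Bin photon positions into a 2D counts image: each pixel value is
# simply the number of photons that landed there.
image, _, _ = np.histogram2d(y, x, bins=(1024, 1024),
                             range=[[0, 1024], [0, 1024]])

# The same table supports "energy cuts": keeping only photons in a
# narrow band (here ~6.7 keV, characteristic of hot iron) images a
# single spectral feature rather than the whole object.
iron_band = (energy > 6300) & (energy < 6900)
iron_image, _, _ = np.histogram2d(y[iron_band], x[iron_band],
                                  bins=(1024, 1024),
                                  range=[[0, 1024], [0, 1024]])
```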
Ultimately, the specialists in that pipeline of data processing convert the raw data into an image: removing potential artifacts from the data, smoothing the data if necessary, selecting the field of view, scaling the image, compositing as needed, and adding color (Rector, Levay, Frattare, Arcand, & Watzke, 2017). The choices made by specialists during that translation process are most often made to preserve the scientific information encapsulated in the data, and are often targeted toward a particular audience (Arcand et al., 2013). That audience can consist of experts or non-experts, with subgroups ranging from citizen scientists and science-interested publics, to educators and mediators (from evaluators to science media creators), to various members of the media and other non-experts (ref new NASA VWG whitepaper when done). Considering the audience during the data production pipeline is an important step, as discussed in the next section.
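The scaling and colorizing steps named above can likewise be sketched in a few lines. In this minimal example – the band images, the choice of log scaling, and the channel assignments are all illustrative, not the procedure of any specific observatory – three single-band images are clipped, log-scaled to tame their large dynamic range, and assigned to the red, green, and blue channels of a composite.

```python
import numpy as np

def scale_band(counts, vmax=None):
    """Log-scale a counts image into the 0-1 range.

    Log scaling is one common choice for astronomical data, whose
    brightest pixels can be thousands of times brighter than the
    faint structure of interest.
    """
    vmax = vmax if vmax is not None else counts.max()
    scaled = np.log1p(np.clip(counts, 0, vmax))
    return scaled / scaled.max()

# Illustrative inputs: three single-band images of the same field,
# e.g. low-, medium-, and high-energy X-ray counts.
low, mid, high = (np.random.poisson(5.0, (512, 512)) for _ in range(3))

# Composite: assign each band to a color channel. Mapping low energy
# to red and high energy to blue mirrors one common convention, but
# the mapping itself is a choice made with the audience in mind.
rgb = np.dstack([scale_band(low), scale_band(mid), scale_band(high)])
```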
Not Just What, But Also How:
The real power of visualization is to give a voice to the long misunderstood data of the world. And with that power comes a great responsibility for the creators of such visualizations (Werthessen, 2016, p.x).
The many choices that need to be made in representing scientific information affect user response across the expert to non-expert spectrum. To better help researchers understand their audiences, and how best to communicate the science underpinning the images, scientists and technicians must understand the perceptions of those audiences, in terms of both the astronomical images and their descriptive texts. This next section will discuss briefly the research on expert and non-expert differences in communicating the science underlying astronomical images, and provide findings related to gender. It is noted that these findings relate to adults.
Visual Processing
Starting with visual processing, what an expert sees when looking at an astronomical image is not necessarily what the non-expert sees. Research (Smith, Smith, Arcand, Smith, Bookbinder, & Keach, 2011) has shown that the expert tends to move from the science to the aesthetics of an image. Experts are likely to comment first on what kind of data are in the image, what individual colors might exemplify, what the image is meant to represent, etc., then move on to statements such as, “This is pretty cool.” or “That’s a lovely image of a galaxy.” Smith et al. further reported that non-experts more often move from the aesthetics to the underlying science associated with the astronomical image. For example, a non-expert might start by saying, “Wow, that’s beautiful!” or “How intense and colorful.” before eventually asking, “What does it mean?” or “What does a scientist see when he or she looks at this?”
Non-experts, therefore, tend to begin with a sense of awe and wonder, and focus first on the aesthetic qualities of the astronomical image being shown. Experts, however, first wonder how the image was produced, what information is being presented in the image, and what the creators of the image wanted to convey (Smith et al., 2011).
Color
Another area in which experts and non-experts differ is the interpretation of colors used in astronomical images. Non-experts tend to visualize red as hot and blue as cool. Indeed, more broadly, humans across languages and cultures consistently sort colors into “warm” and “cool” palettes, a near-universal constraint that appears to be biologically based in the human brain (Xiao, Kavanau, Bertin, & Kaplan, 2011). Smith et al. (2011) found, however, that about 60% of experts consider blue as hot, compared to 20% of non-experts. In astrophysics, scientists classify stars using Planck’s law of black body radiation, wherein middle-range stars such as our Sun, with surface temperatures around 9,000 degrees F, appear yellow; cooler stars, around 6,000 degrees F, appear red; and hotter stars, around 18,000 degrees F, appear blue (Gaensler, 2011).
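The physical basis for the experts’ convention can be shown with Wien’s displacement law, a consequence of Planck’s law that relates a black body’s temperature to the wavelength at which it shines most brightly. The worked values below use the rounded temperatures quoted above, converted to kelvin.

```latex
% Wien's displacement law: hotter bodies peak at shorter (bluer) wavelengths.
\lambda_{\max} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}

% Worked with the temperatures quoted above, converted to kelvin:
% Sun-like star: T \approx 5{,}300\ \mathrm{K} \Rightarrow \lambda_{\max} \approx 550\ \mathrm{nm} \text{ (yellow-green)}
% Cooler star:   T \approx 3{,}600\ \mathrm{K} \Rightarrow \lambda_{\max} \approx 810\ \mathrm{nm} \text{ (red/near-infrared)}
% Hotter star:   T \approx 10{,}300\ \mathrm{K} \Rightarrow \lambda_{\max} \approx 280\ \mathrm{nm} \text{ (blue/ultraviolet)}
```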
Figure x.x. NGC 4696 X-ray/infrared/radio image in blue (left) versus red (right). Credit: X-ray: NASA/CXC/KIPAC/S.Allen et al; Radio: NRAO/VLA/G.Taylor; Infrared: NASA/ESA/McMaster Univ./W.Harris
The question arises, therefore, when creating an astronomical image that shows super-heated material around a galaxy, whether to color it blue or red for non-expert audiences. The primarily red image might actually convey the heat of the object better than blue (see Figure x.x), even though its color mapping would be considered non-standard for a physicist or astronomer. Furthermore, Smith et al. (2011) reported that there was little difference in the aesthetic appreciation or scientific comprehension for red versus blue versions of astronomical images presented.
Viewing Time
A study of visitors to the Metropolitan Museum of Art in New York City (Smith & Smith, 2001) indicated that the average amount of time spent before great works of art was 27.2 seconds. A replication of that study at The Art Institute of Chicago (Smith, Smith, & Tinio, 2017) showed little change, with the average time spent reported as 28.63 seconds. Comparatively, viewers of astronomical images on a computer screen (Smith et al., 2011) were shown to take about the same amount of time on average as in the museum studies, although with a fair amount of variability. However, when a scale or gauge was added to the image to provide a sense of size, the time spent viewing the object increased by up to 50%.
Figure x.x. Young Galactic supernova remnant G292.0+1.8 in X-ray and optical light (left) and X-ray light only (right). Credit: X-ray: NASA/CXC/Penn State/S.Park et al.; Optical: Pal.Obs. DSS
Context
Smith et al. (2014, 2017a, 2017b) have consistently found that providing a suitable contextual background story for the data helps make images of the Universe more interesting for many viewers. In an early study, when participants looked at images such as Figure x.x without knowing what they depicted, the images might be rated as attractive. But when a well-written and engaging caption was provided along with the image, the aesthetic appreciation of the image significantly increased (effect size of .19) (Smith et al., 2011).
Gender
First, it is useful to note that female participants are typically outnumbered by male participants in online studies with self-selected participation (ranging from 4:1 to 3:1 to 2:1; see Smith et al., 2011, 2015, 2017); only in-person studies conducted at museums had 1:1 male-to-female ratios (Smith et al., 2014). For the purposes of this thesis, it is useful to review the reception and comprehension of astronomical data visualizations by those who identify as female, drawing on two studies by Smith and colleagues with data collected in 2014 and 2016. In one study (Smith et al., 2017) of 1,119 male and 696 female participants, males reported significantly higher levels of initial understanding than females. Analysis showed, however, that initial understanding was attributable to self-reported knowledge. This result is not surprising due to xxx
Analysis of comments by female participants across two surveys on color and formatting in perceptions of astronomical visualization (Smith et al., 2017; Smith et al., 2015) revealed other relevant results. In addition to asking expected questions concerning colors, the images’ visual or technological accuracy, and science content, female participants frequently probed issues about the fate of the Universe and the nature of existence, as well as related branches of philosophy, including metaphysics and epistemology. Many of the female participants expressed that they wished to learn more about the research, though some seemed to lack confidence in their abilities and knowledge related to astrophysics (Arcand, et al., whitepaper TBD).
It is notable from the survey responses that the female respondents were interested in astronomy visualizations, that many have technical knowledge, that many want more technical knowledge, and that, in general, they want more information on images from space (Arcand, et al., whitepaper TBD). <add framing ref?>
Taken together, these findings on how different audiences perceive, interpret, and engage with astronomical images inform the choices made when presenting such data in new formats.
Additionally, more complexities of data visualization and representation – and audience perception of such data – must be addressed when the information moves out of two-dimensional space and into the third dimension, as discussed in the next section.
1.2 3D Printing: Reconnecting Science to the Human Hand
Three-dimensional (3D) modeling in science has become commonplace in the past 10 years. From models of chemical compounds and molecules (Bergwerf, 2018), to anatomical representations (“BioDigital: 3D Human Visualization Platform for Anatomy and Disease,” 2018),
to geographic models of Earth (“CesiumJS – Geospatial 3D Mapping and Virtual Globe Platform,” 2018), visual representations of data can “accelerate rapid insight” (Thomas and Cook, 2005, p. 69). 3D modeling offers a relatively new vehicle to represent and understand scientific data, particularly when both experts and non-experts are able to manipulate their models and gain new perspectives on the data they explore (Amorim, Travnik, & Sousa, 2015; Craig, Michel, & Bateman, 2013; Berra, Felletti, & Zucali, 2014).
Although interacting with such 3D data on a computer screen can be useful for both specialist and non-specialist audiences, the ability to create a physical manifestation of the model and reconnect science more directly to the human hand – through 3D printing – is a yet more recent addition to the field of data visualization and representation. 3D printing is a form of additive manufacturing on demand, whereby a material of some sort, such as sugar, plastic, or titanium, is added layer by layer to build up the object, making the production of scientific tools possible wherever a viable 3D printer can be found (Clements et al., 2016). One popular method, fused deposition modeling (FDM), extrudes layers of melted thermoplastic, and has become increasingly accessible and affordable to consumers over the past few years (Takagishi & Umezu, 2017).
Although 3D printing on a consumer scale is still somewhat new, its possibilities are varied. From planning a possible sustainable lunar base from moon dust (Klettner, 2013), to medical printing of skin or embryonic stem cells (Everett-Green, 2013), 3D printing applications in science offer a range of topics to explore for experts and non-experts. This thesis will explore potential applications of 3D data to enhance astronomy communication and increase engagement in astronomy as a field, for girls ages 10-15.
Data Challenges in Moving From 2D to a 3D Space
As stated, astronomical data, and the images such data produce, are often two-dimensional (2D). From our vantage point here on Earth, and from nearby orbiting telescopes, the Universe appears to us as a flat projection on the sky. Although some deep sky surveys, or catalogs, of our Universe contain information about distances to objects for 3D maps of the distribution of stars or galaxies (Devitt, 2012), and large studies are making headway into the three-dimensional nature of the Universe (Courtois et al., 2013; Mann et al., 2013), it is much less common to have three-dimensional information about specific cosmological sources. The number of objects in the Universe that have been mapped in 3D represents a small fraction of the observable Universe; indeed, the process of 3D mapping faces a number of constraints (Steffen & Koning, 2014), from data quality and resolution to processing, software, and hardware. New and upcoming missions are, however, working to change some of these constraints. For example, the European Space Agency’s Gaia satellite, launched in 2013, has a goal of creating a detailed 3D map of a billion stars in our galaxy, the Milky Way (Bauer et al., 2016).
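As a simplified illustration of how a mission like Gaia turns measurements into a 3D map – the catalog values below are illustrative, and Gaia’s actual astrometric pipeline is far more involved – a star’s parallax yields its distance, which combines with its sky position to place it in Cartesian space:

```python
import numpy as np

# Illustrative mini-catalog: sky positions (degrees) and parallaxes
# (arcseconds). A parallax of p arcseconds corresponds to a distance
# of 1/p parsecs.
ra_deg = np.array([101.3, 219.9])        # right ascension
dec_deg = np.array([-16.7, -60.8])       # declination
parallax_arcsec = np.array([0.379, 0.747])

distance_pc = 1.0 / parallax_arcsec

# Convert spherical coordinates (ra, dec, distance) to Cartesian
# (x, y, z), giving each star a point in a 3D map.
ra, dec = np.radians(ra_deg), np.radians(dec_deg)
x = distance_pc * np.cos(dec) * np.cos(ra)
y = distance_pc * np.cos(dec) * np.sin(ra)
z = distance_pc * np.sin(dec)
```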
Astrophysicists, computer scientists, engineers, technicians, and developers are creating new techniques to push astronomy visualization of specific objects beyond 2D images and into this important third dimension in space. 3D imaging enables experts and non-experts to view objects from any angle and, in some cases, to travel through them virtually. Advances are also being made toward 3D data representation that includes other information, such as time or velocity, which can open up new areas of research and understanding. Research investigating objects over intervals, for example, or opening up the “time domain” (Andersen, 2012), can lead to advances in understanding the evolution of stars, galaxies, and the Universe itself. Understanding that objects in the Universe, such as stars, are dynamic can help to negate a common misconception in astronomy that the heavens are static and never-changing (Comins, 2001).
The ability to study astronomical sources from every side could give scientists a better understanding of how cosmic objects are structured, as well as their underlying physics, and could potentially open access to non-scientists as well. The field of astronomy, despite its challenges to obtain 3D data, has developed and adapted innovative ways to obtain such information about distant sources, as discussed in the next section.
Astronomical Medicine and 3D Models of Molecular Clouds
Arguably one of the most innovative milestones in the recent development of 3D imaging in astronomy has been the Astronomical Medicine project (see Borkin, 2010). This effort comprised scientists from the Harvard-Smithsonian Center for Astrophysics and Harvard’s Initiative in Innovative Computing, a program led by Dr. Alyssa Goodman. This group adapted existing 3D software and brain-imaging techniques from Boston-area medical personnel for use in astronomical data visualization.
The Astronomical Medicine project enabled researchers to generate 3D images of molecular clouds using 3D Slicer software (Borkin, Goodman, Halle, & Alan, 2007), which could then be interactively included in digital editions of journals such as Nature, thereby allowing readers to manipulate the 3D model directly in the enhanced PDF (Goodman et al., 2009; Information Today, 2009). This was the first example this researcher (Arcand) came across in which astronomical data were taken out of 2D form and into the third dimension in a simple, usable format, with no additional software or downloads necessary. This single example helped launch this researcher’s exploration into 3D mapping of astronomical objects and their potential uses with non-experts.
3D Visualization to 3D Print
Shortly after Goodman et al. (2009) began disseminating their results from the Astronomical Medicine project, technologists at the Chandra X-ray Center (CXC) worked to expand those efforts to another object type: supernova remnants. Cassiopeia A (Cas A) is a supernova remnant, the result of the catastrophic explosion of a star about 15-20 times the mass of our Sun (Orlando, Miceli, Pumo, & Bocchino, 2016). The stellar debris of Cas A is known to be expanding radially outwards from the explosion center (Delaney et al., 2010).
Using simple geometry and Doppler effect data from Chandra, NASA’s Spitzer Space Telescope, and multiple ground-based optical telescopes, Dr. Tracey Delaney, then of the Massachusetts Institute of Technology, collaborated with developers from the Astronomical Medicine project, as well as this researcher, to create a 3D model of the Cas A supernova remnant in 3D Slicer and similar programs, distributed through an interactive PDF (Delaney et al., 2010).
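The “simple geometry” can be sketched as follows – a simplified illustration of the general Doppler-reconstruction idea with made-up numbers, not the Delaney et al. procedure itself. If the debris has expanded freely since the explosion, a knot’s line-of-sight Doppler velocity translates directly into its depth within the remnant:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def line_of_sight_offset_km(observed_nm, rest_nm, age_years):
    """Depth of an ejecta knot relative to the remnant's center.

    Assumes homologous (free) expansion: every knot has moved at a
    constant velocity since the explosion, so position = velocity x age.
    A positive Doppler velocity (redshift) places the knot on the far side.
    """
    v_los = C_KM_S * (observed_nm - rest_nm) / rest_nm  # km/s
    age_seconds = age_years * 365.25 * 24 * 3600.0
    return v_los * age_seconds

# Example with illustrative numbers: an emission line with rest
# wavelength 500.0 nm observed at 506.7 nm (~4,000 km/s redshift),
# in a remnant roughly 340 years old.
depth_km = line_of_sight_offset_km(506.7, 500.0, 340.0)
```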
A version of the 3D supernova remnant was later produced with the Smithsonian (see Figure x) that could also be manipulated in a browser by viewing angle and by selection of which data sets to show. This interactive application showed the 3D data set separated by energy cuts and data type: the user could select, for example, just the emission of neon or argon as observed with Spitzer in the infrared region of the electromagnetic spectrum (EMS), or view the iron emission as detected by Chandra in the X-ray portion of the EMS (Watzke, 2013). Additionally, the CXC worked with a professional animator to develop a more transparent, fly-through visualization adapted from the data in the commercial 3D software Autodesk Maya (Autodesk.com, 2018), so that textures and colors more reminiscent of typical astronomical imaging, along with an artistic starfield, could be applied.
This overall Cas A 3D project as described in this section was the first time a supernova remnant had been mapped in 3D based on observational data (Chandra.si.edu, 2009). Developing such 3D models from the Cas A data led to new insights for scientists who build models of supernova explosions. Astrophysicists who model these types of explosions now understand that their calculations must account for the outer layers of a star like Cas A coming off spherically, while the inner layers emerge in a more disk-like way, with multiple jets (Arcand, Watzke, DePasquale, Edmonds, & DiVona, 2017; Delaney et al., 2010).
Additional research groups have worked on 3D modeling of Cas A since the 2010 Delaney study, including Milisavljevic and Fesen (2015) and Orlando, Miceli, Pumo, and Bocchino (2016). This continued research on the three-dimensional structure of a single object helps demonstrate the interest in, and research value of, such visualizations for experts.
Figure x. Left: 2D Chandra X-ray image of Cassiopeia A. Credit: NASA/CXC/SAO. Right: Digital 3D model of Cassiopeia A that can be manipulated by the user in their browser at http://3d.si.edu/explorer?mid=45. Credit: NASA/CXC/SAO & Smithsonian Institution.
Recent visualization research (Smith, Arcand, Smith, Smith, Smith, & Bookbinder, 2017) explored how unique-looking presentations of an object in deep space, like Cassiopeia A, might affect understanding, engagement, and aesthetic appreciation by expert and non-expert audiences. The online survey asked self-selected participants to respond to questions regarding a spectrum of Cas A images and videos in 2D, 2D plus time (such as data-driven time-lapse videos), and 3D, querying what kind of object the image resembled, how appealing the image was to the participant, how much the participant understood of the science, and whether the participant wanted to learn more about the object. Results from this study showed that alternative types of representations, such as 3D stills or 3D videos, can and should be used, provided they are appropriately explained and put into context.
Although data-based 3D visualizations can be beneficial to expert populations, it is also recognized that there is much potential for non-experts to work with physical 3D models. The Cas A project is an example of this. Collaborating with Smithsonian specialists in 3D scanning and printing, the CXC generated the first-ever 3D print of a supernova remnant (see Figure x). Cas A in 3D is freely available as a 3D print-ready model with supports (a 600k-triangle OBJ file at 27 MB) and in volumetric data form (ASCII VTK files created from telescope data, at 3.94 MB) for use with any 3D printer and its accompanying software (Arcand, 2016).
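As an illustration of preparing such a downloaded model for printing – a sketch assuming the open-source Python library trimesh, with a hypothetical filename – one can check the triangle count and whether the surface is watertight (closed) before exporting to STL:

```python
import trimesh  # open-source mesh-processing library

# Load a local copy of a 3D print-ready model (filename hypothetical).
mesh = trimesh.load("cas_a.obj", force="mesh")

print(f"triangles:  {len(mesh.faces)}")      # ~600k for the Cas A model
print(f"watertight: {mesh.is_watertight}")   # closed surfaces print reliably

# If the surface has holes, a simple repair pass can help before
# sending the file to a slicer.
if not mesh.is_watertight:
    trimesh.repair.fill_holes(mesh)

mesh.export("cas_a.stl")  # STL is accepted by most slicers and printers
```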
Figure x a & b. 3D model of Cas A via Ultimaker 3 in single (left, a) and dual part (right, b) versions. Visit http://chandra.si.edu/3dprint/ for model files. Credit: NASA/CXC/K.Arcand
Since the 3D modeling and printing of Cassiopeia A, other 3D-modeled astronomical objects have also been printed successfully. Researchers working with the Hubble Space Telescope developed 3D models of areas of stellar birth (the Eagle Nebula, or “Pillars of Creation”) (Figure x) and stellar aging (Eta Carinae and the Homunculus Nebula) (Figure x) (McLeod & Hook, 2015; Reddy, 2015). In each case, researchers combined data from the European Southern Observatory’s (ESO) Very Large Telescope (VLT) with Hubble data to create a 3D map of the object.
By analyzing the data, scientists discovered that the famous Pillars of Creation are separated from each other in space — the tip of the largest pillar is pointing toward us, while the other pillars are pointing away from us (McLeod et al., 2015). Owing to this orientation, and the intense bombardment from nearby young stars, the tip of the tallest pillar appears brighter than the other pillars. The 3D plot (Figure x) shows the separation and orientation of the pillars, further adding to and expanding upon the information quotient of the original 2D image.
Figure x. Upper: 2D Optical image of Eagle Nebula/Pillars of Creation from the Very Large Telescope. Lower: The Pillars plotted in 3D. Credit: ESO
In the case of Eta Carinae, scientists mapped the shape of the bipolar bubble, known as the Homunculus Nebula, surrounding the star (Figure x). Using 3D modeling software designed for astronomy, called SHAPE (Steffen, Koning, Wenger, Morisset, & Magnor, 2011), researchers built a printable 3D model of the Homunculus Nebula (Steffen et al., 2014). Additionally, further 3D modeling (and printing) of the Eta Carinae system has been done on different scales, for example of the inner winds (Madura, Clementel, Gull, Kruip, & Paardekooper, 2015). Such modeling helps to better display the multiple dimensions of this system, something that 2D images cannot individually do (Madura, 2017).
Figure x. Left: 2D Optical image of Eta Carinae from Hubble. Credit: ESA/NASA
Right: Eta Carinae in 3D. Credit: NASA/STScI & NASA’s Goddard Space Flight Center/CI Lab
There are now other data-driven 3D printed objects from astronomy and planetary geology, including an additional supernova remnant (Arcand, Watzke, DePasquale, Edmonds, & DiVona, 2017), a binary star system (two stars orbiting each other) (Watzke & Edmonds, 2017), geological maps of our Moon (Ellison, 2014), craters and meteorites on Mars (Capraro, 2014; Gwinner, Oberst, Jaumann, & Neukum, 2014), asteroids (e.g., Kim, 2015), and the Cosmic Microwave Background (Clements, Sato, & Fonseca, 2016).
To date, the stellar 3D astronomical models have been printed and shared directly by CXC with numerous U.S. schools, libraries, Maker Spaces, STEM programs such as Girls Who Code and Girls Get Math, groups for blind and visually impaired persons such as the National Federation of the Blind and local schools for the blind, members of the Smithsonian Advisory Board, Smithsonian Secretary David Skorton, politicians such as former U.S. Senator Harry Reid and current U.S. Senator Jack Reed, and many others (Arcand, Watzke, DePasquale, Edmonds & DiVona, 2017). This dissemination network has allowed for preliminary testing of the 3D models, with researcher examination of user response, potential difficulties in printing quantities on demand, issues when shipping delicate parts, and physical challenges of models being handled by numerous participants.
Non-expert populations can benefit from 3D-printed models of science data, including students (Rennie, 2014) and populations with visual impairments (Arcand, Jubett, Watzke, Price, Williamson, & Edmonds, 2018 submitted; Christian, Nota, Greenfield, Grice, & Shaheen, 2015; Grice, Christian, Nota, & Greenfield, 2015). As noted, astronomy has tended to be a visuals-heavy field, from thousands of years of humans looking up at the night sky, to the first generation of telescope users, to modern times with the advent of high-powered, multiwavelength, distributed spacecraft. Such visualizations have been said to play a significant role in the popularization of astronomy by leveraging the “visual economy” (Bigg & Vanhoutte, 2017, p.118), though they can leave individuals who have no or very low vision behind.
An estimated 253 million people worldwide have visual impairments: approximately 36 million are blind, with the remaining 217 million having moderate or severe vision impairments (World Health Organization, 2017). Those with visual impairments have a spectrum of needs that are affected by considerations such as whether a person has been blind from birth or lost sight later in life, and whether he or she can read Braille (American Printing House for the Blind, 2016).
3D printed versions of astronomical data with tactile features have been shown to help communicate both with those who are blind from birth and those who have lost sight, whether all or some, at some time after birth (Arcand et al., 2018 submitted; Christian, Nota, Greenfield, Grice, & Shaheen, 2015). From “stimulating, building, and reinforcing a person’s mental model” of the objects to self-reported comprehension and learning gains, positive outcomes have been reported, including the ability for some participants to visualize the data being modeled (Christian, Nota, Greenfield, Grice, & Shaheen, 2015, p.43).
Evaluations of user experiences with the Cas A 3D printed model and similar data-driven 3D astronomical prints with blind and visually impaired learners have demonstrated learning gains (Arcand et al., 2018 submitted). Participants’ comments ranged from, “I learned what stars looked like when they exploded” to “[I learned] how stars die and if they want to, blow up and ‘vomit hot gas.’” The user studies also suggested that working with the 3D printed models may have an effect on how participants relate to astronomy, or how participants view themselves in relation to science.
Such 3D printed visualizations as Cassiopeia A can, therefore, help provide learners with visual impairments access to astronomical data in a way that is difficult to experience otherwise. Building spatial reasoning skills, additionally, has been shown to be very important for success in STEM (as noted in section x) but is often less developed in underrepresented groups. When such data come primarily from observatories built with taxpayer funding in the U.S. and elsewhere, providing equitable access to that data should be a given, not an exception (Jones & Broadwell, 2008).
Challenges of 3D Printing Cosmic Objects
File Formats:
The world of 3D printing comprises numerous file formats, which at first might seem inconsistent. Most 3D printers can print directly from OBJ (a geometry definition file format) and STL (an abbreviation of “stereolithography,” a 3D systems file format) files; however, each printer has its own unique set of acceptable formats (Chakravorty, 2018). Some OBJ files can be stubborn, depending on the number of polygons in the model and other factors. OBJ is a bit more versatile, however, being commonly accepted for import and export by multiple 3D software packages, such as the Autodesk 3D design programs Maya, 3D Studio Max, and AutoCAD, as well as open-source programs such as MeshLab and ParaView (http://www.meshlab.net/ and http://www.paraview.org/) (Arcand, Jubett, Watzke, Price, Williamson, & Edmonds, 2018).
There are a number of software packages that can convert other file formats into STL or OBJ. MeshLab can handle several different formats derived from 3D scanners. Using ParaView, one can readily import VTK (Visualization ToolKit) data files, which are commonly used by astronomers when visualizing data sets. With a bit of a learning curve and a few filters, one can visualize these data sets in ParaView as mesh objects and save the data as STL (Chakravorty, 2018).
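That ParaView workflow can also be scripted. The minimal sketch below assumes the open-source pyvista library, a Python interface to the Visualization ToolKit, and a hypothetical filename:

```python
import pyvista as pv  # Python interface to the Visualization ToolKit (VTK)

# Read a VTK data file of the kind astronomers use for 3D data sets.
data = pv.read("remnant.vtk")

# Extract the outer surface as a triangle mesh and save it as STL,
# the format most slicers and 3D printers accept.
surface = data.extract_surface().triangulate()
surface.save("remnant.stl")
```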
There are also proprietary file extensions used by specific printers, such as MakerBots. FBX, for example, is owned by Autodesk and can be converted by a number of Autodesk programs. Other Autodesk proprietary extensions are 3DS (3D Studio Max) and MB or MA (Maya binary or ASCII files). Various CAD (computer-aided design) programs use specific file extensions that typically can be converted to other formats accepted by most 3D printers. Conversions between format extensions can take a bit of research, depending on the available software and the make/model of the printer being used (Arcand et al., 2018).
In general, given the number of files readily available as STL, OBJ, and similar non-proprietary file formats in 3D databases (see, for example, NASA’s 3D archives (https://nasa3d.arc.nasa.gov/models) or Thingiverse (https://www.thingiverse.com/)), it is recommended to stay with the more common file formats allowed by the software or hardware being used, to help promote accessibility among 3D printer users, from educators and librarians to other specialists and general users. Sometimes, however, 3D print developers must use or convert to proprietary file types as dictated by the specific software and hardware being accessed.
Some 3D printers and 3D printing services can also accept color data as WRL (VRML) or X3D files. WRL/VRML and X3D file types are also common and describe “worlds” in the Virtual Reality Modeling Language and in 3D computer graphics, respectively (“Using VRML Files For Color 3D Printing”, 2015). These files can provide color information for models that can be handled by some printers or when outsourcing to 3D printing services.