Sunday, December 29, 2019

Performance Enhancing Drugs Should Be Banned - 1259 Words

Performance enhancing drugs are substances that, if taken, allow humans to excel at activities such as sport. I strongly feel that there should be a strict law stating that any competitive athlete caught using drugs should face a lifetime ban for a first offence. Firstly, drugs are illegal! Secondly, they are also banned in sports, yet that doesn't stop people from misusing them. Thirdly, they can damage your body and be extremely addictive. Currently, in most countries, the penalty for a first offence is a two-year suspension, with a lifetime ban for a second offence. My main reason for a lifetime ban is to deter other athletes from using drugs. Surely this is only fair? My first argument is that it is unfair for any athlete… This is unfair for the other athletes who are clean and it shouldn't be tolerated. If you were an athlete who had never touched a drug, wouldn't you feel cheated and victimized? A lifetime ban for a first drug offence would make athletes reluctant to use drugs. If this were introduced and an athlete continued to use drugs, they wouldn't be able to participate again. This is an honourable punishment, and they wouldn't have the opportunity to discredit the sport again. The reason for competition is to test your natural ability and to discover your improvements or weaknesses through training. Taking drugs is an unnatural advantage, like taking credit for someone else's work or plagiarism. Athletes are destroying their own integrity, the sport's integrity and the integrity of anyone they represent, such as companies or organisations. Sporting integrity is being eaten away by the use of performance enhancing drugs. A sport that displays integrity can be identified as honest and sincere: it supports good sportsmanship and provides a safe and fair environment for all involved. A person with integrity does what they say they will do, in accordance with their values and beliefs.
Just having integrity as a person will make people more open to trusting, respecting and believing in you. By using illegal drugs you lose your sporting integrity, and it then becomes difficult to earn back. Huge worldwide sporting brands want to be associated with the biggest and best…

Related: Performance Enhancing Drugs Should Be Banned - 1645 Words
…rules by using performance enhancing drugs each year. Performance enhancing drugs help athletes to become bigger, faster, and overall better at their individual sport. This process is called doping. Doping can be defined as using drugs and various substances to perform better at a particular task. Furthermore, these athletes act in the moment and fail to see all aspects of these performance enhancing drugs. Contradictorily, some individuals argue that performance enhancing drugs should in fact be allowed…

Related: Performance Enhancing Drugs Should Be Banned - 1223 Words
…to performance enhancing drug use in athletes. Performance enhancing drugs should continue to be banned due to health risk factors, the element of cheating and abuse of the athlete's body. Paul is a dedicated young athlete. He works hard on the field and hard in the weight room, training his body to peak conditioning for his sport. As a result of his hard work, Paul has secured a spot in the starting lineup. As his team develops and grows, Paul sees his team mates taking a performance enhancing…

Related: Use of Performance Enhancing Drugs Should Be Banned - 1961 Words
The controversial question is whether the use of Performance Enhancing Drugs (PED) in sports should be banned or not. Professional sports are popular in most countries. Major franchises are dealing with the issue of PEDs being used by athletes who are paid to perform in the sport on the belief that they are naturally a raw talent. This controversial essay will side with the banning of PED use in any type of sport activity, whether at professional or amateur level. Both sides of this issue…

Related: Performance Enhancing Drugs Should Be Banned for Athletes - 600 Words
The use of Performance Enhancing Drugs (PED) has a major negative impact on athletes and causes many problems in sports and competitions. These PEDs should be banned for athletes and competitors at any level because they are unhealthy and harmful to the body, give users an edge over competitors, and diminish the true sportsmanship of the game itself. The illegal use of Performance Enhancing Drugs leads to many unhealthy and potentially…

Related: Performance Enhancing Drugs Should Be Banned in Professional Sports - 2737 Words
…fifty game suspensions without pay for using performance enhancing drugs. Big names such as Ryan Braun and Alex Rodriguez were on this list. Testosterone, an illegal substance, is what is found in the performance enhancing drugs. Testosterone increases male characteristics such as body hair, aggression, deepening of the voice, and of course massive muscle growth ("Steroids" par. 1). Some professional athletes claim to use performance enhancing drugs to recover more quickly from injury; others take…

Related: Anabolic Steroid Use in Sports Summary - 1493 Words
…fierce among athletes. Winning at all costs often includes using one of many performance enhancing drugs such as anabolic steroids. Many athletes use performance enhancing drugs, like steroids, to achieve higher goals and set higher records than other drug-free successful athletes. Although athletes are performing at higher levels when using such drugs, what is the cost? Finally, anabolic steroids should remain banned from sports because their use results in many harmful side effects; because their…

Related: Steroid Use in Sports - 1732 Words
Around an astonishing ten to fifteen percent of professional athletes use illegal steroids, which are also known as performance enhancing drugs. These substances, which are banned in professional sports, aren't just any type of steroid or drug. They are called anabolic steroids or performance enhancing drugs, and they are synthetically produced substances of male testosterone hormones. The use of these illegal steroids has garnered a lot of publicity within the world of sports over the past few years…

Related: Performance Enhancing Drugs Should Not Be Legalized - 1129 Words
'Olympic track star Marion Jones was sentenced in a federal court to six months in prison' (Kelly and Rao, 2008). The reason Jones was found guilty is her use of performance enhancing drugs since 1999. More and more famous athletes prove to have used banned drugs to enhance their performance. At the same time, the role of the anti-doping agencies is more and more important in worldwide games such as the Olympic Games and the Tour de France…

Related: Anabolic Steroids: Use and Performance Enhancing Drugs - 1516 Words
…use of performance enhancing drugs like anabolic steroids has been a debatable topic in the United States since as early as the 1950s. Former U.S. Representative Howard Berman expresses that "Steroids can seem necessary to compete at the highest level, and the quick rewards may seem to outweigh the long term consequences to users." The National Institute on Drug Abuse (NIDA) states that countless athletes, both young and old, face life threatening illnesses due to the use of performance-enhancing drugs…

Related: Performance Enhancing Drugs - 791 Words
Using performance-enhancing drugs is known as doping. Most athletes take PEDs so they can win a gold medal for their country, and because they want fame. No athlete should take PEDs, because no one knows the risks. The penalty for using performance enhancing drugs should be stricter because they cause health risks, their use is cheating and illegal, and world-class athletes use them and still get away with it. The government should ban these drugs so that no athlete…

Saturday, December 21, 2019

Network Estimation Graphical Model - 1222 Words

3 Network estimation: graphical model

The following projects involve network estimation problems encountered in different biological applications such as gene-gene or protein-protein interaction. The main focus has been on developing robust, scalable network estimation methodology.

Quantile based graph estimation. Graphical models are ubiquitous tools to describe the interdependence between variables measured simultaneously, such as large-scale gene or protein expression data. Gaussian graphical models (GGMs) are well-established tools for probabilistic exploration of dependence structures using precision matrices, and they are generated under a multivariate normal joint distribution. However, they suffer from several shortcomings since… Stochastic approximation (SA) provides a fast recursive way of numerically maximizing a function under measurement error. With suitably chosen weights/step sizes, the stochastic approximation algorithm converges to the true solution, and it can be adapted to estimate the components of the mixing distribution from a mixture, in the form of a recursively learning predictive recursion method. The convergence rests on a martingale construction and the convergence of a related series, and depends heavily on independence. The general algorithm may not hold if dependence is present. We have proposed a novel martingale decomposition to address the case of dependent data.

5 Measurement error model: small area estimation

We proposed [4] a novel shrinkage type estimator and derived the optimum value of the shrinkage parameter. The asymptotic value of the shrinkage coefficient depends on the Wasserstein metric between the standardized distributions of the observed variable and the variable of interest. In the process, we also established the necessary and sufficient conditions for a recent conjecture about the shrinkage coefficient to hold.
The biggest advantage of the proposed approach is that it is completely distribution free. This makes the estimators extremely robust, and I also showed that the estimator continues to perform well with respect to the ‘best’ estimator derived…

Related: Research Statement: Texas A&M University - 1438 Words
Research Statement. Nilabja Guha, Texas A&M University. My current research at Texas A&M University is in a broad area of uncertainty quantification (UQ), with applications to inverse problems, transport based filtering, graphical models and online learning. My research projects are motivated by many real-world problems in engineering and the life sciences. I have collaborated with researchers in engineering and the biosciences on developing rigorous uncertainty quantification methods within a Bayesian framework…
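The general shrinkage idea described in the small area estimation section above can be illustrated with a toy numerical sketch. This is not the estimator of [4]: the data, the fixed weight `lam`, and the shrink-toward-the-grand-mean rule are all invented for illustration (in the cited work the optimal weight depends on a Wasserstein distance).

```python
import random
import statistics

# Toy sketch of a shrinkage-type estimator: each noisy area-level estimate
# is pulled toward the grand mean by a weight lam in [0, 1]. The weight
# here is simply fixed for illustration, not derived as in the cited work.
random.seed(0)
true_means = [random.gauss(10.0, 2.0) for _ in range(8)]   # unobserved effects
y = [m + random.gauss(0.0, 3.0) for m in true_means]       # noisy direct estimates

lam = 0.4                                  # illustrative shrinkage weight
grand = statistics.fmean(y)
shrunk = [lam * grand + (1 - lam) * yi for yi in y]

# Shrinking toward the grand mean reduces the spread of the estimates
# while leaving their overall mean unchanged.
assert statistics.pvariance(shrunk) < statistics.pvariance(y)
assert abs(statistics.fmean(shrunk) - grand) < 1e-9
```

The variance reduction is mechanical: the shrunk values have variance (1 - lam)² times that of the raw estimates, which is the usual bias-variance trade-off behind shrinkage.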
Related: The Static Model of Data Mining - 1710 Words
Abstract: A lot of research has been done in mining software repositories. In this paper we discuss the static model of data mining to extract defects. Different algorithms are used to find defects, such as the naïve Bayes algorithm, neural networks, and decision trees, but the naïve Bayes algorithm has the best results. Data mining approaches are used to predict defects in software. We used a NASA dataset, namely Data rive. Software metrics are also used to find defects. Keywords: naïve Bayes algorithm, software metric, solution…

Related: Essay - 1183 Words
…parameters for digital modulation schemes, all of which formed an instrumental part of my final BTech project titled "Estimation of Parameters, Error Detection and Multi-Carrier Cooperative Device to Device (D2D) Communication in Cellular Networks" and of my major and minor projects. Needless to say, in all these projects an exhaustive comparison and precision-based estimation has been demonstrated between the methodologies employed in the project and the research article references. All of these…

Related: Time and Cost Estimating Techniques Essay - 1233 Words
…should understand that they are approximations, not accuracies. Although the formal techniques are very specific, most of them have the following tasks in common:
* Break activities down into small pieces for easier and more accurate estimation (WBS).
* Review historical information and compare to current activities.
* Include a contingency buffer for potential risks.
* Solicit advice from others that have previously completed similar activities.
* Identify and document…

Related: Network Security and Situational Awareness Data Pre-processing Method Based on Conditional Random Fields - 1418 Words
Rajesh.P, Krishnamoorthy.P, Gopi.S, Sivasankari.S (Assistant Professors, CSE and IT, Kingston Engineering College, Vellore, India; rajeshpcse@kingston.ac.in, krishnancse0206@gmail.com, gopi.scse@gmail.com, sivasankari_cse@yahoo.co.in). Abstract: The examination of network…

Related: The Cloud of Cloud Computing - 2307 Words
…attacks, using extensible resources and other cloud characteristics. Our model is based on SaaS (Security as a Service) to manage security using specialized virtual firewalls offered as a service by the cloud provider. The main advantage of this approach is to instantiate firewalls when needed and to adapt resources to filter the network flow, avoiding bottlenecks and congestion. We have proposed a new autonomous model to manage cloud-based firewalling services using a multi-agent system. We…

Related: Computer Literacy Is the Level of Proficiency and Fluency - 1501 Words
…and commence chatting. USENET (Unix User Network) is an arrangement of boards where any person can display posts and other users will follow and respond. As with IRC, a user will notice panels established for all sorts of users. The search engine Google has a group of web interfaces for these chat boards (Shannon, n.d.). 4. Describe software development with respect to the Systems Development Life Cycle (SDLC) using an applicable model. SDLC, the Software Development Life Cycle…

Related: Genotyping Case Study - 1259 Words
…(Illumina, Inc.) and SNPs with minor allele frequency (MAF) of less than 0.05 were ruled out from further analysis.
Subsequently, 42,041 SNPs with minor allele frequency ≥ 0.05 across 491 genotypes were used for GWAS analysis. Linkage disequilibrium estimation: the Glyma.Wm.82.a2 reference genome will be used to obtain chromosome physical lengths (bp) through SoyBase (Grant, Nelson, et al., 2010) to calculate genome-wide inter-marker distances and chromosome-wide densities. Linkage disequilibrium (LD) between…
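The MAF screen described in the excerpt above is a simple frequency cutoff. A minimal sketch, with invented genotype data (0/1/2 coded as copies of the counted allele; only the 0.05 threshold comes from the text):

```python
# Minimal sketch of a minor-allele-frequency (MAF) filter, as in the
# screening step described above. The SNP data below are invented.
snps = {
    "snp1": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],   # MAF = 1/20 = 0.05        -> kept
    "snp2": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],   # MAF = 0.05               -> kept
    "snp3": [2, 2, 1, 2, 1, 2, 2, 1, 2, 2],   # counted allele is major:
                                              # MAF = min(0.85, 0.15)    -> kept
    "snp4": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # monomorphic, MAF = 0     -> dropped
}

def maf(genotypes):
    """Minor allele frequency from 0/1/2 genotype codes."""
    p = sum(genotypes) / (2 * len(genotypes))  # frequency of the counted allele
    return min(p, 1 - p)                       # the *minor* allele frequency

kept = {name: g for name, g in snps.items() if maf(g) >= 0.05}
print(sorted(kept))                            # ['snp1', 'snp2', 'snp3']
```

Taking `min(p, 1 - p)` matters because the counted allele is not always the minor one, as `snp3` shows.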

Friday, December 13, 2019

Advances in Data Storage Technology

Advances in Data Storage Technology

Contents
I. Introduction
II. Purpose of storage
III. Hierarchy of storage
A. Primary storage
B. Secondary storage
C. Tertiary storage
D. Off-line storage
IV. Characteristics of storage
A. Volatility
B. Mutability
C. Accessibility
D. Addressability
E. Capacity
F. Performance
G. Energy use
V. Fundamental storage technologies
A. Semiconductor
B. Magnetic
C. Optical
D. Paper
E. Uncommon
VI. Related technologies
A. Network connectivity
B. Robotic storage
References

I. INTRODUCTION
Computer data storage, often called storage or memory, refers to computer components and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer: information retention. It is one of the fundamental components of all modern computers and, coupled with a central processing unit (CPU, a processor), implements the basic computer model used since the 1940s. In contemporary usage, memory usually refers to a form of semiconductor storage known as random-access memory (RAM) and sometimes other forms of fast but temporary storage. Similarly, storage today more commonly refers to mass storage: optical discs, forms of magnetic storage like hard disk drives, and other types slower than RAM but of a more permanent nature. Historically, memory and storage were respectively called main memory and secondary storage (or auxiliary storage). Auxiliary storage (or auxiliary memory units) was also used for memory not directly accessible by the CPU (secondary or tertiary storage). The terms internal memory and external memory are also used.

II. Purpose of storage
Many different forms of storage, based on various natural phenomena, have been invented.
So far, no practical universal storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains several kinds of storage, each with an individual purpose. A digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, using eight million bits, or about one megabyte, a typical computer could store a short novel. Traditionally the most important part of every computer is the central processing unit (CPU, or simply a processor), because it actually operates on data, performs any calculations, and controls all the other components. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators or simple digital signal processors. Von Neumann machines differ in that they have a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines. In practice, almost all computers use a variety of memory types, organized in a storage hierarchy around the CPU, as a trade-off between performance and cost. 
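The bit-and-byte arithmetic in this section can be checked directly; a short sketch (the sample string is arbitrary):

```python
# Checking the arithmetic above: a byte is 8 bits, so one megabyte
# (10**6 bytes) holds eight million bits -- roughly a short novel's text.
text = "It was a dark and stormy night."
encoded = text.encode("utf-8")             # characters -> bytes
bits = "".join(f"{b:08b}" for b in encoded)  # bytes -> string of 1s and 0s

assert len(bits) == 8 * len(encoded)       # 8 bits per byte
assert int(bits[:8], 2) == ord("I")        # the first byte encodes 'I'

megabyte = 10**6                           # bytes
print(megabyte * 8)                        # 8000000 bits, as in the text
```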
Generally, the lower storage sits in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit.

III. Hierarchy of storage
A. Primary storage: Primary storage (or main memory or internal memory), often referred to simply as memory, is the only kind directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive. This led to modern random-access memory (RAM): small and light, but comparatively expensive. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.) As the RAM types used for primary storage are volatile (cleared at start up), a computer containing only such storage would have no source to read instructions from in order to start. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage into RAM and start executing it. A non-volatile technology used for this purpose is called ROM (read-only memory). Recently, primary storage and secondary storage in some usages refer to what was historically called, respectively, secondary storage and tertiary storage.

B. Secondary storage: Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU.
The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down: it is non-volatile. Consequently, modern computer systems typically have two orders of magnitude more secondary storage than primary storage, and data is kept there for a longer time. In modern computers, hard disk drives are usually used as secondary storage. Rotating optical storage devices, such as CD and DVD drives, have longer access times. Some other examples of secondary storage technologies are flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.

C. Tertiary storage: Tertiary storage or tertiary memory provides a third level of storage. Typically it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; this data is often copied to secondary storage before use. It is primarily used for archival of rarely accessed information, since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores accessed without human operators. Typical examples include tape libraries and optical jukeboxes.

D. Off-line storage: Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage.
Optical discs and flash memory devices are most popular, and removable hard disk drives to a much lesser extent. In enterprise use, magnetic tape is predominant. Older examples are floppy disks, Zip disks, and punched cards.

IV. Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics, as well as by measuring characteristics specific to a particular implementation. The core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.

A. Volatility: Non-volatile memory retains the stored information even when not constantly supplied with electric power. It is suitable for long-term storage of information and is nowadays used for most secondary, tertiary, and off-line storage. In the 1950s and 1960s it was also used for primary storage, in the form of magnetic core memory. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies of today are volatile ones (though this is not a universal rule). Since primary storage is required to be very fast, it predominantly uses volatile memory.

B. Mutability: Read/write storage, or mutable storage, allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage for secondary storage as well. Read-only storage retains the information stored at the time of manufacture, and write-once storage (Write Once Read Many) allows the information to be written only once at some point after manufacture. These are called immutable storage. Immutable storage is used for tertiary and off-line storage. Examples include CD-ROM and CD-R.

C. Accessibility: With random access, any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited for primary and secondary storage. With sequential access, pieces of information are accessed in serial order, one after the other; therefore the time to access a particular piece of information depends on which piece of information was last accessed. This characteristic is typical of off-line storage.

D. Addressability: With location-addressable storage, each individually accessible unit of information is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient but burdensome for humans. The underlying device is still location-addressable, but the operating system provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.

E. Capacity: Raw capacity is the total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes). Memory storage density is the compactness of stored information: the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).

F. Performance: Latency is the time it takes to access a particular location in storage. The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency, and in the case of sequential access storage, minimum, maximum and average latency.

G. Energy use: Storage devices that reduce fan usage and automatically shut down during inactivity, and low-power hard drives, can reduce energy consumption by up to 90 percent. 2.5-inch hard disk drives often consume less power than larger ones. Low-capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks.

V. Fundamental storage technologies
As of 2008, the most commonly used data storage technologies are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies have also been used in the past or have been proposed for development.

A. Semiconductor: Semiconductor memory uses semiconductor-based integrated circuits to store information. A semiconductor memory chip may contain millions of tiny transistors or capacitors. Volatile and non-volatile forms of semiconductor memory exist. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor memory, or dynamic random-access memory. Since the turn of the century, a type of non-volatile semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers.

B. Magnetic: Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads, which may contain one or more recording transducers. A read/write head covers only a part of the surface, so that the head or medium or both must be moved relative to one another in order to access data.
In modern computers, magnetic storage takes these forms:
• Magnetic disk
• Floppy disk, used for off-line storage
• Hard disk drive, used for secondary storage
• Magnetic tape data storage, used for tertiary and off-line storage
In early computers, magnetic storage was also used for primary storage, in the form of magnetic drum, core memory, core rope memory, thin-film memory, twistor memory or bubble memory. Also, unlike today, magnetic tape was often used for secondary storage.

C. Optical: Optical storage, typified by the optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read-only media), formed once (write-once media) or reversible (recordable or read/write media). The following forms are currently in common use:
• CD, CD-ROM, DVD, BD-ROM: read-only storage, used for mass distribution of digital information (music, video, computer programs)
• CD-R, DVD-R, DVD+R, BD-R: write-once storage, used for tertiary and off-line storage
• CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: slow-write, fast-read storage, used for tertiary and off-line storage
• Ultra Density Optical (UDO), similar in capacity to BD-R or BD-RE: slow-write, fast-read storage, used for tertiary and off-line storage
Magneto-optical disc storage is optical disc storage where the magnetic state of a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential-access, slow-write, fast-read storage used for tertiary and off-line storage.

D. Paper: Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed.
Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. A few technologies allow people to make marks on paper that are easily read by machine; these are widely used for tabulating votes and grading standardized tests. Barcodes made it possible for any object to be sold or transported to have some computer-readable information securely attached to it.

E. Uncommon: Among vacuum tube memories, the Williams tube used a cathode ray tube and the Selectron tube used a large vacuum tube to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable and the Selectron tube was expensive. Electro-acoustic memory, also known as delay line memory, used sound waves in a substance such as mercury to store information. Delay line memory was dynamic, volatile, cycle-sequential read/write storage, and was used for primary storage. Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs. Phase-change memory uses different phases of a phase-change material to store information in an X-Y addressable matrix, and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical discs already use phase-change material to store information.
Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).

Molecular memory stores information in a polymer that can hold an electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch.

A data storage tag (DST), also sometimes known as an archival tag, is a data logger that uses sensors to record data at predetermined intervals. Data storage tags usually have a large memory size and a long lifetime. Most archival tags are powered by batteries that allow the tag to record positions for several years. Alternatively, some tags are solar powered and allow the scientist to set their own interval; this allows data to be recorded for significantly longer than battery-only powered tags.

An information repository is an easy way to deploy a secondary tier of data storage that can comprise multiple, networked data storage technologies running on diverse operating systems. In it, data that no longer needs to be in primary storage is protected, classified according to captured metadata, processed, de-duplicated, and then purged, automatically, based on data service-level objectives and requirements. In information repositories, data storage resources are virtualized as composite storage sets and operate as a federated environment. Information repositories were developed to mitigate problems arising from data proliferation and to eliminate the need for separately deployed data storage solutions caused by the concurrent deployment of diverse storage technologies running on diverse operating systems.
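The de-duplication step that an information repository applies before moving data to the secondary tier can be sketched with content hashing: each unique piece of content is stored once, and every file name maps to the hash of its content. This is a minimal sketch; the file names, the `dedupe` function, and the choice of SHA-256 are illustrative assumptions.

```python
import hashlib

def dedupe(blobs: dict) -> tuple:
    """Store each unique content once; map every name to its content hash."""
    store = {}   # hash -> single stored copy of the content
    index = {}   # file name -> hash (the captured metadata)
    for name, data in blobs.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # keep only the first copy
        index[name] = digest
    return store, index

store, index = dedupe({
    "report_v1.txt": b"quarterly numbers",
    "report_copy.txt": b"quarterly numbers",   # duplicate content
    "notes.txt": b"meeting notes",
})
print(len(store))  # prints 2 -- the duplicate is stored only once
```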
They feature centralized management of all deployed data storage resources. They are self-contained, support heterogeneous storage resources, support resource management to add, maintain, recycle, and terminate media, keep track of off-line media, and operate autonomously.

VI. Related technologies

A. Network connectivity: Secondary or tertiary storage may connect to a computer via a computer network. This concept does not pertain to primary storage, which is shared between multiple processors to a much lesser degree.

Direct-attached storage (DAS) is traditional mass storage that does not use any network. It is still the most popular approach; the term itself was coined only recently, together with NAS and SAN.

Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or, in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.

A storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that the former presents and manages file systems to client computers, whilst the latter provides access at block-addressing (raw) level, leaving it to the attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.

B. Robotic storage: Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In the tape storage field these are known as tape libraries, and in the optical storage field as optical jukeboxes or optical disk libraries, by analogy. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
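The file-level versus block-level distinction between NAS and SAN can be made concrete with two tiny in-memory stand-ins: a file server resolves names itself, while a block device exports raw numbered blocks and leaves layout to the client. The class names and the 512-byte block size are assumptions for illustration only.

```python
BLOCK_SIZE = 512  # assumed block size

class FileServer:
    """NAS-style: the server manages the file system; clients ask by name."""
    def __init__(self):
        self.files = {}
    def write(self, name: str, data: bytes) -> None:
        self.files[name] = data
    def read(self, name: str) -> bytes:
        return self.files[name]

class BlockDevice:
    """SAN-style: the server exports raw blocks; the client owns the layout."""
    def __init__(self, num_blocks: int):
        self.blocks = bytearray(num_blocks * BLOCK_SIZE)
    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == BLOCK_SIZE
        self.blocks[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE] = data
    def read_block(self, lba: int) -> bytes:
        return bytes(self.blocks[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE])

nas = FileServer()
nas.write("hello.txt", b"hello")
print(nas.read("hello.txt"))           # the server resolved the name

san = BlockDevice(num_blocks=8)
san.write_block(3, b"x" * BLOCK_SIZE)  # the client chose block 3 itself
print(san.read_block(3)[:4])           # prints b'xxxx'
```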
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media into built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are the possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.

Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy: long-unused files are automatically migrated from fast hard disk storage to libraries or jukeboxes, and retrieved back to disk if they are needed.
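The hierarchical storage management policy just described can be sketched as a two-tier store: files unused for longer than a cutoff migrate from the fast tier to the archive tier, and are recalled transparently on access. The class name, tier layout, and 90-day cutoff are illustrative assumptions, not a real HSM product's behavior.

```python
CUTOFF_SECONDS = 90 * 24 * 3600  # assumed "long-unused" threshold

class TwoTierStore:
    def __init__(self):
        self.fast = {}     # name -> (data, last access time), e.g. hard disk
        self.archive = {}  # name -> data, e.g. tape library or jukebox

    def put(self, name: str, data: bytes, now: float) -> None:
        self.fast[name] = (data, now)

    def migrate(self, now: float) -> list:
        """Move long-unused files off the fast tier; return what moved."""
        moved = []
        for name, (data, last) in list(self.fast.items()):
            if now - last > CUTOFF_SECONDS:
                self.archive[name] = data
                del self.fast[name]
                moved.append(name)
        return moved

    def get(self, name: str, now: float) -> bytes:
        if name not in self.fast:                  # recall from archive
            self.fast[name] = (self.archive.pop(name), now)
        data, _ = self.fast[name]
        self.fast[name] = (data, now)              # refresh access time
        return data

store = TwoTierStore()
day = 24 * 3600
store.put("old.log", b"old data", now=0 * day)
store.put("hot.csv", b"hot data", now=0 * day)
store.get("hot.csv", now=20 * day)            # keeps hot.csv recently used
moved = store.migrate(now=100 * day)          # old.log is 100 days stale
print(moved)                                  # prints ['old.log']
print(store.get("old.log", now=100 * day))    # recalled transparently
```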