
Friday, July 2, 2010

COMPUTER - NETWORK

A computer network, often simply referred to as a network, is a collection of computers and devices connected by communications channels that facilitate communication among users and allow them to share resources.

Purpose:-

Computer networks can be used for several purposes:

Facilitating communications:- Using a network, people can communicate efficiently and easily via e-mail, instant messaging, chat rooms, telephony, video telephone calls, and videoconferencing.

Sharing hardware:- In a networked environment, each computer on a network can access and use hardware on the network. Suppose several personal computers on a network each require the use of a laser printer. If the personal computers and a laser printer are connected to the network, each user can access the laser printer as needed.

Sharing files, data, and information:- In a network environment, any authorized user can access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.

Sharing software:- Users connected to a network can access application programs on the network.

Wednesday, June 9, 2010

COMPUTER HACKING

Computer hacking is the practice of modifying computer hardware and software to accomplish a goal outside of the creator’s original purpose. People who engage in computer hacking activities are often called hackers. Since the word “hack” has long been used to describe someone who is incompetent at his/her profession, some hackers claim this term is offensive and fails to give appropriate recognition to their skills.

Computer hacking is most common among teenagers and young adults, although there are many older hackers as well. Many hackers are true technology buffs who enjoy learning more about how computers work and consider computer hacking an “art” form. They often enjoy programming and have expert-level skills in one particular program. For these individuals, computer hacking is a real life application of their problem-solving skills. It’s a chance to demonstrate their abilities, not an opportunity to harm others.

Since a large number of hackers are self-taught prodigies, some corporations actually employ computer hackers as part of their technical support staff. These individuals use their skills to find flaws in the company’s security system so that they can be repaired quickly. In many cases, this type of computer hacking helps prevent identity theft and other serious computer-related crimes.

Computer hacking can also lead to other constructive technological developments, since many of the skills developed from hacking apply to more mainstream pursuits. For example, former hackers Dennis Ritchie and Ken Thompson went on to create the UNIX operating system in the 1970s. This system had a huge impact on the development of Linux, a free UNIX-like operating system. Shawn Fanning, the creator of Napster, is another hacker well known for his accomplishments outside of computer hacking.

In comparison to those who develop an interest in computer hacking out of simple intellectual curiosity, some hackers have less noble motives. Hackers who are out to steal personal information, change a corporation’s financial data, break security codes to gain unauthorized network access, or conduct other destructive activities are sometimes called “crackers.” This type of computer hacking can earn you a trip to a federal prison for up to 20 years.

If you are interested in protecting your home computer against malicious hackers, investing in a good firewall is highly recommended. It’s also a good idea to check your software programs for updates on a regular basis. For example, Microsoft offers a number of free security patches for its Internet Explorer browser.

Saturday, May 29, 2010

THE STUDY OF COMPUTER GRAPHICS

The study of computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

As an academic discipline, computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.

Applications:-

Computational biology, Computational physics, Computer-aided design, Computer simulation, Digital art, Education, Graphic design, Infographics, Information visualization, Rational drug design, Scientific visualization, Video games, Virtual reality, Web design, etc.

Thursday, May 13, 2010

COMPUTER GRAPHICS - IMAGE TYPES

2D Computer Graphics

[Image: Raster graphic sprites (left) and masks (right)]

2D computer graphics are the computer-based generation of digital images, mostly from two-dimensional models such as 2D geometric models, text, and digital images, and by techniques specific to them. The term may stand for the branch of computer science that comprises such techniques, or for the models themselves.

2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, and advertising. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography.

Pixel art

Pixel art is a form of digital art, created through the use of raster graphics software, where images are edited on the pixel level. Graphics in most old (or relatively limited) computer and video games, graphing calculator games, and many mobile phone games are mostly pixel art.

Vector graphics

[Image: Example showing the effect of vector graphics versus raster (bitmap) graphics]

Vector graphics formats are complementary to raster graphics, which represents images as an array of pixels and is typically used for photographic images. There are instances when working with vector tools and formats is best practice, and instances when working with raster tools and formats is best practice. There are also times when the two formats come together. An understanding of the advantages and limitations of each technology, and of the relationship between them, is most likely to result in efficient and effective use of tools.

3D computer graphics

3D computer graphics, in contrast to 2D computer graphics, are graphics that use a three-dimensional representation of geometric data stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing.

Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wireframe model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is visually displayed. Thanks to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.

Computer animation

[Image: An example of computer animation produced using motion capture]

Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low-bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery or computer-generated imaging), especially when used in films.

Virtual entities may contain and be controlled by assorted attributes, such as transform values (location, orientation, scale; see Cartesian coordinate system) stored in an object's transformation matrix. Animation is the change of an attribute over time. Multiple methods of achieving animation exist; the rudimentary form is based on the creation and editing of keyframes, each storing a value at a given time, per attribute to be animated. The 2D/3D graphics software will interpolate between keyframes, creating an editable curve of a value mapped over time, resulting in animation. Other methods of animation include procedural and expression-based techniques: the former consolidates related elements of animated entities into sets of attributes, useful for creating particle effects and crowd simulations; the latter allows an evaluated result returned from a user-defined logical expression, coupled with mathematics, to automate animation in a predictable way (convenient for controlling bone behavior beyond what a hierarchy offers in skeletal system setup).
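
As a minimal sketch of the keyframe approach described above (the function and data layout are illustrative, not any particular package's API), linear interpolation between neighbouring keyframes might look like this:

```python
def interpolate(keyframes, t):
    """Linearly interpolate an attribute value at time t from a list
    of (time, value) keyframes sorted by time."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)   # fraction of the way between the keys
            return v0 + f * (v1 - v0)

# Animate an object's x-position: hold at 0 until frame 10, then move to 5.
keys = [(0, 0.0), (10, 0.0), (30, 5.0)]
print(interpolate(keys, 20))  # 2.5, halfway between the last two keyframes
```

Real packages fit smooth, editable curves (such as splines) through the keys rather than straight lines, but the principle is the same.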

To create the illusion of movement, an image is displayed on the computer screen, then quickly replaced by a new image that is similar to the previous one but shifted slightly. This technique is identical to the way the illusion of movement is achieved in television and motion pictures.

Monday, May 3, 2010

COMPUTER GRAPHICS - CONCEPTS & PRINCIPLES

Image

An image or picture is an artifact that resembles a physical object or person. The term includes two-dimensional objects like photographs and sometimes includes three-dimensional representations. Images are captured by optical devices such as cameras, mirrors, lenses, telescopes, and microscopes, and by natural objects and phenomena, such as the human eye or water surfaces.

A digital image is a representation of a two-dimensional image in binary format as a sequence of ones and zeros. Digital images include both vector images and raster images, but raster images are more commonly used.

In digital imaging, a pixel (or picture element) is a single point in a raster image. Pixels are normally arranged in a regular two-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image; more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel typically has three components, such as red, green, and blue.
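
To make the structure concrete, here is a toy raster image as plain Python data (a sketch, not a real image file format): a grid of rows, with each pixel a (red, green, blue) triple whose components range from 0 to 255.

```python
# A 2x2 raster image: one (red, green, blue) sample per pixel.
image = [
    [(255, 0, 0), (0, 255, 0)],      # row 0: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # row 1: a blue pixel, a white pixel
]

r, g, b = image[1][0]  # the pixel at row 1, column 0
print(r, g, b)         # 0 0 255 -> pure blue
```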

Graphics

Graphics are visual presentations on some surface, such as a wall, canvas, computer screen, paper, or stone, made to brand, inform, illustrate, or entertain. Examples are photographs, drawings, line art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, or other images. Graphics often combine text, illustration, and color. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, website, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely the creation of a distinctive style.

Rendering

Rendering is the process of generating an image from a model by means of computer programs. The model is a description of three-dimensional objects in a strictly defined language or data structure; it contains geometry, viewpoint, texture, lighting, and shading information. The image is a digital image or raster graphics image. The term is used by analogy with an "artist's rendering" of a scene. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output.

3D projection

3D projection is a method of mapping three dimensional points to a two dimensional plane. As most current methods for displaying graphical data are based on planar two dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.
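
As a worked sketch of the simplest case, perspective projection divides each point's x and y coordinates by its depth (assuming a camera at the origin looking down the z-axis, with the image plane at distance d):

```python
def project(x, y, z, d=1.0):
    """Map a 3D point to the 2D image plane by perspective division.
    The camera sits at the origin looking along +z; d is the distance
    from the camera to the image plane. Requires z > 0."""
    return (d * x / z, d * y / z)

# Two points with the same (x, y) but different depths: the farther
# point lands closer to the image centre, producing perspective.
print(project(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project(1.0, 1.0, 4.0))  # (0.25, 0.25)
```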

Ray tracing

Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane. The technique is capable of producing a very high degree of photorealism, usually higher than that of typical scanline rendering methods, but at a greater computational cost.
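
The geometric core of a ray tracer is intersecting rays with scene objects. Below is a sketch of the classic ray-sphere test, which reduces to solving a quadratic for the distance t along the ray (the function name and layout are illustrative):

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Distance t to the nearest hit of the ray origin + t*direction
    against a sphere, or None on a miss. direction must be a unit
    vector; rays starting inside the sphere are ignored for simplicity."""
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (dx * ox + dy * oy + dz * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # the quadratic's a == 1 for a unit direction
    if disc < 0:
        return None               # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# A ray from the origin along +z hits a unit sphere centred at z = 5.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```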

Shading

Shading refers to depicting depth in 3D models or illustrations by varying levels of darkness. It is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading, including cross-hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears; likewise, the farther apart the lines are, the lighter the area appears. In computer graphics, the term has more recently been generalized to refer to the application of shaders.
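
In 3D rendering, the analogous calculation is done per surface point. Here is a minimal sketch of Lambertian (diffuse) shading, one of the simplest shading models: brightness is the cosine of the angle between the surface normal and the direction towards the light, clamped at zero.

```python
def lambert(normal, light_dir):
    """Diffuse brightness in [0, 1]: max(0, n . l), where n is the unit
    surface normal and l the unit direction towards the light."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

print(lambert((0, 0, 1), (0, 0, 1)))          # 1.0: light hits head-on
print(lambert((0, 0, 1), (0.707, 0, 0.707)))  # ~0.707: light at 45 degrees
print(lambert((0, 0, 1), (0, 0, -1)))         # 0.0: light behind the surface
```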

Texture mapping

Texture mapping is a method for adding detail, surface texture, or colour to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974. A texture map is applied (mapped) to the surface of a shape, or polygon; the process is akin to applying patterned paper to a plain white box. Multitexturing is the use of more than one texture at a time on a polygon. Procedural textures and bitmap textures are, generally speaking, common methods of implementing texture definition in a 3D animation program, while the intended placement of textures onto a model's surface often requires a technique known as UV mapping.
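
The lookup at the heart of the technique is simple: UV coordinates in [0, 1] select a texel from the texture image. A nearest-neighbour sketch (names illustrative; real renderers add filtering and wrapping modes):

```python
def sample(texture, u, v):
    """Nearest-neighbour texture lookup: map UV coordinates in [0, 1]
    to a texel in a texture stored as rows of colour values."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

checker = [[0, 255], [255, 0]]    # a 2x2 greyscale checkerboard
print(sample(checker, 0.1, 0.1))  # 0: top-left texel
print(sample(checker, 0.9, 0.1))  # 255: top-right texel
```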

Volume rendering

Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner.

Usually these slices are acquired in a regular pattern and have a regular number of image pixels. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value obtained by sampling the immediate area surrounding the voxel.
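
One simple way to turn such a grid into a 2D projection is maximum intensity projection (MIP): for each output pixel, take the brightest voxel along the viewing ray. A toy sketch with the view direction along the z-axis:

```python
# A tiny 2x2x2 volume: volume[z][y][x] is one voxel's sampled value.
volume = [
    [[1, 2], [3, 4]],  # slice z = 0
    [[9, 0], [5, 6]],  # slice z = 1
]

depth = len(volume)
height, width = len(volume[0]), len(volume[0][0])

# Maximum intensity projection along z: each output pixel is the
# brightest voxel on the ray passing through it.
mip = [[max(volume[z][y][x] for z in range(depth))
        for x in range(width)] for y in range(height)]
print(mip)  # [[9, 2], [5, 6]]
```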

3D modeling

3D modeling is the process of developing a mathematical, wireframe representation of any three-dimensional object, called a "3D model", via specialized software. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D models may be created using multiple approaches: use of NURBS curves to generate accurate and smooth surface patches, polygonal mesh modeling (manipulation of faceted geometry), or polygonal mesh subdivision (advanced tessellation of polygons, resulting in smooth surfaces similar to NURBS models). A 3D model can be displayed as a two-dimensional image through a process called 3D rendering, used in a computer simulation of physical phenomena, or animated directly for other purposes. The model can also be physically created using 3D printing devices.
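
As a sketch of the polygonal-mesh approach named above, a model can be stored as shared vertex positions plus faces that index into the vertex list (the layout is illustrative; real formats also carry normals, UVs, and more):

```python
# A unit square as a minimal polygonal mesh: four shared vertices and
# two triangular faces given as indices into the vertex list.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing the edge 0-2

for a, b, c in faces:
    print(vertices[a], vertices[b], vertices[c])
```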

Friday, April 9, 2010

COMPUTER GRAPHICS - HISTORY


The advance in computer graphics was to come from one MIT student, Ivan Sutherland. In 1961 Sutherland created another computer drawing program called Sketchpad. Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them and even recall them later. The light pen itself had a small photoelectric cell in its tip. This cell emitted an electronic pulse whenever it was placed in front of a computer screen and the screen's electron gun fired directly at it. By simply timing the electronic pulse with the current location of the electron gun, it was easy to pinpoint exactly where the pen was on the screen at any given moment. Once that was determined, the computer could then draw a cursor at that location.

Sutherland seemed to find the perfect solution for many of the graphics problems he faced. Even today, many standards of computer graphics interfaces got their start with this early Sketchpad program. One example of this is in drawing constraints. If one wants to draw a square for example, s/he doesn't have to worry about drawing four lines perfectly to form the edges of the box. One can simply specify that s/he wants to draw a box, and then specify the location and size of the box. The software will then construct a perfect box, with the right dimensions and at the right location. Another example is that Sutherland's software modeled objects - not just a picture of objects. In other words, with a model of a car, one could change the size of the tires without affecting the rest of the car. It could stretch the body of the car without deforming the tires.

These early computer graphics were vector graphics, composed of thin lines, whereas modern-day graphics are raster-based, using pixels. The difference between vector graphics and raster graphics can be illustrated with a shipwrecked sailor. He creates an SOS sign in the sand by arranging rocks in the shape of the letters "SOS." He also has some brightly colored rope, with which he makes a second "SOS" sign by arranging the rope in the shapes of the letters. The rock SOS sign is similar to raster graphics: every pixel has to be individually accounted for. The rope SOS sign is equivalent to vector graphics: the computer simply sets the starting point and ending point for the line and perhaps bends it a little between the two end points. The disadvantages of vector files are that they cannot represent continuous-tone images and they are limited in the number of colors available. Raster formats, on the other hand, work well for continuous-tone images and can reproduce as many colors as needed.
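
In code terms, the sailor's two signs for a single diagonal stroke might look like this (a toy sketch):

```python
# Vector: only the endpoints are stored; the renderer draws the stroke
# on demand, at any resolution -- like the rope, one continuous piece.
vector_line = {"start": (0, 0), "end": (4, 4)}

# Raster: every pixel is stored explicitly (1 = ink, 0 = blank) --
# like the rocks, each one individually placed.
raster_line = [
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
]
```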

Also in 1961, another student at MIT, Steve Russell, created the first video game, Spacewar. Written for the DEC PDP-1, Spacewar was an instant success, and copies started flowing to other PDP-1 owners; eventually even DEC got a copy. The engineers at DEC used it as a diagnostic program on every new PDP-1 before shipping it. The sales force picked up on this quickly enough and, when installing new units, would run the world's first video game for their new customers.

E. E. Zajac, a scientist at Bell Telephone Laboratory (BTL), created a film called "Simulation of a Two-Gyro Gravity Attitude Control System" in 1963. In this computer-generated film, Zajac showed how the attitude of a satellite could be altered as it orbits the Earth. He created the animation on an IBM 7090 mainframe computer. Also at BTL, Ken Knowlton, Frank Sinden and Michael Noll started working in the computer graphics field. Sinden created a film called Force, Mass and Motion illustrating Newton's laws of motion in operation. Around the same time, other scientists were creating computer graphics to illustrate their research. At Lawrence Radiation Laboratory, Nelson Max created the films "Flow of a Viscous Fluid" and "Propagation of Shock Waves in a Solid Form." Boeing Aircraft created a film called "Vibration of an Aircraft."

It wasn't long before major corporations started taking an interest in computer graphics. TRW, Lockheed-Georgia, General Electric and Sperry Rand are among the many companies that were getting started in computer graphics by the mid-1960s. IBM was quick to respond to this interest by releasing the IBM 2250 graphics terminal, the first commercially available graphics computer.

Ralph Baer, a supervising engineer at Sanders Associates, came up with a home video game in 1966 that was later licensed to Magnavox and called the Odyssey. While very simplistic, and requiring fairly inexpensive electronic parts, it allowed the player to move points of light around on a screen. It was the first consumer computer graphics product.

Also in 1966, Sutherland at MIT invented the first computer controlled head-mounted display (HMD). Called the Sword of Damocles because of the hardware required for support, it displayed two separate wireframe images, one for each eye. This allowed the viewer to see the computer scene in stereoscopic 3D. After receiving his Ph.D. from MIT, Sutherland became Director of Information Processing at ARPA (Advanced Research Projects Agency), and later became a professor at Harvard.

Dave Evans was director of engineering at Bendix Corporation's computer division from 1953 to 1962, after which he worked for the next five years as a visiting professor at Berkeley. There he continued his interest in computers and how they interfaced with people. In 1968 the University of Utah recruited Evans to form a computer science program, and computer graphics quickly became his primary interest. This new department would become the world's primary research center for computer graphics.


In 1967 Sutherland was recruited by Evans to join the computer science program at the University of Utah. There he perfected his HMD; twenty years later, NASA would re-discover his techniques in their virtual reality research. At Utah, Sutherland and Evans were highly sought-after consultants by large companies, but they were frustrated at the lack of graphics hardware available at the time, so they started formulating a plan to start their own company.

A student by the name of Ed Catmull started at the University of Utah in 1970 and signed up for Sutherland's computer graphics class. Catmull had just come from The Boeing Company and had been working on his degree in physics. Having grown up on Disney, Catmull loved animation, yet quickly discovered that he didn't have the talent for drawing. Now Catmull (along with many others) saw computers as the natural progression of animation, and he wanted to be part of the revolution. The first animation that Catmull saw was his own: he created an animation of his hand opening and closing. It became one of his goals to produce a feature-length motion picture using computer graphics. In the same class, Fred Parke created an animation of his wife's face. Because of Evans's and Sutherland's presence, UU was gaining quite a reputation as the place to be for computer graphics research, and it was that reputation that had drawn Catmull there to learn 3D animation.

The UU computer graphics laboratory was attracting people from all over; John Warnock was one of those early pioneers. He would later found Adobe Systems and create a revolution in the publishing world with his PostScript page description language. Tom Stockham led the image processing group at UU, which worked closely with the computer graphics lab. Jim Clark was also there; he would later found Silicon Graphics, Inc.

The first major advance in 3D computer graphics was created at UU by these early pioneers: the hidden-surface algorithm. In order to draw a representation of a 3D object on the screen, the computer must determine which surfaces are "behind" the object from the viewer's perspective, and thus should be "hidden" when the computer creates (or renders) the image.
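
One widely used descendant of that work is the depth-buffer (z-buffer) test, a hidden-surface technique that keeps, per pixel, the nearest depth drawn so far and discards any fragment that lies behind it. A sketch:

```python
# Depth-buffer hidden-surface sketch: each pixel keeps only the
# fragment nearest to the viewer.
W, H = 4, 3
depth = [[float("inf")] * W for _ in range(H)]  # nearest depth so far
color = [[None] * W for _ in range(H)]

def draw(x, y, z, c):
    """Draw fragment c at pixel (x, y) only if it is in front of
    whatever has already been drawn there."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

draw(1, 1, 5.0, "far surface")
draw(1, 1, 2.0, "near surface")    # closer: overwrites the far surface
draw(1, 1, 9.0, "hidden surface")  # farther: discarded
print(color[1][1])                 # near surface
```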

Friday, April 2, 2010

COMPUTER GRAPHICS - DEFINITION


Computer graphics are graphics created using computers and, more generally, the representation and manipulation of image data by a computer. The development of computer graphics, often referred to simply as CG, has made computers easier to interact with, and better for understanding and interpreting many types of data. Developments in computer graphics have had a profound impact on many types of media and have revolutionized the animation, movie and video game industries.
The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". Typically, the term refers to several different things:

  • the representation and manipulation of image data by a computer,
  • the various technologies used to create and manipulate images,
  • the images so produced, and
  • the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content.
Computers and computer-generated images touch many aspects of our daily life. Computer imagery is found on television and in newspapers. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, such graphs are used to illustrate papers, reports, and other presentation material.

Many powerful tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: 2D, 3D, and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have developed, like information visualization and scientific visualization, the latter more concerned with the visualization of three-dimensional phenomena, where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic component.

Friday, March 19, 2010

COMPUTER VIRUS - RECOVERY METHODS

RECOVERY METHODS:
Once a computer has been compromised by a virus, it is usually unsafe to continue using the same computer without completely reinstalling the operating system. However, a number of recovery options exist after a computer has a virus. These options depend on the severity of the virus.

1. VIRUS REMOVAL:
One possibility on Windows Me, Windows XP, Windows Vista and Windows 7 is a tool known as System Restore, which restores the registry and critical system files to a previous checkpoint. Often a virus will cause restore points from the same day to be corrupt; restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not exist in previous restore points. Some viruses, however, disable System Restore and other important tools such as Task Manager and Command Prompt. An example of a virus that does this is CiaDoor. Such a virus can, however, be thwarted if the user restarts the computer in safe mode and then uses the necessary tools, such as System Restore.
Administrators have the option to disable such tools for limited users for various reasons. A virus can modify the registry to do the same, except that it blocks all users from accessing the tools, even when the Administrator is controlling the computer. When an infected tool activates, it displays the message "Task Manager has been disabled by your administrator", even if the user trying to open the program is the administrator.
Users running a Microsoft operating system can access Microsoft's website to run a free scan, provided they have their 20-digit registration number.

2. OPERATING SYSTEM REINSTALLATION:
Reinstalling the operating system is another approach to virus removal. It involves simply reformatting the computer's hard drive and installing the OS from its original media, or restoring the partition from a clean backup image.
This method has the benefits of being simple to do, being faster than running multiple anti-virus scans, and being guaranteed to remove any malware. Downsides include having to reinstall all other software, reconfiguring settings, and restoring user preferences. User data can be backed up by booting off a live CD or putting the hard drive into another computer and booting from that computer's operating system.
Care must be taken when restoring anything from an infected system to avoid transferring the virus to the new computer along with the restored data.

Sunday, March 7, 2010

COMPUTER - VIRUS: HOW TO AVOID 2

STEALTH

Some viruses try to trick anti-virus software by intercepting its requests to the operating system. A virus can hide itself by intercepting the anti-virus software's request to read the file and passing the request to the virus, instead of the OS. The virus can then return an uninfected version of the file to the anti-virus software, so that the file seems "clean". Modern anti-virus software employs various techniques to counter stealth mechanisms of viruses. The only completely reliable method to avoid stealth is to boot from a medium that is known to be clean.

1. Self-Modification:
Most modern anti-virus programs try to find virus patterns inside ordinary programs by scanning them for so-called virus signatures. A signature is a characteristic byte pattern that is part of a certain virus or family of viruses. If a virus scanner finds such a pattern in a file, it notifies the user that the file is infected. The user can then delete, or in some cases heal, the infected file. Some viruses employ techniques that make detection by means of signatures difficult, though probably not impossible: these viruses modify their code on each infection, so that each infected file contains a different variant of the virus.
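
A toy sketch of the signature-scanning idea (real engines use large signature databases and far faster matching algorithms; the signature bytes here are made up):

```python
# Toy signature scanner: report which known byte patterns a file contains.
SIGNATURES = {
    "ExampleVirus.A": bytes.fromhex("deadbeef0042"),  # made-up pattern
}

def scan(path):
    """Return the names of all signatures found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]

# matches = scan("suspect.exe")  # hypothetical file to check
```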

2. Encryption with a variable key:
A more advanced method is the use of simple encryption to encipher the virus. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is in fact entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that its presence may be reason for virus scanners to at least flag the file as suspicious.
An old but compact encryption involves XORing each byte in a virus with a constant, so that the exclusive-or operation need only be repeated for decryption. It is suspicious for code to modify itself, so the code that does the encryption/decryption may be part of the signature in many virus definitions.
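
The appeal of XOR here is that the same tiny routine both enciphers and deciphers, because XOR with a constant is its own inverse. The property is easy to demonstrate:

```python
def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte constant key."""
    return bytes(b ^ key for b in data)

plain = b"example payload"
cipher = xor_bytes(plain, 0x5A)          # encipher
print(cipher != plain)                   # True: the bytes are scrambled
print(xor_bytes(cipher, 0x5A) == plain)  # True: XOR again restores them
```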

3. Polymorphic code:

Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using signatures. Anti-virus software can detect it by decrypting the viruses using an emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to have a polymorphic engine somewhere in its encrypted body. See polymorphic code for technical detail on how such engines operate.
Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for anti-virus professionals to obtain representative samples of the virus, because bait files that are infected in one run will typically contain identical or similar samples of the virus. This makes it more likely that the detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection.

4. Metamorphic Code:
To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that utilize this technique are said to be metamorphic. To enable metamorphism, a metamorphic engine is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of which is part of the metamorphic engine.

Wednesday, March 3, 2010

COMPUTER VIRUS - HOW TO AVOID 1

In order to avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the MS-DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool anti-virus software, however, especially software which maintains and dates cyclic redundancy checks on file changes.

Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or Chernobyl virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file.


Some viruses try to avoid detection by killing the tasks associated with anti-virus software before it can detect them.


As computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permissions for every kind of file access.


Avoiding bait files and other undesirable hosts:


A virus needs to infect hosts in order to spread further. In some cases, it might be a bad idea to infect a host program. For example, many anti-virus programs perform an integrity check of their own code. Infecting such programs will therefore increase the likelihood that the virus is detected. For this reason, some viruses are programmed not to infect programs that are known to be part of anti-virus software. Another type of host that viruses sometimes avoid is bait files. Bait files are files that are specially created by anti-virus software, or by anti-virus professionals themselves, to be infected by a virus. These files can be created for various reasons, all of which are related to the detection of the virus:

Anti-virus professionals can use bait files to take a sample of a virus. It is more practical to store and exchange a small infected bait file than a large application program that has been infected by the virus.
Anti-virus professionals can use bait files to study the behavior of a virus and evaluate detection methods. This is especially useful when the virus is polymorphic: in this case, the virus can be made to infect a large number of bait files, and the infected files can be used to test whether a virus scanner detects all versions of the virus.
Some anti-virus software employs bait files that are accessed regularly. When these files are modified, the anti-virus software warns the user that a virus is probably active on the system.
Since bait files are used to detect the virus, or to make detection possible, a virus can benefit from not infecting them. Viruses typically do this by avoiding suspicious programs, such as small program files or programs that contain certain patterns of 'garbage instructions'.

A related strategy to make baiting difficult is sparse infection. Sometimes, sparse infectors do not infect a host file that would be a suitable candidate for infection in other circumstances. For example, a virus can decide on a random basis whether to infect a file or not, or a virus can only infect host files on particular days of the week.

Monday, March 1, 2010

COMPUTER VIRUS - INFECTION STRATEGIES

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user attempts to launch an infected program, the virus's code may be executed simultaneously. Viruses can be divided into two types based on their behavior when they are executed. Non-resident viruses immediately search for other hosts that can be infected, infect those targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

1. Non-Resident Viruses

Non-resident viruses can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file.

2. Resident Viruses

Resident viruses contain a replication module that is similar to the one employed by non-resident viruses. This module, however, is not called by a finder module. Instead, the virus loads the replication module into memory when it is executed and ensures that this module is executed each time the operating system is called to perform a certain operation. The replication module can be called, for example, each time the operating system executes a file. In this case the virus infects every suitable program that is executed on the computer.

Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. A fast infector, for instance, can infect every potential host file that is accessed. This poses a special problem when using anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory, the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the virus may slow down a computer or perform many suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect hosts infrequently. Some slow infectors, for instance, only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions; they are less likely to slow down a computer noticeably and will, at most, infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector approach, however, does not seem very successful.


Saturday, February 20, 2010

COMPUTER VIRUS - HISTORY

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s. Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1970. Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system, where the message "I'm the creeper, catch me if you can!" was displayed. The Reaper program was created to delete Creeper.

A program called "Elk Cloner" was the first computer virus to appear "in the wild" - that is, outside the single computer or lab where it was created. Written in 1981 by Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk. The virus, created as a practical joke when Skrenta was still in high school, was injected into a game on a floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the computer and displaying a short poem beginning "Elk Cloner: The program with a personality."

The first PC virus in the wild was a boot sector virus dubbed (c)Brain, created in 1986 by the Farooq Alvi brothers in Lahore, Pakistan, reportedly to deter piracy of the software they had written. However, analysts have claimed that the Ashar virus, a variant of Brain, possibly predated it, based on code within the virus.

Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. PCs of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy and boot sector viruses were the most common in the wild for many years.

Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in bulletin board systems (BBSs), modem use, and software sharing. Bulletin-board-driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs. Within the "pirate scene" of hobbyists trading illicit copies of retail software, traders in a hurry to obtain the latest applications were easy targets for viruses.

Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Word and Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected e-mail, those viruses which did took advantage of the Microsoft Outlook COM interface.

Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two, and would likely be detected as a virus unique from the "parents".

A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the recipient, thinking the link is from a friend, follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.

Viruses that spread using cross-site scripting were first reported in 2002, and were academically demonstrated in 2005. There have been multiple instances of cross-site scripting viruses in the wild, exploiting websites such as MySpace and Yahoo.