Friday, 8 April 2011

ASSIGNMENT 5: My Preferred OS


INSTALLATION OF UBUNTU

1)      Download the Ubuntu ISO from http://www.ubuntu.com/getubuntu/download and save to your desktop.
2)      Burn the ISO image to a blank CD using Roxio CD Creator or a similar tool.


3)      Run the CD from “My Computer”. The CD should ask for permission to run, at which point you’ll see an options screen.

           
4)      Now configure your installation using the simple settings options. You can specify the location of the Ubuntu installation on your Windows partition, the size of the Ubuntu installation, the Ubuntu flavour (Ubuntu, Kubuntu, Xubuntu, etc.), your preferred language, and a username and password for the Ubuntu system.

5)      Format your USB stick with a FAT32 partition from Windows. You can reach the Format dialog by opening My Computer and right-clicking the removable drive icon. Click “Format” and choose FAT32. You need a USB stick of at least 2 GB.

6)      The new version of Ubuntu isn’t in the distribution list supplied with UNetbootin yet, so use the Ubuntu ISO downloaded earlier. Add the ISO using the “Diskimage” option, make sure your USB drive is selected below, and click OK.
7)      The ISO transfers to the USB pretty quickly, so soon after you click OK you’ll see a progress screen.

8)       That’s it – when the installation process is complete, restart your computer and make sure it’s set up to boot from USB. On my HP laptop, pressing F9 on the boot screen shows a boot-order menu. Selecting “USB Hard Drive” leads to a black screen, then an Ubuntu logo, and finally your new Ubuntu desktop.
9)       Click “Install” on the live desktop (top left):

10)  Choose your language in the welcome screen

11)  Choose your location

12)  Choose your keyboard layout

13)  Set up your disk partition. This is probably the most “technical” part of the installation; the “use the largest continuous free space” option works nicely.

14)  Choose your username and password:

15)  Migrate your Windows documents and settings

16)  You’re now ready to begin the installation.

17)  When the installation has finished, restart your computer (you’ll be instructed to remove your CD-ROM or USB drive). You’re now ready to begin using Ubuntu!
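Before burning the ISO (step 2) or writing it to the USB stick, it’s worth verifying that the download from step 1 is intact by comparing its SHA-256 hash against the checksum published alongside the release. A minimal Python sketch (the file name ubuntu.iso is an assumption):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MB chunks so a large ISO never has to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the value listed in the SHA256SUMS file for your release:
# expected = "..."  # copied from the Ubuntu download page
# assert sha256_of("ubuntu.iso") == expected
```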


ASSIGNMENT 3: FILE MANAGEMENT



File storage management
A file management system is the set of system software that provides services to users and applications related to the use of files. Typically, the only way that a user or application may access files is through the file management system. This relieves the user or programmer of the necessity of developing special-purpose software for each application and provides the system with a means of controlling its most important asset. [GROS86] suggests the following objectives for a file management system:
    To meet the data-management needs and requirements of the user, which include storage of data and the ability to perform the operations listed earlier
    To guarantee, to the extent possible, that the data in the file are valid
    To optimize performance, both from the system point of view in terms of overall throughput and from the user’s point of view in terms of response time
    To provide I/O support for a variety of types of storage device
    To minimize or eliminate the potential for lost or destroyed data
    To provide a standardized set of I/O interface routines
    To provide I/O support for multiple users in the case of multiple-user systems

CONCEPT AND DESIGN
Computers can store information on several different storage media, such as magnetic disks, magnetic tapes, and optical disks. So that computer systems will be convenient to use, the operating system provides a uniform logical view of information storage. The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. Files are mapped, by the operating system, onto physical devices. These storage devices are usually nonvolatile, so the contents are persistent through power failures and system reboots.

FILE DIRECTORY

a)      Single-level directory

The simplest directory structure is the single-level directory. All files are contained in the same directory, which is easy to support and understand. A single-level directory has significant limitations, however, when the number of files increases or when there is more than one user. Since all files are in the same directory, they must have unique names. If two users call their data file test, the unique-name rule is violated.
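The unique-name limitation shows up directly if we model a single-level directory as one flat table shared by everyone (a toy sketch, not how a real file system stores its directory):

```python
# One flat name -> contents table shared by all users of the system.
directory = {}

def create(name, data):
    if name in directory:
        # Every file lives in the same namespace, so names must be unique.
        raise FileExistsError(f"'{name}' already exists")
    directory[name] = data

create("test", b"first user's data")
try:
    create("test", b"second user's data")   # a second user picks the same name
except FileExistsError as e:
    print("collision:", e)
```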




b)      Two-level directory

The major disadvantage of a single-level directory is the confusion of file names between different users. The standard solution is to create a separate directory for each user.
In the two-level directory structure, each user has her own user file directory (UFD). Each UFD has a similar structure, but lists only the files of a single user. When a user job starts or a user logs in, the system's master file directory (MFD) is searched. The master file directory is indexed by user name or account number, and each entry points to the UFD for that user. When a user refers to a particular file, only his own UFD is searched. Thus different users may have files with the same name, as long as all the file names within each UFD are unique.
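The MFD/UFD lookup described above can be sketched with two levels of tables (a simplified model; real directory entries also hold metadata such as protection bits and block addresses):

```python
# The MFD maps each user to a UFD; each UFD maps file names to contents.
mfd = {
    "alice": {"test": b"alice's data"},
    "bob":   {"test": b"bob's data"},   # same file name, different UFD: no clash
}

def lookup(user, name):
    ufd = mfd[user]      # the MFD is indexed by user name
    return ufd[name]     # only this user's own UFD is searched

assert lookup("alice", "test") == b"alice's data"
assert lookup("bob", "test") == b"bob's data"
```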

c)      Multilevel directory

Once we have seen how to view a two-level directory as a two-level tree, the natural generalization is to extend the directory structure to a tree of arbitrary height. This generalization allows users to create their own subdirectories and to organize their files accordingly. The MS-DOS system, for instance, is structured as a tree. In fact, a tree is the most common directory structure. The tree has a root directory, and every file in the system has a unique path name. A path name is the path from the root through all the subdirectories to a specified file.
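Path-name resolution in a tree-structured directory simply walks one component at a time from the root. A sketch using nested dictionaries for directories (paths and contents are invented for the example):

```python
# Directories are dicts; files are bytes at the leaves.
root = {
    "home": {"user": {"notes.txt": b"hello"}},
    "etc":  {"hosts": b"127.0.0.1 localhost"},
}

def resolve(path):
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]          # descend one level per path component
    return node

assert resolve("/home/user/notes.txt") == b"hello"
```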

Linking Block

[diagram of linked block allocation]

FILE MAP

[diagram of the file map]

BLOCK INDEX

[diagram of the block index]

Thursday, 24 March 2011

ASSIGNMENT 4: My Protection & Security


 SECURITY POLICY AND MECHANISM

The terms security and protection are often used interchangeably. Nevertheless, it is frequently useful to make a distinction between the general problems involved in making sure that files are not read or modified by unauthorized persons (which include technical, managerial, legal, and political issues) on the one hand, and the specific operating system mechanisms used to provide security on the other. To avoid confusion, we will use the term security to refer to the overall problem, and the term protection mechanisms to refer to the specific operating system mechanisms used to safeguard information in the computer. The boundary between them is not well defined, however. A more interesting problem is what to do about intruders. These come in two varieties. Passive intruders just want to read files they are not authorized to read. Active intruders are more malicious; they want to make unauthorized changes to data.

AUTHENTICATION CONCEPTS
a)      Password
Passwords are often used to protect objects in the computer system in the absence of a more complete protection scheme. They can be considered a special case of either keys or capabilities. For instance, a password could be associated with each resource, such as a file. Whenever a request is made to use the resource, the password must be given. If the password is correct, access is granted. Different passwords may be associated with different access rights. For example, different passwords may be used for reading, appending, and updating a file.
           
b)      Artifact
A completely different approach to authentication is to check whether the user has some item, normally a plastic card with a magnetic stripe on it. The card is inserted into the terminal, which then checks whose card it is. This method can be combined with a password, so a user can only log in if he:
1. has the card
2. knows the password

Automated cash-dispensing machines usually work this way. Another technique is signature analysis. The user signs his name with a special pen connected to the terminal, and the computer compares it to a known specimen stored online. Even better is to compare not the signature itself but the pen motions made while writing it. A good forger may be able to copy the signature, but will not have a clue as to the exact order in which the strokes were made.

PROTECTION CONCEPTS AND ACCESS CONTROL

  •  Protection is concerned with keeping data safe from improper or unauthorized access and from physical damage. In one case, faulty memory resulted in disk data being corrupted; technicians replaced disk after disk while the problem persisted, until someone decided to swap the memory as part of exploring the error.
  • We can control access to files, specifying who may read, write, execute, delete, and list files, and how.
  • Access control has a number of strategies:
    • An access control list (ACL) specifies user names or groups, and the types of access allowed.
    • Associate passwords and access controls (read-only, modify with tracked changes) with each file.
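An ACL check of the kind listed above might look like the sketch below (the users, groups, and operations are invented for the example):

```python
# ACL: principal (user or group) -> set of permitted operations on a file.
acl = {
    "alice": {"read", "write", "delete"},
    "staff": {"read"},
}
group_membership = {"bob": {"staff"}}

def allowed(user, op):
    if op in acl.get(user, set()):
        return True                          # direct user entry
    return any(op in acl.get(g, set())       # or inherited via a group entry
               for g in group_membership.get(user, set()))

assert allowed("alice", "write")
assert allowed("bob", "read")                # granted through the 'staff' group
assert not allowed("bob", "write")
```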





ASSIGNMENT 2: INPUT/OUTPUT MANAGEMENT

Input Output Procedure

A computer system uses a device controller to facilitate the transfer of information between the device and the CPU. A complex controller, such as the Small Computer System Interface (SCSI), may permit several I/O devices to be connected simultaneously.

BUFFERING
A)    SINGLE BUFFER
The simplest type of support that the operating system can provide is single buffering. When a user process issues an I/O request, the operating system assigns a buffer in the system portion of main memory to the operation. Input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block. This approach generally provides a speedup compared with the absence of system buffering: the user process can be processing one block of data while the next block is being read in.


The operating system is able to swap the process out because the input operation is taking place into system memory rather than into user process memory. This technique does, however, complicate the logic in the operating system. 

B)    DOUBLE BUFFER
An improvement over single buffering can be had by assigning two system buffers to the operation. A process now transfers data to (or from) one buffer while the operating system empties (or fills) the other.


It is therefore possible to keep a block-oriented device going at full speed when the process can consume each block faster than the device can transfer the next one; and when processing a block takes longer than transferring one, double buffering still ensures that the process will not have to wait for I/O. In either case, an improvement over single buffering is achieved.


Again, this improvement comes at the cost of increased complexity. For stream-oriented input, we are again faced with the two alternative modes of operation. For line-at-a-time I/O, the user process need not be suspended for input or output unless the process runs ahead of the double buffers.
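The double-buffer idea can be sketched with a bounded queue of two buffers: the "device" fills one buffer while the process drains the other, and either side waits only when it gets two buffers ahead. (A toy model; a real OS does this with interrupt handlers, not threads.)

```python
import threading
import queue

filled = queue.Queue(maxsize=2)   # at most two buffers in flight: double buffering

def device(blocks):
    """Plays the role of the I/O device filling system buffers."""
    for block in blocks:
        filled.put(block)         # blocks only if both buffers are already full
    filled.put(None)              # end-of-input marker

def process(out):
    """Plays the role of the user process consuming completed buffers."""
    while (block := filled.get()) is not None:
        out.append(block)         # "process" this block while the device refills

out = []
t = threading.Thread(target=device, args=([b"block0", b"block1", b"block2"],))
t.start()
process(out)
t.join()
assert out == [b"block0", b"block1", b"block2"]
```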
SPOOLING
Spooling is a way of dealing with dedicated I/O devices in a multiprogramming system. Instead of giving processes direct access to the device, a special process, called a daemon, and a special directory, called a spooling directory, are created. Spooling is not only used for printers; it is also used in other situations. For example, file transfer over a network often uses a network daemon.


To send a file somewhere, a user puts it in the network spooling directory. Later, the network daemon takes it out and transmits it. One such network consists of thousands of machines around the world communicating by dial-up telephone lines and many computer networks.
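The spool-then-drain cycle above can be sketched with an ordinary directory (the job names and the temporary directory are illustrative):

```python
import os
import tempfile

spool = tempfile.mkdtemp()            # the spooling directory

def submit(name, data):
    """What an application does: drop a file into the spool and move on."""
    with open(os.path.join(spool, name), "wb") as f:
        f.write(data)

def daemon_pass():
    """What the daemon does later: take each file out and 'transmit' it."""
    sent = []
    for name in sorted(os.listdir(spool)):
        path = os.path.join(spool, name)
        with open(path, "rb") as f:
            sent.append((name, f.read()))
        os.remove(path)               # the job leaves the spool once handled
    return sent

submit("job1", b"print me")
assert daemon_pass() == [("job1", b"print me")]
assert os.listdir(spool) == []        # the spool is empty again
```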





ASSIGNMENT 1: MEMORY MANAGEMENT


MEMORY MANAGEMENT

Memory management is vital in a multiprogramming system. If only a few processes are in memory, then for much of the time all of the processes will be waiting for input/output and the processor will be idle. Thus, memory needs to be allocated efficiently to pack as many processes into memory as possible.

OBJECTIVES

a) Relocation
b) Protection
c) Sharing
d) Logical organization
e) Physical organization

VIRTUAL MEMORY

Virtual Memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

PAGING

Paging is one of the memory management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.
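Address translation under paging splits a virtual address into a page number and an offset; the page table then supplies the frame, which may sit anywhere in physical memory. A sketch assuming a 4 KB page size:

```python
PAGE_SIZE = 4096                       # 4 KB pages

def translate(virtual_addr, page_table):
    page = virtual_addr // PAGE_SIZE   # which page the address falls in
    offset = virtual_addr % PAGE_SIZE  # position within that page
    frame = page_table[page]           # frames need not be contiguous
    return frame * PAGE_SIZE + offset

# Page 0 lives in frame 5, page 1 in frame 2: a noncontiguous physical layout.
page_table = {0: 5, 1: 2}
assert translate(100, page_table) == 5 * PAGE_SIZE + 100
assert translate(PAGE_SIZE + 7, page_table) == 2 * PAGE_SIZE + 7
```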

SEGMENTATION

Segmentation is similar to dynamic partitioning. In the absence of an overlay scheme or the use of virtual memory, it would require that all of a program’s segments be loaded into memory for execution. The difference, compared with dynamic partitioning, is that with segmentation a program may occupy more than one partition, and these partitions need not be contiguous. Segmentation eliminates internal fragmentation but, like dynamic partitioning, suffers from external fragmentation. However, because a process is broken up into a number of smaller pieces, the external fragmentation should be less.

MEMORY RELOCATION POLICY

In systems with virtual memory, programs in memory must be able to reside in different parts of memory at different times, because when a program is swapped back into memory after being swapped out for a while it cannot always be placed in the same location. The virtual memory management unit must also deal with concurrency. Memory management in the operating system must therefore be able to relocate programs in memory and handle memory references and addresses in the program's code so that they always point to the right location in memory.

MEMORY PLACEMENT POLICY

-         Best fit: the allocator places a process in the smallest block of unallocated memory in which it will fit.
Problems:
-          It requires an expensive search of the entire free list to find the best hole.
-          More importantly, it leads to the creation of lots of little holes that are not big enough to satisfy any request. This situation is called fragmentation, and it is a problem for all memory-management strategies, although it is particularly bad for best fit.
-          Solution: one way to avoid making little holes is to give the client a bigger block than it asked for. For example, we might round all requests up to the next multiple of 64 bytes. That doesn't make the fragmentation go away; it just hides it.
-          Unusable space in the form of holes is called external fragmentation.

-         Worst fit: the memory manager places a process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the hole created as a result of external fragmentation.

-         First fit: another strategy is first fit, which simply scans the free list until a large enough hole is found. Despite the name, first fit is generally better than best fit because it leads to less fragmentation.
Problems:
-          Small holes tend to accumulate near the beginning of the free list, making the memory allocator search farther and farther each time.

-         Next fit: the first-fit approach tends to fragment the blocks near the beginning of the list without considering blocks further down the list. Next fit, a variant of first fit, solves the problem of accumulating small holes by starting each search where the last one left off, wrapping around to the beginning when the end of the list is reached (a form of one-way elevator).
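The placement strategies above differ only in which hole they pick from the free list. A sketch with holes given as (start, size) pairs:

```python
def first_fit(free_list, size):
    """Scan from the front; take the first hole big enough."""
    return next((h for h in free_list if h[1] >= size), None)

def best_fit(free_list, size):
    """Search the whole list; take the smallest hole big enough."""
    candidates = [h for h in free_list if h[1] >= size]
    return min(candidates, key=lambda h: h[1], default=None)

free_list = [(0, 100), (200, 20), (300, 50)]
assert first_fit(free_list, 30) == (0, 100)   # first adequate hole wins
assert best_fit(free_list, 30) == (300, 50)   # tightest adequate hole wins
assert best_fit(free_list, 500) is None       # no hole is large enough
```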

PAGE REPLACEMENT POLICY

Least Recently Used (LRU):
  • Removes the page least recently accessed.
  • Efficiency: causes either a decrease in, or the same number of, page interrupts.
  • Slightly better than FIFO: 8/11 or 73%.
  • LRU is a stack algorithm removal policy: increasing main memory will cause either a decrease in, or the same number of, page interrupts.
  • Does not experience the FIFO anomaly.

Two variations:
  • Clock replacement technique: paced according to the computer’s clock cycle.
  • Bit-shifting technique: uses an 8-bit reference byte and bit shifting to track the usage of each page currently in memory.

First In First Out (FIFO):
  • Removes the page that has been in memory the longest.
  • Efficiency: the ratio of page interrupts to page requests.
  • FIFO example: not so good; efficiency is 9/11 or 82%.

FIFO anomaly:
  • More memory does not always lead to better performance.
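Both policies, and the FIFO anomaly itself, are easy to demonstrate. The sketch below counts page faults ("interrupts") for a reference string; on the classic string 1 2 3 4 1 2 5 1 2 3 4 5, FIFO faults more often with four frames than with three, while LRU only improves with more memory:

```python
from collections import OrderedDict

def count_faults(refs, frames, lru=False):
    """Count page faults under FIFO (default) or LRU replacement."""
    mem = OrderedDict()                  # insertion order doubles as eviction order
    faults = 0
    for page in refs:
        if page in mem:
            if lru:
                mem.move_to_end(page)    # LRU: a hit refreshes the page's recency
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)  # evict oldest (FIFO) or least recent (LRU)
            mem[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert count_faults(refs, 3) == 9                # FIFO with three frames
assert count_faults(refs, 4) == 10               # FIFO anomaly: more frames, more faults
assert count_faults(refs, 4, lru=True) <= count_faults(refs, 3, lru=True)  # LRU never worse
```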

Wednesday, 23 February 2011


7.1 Explain protection and security concepts.

Sharing of programs and data among the users of a computer system necessitates a strong emphasis on protection and security measures in an OS. Both protection and security imply guarding against intrusion in an OS. However, in keeping with the convention followed in the OS literature, a distinction is made between two types of intrusion.

7.1 Security policy and mechanism
The terms security and protection are often used interchangeably. Nevertheless, it is frequently useful to make a distinction between the general problems involved in making sure that files are not read or modified by unauthorized persons (which include technical, managerial, legal, and political issues) on the one hand, and the specific operating system mechanisms used to provide security on the other. To avoid confusion, we will use the term security to refer to the overall problem, and the term protection mechanisms to refer to the specific operating system mechanisms used to safeguard information in the computer. The boundary between them is not well defined, however.
A more interesting problem is what to do about intruders. These come in two varieties. Passive intruders just want to read files they are not authorized to read. Active intruders are more malicious; they want to make unauthorized changes to data.
7.2 Elaborate on authentication basics.
7.21 Password
Passwords are often used to protect objects in the computer system in the absence of a more complete protection scheme. They can be considered a special case of either keys or capabilities. For instance, a password could be associated with each resource, such as a file. Whenever a request is made to use the resource, the password must be given. If the password is correct, access is granted. Different passwords may be associated with different access rights. For example, different passwords may be used for reading, appending, and updating a file.

7.22 Artifact
A completely different approach to authentication is to check whether the user has some item, normally a plastic card with a magnetic stripe on it. The card is inserted into the terminal, which then checks whose card it is. This method can be combined with a password, so a user can only log in if he:
1. has the card
2. knows the password
Automated cash-dispensing machines usually work this way. Another technique is signature analysis. The user signs his name with a special pen connected to the terminal, and the computer compares it to a known specimen stored online. Even better is to compare not the signature itself but the pen motions made while writing it. A good forger may be able to copy the signature, but will not have a clue as to the exact order in which the strokes were made.


7.23 BIOMETRICS
Yet another approach is to measure physical characteristics that are hard to forge. For example, a fingerprint or voiceprint reader in the terminal could verify the user's identity. (It makes the search go faster if the user tells the computer who he is, rather than making the computer compare the given fingerprint to the entire database.)
Finger-length analysis is surprisingly practical. When it is used, each terminal has a device into which the user inserts his hand; the lengths of all his fingers are measured and checked against the database.


7.3 Elaborate on protection concepts and access control.
  •  Protection is concerned with keeping data safe from improper or unauthorized access and from physical damage. In one case, faulty memory resulted in disk data being corrupted; technicians replaced disk after disk while the problem persisted, until someone decided to swap the memory as part of exploring the error.
  • We can control access to files, specifying who may read, write, execute, delete, and list files, and how.
  • Access control has a number of strategies:
    • An access control list (ACL) specifies user names or groups, and the types of access allowed.
    • Associate passwords and access controls (read-only, modify with tracked changes) with each file.
System DOS
·         MS-DOS is a single-tasking operating system, which means that it can run only one program at a time. The MS-DOS user interface is a command-line interface, which means that users must type text-based commands and responses when interacting with the operating system.
·         MS-DOS treats each separate program and piece of data as an individual file. Each file has a name, which is broken down into two parts: a file name and an extension.
·         The input/output system consists of two files and a ROM (Read Only Memory) chip. While the two files are on your disks and are loaded into memory when the computer starts, they are normally hidden from your view and not available to you for changing.
·          Disk Operating System  is responsible for creating and/or deleting files in the file system and managing the input and output of data in the file system.

                                                                                                                        
WINDOWS 2000
·          Windows 2000 unites, roughly, the user-friendliness, plug & play, and USB device support of Windows 98 with the safety and stability of the Windows NT family.
·          It is a multitasking, multiprocessing operating system and supports up to 2 processors of the x86 32-bit and 64-bit architectures with SMP. This operating system is suitable for use as a single-user computer or as a client in company networks.
·          Networks are supported with the protocols TCP/IP, NWLink and AppleTalk. Windows 2000 supports data interchange in decentralised working groups and central domains.
·          SFC (System File Protection) offers protection against the overwriting of Windows system files. It is possible to create hardware profiles for different hardware configurations with the settings of all devices and services.

WINDOWS NT
·          Windows NT is a Microsoft Windows personal computer operating system designed for users and businesses needing advanced capability.
·          A new file directory approach called Active Directory that lets the administrator and other users view every file and application in the network from a single point-of-view.
·          Dynamic Domain Name Server (DNS), which replicates changes in the network using the Active Directory Services, the Dynamic Host Configuration Protocol (DHCP), and the Windows Internet Naming Service (WINS) whenever a client is reconfigured.
·          The ability to create, extend, or mirror a disk volume without having to shut down the system and to back up data to a variety of magnetic and optical storage media.
·          A Distributed File System (DFS) that lets users see a distributed set of files in a single file structure across departments, divisions, or an entire enterprise.