Thursday, 24 March 2011

ASSIGNMENT 4: My Protection & Security


 SECURITY POLICY AND MECHANISM

The terms security and protection are often used interchangeably. Nevertheless, it is frequently useful to distinguish between the general problem of making sure that files are not read or modified by unauthorized persons, which includes technical, managerial, legal, and political issues, and the specific operating system mechanisms used to provide security. To avoid confusion, we will use the term security to refer to the overall problem, and the term protection mechanisms to refer to the specific operating system mechanisms used to safeguard information in the computer. The boundary between them is not well defined, however.

A more interesting problem is what to do about intruders. These come in two varieties. Passive intruders just want to read files they are not authorized to read. Active intruders are more malicious; they want to make unauthorized changes to data.

AUTHENTICATION CONCEPTS
a)      Password
Passwords are often used to protect objects in the computer system in the absence of a more complete protection scheme. They can be considered a special case of either keys or capabilities. For instance, a password could be associated with each resource, such as a file. Whenever a request is made to use the resource, the password must be given. If the password is correct, access is granted. Different passwords may be associated with different access rights. For example, different passwords may be used for reading, appending, and updating a file.
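The per-right password scheme described above can be sketched in a few lines of Python. The resource name, passwords, and table layout here are illustrative, not from any real system:

```python
# Sketch: per-resource passwords, one per access right (all names illustrative).
import hmac

# Hypothetical table mapping resource -> {access right: password}
PASSWORDS = {
    "payroll.txt": {"read": "r-secret", "append": "a-secret", "update": "u-secret"},
}

def access_granted(resource, right, password):
    """Grant access only if the password for this specific right matches."""
    expected = PASSWORDS.get(resource, {}).get(right)
    # hmac.compare_digest compares secrets in constant time
    return expected is not None and hmac.compare_digest(expected, password)

print(access_granted("payroll.txt", "read", "r-secret"))    # True
print(access_granted("payroll.txt", "update", "r-secret"))  # False
```

Note how the same password grants reading but not updating, matching the idea of different passwords for different access rights.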
           
b)      Artifact
A completely different approach to authentication is to check whether the user has some item, normally a plastic card with a magnetic stripe on it. The card is inserted into the terminal, which then checks whose card it is. This method can be combined with a password, so a user can log in only if he:
1. has the card
2. knows the password

Automated cash dispensing machines usually work this way. Another technique is signature analysis. The user signs his name with a special pen connected to the terminal, and the computer compares the signature to a known specimen stored online. Even better is to compare not the signature itself but the pen motions made while writing it. A good forger may be able to copy the signature, but will not have a clue as to the exact order in which the strokes were made.
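The card-plus-password rule is a simple conjunction of two checks. A minimal sketch, with a made-up user record, could look like this:

```python
# Sketch: two-factor login — the user must have the card AND know the password.
# The user table and its field names are illustrative.
USERS = {"alice": {"card_id": "CARD-123", "password": "s3cret"}}

def login(user, card_id, password):
    rec = USERS.get(user)
    return (rec is not None
            and rec["card_id"] == card_id      # factor 1: has the card
            and rec["password"] == password)   # factor 2: knows the password

print(login("alice", "CARD-123", "s3cret"))  # True
print(login("alice", "CARD-123", "wrong"))   # False
```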

PROTECTION CONCEPTS AND ACCESS CONTROL

  •  Protection is concerned with keeping data safe from improper or unauthorized access and from physical damage. For example, in one case faulty memory caused disk data to be corrupted; technicians replaced disk after disk, but the problem did not go away until the memory itself was swapped as part of the error exploration.
  • We can control access to files, specifying who can read, write, execute, delete, and list files, and in what way.
  • Access control has a number of strategies:
    • Access control list (ACL) specifies user names or groups, and types of access.
    • Associate passwords and access control (read only, modify with tracked changes) per file.
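An access control list check can be sketched as a lookup keyed by user name or group, as the list above describes. The file name, users, and groups below are illustrative:

```python
# Sketch: an ACL maps user names or group names to permitted operations.
ACL = {
    "report.doc": {"alice": {"read", "write"}, "staff": {"read"}},
}
GROUPS = {"bob": {"staff"}}  # hypothetical group membership table

def allowed(user, filename, op):
    entries = ACL.get(filename, {})
    if op in entries.get(user, set()):          # direct user entry
        return True
    # otherwise, check each group the user belongs to
    return any(op in entries.get(g, set()) for g in GROUPS.get(user, ()))

print(allowed("bob", "report.doc", "read"))   # True, via the "staff" group
print(allowed("bob", "report.doc", "write"))  # False
```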





ASSIGNMENT 2: INPUT/OUTPUT MANAGEMENT

Input Output Procedure

A computer system uses a device controller to facilitate the transfer of information between the device and the CPU. A complex controller, such as the Small Computer System Interface (SCSI), may permit connecting several I/O devices simultaneously.

BUFFERING
A)    SINGLE BUFFER
The simplest type of support that the operating system can provide is single buffering. When a user process issues an I/O request, the operating system assigns a buffer in the system portion of main memory to the operation. Input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block. This approach generally provides a speed-up compared with the lack of system buffering: the user process can be processing one block of data while the next block is being read in.


The operating system is able to swap the process out because the input operation is taking place into system memory rather than into user process memory. This technique does, however, complicate the logic in the operating system. 
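The single-buffer flow — transfer into the system buffer, copy into user space, request the next block — can be sketched sequentially (the real overlap with computation is omitted for clarity; all names are illustrative):

```python
# Sketch of single buffering: each block lands in one system buffer,
# then the process copies it into user space before the next transfer.
def read_blocks(device_blocks):
    system_buffer = None   # the single OS-owned buffer
    user_space = []        # stands in for the process's own memory
    for block in device_blocks:
        system_buffer = block              # I/O transfer into the system buffer
        user_space.append(system_buffer)   # process copies block to user space
        # the request for the next block is issued here, so the next transfer
        # can overlap with the process working on the block just copied
    return user_space

print(read_blocks(["blk0", "blk1", "blk2"]))  # ['blk0', 'blk1', 'blk2']
```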

B)    DOUBLE BUFFER
An improvement over single buffering can be had by assigning two system buffers to the operation. A process now transfers data to (or from) one buffer while the operating system empties (or fills) the other.


It is therefore possible to keep a block-oriented device going at full speed when processing a block takes no longer than transferring one; even when processing takes longer, double buffering ensures that the process will not have to wait for I/O. In either case, an improvement over single buffering is achieved.


Again, this improvement comes at the cost of increased complexity. For stream-oriented input, we are again faced with the two alternative modes of operation. For line-at-a-time I/O, the user process need not be suspended for input or output unless the process runs ahead of the double buffers.
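The alternation between the two buffers can be sketched as follows: while the process consumes one buffer, the OS fills the other, and the roles swap after each block (function and variable names are illustrative):

```python
# Sketch of double buffering: two buffers alternate between "being filled"
# and "being consumed" roles on every block.
def double_buffered_read(device_blocks, process):
    if not device_blocks:
        return
    buffers = [None, None]
    filling = 0                            # index of the buffer the OS fills
    for i, block in enumerate(device_blocks):
        buffers[filling] = block           # OS fills one buffer...
        if i > 0:
            process(buffers[1 - filling])  # ...while the process consumes the other
        filling = 1 - filling              # swap roles for the next block
    process(buffers[1 - filling])          # consume the final block

out = []
double_buffered_read(["A", "B", "C"], out.append)
print(out)  # ['A', 'B', 'C']
```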
SPOOLING
Spooling is a way of dealing with dedicated I/O devices in a multiprogramming system. A special process, called a daemon, is created, along with a special directory, called a spooling directory. Spooling is not only used for printers; it is also used in other situations. For example, file transfer over a network often uses a network daemon.


To send a file somewhere, a user puts it in a network spooling directory. Later on, the network daemon takes it out and transmits it. Such a network may consist of thousands of machines around the world communicating over dial-up telephone lines and many computer networks.
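The user/daemon split above can be sketched with a queue standing in for the spooling directory (all names here are illustrative):

```python
# Sketch of spooling: users drop files into a spool directory; a daemon
# later removes them in order and "transmits" them.
from collections import deque

spool_dir = deque()            # stands in for the spooling directory

def submit(filename):
    """User side: place a file in the spool directory and return at once."""
    spool_dir.append(filename)

def daemon_step(transmit):
    """Daemon side: take the oldest spooled file out and transmit it."""
    if spool_dir:
        transmit(spool_dir.popleft())

sent = []
submit("report.txt")
submit("photo.jpg")
daemon_step(sent.append)       # daemon runs later, independently of the users
daemon_step(sent.append)
print(sent)  # ['report.txt', 'photo.jpg']
```

The key point is the decoupling: `submit` returns immediately, and the daemon drains the directory on its own schedule.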





ASSIGNMENT 1: MEMORY MANAGEMENT


MEMORY MANAGEMENT

Memory management is vital in a multiprogramming system. If only a few processes are in memory, then for much of the time all of the processes will be waiting for input/output and the processor will be idle. Thus, memory needs to be allocated efficiently to pack as many processes into memory as possible.

OBJECTIVES

a) Relocation
b) Protection
c) Sharing
d) Logical organization
e) Physical organization

VIRTUAL MEMORY

Virtual Memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

PAGING

Paging is one of the memory management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.
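Under paging, a logical address splits into a page number and an offset, and the page table maps the page to a physical frame. A minimal sketch, assuming a 4 KiB page size and a made-up page table:

```python
# Sketch: logical-to-physical translation with a page table (illustrative values).
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (hypothetical)

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Because each page maps independently, the frames 5, 2, and 7 need not be adjacent — this is exactly the noncontiguous physical address space the paragraph describes.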

SEGMENTATION

Segmentation is similar to dynamic partitioning. In the absence of an overlay scheme or the use of virtual memory, it would require that all of a program's segments be loaded into memory for execution. The difference, compared with dynamic partitioning, is that with segmentation a program may occupy more than one partition, and these partitions need not be contiguous. Segmentation eliminates internal fragmentation but, like dynamic partitioning, suffers from external fragmentation. However, because a process is broken up into a number of smaller pieces, the external fragmentation should be less.
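Segmented addresses are translated through a segment table holding a base and a limit per segment; offsets beyond the limit are rejected. A minimal sketch with made-up table values:

```python
# Sketch: segment table translation with a limit check (values are illustrative).
segment_table = {0: (1400, 1000), 1: (6300, 400)}   # segment -> (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # offset falls outside the segment: a protection violation
        raise MemoryError("segmentation violation: offset beyond limit")
    return base + offset

print(translate(1, 53))  # 6300 + 53 = 6353
```

Since each segment carries its own base, the segments can be placed anywhere in memory — the noncontiguous partitions the paragraph mentions.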

MEMORY RELOCATION POLICY

In systems with virtual memory, programs in memory must be able to reside in different parts of the memory at different times. This is because when a program is swapped back into memory after being swapped out for a while, it cannot always be placed in the same location. The virtual memory management unit must also deal with concurrency. Memory management in the operating system should therefore be able to relocate programs in memory and handle memory references and addresses in the code of the program so that they always point to the right location in memory.

MEMORY ALLOCATION STRATEGIES

Best fit:
-         The allocator places a process in the smallest block of unallocated memory in which it will fit.
Problems:
-          It requires an expensive search of the entire free list to find the best hole.
-          More importantly, it leads to the creation of lots of little holes that are not big enough to satisfy any requests. This situation is called fragmentation, and is a problem for all memory-management strategies, although it is particularly bad for best-fit.
-          Solution: One way to avoid making little holes is to give the client a bigger block than it asked for. For example, we might round all requests up to the next larger multiple of 64 bytes. That doesn't make the fragmentation go away; it just hides it.
-          Unusable space in the form of holes is called external fragmentation

Worst fit:
-         The memory manager places a process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the hole created as a result of external fragmentation.

First fit:
-         Another strategy is first fit, which simply scans the free list until a large enough hole is found. Despite the name, first fit is generally better than best fit because it leads to less fragmentation.
Problems:
-          Small holes tend to accumulate near the beginning of the free list, making the memory allocator search farther and farther each time.

Next fit:
-         The first-fit approach tends to fragment the blocks near the beginning of the list without considering blocks further down the list. Next fit is a variant of first fit that solves the problem of small holes accumulating: each search starts where the last one left off, wrapping around to the beginning when the end of the list is reached (a form of one-way elevator).
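The difference between first fit and next fit is only where the search starts. A minimal sketch over a made-up free list of `(start, size)` holes (the search is shown; actually carving the chosen hole is omitted):

```python
# Sketch: first fit vs next fit over a free list of (start, size) holes.
free_list = [(0, 16), (32, 64), (128, 200)]   # illustrative holes
_next = 0                                     # rotating start used by next fit

def first_fit(size):
    """Always scan from the head of the free list."""
    for start, hole in free_list:
        if hole >= size:
            return start
    return None

def next_fit(size):
    """Resume scanning where the previous search stopped, wrapping around."""
    global _next
    n = len(free_list)
    for i in range(n):
        j = (_next + i) % n
        start, hole = free_list[j]
        if hole >= size:
            _next = j          # the next search begins here
            return start
    return None

print(first_fit(50))   # 32 — the first hole of at least 50 bytes
```

First fit re-walks the small head-of-list holes on every request; next fit skips past them by remembering its position.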

PAGE REPLACEMENT POLICIES

Least Recently Used (LRU):
  • Removes the page least recently accessed
  • Efficiency: causes either a decrease in or the same number of page interrupts
  • Slightly better than FIFO: 8/11 or 73%
  • LRU is a stack algorithm removal policy
  • Increasing main memory will cause either a decrease in or the same number of page interrupts
  • Does not experience the FIFO anomaly

Two variations:
  • Clock replacement technique: paced according to the computer's clock cycle
  • Bit-shifting technique: uses an 8-bit reference byte and bit shifting to track the usage of each page currently in memory
First In First Out (FIFO):
  • Removes the page that has been in memory the longest
  • Efficiency: ratio of page interrupts to page requests
  • FIFO example: not so good — efficiency is 9/11 or 82%

FIFO anomaly:
  • Sometimes more memory does not lead to better performance — the number of page faults can actually increase
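The FIFO anomaly (Belady's anomaly) can be demonstrated with a short simulation. The reference string below is the classic textbook example, not taken from this document:

```python
# Sketch: counting FIFO page faults to show Belady's (FIFO) anomaly.
from collections import deque

def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()      # evict the page resident longest
            mem.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames — more memory, more faults
```

With this reference string, adding a fourth frame raises the fault count from 9 to 10, which is exactly the anomaly: FIFO is not a stack algorithm, so more memory does not guarantee fewer faults.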