Graphic design using Python turtle 🐢

```python
# Design 1: a spiral of coloured rays on a black background.
from turtle import *
import colorsys

bgcolor('black')
pensize(0)
tracer(50)

h = 0  # hue, cycled through the HSV colour wheel
for i in range(300):
    c = colorsys.hsv_to_rgb(h, 1, 1)
    h += 0.9
    color(c)
    forward(300)
    left(100)
    fd(i)
    goto(0, 0)
    down()
    rt(90)
    begin_fill()
    circle(0)
    end_fill()
    rt(10)
    for j in range(5):
        rt(30)
done()
```

Please follow my blog and subscribe to my channel for more videos and updates 👍👍👍👍👍

```python
# Design 2: overlapping rainbow arcs.
import turtle as t
import colorsys

t.bgcolor('black')
t.tracer(100)

h = 0.4  # starting hue

def draw(ang, n):
    t.circle(5 + n, 60)
    t.left(ang)
    t.circle(5 + n, 60)

for i in range(200):
    c = colorsys.hsv_to_rgb(h, 1, 1)
    h += 0.005
    t.color(c)
    t.pensize(2)
    draw(90, i * 2)
    draw(120, i * 2.5)
t.done()
```

Booting, Spooling and Buffering

Booting Process:- Whenever a computer system is "cold started", say, after being powered on or following a system crash, at least a portion of the operating system must be brought into main memory and given control of the processor. This activity is known as system booting, or bootstrapping, of the operating system. Typically, the hardware initially transfers control to a known address where a starting routine in ROM is placed. This routine is called the bootstrap loader. It can be used to bring the rest of the system gradually into main memory, for instance, from secondary memory or from another node in a distributed system.

In disk-based systems, the core portion of the operating system is often placed at a known address, called the boot block or boot area, of a known system disk drive. Thus, the bootstrap loader routine can include a rudimentary form of a disk driver whose primary function is to load and activate the initialization section of the operating system. This section can, in turn, load the rest of the operating system into main memory and complete the initialization process. In memory-based systems, such as KMOS, the starting routine can simply transfer control to the operating system, which may itself reside in ROM.
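The staged hand-off described above can be sketched as a toy simulation. All names here (`DISK`, `BOOT_BLOCK`, `bootstrap_loader`) are invented for illustration; a real bootstrap loader is firmware or assembly, not Python.

```python
# Toy simulation of the boot sequence: ROM routine loads the OS
# initialization section from the boot block, which then pulls
# the rest of the OS into main memory.

BOOT_BLOCK = 0  # known address of the boot area on the system disk

# The "disk": block 0 holds the OS initialization section,
# the remaining blocks hold the rest of the operating system.
DISK = {
    0: "os_init",
    1: "os_kernel_part_1",
    2: "os_kernel_part_2",
}

memory = []  # main memory, empty after a cold start

def bootstrap_loader():
    """ROM routine: a rudimentary disk driver that loads and
    activates only the initialization section of the OS."""
    memory.append(DISK[BOOT_BLOCK])
    return os_init

def os_init():
    """Initialization section: loads the rest of the OS into
    main memory and completes the boot process."""
    for block in sorted(DISK):
        if DISK[block] not in memory:
            memory.append(DISK[block])
    return "OS running"

# At power-on, the hardware transfers control to the ROM routine.
init = bootstrap_loader()
print(init())
print(memory)
```

The two stages mirror the text: the ROM routine knows only enough to read one known block, and everything after that is the operating system loading itself.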


Spooling:- A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. Spooling is the process in which jobs from punched cards are read directly onto the disk, and the location of each card image on the disk is recorded in a table by the operating system. When a job is required for execution, it is read from the disk.

Similarly, when a job wants to print a line of output, that line is copied into a system buffer and written to the disk; at the completion of the job, the output is read back from the disk and printed by the printer. With spooling, more than one input and output device can interact with the disk simultaneously, giving the impression that these devices interact directly with the CPU; hence the name Simultaneous Peripheral Operation On-Line (SPOOL).

Some devices, such as tape drives and printers, cannot usefully multiplex the I/O requests of multiple concurrent applications. Spooling is one way an OS can coordinate concurrent output. Another way to deal with concurrent device access is to provide explicit facilities for coordination. Some operating systems support exclusive device access by enabling a process to allocate an idle device and to deallocate it when it is no longer needed.
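The spooling idea can be sketched in a few lines, assuming an in-memory "disk" for the spool area. Each job's printer output accumulates in its own spool file; a printer daemon then prints one completed job at a time, so the single printer never sees interleaved streams. All names here are invented for the sketch.

```python
# Minimal output-spooling sketch: jobs write to per-job spool areas
# on "disk"; the printer daemon prints each finished job whole.

from collections import deque

spool_area = {}          # job id -> list of output lines ("on disk")
ready_queue = deque()    # completed jobs waiting for the printer

def spool_write(job_id, line):
    """Instead of going straight to the printer, the line is
    copied to the job's spool area on disk."""
    spool_area.setdefault(job_id, []).append(line)

def job_finished(job_id):
    """At job completion, its spooled output is queued for printing."""
    ready_queue.append(job_id)

def printer_daemon():
    """Prints each queued job's output as one uninterrupted stream."""
    printed = []
    while ready_queue:
        job_id = ready_queue.popleft()
        printed.extend(spool_area.pop(job_id))
    return printed

# Two jobs run concurrently; their writes are interleaved in time...
spool_write("job1", "job1: line 1")
spool_write("job2", "job2: line 1")
spool_write("job1", "job1: line 2")
job_finished("job1")
job_finished("job2")

# ...but the printer sees each job's output contiguously.
print(printer_daemon())
```

The daemon is the only code that touches the printer, which is exactly how the OS turns a non-shareable device into one that many jobs can appear to use at once.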

The only advantage of spooling initially was that you no longer had to carry tapes to and from the 1401 and 7094 machines. Under the new operating system, all jobs in the form of cards could be read onto the disk first; later on, the OS would load as many jobs into memory, one after the other, as the available memory could accommodate. Once several programs were loaded into different portions of memory, the CPU was switched from one program to another to achieve multiprogramming. Spooling thus allowed smooth multiprogramming operation.

In the spooling method, the I/O of all the jobs was essentially pooled together, and it could therefore be overlapped with the CPU-bound computations of other jobs at times chosen by the OS, improving throughput.

Buffering:- A buffer is a memory area that stores data while it is transferred between two devices or between a device and an application. Buffering is done for three reasons. The first is to cope with a speed mismatch between the producer and consumer of a data stream. Suppose, for example, that a file is being received via modem for storage on the hard disk. The modem is far slower than the disk, so a buffer is created in main memory to accumulate the bytes received from the modem. When an entire buffer of data has arrived, the buffer can be written to disk in a single operation.

Because the disk write is not instantaneous and the modem still needs a place to store additional incoming data, two buffers are used. When the first buffer fills, a disk write is requested. The modem then starts to fill the second buffer while the first buffer is written to disk. By the time the modem fills the second buffer, the disk write from the first should have completed, so the modem can switch back to the first buffer while the disk writes the second. This is known as double buffering. Double buffering decouples the producer of data from the consumer, thus relaxing the timing requirements between them.
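A minimal sketch of the scheme, with the "modem" and "disk" simulated in plain Python (all names invented for illustration). In a real system the disk write proceeds concurrently with the modem filling the other buffer; here the two steps run sequentially for simplicity.

```python
# Double-buffering sketch: two small buffers alternate roles.
# While one fills with incoming bytes, the other is flushed to
# "disk" in a single operation.

BUF_SIZE = 4
buffers = [bytearray(), bytearray()]
filling = 0                 # index of the buffer the modem is filling
disk = bytearray()          # everything written to "disk" so far

def modem_receive(byte):
    """Producer: append one incoming byte; when the current buffer
    is full, hand it to the disk and switch to the other buffer."""
    global filling
    buffers[filling].append(byte)
    if len(buffers[filling]) == BUF_SIZE:
        disk_write(filling)          # flush the full buffer...
        filling = 1 - filling        # ...and start filling the other

def disk_write(index):
    """Consumer: write one full buffer to disk in a single operation,
    then mark it empty so the modem can reuse it."""
    disk.extend(buffers[index])
    buffers[index].clear()

for b in b"hello, double buffering!":
    modem_receive(b)
```

The `filling = 1 - filling` flip is the whole trick: the producer never waits for an empty buffer as long as the consumer keeps up.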

The second use of buffering is to adapt between devices that have different data-transfer sizes. Such disparities are especially common in computer networking, where buffers are used widely for fragmentation and reassembly of messages. At the sending side, a large message is fragmented into small network packets. The packets are sent over the network, and the receiving side places them in a reassembly buffer to form an image of the source data.
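Fragmentation and reassembly can be sketched directly, assuming fixed-size packets and in-order delivery; a real network stack would also add headers and handle loss and reordering.

```python
# Sender splits a large message into small packets; the receiver
# accumulates them in a reassembly buffer to rebuild the message.

PACKET_SIZE = 8

def fragment(message: bytes):
    """Sending side: cut the message into PACKET_SIZE chunks."""
    return [message[i:i + PACKET_SIZE]
            for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets):
    """Receiving side: place packets in a reassembly buffer to
    form an image of the source data."""
    buffer = bytearray()
    for packet in packets:
        buffer.extend(packet)
    return bytes(buffer)

msg = b"a large message sent over the network"
packets = fragment(msg)
assert all(len(p) <= PACKET_SIZE for p in packets)
assert reassemble(packets) == msg
```

The reassembly buffer is what bridges the size mismatch: the application above it sees one large message regardless of the packet size underneath.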

The third reason for buffering is to support copy semantics for application I/O. Consider an application that has a buffer of data it wishes to write to disk. It calls the write() system call, providing a pointer to the buffer and an integer specifying the number of bytes to write. With copy semantics, the version of the data written to disk is guaranteed to be the version at the time of the application's system call, independent of any subsequent changes to the application's buffer. A simple way for the operating system to guarantee this is to copy the application data into a kernel buffer before returning control to the application; the disk write is then performed from the kernel buffer, so subsequent changes to the application buffer have no effect. Copying data between kernel buffers and application data space is common in operating systems, despite the overhead this operation introduces, because of the clean semantics. The same effect can be obtained more efficiently by clever use of virtual memory mapping and copy-on-write page protection.
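Copy semantics can be illustrated with a toy `write_with_copy()` that snapshots the application buffer into a "kernel buffer" at call time (all names invented for this sketch, not a real system-call API):

```python
# Copy semantics: the data written to "disk" is the version that
# existed at the moment of the call, even if the application
# immediately reuses its buffer afterwards.

disk = []  # each "write" appends one immutable snapshot

def write_with_copy(app_buffer: bytearray, nbytes: int):
    """Copy nbytes from the application buffer into a kernel
    buffer; the disk write is performed from the kernel copy."""
    kernel_buffer = bytes(app_buffer[:nbytes])   # snapshot at call time
    disk.append(kernel_buffer)                   # "write" from the copy

app_buf = bytearray(b"version-1")
write_with_copy(app_buf, len(app_buf))
app_buf[:] = b"version-2"   # application reuses its buffer...

print(disk[0])              # ...but the disk got the data as of the call
```

Making `kernel_buffer` an immutable `bytes` snapshot is the Python analogue of the kernel copying the user pages before scheduling the disk write.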
