Memory Management by Using the Heap and the Stack in Java

ISSN: 2321-5585 (Online) ISSN: 2321-0338 (Print)

IJRCSE Vol–4, Issue-6, Nov - Dec, 2014

1S. Poornima, 2Dr. C. V. Guru Rao, 3Nazia Thabassum, 4Dr. S. P. Anandaraj

1Researcher, 2Principal, 3PG Scholar, 4Sr. Asst. Professor
1,2,3,4Dept. of Computer Science and Engineering, SR Engineering College, Warangal, A.P.

Abstract- When a program is loaded into memory, it is organized into three areas of memory, called segments: the text segment, the stack segment, and the heap segment. The text segment (sometimes also called the code segment) is where the compiled code of the program itself resides. This is the machine-language representation of the program steps to be carried out, including all functions making up the program, both user and system defined. A memory leak, in computer science (in this context it is also known as leakage), occurs when a computer program consumes memory but is unable to release it back to the operating system. A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program source code. To overcome this problem, effective memory management of the stack and the heap, which limits memory leaks, is presented through Java coding.

I. Introduction
When a program is loaded into memory, it is organized into three areas of memory, known as segments: the text segment, the stack segment, and the heap segment. The text segment (sometimes also known as the code segment) is where the compiled code of the program itself resides. This is the machine-language representation of the program steps to be carried out, including all functions making up the program, both user and system defined [1].

Thus, in general, an executable program generated by a compiler (such as gcc) will have the following organization in memory on a typical architecture (such as MIPS) [2]:

Figure 1: Memory organization of a typical program in MIPS

- Code segment or text segment: the code segment contains the executable code (the code binary).
- Data segment: the data segment is subdivided into two parts:
  - Initialized data segment: all global, static, and constant data are stored in the data segment.
  - Uninitialized data segment: all uninitialized data are stored in the BSS.
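This segment layout describes a native executable; a JVM manages memory somewhat differently, but a loose Java analogy may still help: compiled bytecode plays the role of the code segment, class-level (static) data is set up when a class is loaded, local variables live on each thread's stack, and objects created with new live on the heap. The sketch below is only an illustration of that mapping; the class and field names are invented here.

    public class SegmentsSketch {
        // Initialized class-level data: given its value when the class is loaded.
        static final int LIMIT = 100;

        // Uninitialized class-level data: defaults to 0 until assigned.
        static int counter;

        public static void main(String[] args) {
            // Local variable: lives in the calling thread's stack frame.
            int local = LIMIT;

            // Object created at run time: allocated on the heap;
            // only the reference 'buffer' is stored in the stack frame.
            int[] buffer = new int[local];

            counter = buffer.length;
            System.out.println("counter = " + counter);
        }
    }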


Heap: when a program allocates memory at run time using the calloc and malloc functions, the memory is allocated on the heap. When more memory needs to be allocated using calloc or malloc, the heap grows upward, as shown in the diagram above.

Stack: the stack is used to store your local variables and is used for passing arguments to functions, along with the return address of the instruction that is to be executed after the function call is over. When a new stack frame needs to be added (as a result of a newly called function), the stack grows downward.

The stack and heap are traditionally located at opposite ends of the process's virtual address space. The stack grows automatically when accessed, up to a size set by the kernel (which can be adjusted with setrlimit(RLIMIT_STACK, ...)). The heap grows when the memory allocator invokes the brk() or sbrk() system call, mapping more pages of physical memory into the process's virtual address space. Implementation of both the stack and the heap is usually down to the runtime/OS. Often games and other performance-critical applications create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally, to avoid relying on the OS for memory.

Stack: stacks in computing architectures are regions of memory where data is added or removed in a last-in-first-out manner. In most modern computer systems, each thread has a reserved region of memory referred to as its stack. When a function executes, it may add some of its state data to the top of the stack; when the function exits, it is responsible for removing that data from the stack. At a minimum, a thread's stack is used to store the location of function calls in order to allow return statements to return to the correct location, but programmers may also choose to use the stack explicitly.
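The same division shows up directly in Java source: local variables and method arguments live in stack frames, while objects created with new live on the heap. A minimal sketch, with illustrative class and method names:

    public class StackVsHeap {
        public static void main(String[] args) {
            // Local primitive: stored in main's stack frame.
            int count = 3;

            // The StringBuilder object is allocated on the heap;
            // only the reference 'text' lives in the stack frame.
            StringBuilder text = new StringBuilder("stack vs heap");

            // Arguments are passed to the callee through its own stack frame.
            append(text, count);
            System.out.println(text);
        } // main's frame is popped here; the StringBuilder object remains on
          // the heap until the garbage collector finds it to be unreachable.

        static void append(StringBuilder target, int times) {
            for (int i = 0; i < times; i++) { // 'i' lives in append's stack frame
                target.append('!');
            }
        }
    }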


If a piece of data lies on the thread's stack, that memory is said to have been allocated on the stack (see Footnote 4). Because the data is added and removed in a last-in-first-out manner, stack allocation is very simple and typically faster than heap-based memory allocation (also known as dynamic memory allocation). Another feature is that memory on the stack is automatically, and very efficiently, reclaimed when the function exits, which can be convenient for the programmer if the data is no longer required. If, however, the data needs to be kept in some form, then it must be copied from the stack before the function exits. Therefore, stack-based allocation is suitable for temporary data, or data that is no longer required after the creating function exits (see Footnote 4).

A call stack is composed of stack frames (sometimes called activation records). These are machine-dependent data structures containing subroutine state information. Each stack frame corresponds to a call to a subroutine that has not yet terminated with a return. For example, if a subroutine named DrawLine is currently running, having just been called by a subroutine DrawSquare, the top part of the call stack might be laid out like this (where the stack is growing towards the top) [5]; a small Java sketch reproducing this appears after the list below.

Here are a few additional pieces of information about the stack that should be mentioned [6]:
1) The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
2) The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically, the process) exits.
3) The size of the stack is set when a thread is created.
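The DrawSquare/DrawLine illustration can be reproduced in Java by printing the current thread's stack trace from inside the innermost call; each line of output corresponds to one stack frame. A minimal sketch (the method names follow the example above; everything else is illustrative):

    public class CallStackDemo {
        // Each method call pushes a new frame onto the calling thread's stack.
        static void drawLine() {
            // List the frames of the current thread, most recent first.
            for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
                System.out.println(frame);
            }
        }

        static void drawSquare() {
            drawLine(); // drawLine's frame sits above drawSquare's frame
        }

        public static void main(String[] args) {
            drawSquare(); // expected frames include drawLine, drawSquare, and main
        }
    }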


II. Implementation Examples

Figure 2: Stack frame for a DrawRectangle program

1) The stack is faster because the access pattern makes it trivial to allocate memory from it, while the heap has far more complicated bookkeeping involved in an allocation or a free. Also, each byte in the stack tends to be reused very frequently, which means it tends to be mapped to the processor's cache, making it very fast.
2) Stored in computer RAM just like the heap.
3) Variables created on the stack will go out of scope and are automatically deallocated.
4) Much faster to allocate in comparison to variables on the heap.
5) Implemented with an actual stack data structure.
6) Stores local data and return addresses, and is used for parameter passing.
7) Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, or very large allocations); a minimal recursion sketch follows this list.
8) Data created on the stack can be used without pointers.
9) You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
10) Usually has a maximum size already determined when your program starts.
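Item 7 can be demonstrated directly in Java: unbounded recursion adds one stack frame per call until the thread's stack limit is reached and the JVM throws StackOverflowError. A minimal sketch (the depth counter is only there to show roughly how many frames fit):

    public class StackKiller {
        static long depth = 0;

        // Every call adds a new frame and nothing ever returns, so the
        // stack eventually exceeds the limit fixed when the thread started.
        static void recurse() {
            depth++;
            recurse();
        }

        public static void main(String[] args) {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("Stack overflowed after about " + depth + " frames");
            }
        }
    }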

1) In C you can get the benefit of variable-length allocation through the use of alloca(), which allocates on the stack, as opposed to malloc(), which allocates on the heap. This memory will not survive your return statement, but it is useful for a scratch buffer. Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack data is added as you enter functions, and the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call many functions that in turn call lots of other functions (or create a recursive solution) [7].

III. Stack Overflow
That said, stack-based memory errors are some of the worst I have experienced. If you use heap memory and you overstep the bounds of your allocated block, you have a good chance of triggering a segmentation fault. (Not 100%: your block may happen to be contiguous with another that you have previously allocated.) But since variables created on the stack are always contiguous with one another, writing out of bounds can change the value of another variable. I have learned that whenever I feel that my program has stopped obeying the laws of logic, it is probably a buffer overflow [8]. The following Java example shows how references held on the stack relate to objects created on the heap:

    class Beta { }

    class Alpha {
        static Beta b1;
        Beta b2;
    }


    public class Tester {
        public static void main(String[] args) {
            Beta b1 = new Beta();
            Beta b2 = new Beta();
            Alpha a1 = new Alpha();
            Alpha a2 = new Alpha();
            a1.b1 = b1; // b1 is static, so this field is shared by a1 and a2
            a1.b2 = b1;
            a2.b2 = b2;
            System.out.println(" Line 1 " + " a1 " + a1.b1);
            System.out.println(" Line 2 " + " a1 " + a1.b2);
            System.out.println(" Line 3 " + " a2 " + a2.b2);
            System.out.println(" Line 4 " + " a2 " + a2.b1);
            System.out.println(" Line 5 " + " b1 " + b1);
            System.out.println(" Line 6 " + " b2 " + b2);
            // Dropping the references; the objects stay on the heap until
            // the garbage collector finds them to be unreachable.
            a1 = null;
            b1 = null;
            b2 = null;
        }
    }

Here the reference variables b1, b2, a1, and a2 live in main's stack frame, while the Beta and Alpha objects they point to live on the heap; after the references are set to null, any objects that are no longer reachable become eligible for garbage collection.

Heap: the heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by carving a suitable block out of one of the free blocks. This requires updating the list of blocks on the heap. This meta-information about the blocks on the heap is also stored on the heap, often in a small area just in front of each block [10].

1) The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system) (see Footnote 6); a small sketch after this list illustrates this growth.
2) Stored in computer RAM just like the stack.
3) Variables on the heap must be destroyed manually and never fall out of scope; the data is freed with delete, delete[], or free.
4) Slower to allocate in comparison to variables on the stack.
5) Used on demand to allocate a block of data for use by the program.
6) Can have fragmentation when there are a lot of allocations and deallocations.
7) Can have allocation failures if too big a buffer is requested.
8) You would use the heap if you do not know exactly how much data you will need at run time or if you need to allocate a lot of data.
9) Responsible for memory leaks.
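In Java the free-list bookkeeping described above is hidden inside the JVM's allocator and garbage collector, but the fact that the heap has an initial size and can grow on demand is easy to observe. A minimal sketch; the exact numbers depend on the JVM and on flags such as -Xms and -Xmx:

    public class HeapGrowth {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("heap at startup: " + rt.totalMemory()
                    + " bytes (limit " + rt.maxMemory() + " bytes)");

            // Allocate several large arrays so the JVM has to grow the heap.
            byte[][] blocks = new byte[16][];
            for (int i = 0; i < blocks.length; i++) {
                blocks[i] = new byte[4 * 1024 * 1024]; // 4 MB per block
            }

            System.out.println("heap after allocating about 64 MB: "
                    + rt.totalMemory() + " bytes");
        }
    }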

IV. Memory Leaks
A memory leak, in computer science (in this context it is also referred to as leakage), occurs when a computer program consumes memory but is unable to release it back to the operating system. A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program source code; however, many people refer to any unwanted increase in memory usage as a memory leak, although this is not strictly correct (see Footnote 12). A memory leak can diminish the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become allocated, and all or part of the system or device stops working correctly, the application fails, or the system slows down unacceptably because of thrashing. Memory leaks may not be serious or even detectable by normal means. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be detected and is rarely serious.


Typically, a memory leak occurs because dynamically allocated memory has become unreachable. The prevalence of memory leak bugs has led to the development of a number of debugging tools to detect unreachable memory. IBM Rational Purify, BoundsChecker, Valgrind, Insure++, and memwatch are some of the more popular memory debuggers for C and C++ programs. "Conservative" garbage collection capabilities can be added to any programming language that lacks them as a built-in feature, and libraries for doing this are available for C and C++ programs. A conservative collector finds and reclaims most, but not all, unreachable memory.

For example, a C function that loops forever calling the memory allocation function, malloc(), without saving the returned address deliberately leaks memory by losing the pointer to each allocated block. It will eventually fail (returning NULL) when no more memory is available to the program, and because the address of each allocation is not stored, it is not possible to free any of the previously allocated blocks. It should be noted that, generally, the operating system delays real memory allocation until something is written into it. Therefore, the program ends only when the virtual addresses run out (per-process limits, around 4 GiB on IA-32, or much more on x86-64 systems), and there may be no real impact on the rest of the system.

The following Java program illustrates how local variables and objects are created on the stack and the heap during execution (the line numbers in the comments are part of the original example):

    package com.journaldev.test;

    public class Memory {

        public static void main(String[] args) { // Line 1
            int i = 1;                           // Line 2
            Object obj = new Object();           // Line 3
            Memory mem = new Memory();           // Line 4
            mem.foo(obj);                        // Line 5
        } // Line 9

        private void foo(Object param) { // Line 6
            String str = param.toString(); // Line 7
            System.out.println(str);
        } // Line 8
    }
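The Memory example above does not actually leak: everything it allocates becomes unreachable when main returns. A typical Java leak instead keeps objects reachable, for example through a static collection, so that the garbage collector can never reclaim them. A minimal sketch (class and field names are illustrative; running it will eventually end with java.lang.OutOfMemoryError: Java heap space):

    import java.util.ArrayList;
    import java.util.List;

    public class LeakDemo {
        // The static list keeps every block reachable for the lifetime of
        // the program, so none of them can ever be garbage collected.
        private static final List<byte[]> CACHE = new ArrayList<>();

        public static void main(String[] args) {
            while (true) {
                CACHE.add(new byte[1024 * 1024]); // one more megabyte that is never released
                System.out.println("cached blocks: " + CACHE.size());
            }
        }
    }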

Conclusion: Memory management, in languages such as C and Java, is carried out using the stack and the heap. The stack follows LIFO (last in, first out) order, and stack allocation is very simple. The heap, by contrast, is used for dynamic allocation: it contains a linked list of used and free blocks, new allocations on the heap (by new or malloc) are satisfied by carving a suitable block out of one of the free blocks, and this requires updating the list of blocks on the heap.

V. Acknowledgement
Author S. Poornima would like to acknowledge and thank the Women Scientist Scheme (SR/WOS-A/ET24/2012) of the Department of Science and Technology for the Grant-in-Aid provided for this research work. This work is supported by SR Engineering College, India. She also expresses her sincere gratitude to the Management and Principal of SR Engineering College for the financial support and encouragement that helped her carry out the research work, and wishes to thank Dr. C. V. Guru Rao for his valuable suggestions.

References:
[1] Memory management in C: The heap and the stack, inf.udec.cl/~leo/teoX.pdf
[2] 7. Memory: Stack vs Heap – Paul Gribble, gribblelab.org/CBootcamp/7_Memory_Stack_vs_Heap.html
[3] Exercise 17: Heap and Stack Memory Allocation, Learn C the Hard Way, c.learncodethehardway.org/book/ex17.html
[4] T. Bianchi, A. Piva, and M. Barni, "On the implementation of the discrete Fourier transform in the encrypted domain," IEEE Trans. Inf. Forensics Security, vol. 4, no. 1, pp. 86–97, Mar. 2009.
[5] J. R. Troncoso-Pastoriza and F. Pérez-González, "Secure adaptive filtering," IEEE Trans. Inf. Forensics Security, vol. 6, no. 2, pp. 469–485, Jun. 2011.
[6] T. Bianchi, A. Piva, and M. Barni, "Composite signal representation for fast and storage-efficient processing of encrypted signals," IEEE Trans. Inf. Forensics Security, vol. 5, no. 1, pp. 180–187, Mar. 2010.
