A Brief Discussion of Version Control, Git, CDNs and Virtualization



The Need for a VCS

When software systems grow larger and more complex, they cannot be developed by a single developer. The system is therefore decomposed into subsystems that are developed by different groups, and when each group completes its part, all of the pieces must be integrated. If those parts are at different versions at integration time, it becomes a big problem. Using a version control system (VCS) avoids this, and it provides many other benefits as well.

A VCS keeps track of every modification, so if a mistake happens the developer can go back, compare with an older version and fix the problem.


Three models of VCS

  1. Local version control systems
              This is the oldest type of VCS. The system keeps track of files within the local machine. It is simple and common, but it cannot be used for collaborative software such as Aconex, Adobe Acrobat, Adobe LiveCycle, Airtable, etc. This type is also error prone (the chance of accidentally writing to the wrong file is higher).


      2. Centralized version Control System

·         This can be used in collaborative software development.
·         Everyone knows what is happening in others' parts of the project.
·         All changes to the files are tracked by a centralized server, which holds all the versioned files and the list of clients that check files out from that central place.
·         A single point of failure is the disadvantage of this system: if the central server fails, the entire system stops.
·         Subversion (SVN) and CVS are a few examples of this type of system.


                       


      3. Distributed version Control System

·         The client completely clones the repository.
·         There is no single point of failure: if a server dies, any client repository can be copied back to the server to restore it.
·         Every clone is considered a full backup of the data.
·         Different groups can collaborate on the same project in different ways.
·         Developers can keep working productively even when not connected to a network (see the sketch after this list).
·         Operations such as commits, viewing history and reverting changes are faster than in a CVCS, because a DVCS does not need to communicate with a central server.
·         Monotone, Codeville and Pijul are a few examples of this type of system.
·         The initial checkout of a repository is slower than in a CVCS, because all branches and the entire revision history are copied to the local machine by default.
·         Every user needs additional storage to hold a copy of the complete codebase history.
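As a rough sketch of what “every clone is a full backup” means in practice, the commands below clone a repository and then keep working with no network connection; the repository URL is only a placeholder, not a real project.

    # Clone the full repository, including every branch and the whole history
    git clone https://example.com/team/project.git
    cd project

    # All of these work offline, because the complete history is stored locally
    git log --oneline                 # view the entire history
    git checkout -b experiment        # create a branch
    git commit -am "Try an idea while disconnected"

Once a network is available again, the accumulated commits can be shared with a central or peer repository.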



Git vs GitHub


                               

A common question is whether Git and GitHub are the same thing. The answer is no, they are different. Let's look at the differences between them.

Git is a distributed version control system. It was created by Linus Torvalds in 2005 for the development of the Linux kernel. If we clone a Git project, we have the entire project history; without interacting with a server, we can commit, branch and tag, all on our local computer. As a distributed revision control system, Git puts particular emphasis on speed, data integrity and support for distributed, non-linear workflows.

GitHub is a hosting service for Git repositories.
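To make the relationship concrete, here is a minimal sketch of publishing a local Git repository to GitHub. The URL is a placeholder for whatever repository you create on GitHub, and the branch may be called master or main depending on your Git configuration.

    # An ordinary local Git repository with one commit
    git init my-project
    cd my-project
    git commit --allow-empty -m "Initial commit"

    # Tell Git where the GitHub-hosted copy lives, then publish the history
    git remote add origin https://github.com/your-user/my-project.git
    git push -u origin HEAD

Git works perfectly well without the remote; GitHub simply hosts a copy that others can clone from, fetch from and push to.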


                   

Hopefully you now see the difference between Git and GitHub.
                               

The commit command vs the push command in Git

“Commit” is used to save changes to the local repository. Before running the “git commit” command we have to tell Git which changes we want to include; a file is not automatically included in the next commit just because it was changed.




“Push” is used to upload local repository content to a remote repository. git push is most commonly used to publish local changes to a central repository.
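A small sketch of the difference: commit records a snapshot in the local repository, push publishes it to the remote. The file name and branch name below are only examples.

    # Stage the change we want, then record it in the local repository
    git add report.txt
    git commit -m "Update the report"

    # Nothing has left the machine yet; push uploads the new commit(s) to the remote
    git push origin main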



The uses of the staging area and the Git directory


Staging area


Staging is a step before the commit process. As long as a changeset is in the staging area, Git still allows you to edit it.
The staging area helps:
·         to split up one large change into multiple commits (see the sketch after this list),
·         in reviewing changes,
·         when a merge has conflicts,
·         to keep extra local files hanging around without committing them.
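For example, splitting one large change into multiple commits can be done by staging only part of the working-tree changes at a time; git add -p asks about each hunk interactively.

    # Interactively choose which hunks of the change go into the staging area
    git add -p
    git commit -m "First logical change"

    # Stage and commit the remaining hunks separately
    git add -p
    git commit -m "Second logical change"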


GIT directory


A Git directory here means a bare repository (one without its own working tree). It is used for exchanging histories with others by pushing into it and fetching from it (see the sketch below).
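A minimal sketch of using a bare repository as a shared exchange point; the paths are only examples.

    # Create a bare repository (no working tree) to act as the shared copy
    git init --bare /srv/git/project.git

    # From an existing local repository, add it as a remote and push into it
    cd ~/project
    git remote add shared /srv/git/project.git
    git push shared HEAD

    # Others can then clone from, or fetch from, the same bare repository
    git clone /srv/git/project.git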


Collaboration workflow of Git

·         After creating a local repository, the user can work on files: deleting, adding, modifying, copying, renaming or moving a file. At this stage the user does not have to think about version control.
·         When you reach a noteworthy state, you need to think about version control, so you commit the change.
·         The "status" command gives a list of all the changes the user has performed since the last commit.
·         You have to tell Git which local changes to include in the next commit, because a file is not added to the next commit automatically just because it changed. To do this you add the files to the staging area.
·         Now it is time to commit those changes. You add a message describing what you have done, and the commit is recorded in your local repository, marking a new version.
·         From time to time, especially when you are working with others, you need to check what has changed in the project. Use the "log" command to see the new changes and who made them.
·         When collaborating with others you can share the changes you have made and receive others' changes. A remote repository on a server makes this exchange possible (see the sketch after this list).
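Put together, one round of the loop looks roughly like the sequence below; the remote name origin and the branch name main are only typical defaults.

    git status                           # see what changed since the last commit
    git add notes.md                     # move the changes into the staging area
    git commit -m "Describe the change"  # record a new version locally
    git log --oneline                    # review recent history and who made each change
    git pull origin main                 # bring in collaborators' changes from the remote
    git push origin main                 # share your commits through the remote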


Benefits of CDNs



These days many companies have huge traffic on their websites on a daily basis, so they use a CDN to handle it. Here are a few advantages of using a CDN.

              ·         Improved website loading time.
Visitors served from a nearby CDN server experience faster page loading, so their browsing time is reduced: they can complete their work quickly and leave the site. That keeps the load on the site under control, so more visitors can visit and spend time on it, and more visitors is a win for the company.

              ·         Content is delivered quickly.
Because of higher reliability, operators can deliver high-quality content with a high level of service. When a user visits a page, its files are cached; if the user loads that page again, the content does not need to be downloaded again, because the saved copy can be served instead. (A small check is sketched below.)
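This caching behaviour is controlled by standard HTTP headers. As a quick, informal check, you can inspect the response headers of an asset; example.com is only a placeholder domain.

    # Show only the response headers; Cache-Control, Expires and ETag tell
    # browsers and CDN edge servers how long they may reuse the saved copy
    curl -I https://example.com/static/logo.png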

               ·         Visitors are managed easily.
A CDN can send different content to different users depending on the type of device requesting the content. It is capable of detecting the type of mobile device and delivering a device-specific version of the content.

               ·         Low Network Latency and packet loss.

               ·         Higher availability.


      CDNs dynamically distribute assets to strategically placed core, fallback and edge servers. CDNs can give more control of asset delivery and network load. They can optimize capacity per customer, provide views of real-time load and statistics, reveal which assets are popular, show active regions and report exact viewing details to customers. CDNs can thus offer 100% availability, even with large power, network or hardware outages.


                ·         Storage and security.
      CDNs improve security by providing DDoS mitigation, securing content through digital rights management, and limiting access through user authentication and other measures.
      CDNs also offer secure storage for content such as videos for enterprises, as well as enhanced data backup services.


      Difference between CDNs and Web hosting servers


                ·         Web hosting servers host the website and let users access it over the internet, while a CDN is used to speed up access to the website's assets.

                ·         A web hosting server delivers all the content to the user; if the distance between the user and the server is too great, the user has to wait until the data reaches their location. A CDN finds the server nearest to the user and delivers the data from there, so it is faster than a plain web hosting server.

                ·         Web hosting normally refers to one server, whereas a CDN distributes the content across many servers around the world; this is called a multi-host environment.



      Examples for free CDNs

                 ·         Incapsula
                 ·         Photon by Jetpack
                 ·         Swarmify
                 ·         Cloudflare

Examples for commercial CDNs

                 ·         AWS CloudFront
                 ·         Google Cloud CDN
                 ·         Microsoft Azure CDN
                 ·         Cloudinary
                 ·         MetaCDN
                 ·         CDN77
                 ·         KeyCDN

 Requirements for virtualization

                 ·         A processor that supports Intel VT-x (see the quick check after this list)
                 ·         Minimum 2 GB memory (the NAS reserves 1.5 GB memory): TS-x51 series
                 ·         Minimum 4 GB memory (the NAS reserves 2 GB memory): TS-x70, TS-x70 Pro, TS-ECx80 Pro, TS-x70U-RP, TS-x79U-RP, TS-ECx79U-RP, TS-ECx79U-SAS-RP, SS-ECx79U-SAS-RP and TS-ECx80U-RP series
                 ·         Minimum 550 MB of hard disk space
                 ·         Minimum of two Ethernet ports
                 ·         Supported NAS series: TS-x51/TS-x51-4G, TS-x70, TS-x70 Pro, TS-x70U-RP, TS-x79U-RP, TS-ECx79U-RP, TS-ECx79U-SAS-RP, SS-ECx79U-SAS-RP, TS-ECx80 Pro and TS-ECx80U-RP series
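On a Linux host you can quickly check whether the CPU advertises hardware virtualization support (vmx for Intel VT-x, svm for AMD-V). This is only a sanity check, not a full compatibility test for any particular NAS model.

    # A non-zero count means the CPU reports hardware virtualization flags
    grep -Ec '(vmx|svm)' /proc/cpuinfo

    # lscpu also summarizes it under the "Virtualization:" field
    lscpu | grep -i virtualization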



     Virtualization techniques
   
      
      Virtualization can be carried out at several different levels, each with its own techniques, implementations and available tools. Now let's talk about each of these levels.

      1.       Virtualization at Instruction Set Architecture(ISA) level

·         Every machine has an instruction set.
·         This instruction set is an interface between software and hardware.
·         Using these instructions, software can communicate with the hardware.
·         When virtualization is carried out at this level, we create an emulator which receives all the instructions from the virtual machines; for example, if a virtual machine wants to access the printer, that instruction is passed to the emulator.
·         The emulator will then interpret what type of instruction it is and then map that instruction to the Host machine's instruction and then that instruction will be carried out on Host machine and the results will be passed to the emulator and emulator will return it to the virtual machine.
·         This technique is simple to implement but as every instruction has to be interpreted before mapping it, too much time is consumed and performance becomes poor.

      2.       Virtualization at Hardware Abstraction Layer(HAL) level

·         Since performance at the ISA level is reduced by interpreting every instruction, virtualization at the HAL level was introduced to overcome that.
·         In this type we map the virtual resources with the physical resources.
·         We don't interpret every instruction but we just check whether it is a privileged instruction or not.
·         If the instruction is not privileged, we simply allow normal execution because already virtual and physical resources are mapped so accessing is simple.
·         But if the instruction is privileged, we pass the control to VMM (Virtual Machine Monitor) and it deals with it accordingly.
·         There may be many virtual machines running simultaneously on the same host system, so if privileged operations such as memory management or scheduling are not handled properly, the system can crash.
·         Even after many advancements, there are still certain exceptions which cannot be caught by this method, which is a drawback of this type of virtualization.

      3.       Virtualization at Operating System(O.S.) level

·         In virtualization at HAL level each virtual machine is built from scratch i.e. by installing O.S., application suites, networking systems, etc.
·         In cloud sometimes we need to initialize 100 Virtual machines at a single time, if we use virtualization at Hardware abstraction layer(HAL) level this can take too much time.
·         So to overcome this in Virtualization at Operating system level we share operating system between Virtual machines along with the hardware.
·         So we keep the base O.S. same and install only the differences in each single Virtual machine.
·         For example, if we want to install different versions of Windows on virtual machines (VMs), we keep the base Windows O.S. the same and install only the differences in each VM.
·         A drawback of this type is that you can install only those O.S.s in VMs whose parent O.S. family is the same; for example, you can't install Ubuntu on a VM whose base O.S. is Windows.

      4.       Virtualization at Library Level or Programming language level

·         When developers develop applications, they hide the coding details from the user by providing an Application Programming Interface (API).
·         This has given a new opportunity for virtualization.
·         We use Library Interfaces to provide a different Virtual Environment(VE) for that application.
·         The user is provided with an emulator with which they can run applications built for a different O.S.
·         An example of this is the WINE tool, which was mostly used by Mac users to play Counter-Strike 1.6, a game that was initially available only for Windows.
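A typical WINE invocation looks like the sketch below; the executable name is only an example. The Windows program runs unmodified while WINE maps its Windows API calls onto the native libraries of the host.

    # Run a Windows executable through WINE's implementation of the Windows API
    wine setup.exe

    # WINE's own configuration tool, itself a Windows-style program running on
    # the translated library layer
    winecfg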

     5.       Virtualization at Application Layer level

·         Virtual machines run as an application on the Host operating system.
·         A virtualization layer is created above the host operating system, and it encapsulates the applications from the underlying O.S.
·         When the applications are loaded, the host O.S. normally provides them with a runtime environment; the virtualization layer replaces part of this runtime environment and gives the virtualized applications a virtual environment instead.

       


Hypervisor


A hypervisor is a function which abstracts and isolates operating systems and applications from the underlying computer hardware. This abstraction allows the underlying host hardware to independently operate one or more virtual machines as guests, letting multiple guest virtual machines share the physical computer's resources such as memory space, processor cycles and network bandwidth. A hypervisor is sometimes known as a virtual machine monitor. Hypervisors are very important to system operators and administrators, because virtualization adds a crucial layer of management and control over the data center and enterprise environment.
The role of the hypervisor is broad. Storage hypervisors are used to virtualize all the storage resources in an environment into centralized pools that administrators can provision without worrying about the underlying devices. Storage hypervisors have now become a key element of software-defined storage.
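As an illustration of the management layer a hypervisor adds, here is a small sketch assuming a Linux host running the KVM hypervisor with the libvirt tools installed; other hypervisors (ESXi, Hyper-V, Xen) have their own equivalents, and the guest name is only an example.

    # List all guest virtual machines known to the hypervisor, running or not
    virsh list --all

    # Start a guest, then ask it to shut down gracefully
    virsh start guest01
    virsh shutdown guest01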


Difference between emulation and VMs

Virtual machines make use of CPU self-virtualization to provide a virtualized interface to the real hardware.
Emulators emulate the hardware entirely in software, without relying on the CPU being able to run the guest code directly or on redirecting some operations to a hypervisor controlling a virtual container.


VMs vs Containers/Docker






Virtual machine

       ·         A virtual machine is an emulation of a computer system. It makes it possible to run what appear to be many separate computers on one physical computer.
       ·         The OS and the applications draw their hardware resources from a single host server or from a pool of host servers.
       ·         Every VM requires its own guest OS, and the hardware is virtualized.
       ·         A hypervisor creates and runs the VMs and sits between the hardware and the VMs.

Benefits of a Virtual machine

·         All OS resources available to apps.
·         Established management tools.
·         Established security tools.
·         Better known security controls.
Disadvantages of a Virtual machine

·         VMs are less efficient than real machines.
·         A virtual machine inherits the weaknesses of its host machine.
·         When several VMs are running on the same host, performance may be hindered.

Containers

       ·         Containers virtualize only the OS; they do not virtualize the underlying computer the way a VM does.
       ·         A container runs on top of a physical server, and its host can be Linux or Windows.
       ·         Each container shares the host OS kernel, and usually the binaries and libraries as well.
       ·         The server can run multiple workloads with a single OS installation.
       ·         Because they are so lightweight, containers take only a few seconds to start (see the sketch after this list).
       ·         Linux containers and Docker are two common types of containers.
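A short sketch of how lightweight this is in practice, assuming Docker is installed on the host; the image and container names are only examples.

    # Start a container from the official nginx image; it shares the host kernel
    # and is usually up within a second or two
    docker run -d --name web -p 8080:80 nginx

    # List running containers, then stop and remove the one we started
    docker ps
    docker stop web
    docker rm web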

Benefits of a Container

·         Reduced IT management resources.
·         Reduced size of snapshots.
·         Quicker spinning up of the apps.
·         Reduced and simplified security updates.
·         Less code to transfer, migrate or upload when moving workloads.
·         Security and safety.

        


Difference between VMs and Containers

VMs                                        Containers
Heavyweight                                Lightweight
Limited performance                        Native performance
Each VM runs in its own OS                 All containers share the host OS
Hardware-level virtualization              OS virtualization
Startup time in minutes                    Startup time in milliseconds
Allocates required memory                  Requires less memory space
Fully isolated and hence more secure       Process-level isolation, possibly less secure







     Important things to know when you are using Git
    
     Difference between local and global configurations

      ·        If you work with multiple Git servers, you can use the global configuration (you can also configure Git on a per-folder basis). Global configuration values are stored in a file located in the user's home directory: ~/.gitconfig on Unix systems and C:\Users\<username>\.gitconfig on Windows.

      ·        By default, git config writes to the local level if no configuration option is passed. Local-level configuration applies to the repository in which git config is invoked, and its values are stored in a file inside the repository's .git directory: .git/config (see the sketch after this list).
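A quick way to see which file each setting comes from is the --show-origin option of git config.

    # Print every configuration value together with the file it was read from
    # (system-wide, global ~/.gitconfig, or the repository's .git/config)
    git config --list --show-origin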


        
      ·        Git uses a username to associate commits with an identity. The Git username is not the same as your GitHub username. You can change the name associated with your Git commits using the git config command (see the sketch below). The new name you set will be visible in any future commits you push to GitHub from the command line.
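A small sketch of setting the identity, globally for all repositories and locally for just one; the values are placeholders.

    # Global identity, written to ~/.gitconfig
    git config --global user.name "Your Name"
    git config --global user.email "you@example.com"

    # Override it for the current repository only, written to .git/config
    git config user.name "Work Account"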

       GIT Branch

          Within a repository you have branches, which are effectively forks within your own repository. Your branches have an ancestor commit in your repository and diverge from that commit with your changes. You can later merge your branch changes back. Branches let you work on multiple disparate features at once.
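A minimal branch-and-merge sketch; the branch names are only examples, and the main branch may be called master or main in your repository.

    # Create a branch for the feature and switch to it
    git checkout -b feature/login

    # ...work and commit on the branch...
    git commit -am "Add login form"

    # Merge the branch back into the main line of development
    git checkout main
    git merge feature/login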
