Our support fabric

If there is anything we understand better than technology and your business, it's the significance of support services. We have designed each of our support packages to provide exactly what you need.

A peek behind the scenes.

This is a typical rundown of how a ticket might be handled. As we said before, each ticket is different, and the one you raise may receive a completely different treatment, but you get the idea - right?

The incident is fictitious but not imaginary. Any resemblance to a real-life corporation, customer or brand is purely coincidental. However, all references to Globus Eight and its products are intended and accurate.

Read on about a specific incident we handled. It may not represent, in full or in part, the specifics of your potential incident, but it is a fair representation of our processes, our commitment and our approach to resolving your problems.

Warning: The content of the incident has been formatted for your comfortable viewing. All gory details about IP addresses, memory leaks, ssh 5.91.p1, the RDP protocol, boot image failure from 0x0008000 to 0x000c000 and other such 'viewer discretion' elements have been removed. No animals or good citizens of the world were hurt in the making of this incident (err... if you leave out the 04:23 am SMS to our Product Manager, who had hit the sack just a few minutes ago).



Example customer: You are a business hotel, and you have deployed around 250 G8 Eco App units with the G8 Media net solution for your rooms and lobby. Let's assume you identify a problem that qualifies as a severity level 1 (S1) situation.


--------- Incident strikes --------

09:52 hrs: Problem is found!! You find that none of the guests in their rooms are able to access the G8 Media server.

09:54 hrs: You log in to our helpdesk portal and raise a ticket.

09:54 hrs: The system allocates the ticket to an available support executive.

09:55 hrs: Email and SMS alerts are sent to the executive.

09:57 hrs: Severity determination. The executive logs into the system and determines this is a severity level 1 (S1) incident.

 --------- Action initiated --------

09:57 hrs: Communicate. This means we need to inform the relevant stakeholders. For S1 issues, we inform the highest level of stakeholders on both sides - our organisation and yours. The issue statement is sent through SMS to the Support Manager, CEO and Product Manager within Globus Eight, and to your pre-registered stakeholders. This becomes our "Issue Contact list" for as long as the incident remains open.
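The rule behind this step is simple: the severity level determines who goes on the contact list. As a rough sketch only - the role names and the `send_sms` helper below are illustrative placeholders, not our actual tooling:

```python
# Hypothetical sketch of severity-based notification fan-out.
# Role names and send_sms() are illustrative, not the real system.

SEVERITY_CONTACTS = {
    1: ["Support Manager", "CEO", "Product Manager",          # Globus Eight side
        "Customer Stakeholder 1", "Customer Stakeholder 2"],  # customer side
    2: ["Support Manager", "Customer Stakeholder 1"],
    3: ["Support Manager"],
}

def send_sms(contact: str, message: str) -> None:
    # Placeholder: a real system would call an SMS gateway here.
    print(f"SMS to {contact}: {message}")

def notify(severity: int, issue_statement: str) -> list:
    """Build the 'Issue Contact list' for this incident and alert everyone on it."""
    contacts = SEVERITY_CONTACTS.get(severity, ["Support Manager"])
    for contact in contacts:
        send_sms(contact, issue_statement)
    return contacts  # kept for as long as the incident remains open

issue_contacts = notify(1, "S1: Guests cannot access the G8 Media server")
```

The same list is then reused for every status update until the ticket is closed.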

09:58 hrs: A call is placed to you, seeking more information about the incident.

09:59 hrs: The first-level diagnostic kicks in. Every agent is equipped with an extensive knowledge base, FAQs and documented troubleshooting steps for different incidents - all available online and instantly. The agent goes into the Engineer's Troubleshooting Guide and picks up the procedure for your situation. Sometimes, when an out-of-band connection is not available, a phone call to you helps in executing these diagnostics.

 --------- Incident upgraded to problem --------

10:15 hrs: The Problem Management Process kicks in. It is established that the incident cannot be resolved at the first level.

10:15 hrs: The problem management process includes setting up a phone bridge (sometimes we also use Google Hangout :) ) - a "Hot Call huddle" - with members from Product Management, Tier 2 engineers, the Account Manager, the Customer Care head and the service engineer, anchored by a "Problem Manager". Solutions are discussed and each member is assigned work guidance.

10:25 hrs: Communicate. The hot huddle call is over, and while the others go back to work on the discussed solution, the Problem Manager updates the pre-determined stakeholders on the current situation. In this case, since it is S1, everyone on the above contact list is updated with the current status. The next hot huddle is scheduled in 30 minutes (since this is S1).

11:00 hrs: Hot huddle call 2 is initiated. The team reports on actions taken and their impact on the solution. It seems we need support from the Tier 3 specialist at our hardware supplier. A call is placed to reach the pre-determined Tier 3 specialist. He is busy (this is real life, right?), so a call is then placed to the alternate (internal) Tier 3 specialist for this competency, and he joins the call.

11:23 hrs: Based on the actions implemented by the respective parties, we are able to revive the system.

11:29 hrs: The first-level checklist confirms all systems are working OK. Sample test cases confirm things are back to normal. A "Closure huddle call" is called for.

 --------- Issue closure proceedings initiated --------

11:40 hrs: The huddle call convenes. Checkpoints are discussed and confirmed. Technical open loops, such as open files and test configuration items, are closed. Security checklist completion is confirmed: all password access opened during this work is closed, and file permissions, configuration changes and associated devices (firewall ports etc.) are restored.

11:49 hrs: The system is declared "Under Observation", and Severity Level 1 is reduced to Severity Level 2. The under-observation period is defined as 60 minutes.
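The "under observation" step can be thought of as a timed state: severity is lowered one level, and the incident becomes eligible for closure only if the system stays healthy for the whole window. A minimal sketch - the class and field names are our illustration, not the actual tooling:

```python
# Illustrative model of the under-observation window; names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    severity: int
    observation_started: datetime = None
    observation_window: timedelta = timedelta(minutes=60)

    def put_under_observation(self, now: datetime) -> None:
        """Reduce severity one level and start the 60-minute observation clock."""
        self.severity += 1  # higher number = lower severity, e.g. S1 -> S2
        self.observation_started = now

    def can_close(self, now: datetime, system_healthy: bool) -> bool:
        """Closable only after the full window elapses with the system healthy."""
        if self.observation_started is None:
            return False
        return system_healthy and now - self.observation_started >= self.observation_window

t0 = datetime(2024, 1, 1, 11, 49)
incident = Incident(severity=1)
incident.put_under_observation(t0)                                    # S1 -> S2
incident.can_close(t0 + timedelta(minutes=60), system_healthy=True)   # True
```

In the timeline above this is exactly the 11:49 to 12:49 gap: the engineer checks back with you only once the window has elapsed.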

11:49 hrs: Communication is sent out to the contact list: the system is now under observation.

12:49 hrs: The system engineer connects with the customer and establishes that the system continues to run. Based on this confirmation, the incident is closed. The issue is resolved and the ticket is closed.

--------- Incident closed, However, our work continues --------

13:00 hrs: Customer confirmation is complete. Communication is sent out to the established contact list.

11:00 hrs (Day +1): An S1 incident requires us to do a root cause analysis (RCA). The service engineer creates one and submits it to the Head of Support Services for approval.

13:30 hrs (Day +1): The RCA is approved, and a set of recommendations is sent to you as the customer and to the G8 support team.

17:00 hrs (Day +1): The RCA is incorporated into the knowledge base, with an appropriate security classification. The problem is now truly resolved and closed.

  --------- Issue finally closed from Globus Eight perspective  --------

