Tuesday 3 January 2012

Mobile Market and Network Planning - Part 4


1.4         Shift in Focus from Quality of Networks to Quality of Service


·         Mobile operators in advanced economies have penetrated about as far into the marginal user population as they currently can – growth must come from taking market share away from other carriers while increasing average revenue per user (ARPU) through new services and new pricing plans. One of the ways that operators have always differentiated themselves is the quality of their networks. Network outages, dropped or uncompleted calls, and poor data transmission speeds are typical issues that cause subscribers to switch carriers. Meanwhile, end-user expectations are rising rapidly while consumer barriers to switching have been lowered by regulator-mandated number portability among carriers. Operators are therefore focusing on quality of service as one of the key drivers of end-customer acquisition and retention (see Figure 3 below).

Figure 3: Growth in mobile data traffic is driving investments in mobile broadband and 4G

Monday 19 December 2011

Mobile Market and Network Planning - Part 3


1.2         Lagging Revenue Growth

·         To date, operators have been unsuccessful in translating data growth into revenue growth, leaving top-line growth significantly below traffic growth. In developed regions, revenue per gigabyte is forecast to fall from USD23.21 in 2010 to USD4.27 by 2015 (the implied annual decline is worked through in the sketch after this list). Moreover, regulatory pressure to decrease roaming prices and mobile termination rates is having a negative impact on voice revenues.
·         Operators are seeking new sources of revenue by replacing current flat-rate pricing with data caps and tiered pricing, and by introducing a host of new services. New video services, in particular, are popular, but they consume vast amounts of capacity, requiring operators to add capacity faster than their revenue is growing.
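
As a quick check on those revenue-per-gigabyte figures, here is a minimal sketch in Python. The endpoints are from the bullet above; the assumption of a smooth, constant compound rate of decline between them is mine:

    # Implied annual decline in revenue per gigabyte, assuming a constant
    # compound rate between the 2010 and 2015 forecast endpoints.
    start, end, years = 23.21, 4.27, 5            # USD per GB, 2010 and 2015
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied change: {cagr:.1%} per year")  # about -28.7% per year

In other words, revenue per gigabyte would have to fall by nearly a third every year to follow that trajectory.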

1.3         Rising Operator Margin Pressure

·         The combination of dramatic traffic growth and slow revenue growth is putting pressure on mobile operator profit margins (see Figure 2 below). Operators are therefore looking for increased efficiency in capex and opex spend through the optimisation of their networks.

Figure 2: Comparison of the network economics of mobile networks

Monday 12 December 2011

Mobile Market and Network Planning - Part 2


1           Key Market Trends


1.1         Exponential Growth in Wireless Data


·         Today’s mobile market is undergoing a dramatic transformation driven by the growth in mobile broadband and data-hungry devices such as smartphones, e-readers and iPads, as well as by the introduction of new services, especially video, which consume vastly more bandwidth. Global traffic is forecast to grow at a 48% CAGR from 2010 to 2015, from 225PB per month to 1603PB per month (source: Analysys Mason, 2010). This is putting increasing pressure on operators’ network capacity. An example forecast for Western Europe and North America is shown in Figure 1 below; a similar growth pattern is forecast for other regions of the world.
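
As a quick consistency check on those figures, a minimal sketch in Python (the endpoints and growth rate are from the forecast quoted above):

    # Verify that 225 PB/month growing at a 48% CAGR for five years
    # reproduces the forecast 1,603 PB/month for 2015.
    traffic = 225.0                      # PB per month in 2010
    for year in range(2011, 2016):
        traffic *= 1.48                  # 48% compound annual growth
        print(year, round(traffic))     # 2015 lands near 1,600 PB/month

The compounding lands within rounding distance of the quoted 1603PB per month, so the endpoints and the CAGR are mutually consistent.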

Figure 1: Mobile Content and Applications revenue by service category and forecast data traffic, 2009–2015 (source: Analysys Mason, 2010)


[Figure 1 comprises two panels: Western Europe and North America.]
Monday 5 December 2011

Mobile Market and Network Planning - Part 1

In 2010, as part of some custom work for a network planning client, I pulled together some information on the overall state of mobile networking and of the mobile network planning business. Now that it is a bit dated, I decided I could release some of this information in my blog. More detailed and updated information is available from Analysys Mason reports (see www.analysysmason.com).

This is part 1 of a multi-part tutorial on the state of the mobile industry and of mobile network planning.


The mobile telecommunications industry is undergoing a profound transformation driven by a number of underlying industry trends:

·         Tremendous growth in data usage: The rapid growth in data-intensive devices such as smartphones and in mobile broadband is driving an exponential surge in data usage, and networks are becoming increasingly congested in high-usage urban areas as a consequence.

·         Lagging revenues: Top-line revenue growth is failing to keep pace with this data explosion given the prevalence of “all-you-can-eat” pricing schemes and difficulties in charging for data-hungry applications. The problem is compounded by the negative impact on voice revenues of regulatory reductions in roaming charges and mobile termination rates.

·         Margin pressure: Network costs are outpacing revenue growth and not all forms of data are currently profitable, escalating the pressure on operator profit margins and raising the need to justify returns on capex investments and rein in opex costs.

·         Rising end-user expectations: End-user expectations around quality of service and applications are rising in an industry with low customer switching costs; this places competitive pressure on operators to ensure their networks have sufficient bandwidth to deliver high-quality data services.

·         Increasing network complexity: The mix of vendor equipment and mobile technologies, compounded by the roll-out of next-generation 4G mobile technology due to take place over the next few years, is rendering networks increasingly complex to manage. This is exacerbated by the trend towards network sharing as an effective means of driving down operator costs.

·         Network management outsourcing shift: The rise in network complexity is driving operators to outsource network management and optimisation.

·         Need for effective network management, end-to-end solutions and self-organising networks: The trends outlined above are raising the demand for efficient and effective network planning, management and optimisation. Moreover, the boundaries between planning, optimisation and the delivery of quality of service are blurring, creating a currently unmet need for holistic end-to-end solutions as well as a vision of “self-optimising / self-organising network” (SON) solutions as the future of the industry.

Sunday 10 July 2011

Decreasing BSS/OSS “integration tax”: Part 3 - SOA and data models complete the current picture




This blog post is the third, and last, in a series of short articles on the changes in BSS and OSS architectures arising from the changing underlying data communications, middleware, and data modeling software technology. In the last posting, I described how IP and middleware dropped the cost and complexity of integration by an order of magnitude.

Around the turn of the century, a new concept (which was really an old concept) appeared on the software scene - what became known as Service Oriented Architecture (SOA). The concept is simple: design software so that its functionality is available not only to a human via its associated user interface, but also to another system via a simple, stable interface. In reality, the concept is very similar to the time-honoured remote procedure calls (RPC) of yesteryear (send in some data, specify which predefined procedure you want to run, and get a response back). And the interface came to be based on human-readable code - XML (eXtensible Markup Language). More importantly, the SOA model meant having a design goal of not breaking the interface with later generics (there are also some technical aspects of the interface that make it easier to add data elements without breaking it).
When SOA is combined with the use of a standard meta-model for the data, it becomes even more useful. And when even more specificity is supplied, as in the use of the TMF's Shared Information and Data (SID) model, interfaces become easier still to build and maintain. The SID has been used by many OSS and BSS system vendors and is the preferred data model for nearly all systems integrators and ISVs.
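
To make the "don't break the interface" point concrete, here is a minimal sketch in Python (the element names are hypothetical, not taken from any real vendor interface or from the SID itself): a consumer that looks up only the elements it knows will keep working when a later generic adds new data elements.

    import xml.etree.ElementTree as ET

    # Response from the first generic: the consumer knows 'id' and 'status'.
    v1 = "<serviceOrder><id>42</id><status>active</status></serviceOrder>"

    # A later generic adds 'bandwidth' without renaming or removing elements.
    v2 = ("<serviceOrder><id>42</id><status>active</status>"
          "<bandwidth>100M</bandwidth></serviceOrder>")

    def read_order(xml_text):
        # Look up only the known elements; unknown ones are simply ignored.
        root = ET.fromstring(xml_text)
        return root.findtext("id"), root.findtext("status")

    print(read_order(v1))  # ('42', 'active')
    print(read_order(v2))  # same result - the new element breaks nothing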

These innovations of the past three decades have brought the cost of interfacing modern software systems down from large fractions of a million dollars to a few tens of thousands of dollars, and have enabled multi-system architectures and automatic flow-through of information, orders, and network control that were unheard of when I first began my career.

Will this revolution cause the industry to move away from the current "best of suite" architectures, provided by do-it-all vendors such as Amdocs and Oracle, and towards best-of-breed architectures? Perhaps. It certainly provides some new options and gives the systems integrators, whose influence on BSS/OSS architectures has waned in the last five years, a new set of systems to put into their preferred suites.


Monday 16 August 2010

Decreasing BSS/OSS “integration tax”: Part 2 – IP and middleware make things better


This blog post is the second in a series of short articles on the changes in BSS and OSS architectures arising from the changing underlying data communications, middleware, and data modeling software technology. In the last posting, we looked back on the early days of OSS and BSS – the 1970s and early 1980s, when only huge systems, architected by the same group, and implemented and tested by a single vendor, could hope to work together.

During the 1980s and 1990s, while the technical world was focusing on the X.25 standard for system interoperability (and good old Bell Labs, never an organization to settle for ‘good enough’, created its own improved version, BX.25), the TCP/IP protocol was coming into its own. It was not endorsed by any standards organization. It wasn’t owned by anyone. The group that agreed on it documented the agreements in something they called RFCs – ‘Requests for Comments.’ But it was simple. It was open. And it could be easily implemented on a wide variety of hardware. It became the de facto worldwide standard, essentially solving the problem of system interoperability at the lower levels of the protocol stack.
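
To illustrate just how simple, here is a minimal sketch using Python's standard socket library – two processes exchanging bytes over TCP/IP with a handful of calls that work unchanged on essentially any hardware or operating system (the port number and message are arbitrary):

    import socket, threading

    # Bind and listen on a local port before the client connects.
    srv = socket.create_server(("127.0.0.1", 9090))

    def serve():
        # Accept one connection and echo back whatever arrives.
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

    threading.Thread(target=serve, daemon=True).start()

    # Client side: connect, send, receive - no vendor protocol stack needed.
    with socket.create_connection(("127.0.0.1", 9090)) as c:
        c.sendall(b"hello from another system")
        print(c.recv(1024).decode())
    srv.close()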

At the same time, ‘middleware’ raised its head in the BSS/OSS industry. Vendors such as TIBCO had software ‘busses’ that could be implemented to bind software systems together. All you had to do was build to their API (and have an agreed mapping of the internal data models of the various systems to an intermediate language, with interfaces to the API implemented in every system) for systems to ‘talk’ to each other. The TMF created its NGOSS architecture around this ‘bus’ concept, and many modern systems were built around one of the busses available in the market. This made integration MUCH easier and cheaper: for a mere half a million to one million dollars, you could have a systems integrator interface two systems together (a sketch of the pattern follows below).
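
A minimal sketch of the pattern in Python (the bus, systems, and field names are all hypothetical – real busses such as TIBCO's were vastly more capable): each system writes one adapter to a shared intermediate representation, and the bus routes messages between them, so n systems need n adapters rather than up to n(n-1)/2 point-to-point interfaces.

    from collections import defaultdict

    class Bus:
        """Toy software bus: systems publish and subscribe by topic."""
        def __init__(self):
            self.subscribers = defaultdict(list)
        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)
        def publish(self, topic, message):
            for handler in self.subscribers[topic]:
                handler(message)

    bus = Bus()

    # The provisioning system's adapter consumes the shared representation.
    bus.subscribe("customer.update",
                  lambda m: print("provisioning sees:", m["customer_id"]))

    # The billing system's adapter maps its internal record outward.
    def billing_publish(internal):
        bus.publish("customer.update", {"customer_id": internal["acct_no"],
                                        "name": internal["cust_nm"]})

    billing_publish({"acct_no": "A-1001", "cust_nm": "Acme Telco"})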

The effect was that systems that were formerly ‘islands’ of automation started to be hooked into each other, creating ‘flow-through provisioning’ and an end-to-end ‘trouble process,’ among other workflows. So, if you wanted a good multi-OSS or BSS system, you would go to a systems integrator (SI), choose a bus, choose the new OSSs you wanted to implement, decide which legacy systems needed to be hooked in, and sit back and wait a couple of years for the SI to do its magic.

Mark H Mortensen

Monday 19 July 2010

Decreasing “integration tax” has radically changed BSS and OSS architectures and business – and will change them more in the future [Part 1]

This blog post starts a series of short articles on the changes in BSS and OSS architectures arising from the changing underlying data communications, middleware, and data modeling software technology. Today we look back on the early days of OSS and BSS in the 1970s and early 1980s.


During my 30 years in the telecoms industry, I have seen a radical change in the cost of developing interfaces among BSSs and OSSs. Before the wide adoption of standard data communications protocols such as X.25, the development costs of interfaces between systems were easily $500,000 or more. Multiply that by the old magic 2.7 to get the cost to the customer, and it meant well over a million dollars. The adoption of data communications interfaces such as X.25 and the old IBM FCIF (used in TIRKS and other IBM 3270-era systems) dropped the costs somewhat, but it only really attacked part of the problem.

I recall once, in my testing-systems days (SARTS, for the old-timers out there), a US RBOC customer who wanted us to implement an X.25 interface on the system. When quizzed as to why they wanted it, they replied that they wanted to ‘plug it into’ another system and exchange data in real time. They were quite disappointed to be informed that implementing a data communications protocol in two different systems did not mean that the systems could ‘talk’ to each other – you needed to actually do some more work. We also had another problem in those days: the X.25 protocol stack implementation was different in every computer hardware system, so every time you offered the software on a different hardware platform, even from the same manufacturer, you had to re-implement the protocol stack.


The effect on the architecture was that only huge systems, architected by the same group, and implemented and tested by a single vendor, could hope to work together. Interfaces between disparate systems were few – and very expensive to build and maintain.


Next time – IP and middleware make things better.


(As usual, comments welcomed.)


Mark H Mortensen