
Consider the Escalator

Michael Goldberger

Submitted for your consideration, the escalator - one of history's great products.

Nathan Ames, of Saugus, Massachusetts, is generally credited as the first inventor of the escalator. Although he never actually built one, in 1859 he was issued U.S. patent #25076 for "revolving stairs." Working escalators of various designs were eventually developed and commercialized by the Otis Elevator Company. Until recently, some of those original escalators could still be found in the Boston subway system (how's that for a product life-cycle?).

Estimates from 2004 suggest there were over 30,000 escalators in use in the United States, used more than 90 billion times per year. Clearly a successful and widely used product.

Used essentially to move foot traffic through public spaces, escalators can be found in department stores, shopping malls, airports, transit systems, convention centers, hotels and office buildings (a platform for many product variants across multiple market segments!).

Escalators efficiently move pedestrians (everyone goes the same direction on an escalator) and provide multiple benefits in each of the above segments all over the world. Some key features that differentiate the escalator from its primary competitor (the elevator) include:

  • Escalators carry large numbers of people
  • Escalators fit in the same spaces as stairways
  • Escalators have no waiting

And of course, our favorite feature of the escalator is that when it fails to operate for any reason, it simply becomes a staircase (the very product it was intended to replace). How many products can do that?

Video: https://www.youtube.com/watch?v=FSIkjNaICsg

One more interesting anecdote about escalators: although the verb "escalate" has since entered popular usage, the word "escalator" came first. It was invented and trademarked by Charles Seeberger in 1900 to mark the product launch at the Paris Exposition Universelle. How's that for branding?

By Susan Loconto Penta 13 Dec, 2021
It’s that time of year again – at least for those firms that operate on a calendar year – and by that time, I’m talking about the time for planning. And budgeting. And reflecting. And adjusting. In some ways, it is harder this year than last: in the thick of the pandemic, everything was on hold, locked down and constrained. The picture was bleak but the options were few, and that made product planning more straightforward since there were fewer variables. Now, however, the situation is fluid, varying by company, industry and region, making it hard to know whether 2022 plans and roadmaps should contemplate investment for innovation and growth or reflect a more conservative, incremental mindset. I imagine many of you reading this will feel it is anyone’s guess.

Regardless of where you land, I want to remind – and encourage – each of you to include as part of your process an annual review of the effectiveness and output of your product organization relative to your current product portfolio and 2022 plans. I recommend doing this annually because product is the one area in a company where teams and players need to evolve with the products through their lifecycles, and this requires regular tuning. While the roles on a product team may be consistent over the life of a product, the importance, roster of tasks and type of individual/personality needed to be effective in a role will vary with time.

This does not have to be a complex analysis, but it should be a complete one. My back-of-the-envelope approach goes something like this:

1. Start by making a list of all of the people that serve in a role on a product team. Here, you want to start with your roster of resources and their associated role only.
2. Next, identify at least one key skill strength and one key personality asset for each individual. To make this quick, easy and, most importantly, objective, I like to leverage the “three words” tactic: ask yourself “What are the 3 key skill strengths that come to mind when I say this person’s name?” and then “What are the 3 key personality assets that come to mind when I try to describe this person?” You may not have 3 for everyone in each category, which is just fine since 1 will suffice. It is helpful to add the product(s) each person is associated with on this list.
3. Next, get your product list and, for each product, make note of the key roles on each product team (not individuals, but role names). Not every product will have the same set of roles – some will have more, some will have less.
4. Then, for each product-role combination, identify both the top skill and top personality asset that are key to success in the role, for that product, in its market at this time.
5. Now, for every product, go through a quick exercise and add your assessment of the “match” between the top skill strength/personality asset required to be successful in the role (in this market at this time) and the skill strength/personality asset of the individual in the role. This doesn’t have to be a complex rubric – you can use a basic 5-point scale. Do not overanalyze or think too hard. Rather, let your visceral reaction provide the input.
6. Finally, take stock of what you see, what you can learn and where adjustments can be made. Where are there mismatches? Are there patterns that can be observed in terms of more or less alignment between what is needed and what you have?

You now have a rudimentary inventory of people and roles, skills and needs, as seen through a "product" vs. an HR lens (both are important). You also have a basic assessment of how well tuned your product organization is to what is needed to be successful in achieving product goals (based on your match of the skills/personality assets required to be successful in each role for each product vs. what is there). Excel is your friend here as the tool for logging and analyzing: once you have completed this exercise, you can use basic functions like a pivot table to yield additional, interesting insights (a code sketch follows at the end of this post). To make things easier, I have included a link here to the tool we developed at MIDIOR for just this purpose in our consulting engagements, known as our Product-Role-Resource Tuner. And of course, if you find yourself unsure of where to begin or in need of some assistance at any point in the process, please feel free to reach out to me directly. Click here to access the Product-Role-Resource Tuner.
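For those who prefer to iterate in code rather than a spreadsheet, here is a minimal sketch of the same match analysis. The roster data, column names and the simple averaging of the two ratings are illustrative assumptions, not the format of the MIDIOR Product-Role-Resource Tuner.

```python
# Minimal sketch of the product-role-resource "match" analysis described above.
# One row per product-role-person combination; the column names and the 1-5
# match scale are hypothetical, not the MIDIOR Tuner's actual layout.
import pandas as pd

roster = pd.DataFrame({
    "product":     ["Alpha", "Alpha", "Beta", "Beta", "Beta"],
    "role":        ["Product Manager", "Architect", "Product Manager", "Architect", "QA Lead"],
    "person":      ["Ana", "Ben", "Carla", "Dev", "Ema"],
    "skill_match": [5, 3, 2, 4, 4],   # 1-5: required top skill vs. person's top skill
    "asset_match": [4, 2, 3, 5, 3],   # 1-5: required personality asset vs. person's
})

# Overall match per row: a simple average of the two visceral ratings.
roster["match"] = roster[["skill_match", "asset_match"]].mean(axis=1)

# Pivot-table view: average match by product and role surfaces the patterns.
summary = roster.pivot_table(index="product", columns="role",
                             values="match", aggfunc="mean")
print(summary)

# Flag product-role combinations scoring below 3 as candidates for adjustment.
print(roster.loc[roster["match"] < 3, ["product", "role", "person", "match"]])
```

An Excel pivot table over the same three columns (product, role, match) yields the identical view, so use whichever tool your team already knows.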
By Michael Goldberger 08 Sep, 2021
Agility is generally considered a virtue. To that end, the ability to work independently of your vendors – meaning you don’t have to depend on your vendor for ALL situations that require access to your data – gives you greater agility. In practice, that sort of independence is the result of two factors:

1. The degree of access to all your data “allowed” by your vendor
2. The level of knowledge and skill to do anything with that access

As we described in last month’s post, access to your data can come in many forms. You may be able to get at your data through reports, queries and other vendor-provided tools, which are all forms of “allowed access.” But it is important to remember that the fund office is the “owner” and “custodian” of all underlying data, and that the vendor-provided tools may or may not provide access to everything that constitutes the complete data set. Even if you feel this level of access is not necessary (and perhaps you wouldn’t know what to do with it anyway), it is an important consideration that may provide options in unanticipated situations. You can think of it as a form of insurance against something going wrong with your vendor. I am not talking about database backups here – also critical – but rather about having access to and an understanding of the complete data set that serves as the foundation for your administration systems.

In some cases, if you ask your vendor for a set of data, they are likely to say “Sure, what do you need? We’ll put that in a file for you.” While that is certainly a form of access, unless or until you have set up a process where you’ve defined a request that covers all data elements, and you have a scheduled delivery of those files (e.g., once a month), you haven’t achieved what we would call data independence. That raises the question, “How do I know what to ask for?” The answer depends, but for most fund offices this would at a minimum include:

- All the individuals in the database with their unique system identifiers, including all available demographic information (name, address, dates of birth, marriage, death, etc.)
- All the contributing employers in the database with their unique system identifiers
- The full history of all contribution transactions, with appropriate identifiers that link to a person and an employer
- The full history of all benefit payments, with appropriate identifiers that link to a person
- The full history of all benefit applications, with appropriate identifiers that link to a person
- The full history of all benefit credits (e.g., pension credits) for each person, whether or not they were ever vested
- The relationships between members, dependents and beneficiaries (who is related to whom)
- For health and welfare funds, the full history of health eligibility for all persons in the database
- All configuration and setup data (e.g., lists of code names and values, tables of constants used within formulas, etc.)

If you don’t have easy access to your complete data set (which would include these elements), it may be time to work with your vendor to set it up. Equally important to “access” are the knowledge and skills to use the data. The only way to know that you really have “everything” is if you can decode the details. The knowledge component implies that, even if it is not formally documented, you understand the data model that is used to support and organize your data. The skills component means that you have the ability (if necessary) to assemble the pieces (data elements) and make sense of them (a minimal sketch follows).
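To make the “assemble the pieces” idea concrete, here is a minimal sketch under assumed conditions: the vendor delivers monthly CSV extracts, and the file names and column names shown are hypothetical. The check simply confirms that every contribution row links back to a member – one small test of whether the delivered files really cover the complete data set.

```python
# Minimal sketch of "assembling the pieces": join two hypothetical monthly
# vendor extracts and check that every contribution links back to a member.
# File names and column names are illustrative assumptions, not a vendor's format.
import pandas as pd

members = pd.read_csv("members_2021-09.csv")              # unique id: member_id
contributions = pd.read_csv("contributions_2021-09.csv")  # links via member_id

# Contribution rows whose member_id has no match in the member extract are a
# red flag that the delivered files do not cover the complete data set.
merged = contributions.merge(members[["member_id"]], on="member_id",
                             how="left", indicator=True)
orphans = merged[merged["_merge"] == "left_only"]
print(f"{len(orphans)} contribution rows reference a member not in the extract")
```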
As we discussed in a previous post, you can probably do a lot using Excel to extract value from your data if you have mastery of the underlying components. Given what I have just described, I will close with a few questions to ask and answer when assessing your level of data independence from your vendor(s):

- Do you have a clear understanding of how your vendor stores and manages your data? Where is it physically, what sort of database is used and how large is the entire data set?
- If you have a need for a new report or extract, can you get it yourself or do you need to ask your vendor to do it for you? If you are dependent on your vendor, how long does it take to get that turned around?
- Does anyone on your team have a full understanding of the underlying data model? What are the base tables and do you know how they are linked together? Can you create a diagram?
- If you can receive extracts of data, do you have a push or pull environment? Push: the vendor sends you a file when they can, or according to a pre-defined schedule. Pull: you can grab up-to-date data as you need it.

If you can answer all these questions AND are satisfied with your answers, then you can safely assume you have sufficient data independence, which is a key factor in your ability to be agile and also contributes to moderating any risk related to your data.

10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You have independent access to your data
9. Everyone on your team is cognizant of the value of good data and the long-term costs of sloppy data
10. You leverage your data to support operations AND to support long-term decisions
By Susan Loconto Penta 31 Aug, 2021
Where do you find the most innovative products? Where there are the most significant problems, of course. In the graduate courses that I teach, I often use one of the fabulous TED Talks from some years back by South African journalist Toby Shapshak to explain the real engine of innovation: problems. Shapshak offers an image of the world that visually depicts electricity, and he maintains that where there is electricity, there is little to no “real” innovation. He asserts that the “dark continents,” where there is limited electricity, are where major innovation opportunities live because there are real problems characterized by real pain. If you have not watched any of his talks, I encourage you to do so (you can view one of his early TED Talks here). Which brings me to the point of this perspective: as you are considering the future of your product portfolio and how to ratchet up innovation, return your team’s focus to the problems. What are they? Where are they? Who has them? How many are there? How painful are they? If you can really understand problems through the eyes of those that have them, it will give you a good read on how valuable they will be to solve and, ultimately, how to do some truly innovative development.
By Michael Goldberger 19 May, 2021
At this point in our data series, you might be wondering if you will ever get to hear about using the data you have been so carefully maintaining. Well, you are in luck: in this post I want to begin the conversation on using your data to provide insights, drive decisions and tune your business processes.

Believe it or not, the data that lives inside your benefits administration system may not be as accessible and useable as you would think. Sometimes the simple act of getting the data becomes a project in itself, burdened by complex reporting tools or constrained access mechanisms. But not to worry: in most cases, you (or your IT experts) should be able to grab your data and put it into a tool that you know how to use – and the lingua franca here is typically Microsoft Excel. Excel is a powerful medium for sifting, sorting, reformatting, charting and generally putting data into a form that answers your questions or tells you a story. For this reason, I always recommend having access to one or two “super users” – either on your internal team or on staff at a vendor with whom you have a close relationship. Saying “I can’t – or my team can’t” when it comes to Excel is no longer an acceptable answer if you work in this industry (and if somehow this is where your fund office lands, there are a variety of free online resources for getting your team up to speed).

Even if your core systems include standard reports or reporting tools, having the capability to use Excel as an additional way to analyze and leverage your data will prove valuable in the long run. We find that the first batch of data or reports you generate typically spawns more questions than answers, so rapid iterations are often needed to get to those answers. This is almost always easier in Excel than in reporting tools embedded in core systems. Ultimately, you may find that there are certain data or reports that you will want to have available as “standard” in your core system, and in this case, iterating in Excel can also help you “define the requirement(s)” for that information. As a side note, if for some reason you cannot get your data out of your core system(s) and/or you cannot put your data into a spreadsheet, it is a leading indicator that it is time to make some changes.

Understanding your options for getting at the data will help you determine whether or not you need external assistance or additional expertise, so I have outlined the 5 main approaches below:

- Reports: Historically, reports have been hard-coded into systems, with hard-to-change definitions of the data set and the page formatting. The nice thing about these types of reports is that they are typically easy to run and print in a format that is suitable for framing. Unfortunately, this type of formatted report is not so suitable for data analysis. If your system only allows you to output reports to a printer or a PDF file, that is a limitation in terms of accessing your data.
- Exports: Exports usually allow a user to take the information that is shown in the user interface and save that data as a file (typically Excel or CSV format) which can be opened in another program. Exports are nice in that they allow you to save data, but they may be limited because you only get the data shown on the screen.
- Queries: Some systems have a query tool that lets users define a data set (based on a choice of fields to include and criteria for filtering those fields). The result of a query can usually be exported to an easy-to-use file – essentially an advanced form of an export. The challenge with queries is that they often require a degree of expertise with the particular tools and syntax of your vendor.
- Database Access: This is the most powerful – and most feared – approach to getting at your data. In the world of open systems, it is not unusual to have direct access to the data tables that form the core of your system. With an appropriate set of tools (and in fact, Excel is one of those tools) and someone that knows how to use them, you can create your own extracts that utilize the raw data in your system. Asking about direct access to the database, or even documentation of the database, is a good test of how “open” your vendor really is to this method.
- Data Mashups: A relatively new, but potentially powerful, toolkit that happens to live in Excel! Mashups are an approach that lets you take data from multiple systems and combine it, with powerful results. For example, maybe you have separate data sources for health benefits vs. retirement benefits but you would like to compare names and addresses across the 2 systems – that would require a mashup (see the sketch after the checklist at the end of this post).

Once you have your chosen method(s) for accessing data and can get it into a useable format, you will want to make it easily accessible for anyone who can benefit from it. Newly created data sets or reports should be stored in a shared file location so that access can be set up for “self-service” – essentially instant access with zero waiting period. In particular, your users should not have to rely on printing, copying and pasting or rekeying to get a view or report that is useful. If that is happening, then something about your data isn’t working and you should look for the root cause. For more about how to unlock the information in your core system(s) through better data access or what it could look like for your fund office, drop me a line and I am happy to chat. Happy reporting!

10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions
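As promised above, here is a minimal sketch of the mashup example, written in Python with pandas rather than Excel’s own mashup tooling; the file names, the shared key and the column names are all assumptions for illustration.

```python
# Minimal sketch of the "data mashup" example above: compare addresses for the
# same person across two hypothetical extracts (health vs. retirement systems).
# The file names, the shared ssn key and the column names are all assumptions.
import pandas as pd

health = pd.read_csv("health_members.csv")          # columns: ssn, name, address
retirement = pd.read_csv("retirement_members.csv")  # columns: ssn, name, address

# Combine the two sources on the shared key; suffixes mark each system's copy.
both = health.merge(retirement, on="ssn", suffixes=("_health", "_retirement"))

# Rows where the two systems disagree on the address are candidates for cleanup.
mismatch = both[both["address_health"].str.strip().str.lower()
                != both["address_retirement"].str.strip().str.lower()]
print(mismatch[["ssn", "name_health", "address_health", "address_retirement"]])
```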
By Susan Loconto Penta 30 Apr, 2021
In New England, where MIDIOR is headquartered, signs of spring are everywhere, and combined with the prospect of moving outside and beyond our inner circles, hope and excitement seem palpable. With the constraints easing, I have finally started to think about how product teams will need to evolve to continue delivering innovative products that keep them in the game.

The COVID-19 pandemic both spotlighted and amplified the challenges that global product teams have faced for some time. From different time zones and geographies to remote work and the rise of video, most of us can now appreciate why distributed teams constrained to the “2 dimensions” of a screen are rarely as innovative or productive as those that are collocated. Pre-COVID, we could count on at least a few in-person, shared experiences to bind a product team together into something that is greater than the sum of the individual parts. In our virtual reality, we are meeting- and schedule-dependent – which is neither creative nor fluid – and the equivalent of stopping by someone’s office to chat about an idea is simply less likely. While I will admit that task-oriented work, especially when served up on a platter and wrapped in a bow, is often easier and more efficient to execute when working remotely, generative thinking and creative work are almost impossible to sustain.

Our new reality requires us to rethink how to sponsor innovation, and I suggest that we reconsider our habits and approaches rather than just try to do things the same way, virtually. We can start by restating the problem: how do you develop new, innovative products and services with a team that is virtual much of the time? How do you create urgency? Inspiration? Momentum? Is the workflow different? At least one key is to be able to quickly get teams to understand context and align on outcomes without being together or with a customer. I find that this requires an ability to make the problems products will address “come alive” in a virtual, 2-D setting, which depends on the ability to set context through telegraphic visuals, pictures, videos and stories. This requires a different sort of creative skill set, along with a competency with the tools and a laser focus on things like production quality that in the past didn’t really matter (think about that Zoom call with the background noise that distracted you, or the PowerPoint that lost your attention… and since you were not in a room with others, you did not have the pressure to stay focused).

In today’s Zoom culture, if you want to nurture active collaboration, you will need to engage team members via the screen to elicit reactions and advance thinking, versus depending on body language as the primary input to steer a dialogue or peer pressure to maintain focus. What is needed will vary with the product – especially its complexity and maturity – since making a new, unsolved problem come alive requires something very different than a problem that has been successfully solved for years. Supplementing existing teams with these new skills, as well as conducting professional development around learning styles, communication strategies and visualization tools, will all help to evolve conversations and therefore the generative thinking that spawns innovation. The successful, innovative teams in our virtual future are likely to have skills and tools that in the past might have been more common at a movie studio. As a result, we should be rethinking our job descriptions, recruiting plans and retention strategies to attract this kind of professional.
By Susan Loconto Penta 10 Nov, 2020
For many of our clients, Fall means planning and budgeting activities get serious. For anyone with product responsibility, it is a time to challenge market assumptions and reconsider the future product roadmap. It is also a time to evaluate the teams and individuals associated with each product for fit with personalities, interests, capabilities and core skills. This latter aspect of the annual planning process is especially important: companies we consider to be leading the pack evaluate and adjust their product teams each year, even if it is not a formal part of the planning process. This year, an additional dimension to that evaluation has emerged: the ability to advance a product in the face of an unexpected crisis AND a major shift in the nature of work.

One of our tenets for optimal product performance is that the team intertwined with the product must evolve in tandem with the product as it moves through its lifecycle. A new, innovative product needs someone who is able to evangelize and succeed with the missionary sale, while a mature, cash-generating product needs someone who can tweak the fine details with incremental improvements to extract maximum value. But, new this year, we also need individuals who can flourish in their product roles in a largely, if not entirely, remote environment. This is not a given. Regardless of whether individuals worked from home in the past, moving the needle to a place where meetings are all done in 2 dimensions, on a screen (think Zoom, Teams, etc.), can present a major challenge.

The Agile Manifesto emphasizes “individuals and interactions over processes and tools,” along with face-to-face conversations with the motivated individuals associated with any given project. On the surface, it seems easy enough to transition to a virtual dialogue since we are still seeing faces in our discussions. But it has proven difficult to do the generative work in a virtual environment where body language is not visible, whiteboards are strictly virtual and conversations are not fluid and impromptu, instead requiring scheduled Zoom meetings. Even as we are grateful for the technology infrastructure that keeps us connected and visible to each other, it is important to acknowledge that driving a product to meet its goals means creating team momentum and aligning activities with objectives, virtually. Therefore, it is incumbent on leaders and managers to recognize who is good at remote work, understand why they are good at it and leverage what is learned to evolve the way product work is done and product teams are configured.

So this year, when you take a hard look at your product portfolio and annual plans, assess the teams associated with each product against the backdrop of a remote environment and tune accordingly.
By Michael Goldberger 09 Nov, 2020
Today’s post in our series on data focuses on the importance of having the tools and processes in place for continually identifying and correcting any gaps or flaws so that your data is always accurate. At this point in our series, you know what data you have, where it comes from and where it lives. You can also easily figure out what you don’t have but do need, which means you know which data needs to be corrected and which gaps filled in.

Since your data is always changing (new data is entered, existing data is updated), no one’s data is ever perfect at all times. Data is like a river; it’s always flowing. Just because it was all correct yesterday doesn’t mean it will be correct tomorrow. Compounding this data fluidity is the environment of the fund office: for many data elements, data collection and data entry often end up being manual processes, especially for member information such as birthdates, marital status and life events. And by definition, even when people are being careful, manually entered data is likely to have an error rate of 1-3%.

While some systems are quite rigorous about validating data before it is entered, others are much less so. It’s often a balancing act between imposing restrictions and controls on data entry to optimize inbound data quality versus allowing data entry to be fast and easy with few if any validations. This last point is important because onerous validations often drive creative methods for working around the process. A good example would be individuals fabricating a marriage date when it is not known in order to get past a validation that requires a date to create the member record. Unfortunately, once that has been done, it can be very difficult to find the “fake” dates within the data, which can lead to unexpected problems down the road.

Our approach is a little bit different and is based on creating a regular and rigorous “exception detection reporting & correction process.” This is a proactive process that should be incorporated into the daily or weekly routine, and it all but eliminates the challenge of waiting for a problem to happen and then going back to troubleshoot the data. Essentially, the core of this approach is to design and regularly run data exception reports AFTER the data is entered (vs. a VALIDATION process, which occurs before or during data entry). An example of such a report would be one that surfaces participants who are married but whose marriage date is missing. Another might surface people who are working but don’t have a date of birth (DOB), or whose DOB is unrealistic (e.g., the individual would be 122 years old).

It’s important to remember that even if your data is determined to be 99% good, with 1,000 people you still have 10 errors, which can be significant when it comes to providing individuals their benefits in a timely and accurate manner. Hence, the process is never finished and is ongoing: you’re always creating errors, surfacing errors and resolving errors. It is a mistake to think that data entry, and therefore data, is always perfect, but if you have a way to continually polish it, it will always shine.
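To make the exception-report idea concrete, here is a minimal sketch of the two reports described above, assuming a member extract in CSV form; the file name, the column names and the 110-year age cutoff are illustrative assumptions.

```python
# Minimal sketch of an after-the-fact exception report, as described above.
# The file name, column names and 110-year cutoff are illustrative assumptions.
import pandas as pd

members = pd.read_csv("members.csv", parse_dates=["date_of_birth", "marriage_date"])
today = pd.Timestamp.today()

# Exception 1: married participants with no marriage date on file.
missing_marriage = members[(members["marital_status"] == "married")
                           & (members["marriage_date"].isna())]

# Exception 2: missing or unrealistic dates of birth (older than ~110 years).
age_years = (today - members["date_of_birth"]).dt.days / 365.25
bad_dob = members[members["date_of_birth"].isna() | (age_years > 110)]

for label, frame in [("missing marriage date", missing_marriage),
                     ("missing/unrealistic DOB", bad_dob)]:
    print(f"{label}: {len(frame)} record(s)")
```

Reports like these can be scheduled to run daily or weekly, so errors are surfaced and resolved as a routine rather than discovered during a benefit calculation.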
10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions
By Michael Goldberger 21 Oct, 2020
Now that it is fall and we have all realized some type of new normal, I want to go back to our blog series on the importance of data quality for unions, funds and administrators. Now, more than ever, our new, often virtual environment depends on accurate, current data. I have been gradually tackling each item in MIDIOR’s 10 step data quality program and will address the fifth in this post. It has been a while, so here is a reminder of those details:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions

In the last two posts, I talked about the importance of establishing a “system of record” for each piece of data and a commitment to capturing data once and only once (even though it is likely to be used in multiple places). Following on from there, now that you know what data you have, how you get it and where it lives, you can easily figure out what you don’t have but do need. In other words, where are your data gaps that could mess with the accuracy of your systems?

In the context of a fund office, the data gaps are usually related to information needed to completely implement your business rules. These could be rules related to eligibility, contribution rates, benefit calculations, or maybe something as simple as who gets the monthly newsletter. If you don’t have any gaps, you (or your technical staff) will have a much easier time implementing the rules.

In order to determine if you have any gaps, start by defining all of the data inputs required to calculate a benefit, issue a disbursement, report on an activity or whatever else you may need to do according to the plan rules. Some of the rules are described in a plan’s SPDs and some are operational rules that have evolved over time and become standard practice. In any case, we like to think of those business rules as a set of algorithms or equations, with defined inputs (data) and outputs (actions). If (and that’s a big if) you have clearly defined the algorithms to match your rules, then you can list all your required inputs, compare them to what you have available and define all of the gaps. Because systems are not people (who can often fill in the data gaps), you will need to figure out how to fill in all of the missing data and organize it in a way that lets you perform any calculation, and repeat it over and over, before you can consider your data set complete. The key point is to step through each business rule, ask yourself what piece of information is needed to complete that step, and write that all down. I've included two simple examples below.
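As one way to picture business rules as algorithms with declared inputs, here is a minimal, hypothetical sketch covering two rules of the kinds mentioned above; the rules themselves, the 1000-hour threshold and all field names are assumptions for illustration, not taken from any plan’s SPD or actual practice.

```python
# Hypothetical sketch: two plan rules expressed as algorithms with declared
# inputs, so any missing input surfaces as an explicit, reportable data gap.
# Both rules, their thresholds and all field names are illustrative assumptions.

def missing(record, fields):
    """Return the list of required inputs that are absent from a record."""
    return [f for f in fields if record.get(f) in (None, "")]

def pension_credit_rule(record):
    """Rule 1 (assumed): credit a pension year if hours_worked >= 1000."""
    gaps = missing(record, ["member_id", "hours_worked"])
    if gaps:
        return None, gaps          # cannot compute: this is a data gap
    return record["hours_worked"] >= 1000, []

def newsletter_rule(record):
    """Rule 2 (assumed): member gets the monthly newsletter if an address exists."""
    gaps = missing(record, ["member_id", "mailing_address"])
    if gaps:
        return None, gaps
    return True, []

# Step through each rule for each record; the gap lists become your to-do list.
records = [
    {"member_id": 101, "hours_worked": 1250, "mailing_address": "12 Main St"},
    {"member_id": 102, "hours_worked": None, "mailing_address": ""},
]
for r in records:
    for rule in (pension_credit_rule, newsletter_rule):
        result, gaps = rule(r)
        print(r["member_id"], rule.__name__, "->", result, "gaps:", gaps)
```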