Multi-Employer & Public Funds


Most multi-employer and public sector fund administrators would agree that it’s becoming more costly and complicated to run the fund office. Regulatory scrutiny and compliance obligations have become onerous, and in an era of unfunded liabilities, it’s hard to justify the significant investment a systems modernization project requires. At the same time, it’s clear that maximizing operational efficiencies enables better service to plan participants and greater ROI on technology investments. MIDIOR supports Taft-Hartley and public sector funds as they face these challenges. We work hand-in-hand with business and IT teams to overhaul existing systems or invest in new ones, and provide supplemental services to ensure a successful systems implementation. We are vendor agnostic and partner with our clients’ providers or, in some instances, execute a custom development project tailored to a fund’s specific needs. MIDIOR consultants understand how to translate plan documents into the functional and technical requirements that provide the foundation for any systems change. We also manage implementations, convert data, and deploy new member service models, including mobile and web applications, without disrupting the mission-critical systems that drive day-to-day operations.

BENEFITS ADMINISTRATION

Our vast experience with benefits administration systems and data enables us to take on technology projects of any size for both in-house and third-party administrators.

MULTI-EMPLOYER FUNDS

Our deep understanding of unions and benefit plans, along with our fund administration experience (operations and systems), makes us the perfect technology services partner for Taft-Hartley funds.

PUBLIC FUNDS

Our pension fund operations experience, combined with our deep capabilities in requirements, systems and data, allows us to support technology projects and provide supplemental resources.

Benefits Administration

Today, benefits administration ranges from determining eligibility for health insurance to processing claims to administering retirement benefits. The ability to maximize operational efficiencies enables better service to plan participants and greater ROI on technology investments. MIDIOR helps in-house and third-party administrators modernize their operations and upgrade their technology platforms. Whether you are overhauling an existing system, moving to a new one, or simply onboarding a new plan or fund, MIDIOR supports administrators and technology teams in executing the many moving parts of technology projects. We excel at working with our clients’ teams: we ask the right questions to elicit and document meaningful requirements and lay the groundwork for platform changes; we provide project management and systems implementation services; we interface with technology vendors on behalf of our clients; and we specialize in all things “data,” from developing data models to implementing the data warehouse and managing data conversions (including data forensics and remediation).

Multi-Employer Funds

Multi-employer plans have been hit by a perfect storm of risk, compliance, and regulatory overload. Older systems cannot keep up with new guidelines, yet launching a major system upgrade could jeopardize the day-to-day business demands that still need to be met. At MIDIOR, we work hand-in-hand with operations and IT teams on the decision to overhaul an existing system or invest in a new one, as well as on deploying new models of delivery such as member self-service (mobile and web) applications. We are vendor agnostic and partner with our clients’ providers or, in some instances, support a custom development initiative tailored to a fund’s specific needs. MIDIOR consultants understand how to translate plan documents into the functional and technical requirements that provide the foundation for any systems change. We provide project management and systems implementation services, interface with technology vendors on behalf of our clients, and specialize in all things “data,” from developing data models to implementing the data warehouse and managing data conversions (including data forensics and remediation).

Public Funds

Too often, public fund performance is measured by the investment portfolio and funding obligations, while administrator operations and technology get short shrift. MIDIOR understands the dichotomy of efficiency and complexity that lurks below the surface of pension fund operations - from the challenge of maintaining data integrity over long periods of time to the difficulty of implementing a systematic approach to arcane benefit calculations. Much attention is rightly focused on the funding of pension obligations, but that attention also brings increased scrutiny and regulation to already overburdened operations and technology teams. At the same time, participants expect real-time availability of information related to their accounts. Given the short life cycles of information technology, this creates an ongoing tradeoff between “what’s possible” and “what’s possible given our historic data, bandwidth, and capabilities.” MIDIOR helps our public fund clients find answers to these questions and implement systems that bridge the past with the future. We work hand-in-hand with operations and IT teams on the decision to overhaul an existing system or invest in a new one, and provide supplemental services to ensure a successful systems implementation. We are vendor agnostic and partner with our clients’ providers or, in some instances, support a custom development initiative tailored to a fund’s specific needs. MIDIOR consultants understand how to translate plan documents into the functional and technical requirements that provide the foundation for any systems change. We provide project management and systems implementation services, interface with technology vendors on behalf of our clients, and specialize in all things data, from developing data models to implementing the data warehouse and managing data conversions (including data forensics and remediation).

READ OUR ARTICLES FEATURED IN IFEBP'S BENEFITS MAGAZINE

READ MORE FROM OUR BLOGS

By Michael Goldberger 08 Sep, 2021
Agility is generally considered a virtue. To that end, the ability to work independently of your vendors - meaning you don’t have to depend on your vendor for ALL situations that require access to your data - gives you greater agility. In practice, that sort of independence is the result of two factors:

- The degree of access to all your data “allowed” by your vendor
- The level of knowledge and skill to do anything with that access

As we described in last month’s post, access to your data can come in many forms. You may be able to get at your data through reports, queries and other vendor-provided tools, which are all forms of “allowed access.” But it is important to remember that the fund office is the “owner” and “custodian” of all underlying data, and that vendor-provided tools may or may not provide access to everything that constitutes the complete data set. Even if you feel this level of access is not necessary (and perhaps you wouldn’t know what to do with it anyway), it is an important consideration that may provide options in unanticipated situations. You can think of it as a form of insurance against something going wrong with your vendor. I am not talking about database backups here - also critical - but rather about having access to, and an understanding of, the complete data set that serves as the foundation for your administration systems.

In some cases, if you ask your vendor for a set of data, they are likely to say “Sure, what do you need? We’ll put that in a file for you.” While that is certainly a form of access, unless and until you have defined a request that covers all data elements and set up a scheduled delivery of those files (e.g., once a month), you haven’t achieved what we would call data independence. And that begs the question: “How do I know what to ask for?” The answer depends, but for most fund offices it would at a minimum include:

- All the individuals in the database with their unique system identifiers, including all available demographic information (name, address, dates of birth, marriage, death, etc.)
- All the contributing employers in the database with their unique system identifiers
- The full history of all contribution transactions, with identifiers that link each transaction to a person and an employer
- The full history of all benefit payments, with identifiers that link each payment to a person
- The full history of all benefit applications, with identifiers that link each application to a person
- The full history of all benefit credits (e.g., pension credits) for each person, whether or not they were ever vested
- The relationships between members, dependents and beneficiaries (who is related to whom)
- For health and welfare funds, the full history of health eligibility for all persons in the database
- All configuration and setup data (e.g., lists of code names and values, tables of constants used within formulas)

If you don’t have easy access to your complete data set (which would include these elements), it may be time to work with your vendor to set it up (see the sketch below). Equally important to “access” are the knowledge and skills to use the data. The only way to know that you really have “everything” is if you can decode the details. The knowledge component implies that, even if it is not formally documented, you understand the data model that supports and organizes your data. The skills component means that you have the ability (if necessary) to assemble the pieces (data elements) and make sense of them.
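To make that minimum data set concrete, here is a minimal sketch of how a fund office might encode the list above as an extract manifest and check each vendor delivery against it. The file names, and the idea that deliveries arrive as a folder of CSV files, are illustrative assumptions rather than any particular vendor’s format.

```python
# Hypothetical sketch: verify that a vendor delivery covers the minimum
# data set. File names are assumptions, not a vendor standard.
import os

EXPECTED_EXTRACTS = {
    "persons.csv",        # individuals + demographics, keyed by person ID
    "employers.csv",      # contributing employers, keyed by employer ID
    "contributions.csv",  # full contribution history (person + employer IDs)
    "payments.csv",       # full benefit payment history (person ID)
    "applications.csv",   # full benefit application history (person ID)
    "credits.csv",        # full benefit credit history, vested or not
    "relationships.csv",  # member / dependent / beneficiary links
    "eligibility.csv",    # health eligibility history (health & welfare funds)
    "config_codes.csv",   # code lists and formula constants
}

def check_delivery(folder: str) -> None:
    """Print any expected extract missing from this delivery folder."""
    delivered = set(os.listdir(folder))
    missing = sorted(EXPECTED_EXTRACTS - delivered)
    if missing:
        print("Delivery incomplete; missing:", ", ".join(missing))
    else:
        print("All expected extracts are present.")

check_delivery("vendor_drop/2021-09")  # hypothetical monthly delivery folder
```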
As we discussed in a previous post, you can probably do a lot using Excel to extract value from your data if you have mastery of the underlying components. Given what I have just described, I will close with a few questions to ask and answer when assessing your level of data independence from your vendor(s):

- Do you have a clear understanding of how your vendor stores and manages your data? Where is it physically, what sort of database is used, and how large is the entire data set?
- If you need a new report or extract, can you get it yourself, or do you need to ask your vendor to do it for you? If you are dependent on your vendor, how long does it take to get that turned around?
- Does anyone on your team have a full understanding of the underlying data model? What are the base tables, and do you know how they are linked together? Can you create a diagram?
- If you can receive extracts of data, do you have a push or pull environment? Push: the vendor sends you a file when they can, or according to a pre-defined schedule. Pull: you can grab up-to-date data as you need it. (A minimal sketch of a “pull” arrangement appears after the checklist below.)

If you can answer all these questions AND are satisfied with your answers, then you can safely assume you have sufficient data independence, which is a key factor in your ability to be agile and also helps moderate any risk related to your data.

10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You have independent access to your data
9. Everyone on your team is cognizant of the value of good data and the long-term costs of sloppy data
10. You leverage your data to support operations AND to support long-term decisions
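As promised above, here is a minimal sketch of what a “pull” environment can look like in practice: a read-only connection that the fund office queries on its own schedule, rather than waiting for a vendor-sent file. The database file, table, and column names are hypothetical, and sqlite3 stands in for whatever database your vendor actually exposes.

```python
# Hypothetical "pull" sketch: grab up-to-date data yourself, on your schedule.
import sqlite3  # stand-in for the vendor's actual database

import pandas as pd

# Open the database read-only so the extract cannot disturb production data.
conn = sqlite3.connect("file:benefits.db?mode=ro", uri=True)
persons = pd.read_sql_query(
    "SELECT person_id, last_name, first_name, dob FROM persons", conn
)
conn.close()

persons.to_csv("extracts/persons_latest.csv", index=False)
print(f"Pulled {len(persons)} person records on demand")
```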
By Michael Goldberger 19 May, 2021
At this point in our data series, you might be wondering if you will ever get to hear about using the data you have been so carefully maintaining. Well, you are in luck: in this post I want to begin the conversation on using your data to provide insights, drive decisions and tune your business processes.

Believe it or not, the data that lives inside your benefits administration system may not be as accessible and usable as you would think. Sometimes, the simple act of getting the data becomes a project in itself, burdened by complex reporting tools or constrained access mechanisms. But not to worry: in most cases, you (or your IT experts) should be able to grab your data and put it into a tool that you know how to use, and the lingua franca here is typically Microsoft Excel. Excel is a powerful medium for sifting, sorting, reformatting, charting and generally putting data into a form that answers your questions or tells you a story. For this reason, I always recommend having access to one or two “super users” - either on your internal team or on staff at a vendor with whom you have a close relationship. Saying “I can’t - or my team can’t” when it comes to Excel is no longer an acceptable answer if you work in this industry (and if somehow this is where your fund office lands, there are a variety of free online resources for getting your team up to speed). Even if your core systems include standard reports or reporting tools, having the capability to use Excel as an additional way to analyze and leverage your data will prove valuable in the long run. We find that the first batch of data or reports you generate typically spawns more questions than answers, so rapid iterations are often needed to get to those answers. This is almost always easier in Excel than with the reporting tools embedded in core systems. Ultimately, you may find that there are certain data or reports that you will want to have available as “standard” in your core system, and in this case, iterating in Excel can also help you define the requirements for that information. As a side note, if for some reason you cannot get your data out of your core system(s) and/or you cannot put your data into a spreadsheet, it is a leading indicator that it is time to make some changes.

Understanding your options for getting at the data will help you determine whether or not you need external assistance or additional expertise, so I have outlined the five main approaches below:

- Reports: Historically, reports have been hard-coded into systems, with hard-to-change definitions of the data set and the page formatting. The nice thing about these types of reports is that they are typically easy to run and print in a format that is suitable for framing. Unfortunately, this type of formatted report is not so suitable for data analysis. If your system only allows you to output reports to a printer or a PDF file, that is a limitation in terms of accessing your data.
- Exports: Exports usually allow a user to take the information that is shown in the user interface and save it as a file (typically Excel or CSV format) which can be opened in another program. Exports are nice in that they allow you to save data, but they may be limited because you only get the data shown on the screen.
- Queries: Some systems have a query tool that lets users define a data set (based on a choice of fields to include and criteria for filtering those fields). The result of a query can usually be exported to an easy-to-use file - essentially an advanced form of an export. The challenge with queries is that they often require a degree of expertise with the particular tools and syntax of your vendor.
- Database Access: This is the most powerful - and most feared - approach to getting at your data. In the world of open systems, it is not unusual to have direct access to the data tables that form the core of your system. With an appropriate set of tools (and in fact, Excel is one of those tools) and someone who knows how to use them, you can create your own extracts that utilize the raw data in your system. Asking about direct access to the database, or even documentation of the database, is a good test of how “open” your vendor really is to this method.
- Data Mashups: A relatively new, but potentially powerful, toolkit that happens to live in Excel! Mashups are an approach that lets you take data from multiple systems and combine it, with powerful results. For example, maybe you have separate data sources for health benefits and retirement benefits, but you would like to compare names and addresses across the two systems - that would require a mashup.
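To make the mashup idea concrete, here is a minimal sketch using Python and pandas rather than Excel (the same join can be built with Excel’s Power Query). The file and column names, and the use of SSN as the shared identifier, are illustrative assumptions.

```python
# Hypothetical mashup sketch: combine member data from two separate systems
# and flag address disagreements. File and column names are assumptions.
import pandas as pd

health = pd.read_csv("health_members.csv")          # ssn, name, address, ...
retirement = pd.read_csv("retirement_members.csv")  # ssn, name, address, ...

# Join the two systems on a shared identifier; a common key is what makes
# any mashup possible.
merged = health.merge(retirement, on="ssn", suffixes=("_health", "_ret"))

mismatched = merged[merged["address_health"] != merged["address_ret"]]
print(f"{len(mismatched)} members have different addresses in the two systems")
mismatched.to_csv("address_discrepancies.csv", index=False)
```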
Once you have your chosen method(s) for accessing data and can get it into a usable format, you will want to make it easily accessible for anyone who can benefit from it. Newly created data sets or reports should be stored in a shared file location so that access can be set up for “self-service” - essentially instant access with zero waiting period. In particular, your users should not have to rely on printing, copying and pasting, or rekeying to get a view or report that is useful. If that is happening, something about your data isn’t working and you should look for the root cause. For more about how to unlock the information in your core system(s) through better data access, or what it could look like for your fund office, drop me a line and I am happy to chat. Happy reporting!

10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions
By Michael Goldberger 09 Nov, 2020
Today’s post in our series on data focuses on the importance of having the tools and processes in place for continually identifying and correcting any gaps or flaws, so that your data is always accurate. At this point in the series, you know what data you have, where it comes from and where it lives. You can also easily figure out what you don’t have but do need, which means you know which data needs to be corrected and which gaps filled in.

Since your data is always changing (new data is entered, existing data is updated), no one’s data is ever perfect at all times. Data is like a river; it’s always flowing. Just because it was all correct yesterday doesn’t mean it will be correct tomorrow. Compounding this fluidity is the environment of the fund office: for many data elements, data collection and data entry often end up being manual processes, especially for member information such as birthdates, marital status and life events. And even when people are being careful, manually entered data is likely to have an error rate of 1-3%.

While some systems are quite rigorous about validating data before it is entered, others are much less so. It’s often a balancing act between imposing restrictions and controls on data entry to optimize inbound data quality, versus allowing data entry to be fast and easy with few if any validations. This last point is important because onerous validations often drive creative workarounds. A good example would be individuals fabricating a marriage date when it is not known, in order to get past a validation that requires a date (even an unknown one) to create the member record. Unfortunately, once that has been done, it can be very difficult to find the “fake” dates within the data, which can lead to unexpected problems down the road.

Our approach is a little bit different and is based on creating a regular and rigorous “exception detection, reporting and correction process.” This is a proactive process that should be incorporated into daily or weekly routines, and it all but eliminates the need to wait for a problem to happen and then go back to troubleshoot the data. Essentially, the core of this approach is to design and regularly run data exception reports AFTER the data is entered (versus a VALIDATION process, which occurs before or during data entry). An example of such a report would be one that surfaces participants who are married but whose marriage date is missing. Another might surface people who are working but don’t have a date of birth (DOB), or whose DOB is unrealistic (e.g., the individual would be 122 years old). (A minimal sketch of two such reports appears at the end of this post.)

It’s important to remember that even if your data is determined to be 99% good, if you have 1,000 people you still have 10 errors, which can be significant when it comes to providing individuals their benefits in a timely and accurate manner. Hence, the process is ongoing and never finished: you’re always creating errors, surfacing errors and resolving errors. It is a mistake to think that data entry, and therefore data, is always perfect, but if you have a way to continually polish it, it will always shine.
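Here is a minimal sketch of the two exception reports just described, written in Python with pandas. The file name, column names, status codes, and the 110-year age bound are illustrative assumptions, not part of any particular administration system.

```python
# Hypothetical exception reports, run AFTER data entry (not validations).
# Column names, status codes, and thresholds are assumptions for illustration.
import pandas as pd

members = pd.read_csv("members.csv", parse_dates=["dob", "marriage_date"])

# Exception 1: married participants with no marriage date on file.
missing_marriage = members[
    (members["marital_status"] == "married") & members["marriage_date"].isna()
]

# Exception 2: active participants with a missing or implausible DOB
# (older than 110 years is treated as unrealistic here).
oldest_plausible = pd.Timestamp.today() - pd.DateOffset(years=110)
bad_dob = members[
    (members["status"] == "active")
    & (members["dob"].isna() | (members["dob"] < oldest_plausible))
]

print(f"{len(missing_marriage)} married members are missing a marriage date")
print(f"{len(bad_dob)} active members have a missing or implausible DOB")
```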
10 Step Data Quality Program
1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions
By Michael Goldberger 21 Oct, 2020
Now that it is fall and we have all settled into some type of new normal, I want to go back to our blog series on the importance of data quality for unions, funds and administrators. Now, more than ever, our new, often virtual environment depends on accurate, current data. I have been gradually tackling each item in MIDIOR’s 10-step data quality program and will address the fifth in this post. It has been a while, so here is a reminder of the full list:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions

In the last two posts, I talked about the importance of establishing a “system of record” for each piece of data and a commitment to capturing data once and only once (even though it is likely to be used in multiple places). Following on from there, now that you know what data you have, how you get it and where it lives, you can easily figure out what you don’t have but do need. In other words, where are the data gaps that could undermine the accuracy of your systems? In the context of funds and administrators, the data gaps are usually related to the information needed to completely implement your business rules. These could be rules related to eligibility, contribution rates, benefit calculations, or maybe something as simple as who gets the monthly newsletter. If you don’t have any gaps, you (or your technical staff) will have a much easier time implementing the rules.

To determine whether you have gaps, start by defining all of the data inputs required to calculate a benefit, issue a disbursement, report on an activity, or whatever else you may need to do according to the plan rules. Some of the rules are described in a plan’s SPDs, and some are operational rules that have evolved over time and become standard practice. In any case, we like to think of those business rules as a set of algorithms or equations, with defined inputs (data) and outputs (actions). If (and that’s a big if) you have clearly defined the algorithms to match your rules, then you can list all your required inputs, compare them to what you have available, and define all of the gaps. Because systems are not people (who can often fill in the data gaps), you will need to figure out how to fill in all of the missing data and organize it in a way that lets you perform any calculation, and repeat it over and over, before you can consider your data set complete. The key point is to step through each business rule, ask yourself what piece of information is needed to complete each step, and write that all down. I've included two simple examples below.
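As a sketch of those examples, here are two hypothetical rules expressed as algorithms, with their required inputs declared up front so that data gaps surface before a calculation runs. The rules, thresholds, and field names are illustrative assumptions, not from any plan document.

```python
# Two hypothetical business rules as algorithms. Each declares its required
# inputs so missing data (a "gap") is reported instead of silently guessed.

def require(member: dict, fields: list) -> None:
    """Raise if any input this rule needs is missing for the member."""
    gaps = [f for f in fields if member.get(f) is None]
    if gaps:
        raise ValueError(f"Data gaps for member {member.get('id')}: {gaps}")

def is_vested(member: dict) -> bool:
    """Example 1 (hypothetical): vested after 5 or more pension credits."""
    require(member, ["pension_credits"])
    return member["pension_credits"] >= 5

def gets_newsletter(member: dict) -> bool:
    """Example 2 (hypothetical): active members with a mailing address."""
    require(member, ["status", "mailing_address"])
    return member["status"] == "active"

member = {"id": 1001, "status": "active", "mailing_address": None,
          "pension_credits": 7}
print(is_vested(member))        # True - all required inputs are present
try:
    print(gets_newsletter(member))
except ValueError as err:
    print(err)                  # the address gap is surfaced, not guessed
```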
By Susan Loconto Penta 28 Apr, 2020
The impacts of the coronavirus are significant, regardless of location or vocation, company or community, age or gender. We find ourselves marveling at our clients as they do their best to keep delivering on their promises to customers, members and stakeholders, even as many are considered “non-essential.” At MIDIOR, the nature of our work and the location of our clients have necessitated virtual work for quite some time. We also find ourselves in the fortunate position of having made recent investments in the processes, platforms and training that enable a truly remote work environment. That said, we could not have imagined our current situation, where we are testing the limits of what we can do every day.

For our union and fund office clients, a shift to remote work can be particularly challenging - both because some work cannot be done remotely (it is difficult to swing a hammer virtually) and because the norm for benefit services has centered on high-touch, personalized, in-person work. Most, if not all, of our clients have been pushed further along the “remote work” and “mobile access” journey, and now, more than ever, the value of quality member benefit systems, administration platforms, strong IT teams and mobile applications is visible. Today’s question is not “if” teams can work remotely and “how” members can access their benefit information at any time and from anywhere, but “when.” So, irrespective of where you are on your journey, now is a good time to sit down with your leadership teams and discuss your current situation and what a new normal will look like. The following is a quick list of questions to consider as you talk with your teams about making remote work and mobile access a reality. I hope it is helpful.

- Which jobs can be done while working remotely? For those that can’t, is it really impossible to do the work remotely, or is it something else (e.g., people aren’t trained, guidelines are not in place, platforms do not exist, or even unconscious bias against remote work for particular jobs)?
- Are the basic technology tools in place to make this work? This includes remote access via VPN or remote/virtual desktops, internal instant messaging platforms like Slack, and the ability to conduct video meetings (e.g., Microsoft Teams, Skype, Zoom, Lifesize). How much security is required, and how much training do your teams need?
- Do you have guidelines and clear expectations about what it means to work remotely? Time can get blurry when you are at home, in terms of when you are at work and when you are not. Define ways for employees to “check in” and “check out,” along with a new roster of team meetings.
- Can you service members remotely? There is no (technical) reason your phone and email systems can’t work 100% as well when your staff is distributed. Turning things off is not the answer. If anything, this is a time when members need more service and immediate answers. This may require restructuring workflows in the short term, but doing so now will give you a leg up in the future.
- Do you have a member portal? Is it a real mobile app with sufficient data to answer members’ basic questions? If not, think about how to enable smartphone access to your benefit systems quickly and put a plan in place for a permanent solution.

It appears that we all need to visualize a future where remote work, at least for some, and self-service everything will be the norm. We can’t turn the clock back, but we can set ourselves up for a better future. See what’s hard now, incorporate an approach to moving through any obstacles, and make sure your technology roadmap accounts for a future that includes remote work and member self-service. Making it work is not trivial, but it is not impossible either. I hope this is helpful, and we always want to hear from you if you have ideas on how we can adapt our services to be useful in these complicated times. And lastly, for anyone reading this who has someone in their circle on the front lines, please say an extra thanks from us. And for everyone else holding it together in the background, no matter how, remember we all have a role to play in our community’s recovery.
By Michael Goldberger 26 Feb, 2020
Welcome to 2020! After a bit of a hiatus for the holidays, I am picking up this blog series on data quality for unions and fund offices. I started the series by talking about the importance of “getting the data right” in your benefits administration system, including a grading rubric to assess data excellence. Since then, I have unpacked the first three elements of our comprehensive, 10-step data quality program (listed again below) and will tackle the fourth in this post:

1. You know where your data comes from in terms of systems and sources
2. You are aware of conflicts and inconsistencies between your data sources
3. You have an approach for resolving any conflicts between data sources
4. You capture data once, and use it in multiple places
5. You have documented what data is critical for implementing your business rules, and you have approaches for filling in any missing data
6. You have tools and processes for identifying and correcting flaws in your data
7. Your data exists in a format that makes it easy to access using readily available tools
8. You are not dependent on a software vendor for access to your data
9. Everyone on your team is cognizant of the value of “good data” and the long-term costs of “sloppy data”
10. You leverage your data to support operations AND to support long-term decisions

In my last post, I talked about identifying systems of record for each data element and creating some rules on how to use them in order to resolve data conflicts. Once you can identify your master data sources, you need to be disciplined about capturing the data that goes into them (and any subsequent changes) just once, even though it is probably used in multiple places. To accomplish this, you must link the many places a particular data element is used (e.g., reports) back to the single trusted, master source for that data.

Maybe this seems obvious, but there is a catch. Many fund offices and unions have systems that were built on top of other systems, and business processes that are disconnected from each other, so even where intentions are good, there are often copies of the same information residing in multiple places. For example, think about creating a list of contributing employers. Let’s say one of the employers on that list had a name change. How many places beyond your system of record might need to be updated in order to be sure you always use the new name (e.g., on invoices or reports)? If there is more than one, this post is for you.

To avoid this problem, you want to “normalize” your data. In a fully normalized system, any piece of data that is used in multiple places is stored independently, with a unique identifier. Let’s say the employer “Bill’s Sprockets” changes its name to “Bill and Daughter’s Sprockets.” In this case, you want to be sure that your system of record reflects the new name and that anywhere the employer name is used references that source. This ensures you don’t (1) continue using the old name by accident, (2) lose the connection between information associated with the old name and the new name, or (3) end up with confusion about how many companies really exist.

This will sound like a technical detail, but there is a very important key to having a normalized data set from which you can create such a list: you need a unique identifier (ID) for each employer that never changes. Why is this so important? Because once you establish the Employer ID, any other tool or report that needs information about an employer can reference the Employer ID, rather than something else that might change over time (like the employer’s name). That unique identifier might be based on something real (like a Tax ID number), or it might be created manually by you or generated by one of your systems. The important points in this case, which also apply to any situation where normalized data is critical, are that:

- Every employer has a unique ID
- Every employer has only one ID
- Once it has been assigned, the Employer ID never changes
- Every ID is used only once, and for only one employer
- You have at least one piece of information for each employer (besides the ID)

For example, a basic mail-merge list, made from data that is not normalized, might look like this:
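Here is a minimal sketch of the contrast the post describes, with hypothetical names and IDs: first a denormalized mail-merge list where the employer name is retyped on every row, then a normalized layout where each employer is stored once under an unchanging ID that member rows reference.

```python
# Hypothetical sketch of denormalized vs. normalized employer data.

# Denormalized: the employer name is retyped on every row, so a rename or a
# typo quietly creates what looks like a brand-new employer.
mail_merge = [
    {"member": "A. Jones", "employer": "Bill's Sprockets"},
    {"member": "B. Smith", "employer": "Bills Sprockets"},  # typo = ghost employer
    {"member": "C. Brown", "employer": "Bill and Daughter's Sprockets"},
]

# Normalized: each employer lives in exactly one place, keyed by an ID that
# never changes; member rows reference the ID, never the name itself.
employers = {17: "Bill and Daughter's Sprockets"}  # renamed once, right here
members = [
    {"member": "A. Jones", "employer_id": 17},
    {"member": "B. Smith", "employer_id": 17},
    {"member": "C. Brown", "employer_id": 17},
]

for row in members:
    print(row["member"], "-", employers[row["employer_id"]])  # always current
```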