The many faces of Authorization and why we need a new approach - Part II
”Kärt barn har många namn” (a Swedish proverb: “a dear child has many names”)
In my last blog post, I provided an overview of the authorization landscape and where the market is heading. It also explained some foundational concepts around granularity and the current breadth of approaches, including RBAC, ABAC, and PBAC, as well as the newer options becoming available like Open Policy Agent (OPA), Google Zanzibar, Amazon Verified Permissions, Strata’s IDQL, ReBAC, and IndyKite’s KBAC.
In this article, I want to explore how these approaches work and point to some similarities and differences. In doing so, I hope to shed some light on why these emerging approaches are exciting and long sought after by businesses looking beyond traditional IAM thinking.
How it works: Constructs of an authorization service
To compare the models, it is helpful to define some key constructs of authorization and access control. I will use the traditional ABAC/XACML architectural model to set the terminology. Even if some vendors in this space do not use these definitions, you can generally map any implementation onto this model.
- The Policy Administration Point (PAP) is the administrative console/tool that is used to author and manage policies. It often provides life-cycle management capabilities of policies and has some mechanism to provide policies to a PDP.
- The Policy Decision Point (PDP), or sometimes the Policy Engine, is the core access control service that, at run-time, responds to PEP requests and makes access control decisions by evaluating policies, rules, and the associated attribute values.
- The Policy Enforcement Point (PEP) is the component that, at run-time, intercepts the user’s attempt to access the resource/asset, formulates a request to the PDP, and then enforces access based on the PDP’s decision.
- The Policy Information Point (PIP) is the authoritative data source for given attribute values of identities, resources, and other relevant data. The PDP, as part of the evaluation of policies, retrieves the attribute values from the PIP.
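To make these constructs concrete, here is a minimal sketch of the flow in Python. Everything here is illustrative, not any specific product's API: the function names, the hard-coded policy, and the in-memory dictionary standing in for a PIP are all assumptions.

```python
# Illustrative PEP -> PDP -> PIP flow. All names and the policy are made up.

# PIP: authoritative attribute source (here just an in-memory dict)
PIP = {
    "alice": {"department": "finance", "clearance": 3},
    "bob": {"department": "sales", "clearance": 1},
}

def pdp_decide(subject: str, action: str, resource: dict) -> bool:
    """PDP: evaluate a single hard-coded policy against PIP attributes."""
    attrs = PIP.get(subject, {})  # the PDP retrieves attribute values from the PIP
    # Policy: staff may read reports owned by their own department,
    # provided their clearance is at least 2.
    return (
        action == "read"
        and resource.get("type") == "report"
        and attrs.get("department") == resource.get("owner_dept")
        and attrs.get("clearance", 0) >= 2
    )

def pep_enforce(subject: str, action: str, resource: dict) -> str:
    """PEP: intercept the access attempt, ask the PDP, enforce the decision."""
    return "PERMIT" if pdp_decide(subject, action, resource) else "DENY"

report = {"type": "report", "owner_dept": "finance"}
print(pep_enforce("alice", "read", report))  # PERMIT
print(pep_enforce("bob", "read", report))    # DENY
```

In a real deployment the PDP would be a separate service and the PIP one or more external data sources, but the division of responsibilities is the same.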
Side note: Since the current predominant access control method, Role-Based Access Control (RBAC), is neither dynamic nor externalized, I would like to focus this blog post on how we can move beyond RBAC to achieve our goals. To modernize, we need to break the security logic out of an application and delegate it to a service capable of making dynamic and fine-grained decisions. That said, there is nothing wrong with using intermediate roles/groups as part of a blended authorization strategy. However, to generate substantial change, the focus must be put on how authorization is consumed and how data can be securely shared in more complex use cases. Using an Identity Governance and Administration (IGA) tool like SailPoint, OneIdentity, or Saviynt to manage an enterprise-wide role model does not really help organizations that need to fight role explosion and expand authorization to new domains. However, the IGA process fulfills a tremendously important role in managing user attributes (PIP data).
Comparing models and concepts
Let’s look at some comparisons now of the “apples” of authorization. Some are green, some are red, and some are even sweet, like the dear child in the title of this blog post.
First, we need to recognize that all the remaining concepts (ABAC, PBAC, OPA, ReBAC, KBAC) fulfill the external and dynamic parts of the analyst definitions (described in my previous blog post). Secondly, the different concepts can, in most cases, implement anything from static to dynamic access controls and move from coarse-grained to fine-grained in milliseconds. It all depends on the use cases, policies, rules, and the data (attribute values) consumed during policy and rule evaluation.
From a bird’s-eye view, there are many similarities between these concepts, and it can be challenging to spot the differences beyond the name. When comparing models, it is also sometimes hard to define what is part of the model versus what is implemented by the vendors. Therefore, the comparisons below do not go into detail, nor do they try to compare specific vendor implementations.
Vendor specific variations
One such vendor-oriented aspect is the deployment model (i.e., cloud-based, hybrid, or software implementations) and the policy management capability.
Depending on an organization's appetite for cloud versus software deployments, this is a critical aspect to consider. The deployment model can also make a difference depending on whether the focus or scope is internal enterprise use cases, external B2C/B2B use cases, or both.
Another vendor-specific aspect is policy management, i.e., how authorization policies are crafted and managed. There are two primary strategies: one that primarily targets developers (for example, OPA and the Zanzibar projects) and one that focuses on a “low/no-code” concept, treating policy authoring as a “configuration” process. The latter style attracts more business users (or at least people with non-developer skill sets).
Access enforcement
One major similarity between the models is the challenge of efficiently dealing with the enforcement of access.
Since the enforcement process (PEP) is external to the technology that implements the authorization service (PDP), there needs to be some integration between them. From a model point of view, the PEP is either part of an application/service or implemented as a gateway, a proxy, or a sidecar that sits above or below, or is co-located with, the application/service itself. However, this separation introduces network latency and negatively affects the performance of the solution.
Some vendors offer their own PEPs, some use third-party technology, and others rely solely on the deployment team to construct the PEP. But the challenge remains to mitigate the performance impact and manage the PEP-PDP communication as the application/service life cycle evolves.
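One common way to mitigate the latency of PEP-PDP round-trips is a short-lived decision cache on the PEP side, so repeated identical requests skip the network call. The sketch below is a simplified illustration of that pattern, not any vendor's implementation; the class name, cache key, and TTL are assumptions.

```python
import time

# Hypothetical PEP with a short-TTL decision cache in front of a remote PDP.
class CachingPEP:
    def __init__(self, pdp_call, ttl_seconds=5.0):
        self.pdp_call = pdp_call  # function that performs the PDP network call
        self.ttl = ttl_seconds
        self.cache = {}           # (subject, action, resource) -> (decision, expiry)

    def enforce(self, subject, action, resource):
        key = (subject, action, resource)
        cached = self.cache.get(key)
        now = time.monotonic()
        if cached and cached[1] > now:
            return cached[0]      # fresh cached decision: no network call
        decision = self.pdp_call(subject, action, resource)
        self.cache[key] = (decision, now + self.ttl)
        return decision

# Stand-in for the remote PDP, counting how often it is actually called
calls = []
def fake_pdp(subject, action, resource):
    calls.append(subject)
    return subject == "alice"

pep = CachingPEP(fake_pdp)
pep.enforce("alice", "read", "doc1")
pep.enforce("alice", "read", "doc1")  # served from cache
print(len(calls))  # 1
```

The trade-off, of course, is decision freshness: a cached permit can outlive a revocation by up to the TTL, which is why real deployments keep such windows short.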
Policy Engines
The PDP is also a topic for vendor-specific implementations. Some are very centralized, as with some SaaS vendors, while others provide various levels of decentralized/distributed PDPs.
These PDPs are often quite small and sometimes even suitable for microservice architectures, where the PDP can be deployed as a sidecar to each service. In those cases, you must also consider how the PDP communicates with the PIPs to ensure that network calls are limited or even eliminated.
Authoritative data - what really separates the concepts
As a last observation on the comparison side of things, I want to put some extra light on how the different models use the authoritative PIP data as part of making access control decisions.
This topic is often not covered in conversations about dynamic authorization, yet it is possibly one of the most important aspects for the implementation team.
Understanding how the different authorization models and vendors approach this challenge is critical to fully grasp the area of dynamic and externalized authorization.
For all the models discussed above, the biggest challenge is not authoring an authorization policy. Policy authoring is child’s play once the attribute data is lined up and available. But lining up that data to support the most complex and fine-grained authorization requirements of a large organization or project is not an easy task.
Add ‘data quality’ and ‘data life cycle’ processes into the picture and it becomes even more challenging. Still, the rewards are big for those who succeed.
ABAC/PBAC implementations typically handle authorization data external to the authorization service, using some type of connector (and often a caching mechanism) to access the data at run-time. When using Open Policy Agent (OPA), there is an option to send all required data in the PEP request (other vendors also support this), or to use a combined approach where an external service pushes the PIP data into the OPA agent’s memory.
With the above options, you can set up, connect, and load data and attributes easily, provided the data resides in some repositories and/or behind APIs. Once you cross data silos and need to combine data into a more complex structure, you will need to constantly reconfigure the repositories, APIs, connectors, and caches to reflect that model. This can end in an “accidental data architecture” that the organization never intended.
In ReBAC and IndyKite’s KBAC, the PIP data is stored in a graph database layer bundled with the authorization service. The data is populated through an onboarding process and continuously synced from various data stores using an ingestion service. As the full information structure is persistently stored as relationships between nodes in the graph, the authorization complexities are resolved as close to the data as possible.
This enables fast semantic graph queries that traverse the nodes, eliminating the need for a caching mechanism. With a data model that supports the continuous enrichment of nodes, relations, and properties, we can also start exploring use cases very close to authorization, such as delegation, personalization, consent management, user onboarding, and verification. This can extend the use of identity data to create value across the business.
With this flexible data model comes a data architecture that is used not only for attribute retrieval (read-only), but also for data enrichment. That enrichment can be used to manage and store detailed metadata for a single attribute value, for example, <PassportId = 1234-5678, “level of assurance 5”>, or even to allow a user-initiated population of the graph, such as a “delegation” for data sharing between family members. The potential use cases here are exciting.
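To illustrate the idea (this is a toy sketch, not IndyKite's actual implementation), here is an in-memory “graph” where node properties carry their own metadata, such as an assurance level, and a traversal over labeled relationships answers an access question, including a user-initiated delegation. All node names, relationship labels, and the rule itself are invented for the example.

```python
# Nodes with properties; a property value can carry its own metadata.
nodes = {
    "alice": {"PassportId": {"value": "1234-5678", "assurance_level": 5}},
    "bob": {},
    "bob_health_record": {"type": "record"},
}

# Directed, labeled relationships between nodes.
edges = [
    ("bob", "OWNS", "bob_health_record"),
    ("bob", "DELEGATES_READ_TO", "alice"),  # user-initiated delegation
]

def can_read(subject: str, resource: str) -> bool:
    """Permit if the subject owns the resource, or an owner delegated read."""
    owners = {s for (s, rel, t) in edges if rel == "OWNS" and t == resource}
    if subject in owners:
        return True
    # Traverse one more hop: owner -> DELEGATES_READ_TO -> subject
    return any(
        (owner, "DELEGATES_READ_TO", subject) in edges for owner in owners
    )

print(can_read("alice", "bob_health_record"))  # True, via delegation
print(can_read("bob", "bob_health_record"))    # True, as owner
```

A graph database resolves such traversals natively and at depth; the point of the sketch is only that the answer falls out of the stored relationships themselves, with no external cache or join logic.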
That said, there are also challenges related to defining the data model and populating the graph database. But this model would, in many ways, better support a vision of how authorization can serve constantly changing user journeys, while enabling an organization to start with a simple use case and grow organically, with a data architecture that supports the journey.
Finding the right solution for your project
An organization that seeks to implement dynamic authorization has many options, depending on the desired use cases.
The search for the “one ring to rule them all” is not the right approach. You will find several requirements hard to fulfill with just one solution, architecture, or vendor. Sometimes combining approaches will be useful, which can also include the emerging technologies for “authorization policy orchestration” across environments.
To start your journey, the first step is to map out your use cases. Remember that authorization (who has access to what) is not primarily an identity problem; it is a resource/asset problem. Once you understand the use cases and your business domain data, you can start on the road to dynamic authorization.
If that journey points you toward creating compelling, secure and value-driven user experiences in the B2C and/or B2B world, you can safely turn to IndyKite to learn more about KBAC.
KBAC harnesses the relationship data in the identity knowledge graph not only to facilitate complex authorization (which is both externalized and dynamic), but also to enable you to use identity data to create value for your business and customers.
That is what the next and last part of this blog series will cover.
In the meantime, you can learn more about KBAC in this webinar.