
Figure 1
A generic algorithmic system broken down into several foundational components.
Table 1
Alignment of the responsible algorithm principles with the MASov principles.
| MASov PRINCIPLE | RESPONSIBLE ALGORITHM PRINCIPLES |
|---|---|
| Rangatiratanga | Fairness and justice, responsibility, beneficence, freedom, trust, dignity |
| Whakapapa | Transparency, responsibility, non-maleficence, beneficence, sustainability |
| Whanaungatanga | Transparency, fairness and justice, responsibility, trust, solidarity |
| Kotahitanga | Fairness and justice, non-maleficence, beneficence, dignity, solidarity |
| Manaakitanga | Responsibility, privacy, trust, dignity |
| Kaitiakitanga | Transparency, fairness and justice, responsibility, privacy, trust, sustainability, solidarity |

Figure 2
An algorithm housed in a structure where tikanga values are at the foundation of the system. Using the MASov principles as the tikanga values that form the foundation of the algorithmic system ensures that the system is Indigenised: Māori perspectives underpin the system, and Māori needs are at the forefront of its outcomes.
| PRINCIPLE | MEANING (IN THE CONTEXT OF ALGORITHMS) | RELATED PRINCIPLES |
|---|---|---|
| Transparency | Algorithms should be open, explainable, and explicit with regard to their purpose, development, use, and maintenance. Efforts should be made to increase the explainability of how the system works, the interpretability of its outputs, and the understandability of the system as a whole. | Explainability, explicability, understandability, interpretability, communication, disclosure, non-opaque, showing. |
| Fairness and Justice | Outputs of algorithmic systems should be free of algorithmic bias, or at the very least tested for bias with the findings disclosed (see the bias-testing sketch after this table). The purpose of the outputs must align with the ideas of fairness, equity, and inclusion, and outputs should be explainable enough that they can be challenged. | Consistency, inclusion, equality, equity, non-biased, non-discriminatory, diversity, plurality, accessibility, reversibility, remedy, redress, challenge, access, distribution. |
| Non-Maleficence | Algorithms should be safe and secure, and should not cause deliberate, foreseeable, or unintended harm (discrimination, physical harm, violations of privacy, etc.). | Security, safety, harm minimisation, protection, precaution, prevention, integrity, non-subversion. |
| Responsibility | Use of algorithms should be done with integrity, and the allocation of responsibility, obligations, and legal liability should be clear in all parts of the process. A focus on the underlying sources of harm is necessary, as is the focus on diversity, inclusion, and participation of all relevant groups. | Responsible, accountability, liability, obligations, acting with integrity, participation. |
| Privacy | Algorithms require data, and privacy (typically framed in relation to data protection and security) is both a value to uphold and a protected right. | Personal or private information, anonymisation |
| Beneficence | Algorithms should be used in ways that promote human wellbeing and flourishing, peace, happiness, the creation of socio-economic opportunities, and prosperity. | Benefits, well-being, flourishing, peace, social good, common good |
| Freedom | Use of algorithms should promote freedom, empowerment, autonomy, and self-determination through democratic means. There must be freedom to withdraw consent, and individuals must be free from manipulation, surveillance, and technological experimentation. | Autonomy, consent, choice, self-determination, liberty, empowerment |
| Trust | Trust in algorithms refers to an algorithm having a noble purpose, being used by trustworthy individuals and organisations, and being built on good design principles; the system should aspire to earn the trust of its stakeholders. | Trust, purpose |
| Sustainability | Development and deployment of algorithms should consider the protection of the environment, the improvement of Earth's ecosystems and biodiversity, contributions to fair and equitable societies, and the promotion of peace. | Environment (nature), energy, resources |
| Dignity | Algorithms should uphold human rights. They should not diminish or destroy human dignity, but preserve or increase it. | Human dignity |
| Solidarity | Algorithms have significant implications for the labour market, so the benefits of an algorithm should support strong safety nets, and the wealth it generates should be redistributed to those whose labour has been displaced by automation. | Solidarity, social security (welfare), cohesion, redistribution of benefits |
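
To make the bias testing mentioned in the Fairness and Justice row concrete, the sketch below computes a demographic parity difference, one common measure of whether an algorithm's favourable outcomes are distributed evenly across groups. This is a minimal Python illustration under assumed data: the group labels and outcomes are hypothetical, and a real audit would select its measures together with the affected communities and disclose the findings.

```python
# Minimal bias-testing sketch. The outcome and group data below are
# hypothetical; a real audit would be designed with affected communities.

def demographic_parity_difference(outcomes, groups, positive=1):
    """Absolute gap in positive-outcome rates between two groups."""
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles the two-group case only"
    rates = []
    for label in labels:
        group_outcomes = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(1 for o in group_outcomes if o == positive) / len(group_outcomes))
    return abs(rates[0] - rates[1])

# Hypothetical audit data: 1 = favourable decision, 0 = unfavourable.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")  # disclosed, not hidden
```

A gap of zero means both groups receive favourable outcomes at the same rate; the further the value is from zero, the stronger the evidence of disparate outcomes that should be disclosed and investigated.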
| COMPONENT | DESCRIPTION |
|---|---|
| Process | The Process component looks at the algorithm in its entirety. Important information to extract here includes the people and organisations involved in the system's creation, development, maintenance, ownership, and funding. It is also important to understand the intent of the algorithm, how the data is being protected, and how the system will be maintained throughout its use. Questions for investigating the algorithm come straight from the MASov principles, which provide a high-level look at the system. |
| Example Questions | Rangatiratanga: What controls do Māori have in all stages of the development of the algorithm? Whakapapa: Has this algorithmic system been used previously on Māori or other Indigenous communities? For what purpose? Whanaungatanga: How are individuals and institutions involved in the development and use of the system accountable to Māori? Kotahitanga: What strategies for capacity building are there to ensure technical literacy in the Māori communities to which the algorithm applies? Manaakitanga: Have the necessary Māori individuals and communities given free, informed, and prior consent for their data to be used in the algorithm? Kaitiakitanga: Do Māori ethics underpin the protection, access, and use of the algorithm? |
| Motives | The Motives component seeks to understand what problems the algorithm is solving (or goals it is trying to achieve), who is involved in defining those problems and goals, and where (or whether) Māori consultation has been sought. This helps determine whether an algorithmic system is the correct tool for solving the problem or achieving the goal. Preliminary steps before analysis are to understand the underlying motivations for the algorithm, whose motivations are driving the process, what the system is trying to achieve, who is involved in the process, and whether, and in what capacity, Māori have been involved in defining the motivations. |
| Example Questions | Rangatiratanga: Do the motivations/purpose of the algorithm further Māori collective aspirations? Whakapapa: Are the motivations underlying the use of the algorithm clear in providing future benefit to Māori? Whanaungatanga: Which individuals and institutions have defined the motivations of the algorithm, and what are their obligations to Māori? Kotahitanga: What harms and benefits do the motivations provide for Māori? Manaakitanga: Do the motivations uphold and maintain dignity for Māori individuals and communities? Kaitiakitanga: Do Māori have the right to change the motivations if tikanga values are not involved in the construction of the motivations? |
| Inputs | The Inputs component looks specifically at what data, variables, and other inputs (such as weights, priors, and model specifications) have been chosen for use in the computational algorithm. The inputs all depend on the decisions made and the data collected throughout the process so far, so it is important to ask questions about the consistency of the process to date as we approach this pivotal step. Prior to analysis, it is important to understand the technical components of the process, including what variables have been defined for the eventual model, what data has been used and whether it is sufficient, details of the model itself, and the algorithms that could usefully be run with the data available (an illustrative input-recording sketch follows this table). |
| Example Questions | Rangatiratanga: What controls do Māori have to determine what inputs are tapu (closed) or noa (open)? Whakapapa: Do the inputs used align with the (Māori) motivations of the algorithm? Whanaungatanga: Who (individuals/institutions) is responsible for the protection of inputs (Māori data)? Kotahitanga: What are the strategies to build technical capacity and knowledge for Māori issues surrounding the protection of inputs? Manaakitanga: Have appropriate Māori communities given consent for the application of their data being used in the algorithm? Kaitiakitanga: Do the inputs have the necessary protocols in place for protection and security? |
| Outputs | Once inputs are chosen, they are fed into the computational algorithms, and outputs are generated. The Outputs component looks at how the outputs are interpreted and communicated to decision-makers. Other important aspects involve access to outputs, benefit-sharing, capacity building, and if and how Māori are involved in the interpretation of outputs. Important things to understand are who is analysing and interpreting the outputs, the quality of the outputs, and who decides whether outputs are correct or relevant to the motivations of the algorithm. Since outputs are newly generated knowledge created from inputs, it is important that all outputs about Māori are treated with the same care and respect as Māori data (see the provenance sketch after this table). |
| Example Questions | Rangatiratanga: Do the outputs and the analysis and interpretation of the outputs contribute to Māori self-determination and aspirations? Whakapapa: Are outputs about Māori consistent with the inputs (specifically Māori data)? Whanaungatanga: What are the obligations of the individuals and institutions that generate new outputs, to the Māori individuals and communities that the outputs describe? Kotahitanga: Have the outputs been interpreted through the correct Māori lens? Manaakitanga: Do the findings of the outputs, and the analysis and interpretation of the outputs, uphold Māori dignity? Kaitiakitanga: Are the outputs about Māori treated the same as Māori data with respect to controls and protections? |
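
As flagged in the Inputs row, one way to operationalise that component is to record every data source, variable, and modelling choice in a single structure that can be reviewed against the MASov questions. The sketch below is an assumed schema, not one prescribed by the MASov framework; field names such as `tapu_fields` and `consent_obtained` are hypothetical.

```python
# Illustrative record of the Inputs component. The schema is an assumption
# for this sketch, not part of the MASov framework itself.
from dataclasses import dataclass, field

@dataclass
class AlgorithmInputs:
    """Data, variables, and modelling choices feeding a computational algorithm."""
    data_sources: list[str]                                  # where each dataset came from
    variables: list[str]                                     # variables defined for the model
    priors: dict[str, float] = field(default_factory=dict)   # priors/weights, if any
    tapu_fields: list[str] = field(default_factory=list)     # inputs Māori have deemed closed (tapu)
    consent_obtained: bool = False                           # free, prior, and informed consent?
    custodian: str = ""                                      # who protects these inputs

    def open_for_use(self) -> bool:
        # Inputs are usable only with consent and a named custodian.
        return self.consent_obtained and bool(self.custodian)

inputs = AlgorithmInputs(
    data_sources=["survey_2023.csv"],  # hypothetical dataset
    variables=["age", "region"],
    tapu_fields=["iwi_affiliation"],
    consent_obtained=True,
    custodian="Data governance rōpū",
)
print(inputs.open_for_use())  # True
```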
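
For the Outputs row, the sketch below shows one way outputs could automatically inherit the controls of the Māori data they were derived from, so that outputs about Māori receive the same protections as the inputs. The `ProtectedOutput` wrapper, its fields, and the control names are hypothetical.

```python
# Illustrative provenance tagging: outputs inherit the union of their source
# datasets' controls. The wrapper and control names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectedOutput:
    value: float                   # the generated result
    derived_from: tuple[str, ...]  # input datasets it was computed from
    protections: tuple[str, ...]   # controls inherited from those inputs

def tag_output(value: float, sources: dict[str, tuple[str, ...]]) -> ProtectedOutput:
    """Attach the union of the source datasets' protections to an output."""
    inherited: set[str] = set()
    for controls in sources.values():
        inherited.update(controls)
    return ProtectedOutput(
        value=value,
        derived_from=tuple(sources),
        protections=tuple(sorted(inherited)),
    )

# Hypothetical source dataset with two inherited controls.
sources = {"survey_2023.csv": ("access-restricted", "iwi-approval-required")}
result = tag_output(0.42, sources)
print(result.protections)  # ('access-restricted', 'iwi-approval-required')
```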
