Jun 30, 2020
 

My thesis is still embargoed. Recently, someone (a researcher?) requested a copy, giving the following reason: 'We are doing some research into a PIA system and would like to read your full paper. We will obviously credit your research in our findings and presentations. Please consider removing restrictions or uploading the full text to the archive so that it will be available immediately to future searchers.'

For now, here are Chapter 1, the Table of Contents and the preambles. (pdf)

Aug 12, 2016
 

Semiotics is important because it can help us not to take 'reality' for granted as something having a purely objective existence which is independent of human interpretation. It teaches us that reality is a system of signs. Studying semiotics can assist us to become more aware of reality as a construction and of the roles played by ourselves and others in constructing it. It can help us to realize that information or meaning is not 'contained' in the world or in books, computers or audio-visual media. Meaning is not 'transmitted' to us – we actively create it according to a complex interplay of codes or conventions of which we are normally unaware. Becoming aware of such codes is both inherently fascinating and intellectually empowering.

We learn from semiotics that we live in a world of signs and we have no way of understanding anything except through signs and the codes into which they are organized. Through the study of semiotics we become aware that these signs and codes are normally transparent and disguise our task in 'reading' them. Living in a world of increasingly visual signs, we need to learn that even the most 'realistic' signs are not what they appear to be. By making more explicit the codes by which signs are interpreted we may perform the valuable semiotic function of 'denaturalizing' signs. In defining realities signs serve ideological functions. Deconstructing and contesting the realities of signs can reveal whose realities are privileged and whose are suppressed. The study of signs is the study of the construction and maintenance of reality. To decline such a study is to leave to others the control of the world of meanings which we inhabit.

Extracted from 'Semiotics for Beginners' by Daniel Chandler.

Aug 05, 2016
 

Caught between three world views as described in this passage:

Scientific knowledge is constructed socially by subjective minds interacting with nature. It, therefore, seems obvious that we have to admit that our inner “subjective” world is as foundational a part of reality as “objective” external nature and “intersubjective” social worlds.

But western scientific culture lacks a transdisciplinary framework that can encompass all three worlds without reducing any of them to byproducts of the development of one of the others. We need such a non-reductionistic framework more than ever as our basic problems often arise in the gaps between the recognized disciplines.

Interdisciplinary work needs a transdisciplinary framework for mutual orientation and context determination.

A sort of common map, so to speak, on which to point out, recognize and understand each other's territories.

Extracted from 'The necessity of Trans-Scientific Frameworks for doing Interdisciplinary Research' by Professor Søren Brier.

Now, how do I apply the theory and the transdisciplinary framework to my research on data breach incident response?

On initial exploration, it seems that Peirce's work on semiotics provides the necessary framework in the form of Firstness, Secondness and Thirdness – Peirce's triad.

Sep 07, 2015
 

OASIS Cyber Threat Intelligence Technical Committee (CTI TC)

Information extracted from the site:
Overview

The OASIS Cyber Threat Intelligence (CTI) TC was chartered to define a set of information representations and protocols to address the need to model, analyze, and share cyber threat intelligence. In the initial phase of TC work, three specifications will be transitioned from the US Department of Homeland Security (DHS) for development and standardization under the OASIS open standards process: STIX (Structured Threat Information Expression), TAXII (Trusted Automated Exchange of Indicator Information), and CybOX (Cyber Observable Expression).

The OASIS CTI Technical Committee will:

• define composable information sharing services for peer-to-peer, hub-and-spoke, and source-subscriber threat intelligence sharing models
• develop standardized representations for campaigns, threat actors, incidents, tactics, techniques, and procedures (TTPs), indicators, exploit targets, observables, and courses of action
• develop formal models that allow organizations to develop their own standards-based sharing architectures to meet specific needs

I will certainly be interested in the ‘incidents, indicators, observables and courses of action’. Anything shareable is worth researching.
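
To get a concrete feel for what a shareable 'indicator' might look like, here is a minimal Python sketch of the kind of structured, machine-readable object these specifications aim at. The field names follow the STIX 2.x JSON serialization rather than the original DHS drafts, and all the values are purely illustrative, not normative.

    import json
    import uuid
    from datetime import datetime, timezone

    # current UTC timestamp in the RFC 3339 style STIX uses (illustrative precision)
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

    indicator = {
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--" + str(uuid.uuid4()),
        "created": now,
        "modified": now,
        "name": "Suspected C2 address",                   # hypothetical content
        "pattern": "[ipv4-addr:value = '203.0.113.42']",  # documentation IP range
        "pattern_type": "stix",
        "valid_from": now,
    }

    print(json.dumps(indicator, indent=2))

The appeal for research is exactly this: once an indicator is a plain, self-describing object, it can be exchanged, aggregated and analyzed across organizations.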

Jul 23, 2015
 

My research is not directly on 'secure system design and development'. Still, it is worth posting the Saltzer-Schroeder principles here to remind myself that there are principles that all software engineers and cybersecurity researchers are said to be embracing. But are they really embracing them?

The following text is extracted from the report 'Toward a Safer and More Secure Cyberspace', issued by the National Academy of Sciences, US.

Box 4.1 summarizes the classic Saltzer-Schroeder principles, first published in 1975, that have been widely embraced by cybersecurity researchers. (my italics)

BOX 4.1
The Saltzer-Schroeder Principles of Secure System Design and Development
Saltzer and Schroeder articulate eight design principles that can guide system design and contribute to an implementation without security flaws:

• Economy of mechanism: The design should be kept as simple and small as possible. Design and implementation errors that result in unwanted access paths will not be noticed during normal use (since normal use usually does not include attempts to exercise improper access paths). As a result, techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential.

• Fail-safe defaults: Access decisions should be based on permission rather than exclusion. The default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. The alternative, in which mechanisms attempt to identify conditions under which access should be refused, presents the wrong psychological base for secure system design. This principle applies both to the outward appearance of the protection mechanism and to its underlying implementation.

• Complete mediation: Every access to every object must be checked for authority. This principle, when systematically applied, is the primary underpinning of the protection system. It forces a system-wide view of access control, which, in addition to normal operation, includes initialization, recovery, shutdown, and maintenance. It implies that a foolproof method of identifying the source of every request must be devised. It also requires that proposals to gain performance by remembering the result of an authority check be examined skeptically. If a change in authority occurs, such remembered results must be systematically updated.

• Open design: The design should not be secret. The mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, more easily protected, keys or passwords. This decoupling of protection mechanisms from protection keys permits the mechanisms to be examined by many reviewers without concern that the review may itself compromise the safeguards. In addition, any skeptical users may be allowed to convince themselves that the system they are about to use is adequate for their individual purposes. Finally, it is simply not realistic to attempt to maintain secrecy for any system that receives wide distribution.

• Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key. The reason for this greater robustness and flexibility is that, once the mechanism is locked, the two keys can be physically separated and distinct programs, organizations, or individuals can be made responsible for them. From then on, no single accident, deception, or breach of trust is sufficient to compromise the protected information.

• Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. This principle reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to the possible misuse of a privilege, the number of programs that must be audited is minimized.

• Least common mechanism: The amount of mechanism common to more than one user and depended on by all users should be minimized. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to ensure that it does not unintentionally compromise security. Further, any mechanism serving all users must be certified to the satisfaction of every user, a job presumably harder than satisfying only one or a few users.

• Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. More generally, the use of protection mechanisms should not impose burdens on users that might lead users to avoid or circumvent them—when possible, the use of such mechanisms should confer a benefit that makes users want to use them. Thus, if the protection mechanisms make the system slower or cause the user to do more work—even if that extra work is “easy”—they are arguably flawed.
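
The principles are abstract, but a few of them translate directly into code. Below is a toy Python sketch (my own illustration, not from the report) showing fail-safe defaults, complete mediation and least privilege in a miniature access-control check; the subjects and permission sets are hypothetical.

    # Toy illustration (mine, not from the report) of three principles at once:
    # fail-safe defaults, complete mediation, and least privilege.

    # subject -> set of (object, action) pairs it is *explicitly* granted;
    # least privilege: each subject gets only what its job requires
    PERMISSIONS = {
        "report_generator": {("incident_db", "read")},
        "incident_handler": {("incident_db", "read"), ("incident_db", "append")},
    }

    def access_allowed(subject: str, obj: str, action: str) -> bool:
        """Complete mediation: every access request is routed through this one check.
        Fail-safe default: anything not explicitly granted is denied."""
        return (obj, action) in PERMISSIONS.get(subject, set())

    assert access_allowed("incident_handler", "incident_db", "append")
    assert not access_allowed("report_generator", "incident_db", "append")  # denied
    assert not access_allowed("unknown_subject", "incident_db", "read")     # denied

The point of the sketch is the shape, not the code: a single choke-point check, an empty-by-default permission table, and per-subject grants no broader than the task requires.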

Apr 25, 2015
 

Catching up with my collection of notes and posting this image from the Verizon 2014 Data Breach Investigations Report showing the nine common incident patterns. The report states: 'Within each of those patterns, we cover the actors who cause them, the actions they use, assets they target, timelines in which all this took place, and give specific recommendations to thwart them.'
[Image: the nine common incident patterns, Verizon 2014 DBIR]

BREACHES VS INCIDENTS?
This report uses the following definitions:
Incident: A security event that compromises the integrity, confidentiality, or availability of an information asset.
Breach: An incident that results in the disclosure or potential exposure of data.
Data disclosure: A breach for which it was confirmed that data was actually disclosed (not just exposed) to an unauthorized party.
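
The three definitions nest: every data disclosure is a breach, and every breach is an incident. A minimal Python sketch of that nesting (my own illustration; the event fields are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class SecurityEvent:
        compromises_cia: bool        # integrity, confidentiality or availability hit
        data_exposed: bool           # data disclosed or potentially exposed
        disclosure_confirmed: bool   # confirmed disclosure to an unauthorized party

    def classify(e: SecurityEvent) -> str:
        """Apply the report's definitions, from broadest to most specific."""
        if not e.compromises_cia:
            return "event"
        if not e.data_exposed:
            return "incident"
        return "data disclosure" if e.disclosure_confirmed else "breach"

    print(classify(SecurityEvent(True, True, False)))  # -> breach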

Sep 09, 2014
 

How many '*ologies'-related terms do I really know and understand?

Being a software analyst (among other 'beings'), I tend to start by looking for ways, or more formally a methodology, to identify or classify the objects I'm interested in exploring.

There's a saying in management speak that 'you can't manage it if you can't measure it'. To measure it, one also has to know what 'it' is, so the first step appears to be to identify it. That, of course, assumes I have already set up the context for identifying it.

In the world of research, identification or classification is considered one of 'the most central and generic conceptual exercises'. Conceptually, classification takes two forms: typologies and taxonomies.

Further clues are in this article:
G. Paré, M.-C. Trudel, M. Jaana, S. Kitsiou, 'Synthesizing Information Systems Knowledge: A Typology of Literature Reviews', Information & Management (2014), http://dx.doi.org/10.1016/j.im.2014.08.008

In essence, a typology is derived deductively, while a taxonomy is usually derived empirically or inductively, using cluster analysis or other statistical methods.
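
To make the inductive route concrete, here is a minimal Python sketch of deriving candidate taxa by cluster analysis. It assumes scikit-learn is available, and the incident 'feature vectors' are toy data of my own invention:

    import numpy as np
    from sklearn.cluster import KMeans

    # each row: one incident encoded as numeric features
    # (e.g. actor type, asset type, action) -- purely illustrative toy data
    incidents = np.array([
        [1, 0, 0], [1, 0, 1], [0, 1, 0],
        [0, 1, 1], [1, 1, 0], [0, 0, 1],
    ])

    # cluster analysis proposes candidate taxa from the data itself
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(incidents)
    print(labels)  # cluster ids = candidate taxa, still to be named and interpreted

The cluster ids are only candidate taxa; naming and interpreting them remains a conceptual exercise.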

Which term to use depends on what I want to do with it.
