
Oct 26 2015
 

Back-posting about an event from September 2015…

In reviewing literature, I came across a conference – the ICGS3 – that was due to be hosted in my neighbourhood on 15th September 2015. I signed up at the very last minute and attended.

Just for my own record, the URLs for the live broadcast:

http://bit.ly/icgs3-2015-dayone

http://bit.ly/icgs3-2015-daytwo

http://bit.ly/icgs3-2015-daythree

 Posted on October 26, 2015 at 7:26 pm
Oct 26 2015
 

Someone told me a long time ago that learning boring stuff is what makes life ‘easy’ and/or a ‘safe’ bet.

My PhD focuses on pretty boring stuff, for which I will somehow have to find a novel way of looking at the problem and solution domains.

Here’s a site – the TERENA Incident Taxonomy and Description Working Group – which has tons of ‘boring stuff’, but I guess it is damn important for folks who are into such matters. The problem is that there is so much of this ‘boring stuff’, and I will have to navigate my way through all the ‘important, boring stuff’ to do my PhD (groan!).

 Posted on October 26, 2015 at 7:07 pm
Oct 26 2015
 

I am now at the start of the 3rd year of my PhD, and I’m still reviewing literature and discussing my research aims/objectives etc. with yet another lot of new supervisors.

To keep it short, I nearly ‘quit’ (and/or was forced out of!) the PhD due to circumstances which could have been avoided with regular and ‘keen’ interaction with the supervisor. Another contributing factor was that my research topic, involving Eastern Theory and approaches, was totally ‘alien’ and was viewed as ‘high risk’.
Some collected information at jollyvip.com/wuxing

So here I’m re-starting on a new topic, which hopefully is considered ‘safe’ enough for Westerners.
I could write a Thesis just on my PhD experience so far!

 Posted on October 26, 2015 at 6:55 pm
Sep 07 2015
 

OASIS Cyber Threat Intelligence Technical Committee (CTI TC)

Extracted information from the site:
Overview

The OASIS Cyber Threat Intelligence (CTI) TC was chartered to define a set of information representations and protocols to address the need to model, analyze, and share cyber threat intelligence. In the initial phase of TC work, three specifications will be transitioned from the US Department of Homeland Security (DHS) for development and standardization under the OASIS open standards process: STIX (Structured Threat Information Expression), TAXII (Trusted Automated Exchange of Indicator Information), and CybOX (Cyber Observable Expression).

The OASIS CTI Technical Committee will:

• define composable information-sharing services for peer-to-peer, hub-and-spoke, and source-subscriber threat intelligence sharing models
• develop standardized representations for campaigns, threat actors, incidents, tactics, techniques, and procedures (TTPs), indicators, exploit targets, observables, and courses of action
• develop formal models that allow organizations to develop their own standards-based sharing architectures to meet specific needs

I will certainly be interested in the ‘incidents, indicators, observables and courses of action’. Anything shareable is worth researching.
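Just to make the idea concrete for myself: a shareable ‘indicator’, its observable, and a suggested course of action could be sketched as a plain record. Everything below – the field names, the helper function, the example values – is my own illustration, NOT the actual STIX schema (which the TC is still standardizing):

```python
import json

# A minimal, illustrative sketch of a shareable 'indicator' record.
# Field names here are my own invention, not the real STIX format.
def make_indicator(title, observable_type, observable_value, course_of_action):
    return {
        "type": "indicator",
        "title": title,
        "observable": {
            "type": observable_type,    # e.g. a domain name or file hash
            "value": observable_value,
        },
        "suggested_coa": course_of_action,  # a human-readable course of action
    }

indicator = make_indicator(
    title="Known phishing domain",
    observable_type="domain-name",
    observable_value="bad.example.com",
    course_of_action="Block at DNS resolver",
)
# JSON makes the record trivially shareable between organisations.
print(json.dumps(indicator, indent=2))
```

The appeal (to me) of the standardization effort is exactly this: once everyone agrees on the record shape, the sharing services in the first bullet above become a plumbing problem rather than a semantics problem.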

 Posted on September 7, 2015 at 8:18 pm
Sep 03 2015
 

The Federal Financial Institutions Examination Council (FFIEC) Cybersecurity Assessment Tool

The news release is at FFIEC Releases Cybersecurity Assessment Tool

Here’s the extracted news:

FFIEC Releases Cybersecurity Assessment Tool

The Federal Financial Institutions Examination Council (FFIEC), on behalf of its members, today released a Cybersecurity Assessment Tool (Assessment) to help institutions identify their risks and assess their cybersecurity preparedness.
Financial institutions of all sizes may use the Assessment and other methodologies to perform a self-assessment and inform their risk management strategies. The release of the Cybersecurity Assessment Tool follows last year’s pilot assessment of cybersecurity preparedness at more than 500 institutions. The FFIEC members plan to update the Assessment as threats, vulnerabilities, and operational environments evolve.
In addition to the Assessment, the FFIEC has also made available resources institutions may find useful, including an executive overview, a user’s guide, an online presentation explaining the Assessment, and appendices mapping the Assessment’s baseline maturity statements to the FFIEC Information Technology Examination Handbook, mapping all maturity statements to the National Institute of Standards and Technology’s Cybersecurity Framework, and providing a glossary of terms.
The FFIEC members are also encouraging institutions to comment on the Assessment through an upcoming Paperwork Reduction Act notice in the Federal Register.
The FFIEC provides several resources to further awareness of cyber threats and help financial institutions improve their cybersecurity. These resources are available on the FFIEC website at http://www.ffiec.gov/cybersecurity.htm.

Jul 23 2015
 

My research is not directly on ‘secure system design and development’. Still… it is worth posting the Saltzer-Schroeder principles here, to remind myself that there are principles that all software engineers and cybersecurity researchers are embracing. Are they really embracing these principles?

The following text is extracted from the report ‘Towards a Safer and More Secure Cyberspace’, issued by the National Academy of Sciences, US.

Box 4.1 summarizes the classic Saltzer-Schroeder principles, first published in 1975, that have been widely embraced by cybersecurity researchers. (my italic)

BOX 4.1
The Saltzer-Schroeder Principles of Secure System Design and Development
Saltzer and Schroeder articulate eight design principles that can guide system design and contribute to an implementation without security flaws:

• Economy of mechanism: The design should be kept as simple and small as possible. Design and implementation errors that result in unwanted access paths will not be noticed during normal use (since normal use usually does not include attempts to exercise improper access paths). As a result, techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential.

• Fail-safe defaults: Access decisions should be based on permission rather than exclusion. The default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. The alternative, in which mechanisms attempt to identify conditions under which access should be refused, presents the wrong psychological base for secure system design. This principle applies both to the outward appearance of the protection mechanism and to its underlying implementation.

• Complete mediation: Every access to every object must be checked for authority. This principle, when systematically applied, is the primary underpinning of the protection system. It forces a system-wide view of access control, which, in addition to normal operation, includes initialization, recovery, shutdown, and maintenance. It implies that a foolproof method of identifying the source of every request must be devised. It also requires that proposals to gain performance by remembering the result of an authority check be examined skeptically. If a change in authority occurs, such remembered results must be systematically updated.

• Open design: The design should not be secret. The mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, more easily protected, keys or passwords. This decoupling of protection mechanisms from protection keys permits the mechanisms to be examined by many reviewers without concern that the review may itself compromise the safeguards. In addition, any skeptical users may be allowed to convince themselves that the system they are about to use is adequate for their individual purposes. Finally, it is simply not realistic to attempt to maintain secrecy for any system that receives wide distribution.

• Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key. The reason for this greater robustness and flexibility is that, once the mechanism is locked, the two keys can be physically separated and distinct programs, organizations, or individuals can be made responsible for them. From then on, no single accident, deception, or breach of trust is sufficient to compromise the protected information.

• Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. This principle reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to the possible misuse of a privilege, the number of programs that must be audited is minimized.

• Least common mechanism: The amount of mechanism common to more than one user and depended on by all users should be minimized. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to ensure that it does not unintentionally compromise security. Further, any mechanism serving all users must be certified to the satisfaction of every user, a job presumably harder than satisfying only one or a few users.

• Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. More generally, the use of protection mechanisms should not impose burdens on users that might lead users to avoid or circumvent them—when possible, the use of such mechanisms should confer a benefit that makes users want to use them. Thus, if the protection mechanisms make the system slower or cause the user to do more work—even if that extra work is “easy”—they are arguably flawed.
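Two of these principles – fail-safe defaults and complete mediation – are easy to sketch in code. The toy Python below is entirely my own illustration (the class, method names, and example data are all made up, not from the report), but it shows the default-deny idea: every request funnels through a single check, and anything not explicitly granted is refused.

```python
# A toy access controller illustrating 'fail-safe defaults' and
# 'complete mediation': every request passes through check_access(),
# and the default answer is deny -- access is granted only when an
# explicit (user, action, object) permission has been recorded.
class AccessController:
    def __init__(self):
        self._permissions = set()   # explicit grants only; empty = deny all

    def grant(self, user, action, obj):
        self._permissions.add((user, action, obj))

    def check_access(self, user, action, obj):
        # Fail-safe default: anything not explicitly granted is denied.
        return (user, action, obj) in self._permissions

ac = AccessController()
ac.grant("alice", "read", "report.pdf")

print(ac.check_access("alice", "read", "report.pdf"))   # granted explicitly
print(ac.check_access("alice", "write", "report.pdf"))  # denied by default
print(ac.check_access("bob", "read", "report.pdf"))     # denied by default
```

Note how the ‘wrong psychological base’ warning plays out: a blocklist version of `check_access` would have to enumerate every forbidden combination, and any combination the designer forgot would be silently allowed.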

 Posted on July 23, 2015 at 10:07 pm
May 10 2015
 

My brain is ‘hurting’ today. So I’m chilling out by making my first bread in my newish oven (in my newish London flat). As I don’t have any measuring jugs or scales, I have to somehow convert all the measurements, i.e. ml, g and kg, into ‘cups’. In the past, I would have rushed out to buy the necessary tools (mine are in my house in Staines) to make sure I got a near-perfect bread. Well… now I’m more ‘mellowed’, and just happy to make do with whatever I have… The result will be in a couple of hours’ time, when I put the dough to the test.
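For the record, the conversions I ended up doing in my head, sketched in Python. The cup size (~240 ml) and the per-ingredient densities are approximate figures I’m assuming – real cups and real flour vary, which is fine for a rustic loaf.

```python
# Rough metric-to-US-cup conversions for baking, assuming a cup of
# ~240 ml and approximate ingredient densities (flour ~120 g/cup,
# sugar ~200 g/cup, water 240 g/cup). Ballpark figures only.
ML_PER_CUP = 240.0
GRAMS_PER_CUP = {"flour": 120.0, "sugar": 200.0, "water": 240.0}

def ml_to_cups(ml):
    # Volume conversion is exact once you fix the cup size.
    return ml / ML_PER_CUP

def grams_to_cups(grams, ingredient):
    # Weight conversion needs a density assumption per ingredient.
    return grams / GRAMS_PER_CUP[ingredient]

print(round(ml_to_cups(300), 2))              # 300 ml of water -> 1.25 cups
print(round(grams_to_cups(500, "flour"), 2))  # 500 g of flour -> ~4.17 cups
```

The weight-to-cup direction is the lossy one: 500 g of sifted flour and 500 g of packed flour are quite different numbers of cups, hence the ‘near perfect’ caveat above.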

Now, why is my brain ‘hurting’? I suspect it’s because I watched TV till late last night (Robin Hood!) instead of reading and writing my transfer thesis. I say ‘hurting’ because I even forgot to turn off my kitchen tap – I left it running for I don’t remember how long?!

Mmm… am I ‘mellowed’ in my research process and reasoning as well?

I have read a bit about so-called scientific reasoning, i.e. deductive, inductive and abductive reasoning. I guess the scientific process is linked to the reasoning. I say ‘I guess’ because I don’t know if there’s a standard or recognised method/approach for measuring or assessing ‘scientific process(es)’. What constitutes a ‘scientific process’?

Perhaps I ought to frame my question as: what constitutes a research process?

Time to test my dough…

 Posted on May 10, 2015 at 3:44 pm