> From: ext René Mayrhofer
> Sent: Friday, April 08, 2011 12:23 PM
> To: Schaufler Casey (Nokia-SD/SiliconValley)
> Cc: [email protected];
> Subject: Re: [Meego-security-discussion] Arbitrary 3rd Party Code
> On Friday, 08 April 2011, at 19:53:14, the following was written:
> > > Imagine for example a game application that should be able to set
> > > reminders in the user's calendar.
> > You have two applications, a game and a calendar.
> > The game pushes information to the calendar.
> Yes. The calendar would/should be the built-in device calendar (as a
> stand-in for arbitrary other standard applications shipped with each
> MeeGo release).
> > > As another (unrelated) function of the game, it needs to
> > > communicate with a central server to exchange high scores.
> > The game has bidirectional communication with the score-server.
> > > Upon
> > > installation time, the user might grant the application access to the
> > > calendar to post and maybe read and modify/delete existing entries as
> > > well as allowing it to connect to this server.
> > The installer instructs the computer to allow the game and the
> > score-server to communicate.
> > The installer instructs the computer to allow the game and the
> > calendar to cooperatively manage game related calendar entries.
> Yes. Again, I am thinking about communicating with the built-in PIM type
> applications that manage access to private/personal/sensitive user data.
> Sounds simple.
> Not necessarily ;-)
> > The calendar entry maintenance demonstrates how a simple requirement
> > can lead to excesses of architecture.
> > One approach that the game and the calendar can use to accomplish
> > the updates is for the game to blindly push calendar update requests
> > to the calendar.
> Agreed, problem solved in the simple case, but not when you extend the
> scenario to e.g. moving existing (game-related) calendar entries,
> removing them, or checking the built-in task list to see if the user has
> already finished a game-related task.
> > Another approach is for the game and the calendar to use
> > bidirectional communications to negotiate the updates, allowing
> > for the game to modify its expectations in the face of schedule
> > conflicts.
> > Finally, the game could be granted access to the data that the
> > calendar uses.
> > These three scenarios are all rational in certain contexts.
> > Each has its own set of security, performance and usability issues.
> > The security implementation for each can be very different.
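The trade-off between these scenarios can be sketched in a few lines of Python. All names below are invented for illustration; this is not a MeeGo or calendar API. Scenario one keeps the calendar as the sole policy-enforcing component, while scenario three makes the game one as well:

```python
# Hypothetical sketch of scenario 1 (blind push) vs. scenario 3 (direct
# data access); names are illustrative, not real MeeGo or calendar APIs.

class Calendar:
    """Calendar service that mediates every update request."""
    def __init__(self):
        self.entries = []          # data that only the calendar touches

    def push_entry(self, app_id, title, when):
        # Scenario 1: the game blindly pushes a request; the calendar
        # decides whether to apply it, so it alone enforces the policy.
        if not title or when < 0:
            return False           # calendar rejects malformed requests
        self.entries.append({"owner": app_id, "title": title, "when": when})
        return True

cal = Calendar()

# Scenario 1: mediated push -- the game never touches other entries.
assert cal.push_entry("game", "Play round 2", when=1700000000)

# Scenario 3: direct access to the calendar's data -- now the game is a
# security-enforcing component too and must appear in the calendar policy.
cal.entries.append({"owner": "game", "title": "unchecked", "when": -1})
print(len(cal.entries))  # prints 2: both entries landed, only one vetted
```

The point of the sketch is that the security analysis differs per scenario: in the mediated case the calendar's input validation is the policy, while in the direct-access case the policy must cover the game's code as well.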
> Fully agreed. However, looking at many of the current Android applications
> (I am referring to Android because I know its security architecture much
> better than e.g. the iPhone security measures, but the same concepts apply
> to other markets), I see many examples that are related to this simplified
> use case: applications accessing the full contacts database just because
> they offer sending some application-related snippet via email or SMS;
> applications requiring telephony access because they allow the user to
> trigger a phone call to a phone number found in the (typically social
> network type) application data; applications requesting full network
> communication privileges just because they want to display ads or check
> for updates of some application-related data from the developer server;
> or applications requesting su shell privileges just to set one kernel
> variable. That is, applications require access to some local or remote
> data set, but only to a (completely legitimate) subset of the available
> entries. With current approaches, I don't see a standard way to enforce
> this "subset of some data resource" access in a way that end-users
> installing the application would be able to understand. [That is one of
> the main reasons why I strive for the simplest possible security model:
> if end-users don't understand it, it's (mostly) worthless.]
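One concrete reading of this "subset of some data resource" requirement is a mediating service that filters queries by the calling application's identity. The sketch below is hypothetical: the data layout and the `query_contacts` function are invented for illustration, not an existing platform API:

```python
# Illustrative sketch: a PIM service that grants each application access
# only to the subset of entries it created itself. All names are invented.

CONTACTS = [
    {"id": 1, "name": "Alice",      "created_by": "addressbook"},
    {"id": 2, "name": "Bob",        "created_by": "addressbook"},
    {"id": 3, "name": "Guild mate", "created_by": "game"},
]

def query_contacts(app_id):
    # The service, not the client, applies the subset policy: an app sees
    # only the entries it owns, never the full contacts database.
    return [c for c in CONTACTS if c["created_by"] == app_id]

print([c["name"] for c in query_contacts("game")])   # prints ['Guild mate']
print(len(query_contacts("addressbook")))            # prints 2
```

The design choice here is that the subset policy lives in one place (the service), which is easier to audit than trusting every client to request only what it needs.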
You are describing application object management (e.g. calendar entries)
and OS capability (e.g. CAP_SYS_TIME, CAP_SYS_ADMIN) management. This all
has to happen in the application space and there are exactly two options.
One is to restrict the applications to those that comply with some
criteria. Both Apple and Android use this model, with Apple and Google
taking different approaches to the enforcement of adherence to criteria.
The other approach is to leave the behavior of application space to the
developers of applications. This is the linux/unix model and leaves the
application space security strictly up to the distributor. Red Hat for
example has chosen to use SELinux as a mechanism to argue that the
applications conform to a policy with regard to each other.

Application object management takes everyone by surprise. Once an
application starts providing access control services the application
becomes a security enforcing component of the system. Back in the Orange
Book days we wrote entire security policy models for print queue
management. I seriously doubt that most of the people reading this would
anticipate the issues with PostScript.

If your calendar application is accepting requests to make changes to the
calendar data, you can describe the policy it is enforcing. If the game is
allowed to modify the data without the intervention of the calendar, it
has to be included in the policy for calendar objects. SELinux attempts to
provide an OS based structure for doing this. Because applications are
rarely written with data domains in mind, you end up with large,
complicated policies.
As far as end users go, we really need to change our perception
of who the "end user" is. On a cell phone the person with the handset
in her purse does not typically know or care about the security model
of the operating system. The application writer and the service provider
do care. This is one reason for the success of Android, where the
emphasis of the platform is to make the development and deployment
of new applications easy by providing all system resources as services.
> The problem of assigning security context to applications is that it
> does not address limiting access to subsets of the data managed by one
> application.
> > > However, that does not mean that the application should be allowed to
> > > send calendar data to the server. If data from different sources (such
> > > as the calendar) was tagged appropriately and was not allowed to be
> > > sent over network connections, we could prevent a significant number
> > > of privacy leaks.
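The tagging idea in the quoted paragraph can be sketched as follows. This is a toy illustration only: the `Tagged` type and `net_send` hook are invented names, and a user-space check like this is trivially circumventable; real enforcement would need OS support:

```python
# Toy sketch of provenance tagging: data read from the calendar carries a
# "calendar" tag, and the network send hook refuses tagged data. Invented
# names; a real mechanism would need kernel enforcement.

class Tagged(str):
    """A string carrying the security tag of its source."""
    def __new__(cls, value, tag):
        obj = super().__new__(cls, value)
        obj.tag = tag
        return obj

FORBIDDEN_OVER_NETWORK = {"calendar", "contacts"}

def net_send(data):
    # Untagged (application-local) data passes; tagged private data does not.
    if getattr(data, "tag", None) in FORBIDDEN_OVER_NETWORK:
        raise PermissionError("refusing to send %s-tagged data" % data.tag)
    return True                    # pretend the bytes went out

assert net_send("highscore: 4200")             # untagged game data is fine
entry = Tagged("Dentist, 3pm", tag="calendar")
try:
    net_send(entry)
except PermissionError as e:
    print(e)  # prints "refusing to send calendar-tagged data"
```

As the rest of the thread discusses, the hard part is not this check but making sure an application cannot strip or launder the tag before the send.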
> > Oh my. I haven't seen anyone advocate information labels since the
> > Mitre Compartmented Mode Workstation specification in 1987. It can
> > be done and it has been done, it just doesn't turn out to work the
> > way you want it to.
> Casey, I highly respect and value your opinion on security architectures.
> However, this is the second time in two days that you have posted a
> rather dismissive remark about somebody's suggestion/question.
> For the record, I did not suggest information labels as such,
I understand. You did suggest a path that goes past the same dragon lairs.
> and I do not particularly want to go the path of SELinux MLS.
> Can we get back to purely technical discussions, please?
Sorry if I offended. None was intended.
> > > Will it be possible to protect against rogue
> > > applications that read private data in one context and then apply
> > > encryption/steganography/whatever to get them into another context
> > > without this being detected? No.
> > Yes.
> How (in the mobile applications context)?
Sorry, my response should have been "Yes, I agree with your conclusion."
> > > The question is therefore more a compromise: given limited resources
> > > and
> > > a finite-length security policy, against how many "standard" threats
> > > can
> > > we protect? By solving 90% of those cases where Android applications
> > > currently violate the "intended"/"expected" behavior, we would
> > > have made a large improvement.
> > I still say that your computer should not be asked to second guess
> > the intention or expectation of the user except in cases where the
> > entire software stack is under the control of a single entity that
> > is willing and able to take responsibility for the behavior.
> Did you come across the more recent papers on the usability of security
> mechanisms in the mobile domain, e.g. the authentication protocols
> usability study from Nokia research Helsinki [Usability Analysis of
> Secure Pairing Methods], a more recent one in the same area [On the
> Usability of Secure Wireless Devices Based On Distance Bounding], or
> others that followed (I am not going to reference my own papers here
> because I do not consider the studies we did statistically significant
> for a broad population)? I can recommend them as a good read, even if
> they are specific to authentication and don't cover the whole of usable
> security in mobile devices.
Authentication is an important component of security and secure
communication, but all it provides is assurance that the message came from
a particular source. It says nothing about the appropriateness of the
content of the message.
> A few years ago, I would have agreed that a "single entity" (the user of
> the device, and not any corporation with potentially conflicting
> interests) should be in full control over what may or may not happen.
> However, I have changed my opinion based on these (non-representative,
> but still clearly indicative) and other studies -- most end-users are
> simply not capable of making informed decisions about security policies;
> heck, I myself am not able to decide whether I want to install an Android
> application based on its set of required permissions. And neither do I
> think that end-users should be asked to make these decisions. It's not
> part of the job they want to get done, but gets in the way of the task
> they intend to perform. It is therefore completely understandable that
> most users choose to ignore security policies as long as "the system
> works". We need to decrease the burden placed on users when developing
> new security mechanisms, not increase it. Doing otherwise means a losing
> battle like the one still being fought by advising users to choose
> unique, strong passwords for every account and change them regularly.
This has not changed since I started working in security in the days when
dinosaurs roamed the earth and megabytes were only found on disk drives.
We released a Unix variant that we charged $5000 extra for because it had
an unprivileged root (using POSIX capabilities) and every customer's first
question was "How do I become Real Root?".
> There are multiple potential approaches to tackle this issue besides
> information labels, e.g. informal tagging of content (which is similar to
> information labels, but where each application can define its own tags),
> the whole range of techniques from data leak prevention (that is, trying
> to detect _only potentially critical_ data before allowing it into
> another context instead of tagging/labeling _all_ data), and probably
> many others I'm not currently thinking of. The scenario I described was
> intended to act as a simple threat scenario against which we can evaluate
> technical suggestions, not as an implementation description.
If informal methods are sufficient then the problems of information labels
are reasonably easy to deal with. The problem comes from trying to ensure
that the mechanism is not circumventable.
I proposed a mechanism for content based access control last year, only
to discover that Eric Paris had beaten me to it with fanotify.
But again, you need the applications to buy into it.
> best regards,