From: David Howells <dhowells@redhat.com>
Subject: Re: [PATCH 00/21] Permit multiple active LSM modules
Newsgroups: gmane.linux.kernel.lsm
Date: Thursday 3rd February 2011 11:05:47 UTC
Casey Schaufler  wrote:

> One could create an LSM that composes other LSMs. I have been working on
> such an implementation, but it isn't going to be done soon and it has to
> deal with the same set of issues you outline here. It has the advantage of
> leaving the existing LSMs closer to their current state. It would no doubt
> have performance issues beyond what a "native" implementation would.

You still have to change all the references the LSMs make to the security
pointers in the objects (such as file->f_security) as they all access those
directly.  A bunch of the patches I have here are simply wrapping those
accesses.
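
To give a rough idea of what those wrappers amount to (the names below are
invented for illustration, not lifted from the patches), an LSM stops
dereferencing file->f_security itself and instead goes through something
like:

    #include <linux/fs.h>   /* struct file, file->f_security */

    /* Sketch only: the framework hands each LSM an offset into the
     * aggregated blob at registration time; the LSM reaches its slice
     * through a wrapper rather than touching file->f_security directly. */
    static inline void *lsm_file_blob(const struct file *file,
                                      size_t file_offset)
    {
            return (char *)file->f_security + file_offset;
    }

That way the framework, not the LSM, decides where each module's data lives
within the allocation.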

Also, you still would need to sort out the fact that the LSMs chain to
commoncap routines.  You generally want to call each commoncap routine once
only.  Sometimes it's just a matter of wasted time, but sometimes it'll have
other effects too.

> Is this strictly necessary? I have been working on the notion that there
> is a master blob that has a pointer for each LSM and that the individual
> LSMs manage their own blobs. It is pretty easy to imagine an LSM with
> variable size blobs. I doubt that allocating the maximum possible blob
> every time is going to make anyone happy.

I think it's the best way.  If an LSM just requires a fixed-size blob, then
the security framework can allocate that directly.  Otherwise, if it wants a
variable size blob, it can just ask the framework for space for a pointer -
which then just falls back to your suggestion above (it can always split its
blob too).

By aggregating like this you get a number of wins:

 (1) If possible, the aggregation can be tacked on to the end of the object,
     thus eliminating the first pointer indirection and also possibly sharing
     some CPU cacheline with the object itself.

 (2) If not possible, only one pointer's worth of space need be set aside to
     reach them.  For fixed size blobs no further pointer space need be
     consumed.

 (3) All the blobs can share cachelines.  It's likely that all the blobs
     will be referenced, so if the first blob is looked at, it is likely to
     automatically draw the second into the CPU cache.

 (4) Fewer calls to the memory allocator.
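
Concretely, the bookkeeping I have in mind looks something like the sketch
below (invented names, not code from the patches):

    #include <linux/slab.h>

    /* Each LSM declares how much fixed-size space it wants per file
     * object; an LSM that wants a variable-size blob just asks for
     * sizeof(void *) and hangs its own allocation off that slot. */
    struct lsm_sizes {
            size_t file_blob;
    };

    static size_t file_blob_total;      /* sum across all registered LSMs */

    /* Registration time: hand the LSM back its offset, grow the total. */
    static size_t lsm_reserve_file_blob(const struct lsm_sizes *sizes)
    {
            size_t offset = file_blob_total;

            file_blob_total += sizes->file_blob;
            return offset;
    }

    /* One allocation covers every LSM's blob; where possible it would
     * instead be tacked onto the end of the object's own allocation. */
    static void *file_blob_alloc(gfp_t gfp)
    {
            return kzalloc(file_blob_total, gfp);
    }

Each LSM then reaches its slice with the offset it was given at registration,
as in the wrapper above.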

With your suggestion above, how do you handle just having a single LSM module
active?  Do you still have to go through two pointers?  Or do you have some
conditional branches to skip one pointer under some circumstances?

> > The security framework aggregates the security data for each object into
> > one allocation which _it_ makes rather than the LSM.  Where possible, this
> > allocation is appended to the allocation of the object itself.  The offset
> > of the security data for each class of objects for an LSM is stored in
> > that LSM's pointer table, and wrappers are then provided.
> 
> Hmm. Sounds convoluted.

Not really, I suspect I haven't explained it very well (it was 2am when I
wrote this :-).

Also is the whole thing the LSM or the LSM framework, and are modules LSMs or
LSM modules?

> > These patches theoretically permit multiple LSMs to be selected, but I
> > haven't tested that as my test machine is only set up for SELinux at this
> > time.  They do, however, work with SELinux alone.
> 
> How about you give that a try? My experience has been that any one
> LSM is easy, and that certain pairs are relatively easy to have run
> together, but they don't actually work right. More on secids below.

I plan to try stacking TOMOYO on SELinux, but I've got to set that up first
(plus I need to give some attention to other stuff).

> >  (*) Which module should get to handle secid/secctx requests?  Currently
> >      when it comes to retrieving or converting these, then the first
> >      module that offers the service gets it, and subsequent modules are
> >      ignored.
> 
> There's the elephant in the pudding. If you want audit to work correctly
> you have to make the secid_to_secctx() call produce a string that reflects
> all of the LSMs that are involved. As for the labeled networking, I fear
> that we may be forced to rethink the interfaces to distance the LSM from
> the over-the-wire representation so that it is possible for a composer or
> the infrastructure to provide unification. Plus, there are those pesky
> inode_get_secid() interfaces that I screamed so loudly against.
> 
> No, just picking the first (or last) provider/consumer of secids
> is a NAKable offense. Sorry. I would have posted something in 2010
> were it not for that.

I know.  Hence why I stated it as an issue.  The interface just doesn't
support it.

I think the obvious thing is to reject any chosen module that implements any
of these interfaces if we've already selected a module that implements them.
That would mean you can choose either Smack or SELinux, but not both.
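
Something along these lines at registration time would do it (again a sketch,
invented names):

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Only one module may claim the secid/secctx interfaces; a second
     * claimant is refused at registration. */
    static bool secctx_claimed;

    static int lsm_claim_secctx(bool implements_secctx)
    {
            if (!implements_secctx)
                    return 0;
            if (secctx_claimed)
                    return -EBUSY;  /* e.g. Smack after SELinux */
            secctx_claimed = true;
            return 0;
    }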

> >  (*) Should there be a way of marking certain module mixes as not
> >      permitted (say SELinux and Smack or TOMOYO and AppArmor)?
> 
> Oddly, I found that TOMOYO and AppArmor seem to coexist without problem.
> That's because TOMOYO doesn't do anything that conflicts actively with
> AppArmor. Any LSM that reports and/or sets its process attribute via
> /proc/self/attr/current (SELinux, Smack, AppArmor) is going to expect
> to own that interface and teaching them to share is not going to be pretty.

At a glance, I think that the only two non-compatible modules are SELinux and
Smack because they both implement the secctx/secid interface functions.

I was thinking that you probably wouldn't want to mix SELinux and Smack
because they're both object-based security modules, or TOMOYO and AppArmor
because they're both path-based security modules.

David
--
To unsubscribe from this list: send the line "unsubscribe
linux-security-module" in the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
 