Design — Isuma Media Players 2.5 documentation


Design¶

This document is the design document for what is dubbed the “3.0 generation of media players”. It covers and explains various design decisions made during its development. In doing so, we also explain some of the design of the previous versions, mostly for historical reasons. Previous historical specifications are also available in the 2.x branch of the documentation. The basis of this document was originally written by John Hodgins in an email to Antoine Beaupré and has since been repeatedly refactored. It should be considered a work in progress, in the sense that the design of the 3.0 media players may change in the future or that this design document may be out of date. The code available in the Redmine Koumbit repository should be considered the authoritative source in case of ambiguity. If such a problem is found with this document, bugs can be reported in the isuma-doc project. See also the About this document section for more information about how to maintain this documentation.

Context¶

The 2.x Media Player code base was a bit of a proof of concept – it needed to be evaluated and used to imagine what we would build from the ground up. There are a number of things we want to focus on for the long term: open sourcing, stability, and scalability. Decisions we make about these things are being applied to the 2.x development work we are doing as much as possible. The basic 2.x design is a filesystem with a server (called the “central server”) and multiple dispersed clients (called “media players”). The server tracks the locations of remote clients on networks, the locations of files on remote clients, and other metadata for files. The remote clients contact the central server to synchronize files and send location data. The central server also publishes information about files and metadata that can be used by other systems (such as a Drupal-based website) to access and control files in the filesystem.
The central server was originally implemented in Drupal with the Queues module, which was fairly inefficient but allowed for flexibly requeuing items for download and so on. A set of custom PHP scripts were also written for the media players to communicate with the central server over an XML-RPC interface. The main website would also communicate with the central server over XML-RPC to discover file locations. Various media players were built in the history of the project. The above description covers more or less the original design and 2.x implementation. The Terminology will be essential to understand the different names given to the devices as history progressed.

Requirements¶

Here is the set of requirements we will hold on to in developing prototypes for the next generation. Each requirement has, if relevant, a section about the chosen implementation. The 3.0 design is a major break from the 2.x code base, which is based on a paradigm of queues. The new paradigm is files: keeping track of their locations, storing and making available their metadata, doing things with them and to them, etc.

Open source¶

The 2.x code base is too specific to Isuma’s use case to be valuable to anyone else. The next generation should be abstracted and generalized, in order to be useful to a wider variety of projects.

Implementation: This is done by reusing existing open-source tools (mainly Puppet and git-annex) and documenting the process more thoroughly, here and in the Koumbit Redmine issue tracker. Some software is written to glue parts together, mostly Python scripts and Puppet manifests, and is available in the Puppet git repositories. All software and documentation produced by Koumbit is released under a GPL-compatible license.

Standard communication API¶

There should be a well-defined API for communication between the different entities (local servers, central servers, clients fetching content, other clients fetching metadata).
The previous 2.x communication API was through XML-RPC. XML-RPC was quite a pain to deal with, but it is RPC and generally works. JSON and REST protocols are also elegant and much simpler to use than XML-RPC.

Implementation: We have settled on using the Puppet and git-annex protocols as black boxes and expanding on them. Puppet provides a good REST API, especially through the PuppetDB system. The git-annex interface is mostly through standard SSH connections, but it can also communicate with a wide range of third-party services like Amazon S3. We are also thinking of expanding the current ping/pong test to simply try to fetch files from the local network, if available, and fall back to the upstream version otherwise, which would be implemented in a client-side Javascript library.

Location tracking¶

The Isuma Media Players project is a geographically distributed filesystem, with the files on local servers and file metadata on a central server. One could also describe the local servers and the central server as a CDN. This includes tracking of local server locations on the internet, along with files and basic filesystem functions (add, copy, delete, etc.).

Implementation: Git-annex features an extensive location tracking system that allows tracking which device has a copy of which files and enforcing a minimum number of copies. It will take care of syncing files according to flexible policies defined using git-annex’s custom language. Transfer progress will be implemented using the Puppet inventory; see Monitoring below.

Modularity¶

Code should be modular so that new functionality can be added on top of existing functionality. Also consider that there are multiple components that are isolated from each other: the local server, central server and website codebases are independent from each other. We should also consider the possibility of supporting other CMSs in the future (e.g. Wordpress).

Implementation: Puppet will take care of deploying changes and monitoring.
Git-annex will take care of syncing files and location tracking. Any website using this infrastructure will clone the git-annex repository from a central server and use git-annex to get tracking information. A standard Javascript library may take care of checking the existence of files. Plupload takes care of one-step uploads, both on the main website and on media players.

Monitoring¶

It should be possible to monitor the status of the various media players easily.

Implementation: This is implemented through the Puppet “inventory” system, which makes an inventory of various “facts” collected from the Puppet clients running on all media players. There is significant latency in those checks, however, Puppet being run around once per hour. The exact parameters to be specified are detailed in Metadata. Monitoring tools such as Munin, Logstash and/or Kibana could eventually be deployed for more precise monitoring.

Remote management¶

It should be possible to remotely manage the media players to debug problems with them and to deploy new software and configuration. Maintenance should be automated as much as possible, and we should be able to log in to the machines easily to diagnose problems and implement solutions. We should also be able to manage video playlists remotely. Download and upload bandwidth limits should be configurable remotely. It should also be possible to forbid certain files from being propagated to certain media players and to prioritise the download of certain files. A link to the dashboard of the currently active media player should be provided.

Implementation: Some parameters can be configured through Puppet, but remote control is currently limited to SSH interactions and thus reserved to developers. So we will reuse the existing autossh and auto-upgrade code for now, but it may eventually be deployed only through Puppet; see Redmine issue #17259 for progress information on this.
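As an illustration, the kind of reverse tunnel the existing autossh code keeps open could look like the following. The host name, account and port numbers here are hypothetical, not the production values:

```shell
# Keep a reverse tunnel open from the media player to the central
# server, so that developers can SSH back into the player even when
# it sits behind NAT. -M 0 disables autossh's monitoring port in
# favor of SSH keepalives; -N opens no remote command.
autossh -M 0 -N \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -R 2201:localhost:22 \
    mediaplayer01@central.example.org
```

On the central server, `ssh -p 2201 localhost` would then reach that media player's SSH daemon through the tunnel.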
A link to the media player configuration is not currently possible on the main website, and remote traffic prioritisation is not implemented either; see Redmine issue #17469 for the rationale.

Technical decisions¶

There are a few decisions to be made about the technical implementation.

Programming language and framework¶

We favor adopting existing applications as much as possible instead of writing our own software, so in that sense, this question will be answered by the best software we find for the tools we have. However, if new software is to be implemented on the server side, Python will be favored, as it supports basic POSIX primitives better than PHP and is more stable for implementing daemons and servers. The Koumbit team has sufficient experience with Python to provide support in the future.

Cron or daemons?¶

So far the cron-based approach has given us a lot of problems, as we had to implement a locking system that has repeatedly shown its flaws, thanks to PHP’s poor system-level API. We therefore look towards a syncing daemon, which git-annex provides. Still, some short-lived jobs like the Custom metadata script and stopping/starting daemons for Schedules are implemented using cron jobs.

Architecture overview¶

This diagram gives a good overview of the system.

Caution: The original design above involved having a custom-built transcoding server. Unfortunately, this implementation was never completed and therefore the transcoding server is somewhat treated like a black box. See Redmine issue #17492 for more details about the transcoding server design.

The transcoding server is built with the Media Mover Drupal module. It adds files into a git-annex repository on the transcoding server, from which files get transferred to the central server which, in turn, has the credentials to send the files up to S3.

Main website¶

The main website is a source git annex repository, where files are first added from the website.
This is where original files get “hashed” into their unique “key” storage. Files here are then transferred to the transcoding server. The repository is also used to do key lookups to find the keys to each file. The assistant also runs here to pick up (or delete!) files uploaded by the website and sync files automatically to the transcoding server.

Transcoding server¶

The transcoding server runs a source git annex repository. The files are added to it by the media mover transcoding system, and then moved to the central server for upload to S3. The original design expected files to be sent from the main website and central server for transcoding. Then scripts would have kicked in to start transcoding the files. A custom preferred content expression may be required to avoid removing the file until transcoded copies are generated. The assistant runs here to keep the repository up to date and transfer files to the central server. More details of this implementation are in the Transcoding section.

Central server¶

The central server is also a transfer git annex repository. All other git-annex repositories will push and pull from this repo through key-based SSH authentication, using keys and individual accounts per media player created by Puppet. Files from the media players, the main website and the transcoding server are uploaded here and then uploaded to S3.

Caution: More precisely, the actual preferred content is not transfer, but rather a custom preferred content expression like not inallgroup=backup, to make sure it keeps files until they get sent to S3. See Redmine issue #18170 for more details.

Note: The central server could also be an unwanted repository, but it seems those may be ignored by git-annex, which is not what we want. An assistant is running here to make synchronisation faster, but is otherwise not really necessary.

The Puppetmaster server is the main configuration server. It will store Puppet manifests that get managed through git repositories.
It is only accessible to developers through SSH. The Puppet Dashboard communicates with the Puppetmaster server to display information about the various media players to Isuma operators. We use Puppet Dashboard because it provides an ENC (External Node Classifier) that will allow us to grant configuration access to less technical users, something that is not supported by the alternative, Puppetboard. The dashboard also provides basic statistics about the status of git-annex, disk usage and bandwidth statistics (through vnstat) in a web interface, which replaces the previously custom-built Drupal website.

Media players¶

Media players are backup git annex repositories. That is: they hold a copy of all the files they can get their hands on. The assistant is also running here to download files from S3 storage and synchronize location tracking information with the central server repository through the SSH tunnel. Each media player runs a Puppet client which connects to the central Puppetmaster to deliver facts about the local media player and git-annex execution. Each media player also creates a reverse proxy connection to the central server using autossh to allow remote management.

Amazon S3¶

Amazon S3 stores all the files that are known to git-annex. It therefore behaves as a full backup. The file layout there is different from the file layout on the regular git-annex repositories, as it is only the backend storage. Files there will look something like:

SHA256E-s31959420--42422ebca6f3a41fc236a60d39261d21e78ef918cf2026a88091ab6b2a624688.mp3.m4a

Yet this is used by git-annex and the website to access files. This hashing mechanism ensures that files are deduplicated in git-annex. Otherwise no special code runs on S3 for us: we just treat it as the mass storage system that it is. Files are stored in the isuma-files bucket.
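The key format can be illustrated with plain shell tools. This is only an approximation of how git-annex derives SHA256E keys (the real implementation also sanitizes extensions, among other details), not its actual code:

```shell
# Approximate a git-annex SHA256E key for a file: the key embeds the
# file size, the SHA-256 digest of the content, and the file
# extension. Identical content therefore maps to the same key, which
# is why files are deduplicated in the S3 bucket.
sha256e_key() {
    file="$1"
    size=$(wc -c < "$file" | tr -d ' ')
    digest=$(sha256sum "$file" | cut -d ' ' -f 1)
    ext="${file##*.}"
    printf 'SHA256E-s%s--%s.%s\n' "$size" "$digest" "$ext"
}
```

Running `sha256e_key somevideo.mp4` would print a key shaped like the S3 object name shown above.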
Note about standard groups¶

Note that we use the standard groups vocabulary above to distinguish the various functions of the different git annex repositories. An overview:

Source: A source git annex repository only holds files while they are being transferred elsewhere. Its normal state is to only have the metadata.

Backup: This repository has a copy of all the files that ever existed. This is the case for the S3 and media players repositories.

Transcoding¶

Uploaded files are the “originals”. Currently, they are stored in a specific S3 bucket and also on the main website. The transcoding server NFS-mounts the main website directory and does transcoding remotely to avoid overloading the main website. This is handled by a media mover cronjob, which we would like to get rid of. Note that we also rescale images (think imagecache) right now, so this would also need to be covered by this work. One solution that John suggested was to write a daemon that would react to git annex repository changes and would do the transcoding and upload to Amazon. This way:

- the main website doesn’t have access to the AWS credentials
- transcoding still operates on a separate server
- we decouple transcoding from the main website modules
- the transcoding implementation remains stable and portable against Drupal releases and infrastructure changes

One way to react to those changes could be through regular git hooks or the git-annex-specific post-update hook that was recently added. We should probably look at existing implementations of such a transcoding daemon, and how to hook that into git. Otherwise I would suggest using Python to implement this, as it is future-proof and an elegant and stable enough language to have a lower barrier to entry. This could all be done in a second phase of the 3G media players. Followup is in Redmine issue #17492.

Metadata¶

Caution: This is not implemented yet, as it needs some help from the transcoding server.
For now, we only use path-based preferred content expressions; see Changing preferred content. See also Redmine issue #17492 for details about the transcoding server integration.

We want to attach metadata to files. A few fields should be defined:

- mimetype: image/jpg, video/…
- quality: sd/hd/original/…
- original: name of the original file, allowing transcoded versions to be linked to their original file; absent for originals
- channel: the number of the channels the file was published to (multiple values)
- site: “” for now

The above metadata can then be used to have certain media players sync only certain content. For example, a given media player may only carry a certain channel or site, or certain quality settings. Those could then be used to determine the preferred content of a set of media players (or a single one). We can then create groups (using the git annex groupwanted command) and assign media players to those groups (using the git annex group command). For example, this would create a group for sd and hd files and assign the current media player to it:

git annex groupwanted sdhd 'metadata=quality=sd or metadata=quality=hd'
git annex group here sdhd

The specific transfer group can be chosen on the commandline or in a dropdown in the webapp interface, but groups need to be created on the commandline or in the configuration file. So the group definition would be propagated through Puppet and could be set using the ENC. Note that those groups will not make git-annex drop non-matching files. In other words, files that match the pattern will be kept, but other files are not necessarily removed immediately. To add a file to channels (say 1 and 2), the web site would need to run a command such as:

git annex metadata -s channel+=1 -s channel+=2 file

Arbitrary tags could also be used:

git annex metadata -t word -t awesome file

Schedules¶

Download schedules are not managed by git-annex yet.
We have made Puppet rules to enforce the sync schedules by disabling the S3 remote at specific times, which need to be configured through the Puppet ENC. See Redmine issue #17261 for more details. We are using the annex-ignore configuration flag to disable remotes on a specific schedule, an idea documented in the “disabling a special remote” tip upstream. This remains to be connected with the Puppet Dashboard. We have considered using Puppet schedules, but those only preclude certain resources from being loaded, which is not exactly what we want. A discussion on the Puppet mailing list clarified that we had to come up with our own solution for this.

Bandwidth limits¶

Bandwidth limitation is not available natively in git-annex. One solution is to override the annex.web-download-command to specify a bandwidth limit with wget. The trickle command could also be used, but it wouldn’t be effective for manual downloads (see below). Another option may be in the AWS support. This was implemented with the annex.web-download-command (for downloads) and annex.rsync-upload-options (for uploads). It was verified that S3 uses wget for public downloads. See Redmine issue #17262 for details. This remains to be connected with the Puppet Dashboard.

File deletion and garbage collection¶

Removed files should be scheduled for deletion after a certain period of time that remains to be decided. This will be done by the assistant with the annex.expireunused configuration setting. See Redmine issue #17493 for followup. The annex.expireunused setting is used by the assistant to prune old “unused” (e.g. deleted or old versions) content. For example, this will make the assistant remove files that have been unused for 1 month:

git config annex.expireunused 1m

This setting is documented in the git-annex manpage. Files uploaded to the main website repository are automatically uploaded to S3 and dropped locally, thanks to the source group the repository is assigned to.
In a way, the files, once uploaded to S3, become locally unused, and this is why the assistant removes them.

Server-specific metadata¶

There is a certain set of metadata that isn’t the same as the “git annex metadata”. We need to propagate a certain set of server-specific metadata like the public IP address, last check-in time, and so on. This is propagated through Puppet runs, which are usually scheduled around once per hour, so there is significant latency in those checks.

Puppet facts and settings¶

Puppet facts are used to send various information about the media players to the central server. In return, the media players also receive settings that affect their behavior from the central server. Those are documented in Metadata.

Custom metadata script¶

The IP addresses of the media players are propagated using a custom Python script that saves the data in the git-annex repository. The inner workings of the script are detailed in the development section. A trace of the reasoning behind this implementation is available in Redmine issue #17091. A discussion also took place upstream, where the remote.log location was suggested. Basically, this option was retained because we wanted to avoid having another channel of communication, and remote-specific metadata has to be inspected by the website to see where files are. So it’s a logical extension of the file location tracking code. The other options that were considered (and discarded) were:

Puppet fact: required interoperation of the main website with Puppet, which required more research and a more explicit dependency on the Puppet requirement. Concerns were also raised about the security of the system, considering how critical Puppet is (because it runs as root everywhere).

Pagekite: doesn’t fulfill the requirement, because it is only a reverse proxy to bypass NATs and firewalls.
It is also a paid service, and while we could have set up our own replica, it was a big overhead and wouldn’t have given us the information we wanted about the internal and external IPs out of the box. It is still considered as an alternative for the remote access problem.

DDNS: would have involved running our own DNS server with a client that would update a pair of DNS records that would be looked up by the main website. This would have required a separate authentication system to be set up when we set up a new machine, and extra configuration on the server. Koumbit currently uses this approach for the office router (see documentation here), but only for the office router, a quite different use case.

Offline detection¶

The above metadata system works well if media players are always online. But unfortunately, the metadata has no timestamp, so it is not possible for the main website to tell if the information is stale. For that reason, there is a purge script that detects offline media players and removes them from the metadata storage. This is documented in Metadata purge script.

Remaining issues¶

These are the issues with the current design that have been noticed during development. There are also Known issues in git-annex itself that should be kept in mind here.

Hard drive sync feedback¶

Right now, it is difficult to tell how or when a HD sync operates. We could send a message through to the main website (same as the IP address problem above) and use it to inform the user of progress. If we use git-annex to propagate that metadata, that could involve extra latency, as it remains to be tested how fast changes propagate from a media player through to the website. Our current preferred solution is to train users to use the webapp to examine the transfers in progress. The webapp could also pop up on the desktop when a HD sync is in progress… Another option is to use desktop notifications (e.g.
the notify-send command), but all those assume a working screen and desktop configuration, which is not always available.

Operators can’t configure media players¶

Right now, configuration changes are accessible only to developers: configuration is being fetched through our custom XML-RPC API. We’d like that to go away, so it will likely be replaced by Puppet. But then this means giving users (or at least “operators”) access to the Puppet manifests, which in turn means root everywhere, so a huge security issue and error potential. An External Node Classifier (ENC) may resolve that issue, in that it would restrict the changes allowed to the operator. The parameters we need to change here are:

- bandwidth limits (Redmine issue #17262)
- scheduling times (Redmine issue #17261)
- preferred content - this can also be done by the operator through the git annex webapp UI

The implementation of this still needs to be decided; see Redmine issue #16705 for followup.

Remote transfer prioritisation¶

Transfer prioritisation cannot be handled by an operator on the central server in the current design. This would need to be managed by an operator on the media player, so we need to teach users to operate the git annex webapp UI. Those would be called “manual downloads”. git-annex has a todo item regarding having queues, but it’s not implemented at all so far. This will not be implemented at first, as on-site operators can prioritise transfers. The git-annex web interface by default listens only on the localhost socket, which makes it necessary to have a screen on the media player for certain operations mentioned above. A workaround is to force the webapp to listen on a specific interface, but it is yet unclear how to make it listen on all interfaces.
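For reference, forcing the webapp onto a specific interface could look like the following (the address is illustrative). Upstream documents security caveats with this option, so it should only be used on trusted networks:

```shell
# Make the git-annex webapp reachable from other machines on the
# local network instead of only on localhost.
git annex webapp --listen=192.168.20.10
```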
It is possible to forward the web interface port through SSH, but then it doesn’t allow us to manage queue priority, because that is done by manually forcing files to be downloaded through the file manager, which won’t show up in the web interface. In other words, the remote view of the git-annex web interface gives us a readonly interface to the media players, only to see what is going on, but not to prioritise downloads or remove files. In fact, the git-annex webapp interface isn’t currently available at all in the “kiosk” mode of the media players, which only provides Firefox and VLC. It could be possible to start the webapp in the kiosk mode as well, but that remains to be implemented. There are a few solutions to this issue:

- the aforementioned git-annex implementation, but that would require hiring Joey and actually writing that software
- VNC access, which would work, but would provide access to only one media player at a time
- Puppet-based “jobs”, but that would take at least an hour to propagate and would require Koumbit’s intervention
- more research on similar alternatives (e.g. MCollective, Fabric, etc.)

Multiple UIs¶

The 3.x design has multiple UIs: the main website, the Puppet dashboard, the git annex webapp UI… This could be overwhelming for operators of media players. Unfortunately, that is the cost of modularity and software reuse at this point and, short of implementing yet another dashboard on top of all the other ones, this will not be fixed in the short term. Eventually, an interface could be built on the main website to show key Puppet parameters and so on.

HTML site cache¶

The media players still don’t provide an offline copy of the Drupal site. This is an inherent problem with the way Drupal works: it is a dynamic website that is hard to mirror offline. There are various modules in Drupal that could be leveraged to distribute files on the media players, however:

boost creates a static HTML cache of a running Drupal.
The cache may be incomplete or even inconsistent, so it may not be the best candidate. Still, it’s a long-lasting project in the Drupal community with stable releases that is worth mentioning, if only to contrast with the others.

static can create a static copy of the website. Updates can be triggered to refresh certain pages. This looks like a great candidate that could eventually be deployed to distribute an offline copy of the website. However, anything that looks like a form (comments, searching, etc.) wouldn’t work. There is only a Drupal 7 version with no stable releases.

html_export is similar, but hasn’t seen a release since 2012 and little change since then. There is little documentation available to compare it with the others.

Any of those options would need to first be implemented in the Drupal site before any effort is made into propagating those files onto the media players. git-annex may be leveraged to distribute the files, but it could be easier to just commit the resulting files into git and distribute them that way. Transitioning the main site to a static site generator engine would help a lot in distributing a static copy of the website, as there would be a clearer separation of duties in the different components of the site (content submission, rendering, distribution). But that is beyond the scope of this document for now. The current design should be able to adapt to various platforms beyond Drupal, provided that files are put in the git-annex repository on the frontend site and that the site properly rewrites URLs depending on the origin of the client. Still, even with a static site generator, some research would need to be done to see how clients would discover the static copy while offline… Another way this could work would be by providing a simple way to browse the content on the media player, without being a direct mirror of the website. This issue is tracked in Redmine issue #7159.
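As a point of comparison with the Drupal modules above, a crude static snapshot can also be produced from outside Drupal with a generic mirroring tool such as wget (the URL is illustrative); like the static module, anything form-based would not work offline:

```shell
# Crawl the public site into a browsable tree of static HTML files,
# rewriting links so that the copy works from the local filesystem.
wget --mirror --convert-links --adjust-extension \
    --page-requisites --no-parent \
    https://www.example.org/
```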
Security issues¶

Introducing git annex adds certain problematic properties to the system. Those issues were mostly addressed in a git validation hook (see also Redmine issue #17829). The validation hook currently forbids changes to trust.log and file removal. There is also a discussion upstream about implementing this natively in git-annex. The hook is installed in /var/lib/git-annex/isuma-files/.git/hooks/pre-receive and is managed through Puppet, in modules/gitannex/files/.

Amazon S3 access¶

To allow media players to upload, they would need to be able to upload directly to Amazon, but we don’t want that, since it gives way too much power to the media players. They could, for example, destroy all the data on the precious S3 buckets. We could implement access policies and special credentials on Amazon, but that means yet another set of credentials to distribute, and complicated configurations on S3. The solution we have chosen instead is to make the media players upload to the central server, which then uploads to Amazon itself, as its repository is a transfer repository.

File removal¶

A malicious or broken media player may start removing files from the “master” branch in the git repository. This would be destructive in that the files would appear to be gone or “unused” from all repositories after those changes are synced out. They could then end up being garbage-collected and lost. Note that this could easily be reverted if files are not garbage-collected everywhere. A git hook that refuses pushes that remove files has been implemented to work around that problem. Filesystem-level permissions could also have been used to enforce this, but this was considered to be more complicated, if not impossible.

Tracking information tampering¶

A malicious media player could start inserting bad information in the git-annex branch, either corrupting the branch’s content or inserting erroneous information in other media players’ state information.
Since this is stored on a per-file basis (as opposed to per-repository), it could be difficult to control that kind of corruption. Once detected, however, the offending media player’s access could simply be removed and its changes reverted by a developer. The implemented git hook forbids changes to critical git-annex files like the trust.log file. This file is where trust information is kept, which makes git-annex trust a remote or not about the location tracking information it provides. A remaining issue here is the number of copies of files in a given remote. A media player should only be allowed to change tracking information for its own files. This has not been implemented yet in the git hook, but is considered to be a benign problem: the worst case is that a media player lies about the presence of a file on Amazon or the site server, which could confuse the queuing system. A simple git annex fsck would resolve the problem.

Removal of the last copy of a file¶

Normally, git-annex will not willingly drop the “last copy” (which may mean any number of copies, depending on the numcopies setting) of a file, unless the --force flag is used. Nevertheless, it could be possible that some garbage-collection routine we set up would drop unused files that had been removed by a malicious server. The above git hooks should protect against such an attack.
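To make the protections discussed in this section concrete, here is a hypothetical sketch of the pre-receive validation logic; the authoritative hook is the one deployed by Puppet from modules/gitannex/files/, and this sketch is only meant to illustrate the two checks described above:

```shell
#!/bin/sh
# Sketch of the pre-receive validation hook: reject pushes that
# modify trust.log on the git-annex branch, or that delete files
# from master. It reads "old new ref" lines, which is what git
# feeds to pre-receive hooks on stdin.
validate() {
    while read -r old new ref; do
        case "$ref" in
        refs/heads/git-annex)
            # trust.log may never be changed by a media player
            if git diff --name-only "$old" "$new" | grep -qx 'trust.log'; then
                echo "rejected: trust.log may not be modified" >&2
                return 1
            fi ;;
        refs/heads/master)
            # no file deletions allowed on the master branch
            if git diff --diff-filter=D --name-only "$old" "$new" | grep -q .; then
                echo "rejected: file removal is forbidden" >&2
                return 1
            fi ;;
        esac
    done
}
# The real hook would simply end by calling: validate
```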

