Program is now published!
You can have a look at the ugly version of the program. The pretty-printed version will be posted soon.
You can also have a look at the list of accepted submissions with their abstracts:
Abstract: Find out about the new features of the ETSI LIS word and character count standard for the localization industry: GMX/V (Global Information Management Metrics eXchange). GMX/V 2.0 builds on the original standard, adding new features and capabilities.
Andrzej Zydroń. Using Excel as an XLIFF editor: YOU CANNOT BE SERIOUS!
Abstract: Microsoft's ubiquitous Excel spreadsheet program appears at first glance to be an unlikely candidate for an XLIFF editor. This presentation and demonstration will prove otherwise! In fact, Excel is in many respects ideally suited for offline XLIFF file editing.
Andrzej Zydroń. OAXAL - Open Architecture for XML Authoring and Localization
Abstract: The OASIS OAXAL Reference SOA based architecture brings together all of the major Localization Industry standards: XLIFF, W3C ITS, SRX, xml:tm, TMX, GMX/V, TBX, Unicode, Unicode TR#29 into one coherent solution. This showcase will present a real world practical implementation of OAXAL showing all of the standards working together in harmony to reduce translation costs and increase automation.
Abstract: xml:tm revolutionized the way in which translation memory is managed and stored within XML documents. xml:tm 2.0 is the proposed new version within ETSI LIS, currently undergoing public review. This presentation will cover the changes to the original xml:tm 1.0 specification.
Yves Savourel. Interoperability and Quality Checking
Abstract: Localization quality assurance is an important aspect of the localization process. When a workflow spans different tools and environments it's crucial to keep some level of interoperability for the quality information.
This presentation shows how various types of documents can go through different steps and applications while using the same mechanism to carry quality information throughout their journey. It also illustrates how the new data categories of ITS (the Internationalization Tag Set) 2.0 are used in real-life projects, mixed with XLIFF, XML and HTML5, allowing better interoperability.
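To make the mixing of formats concrete, here is a minimal sketch of how ITS 2.0 local markup appears inside HTML5 and how a quality-checking tool might harvest it. The HTML fragment, attribute values and collector class are illustrative inventions; the standard HTML `translate` attribute and the `its-*` attribute prefix do follow the ITS 2.0 mapping for HTML5.

```python
from html.parser import HTMLParser

# Minimal HTML5 fragment annotated with ITS 2.0 local markup:
# the standard HTML translate attribute carries the Translate data
# category, while its-* attributes carry other categories.
DOC = """
<p its-loc-note="Brand name, keep as-is.">
  Install <span translate="no">WidgetPro</span> to continue.
  The <span its-term="yes">render farm</span> is busy.
</p>
"""

class ITSCollector(HTMLParser):
    """Collect ITS-relevant annotations while walking the markup."""
    def __init__(self):
        super().__init__()
        self.annotations = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as (name, value) pairs with lowercased names
        for name, value in attrs:
            if name == "translate" or name.startswith("its-"):
                self.annotations.append((tag, name, value))

collector = ITSCollector()
collector.feed(DOC)
for tag, name, value in collector.annotations:
    print(f"<{tag}> {name}={value!r}")
```

A quality checker that reads annotations this way can carry them across tool boundaries regardless of whether the payload is HTML5, XML or XLIFF.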
Kevin O'Donnell and Ryan King. Better software localization through XLIFF 2.0
Abstract: XLIFF 2.0 makes it possible to conduct simple or complex software localization using industry standard file formats. No longer must software localization be chained to legacy file formats and proprietary localization tools. XLIFF 2.0 enables predictable, consistent interchange of crucial software metadata including contextual data, recycling data, reference translations, binary data and validation rules. As localization buyers move towards XLIFF 2.0 for software, we can expect greater choice and innovation in the tools and solutions available across the industry.
During this talk, we'll showcase the XLIFF 2.0 elements that have been designed to enable software localization and describe how XLIFF is set to become a universal localization file format for both content and software localization.
The talk will incorporate examples of how software localization is conducted at Microsoft today, where millions of words are translated each year using proprietary software formats and tools. We’ll talk about how XLIFF 2.0 promises to transform the software localization process and ignite new opportunities in localization interoperability.
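As a flavour of the interchange format under discussion, the following sketch builds and re-reads a minimal XLIFF 2.0 document using only Python's standard library. The element names and the `urn:oasis:names:tc:xliff:document:2.0` namespace come from the XLIFF 2.0 specification; the file, unit and segment content are invented for the example.

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:xliff:document:2.0"
ET.register_namespace("", NS)  # serialize with a default namespace

# Build a minimal XLIFF 2.0 document in memory: one file, one unit,
# one segment with source and target text.
xliff = ET.Element(f"{{{NS}}}xliff",
                   {"version": "2.0", "srcLang": "en", "trgLang": "de"})
file_el = ET.SubElement(xliff, f"{{{NS}}}file", {"id": "f1"})
unit = ET.SubElement(file_el, f"{{{NS}}}unit", {"id": "u1"})
seg = ET.SubElement(unit, f"{{{NS}}}segment")
ET.SubElement(seg, f"{{{NS}}}source").text = "Save changes?"
ET.SubElement(seg, f"{{{NS}}}target").text = "Änderungen speichern?"

doc = ET.tostring(xliff, encoding="unicode")
print(doc)

# Round-trip: re-parse the document and read the translation back out.
root = ET.fromstring(doc)
target = root.find(f".//{{{NS}}}target")
print(target.text)
```

The predictable core structure (file/unit/segment) is what allows software metadata modules to be layered on top without breaking interchange.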
Yves Savourel. Using ITS in XLIFF
Abstract: The Internationalization Tag Set (ITS) 2.0 provides many data categories that can be used during the localization process.
As XLIFF is used by many tools and workflows, it is important to have a common way to represent ITS in XLIFF, especially with regard to overlapping or partially overlapping types of information.
This session shows the work done in this area for XLIFF 1.2, and discusses some of the challenges and opportunities the added metadata may bring to consumers of XLIFF documents.
Abstract: For a localiser or localisation project manager, the shift from static HTML-based to CMS-based dynamic websites usually involves assimilating a new editing environment, acquiring administrative rights for the site, and relinquishing the various benefits of using Computer-Assisted Translation tools (Torres del Rey & Rodríguez Vázquez de Aldana 2011; Schnabel 2013). However, the possibility of integrating CAT tools in the localisation process is now becoming a reality. This is the case, for instance, of the Drupal FOSS web CMS when complemented with XLIFF Tools. In this research paper, we introduce a Java application we have developed (using Schnabel's XLIFF RoundTrip XSL files) for the import/export of multilingual web content for the Joomla! FOSS web CMS (with the FaLang extension), as well as its strategy for the selection of newly added or edited content and the workflow for carrying this content from source to target with CAT tools. We will then point out some of the current limitations and conclude by discussing our future work, which includes:
--Analysing which parts or fields of a Joomla! element (“module”, “menu”, “content”, etc.) should be exported to XLIFF to optimise the localisation process.
--Assessing, by working with different websites, whether integrating XLIFF and CAT tools in the localisation process can compensate for the lack of visualisation obtained by directly using the CMS’s editing environment.
--Evaluating, after analysing the segmentation produced by the tool, whether it would be advisable to add a new module for the generation of an XML file including ITS rules.
--Extending the application to support the Josetta multilingual content manager for Joomla!
Abstract: Linport is a combination of three distinct projects from three very different organizations, all working together to promote interoperability among translation tools and other multilingual applications. The Linport project got its start when, as one of the final initiatives of the dissolving LISA standards body, a small team of its members was tasked with creating a standard by which project data, such as multilingual translation data, could be digitally represented and moved between translation environment tools, much like how internationally standardized shipping containers facilitate cargo transport. An important factor in the creation of this new standard would be the inclusion of Structured Translation Specifications (STS): 21 distinct parameters used to guide all participants in a translation project toward quality translations by supplying them with specific information about the source and target texts and various project requirements.
The container project team soon found out that this kind of standard was globally relevant when they discovered that the European Commission had already been working on a similar project called the Multilingual Electronic Dossier (MED), an electronic representation of the physical dossiers used in its translation departments. The MED and container projects were soon combined and later integrated with the work being done by the Interoperability Now! (IN!) group. IN! had been working on a bilingual packaging format called the Translation Interoperability Protocol Package (TIPP) for translation tasks, incorporating XLIFF and other technologies.
With this combination, Linport developed a two-layer system. An entire translation project (i.e., a multilingual compilation of documents) can be represented in a Linport project portfolio. This portfolio can then be broken down into individual bilingual translation tasks, each in the form of a TIPP. Once the tasks have been completed, the TIPP packages can be integrated back into the project portfolio. Both the Linport portfolio and the smaller TIPP package carry Structured Translation Specifications to guide translators in their crucial work. XLIFF can be used at both levels, since source and target texts can be represented as individual files or as one XLIFF file. Linport is currently in the development stage, with the goal of becoming an industry standard very soon. Monthly conference calls are held by a growing community of interested individuals and organizations.
One of the additional goals of Linport is to further the development and integration of standards into Linport-based workflows. The same specifications used in the STS serve as a basis for the quality-assessment tools being developed by the EU-funded QTLaunchPad project. By working to align these two projects, quality assessment will be more integrated into overall project specifications and Linport will gain an easy way to ensure that Linport projects are assessed appropriately. A further point of contact is with the ITS 2.0 specification, which provides a mechanism to refer to the quality expectations outlined in an STS and to integrate them into a standard, QTLaunchPad-compatible mechanism that enables quality to be addressed in any tool that implements ITS 2.0’s quality markup.
This presentation will address the current state of Linport and will demonstrate tools for working with Linport portfolios and packages. It will also cover issues found during development, plans for further development, and plans for further integration with ITS 2.0 and QTLaunchPad.
More information can be found at www.linport.org.
Jörg Schütz. Corporate Language Audits Enabling Dynamic Localization Concerns
Abstract: For managing corporate content, relating it to multi-faceted customer and consumer data, and modeling its life-cycle business processes, including the preparation for localization and translation, a (source) language audit can help to identify which issues need to be fixed and adapted, both linguistically and culturally; to establish a baseline and metrics for measuring success and quality; and to determine an effective and efficient pathway for achieving set goals.
During the proposed presentation, we will discuss how to evaluate a content set, along with the dynamic processes for reuse, metadata, content componentization, GILT considerations, memory management, multilingual terminology and other issues related to corporate language concerns. We will also show how to employ the recently developed ITS 2.0 data categories and related standards such as W3C Provenance and ISO/TS 11669 (translation projects) to smoothly describe, relate and stream relevant information. At the end of the presentation, attendees will be able to do the following:
* Identifying issues related to GILT in their templates and content (source content audit and quality)
* Identifying opportunities for reuse across different delivery types (multidimensional evaluation metric and quality)
* Identifying particular information types, and the appropriate metadata categories found in the content set (metadata indicators, feedback cycles and curation)
* Determining the granularity needed to componentize the content (metadata indicators, feedback cycles and curation)
* Recognizing primary issues and bottlenecks in authoring, QA, and change control of workflows and processes (process modeling and management)
The intended target audience includes but is not limited to Senior Corporate and Technical Communicators, Localization Project Managers, and managers who need to evaluate their content and processes. Participants should have some knowledge of technical communication best practices, and knowledge of their company's processes and teaming abilities.
The takeaways of this presentation will include:
* How to start an audit of your content and how to effectively employ standards
* Tools to support and conduct the audit
* Understanding of metrics and ideas for what works for different enterprise types
Jörg Schütz. Notes from the XLIFF Underground: "Elements of Style" for Language Interoperability
Abstract: At the first XLIFF Symposium in Limerick, many presenters and attendees expressed their dissatisfaction with the usability and tool vendor support of XLIFF 1.2. Out of this frustration, an industry-led group was established with the goal of contributing substantially to better interoperability in the field. Within the last 2.5 years, the Interoperability-Now! group has designed and implemented XLIFF:doc, an extended and semantically restricted version of XLIFF 1.2 for the document domain that remains 100% compatible with it. Recently, XLIFF:doc 1.0 was released and officially published. The work has benefited from a wide range of comments and contributions from the wider community and communities of related fields.
In the proposed presentation, we will show the main design and implementation decisions of XLIFF:doc, which are based on the existing XLIFF 1.2 extension mechanism and general interoperability considerations. Since XLIFF:doc has been developed in tandem with the data exchange format TIPP (Translation Interoperability Protocol Package), this container format and its relationship to XLIFF:doc will also be briefly discussed. To situate the work of Interoperability-Now! in the broad field of localization and translation, the presentation will also outline how XLIFF:doc might act as a blueprint for some future XLIFF 2.0 features, such as metadata, memory data and terminological linking, and what should be avoided in XLIFF 2.0 because of foreseeable clashes with interoperability concerns. As such, the presentation actively contributes to the evolution of XLIFF and the transition from XLIFF 1.2 to 2.0.
The presentation will close with an outlook on future work, including aspects of business process modeling and management, as well as the ongoing collaboration with the LINPORT (Language Interoperability Portfolio) project, whose interoperability scope also covers content authoring and publishing in addition to localization and translation.
XLIFF:doc 1.0 and TIPP 1.5 are available at http://interoperability-now.org
LINPORT: http://linport.org (and an Interest Group at GALA)
Several incarnations of XLIFF Core 2.0 from http://www.oasis-open.org
Forthcoming event: MT Summit XIV Tutorial in Nice, France, September 2013 (http://mtsummit2013.info/tutorials.asp)
Fredrik Estreen and Ryan King. Everything you ever wanted to know about XLIFF 2.0
Abstract: An overview of the new XLIFF 2.0 standard from two sides of the industry. The presentation will trace the evolution of the standard and show how the changes help different players in the translation supply chain. The intention is not to cover every aspect of the new standard, but rather to cover the high-level aspects while highlighting in more detail the areas the presenters feel are important.
The presentation will include the following high level topics:
• How is XLIFF 2.0 different to XLIFF 1.2?
• Design and structure (core, modules, extensions)
• Processing rules (objectives, limitations)
• Feature overview (very quick review of feature list)
• Examples (selected examples of what is improved in XLIFF 2.0)
Ankit Srivastava and Declan Groves. ITS 2.0-enabled Statistical Machine Translation and Training Web Services
Abstract: In this paper, we describe two Statistical Machine Translation (SMT) web services in the ITS 2.0 domain: (a) online translation of documents tagged with ITS 2.0 metadata, and (b) online training of SMT modules with parallel text tagged with ITS 2.0 metadata. The translation web service will demonstrate the impact of five ITS 2.0 data categories (translate, domain, language information, terminology, and MT Confidence) on MT performance and efficacy by contrasting MT performance with and without ITS 2.0 on Spanish-English texts. For example, the "translate" category is used to mark segments in a document which need to be printed as-is and not passed to the SMT decoder. The training web service will demonstrate a prototype where parallel text tagged with ITS 2.0 data categories (translate, provenance, domain) can be used to retrain a pre-existing SMT system. It will investigate whether incorporating "translate" and "domain" tags in the training data helps generate better-equipped translation and language models. The paper will give a technical description of the modifications required in an SMT system for translating as well as training with ITS 2.0 data, along with examples of usage scenarios.
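The handling of the "translate" category described above can be sketched as a pre-decoding filter: segments marked `translate="no"` bypass the decoder entirely. The decoder stand-in and the segment data below are invented for illustration and do not reflect the authors' actual system.

```python
# Hypothetical decoder stand-in: a real service would call an SMT engine.
def mock_decode(segment):
    glossary = {"hello world": "hola mundo", "good morning": "buenos días"}
    return glossary.get(segment, segment)

def translate_document(segments):
    """Pass segments marked translate='no' through unchanged;
    send everything else to the decoder."""
    output = []
    for text, translate in segments:
        if translate == "no":
            output.append(text)          # printed as-is, never decoded
        else:
            output.append(mock_decode(text))
    return output

segments = [("hello world", "yes"), ("ACME-3000", "no"),
            ("good morning", "yes")]
print(translate_document(segments))
```

Keeping protected tokens (product names, code) out of the decoder both avoids mistranslation and keeps the training data cleaner.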
Abstract: The XLIFF standard is an evolving standard: not only does it define the XML Localization Interchange File Format itself, it is also used as the basis for other flavours of XLIFF that are extended or simplified to one extent or another.
The presentation will go through the creation of an XLIFF 2.0 filetype using the Studio SDK to demonstrate the advantages of a flexible platform for users of our technology. Building the filetype with the SDK will take around 30 minutes, so the presentation as a whole will need around 45 minutes. It will show how a developer could quickly create a customised XLIFF filetype on their own and make it available to Studio users with no input from SDL at all.
The filetype created on the day will also be made available for users to try and provide feedback on through the SDL OpenExchange.
Phil Ritchie. Using State-of-the-Art CNGL Technology, a New W3C Standard and Industrial Experience to Improve the Post-editing Process
Abstract: VistaTEC's Post Editor's Workbench is a desktop application which utilises new CNGL text analysis and classification techniques from start-up Digital Linguistics, the W3C Internationalization Tag Set 2.0 standard, and the business experience and expertise of VistaTEC to provide a revolutionary and efficient working environment for post-editors and language reviewers.
Joachim Schurig. A rose is not a horse - but how close are they? A primer on match percentages
Abstract: Similarity between sentences is the basis of all effort calculation in the language industry. However, contrary to popular belief, there is no agreed method for measuring this similarity. Effort calculation is in turn the basis of all price calculation. So there is apparently no common understanding about the way we calculate our prices in this industry. This merits a closer look! How do the different translation tools measure similarity? Do we know? Is it a secret? What would be fair algorithms to apply? Could they be universally defined, for any language in the world?
ETSI ISG LIS has taken up the challenge of developing a standard on similarity calculation between two segments of text. In this presentation, you will learn about the surprising difficulty of coming to an agreement on what constitutes the similarity of texts. And you will learn how simplistically your various language tools deal with the issue, each of them of course in completely different ways.
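One family of similarity measures the presentation alludes to is edit-distance based. The sketch below normalises a Levenshtein distance into a match percentage; this is only one possible definition among many, precisely the kind of choice a standard would have to pin down.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance over characters."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def match_percentage(a, b):
    """One common (but by no means agreed-upon) definition:
    1 minus the edit distance normalised by the longer string."""
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

print(round(match_percentage("A rose is a rose", "A rose is a horse"), 1))
```

Even this simple definition leaves open questions a standard must answer: character-level or word-level units, case and whitespace handling, and how inline markup counts toward the distance.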
David Filip and Asanka Wasala. Addressing Interoperability Issues of XLIFF: Towards the Definition and Classification of Agents and Processes
Abstract: This paper and its presentation are intended as a theoretical and practical motivation/explanation for a comment to the XLIFF 2.0 CSPRD that proposes to introduce process and agent classification in the standard's specification, so that conformance claims of varied tool types can be based on normative definitions provided within the spec.
With increased adoption, XLIFF is maturing. Solutions to significant issues relating to the current version of the XLIFF standard and its adoption are being proposed in the context of the development of XLIFF version 2. Important matters for a successful adoption of the standard, such as standard-compliance, conformance and interoperability of technologies claiming to support the standard have not yet been adequately addressed.
In this research we aim to fill this gap by mainly proposing a process and agent related terminology and an approach towards agent classification. We also highlight the importance of defining the term “support” in the context of XLIFF tool compliance and discuss the importance of a semantic model to enable fully automated machine processing of XLIFF data.
Few research activities have been reported in this area. A common observation noted with regard to XLIFF is that many features described in the XLIFF specification are either not supported or only partially supported by localisation tools. Bly (2010) analysed XLIFF support in commonly used tools and presented a matrix containing tools and their level of support for individual XLIFF elements. The level of support is divided into three categories: no support, partial support and full support. Imhof (2010) proposes a classification similar to Bly's (2010), identifying three levels of XLIFF support in tools: tools belonging to the first level treat XLIFF files as just XML files; tools belonging to the second level support XLIFF partially (i.e. these tools are capable of interpreting source and target content, but only a limited set of features of the XLIFF standard are implemented); and tools with full XLIFF support are grouped into the third level. Like Bly (2010), Anastasiou and Morado-Vázquez (2010) also classified some selected tools into two categories: XLIFF converters (i.e. generators) and XLIFF editors. In terms of enabling semantic interoperability, Wasala (2013) analysed over 3,000 XLIFF files and identified various features leading to interoperability issues. Frimannsson and Lieske (2010) as well as Anastasiou (2011) propose the use of new approaches such as the Resource Description Framework (RDF) to represent additional metadata. Moreover, some tools have been developed for validating standard-compliance (e.g. for XLIFF, TBX and TMX content); however, no tools have been reported to assess the actual level of conformance with these standards for a given file, except in the EU co-funded IGNITE project (LocFocus 2006; Schäler 2007), which is currently not operational.
This research augments existing work and focuses on methodologies to improve standard-compliance by defining terminology related to agents and processes. The process- and agent-related terminology will help to improve the prescriptiveness and coverage of the XLIFF specification, thereby reducing ambiguities in the specification. An agent classification will contribute to the optimum selection of tools to minimise data loss. We also highlight the importance of defining and developing an underlying semantic model, such as an ontology or a metadata repository, to address semantic interoperability issues associated with some XLIFF features.
Abstract: The demo shows the use case developed in MultilingualWeb-LT for full interoperability through automatic interchange between a Content Management System (CMS) and a Translation Management System (TMS) using ITS 2.0 in a real-life scenario (http://www.w3.org/International/multilingualweb/lt/wiki/Use_cases_-_high_level_summary#Interchange_between_Content_Management_System_and_Translation_Management_System).
In the roundtrip, the content originates in a CMS and gets exposed/serialized as XHTML + ITS 2.0. This is sent to a TMS and processed in a workflow. Upon completion, the TMS exposes/serializes the localized/translated XHTML + ITS 2.0 back to the CMS.
For the CMS, Cocomore has provided Drupal with an open source module for ITS 2.0 integration and interoperability via web services (http://drupal.org/project/its). For the TMS, Linguaserve has adapted its GBC Server web services and the translation workflow platform PLINT. Cocomore's end client is VDMA (the German machinery and plant manufacturers' association).
The demo will be structured in three parts:
1) Short description of the system (PPT).
2) A real-life demo, showing the CMS from Frankfurt and the TMS from Madrid. The real-life demo will show:
a. Creation and ITS 2.0 annotation of content
b. Content round tripped between the CMS and the TMS.
c. TMS workflow with ITS 2.0
d. CMS-side management, review and publication
Infrastructure for voice and screen-sharing connections between London, Frankfurt and Madrid will be required during the session.
This demo has not been shown before in other conferences outside the W3C MultilingualWeb-LT Working Group.
Abstract: This presentation describes the design of the mapping of ITS 2.0 into RDF. It will show how this mapping can be used to express ITS annotations of textual content in both the NLP Interchange Format RDF vocabulary and the W3C Provenance Ontology. The NLP Interchange Format enables the exchange of fine-grained parsing output of NLP tools. The Provenance Ontology is an RDF vocabulary that allows the results of activities operating on entities to be recorded. An interoperability showcase will then show how linked data can be used to track and analyse the operation of multiple stages of an XLIFF-based localisation workflow by using these mappings. This use of the ITS ontology can both improve the visibility and optimisation of localisation workflows across multiparty value chains and enable the on-demand, quality-aware assembly of parallel corpora for retraining MT engines.
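The mapping idea can be sketched in miniature as plain N-Triples generation. The `prov:` terms below are from the W3C Provenance vocabulary; the `itsrdf` namespace and all resource URIs are placeholders invented for this example, not the actual ITS 2.0 RDF ontology identifiers.

```python
# Sketch of emitting N-Triples for one ITS annotation plus provenance.
PROV = "http://www.w3.org/ns/prov#"          # W3C Provenance vocabulary
ITSRDF = "http://example.org/itsrdf#"        # placeholder namespace

def triple(s, p, o):
    """Serialize one N-Triples line; URIs get <>, literals get quotes."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

segment = "http://example.org/doc1#seg3"           # invented text node URI
activity = "http://example.org/workflow/mt-run-42" # invented MT-run URI

triples = [
    triple(segment, ITSRDF + "mtConfidence", "0.82"),
    triple(segment, PROV + "wasGeneratedBy", activity),
    triple(activity, PROV + "wasAssociatedWith",
           "http://example.org/engines/smt1"),
]
print("\n".join(triples))
```

Once annotations are triples, standard SPARQL queries can trace a segment back through every workflow stage that touched it.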
Abstract: ITS 2.0 introduces several annotations that can provide useful information to translators via the user interfaces of computer-aided translation (CAT) tools. Such annotations provide translators with metadata about what text should or should not be translated, about text defined as terminology, about restrictions on the size and character set used in translation, or about localisation notes from content authors. ITS also offers metadata useful to CAT tool users in interpreting the output of language technologies such as text analytics engines and statistical machine translation engines. Finally, ITS annotations can record the outcome of actions by translators, such as translation and translation revision provenance and the identification and rating of localisation quality issues. This paper examines how CAT tool users might view, edit and create ITS annotations. We use the open source CAT tool OmegaT to explore the implementation of these requirements and the recording of annotation actions in CAT tool instrumentation logs.
Serge Gladkoff. Practical visualization of ITS 2.0 categories for the real-world localization process
Abstract: In 2013, the W3C will finalize the ITS (Internationalization Tag Set) 2.0 specification as a complementary part of the XML, HTML5 and XLIFF specifications. ITS delivers localization instructions and other context information via metadata tags (16 data types) embedded in the content. Currently, neither browsers nor CAT tools display this metadata to end users. As a contractor of the respective working group, Logrus is developing visualization tools for ITS 2.0 metadata embedded in HTML5, XML and XLIFF files. These tools will enable translators and reviewers to refer to localization context visually presented in a web browser window while working on the content in their content editor or CAT tool. This presentation provides the draft visual designs developed for the most important ITS 2.0 data categories and explains the typical use case workflow, as well as the business benefits.
Abstract: ITS 2.0 provides a means for annotating content with common metadata that addresses different aspects of content localisation, from content creation through extraction, segmentation, terminology management, automated translation, post-editing and quality assurance to publication of the translated content. This paper explores the lessons learnt in developing ITS 2.0 as a suite of interoperability annotations that can be applied to other end-to-end content processing interoperability solutions spanning different content formats, content processing tools and engines, and content processing service providers. The paper therefore discusses how the ITS approach to annotating real-world content processing chains could benefit other types of content processing. In particular, we propose annotations that follow the ITS annotation patterns but address personalisation content processing. Such annotations may address: the control of what content may be personalised; how content may be sliced into recomposable units for personalised delivery; how words and phrases may be annotated with domain or semantic information useful for personalisation engines; how content can be annotated with personalisation instructions; and how the provenance of personalisation processes and its impact on users can be captured. From this proposal, the potential for integrated localisation and personalisation processing is considered.
Klemens Waldhör. Recommender Systems as part of Localization Project Management with XLIFF
Abstract: The use of recommender systems is a key feature of today's eBusiness world. Notable examples are Amazon and Facebook. The translation industry does not use recommender systems as heavily as other industries. Their application brings great benefit to customers, vendors and translators alike by rationalizing the decision process in different areas.
The beoRecommender system, implemented for beo, a translation vendor in Stuttgart, serves two main goals: a) improving the selection process of translators based on various criteria such as cost, delivery date, reliability, etc., and b) helping translators choose the best translation from a set of TM proposals. This talk will focus on b), showing how recommendation can be used to select project-specific criteria and improve the ordering of matches. The tool receives XLIFF files as input and sorts TM matches (either from a TM database or from the XLIFF file itself) using various recommendation algorithms. Based on these criteria, the beoRecommender tailors the files for specific project characteristics.
The talk will refer to several examples from real projects and discuss the advantages all involved parties will gain from using recommender systems.
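The ranking of TM proposals described above can be sketched as a weighted scoring function over project-specific criteria. The field names, example matches and weights below are invented for illustration and are not taken from the beoRecommender system.

```python
# Invented TM proposals for one source segment, each with a fuzzy-match
# score, a domain-fit score in [0, 1], and the age of the TM entry.
tm_matches = [
    {"text": "Press the red button.", "fuzzy": 0.85, "domain_fit": 0.9, "age_days": 30},
    {"text": "Push the red button.",  "fuzzy": 0.92, "domain_fit": 0.4, "age_days": 900},
    {"text": "Press the red knob.",   "fuzzy": 0.88, "domain_fit": 0.8, "age_days": 120},
]

def score(match, weights):
    """Weighted linear combination of criteria; newer TM entries are
    preferred via a simple recency term."""
    recency = 1.0 / (1.0 + match["age_days"] / 365.0)
    return (weights["fuzzy"] * match["fuzzy"]
            + weights["domain"] * match["domain_fit"]
            + weights["recency"] * recency)

# A project profile that values in-domain matches over raw fuzzy score.
weights = {"fuzzy": 0.5, "domain": 0.4, "recency": 0.1}
ranked = sorted(tm_matches, key=lambda m: score(m, weights), reverse=True)
for m in ranked:
    print(round(score(m, weights), 3), m["text"])
```

Changing the weight profile per project is what lets the same pool of TM matches be re-ranked to suit different project characteristics.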
Bryan Schnabel. XLIFF Roundtrip Support In Content Management Systems
Abstract: When life was simpler, most companies knew where their content lived. It was usually in big files on the file system (in the case of technical documentation), or HTML files on a static web site (in the case of a company's internet presence).
Back then there were plenty of difficulties with translation, but finding and exchanging files was not one of the more challenging aspects.
Today most companies cannot continue to rely on file systems and static web sites. More and more they are turning to Content Management Systems (CMS).
A CMS is great for managing and sharing data. But it sometimes makes translation even more challenging. Data now often lives in tables and objects in databases.
A great solution is to integrate the ability to roundtrip in and out of XLIFF. Some CMS makers have embraced this idea. And some are not quite there.
In this presentation I will give a detailed demonstration of the workflow to perform XLIFF roundtrips in Component and Web CMS. I will then take a quick look at the state of support for XLIFF across several CMS tools.
David Filip, David Lewis, Leroy Finn, Sean Mooney, David O'Carrol and Philip O'Duffy. XLIFF based roundtrip of ITS 2.0 metadata demonstrated on an open service oriented platform
Abstract: This is an update of the showcase given in Seattle at FEISGILTT 2012:
CMSL10n <-> SOLAS Integration as an ITS 2.0 <-> XLIFF test bed
Trinity College Dublin (TCD) and University of Limerick (UL) have been working on ITS 2.0 and XLIFF 2.0. One of their main interests in these two efforts has been to make sure that the two major i18n and l10n standards, hosted by the W3C and OASIS respectively, match and facilitate useful roundtripping of critical content metadata throughout the whole content life cycle, so that the metadata informs the localization process and, at the same time, useful localization metadata has a standardized vehicle for being communicated back to the source format.
A major breakthrough in ITS 2.0 and consequently its XLIFF mapping is that it works with categories, which can introduce metadata during the localization cycle.
For tweaking this interaction, the TCD-UL distributed test bed CMS L10N <-> SOLAS has been instrumental.
This year we will be showing, for example:
- How terminology and text analytics metadata are consumed by XLIFF 1.2 and XLIFF 2.0 consumers, both human and MT
- How MT introduces MT Confidence metadata during the L10n cycle, and how this information gets re-imported into the source HTML or XML
- How an arbitrary source file can enter the SOLAS workflow via an Okapi-based extractor/merger
Both Systems are ITS 2.0 and XLIFF 1.2 reference implementations.
Both aim to become XLIFF 2.0 reference implementations.
Des Oates. [Plenary Keynote] Standards Based Interoperability in Corporate Service Oriented Architectures
Abstract: As a publisher of large volumes of global content managed across many different contexts, Adobe, like many large global organisations, faces an increasing challenge: how to create this content and make it available to its customers and partners in a culturally appropriate form and in a timely but cost-effective manner.
We will present an overview of the solution we devised to meet this challenge: a suite of decoupled services working in concert over many configurable workflows to meet the demands of a burgeoning set of business cases.
When discussing our services platform, we will highlight the importance of initiatives like ITS 2.0 and other standards in this space, and the opportunity they offer to lower the 'impedance' between localisation service technologies.