Keyword Articles

SEO Article Writing


By Erik

Web accessibility

Web accessibility is the inclusive practice of ensuring there are no barriers that prevent interaction with, or access to, websites on the World Wide Web by people with physical disabilities, situational disabilities, and socio-economic restrictions on bandwidth and speed. When sites are correctly designed, developed and edited, generally all users have equal access to information and functionality.

For example, when a site is coded with semantically meaningful HTML, with textual equivalents provided for images and with links named meaningfully, this helps blind users using text-to-speech software and/or text-to-Braille hardware. When text and images are large and/or enlargeable, it is easier for users with poor sight to read and understand the content. When links are underlined (or otherwise differentiated) as well as colored, this ensures that color blind users will be able to notice them. When clickable links and areas are large, this helps users who cannot control a mouse with precision. When pages are not coded in a way that hinders navigation by means of the keyboard alone, or a single switch access device alone, this helps users who cannot use a mouse or even a standard keyboard. When videos are closed captioned or a sign language version is available, deaf and hard-of-hearing users can understand the video. When flashing effects are avoided or made optional, users prone to seizures caused by these effects are not put at risk. And when content is written in plain language and illustrated with instructional diagrams and animations, users with dyslexia and learning difficulties are better able to understand the content. When sites are correctly built and maintained, all of these users can be accommodated without decreasing the usability of the site for non-disabled users.
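
As a small illustration of the first two of these practices (textual equivalents for images and meaningfully named links), the following TypeScript/DOM sketch shows how such markup can be produced; the file paths, text, and function name are invented for the example and are not taken from any particular site.

    // Minimal sketch, assuming a browser environment with the DOM API available.
    function buildAccessibleFigure(): HTMLElement {
      const figure = document.createElement("figure");

      // Textual equivalent for the image, announced by screen readers.
      const img = document.createElement("img");
      img.src = "/images/quarterly-sales.png"; // hypothetical path
      img.alt = "Bar chart: sales rose 12% from Q1 to Q2";
      figure.appendChild(img);

      // Link named meaningfully, rather than "click here".
      const link = document.createElement("a");
      link.href = "/reports/q2-sales.html"; // hypothetical path
      link.textContent = "Read the full Q2 sales report";
      figure.appendChild(link);

      return figure;
    }

    document.body.appendChild(buildAccessibleFigure());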

The needs that web accessibility aims to address include:

Accessibility is not confined to the list above; rather, it extends to anyone experiencing a permanent, temporary or situational disability. A situational disability is a barrier that arises from a person's current circumstances. For example, a person may be situationally one-handed while carrying a baby. Web accessibility should be mindful of users experiencing a wide variety of barriers. Unfortunately, according to a 2018 WebAIM global survey of web accessibility practitioners, close to 93% of survey respondents received no formal schooling on web accessibility.

Individuals living with a disability use assistive technologies such as the following to enable and assist web browsing:

In 1999 the Web Accessibility Initiative, a project by the World Wide Web Consortium (W3C), published the Web Content Accessibility Guidelines (WCAG) 1.0.

On 11 December 2008, the WAI released WCAG 2.0 as a Recommendation. WCAG 2.0 aims to be up to date and more technology neutral. Though web designers can choose either standard to follow, WCAG 2.0 has been widely accepted as the definitive guideline on how to create accessible websites. Governments are steadily adopting WCAG 2.0 as the accessibility standard for their own websites. In 2012, the Web Content Accessibility Guidelines were also published as an ISO/IEC standard: "ISO/IEC 40500:2012: Information technology – W3C Web Content Accessibility Guidelines (WCAG) 2.0".

There has been some criticism of the W3C process, claiming that it does not sufficiently put the user at the heart of the process. A formal objection, headed by Lisa Seeman and signed by 40 organizations and individuals, challenged WCAG's original claim that WCAG 2.0 would address requirements for people with learning disabilities and cognitive limitations. In articles such as "WCAG 2.0: The new W3C guidelines evaluated", "To Hell with WCAG 2.0" and "Testability Costs Too Much", the WAI has been criticised for allowing WCAG 1.0 to get increasingly out of step with today's technologies and techniques for creating and consuming web content, for the slow pace of development of WCAG 2.0, for making the new guidelines difficult to navigate and understand, and for other argued failings.

The accessibility of websites relies on the cooperation of several components:

Web developers usually use authoring tools and evaluation tools to create web content. People (“users”) use web browsers, media players, assistive technologies or other “user agents” to get and interact with the content.

Because of the growth in internet usage and its growing importance in everyday life, countries around the world are addressing digital access issues through legislation. One approach is to protect access to websites for people with disabilities by using existing human or civil rights legislation. Some countries, like the U.S., protect access for people with disabilities through the technology procurement process. It is common for nations to support and adopt the Web Content Accessibility Guidelines (WCAG) 2.0 by referring to the guidelines in their legislation. Compliance with web accessibility guidelines is a legal requirement primarily in North America, Europe, parts of South America and parts of Asia.

In 2000, an Australian blind man won a $20,000 court case against the Sydney Organizing Committee of the Olympic Games (SOCOG). This was the first successful case under the Disability Discrimination Act 1992, brought because SOCOG had failed to make its official Sydney Olympic Games website adequately accessible to blind users. The Human Rights and Equal Opportunity Commission (HREOC) also published World Wide Web Access: Disability Discrimination Act Advisory Notes. All governments in Australia also have policies and guidelines that require accessible public websites.

In Brazil, the federal government published a paper with guidelines for accessibility on 18 January 2005 for public review. On 14 December of the same year, the second version was published, incorporating suggestions made on the first version. On 7 May 2007, the accessibility guidelines of the paper became compulsory for all federal websites. The current version of the paper, which follows the WCAG 2.0 guidelines, is named e-MAG, Modelo de Acessibilidade de Governo Eletrônico (Electronic Government Accessibility Model), and is maintained by the Brazilian Ministry of Planning, Budget, and Management.

The paper can be viewed and downloaded at its official website.

In 2011, the Government of Canada began phasing in the implementation of a new set of web standards that are aimed at ensuring government websites are accessible, usable, interoperable and optimized for mobile devices. These standards replace Common Look and Feel 2.0 (CLF 2.0) Standards for the Internet.

The first of these four standards, the Standard on Web Accessibility, came into full effect on 31 July 2013. The Standard on Web Accessibility follows the Web Content Accessibility Guidelines (WCAG) 2.0 AA and contains a list of exclusions that is updated annually. It is accompanied by an explicit Assessment Methodology that helps government departments comply. The government also developed the Web Experience Toolkit (WET), a set of reusable web components that helps departments build websites that are accessible, usable and interoperable and therefore compliant with the government's standards. The WET toolkit is open source and available for anyone to use.

The three related web standards are: the Standard on Optimizing Websites and Applications for Mobile Devices, the Standard on Web Usability and the Standard on Web Interoperability.

In 2019 the Government of Canada passed the Accessible Canada Act. This builds on provincial legislation such as the Accessibility for Ontarians with Disabilities Act, The Accessibility for Manitobans Act and the Nova Scotia Accessibility Act.

In February 2014 a draft law was endorsed by the European Parliament stating that all websites managed by public sector bodies have to be made accessible to everyone.

On 26 October 2016, the European Parliament approved the Web Accessibility Directive, which requires that the websites and mobile apps of public sector bodies be accessible. The relevant accessibility requirements are described in the European standard EN 301 549 V1.1.2 (published by ETSI). EU member states were expected to bring into force, by 23 September 2018, laws and regulations enforcing the relevant accessibility requirements.

Some categories of websites and apps are excepted from the directive, for example “websites and mobile applications of public service broadcasters and their subsidiaries”.

The European Commission’s “Rolling Plan for ICT Standardisation 2017” notes that ETSI standard EN 301 549 V1.1.2 will need to be updated to add accessibility requirements for mobile applications and evaluation methodologies to test compliance with the standard.

In 2019 the European Union introduced the European Accessibility Act which is now seen as one of the leading pieces of legislation for digital accessibility.

In Ireland, the Disability Act 2005 requires that where a public body communicates in electronic form with one or more persons, the contents of the communication must be, as far as practicable, “accessible to persons with a visual impairment to whom adaptive technology is available” (Section 28(2)). The National Disability Authority has produced a Code of Practice giving guidance to public bodies on how to meet the obligations of the Act. This is an approved code of practice and its provisions have the force of legally binding statutory obligations. It states that a public body can achieve compliance with Section 28(2) by “reviewing existing practices for electronic communications in terms of accessibility against relevant guidelines and standards”, giving the example of “Double A conformance with the Web Accessibility Initiative’s (WAI) Web Content Accessibility Guidelines (WCAG)”.

The Israeli Ministry of Justice recently published regulations requiring Internet websites to comply with Israeli standard 5568, which is based on the W3C Web Content Accessibility Guidelines 2.0. The main differences between the Israeli standard and the W3C standard concern the requirements to provide captions and texts for audio and video media. The Israeli standards are somewhat more lenient, reflecting the current technical difficulties in providing such captions and texts in Hebrew.

In Italy, web accessibility is ruled by the so-called “Legge Stanca” (Stanca Act), formally Act n.4 of 9 January 2004, officially published on the Gazzetta Ufficiale on 17 January 2004. The original Stanca Act was based on the WCAG 1.0. On 20 March 2013 the standards required by the Stanca Act were updated to the WCAG 2.0.

Web Content Accessibility Guidelines in Japan were established in 2004 as JIS (Japanese Industrial Standards) X 8341-3. JIS X 8341-3 was revised in 2010 to adopt WCAG 2.0. The new version, published by the Web Accessibility Infrastructure Commission (WAIC), has the same four principles, 12 guidelines, and 61 success criteria as WCAG 2.0 has.

In Malta, Web Content Accessibility assessments have been carried out by the Foundation for Information Technology Accessibility (FITA) since 2003. Until 2018 this was done in conformance with the requirements of the Equal Opportunities Act (2000) CAP 43 and applied the WCAG guidelines. With the advent of the EU Web Accessibility Directive, the Malta Communications Authority was charged with ensuring the accessibility of online resources owned by Maltese public entities. FITA continues to provide ICT accessibility assessments to public and commercial entities, applying standard EN 301 549 and WCAG 2.1 as applicable. Therefore, both the Equal Opportunities Act anti-discrimination legislation and the transposed EU Web Accessibility Directive apply to the Maltese scenario.

In Norway, web accessibility is a legal obligation under the Act of 20 June 2008 No. 42 relating to a prohibition against discrimination on the basis of disability, also known as the Anti-discrimination Accessibility Act. The Act went into force in 2009, and the Ministry of Government Administration, Reform and Church Affairs [Fornyings-, administrasjons- og kirkedepartementet] published the Regulations for universal design of information and communication technology (ICT) solutions [Forskrift om universell utforming av informasjons- og kommunikasjonsteknologiske (IKT)-løsninger] in 2013. The regulations require compliance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) / NS/ISO/IEC 40500:2012, level A and AA, with some exceptions. The Norwegian Agency for Public Management and eGovernment (Difi) is responsible for overseeing that ICT solutions aimed at the general public comply with the legislative and regulatory requirements.

As part of the Web Accessibility Initiatives in the Philippines, the government, through the National Council for the Welfare of Disabled Persons (NCWDP) board, approved the recommendation to form an ad hoc or core group of webmasters to help implement the Biwako Millennium Framework set by the UNESCAP.

The Philippines also hosted the Interregional Seminar and Regional Demonstration Workshop on Accessible Information and Communications Technologies (ICT) to Persons with Disabilities, at which eleven countries from Asia-Pacific were represented. The Manila Accessible Information and Communications Technologies Design Recommendations were drafted and adopted in 2003.

In Spain, UNE 139803:2012 is the standard that regulates web accessibility. This standard is based on the Web Content Accessibility Guidelines 2.0.

In Sweden, Verva, the Swedish Administrative Development Agency, is responsible for a set of guidelines for Swedish public sector websites. Through the guidelines, web accessibility is presented as an integral part of the overall development process and not as a separate issue. The Swedish guidelines contain criteria that cover the entire lifecycle of a website, from its conception to the publication of live web content. These criteria address several areas which should be considered, including:

An English translation was released in April 2008: Swedish National Guidelines for Public Sector Websites. The translation is based on the latest version of the guidelines, released in 2006.

In the UK, the Equality Act 2010 does not refer explicitly to website accessibility, but makes it illegal to discriminate against people with disabilities. The Act applies to anyone providing a service in the public, private or voluntary sectors. The Code of Practice: Rights of Access – Goods, Facilities, Services and Premises document, published by the government's Equality and Human Rights Commission to accompany the Act, refers explicitly to websites as one of the "services to the public" which should be considered covered by the Act.

In December 2010 the UK released the standard BS 8878:2010 Web accessibility. Code of practice. This standard effectively supersedes PAS 78 (published 2006), which was produced by the Disability Rights Commission and outlined good practice in commissioning websites that are accessible to and usable by disabled people. The new standard has been designed to introduce non-technical professionals to improved accessibility, usability and user experience for disabled and older people. It is especially beneficial to anyone new to this subject, as it gives guidance on process rather than on technical and design issues. BS 8878 is consistent with the Equality Act 2010 and is referenced in the UK government's e-Accessibility Action Plan as the basis of updated advice on developing accessible online services. It includes recommendations for:

BS 8878 is intended for anyone responsible for the policies covering web product creation within their organization, and governance against those policies. It additionally assists people responsible for promoting and supporting equality and inclusion initiatives within organizations and people involved in the procurement, creation or training of web products and content. A summary of BS 8878 is available to help organisations better understand how the standard can help them embed accessibility and inclusive design in their business-as-usual processes.

On 28 May 2019, BS 8878 was superseded by ISO 30071-1, the international Standard that built on BS 8878 and expanded it for international use. A summary of how ISO 30071-1 relates to BS 8878 is available to help organisations understand the new Standard.

In the United States, Section 508 Amendment to the Rehabilitation Act of 1973 requires all Federal agencies’ electronic and information technology to be accessible to those with disabilities. Both members of the public and federal employees have the right to access this technology, such as computer hardware and software, websites, phone systems, and copiers.
Also, Section 504 of the Rehabilitation Act prohibits discrimination on the basis of disability for entities receiving federal funds and has been cited in multiple lawsuits against organizations, such as hospitals, that receive federal funds through Medicare/Medicaid.

In addition, Title III of the Americans with Disabilities Act (ADA) prohibits discrimination on the basis of disability. There is some debate on the matter; multiple courts and the U.S. Department of Justice have taken the position that the ADA requires website and app operators and owners to take affirmative steps to make their websites and apps accessible to disabled persons and compatible with common assistive technologies such as the JAWS screen reader, while other courts have taken the position that the ADA does not apply online. The U.S. Department of Justice has endorsed the WCAG 2.0 AA standard as an appropriate standard for accessibility in multiple settlement agreements.

Numerous lawsuits challenging websites and mobile apps on the basis of the ADA have been filed since 2017. These cases appear to have been spurred by a 2017 case, Gil v. Winn-Dixie Stores, in which a federal court in Florida ruled that Winn-Dixie's website must be accessible. Around 800 cases related to web accessibility were filed in 2017, and over 2,200 were filed in 2018. Additionally, though the Justice Department had stated in 2010 that it would publish guidelines for web accessibility, it reversed this plan in 2017, also spurring legal action against inaccessible sites.

A notable lawsuit related to the ADA was filed against Domino's Pizza by a blind user who could not use Domino's mobile app. At the federal district level, the court ruled in favor of Domino's because the Justice Department had not established guidelines for accessibility, but the decision was appealed to the Ninth Circuit. The Ninth Circuit overruled the district court, ruling that because Domino's is a brick-and-mortar store, which must meet the ADA, and the mobile app is an extension of its services, the app must also be compliant with the ADA. Domino's petitioned the Supreme Court, backed by many other restaurants and retail chains, arguing that this decision affects their due process rights since disabled customers have other, more accessible means to order. In October 2019, the Supreme Court declined to hear the case, effectively upholding the Ninth Circuit's decision and allowing the case to proceed.

The number and cost of federal accessibility lawsuits has risen dramatically in the last few years.

A growing number of organizations, companies and consultants offer website accessibility audits. These audits, a type of system testing, identify accessibility problems that exist within a website, and provide advice and guidance on the steps that need to be taken to correct these problems.

A range of methods are used to audit websites for accessibility:

Each of these methods has its strengths and weaknesses:

Ideally, a combination of methods should be used to assess the accessibility of a website.

Once an accessibility audit has been conducted and accessibility errors have been identified, the errors need to be remediated in order to ensure the site complies with accessibility guidelines. The traditional way of correcting an inaccessible site is to go back into the source code, reprogram the error, and then test to make sure the bug was fixed. If the website is not scheduled to be revised in the near future, that error (and others) could remain on the site for a lengthy period of time, possibly violating accessibility guidelines. Because this is a complicated process, many website owners choose to build accessibility into a new site design or re-launch, as it can be more efficient to develop the site to comply with accessibility guidelines than to remediate errors later.

With progress in AI technology, web accessibility has become easier to achieve. Third-party add-ons that leverage AI and machine learning can offer changes to the website's presentation without altering the source code. This way, a website can be made accessible to different types of users without adjusting the site for every kind of assistive technology.

For a web page to be accessible, all important semantics about the page's functionality must be available so that assistive technology can understand and process the content and adapt it for the user.
However, as content becomes more and more complex, the standard HTML tags and attributes become inadequate for providing semantics reliably. Modern web applications often apply scripts to elements to control their functionality and to enable them to act as a control or other dynamic component. These custom components or widgets do not provide a way to convey semantic information to the user agent. WAI-ARIA (Accessible Rich Internet Applications) is a specification published by the World Wide Web Consortium that specifies how to increase the accessibility of dynamic content and user interface components developed with Ajax, HTML, JavaScript and related technologies. ARIA enables accessibility by allowing the author to provide the semantics needed to fully describe a component's supported behaviour. It also allows each element to expose its current states and properties and its relationships with other elements. Accessibility problems with the focus and tab index are also corrected.
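
As a rough illustration of this idea (a sketch of common ARIA usage, not the WAI-ARIA specification itself), the following TypeScript shows a scripted custom control exposing a role, a state (aria-pressed), keyboard focus and keyboard activation; the element, function name and label are invented for the example.

    // Minimal sketch, assuming a browser environment with the DOM API available.
    function makeToggle(label: string): HTMLElement {
      const toggle = document.createElement("div");
      toggle.textContent = label;

      // ARIA role and initial state: a button that reports whether it is pressed.
      toggle.setAttribute("role", "button");
      toggle.setAttribute("aria-pressed", "false");

      // Put the element in the keyboard tab order; a <div> is not focusable by default.
      toggle.tabIndex = 0;

      const flip = () => {
        const pressed = toggle.getAttribute("aria-pressed") === "true";
        toggle.setAttribute("aria-pressed", String(!pressed)); // expose the new state
      };

      // Activate with the mouse or with the keyboard (Enter or Space), as a native button would.
      toggle.addEventListener("click", flip);
      toggle.addEventListener("keydown", (event: KeyboardEvent) => {
        if (event.key === "Enter" || event.key === " ") {
          event.preventDefault();
          flip();
        }
      });

      return toggle;
    }

    document.body.appendChild(makeToggle("Mute notifications"));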



By Erik

Web archiving

Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture due to the massive size and amount of information on the Web. The largest web archiving organization based on a bulk crawling approach is the Wayback Machine, which strives to maintain an archive of the entire Web.

The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving. National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.

Commercial web archiving software and services are also available to organizations who need to archive their own web content for corporate heritage, regulatory, or legal purposes.

While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996. The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001. As of 2018, the Internet Archive was home to 40 petabytes of data. The Internet Archive also developed many of its own tools for collecting and storing its data, including Petabox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries. Other projects launched around the same time included Australia's Pandora and Tasmanian web archives and Sweden's Kulturarw3.

From 2001 to 2010, the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas. The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open source tools for the creation of web archives.

The now-defunct Internet Memory Foundation was founded in 2004 and funded by the European Commission in order to archive the web in Europe. The project developed and released many open source tools for "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection." The data from the foundation is now housed by the Internet Archive but is not currently publicly accessible.

Despite the fact that there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President’s tweets as official statements.

Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
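
A rough sketch of such a metadata record follows; the field names are illustrative and do not correspond to a standard archival format such as WARC.

    // Minimal sketch of a per-resource metadata record for an archived capture.
    interface ArchivedResource {
      url: string;            // address the resource was captured from
      capturedAt: Date;       // access time of the capture
      mimeType: string;       // MIME type reported by the server
      contentLength: number;  // size of the captured payload in bytes
    }

    const record: ArchivedResource = {
      url: "https://example.org/index.html",
      capturedAt: new Date("2021-03-01T12:00:00Z"),
      mimeType: "text/html",
      contentLength: 18432,
    };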

The most common web archiving technique uses web crawlers to automate the process of collecting web pages. Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remote harvesting web content. Examples of web crawlers used for web archiving include:

There exist various free services which may be used to archive web resources “on-demand”, using web crawling techniques. These services include the Wayback Machine and WebCite.

Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
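
The general idea of exporting relational rows into XML can be sketched as follows; the table shape and tag names are invented for the example and are not the actual DeepArc or Xinq schemas.

    // Minimal sketch: turn rows from one table into a simple XML document.
    interface Row { [column: string]: string | number }

    function rowsToXml(table: string, rows: Row[]): string {
      const escape = (v: string | number) =>
        String(v).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

      const body = rows
        .map(row => {
          const fields = Object.entries(row)
            .map(([col, val]) => `    <${col}>${escape(val)}</${col}>`)
            .join("\n");
          return `  <row>\n${fields}\n  </row>`;
        })
        .join("\n");

      return `<${table}>\n${body}\n</${table}>`;
    }

    console.log(rowsToXml("articles", [
      { id: 1, title: "Opening night", published: "2004-06-01" },
    ]));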

Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.

A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
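
A simplified sketch of that flow follows, with invented names and an in-memory map standing in for durable storage; a real system would sit in the web server or a proxy and write the captured bitstreams to permanent storage.

    // Minimal sketch of transactional archiving: hash each response,
    // skip duplicate content, and keep the exact bytes that were served.
    import { createHash } from "crypto";

    const store = new Map<string, Buffer>(); // key -> stored bitstream
    const seen = new Set<string>();          // content hashes already archived

    function archiveResponse(url: string, body: Buffer): void {
      const digest = createHash("sha256").update(body).digest("hex");
      if (seen.has(digest)) return;          // filter out duplicate content
      seen.add(digest);
      store.set(`${url}#${digest}`, body);   // preserve the response as a bitstream
    }

    archiveResponse("https://example.org/", Buffer.from("<html>...</html>"));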

Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

However, it is important to note that a native format web archive, i.e., a fully browsable web archive, with working links, media, etc., is only really possible using crawler technology.

The Web is so large that crawling a significant portion of it takes a large number of technical resources. The Web is changing so fast that portions of a website may change before a crawler has even finished crawling it.

Some web servers are configured to return different pages to web archiver requests than they would in response to regular browser requests. This is typically done to fool search engines into directing more user traffic to a website, and is often done to avoid accountability, or to provide enhanced content only to those browsers that can display it.

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman states that “although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web”. However national libraries in some countries have a legal right to copy portions of the web under an extension of a legal deposit.

Some private non-profit web archives that are made publicly accessible like WebCite, the Internet Archive or the Internet Memory Foundation allow content owners to hide or remove archived content that they do not want the public to have access to. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a recent lawsuit against Google’s caching, which Google won.

In 2017 the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses doing digital communications are required to keep a record. This includes website data, social media posts, and messages. Some copyright laws may inhibit web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works, including those that do not have an open access license, and thereby contributes to the archival of scientific research which may otherwise be lost.


By Erik

Web Compatibility Test for Mobile Browsers

Web Compatibility Test for Mobile Browsers (often called the Mobile Acid test) is a test page published and promoted by the World Wide Web Consortium (W3C) to expose web page rendering flaws in mobile web browsers and other applications that render HTML. It was developed in the spirit of the Acid tests by the Web Standards Project and tests the relevant parts that mobile browsers need to support. The test uses JavaScript for some parts to exercise the different technologies. The browser has to pass 16 different subtests, indicated by a 4 × 4 grid of squares.

The mobile Acid test tests a variety of web standards published by the World Wide Web Consortium and the Internet Engineering Task Force.
Specifically, the mobile Acid test tests:


By Erik

Web content lifecycle

The web content lifecycle is the multi-disciplinary and often complex process that web content undergoes as it is managed through various publishing stages.

Authors describe multiple “stages” (or “phases”) in the web content lifecycle, along with a set of capabilities such as records management, digital asset management, collaboration, and version control that may be supported by various technologies and processes. One recognized technology for managing the web content lifecycle is a web content management system.

Concepts often considered in the web content lifecycle include project management, information management, information architecture, and, more recently, content strategy, website governance, and semantic publishing.

Various authors have proposed different “stages” or “phases” in the content lifecycle. Broadly speaking, the stages include content creation/development, revision, distribution, and archiving. The lifecycle processes, actions, content status, and content management roles may differ from model to model based on organizational strategies, needs, requirements, and capabilities.

In 2003, McKeever described "two iterative phases": "the collection of content, and the delivery or publishing of that content on the Web." She also explains a Web Content Management (WCM) "four layer hierarchy" (content, activity, outlet, and audience) intended to illustrate the breadth of WCM.

Bob Boiko's Content Management Bible emphasizes three major parts: collect (creation and editing is much more than simply collecting), manage (workflows, approvals, versioning, repository, etc.), and publish. These concepts are graphically displayed in a Content Management Possibilities poster developed by Boiko. The poster details such content management concepts as metadata, syndication, workflows, repositories, and databases.

Gerry McGovern also sees three “processes,” designating them creation, editing, and publishing.

JoAnn Hackos' Content Management for Dynamic Web Delivery argues for four "components": authoring, repository, assembly/linking, and publishing.

In Managing Enterprise Content, Ann Rockley argues for the planning of content reuse through four stages: create, review, manage, deliver. A stage can have sub-stages; for example, the "create" stage has three sub-stages: planning, design, and authoring and revision. She notes that content is often created by individuals working in isolation inside an enterprise (the coined term is the Content Silo Trap). To counter this content silo effect, she recommends using a "unified content strategy," "a repeatable method of identifying all content requirements up front, creating consistently structured content for reuse, managing that content in a definitive source, and assembling content on demand to meet your customers' needs."

Nakano described five “collaboration operations”: Submit, Compare, Update, Merge, and Publish.

The State government of Victoria (Australia) produced a flowchart with a diagrammatic view of the web content lifecycle with five stages: Develop, Quality Approval, Publish, Unpublish, and Archive. Some of the stages include sub-stages (for example, Archive consists of Storage, Archived, and Disposed) intended to further delineate content status. In addition, this model depicts three aspects (Status, Process, and Roles) as part of the flow for web content. The four roles in this model are content author, content quality manager, business quality manager, and records manager.

AIIM speaks of managing content to achieve business goals. The AIIM ECM 101 poster from 2003 and the AIIM Solving the ECM Puzzle poster from 2005 present the same five stages: Capture, Manage, Store, Deliver, Preserve.

The Content Management Lifecycle Poster devised by CM Pros suggests six “steps”:

Each step contains sub-steps. For example, step 1, Plan, consists of Align, Analyze, Model, and Design; and step 2, Develop, consists of Create, Capture, Collect, Categorize, and Edit.

There is also another six stage model based on the concept of product lifecycle:

Bob Doyle suggests seven stages of the Web content lifecycle:

Doyle argues for seven stages based on the psychologist George A. Miller’s famed magical number “seven plus or minus two” limit on human information processing. He notes this is merely a suggestion and that one should “add or subtract a couple of your own favorites.”

In a 2005 article, Woods addressed governance of the content lifecycle. In his model, there are categories of issues to address, rather than a simple, cradle-to-grave pathway. He writes that most content governance questions fall into one of the following categories:

More recently, Kristina Halvorson has humorously suggested 15 discrete steps in the web content lifecycle: Audit, Analyze, Strategize, Categorize, Structure, Create, Revise, Revise, Revise, Approve, Tag, Format, Publish, Update, Archive.

Enterprise content management as a business strategy might incorporate web content management:

When integrated with an ECM system, WCM enables organizations to automate the complete Web content lifecycle. As soon as new content is developed, the system ensures that it goes live the moment it is intended to, not a minute earlier. By specifying timed releases and expiration dates, content is published to and removed from the Web according to recommendations, requirements and even regulations.
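
The timed-release and expiration behaviour described in the passage above can be sketched roughly as follows; the content item shape and the isLive check are illustrative rather than the API of any particular WCM product.

    // Minimal sketch of scheduled publish/unpublish logic.
    interface ContentItem {
      title: string;
      publishAt: Date;   // timed release
      expireAt?: Date;   // optional expiration date
    }

    function isLive(item: ContentItem, now: Date = new Date()): boolean {
      if (now < item.publishAt) return false;                   // not released yet
      if (item.expireAt && now >= item.expireAt) return false;  // already removed
      return true;
    }

    const pressRelease: ContentItem = {
      title: "Annual results",
      publishAt: new Date("2021-05-01T09:00:00Z"),
      expireAt: new Date("2021-08-01T00:00:00Z"),
    };

    console.log(isLive(pressRelease, new Date("2021-06-15T00:00:00Z"))); // true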

A web content management system can support and enhance certain processes because of automation, including document management, templates, and workflow management. However, the absence of well defined roles and process governance will greatly dilute the effectiveness of any technology intended to augment/enhance the publishing process overall.

Information management describes the “organization of and control over the structure, processing, and delivery of information.” The goal of information lifecycle management is to use policies, operations, and infrastructure to manage information throughout its useful life. However, businesses struggle to manage their data and information.

The missing stage in all the major sources is the organization of information, structuring it where possible, for example using XML or RDF, which allows arbitrary metadata to be added to all information elements. This is the secret that the knowledge managers describe as turning mere data or information into knowledge. It allows information to be retrieved in a number of ways and reused or repurposed in many more.

Using semantic markup in the publishing process is part of semantic publishing. Tim Berners-Lee's original vision for the Semantic Web has yet to be realized, but many projects in various research areas are underway.


By Erik

Web content



Web content is the textual, visual, or aural content that is encountered as part of the user experience on websites. It may include—among other things—text, images, sounds, videos, and animations.

In Information Architecture for the World Wide Web, Lou Rosenfeld and Peter Morville write, “We define content broadly as ‘the stuff in your Web site.’ This may include documents, data, applications, e-services, images, audio and video files, personal Web pages, archived e-mail messages, and more. And we include future stuff as well as present stuff.”

While the Internet began with a U.S. Government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory (CERN) proposed the concept of linking documents with hypertext. But it was not until Mosaic, the forerunner of the famous Netscape Navigator appeared, that the Internet became more than a file serving system.

The use of hypertext, hyperlinks, and a page-based model of sharing information, introduced with Mosaic and later Netscape, helped to define web content, and the formation of websites. Today, websites are categorized mainly as being a particular type of website according to the content a website contains.

Web content is dominated by the "page" concept. The web had its beginnings in an academic setting, in an environment dominated by typewritten pages, and the idea was to link directly from one academic paper to another. This was a completely revolutionary idea in the late 1980s and early 1990s, when the best a link could do was to cite a reference in a typewritten paper and name that reference either at the bottom of the page or on the last page of the academic paper.

When it became possible for any person to write and own a Mosaic page, the concept of a "home page" blurred the idea of a page. It was possible for anyone to own a "Web page" or a "home page", which in many cases consisted of many physical pages in spite of being called "a page". People often cited their "home page" to provide credentials, links to anything that a person supported, or any other individual content a person wanted to publish.

Even though we may embed various protocols within web pages, the “web page” composed of “HTML” (or some variation) content is still the dominant way whereby we share content. And while there are many web pages with localized proprietary structure (most usually, business websites), many millions of websites abound that are structured according to a common core idea.

Blogs are a type of website that contains mainly web pages authored in HTML (although the blogger may be completely unaware that the web pages are composed using HTML because of the blogging tool in use). Millions of people use blogs online; a blog is now the new "home page", that is, a place where a persona can reveal personal information and/or build a concept of who this persona is. Even though a blog may be written for other purposes, such as promoting a business, the core of a blog is the fact that it is written by a "person" and that person reveals information from her/his perspective. Blogs have become a mighty weapon used by content marketers who want to increase their site's traffic, as well as rank in the search engine result pages (SERPs). Research from Technorati shows that blogs now outrank social networks for consumer influence (Technorati's 2013 Digital Influence Report).

Search engine sites are composed mainly of HTML content, but also have a typically structured approach to revealing information. A search engine results page (SERP) displays a heading, usually the name of the search engine itself, and then a list of websites and their web addresses. The web addresses are listed in order of relevance to the search query. Searchers typically type keywords or keyword phrases to find what they are looking for on the web.

Discussion boards are sites composed of "textual" content organized in HTML or some variation that can be viewed in a web browser. The driving mechanism of a discussion board is the fact that users are registered and, once registered, can write posts. Often a discussion board is made up of posts asking questions to which other users may provide answers.

Ecommerce sites are primarily composed of textual material embedded with graphics displaying pictures of the item(s) for sale. However, extremely few such sites are composed page-by-page using some variant of HTML. Generally, the web pages are generated as they are served from a database to a customer using a web browser; the user still sees a mainly textual document arriving as a web page to be viewed in the browser. Ecommerce sites are usually organized by software we identify as a "shopping cart".

While there are many millions of pages that are predominantly composed of HTML, or some variation, in general we view data, applications, E-services, images (graphics), audio and video files, personal web pages, archived e-mail messages, and many more forms of file and data systems as belonging to websites and web pages.

While there are many hundreds of ways to deliver information on a website, there is a common body of knowledge of search engine optimization that should be consulted for guidance on how anything other than text should be delivered. Currently, search engines are text-based and are one of the common ways people using a browser locate sites of interest.

When talking of SEO, or search engine optimization, web content is divided into two basic formats considering the structure of present-day websites: non-template content and template content.

In the non-template case, a website provides a blank space where web content is written in the form of paragraphs and bullets. Information written on these pages describes the services and amenities provided by a company. Non-template content is mainly used because it involves fewer infographics and can be customized, which reduces page load times.

Template web content is content written to specific formats provided on a web page, with specific sections written within fixed spaces. This web content includes graphics and structural design. Template web content is more prevalent among modern websites.

Because websites are often complex, the term "content management" appeared in the late 1990s, identifying a method or in some cases a tool to organize all the diverse elements to be contained on a website. Content management often means that within a business there is a range of people who have distinct roles to do with content management, such as content author, editor, publisher, and administrator. But it also means there may be a content management system whereby each of the different roles is organized to provide their assistance in operating the system and organizing the information for a website. A business may also employ various content protection measures, typically technologies used to attempt to frustrate copying without permission.

Even though a business may organize to collect, contain, and represent that information online, content needs to be organized in a manner that provides the reader (browser) with an overall "customer experience" that is easy to use, ensures the site can be navigated with ease, and ensures the website can fulfill the role assigned to it by the business, that is, to sell to customers, to market products and services, or to inform customers.

Geotargeting of web content in Internet marketing and geomarketing is the method of determining the geolocation (the physical location) of a website visitor with geolocation software and delivering different content to that visitor based on his or her location, such as country, region/state, city, metro code/ZIP code, organization, Internet Protocol (IP) address, ISP, or other criteria.
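
A simplified sketch of that idea follows; the country lookup and the content table are invented for the example, and a real site would consult a geolocation database or service rather than inspecting the address itself.

    // Minimal sketch: pick content based on a country code resolved from the visitor's IP.
    type CountryCode = "US" | "DE" | "JP" | "OTHER";

    const localizedBanner: Record<CountryCode, string> = {
      US: "Free shipping within the United States",
      DE: "Kostenloser Versand innerhalb Deutschlands",
      JP: "日本国内送料無料",
      OTHER: "International shipping rates apply",
    };

    // Hypothetical lookup; in practice this would query a geo-IP database or service.
    function countryForIp(ip: string): CountryCode {
      return ip.startsWith("192.0.2.") ? "US" : "OTHER";
    }

    function bannerFor(ip: string): string {
      return localizedBanner[countryForIp(ip)];
    }

    console.log(bannerFor("192.0.2.44")); // "Free shipping within the United States"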

A typical example for different content by choice in geo-targeting is the FedEx website at FedEx.com where users have the option to select their country location first and are then presented with a different site or article content depending on their selection.

With automated geotargeting in Internet marketing and geomarketing, the delivery of different content based on the visitor's geolocation and other personal information is performed automatically, without a manual choice by the visitor.

