
Thursday, December 27, 2012

Google+ increasing its reach

Just about a week ago, it was announced that Blogger users may now mention either people or pages from Google+ in the same manner as within G+ itself. This would have been quite useful when I first promoted my +Mayo Takeuchi Plus page, which has now accumulated a good body of photographs. However, I cannot seem to cite myself, perhaps because I've linked this blog to my personal Google+ account and it would be self-serving?
In the meantime, I've also added more G+ related widgets on this blog, including one that allows me to show thumbnails of people who have circled my personal account. Another button hopefully will encourage more people to circle my aforementioned Plus page.
During my "day job" research, I'd also noticed that, although the follower/circle counts weren't up to date, the PPC spots were also starting to mention sponsor pages on Google+.
In an article, "marriage of SEO and Social Media" (which likens this union to a predictable yet sudden elopement, to extend the metaphor), Matt Cutts's views on "social signals" continuing to grow in importance are quoted. One presumes that Google+ related cues such as +1s and other activities native to their environment would likely hold more weight than those of other SNSs, although perhaps there could be favoured emphasis on Twitter for breaking news.
If Facebook and Bing's integration evolves the way it has been implied it will - consider the 2012 August 30 announcement of the latter enabling searches of photos hosted in the former - Google would find it natural to mine Google+ data to enhance personalized results. Indeed, searching for my name on Google (in an incognito browser session) has started to yield photos I've publicly uploaded to my Google+ accounts.
Speaking of social signals and Google, their Android OS received an influential vote of confidence from former Apple evangelist and current author of a free e-guide to Google+, +Guy Kawasaki. I'll be looking through this publication with interest and trying out more of the Google+ features.
In the meantime, I notice that I'm only prompted to share a new post via Google+ if immediately publishing. I'm hoping that time delayed G+ shares/posts (such as are available for Twitter) will become possible eventually.

Sunday, December 16, 2012

Three Tips on Time Management

A little over fourteen months ago I'd posted the predecessor to this post, where I touched upon the concept of multitasking. Now, over three months since my last post, I find myself on vacation, and finally able (and willing) to return my attention to this blog.
I recently read a discussion by friends that mentioned that "the days are long, but the years are short". I've certainly found this to be the case also, for the client-facing work I've been involved with since August of last year.
There are advantages to being obsessive, focused, and absorbed in any activity, I believe, but there are also unavoidable drawbacks, such as needing to use a timer or other externalized tooling to ensure that less engrossing tasks involved in daily living are still accomplished as required.

So, here are three tips to avoid ending up like that (altogether uncomfortably identifiable) XKCD character.

1. Learn to accurately estimate task duration and effort.

As the XKCD comic mentions, building a schedule can be useful to structure one's day, week, or project. However, I also posit that the ability to accurately estimate the effort and time each task will require is needed to create an effective timetable.

2. When prioritizing tasks, pick your battles.

Another indispensable skill concerns correctly prioritizing one's to-do list. This I consider an essential prerequisite to creating schedules that can be followed, regardless of the desire (or lack thereof) to do so. There are hard deadlines, softer (more negotiable) ones, and "nice to have"s. Identify them first.

3. Make yourself a priority.

Building in breaks, meals, and contrasting activities to break up the day can contribute to successfully adhering to any schedule. This clearly involves self-awareness and learning one's proclivities from experience. Should one take a sugar boost in mid-afternoon to offset the lull of digesting a lunch? Is one most productive first thing in the morning, or late afternoon?

As a child, I thrived on routine, although from an external perspective my parents created a household that was completely enslaved to our (self-imposed) schedule. While I now enjoy heretofore unknown levels of flexibility in my work-week, I'd like to reassess all of my priorities at this year's end. Part of the 2013 planning will hopefully include ensuring that I resume blogging more regularly!

Thursday, September 6, 2012

Caveat googlers?

Courtesy of an article circa 2010

Google has enjoyed mainstream use as a verb, in English and Japanese ("ググる"). Furthermore, if Wikipedia is to be believed, people "google" things in Dutch, Korean, Portuguese, Russian, Spanish, and Turkish. However, a quick look at any of Google's portals shows that the company offers much, much more. Its combination of various services, together with its perceived bias toward presenting closely allied content, led me to a recent article by Danny Sullivan. There he, in a nutshell, decries Google's having crossed an arbitrary line between what search engines are "expected" to do - objectively point to online content - and what it now (and increasingly) does: provide a biased subset of content that aligns with its business model. I'd found Mr. Sullivan's op/ed via an article posted to TechCrunch, which caught my eye due to its title: "Why we may no longer be able to trust Google".
This was a strange statement to me - Google has never been, to my knowledge, a non-profit or public sector service. Many of Google's offerings have evolved in order to compete against other private sector corporations such as, notably, Microsoft (which had its own news outlet, MSNBC, about 13 years before they unveiled Bing, their search engine).
For this reason, I have never trusted Google any more than I felt comfortable trusting Microsoft, Yahoo!, Lycos, or any of the myriad other search engine providers that have had their moments in the proverbial sun since the advent of the internet age. I expected each of these companies to have inherent biases in what they present. And as I believe I've mentioned before, I chose Blogger as my hosting site in large part because Google owns it, with the expectation that it would initially give me a slight edge in Google organic results over, say, WordPress, even if my content (read: site and page-specific SEO efforts) were exactly the same.
The fact that Google has branched out to acquire so many data sources, such as Zagat and Frommer's reviews as mentioned by Mr. Sullivan, didn't surprise me either. It's quite understandable that Google may believe that these historically credible sources of information will become popular with its users, thus encouraging widespread adoption of their location based search (now known as Google Plus Local). I believe that's fodder for another blog post, however.

In any case, here is my stance regarding the trustworthiness of search engine services, summarized:
  • The major search engines come from private sector companies, and do not have to conform to any idealized view of what search engines may be "expected" to deliver.
  • Since their objectives include running a profitable business, they will have inherent biases, which are likely not just to appear via paid advertisements, but in their organic results as well.
  • Their users would benefit from being aware of and accounting for said biases when conducting searches. 
If you're afraid of what you share on SNSs being exploited by the services themselves (e.g. Facebook or Twitter in targeting ads to you) or by prospective employers/schools/nosy acquaintances etc. performing background checks on you via said search engines, then control what you share, every time and via every service that you use.
In the meantime, my persistent advice for content creators - and if you're active on an SNS, you are one regardless of whether you realize it or not - is this: so long as you focus on creating high quality content, judiciously use social media to spread awareness of said content, and refrain from using disreputable (underhanded) techniques, your web presence should (eventually) rank reasonably well for your targeted keywords, even if unseating Wikipedia is but a dream. Certainly, I was pleased to see today that this blog is returned first in

Friday, August 31, 2012

Google+ers, please circle my "Mayo T Plus" page!

Photograph by Mayo Takeuchi
I use the macro feature liberally to create wallpaper-friendly photos

Also known in more colloquial (and honest) parlance as a "shameless plug": I would greatly appreciate it if everyone using Google+ who has enjoyed my nature photographs, rants about language, or who may have an interest in my views on Japanese culture, would add my new Google+ hosted "business" page to their circles.

Why, you may ask, should you do so? Well, several reasons come to mind.

  • First, I have decided to begin populating this page, rather than use my personal G+ presence, to promote my "best of" photographs. Local foods and blurry candids may still be published to my Facebook timeline (as well as to pre-existing themed albums such as my pandas only album), but I believe that some of my pictures are actually good enough to use as desktop or mobile device wallpaper: hence the justification of this migration.
  • Second, I have in the past done some fast and loose freelance translation - most recently, of a bunch of Beat Takeshi transcripts from his segments on a late-night radio show during the 1980s that had a cult following, for someone's academic thesis. Noting that said material was so very NSFW (not safe for work) that other translators had actually declined to tackle it, I was simply grateful that I could puzzle out the extreme smut (which was easier to study online than, say, specialized technical jargon). Since this work isn't related to my full-time, "mainstream" role, I decided I would discuss it on Google+, where I am hoping to find people interested in my views on informal language and how languages reflect culture.
  • Finally, I quickly discovered that for business pages, Google+ will only allow contacts to be added if they add you first.

Why did I make the G+ page in the first place?
  • I don't believe that the aforementioned topics strictly belong to the normal scope of this blog, although I'm justifying this particular post as both a place to mention the recent and pending features for G+ business pages and to cross-promote my Mayo Takeuchi Plus page.
  • As well, in order to speak about this aspect of G+, I decided a hands-on approach was easiest and most accurate.
  • I wanted to share my better quality photos as a "service" and showcase other non-dayjob skills.
So, to all who do kindly add me to your circles, a profound thanks!

Friday, July 13, 2012

ICANN't bring myself to buy a domain name (yet)

The registrar seems to have plenty of detractors

With apologies for the unintentional hiatus I've returned to ramble, hopefully not too incoherently, about a topic to which I've given brief bursts of intensive thought over many years: domain names.

For my day-to-day job, I consider things like how valuable the gTLDs (generic top-level domains such as .net, .info, and .org) and ccTLDs (ISO-compliant two-letter country codes) happen to be for my clients' web sites. Of course, since ICANN relaxed the rules on new gTLDs (at a price of $185K USD a pop, much to many people's chagrin, as ranted about by asmartbear - and his commenters - a year ago), there will be even more to consider for future site analyses.

In the context of maintaining (if intermittently) this blog, I'd read many articles and posts encouraging everyone to purchase their own domain, as the * address "seems unprofessional" and could adversely affect one's web cred.

And personally, it may surprise my readers that while I own a permanent forwarding web (and email) address provided by my alma mater, I have yet to purchase any personalized domains, despite having maintained a sequence of web sites dating back to early 1994, which consisted of a handful of hand-coded HTML (v 1.0) pages.

What resurrected this topic for me this year was Google's February-March roll-out of automatic Blogger redirects from the .com gTLD to the ccTLDs. As a Blogger member I don't recall having received any forewarning of this, save the fact that my traffic source stats abruptly started to include the original *.com URL, which I'd registered on numerous directories, as the referrer address.

Hopefully my new JavaScript addition (which forces the /ncr path segment to be included in the request, thereby stopping the redirects), courtesy of this blog entry, has put a stop to this; but for at least a few weeks, if not months, Google has been using a 302-type HTTP response, i.e. a temporary redirect, to take my non-US based visitors to the appropriate ccTLD version of my original blog address.

I have two concerns with Google's implementation of these redirects. First, should this response not be the 303 type ("see other"), despite the extensive misuse of 302? Second, these 302-type redirects presumably prevent the transfer of any link juice to the proliferation of new URLs they have created for these sites - not just the landing page, but each of the posts.

Having just double-checked, each Blogger-generated page does indeed automatically provide a populated canonical link element (I would have added them myself were this not the case). This should prevent duplicate content from being indexed from the alternate URLs that have been created in the interim.

So, why not buy a personal domain, one may ask. I've been price shopping, so I'm not ruling out the eventual purchase of one. However, one thing appalled me enough to seek out the anti-domain-name-registration site that I've linked to via my borrowed graphic above: the pricing structure. Drawn to the registrar by its partnership with Google and its objectively low first-year price for a .org address, I discovered upon going through the purchasing wizard that it offered this pricing structure, rounded to the nearest US dollar:

1 year: 7 USD (base price on sale, no frills, just the ICANN fee added)
2 years: 15 USD (7.50 USD/year)
3 years: 25 USD (over 8 USD/year)
5 years: 75 USD (15 USD/year)
10 years: 162 USD (over 16 USD/year)

Anyone capable of performing basic arithmetic would realize that, rather than reducing the annual cost as one might expect for a longer-term contract, they have increased the annual price of the 10-year subscription to more than double the initial price (and probably about 60% more than the non-promotional price) as compared with just a one-year purchase. As I doubt that anything related to my full name will generate much competition, I'm seriously considering purchasing it for a year at a time and renewing.

Since then, I've learned that this particular company actually supported SOPA, so despite the ease of using them, I doubt I'll be registering anything through them, anytime soon. The search continues...

Wednesday, May 16, 2012

Thoughts on Google's Knowledge Graph

Example of Knowledge Graph enabled search in Google
Disambiguating "Taj Mahal" - structure or music band? Courtesy of Google's own blog
Google has announced its roll-out of what is otherwise known as the semantic web: ways to prompt the user to help disambiguate query terms ("strings", as in sequences of textual characters) into more specific concepts ("things"). Very catchy slogan.
The Mashable article provides a basic overview of what this news means, and as I read this, my thoughts invariably turned to my former job in LanguageWare (which has been partially described over four non-contiguous blog posts last year, related to Language Identification).
When one is first exposed to linguistic data which has been amassed for the purpose of spell-checking, it quickly becomes clear that in order to use these same word lists to effect grammatical checks and even orthographical ones (e.g. whether a proper noun needs to be title-cased even when it doesn't commence a sentence), the part of speech is important.
The aforementioned Mashable article cites "kings" as an example, where the likely senses all have to do with nouns. Actually, quite a few words exist that are even more difficult to process in this way, such as "bank", which is not just a noun (a repository of financial, genetic, food, blood, or other items such as paper, data, or memory, with geographical and geological senses besides) but can be part of noun phrases ("bank shot" in sports) or a verb (to bank something). Its plural form could refer to surnames and place names, as well as the verbal inflection.
Granted, most search engine users have been conditioned, it appears, to minimize stop words and focus on noun phrases, but as with my example, one needs to disambiguate the shorter queries (one or two word terms) more often than not, even when it's a foregone conclusion that the concept being sought is a noun.
To get a sense of how often such disambiguation is necessary, I thought that it might be interesting to understand how many pages exist in Wikipedia for this purpose. In its English set of pages, this search yielded 35,452 hits. Given the existence of 3,951,340 total pages (for English only, as of May 16 2012), disambiguation pages constitute 0.9% of the total. The Japanese, French and German language pages are structured differently, where disambiguating entries are not identified as such in the title (e.g. the pages for Hase in German and in French).
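The arithmetic behind that figure:

```javascript
// Disambiguation pages as a share of all English-language Wikipedia pages,
// using the counts quoted above (as of May 16, 2012).
var share = (35452 / 3951340) * 100;
// ≈ 0.897, i.e. roughly 0.9% of all pages.
```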
In order for pages to be correctly indexed by any search engine as belonging to a specific topic, then, the co-occurrence of terms that semantically reinforce the primary keyword becomes crucial. A genre of writing where topic determination may be more challenging for search engine indexing is scientific journalism (as found in non-refereed publications such as newspapers and non-specialized magazines). Anecdotally, I've noticed that even when the subject matter concerns pure science, the authors may attempt to make the contents more accessible or relatable to a layperson audience. This in turn means that there can be mentions of popular culture or seemingly less related subjects, often found prominently (early in the article) as a means of capturing readers' attention and providing analogies.
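To illustrate the co-occurrence idea with my "bank" example, here is a toy sketch; the sense labels and cue-word lists are invented for illustration, not drawn from any real lexicon:

```javascript
// Toy word-sense disambiguation: each sense is voted for by the number
// of its cue words that co-occur in the surrounding context.
var senses = {
  "bank:finance": ["loan", "deposit", "interest", "account"],
  "bank:river": ["shore", "erosion", "water", "fishing"],
};

function disambiguate(contextWords) {
  var best = null;
  var bestScore = -1;
  Object.keys(senses).forEach(function (sense) {
    var score = senses[sense].filter(function (cue) {
      return contextWords.indexOf(cue) !== -1;
    }).length;
    if (score > bestScore) {
      bestScore = score;
      best = sense;
    }
  });
  return best;
}
```

Real indexers use far richer signals, of course, but the principle - neighbouring terms reinforcing one reading over another - is the same.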
I may return with further thoughts on semantic web-enabled search; certainly I look forward to experimenting with Google's US roll-out.

Tuesday, April 3, 2012

A tale of Wikipedia's dominance

As illustrated in the xkcd comic above, Wikipedia has had an enormous impact on many web users. A contributing factor to its success in recent years may be how visible its pages are in organic searches.

Google has long been reputed to favour Wikipedia content in its SERPs. However, Search Engine Watch recently established that (albeit by a narrow margin), Bing is even more likely than Google to return a Wikipedia page organically.

Personally, I find it completely unsurprising that Wikipedia articles would dominate organic rankings:

  • Their URLs are easy to hack: I often go directly to the topic I wish by crafting the URL, and they also have extensive redirects in place, allowing me to reach the desired content even if my guess wasn't the canonical term.
  • They make an effort to police their content to minimize bias and conjecture.
  • Many of their pages are updated frequently, again with the power of crowd-sourcing.
  • The writing quality is also monitored (to varying extents of effectiveness).
  • They have extensive and logical internal linking conventions.
  • Many external sites (this blog included!) link into their content as a matter of course.
  • They have a .org domain.

The combination of the above (and doubtless more criteria that my brief brainstorming didn't capture) makes Wikipedia an ideal candidate to rank well in search engine indices. I also find it especially appealing to compare different-language articles that cover the same topics, since these often reflect general levels of interest and, more interestingly, differences which are partially attributable to cultural values.

Below is one last thought (of many xkcd strips devoted to Wikipedia), which resonates with me quite well. I've been known to pointedly avoid looking at the start page, since so many of the topics it showcases suddenly seem fascinating, despite my not having given most of them much thought ever before.

Monday, March 19, 2012

Build it as if they will come

Building something though people may not immediately attend? Karlskirche in Vienna by night

Last Friday, Search Engine Journal transcribed part of Matt Cutts' talk which pre-announced changes to Googlebot that will address "overly optimized" content:
What about the people optimizing really hard and doing a lot of SEO. We don’t normally pre-announce changes but there is something we are working in the last few months and hope to release it in the next months or few weeks. We are trying to level the playing field a bit. All those people doing, for lack of a better word, over optimization or overly SEO – versus those making great content and great site. We are trying to make GoogleBot smarter, make our relevance better, and we are also looking for those who abuse it, like too many keywords on a page, or exchange way too many links or go well beyond what you normally expect. We have several engineers on my team working on this right now.
First, it's my impression that although Cutts does say he was lacking for a better term, 1) if anything were "overly" optimized it would no longer BE optimized and 2) "great" is a pretty subjective label to apply to most things, web content included.

Then, it occurs to me that concocting text that contains what people (and Bots) perceive as an "excessive" presence of keywords - be they the targeted ones or semantically related ones - would be another form of keyword stuffing, a long-acknowledged "black hat" measure when implemented in meta data. Or as I should have written before, an "unnatural" frequency with which certain terms are used (rather than incorporating variants and "mixing it up", so to speak) more often than not reflects writing crafted with Machine Translation in mind, in order to localize texts more affordably.

The author of the afore-linked article in SEJ speculates that correlative factors such as the number of shares and the amount of (presumably positive?) interaction, as evidenced by comments and discussions pertaining to a page, will help to enable this "smarter" GoogleBot incarnation. While this may well be the case, I'm left with the sense that sound content creation practices, as they should always cater to the (human) audience, will sooner or later enable accurate search engine indexing. That is, with time search bots will all process language more naturally, as they aim to mimic human comprehension and reaction - so craft one's text with one's intended readership firmly in mind, and the categorizing will (should) follow.

As an aside, I am trying to associate some of my personal photos with subsequent blog entries for a couple of reasons, although I shall readily admit that some of the links will be quite tenuous. Which in light of standard SEO practice (associating semantically relevant images with alternate text that reinforces the theme of the text), is humourously ironic.

In this case, the pictured Karlskirche (St. Charles's Church) was built for the patron saint of the healer of plagues, a year after the end of an epidemic. In other words, the population was significantly depleted from this period - yet, the church was still built in quite an opulent and triumphant style. One could say even that although the city was still in recovery, it anticipated imminent popularity with optimism and foresight. And indeed, it's a well visited tourist landmark today (I pass it often as well, since it's in my neighbourhood). So too, one may hope that curators of web presences aspire to achieve acclaim and publish well-loved content.

Friday, February 17, 2012

Is larger (PPC) better? Size matters, but... the #G+ strategy

After winding down from what still feels novel but is actually BAU (business as usual) for me today, I read an article which includes this passage:
In testing for the ads, Google mentioned clickthrough rates were significantly higher than the previous 2/3 line sitelinks. One would argue that is hardly surprising givent he[sic] real estate that these new ads take up, and that in itself presents more interesting scenarios to SEO’s[sic] who are already under pressure with many of the changes Google has made to its search results set. Further more[sic] these results bear many similarities to those of the sitelinks already in place within organic search results.
More real estate to PPC which this undoubtedly will mean, should mean yet more traction for PPC results, and less visibility on organic results potentially resulting in the following scenario
- More advertisers using PPC as organic visibility is being throttled
- Competition within both PPC and SEO significantly increasing as the battle for no1 increases significantly organically and the increased competition means CPC etc are going to be significantly tested
- Differentiation between SEO and PPC diminishing further
- Advertisers utilising more personalisation factors to try and influence eyefall where possible
Now, my first reaction was to think "hold on, no matter how much space paid advertisements may encroach upon a search engine results page (SERP), won't most users still ignore them simply because they're ads?"

I would argue that the descriptive text being added to the links in these ad spots is the differentiator, rather than the surface area that the overall ad takes up (e.g. if they'd simply enlarged the font and tinted the background, it seems unlikely to me that click-through counts would increase proportionally with SERP real estate). And just because organic results are being squeezed out of the SERP doesn't mean that users wouldn't simply become accustomed to hunting more diligently for them.
There are points later on in this article that I do agree with - namely that PPC should be carefully targeted, and to extrapolate from the given claims, I infer that paid ads will become increasingly eye-catching, via thumbnails of still and video images, because it's been well established that the eye is drawn to such objects.

Now, in light of the above, let us examine an article about Google+ that clearly demonstrates how a successful corporate presence can immensely impact what its followers see in their Google personalized (search plus your world - or SPYW) search results. Its example is how an H&M fan on G+, upon using Google while logged in to search for information about soccer star David Beckham, is shown significantly more H&M content concerning Beckham's collaboration with H&M, along with H&M related content which intersects with Beckham.

The inclusion of branded content "organically", then, is precisely what PPC is striving to achieve with its descriptive texts and relevant links. Success in amassing G+ followers equates to a significant increase in personalized result impressions. Add to this a strong, visually oriented content campaign, and it seems that Google is providing an irresistible incentive for businesses not just to create G+ presences, but to work to become as popular as they can, in order to leverage SPYW.

Saturday, January 28, 2012

Pinterested? A(nother) primer

Since joining a few weeks back, I've seen quite a few blog posts and articles (such as this one) crop up about how best to use Pinterest, which I would succinctly describe as a visual social bookmarking service. Although it's still in invitation-only mode (if you'd like an invitation, feel free to contact me for one), it allows users to:

  • Create collections of bookmarks ("boards").
  • Assign each board a category, which others can then search for and browse through.
  • Make boards either solely editable by oneself, or open to contributions from other users, whom one can specify by name.
  • Have boards "liked" via the Facebook plugin.
  • Add bookmarks represented by either images or videos, found anywhere online (publicly accessible) or via upload.
  • Share out a pin via Facebook and/or Twitter at the time of pinning.
  • Comment on any pinned items.
  • "Like" and "re-pin" items.
  • Follow all of, or a subset of, other users' boards.
  • Draw users' attention to pins by @-referring to them, as on Twitter.

So far, it seems to have a somewhat older and quite female demographic - and initially it was easiest to trawl through Wikipedia to populate my favourite foods board, though I've been adding more boards since then.

Truth be told, I haven't fully taken advantage of all its features. Even so, it's clear that there would be many more ways to benefit from it, such as what's listed in this "creative use" suggestion list.

Here's my "to try" list:

  • Hack the URL here:[Web site URL here]/ to find all pins that have been stored already from that site.
  • Plan a vacation using others' input for accommodation, dishes, sights etc.
  • Compile a "must see" collection of film/plays/musicians etc.
  • Assign my own graphics to my blog posts, and collect permalinks to my blog entries to publish there.

For my professional interest, I've started a board for infographics related to SEO and SMM and such, which has resulted in the highest individual board following count of all my boards so far. 

And for fun, I've created a photography board that solely contains my handiwork, mainly courtesy of my DSLR-like point and shoot (for those wondering what I use), though if I really get the hang of using my smartphone camera, I may pin some of those, too, as they are automatically uploaded via G+. I use my eponymous hashtag on my photos to see if I can somewhat track how widely my images become re-pinned, although when a user does re-pin an item, s/he has the option of editing the associated caption.
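As an aside on that tracking idea: pulling hashtags out of a caption is simple enough to sketch. This is a hypothetical helper for illustration, not a feature Pinterest itself provides:

```javascript
// Collect all #hashtags found in a pin caption.
function extractHashtags(caption) {
  return caption.match(/#\w+/g) || []; // empty array when no tags present
}
```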

Upon first authenticating (with my Facebook credentials), I found that I was instantly following over 60 Facebook friends who had likewise linked their accounts with Facebook. I'm also easily able to invite any other Facebook friends who have yet to join, but strangely, it doesn't (yet) have similar support for Twitter despite its integration as a publishing medium. Also, I've found browser-based differences in reliability (e.g. Chrome won't let me pin videos), but its bookmarklet function is extremely convenient to use, and it's easy to convert the time-sink of random web-surfing into an exercise to accrue Pinterest pins.

Wednesday, January 18, 2012

SOPA, PIPA: aka explaining today's site blackouts

If you haven't read about the Stop Online Piracy Act (SOPA) or the Senate version, PIPA, today's blackouts (of prominent sites including Wikipedia) may have surprised you.

Courtesy of the Oatmeal, which is also blacked out today, here's an animated graphic that humourously (and effectively) demonstrates why this legislation should be stopped:

For a more serious (but concise) look, here's an infographic about SOPA.

Finally, from today, Forbes' interview with Rep. Jared Polis (D-CO) about SOPA and why he opposes it.

Saturday, January 14, 2012

Thoughts on IFTTT

Thanks to Google+, I first learned about a service called IFTTT ("if this then that").  They provide a very simple interface where the registered user can set up tasks. Each task consists of selecting a channel (such as Craigslist, Delicious, Instagram and many other social utilities), where a trigger event from said channel results in an action on a target channel. For instance, one can set up an email to be sent to one's account when the local forecast calls for snow. Or in my case, I've set up a task that tweets a customized message of thanks when I'm re-tweeted or followed.
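The trigger-then-action model is simple enough to sketch; the event shape and the "thank on retweet" task below are invented for illustration, not IFTTT's actual internals:

```javascript
// An IFTTT-style task: when the trigger predicate matches an incoming
// event, run the action; otherwise do nothing (return null).
function makeTask(trigger, action) {
  return function (event) {
    return trigger(event) ? action(event) : null;
  };
}

// e.g. a task resembling my "thank whoever re-tweets me" setup:
var thankOnRetweet = makeTask(
  function (e) { return e.type === "retweet"; },
  function (e) { return "Thanks for the RT, @" + e.user + "!"; }
);
```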

Possibly the most powerful channel that's available on IFTTT is the "Feed". Any RSS feed URL can be used as a trigger. This means that I can now consider leaving networkedblogs, on which I currently rely to syndicate new blog entry notices to Facebook and Twitter. I'd also like to review all my feed subscriptions, and see what else I'd like to automate.

Thinking along those lines, I especially appreciate that many of the channels have filterable action triggers, such as keyword or hashtag values that one can specify. That way, I could ensure that I always see new content about certain topics from specific channels, be they in the form of public bookmarks on Delicious or results from Twitter searches.

I just need to take the time to set it up. Preliminary testing has proven that it works well. However, not every available "recipe" (crowd-sourced configurations that anyone can activate for themselves) is a good idea: case in point, the service is not smart enough to discern whether a tweet is spam or not, so thanking everyone who @mentions you, while tempting to implement, could inadvertently help reward spammers.

About Mayo

Professional: As "Senior Enterprise SEO Strategist" in IBM's Digital Marketing division, I provide consulting and training services for both internal and external clients. Formerly I was involved in Natural Language Processing, software localization, quality assurance and documentation authoring.
Personal: INTJ Nikkei Nisei ex-patriated Canadian who takes photographs and enjoys Baroque through late Classical music. The G+ page shares some of the "best of" photos.