Thursday, December 20, 2012

Darden/Kors/AMD/NKE

Darden already pre-warned a couple of weeks back, when its price was dumped by 10%; it finally announced the actual results this morning --- investors acted as if it were news and drove it down by another ~4%. It is all about manipulation of regular investors by the MMs --- look at CHP's chart --- the MMs are running exactly the same route.

KORS --- being accumulated; watch volume; get in when it is low

AMD --- another round of shorting

NKE --- >= 110

Tuesday, December 18, 2012

Points

UP - volume, short ratio

DOWN - volume, short ratio

Saturday, December 1, 2012

Dead PC, really?

Being an AMD owner, I have been watching the PC market very closely. Is it really true that the PC, and by extension AMD (and maybe INTC), is dead? From what I have observed, a different opinion comes to mind --- I think there will be room, if not a lot, to grow back up. Why?

First of all, what do tablets actually replace? Laptops, right? No one will seriously consider a tablet for serious computing or processing work. For example, my son has never liked typing up his essays on his Android tablet! So, to be more specific, tablets replaced the "toy" or gaming portion of a laptop. Any work that requires typing still needs a laptop.

Second, AMD is trying to raise enough cash to expand into other product areas. Once it does, it should be doing much better.

TA

   1. Stock trends and trend lines
   Chart A is a typical uptrend chart. Connect the lows of the waves and you get the uptrend line. In an uptrend chart, pay attention to changes in trading volume: during the rising legs volume increases, and during the pullbacks volume shrinks. Each wave's high is higher than the previous high, and each low is higher than the previous low.
[Figure: stock trend lines]
   Chart B is a typical downtrend chart. Connect the highs of the waves and you get the downtrend line (resistance line). In a downtrend, volume shows nothing special, but each peak is lower than the previous peak and each trough is lower than the previous trough.
[Figure: trendless chart]
   Chart C is a trendless chart; you simply cannot tell what the stock's overall direction is. Volume has no particular character either. A trendless stock is usually not suitable for trading.
   Psychology: Someone asked an investment expert, "Why do stock prices rise?" He thought for a moment and said, "Because there are more buyers than sellers." Now everyone understands: a stock rises not because of a low price-to-earnings ratio, nor a high dividend, nor any other grand reason, but simply because buyers outnumber sellers. The P/E ratio and the dividend do influence investors' buy and sell decisions, but they only represent the past. The most important factor behind an investor's decision is the expectation of the future. A stock with a very high P/E ratio means the company has not earned much money in the past, but it does not mean it will not earn money in the future.
   Take an uptrend as an example. At the start of an uptrend, buyers must outnumber sellers, because in a trendless phase the buying and selling forces are roughly balanced. Suddenly many more buyers appear, and this shows up as increased trading volume. As the price rises, the first wave of buyers is in; some of them now have paper profits and start selling to take gains, and on the chart we see a pullback. The sellers at this point are, on the whole, not numerous, so we see volume shrink; otherwise this is not a normal uptrend. If the stock is genuinely attractive --- say the company has successfully developed a new product --- a second wave of buyers enters and repeats the first wave's process. On the chart we see each wave rise above the last; a stock always rises in waves.
   A stock's movement is a bit like pushing a stone ball uphill: pushing it up takes great effort, while it rolls downhill with little force. In a downtrend, buyers disappear, and even modest selling pressure pushes the price down. Although some people pick up bargains along the way, such rebounds during a decline are unreliable. On the chart each wave is lower than the last, and volume has no distinctive character.
   A trendless chart means the market has no particular opinion about the stock; it drifts aimlessly within a range, and the buying and selling forces are roughly balanced. So, friend, what do you think makes investors buy a stock? Wall Street has surveyed this: the main reason ordinary investors buy a stock is that the stock is going up! Do you see? Ordinary investors buy mainly not because the P/E ratio is low or the dividend is high, but because the stock is rising! Rising! Rising! Apart from the fact that the stock is rising, every other factor is secondary. That is why, once a stock begins an uptrend, it keeps making higher waves and does not stop right away. If you want to develop a feel for stock movement, you must keep this point firmly in mind.
   Can you guess why ordinary investors sell a stock? After reading the previous paragraph the conclusion should be obvious, and Wall Street surveys confirm it: the main reason investors sell a stock is that the stock is falling! Falling! Investors sell not because the P/E ratio is high or for any other reason, but because the stock is falling. That is why, once a downtrend begins, it does not stop right away.
   Now you can appreciate why stocks so often rise to absurd heights and fall to miserable depths. Remember the real reasons investors buy and sell, observe the market patiently, and you will soon find that stock movements follow traceable patterns.
   2. Support lines and resistance lines
   When a stock fluctuates within a certain range, connecting the highs gives the resistance line and connecting the lows gives the support line. Taken literally, when the price rises to the resistance line it meets heavy resistance and has trouble rising further, i.e., many sellers appear; when the price falls to the support line it finds many buyers and has trouble falling further.
[Figure: support and resistance lines]
   Psychology: Walk into a trading hall; haven't you often heard things like "I'll buy this stock when it drops to 10 yuan" or "I'll sell this stock when it rises to 15 yuan"? The answer is yes, because I hear them all the time too. Why do ordinary investors decide a stock is worth buying at 10 yuan and should be sold at 15? This too comes from everyday experience. A shrewd shopper usually knows the lowest price a certain piece of clothing sells for; when the clothes sell at that price everyone rushes to buy, and once the tag rises to a certain level nobody wants them. Those two levels can be called the clothes' support price and resistance price.
   In the stock market, if most of the investors trading a stock believe 10 yuan is its lowest price, then once the price falls to that level many of them buy, and the price naturally cannot fall further; on the chart we see the support line. The resistance line works the same way. If a style of clothing trades in the 10-15 yuan range, imagine how the clothing dealers do business.
   When the clothes are priced at 10 yuan, buyers think they are cheap and come in to buy, while the sellers feel the price is too low and will not sell for less. At 15 yuan, buyers think the price is high and will not buy; the sellers would love an even higher price, but with no buyers there is nothing they can do. So at 10 yuan buyers outnumber sellers and the price starts to rise, while at 15 yuan sellers outnumber buyers and the price can only fall.
   Sooner or later, someone takes a different view of this price range and decides the clothes are too expensive or too cheap. Whether that is one big player or a group of small dealers, their actions tip the balance between buyers and sellers, and if their force is large enough they trigger a chain reaction. Whether they are dealers already trading or speculators watching from the sidelines, their actions will change the 10-15 yuan trading range. If the new balance favors the buyers, it attracts new buyers into the market and brings new buying pressure, while the sellers, expecting higher prices, hold back and reduce the selling pressure further. The result is that the price of the clothes breaks above 15 yuan. As the price climbs further, the temptation to sell grows, and the price settles into a new equilibrium range. This process is the breakout of a resistance or support price; in stocks, it is the breakout of a resistance or support line.
   It should be pointed out that once a resistance line is broken it becomes the new support line, and likewise a support line, once broken, becomes the new resistance line. Take the support line as an example: near the support line enough buyers appear and the sellers disappear, so the price cannot break below it. After a few round trips, the market forms the notion that this is the "lowest price". Then, suddenly, heavier selling pressure appears and the price breaks below the support line; now every buyer who believed the support line marked the "lowest price" is losing money. Some of them may sell to stop their losses; others stubbornly stick to their original view and believe the price will soon rebound. Either way, the market's old notion of the stock's "floor price" has been shattered; the market has "betrayed" them.
   Now suppose the price climbs back to the old support line. How do you think those original investors will react? The ones who have not yet stopped out will thank heaven for the chance to get out whole; the losing stretch after the price broke the support line cost them sleep and appetite, and now that they finally have a chance to break even or book a small profit, they will hurry to sell and end the nightmare.
   Then consider those who did stop out. They originally bought at the "floor price" and got burned; today the price returns to that level, but the memory of being burned is fresh, and most of them will not dare to re-enter here. So we see selling pressure increase while buying power shrinks. That is why a support line, once broken, becomes the new resistance line. The reasoning for why a broken resistance line becomes a support line is similar; readers can work out the mechanism for themselves.
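
   As a rough illustration (my own sketch, not part of the original text), one simple way to turn "connect the highs, connect the lows" into code is to take the highest high of a recent range as resistance and the lowest low as support, and to flag a close outside that range as a breakout:

      # Hypothetical sketch: flat support/resistance levels from a recent trading range.
      def support_resistance(closes, lookback=30):
          window = closes[-lookback:]
          return min(window), max(window)          # (support, resistance)

      def breakout(closes, lookback=30):
          support, resistance = support_resistance(closes[:-1], lookback)
          last = closes[-1]
          if last > resistance:
              return "breakout above resistance"   # old resistance may now act as support
          if last < support:
              return "breakdown below support"     # old support may now act as resistance
          return "still inside the range"

      prices = [10.2, 10.8, 11.5, 12.9, 14.1, 14.8, 13.9, 12.5, 11.1, 10.4, 15.3]
      print(breakout(prices, lookback=10))         # -> breakout above resistance
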
   3. Double tops and head-and-shoulders
   Chart A is a typical double-top chart, and Chart B is a typical head-and-shoulders chart. Both are patterns you see all the time in trading.
[Figure: double top and head-and-shoulders]
   Psychology: Take the double top as an example. Its defining feature is the two highs. A reminder to readers: which two highs you pick depends on the time span; a daily high and a yearly high are obviously completely different things, but the interpretation is the same. As the price rises, buyers begin to wonder whether it can exceed the previous high, and sellers watch whether that high will, as it did last time, bring in selling pressure and stall the advance. Simply put, market participants are watching whether this time will repeat the last experience. Last time the price rose to this point and the balance of buying and selling reversed; will the same thing happen again? There are only two outcomes: the price either clears the previous high or it does not. In a double top, because the price fails to clear the previous high, the market's view of the price changes and holders feel uneasy keeping the stock near this level; in the market you see the price gradually slide. But if buying power does not fade and the price pushes through the previous high, we are back to the uptrend chart. The head-and-shoulders pattern follows similar logic; readers should imagine for themselves how investors' mindsets evolve through the process. A head-and-shoulders can be treated as a variant of the double top, and both patterns can also be read upside down.
   If the regular double top gives you a sell signal, the inverted double top gives you a buy signal. Behind these charts lies the psychological shift in how investors value the stock. You have to feel it for yourself: if you were a participant in this market, what would you think, and what would you do? In this way you will gradually develop an intuition for when to enter and when to exit.
   4. Moving averages
   The main purpose of a moving average is to judge a stock's trend.
   Stock prices tend to move in jumps; the moving average smooths the jumps into a flatter curve.
   There are many ways to compute a moving average; the most common uses the closing price. For example, to compute a ten-day average, add up the closing prices of the past ten days and divide by ten. Each new day, add the latest closing price to the sum and subtract the closing price from eleven days back, keeping the divisor unchanged, and you have the newest average; connecting the averages gives the moving average line.
   The shape of a moving average depends on the number of days chosen: the more days, the smoother its turns.
   My own habit is to use the 200-day moving average to gauge a stock's long-term trend and the 50-day average for the medium-term trend. I rarely look at averages shorter than 50 days, because I find their reference value low; for a stock's short-term direction I focus on price and volume.
   I normally do not buy a stock trading below its 200-day moving average, except for short-term trades.
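
   Here is a minimal sketch (my own illustration, not from the original text) of the rolling calculation described above: keep a running sum of the last N closes, and each day add the newest close and drop the one that falls out of the window.

      def moving_average(closes, n=10):
          """Simple moving average of closing prices, updated incrementally."""
          averages = []
          running_sum = 0.0
          for i, close in enumerate(closes):
              running_sum += close
              if i >= n:
                  running_sum -= closes[i - n]   # drop the close from n days ago
              if i >= n - 1:
                  averages.append(running_sum / n)
          return averages

      closes = [10, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17, 18]
      print(moving_average(closes, n=10))   # first value is the average of days 1-10
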
   5. Other patterns
   The technical patterns I watch day to day are just the four above. But since this section is titled the basics of technical analysis, I have to at least mention the other patterns, or the title would not fit.
   The typical technical analysis book also covers triangles, channels, flags and so on. Unfortunately, my own experience says they have little practical value. That they are of no practical use to me does not mean they are useless to everyone else; I suggest serious traders find books on the subject and study them. I am limiting this book to the knowledge I have personally proven most useful, and I make no attempt to be encyclopedic; I ask the reader's forgiveness.
   Friends with charting software will often see computer-calculated buy/sell indicators such as MACD, Williams %R and so on; there are twenty or thirty popular ones. In my second year of learning to trade I spent a great deal of time studying these indicators and was badly burned. Only after paying a lot of tuition did I understand that they all suffer from "dying on exposure". That is not to say the indicators are wrong; their inventors usually have brilliant track records. But imagine: if everyone traded on the buy and sell signals these indicators give, what would the result be? I thought I was standing on the shoulders of giants, and I fell off and hurt myself badly! The three chart patterns I use were not invented by me either, but in practice I have come to understand the psychology behind them, and since human nature does not change easily, they keep working. I hope they do not "die on exposure" because of this book. Of course, I believe that is impossible --- how could human nature change that easily? Reasonable explanations of the crowd psychology behind the other patterns await further study by the experts.
   6. Reading the charts together
   Chart A: Combining the trend line with the support and resistance lines, the point slightly below the support line is the sell point. Once the support line is broken, the uptrend is over.
   Chart B: Similar to Chart A, but instead of a resistance line we see a head-and-shoulders here. The reasoning is the same as for Chart A.
[Figure: basics of stock technical analysis]
   Chart C: Turn Chart A upside down and we have the most common ideal buy point. Remember that if this is the start of an uptrend, volume usually increases.
   Chart D: This is the kind of stock movement chart we see all the time in trading.
[Figure: basics of stock technical analysis]

Sunday, November 11, 2012

Cupertino

Sitting at the corner of Bollinger and De Anza Blvd, sipping Starbucks coffee while the cars fly by with honks and squeaking sounds every now and then, I am pondering - why are real estate prices so high here?

1) schools - maybe; but every parent in their right mind knows that kids excel not because of the school, but because of the parents' attention and willingness to pay the bills; "peer pressure"? Maybe, but it can grow so large that the effect is just the opposite --- a dilemma

2) the neighbors? Could be. When everyone else is making millions a year through all sorts of hard work and investment, you are going to feel pressured and copy what the neighbors are doing, especially in the Asian community. For that purpose, why not move to PA or Saratoga? More rich neighbors, more ideas ... or maybe it is a matter of fitting in?

3) convenience - I think this may be the best reason to move into this neighborhood. It is close to a lot of amenities, including the after-school ECs the kids do on weekends and even on weekday evenings .... a lot of time is saved if they can find such a program around the corner from their house; besides, the shopping is close by, if that matters ... but to me, nowadays, shopping is becoming less and less a part of our family life --- we can get by shopping at Trader Joe's once a week; not a big deal even if I have to drive a little ...

Will the RE price drop? maybe not.

Thursday, November 8, 2012

Election is over

with the DOW off 300+ points

Now, it is all back to the old days ....

Healthcare? High-tech?

SPN - exit when it is right
STEC - exit when no hope

Monday, November 5, 2012

Kona

KONA: small institutional ownership
back to favor to 2006 level
good growth

Picking Methods

Key factors:

- do NOT follow the fashion!
- Study, study, and study .... pick the diamond from a pile of rocks
- LOW P/E, HIGH E/S


Thursday, November 1, 2012

V, Ma, AXP

AXP is the past

MA has better earnings than V

MA has more cash than V

11/01/2012

Really good day --- maybe because it is close to voting day? I don't trust the jobs data --- too many IT jobs are being cut, plus the UBS 10k ...

Finished the book http://ishare.iask.sina.com.cn/f/34417256.html?w=MTgwMjU3MTg1Nw%3D%3D.

A top genius from BD

Pretty good writing skill

Pretty successful player in bonds

Pretty good deep-dive of Wall Street, esp. several analysis of the 2008 Wall Street problems....

The only thing that needs more thought is stock trading --- he didn't seem to have much experience with it

Monday, October 29, 2012

Thoughts on couple symbols

STMP - may have a way to go. Back to 12-month high? Maybe.

V - depends on the earnings coming out. May pick up some if it goes down

CROX - watch very carefully! They need some new ideas to get more growth. If not, dump it when it goes back close to the purchase price

AMD - DO NOT add in more. Treat it as "dead money"


Tuesday, September 18, 2012

Monday, February 13, 2012

How to Monetize Google's App Engine?

This is a two-fold million-dollar question,

First of all, how does Google make money from it?
 (somehow, this kept reminding me of Sun's Java. Hopefully history has taught a lesson to the tech people :-).

On the other hand, would any outsiders profit from this offering? I believe there is a way.

Friday, February 10, 2012

How to Design/Evaluate a Product


  • Who is the user?
  • What are the customers’ goals? (diagram: http://www.kintya.com/.shared/image.html?/photos/uncategorized/2008/08/16/pmdesigntemplate.png)
  • What are the business goals?
  • What are the gaps between existing solutions and the customer’s ideal solution?
  • What are the different product alternatives?

    This Answers my puzzle about Google Finance

    Maybe Google PM can take a look at this article.

    I tried to switch to Google Finance. But, somehow, I stayed with Yahoo Finance even though I have switched almost all other services to Google.com. It could just be human habit at work. The reasons discussed in the above article may also contribute to it.

    Thursday, February 9, 2012

    TCP Congestion Control mechanisms

    1, slow start -  here.

    2, congestion avoidance - here.

    3, fast retransmit -  when three or more duplicate ACKs are received, the sender does not even wait for a retransmission timer to expire before retransmitting the segment (as indicated by the position of the duplicate ACK in the byte stream). This process is called the Fast Retransmit algorithm.


    4, fast recovery -  the TCP sender has implicit knowledge that there is data still flowing to the receiver. Rather than starting at a window of one segment as in Slow Start mode, the sender resumes transmission with a larger window, incrementing as if in Congestion Avoidance mode. This allows for higher throughput under conditions of only moderate congestion.
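
    As a rough illustration (my own sketch with invented names, the window counted in segments, and many details omitted; not taken from any particular stack), here is how the four mechanisms above adjust the congestion window in a Reno-style sender:

        # Hypothetical, simplified Reno-style congestion control sketch (cwnd in MSS units).
        class RenoSender:
            def __init__(self):
                self.cwnd = 1.0        # slow start begins with one segment
                self.ssthresh = 64.0   # arbitrary initial threshold
                self.dup_acks = 0

            def on_new_ack(self):
                self.dup_acks = 0
                if self.cwnd < self.ssthresh:
                    self.cwnd += 1.0               # slow start: roughly doubles per RTT
                else:
                    self.cwnd += 1.0 / self.cwnd   # congestion avoidance: ~1 MSS per RTT

            def on_dup_ack(self):
                self.dup_acks += 1
                if self.dup_acks == 3:             # fast retransmit: resend the lost segment now
                    self.ssthresh = max(self.cwnd / 2, 2.0)
                    self.cwnd = self.ssthresh + 3  # fast recovery: keep data flowing
                    # the missing segment would be retransmitted here

            def on_timeout(self):
                self.ssthresh = max(self.cwnd / 2, 2.0)
                self.cwnd = 1.0                    # fall back to slow start
                self.dup_acks = 0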



    Write a regular expression which matches email address

    \w+@\w+\.\w{2,4}

    Or

    ^(([A-Za-z0-9]+_+)|([A-Za-z0-9]+\-+)|([A-Za-z0-9]+\.+)|([A-Za-z0-9]+\++))*[A-Za-z0-9]+@((\w+\-+)|(\w+\.))*\w{1,63}\.[a-zA-Z]{2,6} 
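
    For what it's worth, a small usage sketch (my own illustration, not RFC-complete) for testing candidate addresses against a pattern of this kind:

        import re

        # Rough pattern in the spirit of the simple answer above; real-world validation is harder.
        EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)*\.[A-Za-z]{2,6}")

        for addr in ["john.doe@example.com", "not-an-email", "a@b.co"]:
            print(addr, bool(EMAIL_RE.fullmatch(addr)))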

    Wednesday, February 8, 2012

    Definition of Success

    "... the meaning of success has also changed for most people. No longer do people think of success in terms only in the vertical terms (for example in terms of promotion). Increasingly, people define success in their own terms, measured against their own particular set of goals and values in life. We call this psychological success. The good thing about success from the individuals point of view is while there is only one way to achieve vertical success (that of moving up), there are an infinite variety of ways of achieving psychological success."

    - From Allan R. Cohen's book “The Portable MBA in Management”

    What Does the Datacenter Want?

    1, Accelerated Business Performance;

    2, Optimized Asset Utilization;

    3, Lower System Acquisition and Operation Costs;

    4, Reduced IT Complexity

    Factors to Consider when Choosing a TOR Switch


    - Throughput (Gbps)
    - Forwarding Rate (Mpps)
    - Latency
    - Buffer
    - Power
    - Stacking Technology vs VC
    - Pricing

    Tuesday, February 7, 2012

    Anatomy of a Journaling File System

    Figure 1. A typical journaling file system

    Note: metadata refers to the managing structures for data on a disk. Metadata represents file creation and removal, directory creation and removal, growing a file, truncating a file, and so on. In Google's case, the metadata must contain attributes such as DC, Rack, Slot, Tablet, etc. Of course, some of them are located in the GFS master, and some reside in the modified Linux file system, whatever it is.

    Monday, February 6, 2012

    PAE Primer plus Linux Src Code


    As RAM increasingly becomes a commodity, prices drop and computer users are able to buy more. 32-bit architectures face certain limitations in accessing these growing amounts of RAM. To better understand the problem and the various solutions, we begin with an overview of Linux memory management. Understanding how basic memory management works, we are better able to define the problem, and finally to review the various solutions.
    This article was written by examining the Linux 2.6 kernel source code for the x86 architecture types.
    ===========================================================================
    Overview of Linux memory management
    32-bit architectures can reference 4 GB of physical memory (2^32). Processors that have an MMU (Memory Management Unit) support the concept of virtual memory: page tables are set up by the kernel which map "virtual addresses" to "physical addresses"; this basically means that each process can access 4 GB of memory, thinking it's the only process running on the machine (much like multi-tasking, in which each process is made to think that it's the only process executing on a CPU).
    The virtual address to physical address mappings are done by the kernel. When a new process is "fork()"ed, the kernel creates a new set of page tables for the process. The addresses referenced within a process in user-space are virtual addresses. They do not necessarily map directly to the same physical address. The virtual address is passed to the MMU (Memory Management Unit of the processor) which converts it to the proper physical address based on the tables set up by the kernel. Hence, two processes can refer to memory address 0x08329, but they would refer to two different locations in memory.
    The Linux kernel splits the 4 GB virtual address space of a process in two parts: 3 GB and 1 GB. The lower 3 GB of the process virtual address space is accessible as the user-space virtual addresses and the upper 1 GB space is reserved for the kernel virtual addresses. This is true for all processes.
                 
      +----------+ 4 GB
      |          |
      |  Kernel  |
      |          |
      | Virtual  |          +----------+
      |          |          |          |
      |  Space   |          |   High   |
      |          |          |  Memory  |
      |  (1 GB)  |          | (unused) |
      |          |          |          |
      +----------+ 3 GB     +----------+ 1 GB
      |          |          |          |
      |          |          |  Kernel  |
      |          |          |          |
      |User-space|          | Physical |
      |          |          |          |
      | Virtual  |          |  Space   |
      |          |          |          |
      |  Space   |          +----------+ 0 GB
      |          |
      |  (3 GB)  |            Physical
      |          |             Memory
      |          |
      |          |
      +----------+ 0 GB

        Virtual
        Memory
    
    The kernel virtual area (3 - 4 GB address space) maps to the first 1 GB of physical RAM. The 3 GB addressable RAM available to each process is mapped to the available physical RAM.
    The Problem
    So, the basic problem here is, the kernel can just address 1 GB of virtual addresses, which can translate to a maximum of 1 GB of physical memory. This is because the kernel directly maps all available kernel virtual space addresses to the available physical memory.
    Solutions
    There are some solutions which address this problem:
    1. 2G / 2G, 1G / 3G split
    2. HIGHMEM solution for using up to 4 GB of memory
    3. HIGHMEM solution for using up to 64 GB of memory
    1. 2G / 2G, 1G / 3G split
    Instead of splitting the virtual address space the traditional way of 3G / 1G (3 GB for user-space, 1 GB for kernel space), third-party patches exist to split the virtual address space 2G / 2G or 1G / 3G. The 1G / 3G split is a bit extreme in that you can map up to 3 GB of physical memory, but user-space applications cannot grow beyond 1 GB. It could work for simple applications; but if one has more than 3 GB of physical RAM, he / she won't run simple applications on it, right?
    The 2G / 2G split seems to be a balanced approach to using RAM more than 1 GB without using the HIGHMEM patches. However, server applications like databases always want as much virtual addressing space as possible; so this approach may not work in those scenarios.
    There's a patch for 2.4.23 that includes a config-time option of selecting the user / kernel split values by Andrea Arcangeli. It is available at his kernel page. It's a simple patch and making it work on 2.6 should not be too difficult.
    Before looking at solutions 2 & 3, let's take a look at some more Linux Memory Management issues.
    Zones
    In Linux, the memory available from all banks is classified into "nodes". These nodes indicate how much memory each bank has. This classification is mainly useful for NUMA architectures, but it's also used for UMA architectures, where the number of nodes is just 1.
    Memory in each node is divided into "zones". The zones currently defined are ZONE_DMA, ZONE_NORMAL and ZONE_HIGHMEM.
    ZONE_DMA is used by some devices for data transfer and is mapped in the lower physical memory range (up to 16 MB).
    Memory in the ZONE_NORMAL region is mapped by the kernel in the upper region of the linear address space. Most operations can only take place in ZONE_NORMAL; so this is the most performance critical zone. ZONE_NORMAL goes from 16 MB to 896 MB.
    To address memory from 1 GB onwards, the kernel has to map pages from high memory into ZONE_NORMAL.
    Some area of memory is reserved for storing several kernel data structures that store information about the memory map and page tables. This on x86 is 128 MB. Hence, of the 1 GB physical memory the kernel can access, 128MB is reserved. This means that the kernel virtual address in this 128 MB is not mapped to physical memory. This leaves a maximum of 896 MB for ZONE_NORMAL. So, even if one has 1 GB of physical RAM, just 896 MB will be actually available.
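
    A quick back-of-the-envelope check of the numbers above (my own illustration):

        # x86 with the default 3G/1G split: how much RAM the kernel can map directly.
        GB = 1024 ** 3
        MB = 1024 ** 2

        kernel_virtual_window = 1 * GB     # top 1 GB of the 4 GB virtual address space
        reserved = 128 * MB                # the reserved region described in the text
        zone_normal_limit = kernel_virtual_window - reserved

        print(zone_normal_limit // MB)     # 896 -> ZONE_NORMAL ends at 896 MB
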
    Back to the solutions:
    2. HIGHMEM solution for using up to 4 GB of memory
    Since Linux can't access memory which hasn't been directly mapped into its address space, to use memory > 1 GB, the physical pages have to be mapped in the kernel virtual address space first. This means that the pages in ZONE_HIGHMEM have to be mapped in ZONE_NORMAL before they can be accessed.
    The reserved space which we talked about earlier (in case of x86, 128 MB) has an area in which pages from high memory are mapped into the kernel address space.
    To create a permanent mapping, the "kmap" function is used. Since this function may sleep, it may not be used in interrupt context. Since the number of permanent mappings is limited (if not, we could've directly mapped all the high memory in the address space), pages mapped this way should be "kunmap"ped when no longer needed.
    Temporary mappings can be created via "kmap_atomic". This function doesn't block, so it can be used in interrupt context. "kunmap_atomic" un-maps the mapped high memory page. A temporary mapping is only valid until the next temporary mapping. However, since the mapping and un-mapping functions also disable / enable preemption, it's a bug to not kunmap_atomic a page mapped via kmap_atomic.
    3. HIGHMEM solution for using 64 GB of memory
    This is enabled via the PAE (Physical Address Extension) extension of the PentiumPro processors. PAE addresses the 4 GB physical memory limitation and is seen as Intel's answer to AMD 64-bit and AMD x86-64. PAE allows processors to access physical memory up to 64 GB (36 bits of address bus). However, since the virtual address space is just 32 bits wide, each process can't grow beyond 4 GB. The mechanism used to access memory from 4 GB to 64 GB is essentially the same as that of accessing the 1 GB - 4 GB RAM via the HIGHMEM solution discussed above.
    Should I enable CONFIG_HIGHMEM for my 1 GB RAM system?
    It is advised to not enable CONFIG_HIGHMEM in the kernel to utilize the extra 128 MB you get for your 1 GB RAM system. I/O Devices cannot directly address high memory from PCI space, so bounce buffers have to be used. Plus the virtual memory management and paging costs come with extra mappings. 

    View the Linux source code: Lnx2.6.

    Google's Mentality


    Jedis build their own lightsabres 
                             (the MS Eat your own Dog Food)
    Parallelize Everything
    Distribute Everything (to atomic level if possible)
    Compress Everything (CPU cheaper than bandwidth)
    Secure Everything (you can never be too paranoid)
    Cache (almost) Everything
    Redundantize Everything (in triplicate usually)
    Latency is VERY evil

    Saturday, February 4, 2012

    Latency - 1

    Hardware latency mainly comes from,
        - pipeline instructions waiting to finish execution even though they have already reached the execute stage. The reason is that most microprocessors can only complete 1 or 2 instructions per clock cycle
        - memory load delays

    Microprocessor Charts

    Design styles diagram

    Friday, February 3, 2012

    Why need Cell Number?

    In this country, everyone is assigned an SSN at birth. It could serve as the ID, plus a small extra piece of information, for the devices she owns.

    What about privacy? Some ways of permutation, etc....

    Wednesday, February 1, 2012

    Disruption-Tolerant Network (DTN)



           A disruption-tolerant network (DTN) is a network designed so that temporary or intermittent communications problems, limitations and anomalies have the least possible adverse impact. There are several aspects to the effective design of a DTN, including: 
    • The use of fault-tolerant methods and technologies.
    • The quality of graceful degradation under adverse conditions or extreme traffic loads.
    • The ability to prevent or quickly recover from electronic attacks.
    • Ability to function with minimal latency even when routes are ill-defined or unreliable.
    Fault-tolerant systems are designed so that if a component fails or a network route becomes unusable, a backup component, procedure or route can immediately take its place without loss of service. At the software level, an interface allows the administrator to continuously monitor network traffic at multiple points and locate problems immediately. In hardware, fault tolerance is achieved by component and subsystem redundancy.
    Graceful degradation has always been important in large networks. One of the original motivations for the development of the Internet by the Advanced Research Projects Agency (ARPA) of the U.S. government was the desire for a large-scale communications network that could resist massive physical as well as electronic attacks including global nuclear war. In graceful degradation, a network or system continues working to some extent even when a large portion of it has been destroyed or rendered inoperative.
    Electronic attacks on networks can take the form of viruses, worms, Trojans, spyware and other destructive programs or code. Other common schemes include denial of service attacks and malicious transmission of bulk e-mail or spam with the intent of overwhelming network servers. In some instances, malicious hackers commit acts of identity theft against individual subscribers or groups of subscribers in an attempt to discourage network use. In a DTN, such attacks may not be entirely preventable, but their effects are minimized and problems are quickly resolved when they occur. Servers can be provided with antivirus software, and individual computers in the system can be protected by programs that detect and remove spyware.
    As networks evolve and their usage levels vary, routes can change, sometimes within seconds. This can cause temporary propagation delays and unacceptable latency. In some cases, data transmission is blocked altogether. Internet users may notice this as periods during which some Web sites take a long time to download or do not appear at all. In a DTN, the frequency of events of this sort is kept to a minimum.

    Open Storage Network

    Tuesday, January 31, 2012

    Can we do better on google.com?


    Google has so many products, but it still has a way to go to fully utilize them and nail down its competitors. Here is a short list:
    • Google Buzz failed to build the momentum to compete against Twitter, despite being shoved down our throats through Gmail. There was no community or value for users.
    • Google Wave failed by being pointlessly complicated, even for geeks like me. They built an API for extensions before getting the main product right.
    • 3rd party authentication with multiple identities (Gmail / Google Apps) is a pain for users, unlike using Twitter or Facebook which have one clear identity: myself.
    • Social review sites (like Yelp) focused on consumers are eating Google’s lunch in the business listing market. Compare Yelp’s social approach to Google Maps’s aggregator for the same place.
    • Different products are barely connected. Google Analytics, Google Adwords and Google Webmaster Tools seem to follow completely different UI guidelines and have overlapping ways of verifying you own a site.

    Key Elements in Data Mining

    1, Knowing which questions matter most, and where to find the answers.
    2, Signal detection. 
    3, "Right fit" analysis
    4, Visualization
    5, Analytics Automation
    6, Commitment and Culture

    Some Remaining Issues with Search Engines

    1, Flash Contents are not well indexed


    Adobe Flash and SEO

    At present there is no effective way to optimize Flash for search engines. The main problem is not that search engines cannot index Flash content; the real problem is that search engine spiders cannot identify the content structure. While in regular HTML you can structure keywords in a hierarchical way using semantic mark-up, in Flash this is quite difficult. Additionally it is difficult for search engines to determine what is visible and what is invisible to the end user. Since this could give way to “flash content spam”, search engines take the safe approach to give extremely low priority to flash indexed content.
    To make things worse, many designers make the mistake of burying text in images, which takes the indexing task from difficult to impossible. Maybe the worst possible error designers make is to bury links inside Flash objects (the spider will follow such links only conservatively). A Flash navigation bar is the best way to prevent search engines from indexing your web site completely. Make no mistake, spiders will be able to follow links in Flash objects, but the weight given to these links will be very low.

    Cloud Management Part 1

    I have been thinking that cloud management is hard, but necessary and marketable.

    Obviously, there have been similar mind-sets around. Arista Networks is one of them.

    Monday, January 30, 2012

    Wireless Networking

    This is a good blog, Wireless Blogs

    How to Find Storage Buyers?


    Finding Storage Management Contacts 

    Finding target prospect organizations is the first requirement. The second is finding appropriate contacts in
    those organizations. A list of management level contacts (VP, director or manager) whose sole responsibility is
    managing storage would be a pot of gold at the end of a rainbow. Even a comprehensive list of storage
    administrators is elusive.  Although it seems counterintuitive based on the importance and growth of storage, the fact is that at the management level there simply are not many contacts available. Only in the biggest of the big storage user organizations will there be a storage management level contact.

    At the technical level (ie. storage administrators), we know there are plenty of these folks in the market, but that
    contact data just isn’t to be found. Technical contacts in any area of IT have always been a tough market to
    track. Frankly, there’s just no money in it for database companies to try and track down technical contacts.
    Both Jigsaw and Netprospex maintain large online databases with a lot of contact depth for large organizations.
    A simple keyword search for “storage” in contacts at the CXO/VP/Director/Manager levels  comes up with 454 contacts in Jigsaw (about 25 million total contacts tracked) and 496 contacts in Netprospex (about 15 million total contacts tracked). This includes contacts at IT companies with storage in their titles as well as a handful of managers at self-storage facilities. This is not a barometer for the number of management contacts that are out there, but perhaps an indicator as to how elusive they are to find.

    Storage marketers are generally targeting contacts with ultimate responsibility for storage. In some medium and
    most large IT shops it is the operations/data center group that has direct responsibility for storage management.
    At the lower end of the market, where operations may be a small group, the CIO/Director of IT will be the most commonly available contact and the place to start.

    How to identify Storage Market segments?

    Two important aspects that cannot be ignored by anyone,

    1, price point;

    2, basic IT size characteristics

    Do we still need reliable file systems?

    What is the future of ZFS, etc.?

    Interesting reference here.

    Relevant Products from Zetta

    enterprise cloud requirements?

    List of challenges? SLA?

    Friday, January 27, 2012

    What Can I Do for Google?

    I have started to believe more and more that my product, Data Streamer, would be very useful for Google's video infrastructure, which today is slow, with music that sometimes pauses.

    Wednesday, January 25, 2012

    Google's coming challenges

    I read the news about Yahoo's revenue and connected it with Google's recent report. I had to ask myself this question:

    - is that it for the search era?

    Thinking more about it, the answer may not be that difficult, at least on the surface. Like every other sector, ad revenue is not unlimited. With a fixed-size pie, the more portions it is carved into, the smaller each portion will be. With Facebook joining in to share the pie, and MSFT aggressively fighting back, the slices for Yahoo and Google are bound to be impacted. This is a no-brainer.

    Now, what did Yahoo do? My wild guess is that they are seeking some new avenue of revenue. Otherwise, why would they pick someone with PayPal experience, and kick out the father of search, Jerry boy?

    For Google, which I personally have more interest in, what is coming? Well, Android + Motorola is one direction, and obviously they started way earlier. Google TV, honestly, is not very successful (my gut feeling is that Apple TV would surpass it, if Tim decides to go that route). Google office apps, etc., are obviously fighting head-to-head with MSFT. This may be a chance. The question is how to monetize these products; remember, MSFT makes tons of money off them while Google Docs is free (at the moment).

    Pondering: with all that cash in the bank, GOOG can do something else.

    Tuesday, January 24, 2012

    Monday, January 23, 2012

    Google Picasa

    A good link to give some introduction on Google Picasa:

    http://digital-photography-howto.com/googles-version-of-free-photo-sharing-picasa-web/

    How to Track A Website


    If you have a useful product or service (or even a content site), its utility is bound to attract an audience. However, your ability to retain and convert that audience into loyal customers or users depends on how well you use and optimize for the right metrics.
    There are hundreds of different ways you can increase retention and conversions, but before you do that, you have to figure out what metrics you should be trying to improve.
    Conversion Graph
    To that end, here’s a cheat sheet that will help you determine the most important metrics to track:
    1. Traffic Sources
      It is important to have a diverse number of sources for incoming traffic. The three primary source categories are:
      1. direct visitors – the ones that visit your site by directly typing your url in their browser address bar,
      2. search visitors – the ones that visit your site based on a search query, and
      3. referral visitors – the ones that visit your site because it was mentioned on another blog or site.
      All three sources are important but have varying levels of conversion, so you should calculate how much each traffic source is converting and deal with them individually.
    2. New/Unique Visitor Conversion
      The way a first-time visitor interacts with your site is very different from how a returning visitor interacts. To improve first-time visitors conversions you have to isolate it from the conversion rates of your loyal or returning customers and determine what they see when they visit the website for the first time and how you can improve that experience. Usability plays an important role in reducing the bounce rate for first timers.
    3. Return Visitor Conversion
      There are two questions you should be asking yourself. 1) Why did the person return, and 2) did the person convert the first time around, and if they didn’t, why not and how can you convert them the second time around. Keep in mind, even if someone didn’t convert as a new visitor, you made enough of an impression to get them to come back. Now that they have liked you enough to return, your goal is to isolate the return visitor conversion rate and figure out how to increase that.
    4. Interactions Per Visit
      Even if your visitors don’t convert, it is important to monitor their behavior on the site. What exactly are they doing, how can you get them to do more of it, and how can you influence this behavior into conversions? For example, what are your page view rates per unique visitors, what is the time spent, comments or reviews made, and so on. Each of these interactions is important, and your goal should be not only to increase these interactions (e.g. increase time spent on the site), but also figure out how you can leverage these increased interactions into increased conversions (which might be downloads, subscriptions, purchases, etc.).
    5. Value Per Visit
      The value of a visit is tied directly to the interactions per visit. You can calculate it simply as the total value created divided by the number of visits. Calculating value per visit is difficult because there are many intangibles involved that create value that is hard to define. For example, blog visitors create value every time they add a page view to your traffic (because of CPM advertising), but they also create an intangible value when they comment on your site. Similarly, visitors on e-commerce sites create value every time they purchase a product, but they also create a somewhat incalculable value when they leave a product review or spread word of mouth.
    6. Cost Per Conversion
      The corollary to value per visit, and one of the most important metrics, is cost per conversion (alternatively: lead generation costs or cost per referral). It doesn’t matter if you have high conversions and high value per visit if your costs are so prohibitive that your net income is zero or even negative. While trying to increase conversion, keep your costs per conversion and overall margins in mind.
    7. Bounce Rate
      Your initial goal when trying to increase all five of the metrics above is to minimize your visitor bounce rate. The Bounce rate is the rate at which new visitors visit your site and immediately click away without doing anything (very low time spent and no interactions). A high bounce rate can mean several things, including weak or irrelevant sources of traffic and landing pages that aren’t optimized for conversion (have a poor design, low usability or high load times). Bounce rates for e-commerce sites are often called abandonment rates, i.e., the rate at which people abandon their shopping cart without making a purchase. This is usually a result of an overly complicated checkout process, expired deals, forced cart additions (e.g. to see the actual price of the product, add to your cart), and so on.
    8. Entrance and Exit Pages
      Where do visitors enter your site, and where do they leave? This is important. Your bounce rates aren’t entirely derived from your home page. In many cases your final call to action or conversion may be on page 2 or 3 of a process. To maximize conversions you need to dive deeper into your exits and figure out at what stage in the process your visitors are exiting the site or abandoning their shopping cart, and optimize the process accordingly.
    Start monitoring all these metrics now, and next time we’ll tell you how to optimize each of them.
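
    As a small illustration (hypothetical numbers, my own sketch), most of the metrics above reduce to simple ratios:

        # Hypothetical traffic numbers, just to make the definitions concrete.
        visits = 10_000
        bounced = 4_200             # visitors who left without any interaction
        conversions = 300           # purchases / signups / downloads
        total_value = 15_000.00     # value created by those visits
        marketing_cost = 3_000.00   # cost of acquiring the traffic

        bounce_rate = bounced / visits                        # 0.42
        conversion_rate = conversions / visits                # 0.03
        value_per_visit = total_value / visits                # 1.50
        cost_per_conversion = marketing_cost / conversions    # 10.00

        print(bounce_rate, conversion_rate, value_per_visit, cost_per_conversion)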


    Friday, January 20, 2012

    Found my stuff on this list, No.18, "WebOS" ... HooRay!


    1. A cure for the disease of which the RIAA is a symptom. Something is broken when Sony and Universal are suing children. Actually, at least two things are broken: the software that file sharers use, and the record labels' business model. The current situation can't be the final answer. And what happened with music is now happening with movies. When the dust settles in 20 years, what will this world look like? What components of it could you start building now?
    The answer may be far afield. The answer for the music industry, for example, is probably to give up insisting on payment for recorded music and focus on licensing and live shows. But what happens to movies? Do they morph into games?
    2. Simplified browsing. There are a lot of cases where you'd trade some of the power of a web browser for greater simplicity. Grandparents and small children don't want the full web; they want to communicate and share pictures and look things up. What viable ideas lie undiscovered in the space between a digital photo frame and a computer running Firefox? If you built one now, who else would use it besides grandparents and small children?
    3. New news. As Marc Andreessen points out, newspapers are in trouble. The problem is not merely that they've been slow to adapt to the web. It's more serious than that: their problems are due to deep structural flaws that are exposed now that they have competitors. When the only sources of news were the wire services and a few big papers, it was enough to keep writing stories about how the president met with someone and they each said conventional things written in advance by their staffs. Readers were never that interested, but they were willing to consider this news when there were no alternatives.
    News will morph significantly in the more competitive environment of the web. So called "blogs" (because the old media call everything published online a "blog") like PerezHilton and TechCrunch are one sign of the future. News sites like Reddit and Digg are another. But these are just the beginning.
    4. Outsourced IT. In most companies the IT department is an expensive bottleneck. Getting them to make you a simple web form could take months. Enter Wufoo. Now if the marketing department wants to put a form on the web, they can do it themselves in 5 minutes. You can take practically anything users still depend on IT departments for and base a startup on it, and you will have the enormous force of their present dissatisfaction pushing you forward.
    5. Enterprise software 2.0. Enterprise software companies sell bad software for huge amounts of money. They get away with it for a variety of reasons that link together to form a sort of protective wall. But the software world is changing. I suspect that if you study different parts of the enterprise software business (not just what the software does, but more importantly, how it's sold) you'll find parts that could be picked off by startups.
    One way to start is to make things for smaller companies, because they can't afford the overpriced stuff made for big ones. They're also easier to sell to.
    6. More variants of CRM. This is a form of enterprise software, but I'm mentioning it explicitly because it seems like this area has such potential. CRM ("Customer Relationship Management") means all sorts of different things, but a lot of the current embodiments don't seem much more than mailing list managers. It should be possible to make interactions with customers much higher-res.
    7. Something your company needs that doesn't exist. Many of the best startups happened when someone needed something in their work, found it didn't exist, and quit to build it. This is vaguer than most of the other recipes here, but it may be the most valuable. You're working on something you know customers want, because you were the customer. And if it was something you needed at work, other people will too, and they'll be willing to pay for it.
    So if you're working for a big company and you want to strike out on your own, here's a recipe for an idea. Start this sentence: "We'd pay a lot if someone would just build a ..." Whatever you say next is probably a good product idea.
    8. Dating. Current dating sites are not the last word. Better ones will appear. But anyone who wants to start a dating startup has to answer two questions: in addition to the usual question about how you're going to approach dating differently, you have to answer the even more important question of how to overcome the huge chicken and egg problem every dating site faces. A site like Reddit is interesting when there are only 20 users. But no one wants to use a dating site with only 20 users—which of course becomes a self-perpetuating problem. So if you want to do a dating startup, don't focus on the novel take on dating that you're going to offer. That's the easy half. Focus on novel ways to get around the chicken and egg problem.
    9. Photo/video sharing services. A lot of the most popular sites on the web are for photo sharing. But the sites classified as social networks are also largely about photo sharing. As much as people like to share words (IM and email and blogging are "word sharing" apps), they probably like to share pictures more. It's less work and the results are usually more interesting. I think there is huge growth still to come. There may ultimately be 30 different subtypes of image/video sharing service, half of which remain to be discovered.
    10. Auctions. Online auctions have more potential than most people currently realize. Auctions seem boring now because EBay is doing a bad job, but is still powerful enough that they have a de facto monopoly. Result: stagnation. But I suspect EBay could now be attacked on its home territory, and that this territory would, in the hands of a successful invader, turn out to be more valuable than it currently appears. As with dating, however, a startup that wants to do this has to expend more effort on their strategy for cracking the monopoly than on how their auction site will work.
    11. Web Office apps. We're interested in funding anyone competing with Microsoft desktop software. Obviously this is a rich market, considering how much Microsoft makes from it. A startup that made a tenth as much would be very happy. And a startup that takes on such a project will be helped along by Microsoft itself, who between their increasingly bureaucratic culture and their desire to protect existing desktop revenues will probably do a bad job of building web-based Office variants themselves. Before you try to start a startup doing this, however, you should be prepared to explain why existing web-based Office alternatives haven't taken the world by storm, and how you're going to beat that.
    12. Fix advertising. Advertising could be made much better if it tried to please its audience, instead of treating them like victims who deserve x amount of abuse in return for whatever free site they're getting. It doesn't work anyway; audiences learn to tune out boring ads, no matter how loud they shout.
    What we have now is basically print and TV advertising translated to the web. The right answer will probably look very different. It might not even seem like advertising, by current standards. So the way to approach this problem is probably to start over from scratch: to think what the goal of advertising is, and ask how to do that using the new ingredients technology gives us. Probably the new answers exist already, in some early form that will only later be recognized as the replacement for traditional advertising.
    Bonus points if you can invent new forms of advertising whose effects are measurable, above all in sales.
    13. Online learning. US schools are often bad. A lot of parents realize it, and would be interested in ways for their kids to learn more. Till recently, schools, like newspapers, had geographical monopolies. But the web changes that. How can you teach kids now that you can reach them through the web? The possible answers are a lot more interesting than just putting books online.
    One route would be to start with test prep services, for which there's already demand, and then expand into teaching kids more than just how to score high on tests. Another would be to start with games and gradually make them more thoughtful. Another, particularly for younger kids, would be to let them learn by watching one another (anonymously) solve problems.
    14. Tools for measurement. Now that so much happens on computers connected to networks, it's possible to measure things we may not have realized we could. And there are some big problems that may be soluble if we can measure more. The most important of all is the defining flaw of large organizations: you can't tell who the most productive people are. A small company is measured directly by the market. But once an organization gets big enough that people in the interior are protected from market forces, politics starts to rule, instead of performance. An improvement of even a few percent in the ability to measure what actually happens in large organizations would have a huge impact on the world economy, and a startup that enabled it would be entitled to a cut.
    15. Off the shelf security. Services like ADT charge a fortune. Now that houses and their owners are both connected to networks practically all the time, a startup could stitch together alternatives out of cheap, existing hardware and services.
    16. A form of search that depends on design. Google doesn't have a lot of weaknesses. One of the biggest is that they have no sense of design. They do the next best thing, which is to keep things sparse. But if there were a kind of search that depended a lot on design, a startup might actually be able to beat Google at search. I don't know if there is, but if you do, we'd love to hear from you.
    17. New payment methods. There are almost certainly things whose growth is held back because there's no way to charge for them. And the people who could implement solutions don't realize how much demand there would be, precisely because this growth has been held back. So pretty much any new way of paying for things that's easier for some class of situations will turn out to have a bigger market than its inventors expected. Look at Paypal. (Warning: Regulated industry.)
    18. The WebOS. It probably won't be a literal translation of a client OS shifted to servers. But as applications migrate to servers, it seems possible there will be something that plays a central role like an OS does. We've already funded several startups that could be candidates. But this is a big prize, and there will probably be multiple winners.
    19. Application and/or data hosting. This is related to the preceding idea, but not identical. And again, while we've already funded several startups in this area, it's probably going to be big enough that it contains several rich markets.
    It may turn out that 4, 18, and 19 all have the same answer. Or rather, that there will be things that answer all three. But the way to find such a grand, overarching solution is probably not to approach it directly, but to start by solving smaller, specific problems, then gradually expand your scope. Start by writing Basic for the Altair.
    20. Shopping guides. Like news, shopping used to be constrained by geography. You went to your local store and chose from what they had. Now the space of possibilities is bewilderingly large, and people need help navigating it. If you already know what you want, Bountii can find you the best price. But how do you decide what you want? Hint: One answer is related to number 3.
    21. Finance software for individuals and small businesses. Intuit seems ripe for picking off. The difficulty is that they've got data connections with all the banks. That's hard for a small startup to match. But if you can start in a neighboring area and gradually expand into their territory, you could displace them.
    22. A web-based Excel/database hybrid. People often use Excel as a lightweight database. I suspect there's an opportunity to create the program such users wish existed, and that there are new things you could do if it were web-based. Like make it easier to get data into it, through forms or scraping.
    Don't make it feel like a database. That frightens people. The question to ask is: how much can I let people do without defining structure? You want the database equivalent of a language that makes it easy to keep data in linked lists. (Which means you probably want to write it in one.)
    23. More open alternatives to Wikipedia. Deletionists rule Wikipedia. Ironically, they're constrained by print-era thinking. What harm does it do if an online reference has a long tail of articles that are only interesting to a few people, so long as everyone can still find whatever they're looking for? There is room to do to Wikipedia what Wikipedia did to Britannica.
    24. A buffer against bad customer service. A lot of companies (to say nothing of government agencies) have appalling customer service. "Please stay on the line. Your call is important to us." Doesn't it make you cringe just to read that? Sometimes the UIs presented to customers are even deliberately difficult; some airlines deliberately make it hard to buy tickets using miles, for example. Maybe if you built a more user-friendly wrapper around common bad customer service experiences, people would pay to use it. Passport expediters are an encouraging example.
    25. A Craigslist competitor. Craigslist is ambivalent about being a business. This is both a strength and a weakness. If you focus on the areas where it's a weakness, you may find there are better ways to solve some of the problems Craigslist solves.
    26. Better video chat. Skype and Tokbox are just the beginning. There's going to be a lot of evolution in this area, especially on mobile devices.
    27. Hardware/software hybrids. Most hackers find hardware projects alarming. You have to deal with messy, expensive physical stuff. But Meraki shows what you can do if you're willing to venture even a little way into hardware. There's a lot of low-hanging fruit in hardware; you can often do dramatically new things by making comparatively small tweaks to existing stuff.
    Hardware is already mostly software. What I mean by a hardware/software hybrid is one in which software plays a very visible role. If you work on an idea of this type you'll tend to have the field to yourself, because most hackers are afraid of hardware, and most hardware companies can't write good software. (One reason your iPod isn't made by Sony is that Sony can't write iTunes.)
    28. Fixing email overload. A lot of people, including me, feel they get too much email. A solution would find a ready market. But the best solution may not be anything as obvious as a new mail reader.
    Related problem: Using your inbox as a to-do list. The solution is probably to acknowledge this rather than prevent it.
    29. Easy site builders for specific markets. Weebly is a good, general-purpose site builder. But there are a lot of markets that could use more specialized tools. What's the best way to make a web site if you're a real estate agent, or a restaurant, or a lawyer? There still don't seem to be canonical answers.
    Obviously the way to build this is to write a flexible site builder, then write layers on top to produce different variants. Hint: The key to making a site builder for end-users is to make software that lets people with no design ability produce things that look good—or at least professional.
    30. Startups for startups. The increasing number of startups is itself an opportunity for startups. We're one; TechCrunch is another. What other new things can you do?