This post is a follow-up to *Crunching Numbers in APL*, available for Kindle at Amazon.
Undaunted by my last foray into online analytical processing (OLAP), I sat down to code what I thought was OLAP. Workspace more_olap is here.
Should you load this workspace, you will find the following variables:
A fact in this exercise is the line item and amount columns from irdb.
irdb is made up of these columns:
      utl∆numberedArray ⊃ irdb[1;]
Category code
Category
Subcategory code
Subcategory
Line item
Fiscal year
Amount (millions of dollars)
The function olap∆buildFacts loads the irdb data and produces all of these variables. You should also consider olap∆buildVars, which will produce variables to use as indices of facts.
      olap∆buildVars cats subs yrs
      )vars cat_
cat_a cat_e cat_l
      )vars sub_
sub_b sub_cs sub_da sub_dfb sub_dl sub_i sub_lo sub_nn sub_o sub_oa sub_oe sub_ol sub_orcv sub_rcv sub_re sub_s sub_sol
      )vars yr_
yr_2009 yr_2010
We now have some variables to use as indices of our fact cube and can look at our cube:
      olap∆combineFacts facts[cat_e;yr_2010]
Paid-in capital                      11492
Retained earnings                    28793
Accumulated other comprehensive loss ¯3043
      ⍝ Or
      +/(olap∆combineFacts facts[cat_e;yr_2010])[;2]
37242
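The same slicing idea can be sketched outside APL. Here is a minimal Python sketch of the cube (the data structure and variable names are mine, not part of the workspace; only the equity figures for 2010 come from the session above):

```python
# Hypothetical miniature of the facts cube: one axis per dimension.
cats = ['a', 'l', 'e']            # asset, liability, equity category codes
yrs = [2009, 2010]
facts = [[[] for _ in yrs] for _ in cats]

# Index variables play the role of cat_e and yr_2010.
cat_e, yr_2010 = cats.index('e'), yrs.index(2010)
facts[cat_e][yr_2010] = [
    ('Paid-in capital', 11492),
    ('Retained earnings', 28793),
    ('Accumulated other comprehensive loss', -3043),
]

# Slice the cube, then total the amounts, like +/(...)[;2] above.
equity_2010 = facts[cat_e][yr_2010]
total = sum(amount for _, amount in equity_2010)  # 37242
```

The point of the cube is that a slice is a subscript, not a query: once the index variables exist, aggregation is a one-liner.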
I still haven’t concluded that it’s easier than this:
SELECT Line_item, Amount from irdb where Category_code = 'e' and Fiscal_year = 2010;
But I’m biased. While I’ve been writing APL code longer, I’ve spent more time writing SQL.
This is a simple exercise with simple data. It demonstrates what a data cube might look like and how to simplify slicing and dicing the data. There is no generalized code in this workspace, and therefore I must write a whole new workspace when I want to analyze a new data set.
I’m reminded of the weeks I spent designing a gross margin reporting system for a manufacturer. This company had several product lines and three departments. It had detailed time reports from the factory floor, so that I knew how much labor cost was incurred by product line and by department. It had a perpetual inventory system, so that I knew what material had been drawn from raw material inventory and production counts for each department. This allowed me to construct a model of the manufacturing process and estimates of costs incurred through each step in that process.
I’d like to get my hands on that long-lost data and see if a data cube would simplify anything.
OLAP stands for OnLine Analytical Processing. This post describes why
I’m not ready to write about OLAP.
I’ve been reading various web pages about OLAP and have reached two
conclusions. First, the demand for OLAP is driven by SQL commands’
complexity, which can arise when the programmer is querying complex
databases. Nontechnical users stumble badly in that environment, and
even correct queries can take too long to execute.
Second, the processing is done on a new non-SQL database designed to
make querying easier and processing time faster. Generally, this means
an underlying SQL database is kept to record transactions, and an OLAP
database is updated periodically from the SQL database. The OLAP
database usually is an array of facts determined by the SQL queries.
Each dimension of the array is a fact attribute for which aggregate data
may be sought.
I’ve been struggling toward a third conclusion: that I can reach a better
understanding of the data by investigating why and how the data was
compiled than by constructing complicated SQL queries.
I found an open-source OLAP application, Cubes, written in
Python (http://cubes.databrewery.org/). I downloaded the application
and started on the tutorial. Step one, called Hello World, constructs a
data cube from balance sheets of the International Bank for
Reconstruction and Development for the years 2010 and 2011.
I was planning to code an OLAP database in APL, so rather than following
the tutorial, I just loaded the supplied csv file into APL.
      ibrd←import∆file 'Downloads/IBRD_Balance_Sheet__FY2010.csv'
      ⍴ibrd
63 7
That didn’t look like a lot of data, so I displayed some of it:
      ibrd[⍳5;]
Category Code  Category  Subcategory Code  Subcategory     Line Item                          Fiscal Year  Amount (US$, Millions)
a              Assets    dfb               Due from Banks  Unrestricted currencies            2010        1581
a              Assets    dfb               Due from Banks  Unrestricted currencies            2009        2380
a              Assets    dfb               Due from Banks  Currencies subject to restriction  2010         222
a              Assets    dfb               Due from Banks  Currencies subject to restriction  2009         664
My seven columns:
1. Category code
2. Category
3. Subcategory code
4. Subcategory
5. Description (called line item above)
6. Fiscal year
7. Amount (in US$ millions)
I concluded that our cube should have two dimensions, description and
year. Each fact (cell in the array) should be made up of a description
and an amount. The description dimension is a hierarchy of category,
subcategory, and item.
With that in mind I dived off the cliff and started writing APL queries.
Question one always is whether the debits equal the credits, or in this
case whether total assets equal total liabilities plus total equities.
I needed to confirm that both years and amounts were in fact numbers:
      utl∆numberp ¨ 1 0↓ibrd[⍳10;6 7]
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
I knew I’d get tired of the column heads, so I copied the array without them:
      db←1 0 ↓ibrd
I also determined the universe of category codes to simplify my next queries:
      db[;1]
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a l l l l l l l l l l l l l l l l l l l l l l e e e e e e e e
I concluded that *a* means assets, *l* means liabilities, and *e* means
equity. My queries:
      +/(∊db[;1]='a')/db[;7]
558430
      +/(∊db[;1]='l')/db[;7]
480838
      +/(∊db[;1]='e')/db[;7]
77592
      480838 + 77592
558430
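The check that debits equal credits can be sketched in Python as well (the rows below are invented stand-ins for db, scaled down; only the grouping logic mirrors the APL):

```python
# Tiny stand-in for db: (category code, amount) pairs.
rows = [('a', 300), ('a', 258), ('l', 280), ('l', 200), ('e', 78)]

def total(code):
    """Sum the amounts whose category code matches, like +/(∊db[;1]=code)/db[;7]."""
    return sum(amt for c, amt in rows if c == code)

assets, liabilities, equity = total('a'), total('l'), total('e')
# The books balance when assets equal liabilities plus equity.
balanced = assets == liabilities + equity
```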
I started planning my next query and was curious: what descriptions
describe each fact?
      ⍞←⎕tc utl∆join (db[;6]=2010)/db[;5]
Unrestricted currencies
Currencies subject to restriction
Trading
Securities purchased under resale agreements
Nonnegotiable, nonintrest-bearing demand obligations on account of subscribed capital
Investments
Client operations
Borrowings
Other
Receivables to maintain value of currency holdings on account of subscribed capital
Receivables from investment securities traded
Accrued income on loans
Net loans outstanding
Assets under retirement benefit plans
Premises and equipment (net)
Miscellaneous
All
Securities Sold under Repurchase Agreements, Securities Lent under Securities Lending Agreements, and Payable for Cash Collateral Received
Investments
Client Operations
Borrowings
Other
Payable to Maintain Value of Currency Holdings on Account of Subscribed Capital
Payable for investment securities purchased
Accrued charges on borrowings
Liabilities under retirement benefit plans
Accounts payable and misc liabilities
Paid-in capital
Deferred Amounts to Maintain Value of Currency Holdings
Retained Earnings
Accumulated Other Comprehensive Loss
I am not a bank accountant, so I must open the accounting rulebook and
read it cover to cover before I try to read banks’ financial statements.
Let’s consider a few facts. Accumulated other comprehensive loss is an
equity account that accumulates unrealized gains and losses. What those
gains and losses might be is explained in the financial statements, and we have just one page.
The subcategories make matters worse.
      ⍞←⎕tc utl∆join (db[;6]=2010)/db[;4]
Due from Banks
Due from Banks
Investments
Securities
Nonnegotiable
Derivative Assets
Derivative Assets
Derivative Assets
Derivative Assets
Receivables
Other Receivables
Other Receivables
Loans Outstanding
Other Assets
Other Assets
Other Assets
Borrowings
Sold or Lent
Derivative Liabilities
Derivative Liabilities
Derivative Liabilities
Derivative Liabilities
Other
Other Liabilities
Other Liabilities
Other Liabilities
Other Liabilities
Capital Stock
Deferred Amounts
Retained Earnings
Other
When the Financial Accounting Standards Board issued guidance on
derivatives, I passed. I could do better in Atlantic City than with
derivatives, and so how to account for them was irrelevant. It certainly
seemed probable that these liabilities give rise to some of the
accumulated other comprehensive losses.
So after I complete my study of the accounting rules, I need to digest
the footnotes to the financial statements to get some understanding of
four different kinds of derivatives.
I have yet to extract an analysis of the facts I have. And that’s why
I’m not writing about OLAP…yet.
Graphs in APL
When I took Accounting 1, I dreamed ledgers. In those days most small and medium-sized companies kept their books by hand and stored them in a fireproof safe. One of the requirements of the course was a complete set of books consisting of financial reports for a fiscal period. It was all done by hand on ledger paper. I did manage to do some of it at work, where I could use a calculator.
The dreams helped internalize accounting, and to this day if I have a difficult accounting problem, I’ll start with ledger paper and lay out my solution. At some point I’ll see the solution and then complete my work in APL.
What this means is that I am not afraid of long columns of figures nor of large arrays. I can extract insights just by examining the reports and doing some simple arithmetic.
Most people need something more, and a well-designed graph is always helpful. Accordingly, this post describes how to produce a graph in GNU APL and how to dress it up for company.
Graphing is not part of the ISO standard, but many APL interpreters
provide a graphing function. In GNU APL it is ⎕plot. The syntax is
attributes ⎕plot data. The handle returned by ⎕plot is an integer that
identifies the graph. Close the window with
The data is what will be plotted. For a vector of real numbers, each data point is positioned along the Y axis by its value and along the X axis by its index in the vector. Plotting pairs of data is done with complex numbers, each the sum of a real part and an imaginary part. Thus, for a two-column array we could produce a graph by converting each row to a complex number:
      ⎕plot a_b[;1] + 0j1 × a_b[;2]
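The pairing trick is easy to see outside APL, too. A small Python sketch (the array contents are invented for illustration):

```python
# Each (x, y) row becomes one complex number: x is the real part,
# y is the imaginary part, just like a_b[;1] + 0j1 × a_b[;2].
a_b = [(1, 10), (2, 14), (3, 13)]
z = [x + 1j * y for x, y in a_b]

# A plotting routine can recover the coordinates from each number.
xs = [w.real for w in z]
ys = [w.imag for w in z]
```

A complex number is simply a convenient single value that carries both coordinates of a point.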
Dressing graphs up for company requires setting attributes. ⎕plot '' will give you a list of attributes.
I got into a heated discussion recently about U.S. tax policy, which led to the graph we’re about to construct.
I downloaded my data from BEA.gov and imported it into APL:
      income_expense_raw←import∆file '/home/dalyw/AverageCrap/Research/Federal_I_E.csv'
      gdp_raw←import∆file '/home/dalyw/AverageCrap/Research/GDP_2017Q1_2022Q4.csv'
For our purposes all we want is gross domestic product, line 6 in gdp_raw, and federal receipts, line 6 in income_expense_raw.
We want to plot both statistics against an actual timeline so that the X axis labels show quarterly increments.
      SPQ←×/91 24 60 60 ⍝ Seconds in one quarter
      q1←⎕fio.secs_epoch 2017 2 15
      time←q1 + SPQ × ¯1 + ⍳23
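For comparison, here is a Python sketch of the same timeline arithmetic (it assumes, as the APL does, a flat 91-day quarter rather than true calendar quarters):

```python
from datetime import datetime, timezone

SPQ = 91 * 24 * 60 * 60  # seconds in one (approximate) quarter
q1 = int(datetime(2017, 2, 15, tzinfo=timezone.utc).timestamp())

# 23 quarterly points starting at 2017-02-15, like q1 + SPQ × ¯1 + ⍳23
time_points = [q1 + SPQ * k for k in range(23)]
```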
We build the attribute array:
      ⍝ Set the GDP line color to blue
      att_gdp_tax.line_color_1←'#0000FF'
      ⍝ Set the tax line to green
      att_gdp_tax.line_color_2←'#00FF00'
      ⍝ Set the legend to identify both lines
      att_gdp_tax.legend_name_1←'Gross Domestic Product'
      att_gdp_tax.legend_name_2←'Tax receipts'
      ⍝ Position the legend away from the two lines
      att_gdp_tax.legend_X←50
      att_gdp_tax.legend_Y←200
      ⍝ Set the caption to identify the graph
      att_gdp_tax.caption←'US GDP and Tax Receipts in Billions'
      ⍝ Set the format of the X-axis labels to show year and quarter
      att_gdp_tax.format_X←'%yQ%Q'
Now we can call ⎕plot:
att_gdp_tax ⎕plot (time + 0j1 × gdp),[0.1] time + 0j1 × fed_receipts
Quod erat demonstrandum.
Free Cash Flow
This is post three of my Crunching Numbers in APL series. I’m
returning to my database of the top twenty stocks in the
Standard and Poor’s 500.
      sp20[;1 2 3 4 7]
Symbol  Name                      Price   Div $   FCF
WMT     Walmart                   142.09  2.28  ¯10929
AMZN    Amazon                     95.82  0      ¯1112
AAPL    Apple                     149.4   0.92   ¯2343
CVS     CVS Health                 86.04  2.42    2832
UNH     UnitedHealth Group        488.17  6.6     8651
XOM     Exxon Mobil               110.74  3.64   28024
BRK-B   Berkshire Hathaway        300.69  0          0
GOOG    Alphabet                   91.07  0          0
MCK     McKesson                  360.33  2.16       0
ABC     AmerisourceBergen         159.5   1.94       0
COST    Costco Wholesale          493.14  3.6        0
CI      Cigna                     295.65  4.92       0
T       AT&T                       19.25  1.11       0
MSFT    Microsoft                 254.77  2.72       0
CAH     Cardinal Health            77.7   1.98       0
CVX     Chevron                   161.93  6.04       0
HD      Home Depot                299.31  8.36       0
WBA     Walgreens Boots Alliance   36.21  1.92       0
MPC     Marathon Petroleum        125.52  3          0
ELV     Elevance Health           486.12  5.92       0
KR      Kroger                     43.91  1.04       0
F       Ford Motor                 12.07  0.6        0
VZ      Verizon Communications     38.53  2.61       0
You’ll note I’ve added a column with some data. FCF is Free Cash
Flow. I’m using my own definition. I hope that it will act as a
sieve to highlight stocks that deserve a closer look.
First I’d like to discuss databases. Chapter 11 of Crunching
Numbers in APL applies the principles of database design to APL
variables. We’re not to that point. We’re still in discovery.
I set up a workspace for my research into companies that do not
pay dividends; it includes the table sp20 shown above. As I’ve
done calculations, I’ve tried to save those calculations and the
workspace as I played with Free Cash Flow. Here is where I am:
      )vars
aapl_free_cash amzn_free_cash cvs_free_cash date∆US date∆cal date∆dates date∆delim date∆time∆M date∆time∆delim date∆time∆utce date∆tz final_vym free_cash g_data g_return goog goog_covar goog_free_cash goog_hist goog_variance s_data s_return sp20 sp500 sp500_df tmp unh_free_cash v_divs v_lillian vd_lillian voo vym vym2 vym_div vym_hist wmt_free_cash xom_free_cash
All the variables that begin with date∆ belong to the date
workspace in library 3 DALY, and I’ll ignore them. The vym...
variables were used for last week’s post on zero dividends. The
variables that end in free_cash are for this project.
APL’s workspace concept allows us this luxury. As I explore a
subject I can save my work in variables. They just exist and
don’t get in the way once I move on to something else.
For this project I created the variable free_cash and the function calc_free_cash:
      free_cash
Cash from operations     0
Interest                 0
                   -------
Adj Cash from ops        0
Capital exp              0
Dividends paid           0
Debt serv                0
Stock repurchases        0
                    ------
Free cash                0
                   =======
Debt service
Interest                 0
Debt repayment           0
other                    0
                   -------
                         0

∇rs←calc_free_cash fc
 ⍝ Calculates free cash flow from a free_cash
 ⍝ workpaper and returns a free_cash workpaper with
 ⍝ those results.
 rs←fc
 rs[4;2]←+/rs[1 2;2]
 rs[13;2]←-rs[2;2]
 rs[7;2]←rs[17;2]←+/rs[13 14 15;2]
 rs[10;2]←+/rs[4 5 6 7 8;2]
∇
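The workpaper arithmetic can also be sketched in Python (a hedged translation; the parameter names are mine, and, as in the workpaper, outflows are entered as negative numbers):

```python
def calc_free_cash(cash_from_ops, interest, capital_exp, dividends_paid,
                   debt_repayment, other_debt, stock_repurchases):
    """Mirror of the APL calc_free_cash: returns (adjusted cash
    from operations, total debt service, free cash)."""
    adj_cash_from_ops = cash_from_ops + interest        # add interest back
    # Debt service: interest becomes an outflow, plus principal payments.
    debt_service = -interest + debt_repayment + other_debt
    free_cash = (adj_cash_from_ops + capital_exp + dividends_paid
                 + debt_service + stock_repurchases)
    return adj_cash_from_ops, debt_service, free_cash

# McKesson's figures from later in the post:
print(calc_free_cash(4434, 186, -535, -277, -1832, 0, -3516))
# (4620, -2018, -1726)
```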
My idea is a measure of the cash available from the operations
that can be used to grow the company. Today’s Wall Street
Journal has an article on evaluating companies that pay
dividends. It recommends ignoring the amount of the dividend and
instead focusing on cash flow.
The financial statements have five basic statements:
- Balance Sheet
- Income Statement
- Comprehensive Income
- Stockholder’s Equity
- Cash flow
My free cash flow calculation pulls amounts from that last
statement, Cash flow. It’s worth looking at the statement as a whole.
First, it reconciles net income to cash from operations. That
reconciliation includes items used to calculate net income that
do not use or provide cash (depreciation, for example). The
reconciliation also includes changes in working capital that
require or provide cash.
Second, it shows investment activity. I get my capital
expenditures from this section. I know that the company must
replace plant, property, and equipment as it wears out. I use
this line as an estimate for future operations.
Third, it shows financing activity: debt and equity
transactions. Here I find the amount of dividends paid and stock
repurchased. I calculate debt service as interest (from the
income statement) plus debt repayment from this section of the
cash flow statement.
The decision to finance the company through debt requires
consideration of the payment of interest and the retirement of
principal. I recognize this by adding interest to cash from
operations and including it in debt service.
Many things may be said about this approach. It is
too simple. A thorough reading of the financial statements and
Management’s Discussion and Analysis of Financial Condition and
Results of Operations might yield better estimates. In fact
those estimates may be buried in the 10-K somewhere.
This method is quick and dirty, but I like it.
MCK, McKesson, is next on my list. I found its 10-K for the year
ended March 31, 2022 at www.sec.gov and its statement of cash
flow on page 74. Here is how I calculate free cash flow.
      mck_free_cash←free_cash
      mck_free_cash[1;2]←4434
      ⍝ This from the bottom of the cash flow statement
      mck_free_cash[2;2]←186
      ⍝ The total of property, plant and equipment and software
      mck_free_cash[5;2]←¯388 + ¯147
      mck_free_cash[6;2]←¯277
      ⍝ Repayment of long-term debt and debt extinguishments
      mck_free_cash[12;2]←¯1648 + ¯184
      mck_free_cash[8;2]←¯3516
      ⍞←mck_free_cash←calc_free_cash mck_free_cash
Cash from operations  4434
Interest               186
                   -------
Adj Cash from ops     4620
Capital exp           ¯535
Dividends paid        ¯277
Debt serv            ¯2018
Stock repurchases    ¯3516
                    ------
Free cash            ¯1726
                   =======
Debt service         ¯1832
Interest              ¯186
Debt repayment           0
other                    0
                   -------
                     ¯2018
Now I’ll update my database and cross McKesson off my list.
      sp20[10;]
MCK McKesson 360.33 2.16 21.79 263966 0
      sp20[10;7]←¯1726
      sp20[;1 2 3 4 7]
Symbol  Name                      Price   Div $   FCF
WMT     Walmart                   142.09  2.28  ¯10929
AMZN    Amazon                     95.82  0      ¯1112
AAPL    Apple                     149.4   0.92   ¯2343
CVS     CVS Health                 86.04  2.42    2832
UNH     UnitedHealth Group        488.17  6.6     8651
XOM     Exxon Mobil               110.74  3.64   28024
BRK-B   Berkshire Hathaway        300.69  0          0
GOOG    Alphabet                   91.07  0          0
MCK     McKesson                  360.33  2.16   ¯1726
ABC     AmerisourceBergen         159.5   1.94       0
COST    Costco Wholesale          493.14  3.6        0
CI      Cigna                     295.65  4.92       0
T       AT&T                       19.25  1.11       0
MSFT    Microsoft                 254.77  2.72       0
CAH     Cardinal Health            77.7   1.98       0
CVX     Chevron                   161.93  6.04       0
HD      Home Depot                299.31  8.36       0
WBA     Walgreens Boots Alliance   36.21  1.92       0
MPC     Marathon Petroleum        125.52  3          0
ELV     Elevance Health           486.12  5.92       0
KR      Kroger                     43.91  1.04       0
F       Ford Motor                 12.07  0.6        0
VZ      Verizon Communications     38.53  2.61       0
JPM     JPMorgan Chase            139.67  4          0
GM      General Motors             39.25  0.36       0
This is the second in a series of posts about using APL to crunch numbers. It starts with a database of sorts in APL.
I want to use present-value equations to assess various stocks. To do so I need specific stocks to investigate. I built a database of the top 20 stocks from the Fortune 500. I did have to do this by hand.
finance.yahoo.com is a good source for information about publicly traded stocks, bonds, and mutual funds. I got a list of the Fortune 500 and went to Yahoo. Here is what I compiled:
Symbol  Name                      Price   Div $  EPS    Total Rev
WMT     Walmart                   142.09  2.28    4.27  611289
AMZN    Amazon                     95.82  0      ¯0.28  513983
AAPL    Apple                     149.4   0.92    5.9   395328
CVS     CVS Health                 86.04  2.42    3.14  322467
UNH     UnitedHealth Group        488.17  6.6    21.17  322132
XOM     Exxon Mobil               110.74  3.64   13.26  398675
BRK-B   Berkshire Hathaway        300.69  0      ¯0.97  345636
GOOG    Alphabet                   91.07  0       4.54  282836
MCK     McKesson                  360.33  2.16   21.79  263966
ABC     AmerisourceBergen         159.5   1.94    8.25  238587
COST    Costco Wholesale          493.14  3.6    13.23  226954
CI      Cigna                     295.65  4.92   21.29  180642
T       AT&T                       19.25  1.11   ¯1.1   120741
MSFT    Microsoft                 254.77  2.72    9     198270
CAH     Cardinal Health            77.7   1.98   ¯4.56  181364
CVX     Chevron                   161.93  6.04   18.28  235717
HD      Home Depot                299.31  8.36   16.68  157403
WBA     Walgreens Boots Alliance   36.21  1.92   ¯3.43  132703
MPC     Marathon Petroleum        125.52  3      27.98  177453
ELV     Elevance Health           486.12  5.92   24.81  156595
KR      Kroger                     43.91  1.04    3.18  137888
F       Ford Motor                 12.07  0.6    ¯0.49  158057
VZ      Verizon Communications     38.53  2.61    5.06  136835
JPM     JPMorgan Chase            139.67  4      12.1   128641
GM      General Motors             39.25  0.36    6.09  156735
All of this data and a lot more is available issue by issue at Yahoo. I
chose this data to help determine which issues are worth further
investigation. The statistic I was most interested in was the dividend
yield, column four divided by column three.
Financial theory proposes that a stock’s price is dividend ÷ (yield –
growth). Yield is expected to be your return on the investment, and
growth is the rate at which the dividend is expected to grow.
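In Python, the two formulas mentioned so far (the dividend yield and the dividend-discount price) are one-liners; a quick hedged sketch:

```python
def dividend_yield(div, price):
    """Column four divided by column three."""
    return div / price

def gordon_price(div, required_yield, growth):
    """Dividend-discount price: dividend divided by (yield - growth)."""
    return div / (required_yield - growth)

# Walmart from the table: a $2.28 dividend at a $142.09 price
wmt_yield = dividend_yield(2.28, 142.09)  # about 1.6%
```

Note the pricing formula's weakness, which the post turns to next: with a zero dividend it values any stock at zero.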
So before I could work on estimating growth and/or yield, I needed to address Alphabet, which has never paid a dividend but whose stock is quite valuable.
When I learned financial theory, I decided that companies that paid no dividends had no value; just do the arithmetic with any yield or growth assumption. Today’s post evaluates the zero-dividend strategy.
I needed a baseline to compare to Alphabet’s performance. I chose
the Vanguard High Dividend Yield fund (VYM). It is a mutual fund generally made up of dividend-paying stocks (high dividend if you believe the title). Vanguard is well known for mutual funds that provide returns similar to the market as a whole.
I went to Yahoo and downloaded the dividends paid by VYM over the 10 years ended 12/31/2022.
      vym_div←date∆US import∆file∆withDates '/home/dalyw/Downloads/VYM.csv'
      ⍴vym_div
41 2
      vym_div[⍳15;]
Date        Dividends
2013  3 22  0.361
2013  6 24  0.419
2013  9 23  0.437
2013 12 20  0.532
2014  3 24  0.401
2014  6 23  0.476
2014  9 22  0.469
2014 12 18  0.562
2015  3 23  0.462
2015  6 26  0.56
2015  9 23  0.528
2015 12 21  0.599
2016  3 15  0.478
2016  6 21  0.578
I also looked up the opening price of VYM on 1/1/2013—$45.89—and 1/1/2023—$108.21. I could now produce a date flow.
The workspace ‘5 DALY/fin’ is distributed with gnu-apl. It has present and future value functions for date flows. A date flow is an ordered
collection of date–amount pairs. Each date is in Lillian format, that
is, the number of days from October 15, 1582, the first day of the
Gregorian calendar.
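As a check on the idea, a Lillian day number can be sketched in Python (this sketch counts October 15, 1582 as day 1; the workspace’s date∆lillian may use a slightly different convention):

```python
from datetime import date

LILLIAN_EPOCH = date(1582, 10, 15)  # first day of the Gregorian calendar

def lillian(year, month, day):
    """Days since the Gregorian epoch, counting the epoch itself as day 1."""
    return (date(year, month, day) - LILLIAN_EPOCH).days + 1
```

Python's date type is proleptic Gregorian, so the subtraction needs no special handling for the 1582 calendar change.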
I took this data and assembled a date flow that assumed the purchase of VYM on 1/1/2013 and its sale on 1/1/2023:
      fin∆df∆show vym
2013/01/01 (45.89)
2013/03/22    0.36
2013/06/24    0.42
2013/09/23    0.44
2013/12/20    0.53
2014/03/24    0.40
2014/06/23    0.48
2014/09/22    0.47
2014/12/18    0.56
2015/03/23    0.46
2015/06/26    0.56
2015/09/23    0.53
2015/12/21    0.60
2016/03/15    0.48
2016/06/21    0.58
2016/09/13    0.48
2016/12/22    0.67
2017/03/22    0.56
2017/06/23    0.60
2017/09/20    0.60
2017/12/21    0.64
2018/03/26    0.61
2018/06/22    0.63
2018/09/26    0.67
2018/12/24    0.74
2019/03/25    0.65
2019/06/17    0.62
2019/09/24    0.79
2019/12/23    0.78
2020/03/10    0.55
2020/06/22    0.84
2020/09/21    0.70
2020/12/21    0.81
2021/03/22    0.66
2021/06/21    0.75
2021/09/20    0.75
2021/12/20    0.94
2022/03/21    0.66
2022/06/21    0.85
2022/09/19    0.77
2022/12/19    0.97
2023/01/01  108.21
I computed an internal rate of return using fin∆df∆irr as 12.04%.
I went back to Yahoo for the opening price of Alphabet stock on 1/1/2013 and on 1/1/2023 and calculated the growth of the stock’s value.
      goog←((date∆lillian 2012 1 1) ¯16.26) fin∆df∆add (date∆lillian 2023 1 1) 89.86
      fin∆df∆irr goog .1
0.1552653407
Here I set up a date flow that assumes the purchase of stock on 1/1/2013 and its sale on 1/1/2023 and then calculate an annual return of 15.527%.
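fin∆df∆irr’s algorithm isn’t shown here, but the idea can be sketched in Python: discount every dated amount back to the first date and bisect on the annual rate until the net present value is zero. The day-count and compounding conventions below are my assumptions, so the result will differ somewhat from what fin∆df∆irr reports:

```python
def npv(rate, flow):
    """flow is a list of (day_number, amount) pairs; annual compounding."""
    base = flow[0][0]
    return sum(amt / (1 + rate) ** ((day - base) / 365.25)
               for day, amt in flow)

def irr(flow, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection on the rate: assumes npv decreases as the rate rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flow) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Buy at 16.26, sell 4018 days (11 years) later at 89.86
rate = irr([(0, -16.26), (4018, 89.86)])
```

For a single purchase and sale this reduces to a compound-growth rate, about 16.8% a year under these assumptions; fin∆df∆irr, with its own dates and conventions, reports 15.527% above.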
This certainly challenges my working hypothesis that a stock that pays no dividends is worthless. Had I bought Alphabet 10 years ago, I would have realized gains greater than the market, the holy grail of investing.
I then went to www.sec.gov. Public companies are required to register with the SEC and file detailed reports on their operations. The annual report is the 10-K, which I opened for GOOG.
Page 1 had the first yellow flag. There are two classes of stock, A and
C, registered with the SEC. I wondered about B. Page 2 is the table of
contents with hyperlinks to various sections. I went to the financial
statements and read the notes. Class A, 6015 shares, allows 1 vote per
share. Class B, 893 shares, allows 10 votes per share and is not
publicly traded. Class C, 6334 shares, has no voting rights. I read the equity footnote twice. It addresses the rights of the various share classes and makes the case that they share equally on liquidation of the company. What happens should Alphabet declare a dividend was not clear.
In summary, the company is absolutely controlled by the holders of the class B shares who apparently are not interested in receiving dividends.
I looked at the rest of the financial statements and made my own
estimate of free cash flow. That is, the cash generated from operations that could be used to pay dividends or repurchase stock.
In millions
Cash from operations      91495
Interest                      0
Stock based awards        ¯9300
                        -------
Adj cash from operations  82195
Capital Expenditures     ¯31485
Debt Service              ¯1196
Dividends                     0
Stock purchases          ¯59296
                        -------
Free Cash                 ¯9782
                        =======
Not an encouraging picture. Alphabet is using its accumulated cash to buy back its stock. That suggests that the company can find no better investment.
High-growth companies do not pay dividends because they need every dollar of cash to support their growth; the promise of that growth suggests that the company can get higher returns from its operations than the stockholders can by reinvesting in other stocks. Clearly, Alphabet is no longer in this category, and it’s high time it shared the lolly.
I don’t think I’m changing my approach to zero-dollar dividends.
Crunching Numbers in APL
Well, it’s done. That is, I’ve published the book I’ve been working on for the past three years. I never thought it would take three years.
You can buy it for your Kindle. Search for its title Crunching Numbers in APL.
Too much of this book is about how to code in APL and not enough about how to use APL. It is an unavoidable sojourn if you want to use APL to crunch numbers. So I’m going to write a series of posts on how to use APL.
We’ll start with inflation.
I suffer from my Wharton education, and one of the sources of that suffering is inflation. I attended Wharton during the seventies, when inflation got out of control. Wharton historically stresses monetary economic policies over Keynesian, and Milton Friedman was banging the drum for better control of the money supply. Washington has never really wanted to control the money supply, although during the eighties it had to.
I downloaded the Federal Reserve data on the money supply (https://www.federalreserve.gov/releases/h6/current/default.htm) and created this graph:
You’ll note the sharp increase in the money supply in January through June of 2020. This was the period when Congress enacted blowout bills that it believed would help people in economic distress because of the pandemic.
The money supply kept growing until November of 2021, when the Federal Reserve executed policies to reduce the rate of inflation. It’s now February of 2023, and the rate of inflation has moderated but is still too high.
The curve shows a small decline as the Fed acted.
How did I construct this graph?
I downloaded the file from federalreserve.gov and imported it into APL. This gave me a table 774 lines by 30 columns. Line one was a description of each column. I picked column one, the year and month; column two, seasonally adjusted M1; and column three, seasonally adjusted M2. I now had a table 769 lines by 3 columns.
I made several graphs using ⎕plot. My first try was a graph of M1.
I was alarmed. I had hoped I didn’t have to figure out the differences between M1 and M2, but when I graphed M2, I got what I expected:
I consulted the oracle internet. M1 is generally currency in circulation plus bank demand deposits (read checking accounts). M2 is M1 plus net time deposits (read savings accounts). I found that the Federal Reserve had changed the banking rules in April of 2020. Just showing the underlying data shows how banks responded:
      M1_M2[720+16 17 18;]
2020-03  4261.9 15988.6
2020-04  4779.8 17002.5
2020-05 16232.9 17835.2
I dressed the M2 graph up for company with gnuplot.
Egad, It’s User Hostile
I started to write this post as “User Friendly,” but after reading all sorts of blogs on that subject, I changed it.
Each of these blogs proposed a list of four or five features that describe user-friendly software. Here is my own summary:

- Simple
- Clean
- Reliable
- Intuitive
There were many words surrounding these four. In some blogs there appeared to be some meaning associated with those words. In many there was none. All seemed redundant. [Sort of like describing sympathetic as having sympathy—KD]
Simple is perhaps the most difficult. How often have you confronted a simple idea that you wanted to internalize and discovered, as you wrestled with it, exactly how complicated it was?
Clean follows simple. As I read, I kept finding clean described as simple. Part of me wants to make clean a complement to simple or perhaps combine them. A clean and simple interface. I can’t imagine either clean or simple without the other.
Reliable is a new concept. Does the software always do what it is supposed to do? Is it buggy?
Intuitive is last. Every blogger believed that a user-friendly interface allows the user to know, just by looking at the screen, what to do next.
Merriam-Webster defines intuition as (1) “the power or faculty of attaining to direct knowledge or cognition without evident rational thought and inference,” (2) “immediate apprehension1 or cognition” (https://www.merriam-webster.com/dictionary/intuition). I’m the wrong person for this idea. I won’t say that I don’t ever get flashes of insight; I do. I also know that those flashes come only after a struggle of rational thought and inference. Athene has not sprung full-grown and fully armed from my brow.
1 Apprehension here is used in MW’s third sense: perception, comprehension. It has nothing to do with fear.—KD
As I struggled with the concept of user friendliness, I thought I ought to look again at Excel, in some minds the epitome of user friendliness. [What a crock. I could tell stories…—KD] So I rebooted my machine from Debian to Windows (always painful) and started Excel. Once I had a spreadsheet loaded and was contemplating how to test friendliness, it struck me. One of my bloggers had offered up MS Office as an example of user hostility. His issue was the ribbons that Microsoft implemented several years ago and the struggle their user base had adapting.
How often is change used to simulate innovation? How often has Microsoft labeled change as innovation? [Microsoft is a piker compared with the textbook publishers; I could name a few textbooks whose six editions all had the same material. Way to kill the aftermarket, guys.—KD] I went back to Debian.
I remember switching to Quattro Pro from Lotus 1-2-3. At first it was dollars and cents. Quattro Pro was less than one fifth the cost of Lotus 1-2-3. I remembered the joy I felt using Quattro Pro. It had more features, but was it user friendly?
I found an old backup of Quattro Pro [The triumph of the hoarder—KD] and copied it onto my hard drive. This, it turned out, was all the installation I needed. Debian provides DOSBox, a DOS emulator. I used it to start Quattro Pro. I tried building a simple spreadsheet to remind myself how it worked.
Things were turned around. If I wanted to copy, I first selected copy from the menu, typed in the upper-left and lower-right corners of the source block, and typed <ENTER>. I then typed in the upper-left corner of the destination and <ENTER> again. I’m used to highlighting the source block with my mouse, right-clicking for a menu, and selecting copy.
While the mouse and the GUI interface changed how software worked, the keyboard procedure was easy to understand and to use. The latest version of Excel has many enhancements, some of which speed up construction of a spreadsheet. It also has a lot more functions on its ribbons—if only I could remember which ribbon and what the pictures on the ribbon actually mean.
How long did it take you to understand what a pivot table is? Can you find it on a ribbon on the first try?
I left out one idea that my research uncovered: the principle of least astonishment. “The behavior [of the software] should not astonish or surprise users.” (https://en.wikipedia.org/wiki/Principle_of_least_astonishment)
In my use of APL I’ve been struck over and over again how easy it is to type in a line of code and have it do exactly what I thought it would. APL uses an old-fashioned teletype-like interface to do powerful things. Because of the simplicity and cleanliness of its design, I can do those powerful things.
I am (used to be) a member of the Newtown Recorder Consort. I’ve played with the consort since 1995. In the beginning I was on soprano. At some point I switched to the alto, which I’ve played since Christmas 1959. A couple of years ago we lost our bass player and I switched again.
We stopped practicing in 2020 with the pandemic because wind instruments are especially good at virus transmission. The recorder, like any whistle, is a device to accelerate the performer’s breath to get a sound. With four of us, it meant the air was full of whatever viruses we had. Alas, we haven’t resumed.
The loss has been driving me nuts. For over twenty years we played on most Tuesday nights. We didn’t limit ourselves to just baroque and classical music. We played folk tunes. We played jazz. I don’t remember playing Beethoven. I do remember Bach, Vivaldi, and Haydn.
We’re silent now, and there seems to be little impetus to start up again.
So I’ve been accompanying myself. I found a program, Audacity. I count out a preceding measure and then start playing. Audacity records it all. It will then play back in my headphones what I’ve recorded while it records what I’m playing now. I have a five-part arrangement of “Simple Gifts.” I recorded myself playing all five parts, and Audacity mixes them all together.
I started on lullabies for my grandson, aged seven months. It made up for the fact that I can’t remember lyrics. I can get the first two lines of the first verse out but then—nothing. This is not old age. It’s years of playing instrumental music. I’ve played “All Through the Night” with my sister since we were kids. I can get through “Sleep, My Child, and Peace Attend Thee” and then nothing. Recorder music rarely has the words. So now I play it in four parts through Audacity.
Barbershoppers have a goal called the fifth voice. It occurs when the harmony is so tight that the harmonics of the various voices combine to produce that fifth voice.
It works with recorders too, although I’ve heard it as a third voice playing duets. It doesn’t work with Audacity. I have the feeling that it’s a live performance thing.
You can hear the Daly Recorder Consort at https://dalywebandedit.com/music/AllThroughTheNight.mp3
No reviews, please.
I’ve been staring at a list of workspaces that make up APL Library (https://sourceforge.net/projects/apl-library) for more than a week trying to get inspired. Why did I start this thing in the first place?
I remember discovering GNU APL. I had purchased a copy of STSC APL thirty years ago. I still have it, and with a Linux DOS emulator I can still run it. I spent $500 for a license. At odd moments I’d go to IBM’s website to see what a license costs these days and then move on to something else.
Suddenly, all I had to do was download GNU APL and compile it. I was sold right there.
I found a printout of an editor I wrote in 1983. I tediously typed it into a new GNU APL workspace. I haven’t used it since I got it working. I like Emacs better.
I started work on APL Library almost immediately. My first pass was porting utilities that I had written in the STSC days. I discovered that porting was not exactly what I had to do. Retyping is a better description. As I learned APL2, which has features not present in the old APL, I discovered that many of these utilities weren’t relevant.
∆TAB is a good example. I copied this function out of APL: A Design Handbook for Commercial Systems, by Adrian Smith, Wiley, 1982. Its left argument is the delimiter used in its right argument. It returns a character array in which each item in the right argument is a line in the array.
Now I just enter the list ‘First item’ ‘Second item’ ‘Third item’. Aren’t nested arrays a wonderful thing? (In APL each character string is an array. In APL2 one can combine several character strings into a nested array.)
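A quick session sketch of the difference (the exact display spacing will vary by interpreter; ∆TAB here is the handbook function described above):

```apl
      ⍝ The old way: one delimited string, split by ∆TAB
      ⍝ into a character matrix, one item per row
      '/' ∆TAB 'First item/Second item/Third item'
First item
Second item
Third item
      ⍝ The APL2 way: juxtaposing strings yields a nested
      ⍝ vector of three character vectors, no helper needed
      ⍴ 'First item' 'Second item' 'Third item'
3
```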
I soon discovered that APL2 addressed so many of the things I’d struggled with that I abandoned the whole porting effort.
At the same time there were many things I needed and an equal number of things that were just neat. utf8∆saveVar is a good example. This function writes a new workspace to disk with the code to generate one variable in the current workspace.
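A hypothetical session showing the idea (the calling convention shown here, with the variable name as the argument, is my assumption, not the documented signature of utf8∆saveVar):

```apl
      ⍝ A small reconciliation work paper to preserve
      RECON ← 3 2⍴'Book balance' 12875 'Bank balance' 12910 'Difference' ¯35
      ⍝ Write a new workspace to disk containing the code
      ⍝ needed to regenerate RECON in a fresh workspace
      utf8∆saveVar 'RECON'
```

Loading that saved workspace later rebuilds the variable exactly as it stood, which is the point: the work paper survives even after the quarter is closed.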
I spend way too much time keeping books. At the end of each quarter I do a bank reconciliation. It’s a tedious task, and with luck, having proved my work for that quarter, I’ll never have to look at that work paper again.
I am not a lucky guy, so I save the reconciliation with utf8∆saveVar.
APL Library is a combination of the flashes of insight like utf8∆saveVar and the tedious struggle to set up plebeian things like my date workspace. All in all it serves me well.