CHRONIC PAIN CHANGES BRAIN STRUCTURE AND FUNCTION

From a recent seminar I gave in late April at the American University of Beirut. Full text below:

The brain may suffer serious structural and functional damage as a result of persistent chronic pain that does not respond to traditional medicinal therapy, Dr. Carl Saab, assistant professor of neuroscience and neurosurgery at Brown University, explained on April 27, 2012, during an AUB seminar titled “Brains Suffer from Pain.”

Saab said that a collection of recent studies has revealed a correlation between chronic pain and changes in brain structure in the patients tested. Previously, chronic pain was treated as a secondary symptom with no effect, or at most a benign one, on neurological structures.

“Such evidence has challenged us to rethink the concept that chronic pain is a disease entity by itself,” he said.

The discovery of correlations between pain and brain function could open the door to novel diagnostics, Saab explained, in which visualizations of brain activity are used to reach an accurate diagnosis.

One obvious method involves the use of functional magnetic resonance imaging (fMRI), which scans the brain to map neural activity.

“Based on imaging data, we can conclude that certain brain regions are consistently activated in patients with chronic pain according to a reproducible, predictable pattern,” he said. “All of these regions are shown to be overactive or hyperactive, using fMRI, and are therefore referred to as the brain’s ‘neuromatrix’ for pain.”

A second method of visualization uses electrophysiology, which measures the electrical activity of neurons at the highest temporal and spatial resolutions possible. One benefit of this method for researchers is that it can be tested on both humans and animals; it is also cost-effective and practical. This type of technology has similarly revealed a reliable correlation between pain and brain function, manifesting as measurable brain rhythms that shift under pain conditions.
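As a rough illustration of how such a rhythm shift might be quantified, here is a minimal Python sketch using a synthetic signal; the sampling rate, frequency bands and data are assumptions for demonstration, not results from the studies Saab described:

    # Minimal sketch: quantify a shift of power toward lower frequencies
    # in a synthetic EEG-like signal (all parameters are assumed).
    import numpy as np
    from scipy.signal import welch

    fs = 250                      # sampling rate in Hz (assumed)
    t = np.arange(0, 30, 1 / fs)  # 30 seconds of signal

    # Synthetic "baseline" signal dominated by a 10 Hz alpha rhythm,
    # and a "pain" condition where power shifts toward theta frequencies.
    baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    pain = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)

    def band_power(signal, fs, lo, hi):
        """Integrate the power spectral density between lo and hi Hz."""
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])

    for label, sig in [("baseline", baseline), ("pain", pain)]:
        theta = band_power(sig, fs, 4, 8)
        alpha = band_power(sig, fs, 8, 13)
        print(f"{label}: theta/alpha power ratio = {theta / alpha:.2f}")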

But despite these correlations, Saab cautioned that all evidence was circumstantial and that scientists have yet to discover a 100-percent predictable diagnostic due to inherent technical limitations and the subjective nature of pain. So far, verbal reporting by the patient remains the gold standard for evaluating pain in humans.

– Carl

TAKING DOWN DAMASCUS IN MOSCOW

There is no question that Washington is suffering from a policy logjam on Syria. The better way to move forward, I say, is by launching high-stakes talks with the Russians. I share the following thoughts on the subject, which appeared today in The National Interest. Full text follows below:

Other than what it has already tried, there is nothing the United States can do to stop the violence in Syria or make things better for the opposition forces there: this is the conventional wisdom shared by a good number of analysts in Washington and almost ingrained in the minds of U.S. officials working on Syria policy. But there is another strategy worth pursuing with greater urgency: talk tough and bargain with Moscow.

Washington’s policy logjam on Syria is not surprising. There is an acute awareness of the high risks of alternative and perhaps more forceful strategies, be they diplomatic or military. The Obama administration sympathizes with the plight of the Syrian people and is eager to help, but it also does not want to make things worse in that country—and it can’t absorb substantial costs along the way, especially during the fall run-up to the presidential election.

Those who remember the horrors of America’s military intervention in Iraq and the fact that it cost the United States billions of dollars and 4,486 lives so far—not to mention the intangible and indirect costs from the invasion and post-war occupation—may immediately laud the administration for its extra cautious approach toward Syria.

But how much caution is too much? Is Washington being so careful on Syria that it risks undermining U.S. strategic interests in the Middle East?

As things currently stand, the two main U.S. priorities for Syria, containing the civil war and securing the regime’s WMD, are more or less fulfilled. The risk of chemical-weapons loss or usage in Syria is relatively low because Syrian president Bashar al-Assad is not facing a mortal threat—at least not yet. And the sectarian violence inside the country has not furiously spilled over to neighboring countries—again, not yet.

An Agenda for Washington

Does the present calm mean that the United States can afford to watch from afar and do the bare minimum in Syria? Assad may be in good shape now, and the balance of power currently may be tilted in favor of his forces, but several developments could change the dynamics inside Syria in the not so distant future and undermine U.S. priorities there.

Neighboring Turkey is starting to get worried about its own security. The recent firing by Syrian soldiers into a refugee camp inside Turkey (which hosts thousands of Syrian refugees), killing two, has raised the prospect of Turkish military action, with Prime Minister Recep Tayyip Erdogan calling for NATO intervention. And it is only a matter of time before Saudi Arabia, Qatar and other neighboring countries actually deliver on their promises to supply the Syrian rebels with substantial amounts of money and modern weaponry. Unsurprisingly, Kofi Annan’s peace plan has failed to stop the violence, boosting the chances that military options will be seriously entertained by neighboring countries.

The reality is that a full-blown civil war in Syria is in the works, one that will surely change U.S. priorities in the country. Thus, Washington cannot afford to lead from behind. It is smart to repeat that Syria is not Libya in advancing an argument against military intervention. But it is precisely because Syria is not Libya that Washington cannot merely state its concerns and hope for the best. Unlike in Libya, the stakes in Syria are high, and the United States must take charge, although that does not necessarily mean boots on the ground, another Libya-like aerial campaign or other military options.

Washington’s reactive Syria strategy is at risk of being overrun by events on the ground. A more proactive strategy is desperately needed, one that entails tough bargaining and creative diplomacy with the Russians. The United States needs to know what it would take for Russia to abandon the Syrian regime. If it is continued access to the port of Tartous and business opportunities, as well as healthy trade and strategic relations with the next Syrian government, then so be it. The administration should get it on paper and have the Syrian opposition sign off. There also should be frank discussions about the U.S. policy of NATO expansion.

This high-stakes negotiation with Moscow will obviously not be just about Syria. It will be about the future of the Middle East and U.S. strategic interests—oil, Israel, stability and democracy promotion—in that vital part of the world. Maybe the price of Russian cooperation is higher than this, but it’s high time Washington negotiates with Moscow in a serious fashion.

If Assad loses Moscow as a friend at the UN Security Council, things will get much tougher for him at home. Russia’s change of position could well be the trigger for some real defections in the Syrian government. Yet domestic politics in Washington and Moscow could stand in the way of a more aggressive U.S. diplomatic strategy. Will Barack Obama risk raising the stakes on Syria before November and talk tough with Vladimir Putin? Will Putin play ball at a time when he is trying to reassert himself on the international stage and show domestic opponents that he can defy Washington? It’s possible but not inevitable—and only an offensive diplomatic strategy can keep the possibility open.

-Bilal

BIG BANKS GETTING BIGGER … AND THAT’S NOT EVEN HALF THE STORY

After the crisis of 2008, global finance, starting in the United States, plainly needed better regulation and supervision. Lots of institutions had turned out to enjoy taxpayer backing because they were perceived to be too big to fail. Huge derivatives exposures had gone unnoticed. Supervisory responsibilities were too fragmented. The Dodd-Frank Act came into force in July 2010 and attempted to address these issues along four axes: securitisation, compensation, liquidation and systemic risk (too big to fail). Two years on from Dodd-Frank, how has the situation evolved?

Several analysts, using data from the FFIEC, recently picked up on the very verifiable story that America’s big five banks have in fact gotten even bigger compared to pre-crisis levels (see here and here). I reworked the numbers using the same source data and came up with a slightly more nuanced outcome: big banks are indeed as big as or even bigger than they were prior to Dodd-Frank (depending on your base comparison year), but the ratio of the top five bank holding companies’ assets to real US GDP (most other analysts use nominal GDP) has been steadily decreasing since 2009. It currently stands at 65% of real US GDP, or $8.7 trillion (see below), compared to 68% at end-2009. So, to be fair, if we’re trying to answer the question of whether Dodd-Frank brought about positive changes to the “too big to fail” dilemma, the comparison should be between today and 2010, not 2007.
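For concreteness, here is a minimal Python sketch of the ratio in question; the bank asset and GDP figures are placeholders, not the actual FFIEC and BEA series behind the chart:

    # Minimal sketch: top-5 bank holding company assets relative to real GDP.
    # All figures are hypothetical placeholders, in USD billions.
    top5_assets_bn = {
        "Bank A": 2300,
        "Bank B": 2180,
        "Bank C": 1950,
        "Bank D": 1340,
        "Bank E": 950,
    }

    real_gdp_bn = 13400           # hypothetical real US GDP, USD billions

    total_assets_bn = sum(top5_assets_bn.values())
    ratio = total_assets_bn / real_gdp_bn

    print(f"Top-5 assets: ${total_assets_bn / 1000:.1f} trillion")
    print(f"Share of real GDP: {ratio:.0%}")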

That said, I personally think that the emphasis should be placed more on the soundness and stability of a bank than on its relative size, the simple argument being that if a bank is well capitalized, liquid, and overall sound in its credit positions, then the “fail” part of “too big to fail” would be taken care of, and the “too big” part would become irrelevant. Krugman and I seem to be in agreement, and he further adds that “the pursuit of a world in which everyone is small enough to fail is the pursuit of a golden age that never was.” Regulate and supervise, and do it right and airtight.

Speaking of banking stability and soundness, are big banks today any more solvent (and, by direct consequence, less leveraged) than they were pre-crisis? Hardly, judging by the latest data, again from the FFIEC. See for yourself below. Granted, the Tier 1 leverage ratio is hardly a comprehensive assessment of bank stability and soundness (I should know, I’ve been drafting banking sector stability reports for the IMF for 10 years), but it is a fundamental measure of solvency, and hence of potential bankruptcy and failure.

This is what the Fed’s stress tests are all about: will the banks have enough capital to sustain operations and pay back shareholders and creditors in severely adverse scenarios? The news was conveyed last month in the results of the Federal Reserve’s latest bank stress tests. As presented by the Fed, most of the news was good. Some large financial institutions were judged likely to have sufficient equity capital even if the U.S. economy were to experience a significant downturn. With that, banks such as JPMorgan were allowed to increase their dividends and even buy back shares.

But there’s a problem, and it’s not a small one. If you buy the Fed’s view of what is likely to constitute stress, there is some justification for its action. But why would we let banks reduce their capital in the face of so much financial and economic uncertainty around the world (re: Europe)? We all know that lower equity at big banks means higher expected losses for taxpayers down the road: the disaster of 2008 caused about a 50 percent increase in U.S. debt relative to gross domestic product, the second-largest shock to the country’s balance sheet after World War II.
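Coming back to the Tier 1 leverage ratio mentioned above, here is a minimal sketch of how it is computed; the figures are illustrative, not taken from FFIEC filings:

    # Minimal sketch: Tier 1 leverage ratio = Tier 1 capital / average total assets.
    # Numbers are hypothetical, in USD billions.
    def tier1_leverage_ratio(tier1_capital: float, average_total_assets: float) -> float:
        return tier1_capital / average_total_assets

    tier1_capital = 150.0
    average_assets = 2300.0

    ratio = tier1_leverage_ratio(tier1_capital, average_assets)
    print(f"Tier 1 leverage ratio: {ratio:.1%}")
    # The lower the ratio, the thinner the capital cushion against losses.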

On these stress tests, the Fed’s assumption in the stress scenario that Europe would have a mild recession seems too benign given the latest developments in Spain, Portugal and Greece. How much faith do we have in these stress tests? A combined 96 percent of North American financial services professionals were “not at all confident” or only “somewhat confident” that the Fed’s stress testing addresses all of the important risks to the banking system, according to a recent Sybase survey.

So we’ve established that big banks are as big as or even bigger than they were pre-Dodd-Frank, and possibly as thinly capitalized and over-leveraged as ever. But are they more prosperous? You bet: not quite as rich as in 2007, but getting there quickly and steadily. See below.

The financial system, it seems, hasn’t become safer since September 2008. We are not in a strong position to weather any financial storm gathering on the horizon, and that should be ample cause for concern.

-Samer

RESPONDING TO VIOLENT REPRESSION: THE ARAB WORLD AND BEYOND

The Arab Uprising, and specifically the ongoing Syrian crisis, has got me thinking about a number of things that are related to the future of the Middle East. But for now, my thoughts have converged on one topic – the relationship between (violent) repression and dissent. The question I have been pondering, which I believe will have direct relevance to socio-political events in the Arab world for generations to come, is the following:

Why do dissidents sometimes respond to physical state repression by increasing their protest behavior and at other times respond by decreasing their protest behavior? The answer seems simple, right? Well, after doing some basic searching, I realized that it is a little bit more complicated than I thought.

Scholars of political repression have produced and developed a good number of systematic studies on the conditions under which national governing elites resort to repressive action to counter and/or deter actual or latent dissident behavior (Boudreau, 2004; Davenport 2004 & 2007; Della Porta and Reiter, 1998; Earl, 2003; Ekiert and Kubick, 1999; Ferrara, 2003; Francisco, 2004). Yet, despite this sizeable body of academic research, theoretical work and empirical analysis on the strategic interaction between the agents of repression (governments) and the agents of dissent (protestors) are lacking, creating an important gap in the literature.

The relationship between repression and dissent is important, I think, for at least two reasons. First, it is closely tied to one of the major debates in the literature on violent political conflict: many rational choice explanations, including the resource mobilization/political process school (McAdam, McCarthy, Zald 1996), suggest that repression will reduce dissident activity whereas the relative deprivation approach (Gurr, 1970) suggests that repression will increase dissident activity. The first group contends that repression raises costs to collective action whereas the second contends that repression will increase people’s sense of relative deprivation. Second, the literature is plagued by inconsistent empirical findings connected to the contrary theoretical expectations. As such, it provides a puzzle for scholars interested in evaluating general explanations of political phenomena by confronting them with systematically gathered evidence.

I am more curious about political phenomena after governments initiate physical repressive action against dissidents to crush collective action (obviously, the causes of initial dissident behavior would be relevant to any systematic study as well). What are the crucial factors that influence dissidents’ response? I suspect that a careful investigation of this sequence of events could challenge some of the findings of several prominent repression studies that have shown empirical support for the domestic democratic peace, i.e., that democracy decreases state repression (Davenport, 2007).

In the repression literature, there are two major explanations for the observation that repression sometimes deters and at other times spurs dissident activity. The first explanation, presented by Lichbach (1987), suggests that dissidents view nonviolent and violent protest activity as substitutes and select the type that best achieves their goals, depending on state repression and concessions. Lichbach presumes that because dissidents are interested in maximizing the shift in policy, they will pursue the most effective protest activity. Hence, if the state responds to violent protest behavior with repression (as opposed to accommodation), then dissidents will abandon violent protest behavior in favor of nonviolent protest behavior. Similarly, if the state represses nonviolent protest behavior, then dissidents will respond with violent protest behavior.

The second explanation, offered by Gupta, Singh, and Sprague (1993), suggests that context (i.e., the type of regime) explains the difference in responses. Here, dissidents are believed to choose between economic and political activity (rather than nonviolent and violent protest), and protest behavior is a function of government coercion, regime type (democratic or autocratic), group identity, and benefits from economic activity. Gupta, Singh, and Sprague find that in democracies repression is positively (and linearly) associated with both nonviolent and violent protest behavior, but in autocracies repression has an inverted-U relationship with both nonviolent and violent protest behavior.

Building on Rasler (1996), I thought of a third explanation, which focuses on timing (i.e., short-run vs. long-run effects) and concessions by the state. I believe that it is important to distinguish between short-run reactions to repression and long-run reactions to repression (again, I am only interested in the effects of physical repression, not those types of repression that indirectly constrain political behavior). Examples of physical repression during a protest, strike, or demonstration include individual or mass arrests, torture, beatings, disappearances, imprisonment, and individual assassinations or mass killings.

I would expect that in the short run, dissidents perceive physical repression as a cost and, hence, decrease their protest behavior. Yet as time goes by (it could take weeks or months), grievances become more acute and lead to a lagged spur to new protest activity. Thus, I believe that a single act of physical repression has both a negative “instantaneous effect” on protest activity and a positive “lagged effect” on protest activity. But we should consider not only repression but also concessions by the state; concessions, often in revolutionary contexts, could well spur further protest. So in sum, in the short run, physical government repression decreases protest behavior, whereas in the long run, it increases protest behavior.

People usually rebel if they become convinced that dissent will achieve the collective good (Muller and Opp 1986; Finkel, Muller, and Opp 1989). If the value of the collective good is combined with a high expectation of success, people are likely to participate in mass actions. The factors that are likely to increase the expected value of a collective good are individuals’ assessments of whether their participation will make a difference in achieving the public good, and expectations that group action will be successful. Government concessions to highly visible groups enhance their perceived influence and increase the probability that individuals will join them for mass action (Muller and Opp, 1986). Thus, government concessions increase protest behavior.

In order to develop more precise empirical tests and more refined causal relationships between repression and dissent, we need to focus on sequences of interactions. A number of scholars have taken interest in studying sequences of behavior (Abell, 1993; Abbott, 1992; Dixon, 1988; Heise, 1989; Schrodt, 1990; Schrodt and Gerner, 1997) and several have done so with a particular focus on the repression-dissent nexus (Davies and McDaniel, 1996; Khawaja, 1993, 1994, 1995; Olzak, 1992; Poe et al., 1996; Snyder, 1976; Tilly, 1985).

Whether it makes sense to construct one’s theories by thinking about the sequences of interactions among actors depends entirely on the questions one asks. I personally am interested in examining dissident responses to physical state repression by paying close attention to processes and sequences of actions located within constraining or enabling structures (it should be noted that Lichbach’s model, mentioned above, relies primarily on sequential analysis).

I would probably design the statistical analysis by specifying an action-reaction equation in which dissident activity is a function of the dissidents’ previous behavior, previous government action, and a further lag of past government action. But instead of using the regression techniques employed in the repression studies I referenced above (which rely on aggregate analysis), I would follow Dixon (1988), who makes a case for the superiority of a logit estimator.
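A minimal sketch of what such a specification might look like in practice, using simulated rather than real event data and the statsmodels logit estimator (variable names and coefficients are assumptions for illustration):

    # Minimal sketch: action-reaction logit with lagged dissent and repression.
    # The data-generating process below builds in a negative instantaneous
    # effect of repression and a positive lagged effect, as discussed above.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200

    repression = rng.integers(0, 2, n)          # 1 = repression applied that period
    dissent = np.zeros(n, dtype=int)             # 1 = protest occurred that period
    for t in range(2, n):
        logit = 0.5 * dissent[t - 1] - 1.0 * repression[t - 1] + 1.2 * repression[t - 2]
        p = 1 / (1 + np.exp(-logit))
        dissent[t] = rng.random() < p

    df = pd.DataFrame({"dissent": dissent, "repression": repression})
    df["dissent_lag1"] = df["dissent"].shift(1)
    df["repression_lag1"] = df["repression"].shift(1)
    df["repression_lag2"] = df["repression"].shift(2)
    df = df.dropna()

    X = sm.add_constant(df[["dissent_lag1", "repression_lag1", "repression_lag2"]])
    print(sm.Logit(df["dissent"], X).fit(disp=0).summary())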

Let’s say I work on this study, operationalize and measure my variables, and conduct some statistical tests. If I find strong empirical support for changing effects of repression over time, then I would say that we need to examine the dynamic relationships between governments and challengers over time a little bit more carefully. These interactive relationships between government and non-government actors may have more explanatory power than the type of regime under study, be it a democracy or a non-democracy, thus challenging the findings of the domestic democratic peace. So how much does it really matter that the Syrian government is a brutal, authoritarian killing machine? Ok, maybe it still matters big time (these guys are nuts!). But you get my point. It might have to do with a little bit more than the fact that Damascus is not Athens.

-Bilal

SHORT-TERM FORECAST FOR PAIN RESEARCH: FOCUS ON NEUROTECHNOLOGY

Prevalence and Economic Impact of Chronic Pain

Chronic pain is a disease that has reached epidemic proportions irrespective of gender, social status, ethnicity or geographical location. In the United States, pain is a national health problem, with over $150 billion per year in direct costs and lost productivity, whereas globally, pain secondary to nerve injury affects 170 to 270 million individuals 1. Hospitalized patients with intractable pain experience increased length of stay, longer recovery times and weakened immunity.

Pain is defined as chronic when it lasts more than six months. In the absence of overt tissue damage, it is considered abnormal or pathological 2. More than a sensory experience, pain engenders emotive and cognitive processes with significant behavioral consequences 3. A person suffering from long-term pain may face, in addition to physical agony, a somber forecast: loss of a steady job and income, depression, sleep disturbance, deterioration in family relationships, and the draining of the mental resources needed to deal with the pain… in short, a recipe for mental decline and social alienation.

In the face of incessant pain, the prospects of recovery lie in the hands of caregivers, which in Western societies equates to healthcare professionals. Herein lies the first hurdle the patient needs to overcome. A typical journey for someone with pathological pain may start in the clinic of a primary care physician, but it is unlikely to yield an accurate diagnosis before an average of at least five referrals. People with chronic pain conditions are typically referred to one, or a combination, of the following specialties: neurology, orthopedics, anesthesia, neurosurgery, emergency medicine, gastroenterology, ear-nose-throat… whereas the optimal path to adequate pain management should ideally start with a visit to the clinic of a board-certified pain management specialist.

Aside from the confusion surrounding the appropriate and timely referral of the pain patient, there is the challenge of prescribing the right cocktail of pharmacotherapeutics, truly an art in itself, followed by physical therapy (if applicable), which ought to be concomitant with psychological therapy to better cope with some of the cognitive sequelae of pain discussed above. In spite of a staggering healthcare cost, which adds to the burden on the patient, insurers and taxpayers, there is no guarantee of a cure. In some cases, patients are deprived of even a ‘worthy’ diagnosis, their pain labeled ‘psychogenic’ or ‘exaggerated’. Surprisingly, objective diagnostic tools are lacking and verbal reporting by the patient remains the gold standard for pain diagnosis, with ensuing medico-legal issues and increased risks of misdiagnosis, unnecessary suffering and adverse side effects.

This, in a nutshell, is the harsh reality of chronic pain forcing some patients to commit suicide (yes, ‘Pain can kill’ 4).

Breakthrough technological advances with diagnostic and therapeutic potential for pain

A report recently released by the Institute of Medicine of the National Academies concluded that “persistent pain can cause changes in the nervous system and become a distinct chronic disease” (see figure below, full report available online 5). The answer, it seems, lies in the central nervous system.

Looking to the future, innovation in the field of pain research is expected to come from an unfamiliar place: neurotechnology. The nervous system is unique in its use of electricity for communication along nerve fibers, and it is well positioned to tolerate, and respond effectively to, low-threshold electrical stimulation.

Thanks to recent breakthroughs in computational neuroscience and electrophysiological recording techniques, restoration of nervous system function has become feasible using neuroprostheses and brain-machine interfaces for motor disorders 6, or patient-controlled real-time feedback of brain function 7. What was previously considered science fiction is now possible: harnessing the neuronal code from the brain of an individual with severe motor disability to control the motion of a robotic arm, or to communicate his or her thoughts by commanding a cursor on a computer screen 6.

Neurotechnology for pain management mainly refers to neuromodulation using deep brain stimulation (DBS, alternating current stimulation), transcranial direct current stimulation, or transcranial magnetic stimulation. In many respects, the field of pain research is still in the dark ages compared to the state-of-the-art neurotechnology currently available for treating other forms of debilitating neurological disorders. After more than half a century, the exact mechanisms mediating the analgesic effects of neurostimulation techniques remain uncertain, and the devices in use are akin to open-loop circuitry powered by a battery. Thus far, microstimulation in the nervous system is thought to act through one or several of the following events: ‘jamming’ of local hyperactive nociceptive circuitry, activation of analgesic structures, blockade of membrane ion channels such as voltage-gated currents 8, synaptic exhaustion, induction of early genes 9, 10, or even neurogenesis 11. A thorough understanding of neuromodulation phenomena requires further clinical testing in combination with well-designed animal experiments.

Although the classical DBS approach is to stimulate brain regions involved in modulatory systems known to initiate an endogenous morphine-like response (for example, stimulation in the periaqueductal gray), experimental evidence suggests that the ‘pain circuitry’ in the brain can be directly targeted to reverse the hyperexcitability of neurons transmitting signals related to a painful stimulus, for example by using high-frequency DBS in the sensory thalamic nucleus to inhibit sensitized neurons 12. Other options include motor cortex modulation with low-frequency stimulation, which is thought to release inhibition onto thalamic sensory neurons, at least in experimental animal models 13 (watch the video).

Video Legend:

A ‘mock’ patient with chronic pain. Data from electrophysiological recordings of neuronal activity demonstrate abnormal burst activity in the sensory nucleus of the thalamus, whereas data from functional magnetic resonance imaging suggest changes in cortical density, resulting in cortical thinning. Together, circuits in the cortex and thalamus communicate with each other, forming thalamocortical loops that oscillate at a defined frequency (or ‘rhythm’) under normal conditions. However, data from electroencephalography (EEG) demonstrate that this rhythm is disrupted under pain conditions and shifts to lower frequency domains. These findings carry diagnostic potential for pain in the clinic (Courtesy of animal-LLC, http://animal-studio.com).

It is predicted that lessons learned from successful neurotechnologies will offer unprecedented opportunities for pain research in the near future. For example, it is envisioned that the development of a sensor for the reliable detection of pain-related signals in the brain, coupled with a neuromodulation device for effective reversal of the pain ‘biomarker’, could yield a closed-loop feedback system for pain therapy, a novel concept that has already been validated clinically for the management of refractory epilepsy 14, 15. Ultimately, bearing in mind the multidimensional aspects of chronic pain and the host of co-morbid conditions associated with it will be necessary for advancing creative solutions to a neurological condition considered the ‘holy grail’ of cognitive disorders.
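Purely as a conceptual sketch of that closed-loop idea, and not a description of any existing device or API (all function names and thresholds below are hypothetical):

    # Conceptual sketch of a closed loop: a sensor reads a pain-related brain
    # signal, a detector compares it to a biomarker threshold, and stimulation
    # is switched on only when the biomarker is present. Everything here is
    # hypothetical and for illustration only.
    import random

    BIOMARKER_THRESHOLD = 0.7     # hypothetical normalized signal threshold

    def read_biomarker() -> float:
        """Stand-in for a real sensor; returns a normalized pain-signal level."""
        return random.random()

    def set_stimulation(on: bool) -> None:
        """Stand-in for a neuromodulation device driver."""
        print("stimulation ON" if on else "stimulation OFF")

    def closed_loop_step() -> None:
        level = read_biomarker()
        set_stimulation(level > BIOMARKER_THRESHOLD)

    for _ in range(5):            # a few iterations of the control loop
        closed_loop_step()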

Summary

Chronic pain should be considered a disease entity with poorly localized anatomical underpinnings within the nervous system 16. It is a multidimensional experience co-morbid with other psychological and cognitive states. For the healthcare provider, intractable pain poses the challenge of ineffective pharmacotherapy, compounded by the absence of objective diagnostics. For the Big Pharma industry, late-stage failures of CNS therapeutics are blamed on lack of efficacy 17. Laboratory researchers, for their part, are also beginning to doubt the validity of existing animal models 18. Light at the end of the tunnel for patients and stakeholders in pain research might come from an unlikely source: neurotechnology.

-Carl

Citations

1. Research and Markets. http://www.researchandmarkets.com/product/95248b/peripheral_neuropathy_and_neuropathic_pain.

2. Dworkin, R.H., et al. (2011) Evidence-based clinical trial design for chronic pain pharmacotherapy: a blueprint for ACTION. Pain 152, S107-115

3. McWilliams, L.A., et al. (2003) Mood and anxiety disorders associated with chronic pain: an examination in a nationally representative sample. Pain 106, 127-133

4. Liebeskind, J.C. (1991) Pain can kill. Pain 44, 3-4

5. Institute of Medicine. http://www.iom.edu/Reports/2011/Relieving-Pain-in-America-A-Blueprint-for-Transforming-Prevention-Care-Education-Research/Report-Brief.aspx.

6. Hochberg, L.R., et al. (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164-171

7. Cunningham, J.P., et al. (2011) A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces. Journal of neurophysiology 105, 1932-1949

8. Beurrier, C., et al. (2001) High-frequency stimulation produces a transient blockade of voltage-gated currents in subthalamic neurons. J Neurophysiol 85, 1351-1356

9. Benabid, A.L., et al. (2002) Mechanisms of deep brain stimulation. Mov Disord 17 Suppl 3, S73-74

10. Hammond, C., et al. (2008) Latest view on the mechanism of action of deep brain stimulation. Mov Disord 23, 2111-2121

11. Toda, H., et al. (2008) The regulation of adult rodent hippocampal neurogenesis by deep brain stimulation. Journal of neurosurgery 108, 132-138

12. Iwata, M., et al. (2011) High-frequency stimulation in the ventral posterolateral thalamus reverses electrophysiologic changes and hyperalgesia in a rat model of peripheral neuropathic pain. Pain

13. Lucas, J.M., et al. (2011) Motor cortex stimulation reduces hyperalgesia in an animal model of central pain. Pain 152, 1398-1407

14. Ativanichayaphong, T., et al. (2008) A combined wireless neural stimulating and recording system for study of pain processing. Journal of neuroscience methods 170, 25-34

15. Venkatraman, S., et al. (2009) A system for neural recording and closed-loop intracortical microstimulation in awake rodents. IEEE transactions on bio-medical engineering 56, 15-22

16. Tracey, I., and Bushnell, M.C. (2009) How neuroimaging studies have challenged us to rethink: is chronic pain a disease? The Journal of Pain 10, 1113-1120

17. Arrowsmith, J. (2011) Trial watch: phase III and submission failures: 2007-2010. Nature reviews. Drug discovery 10, 87

18. Mogil, J.S., et al. (2010) The necessity of animal models in pain research. Pain 151, 12-17

JAMAICA’S PUBLIC DEBT – A CASE STUDY IN DEBT EXCHANGE OPERATIONS

Here’s a prezi I made for a talk I gave at the Caribbean Development Bank in Bridgetown, Barbados, last February on Jamaica’s public debt, and more specifically on the lessons learned from the 2010 debt exchange operation. In hindsight, it now looks as though the exchange was a missed opportunity to reduce the debt burden, as the latest indications show a full-circle return to pre-exchange debt dynamics for Jamaica.

-Samer

PUBLIC PRIVATE DEBT BURDEN: A STOCKTAKING SINCE THE LEHMAN FAILURE

The global financial crisis showed that the distinction between private and public debt is far less important than previously thought: private debt can quickly become public debt, and high public debt can quickly hinder private borrowing. Recent experience in Europe shows that refinancing problems, even for the sovereign, can arise abruptly. More importantly, they can arise from large public debt (Greece), large private debt (Ireland), or a combination of the two (Portugal). In February 2011, the G20 announced that it would monitor country imbalances using the following indicators: public debt and fiscal deficit; private debt and private savings rate; and the external imbalance. In this post, we take stock of the main changes in public and private debt levels after the crisis and explore their implications for financial stability.

METHODOLOGY

We examine changes in the total (public and private) debt of a number of countries during 2005-10 based on quarterly flow of funds data from Haver Analytics. Total debt is calculated by summing the debt of nonfinancial corporations, financial institutions, general government and households, including non-profit institutions serving households (NPISHs). The debt of each sector is calculated as the sum of (i) securities other than shares (excluding financial derivatives); (ii) loans; and (iii) accounts payable. The underlying data for these variables come from national flow of funds accounts. Finally, we divide the debt stocks by GDP to get a measure of the total debt burden of the economy (given that GDP reflects the capacity of the whole economy to generate income and repay the debt).
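As an illustration of that aggregation, here is a minimal Python sketch with placeholder figures standing in for the Haver Analytics series:

    # Minimal sketch: total debt by sector and the total-debt-to-GDP ratio.
    # Debt per sector = securities other than shares + loans + accounts payable.
    # All figures are hypothetical placeholders, in USD billions.
    sectors = {
        # sector: (securities, loans, accounts_payable)
        "nonfinancial_corporations": (5200, 2700, 1900),
        "financial_institutions":    (8900, 1600, 400),
        "general_government":        (9800, 300, 200),
        "households_and_NPISHs":     (0, 13000, 300),
    }

    gdp_bn = 15000                 # hypothetical GDP, USD billions

    total_debt_bn = sum(sum(components) for components in sectors.values())
    print(f"Total debt: ${total_debt_bn / 1000:.1f} trillion")
    print(f"Total debt / GDP: {total_debt_bn / gdp_bn:.0%}")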

FINDINGS

Outstanding debt levels in advanced countries—public and private—are generally higher than they were before Lehman, despite deleveraging in some sectors. This suggests higher refinancing risk for both public and private borrowers in the years ahead.

In particular:

  • The public sector has been the primary driver of debt accumulation in advanced countries after Lehman’s demise due to continued fiscal deficits and sluggish growth, sometimes more than offsetting the positive developments in other sectors. Gross general government debt is now higher across the board for all countries, but especially for Japan and the US. This is also the case on a net debt basis.
  • Outstanding debt of the financial sector is also higher in most cases, except for Germany, Ireland and the United States. However, due to ongoing efforts to boost bank capital, the net asset position of the financial sector has improved in most cases.
  • The household sector has been deleveraging in most advanced countries, except in GIP, as consumers shy away from contracting new debt and increase savings. At the same time, due to increased valuation of household assets, the asset position of households is now in much better shape, except for Greece. In the US, a significant part of the household deleveraging can be attributed to write-offs on consumer loans and mortgages.
  • Outstanding debt of the non-financial corporate sector is somewhat higher than before, especially in Ireland and Portugal. On a net basis, the financial position of the non-financial corporate sector has improved in most cases, except for Ireland and Portugal.

More broadly:

  • Total Debt is still growing. In most advanced countries, the economy as a whole, putting all sectors together, is still accumulating debt.
  • Risk transfer. Although private debt has been declining gradually since Lehman, this has been more than offset by the increase in public debt.
  • Initial conditions matter. We can show (see below) that countries with higher private debt before the crisis experienced larger increases in public debt in the post-crisis period.
  • Refinancing risk. Given the larger total debt stock, refinancing risks (for both private and public debt issuers) are now much higher and likely to remain elevated for some time.
  • Debt workouts. In highly indebted countries where domestic savings are low (as reflected in current account deficits) and growth prospects are weak, debt workouts may become necessary.
  • Sovereign risk. A lesson from the crisis seems to be that authorities need to monitor the private stock of debt, in addition to the public sector debt.

With the public sector becoming the main engine of debt accumulation in the post-crisis period, policymakers (including public debt managers) need to expand their monitoring scope to include private as well as public debt. To the extent that new government debt could end up crowding out corporate debt at specific maturities, and vice versa, policymakers and private-sector risk managers need to pay more attention to refinancing risk in the years ahead. Gross and net debt levels are worth monitoring in parallel, as the former can give a picture of potential liquidity problems, while the latter can reveal solvency problems.

-Samer

Serkan Arslanalp from the IMF contributed to this post.

US HOUSEHOLDS NET WORTH – SOME GOOD NEWS

Economic realities and short-term prospects are bad in the US. You hear it everywhere, from the media to your next-door neighbor. Growth is flat, unemployment is high, and the housing market has crashed, presumably with some room left to fall. But beyond the media hype, how bad are things, fundamentally? And how do they compare with previous recessions? I wanted to shed some light on this matter using a single, uncomplicated economic measure with a long enough time series to cover at least the last 50 years. The simple yet powerful concept of net worth came to mind.

Net worth is simply what’s left on a balance sheet after deducting all liabilities from assets. What you have minus what you owe. It’s a core measure of solvency, used mostly in financial sector economics (bank capital), but also in any kind of balance sheet analysis involving assets and liabilities. I wanted to analyze the combined net worth of US households over the last 50 years and see where we stand. Luckily, the readily available and infinitely useful flow of funds data from the Federal Reserve provide a nice historical quarterly time series of US households’ balance sheet positions. Below is a quick visualization I made using that data.

One line traces the evolution of combined household net worth in USD billions, and the other shows the share of net worth in total assets; for both indicators, the higher the number, the better.
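For clarity, here is a minimal sketch of how the two indicators are computed from a household balance sheet; the figures are placeholders, not the Fed’s flow of funds data:

    # Minimal sketch: household net worth and the net-worth-to-assets share.
    # Figures are illustrative placeholders, in USD billions.
    total_assets_bn = 72000        # hypothetical household assets
    total_liabilities_bn = 13500   # hypothetical household liabilities

    net_worth_bn = total_assets_bn - total_liabilities_bn
    net_worth_share = net_worth_bn / total_assets_bn

    print(f"Net worth: ${net_worth_bn / 1000:.1f} trillion")
    print(f"Net worth / total assets: {net_worth_share:.0%}")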

The viz shows a steady increase in net worth from the early 50′s to 2007, except for a hiccup related to the dot-com bust in the first few years of the 21st century. Then came 2008, and everything came crashing down with armageddon-like ferocity.

The share of net worth in assets, which can be considered a measure of the “soundness” of net worth, has similarly gone through the dot-com and 2008 turbulence, but it has also been eroding slowly and steadily since the early 50′s.

So far so good, except that, before crunching the numbers, I was expecting to see a particular trend post-2008, namely declining net worth and a declining share of net worth to assets among US households, to go with the gloomy stream of economic news and market sentiment of the past few years. Instead, the numbers show a rather strong rebound in both indicators starting from early 2009. Lo and behold, things are not so desperate after all if you trust the fundamental information derived from the flow of funds data. How could that be? One simple explanation, also provided by the flow of funds sectoral balance sheet data for the US economy, is that Americans have taken extra care post-2008 to reduce their credit card debt (liabilities) and increase their savings deposits (assets), hence improving their net worth. While aggregate consumption (and hence growth) has been the main victim of this balance sheet cleanup, things are looking better fundamentally for American households.

Caveats. Along with the level of net worth, one also has to look at the distribution: we already know from census data that the rich are getting richer and skewing the averages. Equally important is a look at earning power and disposable income: unlike net worth (a solvency concept), earning power is an income statement concept, or a “liquidity” concept, and it is indeed more palpable than net worth. So granted, household net worth is only one part of the economic picture, but it’s a fundamental one with long-term implications.

It is hard to convince someone who has lost their job, or seen the value of their house slashed in half, that things are looking brighter. The alternative is trusting the media. I’d rather trust the hard numbers.

-Samer