Thursday, December 1, 2011

ARZone Podcast 23: David Sztybel - Vegan Animal Ethicist

Episode 23 features Dr. David Sztybel.

David has been a vegan since 1988 and talks about his evolving philosophy in the field of animal ethics, how a genuine interest in the concept of ahimsa has been an influence in his life, his interest in normative sociology, his journey to veganism, and his vision for the future.




You may also LISTEN HERE.
There were one or two technical difficulties during the recording of the podcast, so we hope you do not find them too distracting; now and then, a word or two is not audible.

Also, since some of the podcast is quite philosophical in nature, you may find it useful to refer to the following glossary of terms while you listen.


GLOSSARY

Affect. Principally refers to (a) feelings and (b) desires. Also extends to moods, attitudes, and related phenomena.

Affective cognition. Part of David Sztybel's ethical theory: awareness of feelings and desires and their properties. For example, one may be aware that one is pleased. A property of the pleasure is that it is medium-grade wistful anticipation. Traditionally, feelings and desires are supposed to be noncognitive. Sztybel questions this because we are aware of our feelings, and not through the five senses either. So we must be aware of them somehow: through another mode of cognition or awareness. Let's call it affective cognition, since it is literally the way we come to be aware of feelings as such. Are we aware that pain feels bad, not only in a verbal sense, through affective cognition? Does pain ever feel good or indifferent? Masochists feel bad from pain too, or they would not inflict such hardship on themselves out of self-hatred or what-have-you.

Epistemology. The theory of knowledge in philosophy. It investigates questions such as: What is knowledge? How strict a standard should we use before we say we know that a belief is true? Should a belief be beyond all possible doubt? How is knowledge distinguished from, say, rational beliefs? Many epistemologists agree that we have certain knowledge of our mental states, but that technically, we do not have knowledge of the external world (which includes pencils and pogo sticks), because, for example, our perceptions could conceivably be "fed" to us as an illusion by a powerful being. These theorists would say we have very rational beliefs about the world we perceive outside of ourselves, whereas we know that 2 + 2 = 4, or that we are in pain, to give two examples. Others say that of course we know about the external world! But then it is a real challenge to theorize that supposed knowledge. There are four popular theories of knowledge:

(a) foundationalism. Holds that we are aware of some truths that are self-evident, or otherwise evident, and that these basic pieces of knowledge form a foundation from which we can logically derive other pieces of knowledge. For example, I am aware of the side of a car. With other foundational bits of knowledge, I can be aware of a car as a whole, calculate estimated times of arrival, and so forth.

(b) coherentism. Propounds that no single belief is a foundation of knowledge; rather, a web of beliefs that "cohere" (stick together in terms of logic, analysis, causation, explanation, definitions, and so on) accounts for our claims to knowledge. My knowledge of the side of a car depends on knowledge about colour, distance, cars, and so forth. Beliefs depend on many other beliefs and a variety of different forms of evidence.

(c) skepticism. Essentially denies that we have knowledge. Sometimes it is confined to certain areas of knowledge, such as purported knowledge of the so-called external world.

(d) pragmatism. Agrees at some level with the skeptic that we cannot give an absolute account of supposedly everyday knowledge, but insists that we still need a sense of knowledge by which we can speak unequivocally about the world we try to access with the five senses of sight, hearing, smell, taste, and touch. So pragmatism holds that if it "works" for individuals to say that they know, then they know. It works much better in a court of law, for example, to declare with certitude what one has witnessed, if that applies, rather than unnecessarily introducing doubts that are impractical to consider and could needlessly undermine a case.

Intuition. A belief so basic that no justification can be given for it. For example, the following assumption is an intuition: we should maximize pleasure and minimize pain overall. That assumption is a version of act utilitarianism in ethics. See below on utilitarianism.

Intuitionism. The view, sometimes confined to ethics, that it is acceptable to base our moral theories on intuitions. After all, we need to start somewhere. If we have an infinite chain of beliefs, each justified in turn by a further belief, and so forth, we will not be able to start the process of justification: the first belief would also need to be justified, and so on. Intuitionism has in common with foundationalism (see above) the possibility of starting with a single belief. Where intuitionism and foundationalism may differ is that whereas moral intuitions have no justification, a foundational belief about a computer screen in front of oneself is informed by various sensory perceptions that seem to count as relevant data in forming the belief in question.

Utilitarianism. The view that we should maximize the good and minimize the bad overall. Two typical versions of value theory for utilitarianism are hedonism (good = pleasure and bad = pain) and what I call preferentialism (good = preference satisfaction and bad = preference frustration).

(a) Act utilitarians try to choose, for each individual action, the option with the most utility, that is, the most units of value and the fewest units of disvalue (a brief illustrative sketch follows this list).

(b) Rule utilitarians hold that we cannot literally calculate utility for individual actions, and that trying to do so might produce biased results or courses of action that are too risky. Therefore, we should go by the set of rules that maximizes utility. Examples: Do not kill. Do not rape. And so forth.

(c) Indirect utilitarianism. Puts forward the idea that we cannot calculate utility for rules either, and that, paradoxically, it would maximize utility (produce the most good and least bad overall) to forget about utilitarianism and instead respect rights, be a loyal friend or lover, and have a sturdy character. In other words, to go by common-sense morality. It is believed that trying to be a utilitarian might make one cold, calculating, treacherous, uncaring, and untrustworthy. For instance, one might be inclined to betray someone if it serves "the greater good."
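
For readers who like a numerical illustration, here is a minimal sketch in Python of the kind of comparison an act utilitarian attempts. The candidate actions and utility figures are invented purely for illustration; they are not drawn from the podcast or from Sztybel's work.

    # Toy act-utilitarian comparison with hypothetical numbers, shown only
    # to illustrate the arithmetic: net utility = units of value minus
    # units of disvalue, and the action with the highest net utility wins.
    actions = {
        "keep a promise": {"value": 8, "disvalue": 1},
        "break the promise for convenience": {"value": 5, "disvalue": 4},
    }

    def net_utility(outcome):
        return outcome["value"] - outcome["disvalue"]

    best = max(actions, key=lambda name: net_utility(actions[name]))
    print(best)  # -> "keep a promise" (net +7 versus net +1)

A rule utilitarian would run the same style of comparison over whole rules rather than individual actions, and an indirect utilitarian would say that even attempting such calculations tends to lower overall utility.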

2 comments:

Ellie said...

There seems to be a problem with the media player. I look forward to Dr. David Sztybel's Podcast.

ARZone said...

Hi Ellie,

I have just tried to listen via the link and the player above and they both work fine for me. If you're still having problems, please let me know though and I'll be more than happy to look into it further.

Thanks!

Carolyn
Carolyn@ARZone.net
