The book publishing program at Duke University Press is growing!

This month we add a new acquisitions editor—Elizabeth Ault—to our team. Elizabeth started at the Press in 2012, and she has been working with our editorial director, Ken Wissoker, on his book projects. In 2014 Elizabeth was promoted to assistant editor as she began to acquire projects of her own, and in 2016 she was promoted to associate editor. She has steadily built a list in African studies and has been regularly attending the African Studies Association conference on behalf of the Press. She has also acquired titles in film and media studies and American studies and has worked with the editors of our journal *Camera Obscura* to restart their book series.

Most recently, Elizabeth launched a new book series, “Theory in Forms,” edited by Achille Mbembe, Nancy Hunt, and Juan Obarrio, which will focus on theory from the Global South. The series builds upon Duke’s commitment to innovative, interdisciplinary, and international scholarship and also points to some of the new directions that Elizabeth’s list will take.

Elizabeth plans to acquire titles in African studies, urban studies, Middle East studies, geography, theory from the South, Black and Latinx studies, disability studies, trans studies, and critical prison studies. As is characteristic of our list, these areas overlap and intersect with other editors’ areas of acquisitions. We take pride in the intellectual synergy that comes from the intersections between our editors’ lists (as well as between our book and journal publications), and we hope that adding another editor to our team will allow Duke UP to expand the intellectual breadth of our list even further.

Elizabeth says, “It’s an exciting time for me – and for the Press! I’m looking forward to finding surprising turns in established fields of inquiry as well as supporting emerging conversations, particularly those between activists and academics. I’m so thrilled that I’ll be able to more fully support the authors and series editors I’ve already been working with, and also that I’ll get to learn fields that will be new to me and to DUP, expanding our spirit of interdisciplinary inquiry.”

Prior to joining Duke UP, Elizabeth earned an A.B. in American Studies from Brown University and a Ph.D. in American Studies from the University of Minnesota. She has published her research in *Television & New Media*, among other places. While in graduate school, Elizabeth worked at the Minnesota Historical Society Press, where she helped to write the catalog for The 1968 Exhibit. In addition to her editorial work, Elizabeth is an active participant in Durham community organizations like Southerners on New Ground and the Durham Prison Books Collective.

To submit your book project to Duke University Press, contact Elizabeth or another of our acquisitions editors by email. See the requirements here.

Sure is important, observed Prof. Dr. Mircea Orasanu.


Here we consider some important aspects, as stated by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as follows.

LAGRANGIAN AND ELECTROMAGNETISM

ABSTRACT

Transformations

Spherical:

Transformations

Useful equations

Cylindrical:

Differential length vectors

Del Operator:

Green’s Theorem

Divergence Theorem

Stokes’ Theorem

Dielectric Material Properties:

Magnetic Material Properties:

Displacement Field:

Magnetic Field Intensity:

Electric Field for a point charge, q

Magnetic Field for a ‘point’ current, dI

Lorentz Force Equation:

Ohm’s Law:

Maxwell’s Equations Integral Form:

1 INTRODUCTION

Maxwell’s Equations Point Form:
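The equations for these headings appear to have been images that did not survive extraction. As a representative reconstruction for this entry (the standard point, i.e. differential, form in SI conventions; the lost original may have used different notation):

```latex
\begin{aligned}
\nabla \cdot \mathbf{D} &= \rho_v, \\
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, \\
\nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}.
\end{aligned}
```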

Boundary Conditions:

Electric and Magnetic Potentials:

Stored Energy:

Poynting Vector:

General Wave Equations:

Plane Wave:

Magnetic Vector Potential:

Hertzian Dipole Antenna:

Long Dipole Antenna: (Far field)

2 Antenna Array: (Form Factor)

2 Group Array: (Form Factor)

Index

Section 1. Basic concepts and basic mathematics

1.1. History

For now (until I can write my own version)

Copied from http://history.hyperjeff.net/electromagnetism.html

Antiquity Many things are known about optics: the rectilinearity of light rays; the law of reflection; transparency of materials; that rays passing obliquely from a less dense to a more dense medium are refracted toward the perpendicular of the interface; general laws for the relationship between the apparent location of an object in reflections and refractions; and the existence of metal mirrors (glass mirrors being a 19th century invention).

ca 300 BC Euclid of Alexandria (ca 325 BC – ca 265 BC) writes, among many other works, Optics, dealing with vision theory and perspective.

Convex lenses in existence at Carthage.

1st cent BC Chinese fortune tellers begin using loadstone to construct their divining boards, eventually leading to the first compasses. (Mentioned in Wang Ch’ung’s Discourses weighed in the balance of 83 B.C.)

1st cent South-pointing divining boards become common in China.

2nd cent Claudius Ptolemy (ca 85 – ca 165) writes on optics, deriving the law of reflection from the assumption that light rays travel in straight lines (from the eyes), and tries to establish a quantitative law of refraction.

Hero of Alexandria writes on the topics of mirrors and light.

ca 271 True compasses come into use by this date in China.

6th cent (China) Discovery that loadstones could be used to magnetize small iron needles.

11th cent Abu Ali al-Hasan ibn al-Haitam (Alhazen) (965-1039) writes Kitab al-manazir (translated into Latin as Opticae thesaurus Alhazeni in 1270) on optics, dealing with reflection, refraction, lenses, parabolic and spherical mirrors, aberration and atmospheric refraction.

(China) Iron magnetized by heating it to red hot temperatures and cooling while in south-north orientation.

1086 Shen Kua’s Dream Pool Essays make the first reference to compasses used in navigation.

1155–1160 Earliest explicit reference to magnets per se, in Roman d’Enéas.

1190–1199 Alexander Neckam’s De naturis rerum contains the first western reference to compasses used for navigation.

13th cent Robert Grosseteste (1168-1253) writes De Iride and De Luce on optics and light, experimenting with both lenses and mirrors.

Roger Bacon (1214-1294) (student of Grosseteste) is the first to try to apply geometry to the study of optics. He also makes some brief notes on magnetism.

Pierre de Maricourt, a.k.a. Petrus Peregrinus (fl. 1269), writes Letter on the magnet of Peter the Pilgrim of Maricourt to Sygerus of Foucaucourt, Soldier, the first western analysis of polar magnets and compasses. He also demonstrates in France the existence of two poles of a magnet by tracing the directions of a needle laid on a natural magnet.

Witelo writes Perspectiva around 1270, treating geometric optics, including reflection and refraction. He also reproduces the data given by Ptolemy on optics, though he was unable to generalize or extend the study.

Theodoric of Freiberg (d ca 1310), working with prisms and transparent crystalline spheres, formulates a sophisticated theory of refraction in raindrops which is close to the modern understanding, though it did not become very well known. (Descartes presents a nearly identical theory roughly 450 years later.)

Eyeglasses, convex lenses for the far-sighted, first invented in or near Florence (as early as the 1270s or as late as the late 1280s – concave lenses for the near-sighted appearing in the late 15th century).

16th cent Girolamo Cardano (1501-1576) elaborates the difference between amber and loadstone.

1558 Giambattista Della Porta (1535-1615) publishes his major work, Magia naturalis, analyzing, among many other things, magnetism.

1600 William Gilbert (1544-1603), after 18 years of experiments with loadstones, magnets and electrical materials, finishes his book De Magnete. The work included: the first major classification of electric and non-electric materials; the relation of moisture and electrification; showing that electrification affects metals, liquids and smoke; noting that electrics were the attractive agents (as opposed to the air between objects); that heating dispelled the attractive power of electrics; and showing the earth to be a magnet.

1606 Della Porta first describes the heating effects of light rays.

1618 April 2nd, Francesco Maria Grimaldi discovers diffraction patterns of light and becomes convinced that light is a wave-like phenomenon. The theory is given little attention.

1621 Willebrord van Roijen Snell (1580-1626) experimentally determines the law of angles of incidence and reflection for light and for refraction between two media.

1629 Nicolo Cabeo publishes his observations on electrical repulsion, noting that attracting substances may later repel one another after making contact.

1637 René Descartes publishes his Dioptics and On Meteors as appendices to his Discourse on a Method, detailing a theory of refraction and going over a theory of rainbows which, while containing nothing essentially new, encouraged experimental exploration of the subject.

1644 Descartes’ Principia philosophiae, describing magnetism as the result of the mechanical motion of channel particles and their displacements, and proposing the absence of both void and action at a distance.

1646 Thomas Browne coins the term “electricity” in his Pseudodoxia Epidemica.

ca 1650 (Coffee begins to be imported to, and catch on in, Europe.)

1657 Pierre de Fermat (1601-1665) formulates the principle of least time for understanding the way in which light rays move.

1660 Otto von Guericke (1602-1686) builds the first electrical machine, a rotating frictional generator.

1661 Fermat is able to apply his principle of least time to understand the refractive indices of different materials.

1664 Robert Hooke (1635-1703) puts forth a wave theory of light in his Micrographia, considering light to be a very high speed rectilinear propagation of longitudinal vibrations of a medium in which individual wavelets spherically spread.

1665 Francesco Maria Grimaldi’s Physico-mathesis de lumine coloribus et iride describes experiments with diffraction of light and states his wave theory of light.

1669 Erasmus Bartholin publishes A Study of Iceland Spar, about his discovery of double refraction.

1675 Robert Boyle (1627-91) writes Experiments and Notes about the Mechanical Origine or Production of Electricity. Electrical attraction, it was written, was “a Material Effluvium issuing from and returning to, the Electrical Body.”

1676 Ole Christensen Rømer (1644-1710) demonstrates the finite speed of light via observations of the eclipses of the satellites of Jupiter, though he does not calculate a speed for light. His results were not widely accepted.

1677 Christiaan Huyghens (1629-95) extends the wave theory of light in his work Treatise on Light, unpublished until 1690.

1687 Isaac Newton (1642-1727) notes magnetism to be a non-universal force and derives an inverse cube law for two poles of a magnet.

1690 Huyghens formulates his wave theory of light in Traité de la Lumière, giving the first numerical quote for the speed of light, usually attributed to Rømer, of 2.3 × 10⁸ m/s.

1704 Newton’s research on light culminates in the publication of his Optics, describing light both in terms of wave theory and his corpuscular theory.

1709 Francis Hauksbee’s Physico-Mechanical Experiments on Various Subjects.

1728 James Bradley (1693-1762) discovers the phenomenon of stellar aberration, confirming earlier suggestions by Rømer that the speed of light is finite.

1729 Stephen Gray (ca 1670-1736) shows static electricity to be transported via substances, especially metals.

1733 Charles-François de Cisternai du Fay (1698-1739) discovers that electric charges are of two types and that like charges repel while unlike charges attract.

1745 Kleist invents the Leyden jar for storing electric charge.

1746 William Watson (1715-89) suggests conservation of electric charge.

Jean Antoine Nollet’s Essai sur l’electricité des corps.

1747 Benjamin Franklin (1706-90) proposes that electricity be modeled by a single fluid with two states of electrification (materials having more or less than a normal amount of electric fluid), independently proposing conservation of electric charge and introducing the convention of describing the two types of charges as positive and negative.

Watson passes electrical charge along a two mile long wire.

1750 John Michell (1724-93) demonstrates that the action of a magnet on another can be deduced from an inverse square law of force between individual poles of the magnet, published in his work, A Treatise on Artificial Magnets.

1759 Franz Ulrich Theodosius Aepinus (1724-1802) publishes An Attempt at a Theory of Electricity and Magnetism, the first book applying mathematical techniques to the subject.

1764 Johannes Wilcke invents the electrophorus, a device which can produce relatively large amounts of electric charge easily and repeatedly.

1766 Joseph Priestley (1733-1804) deduces the inverse square law for electric charges using the results of experiments showing the absence of electrical effects inside a charged hollow conducting sphere.

1772 Henry Cavendish publishes, “An Attempt to Explain some of the Principal Phenomena of Electricity, by Means of an Elastic Fluid.”

1775 Alessandro Giuseppe Antonio Anastasio Volta (1745-1827) invents an electrometer, a plate condenser and the electrophorus.

1777 Charles Augustin de Coulomb’s (1736-1806) research sets a new direction in research into electricity and magnetism.

early 1780s Luigi Galvani (1737-98) uses the response of animal tissue to begin studies of electrical currents produced by chemical action rather than from static electricity. The mechanical response of animal tissue to contact with two dissimilar metals is now known as galvanism.

2 FORMULATION


1785 Coulomb independently invents the torsion balance to confirm the inverse square law of electric charges. He also verifies Michell’s law of force for magnets and also suggests that it might be impossible to separate two poles of a magnet without creating two more poles on each part of the magnet.

1799 Volta shows that galvanism is not of animal origin but occurs whenever a moist substance is placed between two metals. This discovery eventually leads to the “Volta pile” a year later, the first electric battery.

1800 Volta writes a paper on electricity by contact.

1801 Thomas Young’s (1773-1829) work on interference revives interest in the wave theory of light. He also accounts for the recently discovered phenomenon of light polarization by suggesting that light is a vibration in the aether transverse to the direction of propagation.

Johann Georg von Soldner (1776-1833) makes a calculation for the deflection of light by the sun assuming a finite speed of light corpuscles and a non-zero mass. (The result, 0.85 arc-sec, was rederived independently by Cavendish and Einstein (1911), but went unnoticed until 1921.)

1807 H Davy’s lecture, “On Some Chemical Agents of Electricity,” drawing closer the possible relationships of chemical and electrical forces.

1812 Simeon-Denis Poisson (1781-1840) formulates the concept of macroscopic charge neutrality as a natural state of matter and describes electrification as the separation of the two kinds of electricity. He also points out the usefulness of a potential function for electrical systems.

1813 Measurements of specific heat of air as a function of pressure by Delaroche and Bérard.

1814 Augustin Jean Fresnel (1788-1827) independently discovers the interference phenomena of light and explains its existence in terms of wave theory.

1817 Fresnel predicts a dragging effect on light in the aether.

1818 Fresnel’s essay on optics and the aether.

1820 (July 21) Hans Christian Oersted (1777-1851) notes the deflection of a magnetic compass needle caused by an electric current after giving a lecture demonstration. Oersted then demonstrates that the effect is reciprocal. This initiates the unification program of electricity and magnetism.

July 27, André Marie Ampère (1775-1836) confirms Oersted’s results and presents extensive experimental results to the French Academy of Science. He models magnets in terms of molecular electric currents. His formulation inaugurates the study of electrodynamics independent of electrostatics.

Fall, Jean-Baptiste Biot (1774-1862) and Felix Savart (1792-1841) deduce the formula for the strength of the magnetic effect produced by a short segment of current-carrying wire.

1825 Ampère’s memoirs on his research into electrodynamics are published.

1827 Georg Simon Ohm (1789-1854) formulates the relationship between current, electromotive force, and electrical resistance.

1828 George Green (1793-1841) introduces the notion of potential and formulates what is now called Green’s Theorem relating the surface and volume distributions of charge. (The work goes unnoticed until 1846.)

1831 Michael Faraday (1791-1867) begins his investigations into electromagnetism.

1832 Gauss (1777-1855) independently states Green’s Theorem without proof. He also reformulates Coulomb’s law in a more general form, and establishes experimental methods for measuring magnetic intensities.

1835 Gauss formulates separate electrostatic and electrodynamical laws, including “Gauss’s law.” All of it remains unpublished until 1867.

1838 Faraday explains electromagnetic induction and electrochemistry, formulates his notion of lines of force, and criticizes action-at-a-distance theories.

Wilhelm Eduard Weber (1804-91) and Gauss apply potential theory to the magnetism of the earth.

1839 The potential theory for magnetism developed by Weber and Gauss is extended to all inverse-square phenomena.

1842 William Thomson (Lord Kelvin, 1824-1907) writes a paper, “On the uniform motion of heat and its connection with the mathematical theory of electricity,” based on the ideas of Fourier. The analogy allows him to formulate a continuity equation of electricity, implying a conservation of electric flux.

1845–1850 Michael Faraday introduces the idea of “contiguous magnetic action” as a local interaction, instead of the idea of instantaneous action at a distance, using concepts now known as fields. He also establishes a connection between light and electrodynamics by showing that the transverse polarization direction of a light beam was rotated about the axis of propagation by a strong magnetic field (today known as “Faraday rotation”).

G T Fechner proposes a connection between Ampère’s law and Faraday’s law in order to explain Lenz’s law.

1846 Weber proposes a synthesis of electrostatics, electrodynamics and induction using the idea that electric currents are moving charged particles. The interactions are instantaneous forces. Weber’s theory contains a limiting velocity of electromagnetic origin with the value √2 · c.

William Robert Grove’s (1811-1896) Correlation of physical forces

The partial-drag theory of George Gabriel Stokes (1819-1903) is revived for the explanation of stellar aberration.

1849 A.H.L. Fizeau begins experiments to determine the speed of light.

1851 Fizeau’s interferometry experiment confirming Fresnel’s theoretical results.

1852 Stokes names and explains the phenomena of fluorescence.

1854 Bernhard Riemann (1826-66) makes unpublished conjectures about an ‘investigation of the connection between electricity, galvanism, light and gravity.’

1855 Weber and R Kohlrausch determine a limiting velocity which turns up in Weber’s electrodynamic theory; its value is about 439,450 km/s.

1855–1868 James Clerk Maxwell (1831-79) completes his formulation of the field equations of electromagnetism. He establishes, among many other things, the connection between the speed of propagation of an electromagnetic wave and the speed of light, laying the theoretical groundwork for the understanding of light.

1858 Riemann generalizes Weber’s unification program and derives his results via a solution to a wave equation of an electrodynamical potential (finding the speed of propagation, correctly, to be c). He claimed to have found the connection between electricity and optics. (Results published posthumously in 1867.)

1861 Riemann uses Lagrange’s theorem to deal with velocity-dependent electrical accelerations.

Gustav Robert Kirchhoff (1824-1887) formulates the model of the black body.

1863 John Tyndall’s Heat Considered as a Mode of Motion.

1864 Maxwell publishes A Dynamical Theory of the Electromagnetic Field, his first publication to make use of his mathematical theory of fields.

1865 Maxwell’s A Dynamical Theory of the Electromagnetic Field, giving an electrodynamical formulation of wave propagation using Lagrangian and Hamiltonian techniques and obtaining the theoretical possibility of generating electromagnetic radiation. (The derivation is independent of the microscopic structures which may underlie such phenomena.)

1870 Hermann Ludwig Ferdinand von Helmholtz (1821-94) develops a theory of electricity and shows Weber’s theories to be inconsistent with the conservation of energy.

1873 The first edition of Maxwell’s Treatise on Electricity and Magnetism is published.

1874 George J Stoney estimates the charge of an electron to be about 10⁻²⁰ coulombs and introduces the term “electron.”

1875 Hendrik Antoon Lorentz (1853-1928), in his doctoral thesis, derives the phenomena of reflection and refraction in terms of Maxwell’s theory.

W Crookes performs experiments on cathode rays.

1879 Maxwell suggests that an earth-based experiment to detect possible aether drifts could be performed, but that it would not be sensitive enough.

1881 A.A. Michelson begins his interferometry experiments to detect a luminiferous aether.

1884 Heinrich Rudolf Hertz (1857-94) develops a reformulation of electrodynamics and shows his and Helmholtz’s theories both amount to Maxwell’s theory.

Poynting establishes that electromagnetic radiation energy can be localized and can flow (the first such energy localization principle established).

where A and A’ are two different surfaces with the same edge. Thus we can evaluate the integral over a surface which is entirely in the x-y plane with the normal in the z direction.

Problem 10

There are at least two ways to do this problem. First, since the surface is closed, we can divide it into two parts, each having the same edge. From problem 9 we know that
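The field and surfaces of Problems 9 and 10 were not preserved (the equations appear to have been images), so the following is only a generic numerical illustration of the surface-independence used above: by Stokes' theorem, the flux of curl F is the same through any two surfaces sharing the same edge, and equals the circulation of F around that edge. The field F = (−y, x, 0) and the unit circle as the common edge are assumed here for illustration.

```python
import numpy as np

# Assumed example: F = (-y, x, 0), so curl F = (0, 0, 2); the common edge
# is the unit circle in the x-y plane.

N = 100_000

# Circulation of F around the unit circle.
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dt = 2.0 * np.pi / N
x, y = np.cos(t), np.sin(t)
dx, dy = -np.sin(t) * dt, np.cos(t) * dt
circulation = np.sum(-y * dx + x * dy)

# Flux of curl F through the upper unit hemisphere (same edge).
# On the unit sphere, curl F . n = 2*cos(theta) and dA = sin(theta) dtheta dphi.
n = 100_000
th = (np.arange(n) + 0.5) * (np.pi / 2.0) / n          # midpoint rule in theta
flux = 2.0 * np.pi * np.sum(2.0 * np.cos(th) * np.sin(th)) * (np.pi / 2.0 / n)

print(circulation, flux)   # both approximately 2*pi = 6.2832
```

The same flux is obtained through the flat unit disk, since curl F is constant and 2 × (disk area) = 2π.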


Here we have some considerations, as stated by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as follows.

LAGRANGIAN AND GRADIENTS IN ELECTROMAGNETISM

ABSTRACT

The action S (or W) is stationary for true trajectories, i.e., the first variation δS vanishes for all small trajectory variations consistent with the given constraints. If the second variation is positive definite (δ²S > 0) for all such trajectory variations, then S is a local minimum; otherwise it is a saddle point, i.e., at second order the action is larger for some nearby trial trajectories and smaller for others, compared to the true trajectory action. As defined in Section 1, action is never a local maximum, as we shall discuss. (In relativistic mechanics (see Section 9) two sign conventions for the action have been employed, and whether the action is never a maximum or never a minimum depends on which convention is used. In our convention it is never a minimum.)

We discuss here the case of the Hamilton action S for one-dimensional (1D) systems, and refer to Gray and Taylor (2007) for discussions of Maupertuis’ action W and 2D etc. systems. For some 1D potentials V(x) (those with ∂²V/∂x² ≤ 0 everywhere), e.g. V(x) = 0, V(x) = mgx, and V(x) = −Cx², all true trajectories have minimum S. For most potentials, however, only sufficiently short true trajectories have minimum action; the others have an action saddle point. “Sufficiently short” means that the final space-time event occurs before the so-called kinetic focus event of the trajectory. The latter is defined as the earliest event along the trajectory, following the initial event, where the second variation δ²S ceases to be positive definite for all trajectory variations, i.e., where δ²S = 0 for some trajectory variation. Establishing the existence of a kinetic focus using this criterion is discussed by Fox (1950).

An equivalent and more intuitive definition of a kinetic focus can be given. As an example, consider a family of true trajectories x(t, v₀) for the quartic oscillator with V(x) = (1/4)Cx⁴, all starting at P (x = 0 at t = 0), and with various initial velocities v₀ > 0.
Three trajectories of the family, denoted 0 , 1 , and 2 , are shown in Figure 1.
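The kinetic focus is easy to exhibit numerically for a simpler system than the quartic oscillator. The sketch below is an assumed example, not from the original text: a 1D harmonic oscillator (m = ω = 1), whose kinetic focus occurs a time π/ω after the initial event. For the endpoint-fixed variation φ(t) = sin(πt/T) of the true trajectory x(t) = sin(ωt), the first variation vanishes and the action change is exactly ΔS = (ε²/4)·T·((π/T)² − ω²): positive for T < π/ω (minimum) and negative for T > π/ω (saddle point).

```python
import numpy as np

# Assumed example: 1-D harmonic oscillator, m = w = 1, kinetic focus at pi/w.
m, w = 1.0, 1.0

def delta_S(T, eps=1e-3, n=200_000):
    """Action change under the variation eps*sin(pi*t/T), endpoints fixed."""
    dt = T / n
    t = (np.arange(n) + 0.5) * dt                       # midpoint rule
    def action(x_var, xdot_var):
        return np.sum(0.5 * m * xdot_var**2 - 0.5 * m * w**2 * x_var**2) * dt
    x, xdot = np.sin(w * t), w * np.cos(w * t)          # true trajectory
    phi = np.sin(np.pi * t / T)
    phidot = (np.pi / T) * np.cos(np.pi * t / T)
    return action(x + eps * phi, xdot + eps * phidot) - action(x, xdot)

print(delta_S(T=2.0) > 0, delta_S(T=4.0) < 0)   # True True (focus at T = pi)
```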

1 INTRODUCTION

By an argument due originally to Jacobi, it is easy to see intuitively that action S can never be a local maximum (Morin 2008, Gray and Taylor 2007). Note that for any true trajectory the action S in (1) can be increased by considering a varied trajectory with wiggles added somewhere in the middle. The wiggles are to be of very high frequency and very small amplitude so that there is increased kinetic energy K compared to the original trajectory but only a small change in potential energy V. (We also ensure the overall travel time T is kept fixed.) The Lagrangian L=K−V in the region of the wiggles is then larger for the varied trajectory and so is the action integral S over the time interval T. Thus S cannot be a maximum for the original true trajectory. A similar intuitive argument due originally to Routh shows that action W also cannot be a local maximum for true trajectories (Gray and Taylor 2007).
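Jacobi's wiggle argument is easy to check numerically. A minimal sketch, assuming a free particle (V = 0, m = 1), which is not the original's example but the simplest case: the true trajectory x(t) = vt minimizes S, and adding a small high-frequency wiggle that vanishes at the fixed endpoints (travel time T fixed) raises the kinetic energy and hence the action.

```python
import numpy as np

# Free particle, m = 1: S = integral of (1/2) m x'^2 dt over fixed time T.
m, v, T = 1.0, 1.0, 1.0
n = 200_000
dt = T / n
t = np.linspace(0.0, T, n + 1)

def action(x):
    xdot = np.diff(x) / dt                    # velocity on interval midpoints
    return np.sum(0.5 * m * xdot**2) * dt

x_true = v * t
S_true = action(x_true)                       # (1/2) m v^2 T = 0.5

eps, k = 0.01, 10                             # small amplitude, high frequency
x_wiggled = x_true + eps * np.sin(k * np.pi * t / T)   # same endpoints

S_wiggled = action(x_wiggled)
print(S_true, S_wiggled)                      # 0.5 vs ~0.5247
```

The increase is ε²k²π²/(4T): arbitrarily many wiggles of fixed small amplitude push S up without bound, so S can never be a maximum.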

For the purpose of determining the true trajectories, the nature of the stationary action (minimum or saddle point) is usually not of interest. However, there are situations where this is of interest, such as investigating whether a trajectory is stable or unstable (Papastavridis 1986), and in semiclassical mechanics where the phase of the propagator (Section 10) depends on the true classical trajectory action and its stationary nature; the latter dependence is expressed in terms of the number of kinetic foci occurring between the end-points of the true trajectory (Schulman 1981). In general relativity kinetic foci play a key role in establishing the Hawking-Penrose singularity theorems for the gravitational field (Wald 1984). Kinetic foci are also of importance in electron and particle beam optics. Finally, in seeking stationary action trajectories numerically (Basile and Gray 1992, Beck et al. 1989, Marsden and West 2001), it is useful to know whether one is seeking a minimum or a saddle point, since the choice of algorithm often depends on the nature of the stationary point. If a minimum is being sought, comparison of the action at successive stages of the calculation gives an indication of the error in the trajectory at a given stage, since the action should approach the minimum value monotonically from above as the trajectory is refined. The error sensitivity is, unfortunately, not particularly good: due to the stationarity of the action, the error in the action is of second order in the error of the trajectory. Thus a relatively large error in the trajectory can produce a small error in the action.
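The quoted error scaling can be demonstrated directly. A minimal sketch (assumed free-particle example, not from the original): shrinking the trajectory error tenfold shrinks the action error roughly a hundredfold.

```python
import numpy as np

# Free particle, m = 1, true path x = t on [0, 1]; trial path adds an
# endpoint-fixed error eps*sin(pi*t).  Since S is stationary on the true
# path, the action error is O(eps^2).

def S(eps, n=100_000):
    dt = 1.0 / n
    t = (np.arange(n) + 0.5) * dt                     # midpoint rule
    xdot = 1.0 + eps * np.pi * np.cos(np.pi * t)
    return np.sum(0.5 * xdot**2) * dt

err1 = S(0.01) - S(0.0)    # trajectory error 1e-2
err2 = S(0.001) - S(0.0)   # trajectory error 1e-3, ten times smaller
print(err1 / err2)         # ~100: the action error fell a hundredfold
```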

Relation of Hamilton and Maupertuis Principles

For conservative (time-invariant) systems the Hamilton and Maupertuis principles are related by a Legendre transformation (Gray et al. 1996a, 2004). Recall first that the Lagrangian L(q, q̇) and Hamiltonian H(q, p) are so related, i.e.

H(q, p) = pq̇ − L(q, q̇) ,  (6)

where in general pq̇ stands for p₁q̇₁ + p₂q̇₂ + … + p_f q̇_f. If we integrate (6) with respect to t along an arbitrary virtual or trial trajectory between two points q_A and q_B, and use the definitions (1) and (3) of S and W, we get E¯T = W − S, or

S = W − E¯T ,  (7)

where E¯ ≡ (1/T) ∫₀ᵀ H dt is the mean energy along the trial trajectory. (Along a true trajectory of a conservative system, with E¯ = E = const, (7) reduces to the well-known relation (Goldstein et al. 2002) S = W − ET.) From the Legendre transformation relation (7) between S and W, for conservative systems one can derive Hamilton’s principle from Maupertuis’ principle, and vice versa (Gray et al. 1996a, 2004). The two action principles are thus equivalent for conservative systems, and related by a Legendre transformation whereby one changes between energy and time as independent constraint parameters.
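Relation (7) can be verified numerically along a true trajectory. A minimal sketch, assuming a harmonic oscillator with arbitrary parameters (an example system not in the original text):

```python
import numpy as np

# True trajectory of a 1-D harmonic oscillator, H = p^2/(2m) + (1/2) m w^2 x^2:
# x(t) = A sin(wt).  Check S = W - E*T, where S = int L dt, W = int p x' dt,
# and E is the conserved energy; T need not be a full period.

m, w, A, T = 1.0, 2.0, 0.5, 1.3     # assumed values
n = 200_000
dt = T / n
t = (np.arange(n) + 0.5) * dt        # midpoint rule

x = A * np.sin(w * t)
xdot = A * w * np.cos(w * t)

L = 0.5 * m * xdot**2 - 0.5 * m * w**2 * x**2
S = np.sum(L) * dt                   # Hamilton action
W = np.sum(m * xdot**2) * dt         # Maupertuis action, p = m x'
E = 0.5 * m * A**2 * w**2            # conserved energy, K + V at every t

print(S, W - E * T)                  # the two agree
```

The identity is exact here because W − ET = ∫(2K − (K + V)) dt = ∫(K − V) dt = S along a true trajectory.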

The existence in mechanics of two actions and two corresponding variational principles which determine the true trajectories, with a Legendre transformation between them, is analogous to the situation in thermodynamics (Gray et al. 2004). There, as established by Gibbs, one introduces two free energies related by a Legendre transformation, i.e. the Helmholtz and Gibbs free energies, with each free energy satisfying a variational principle which determines the thermal equilibrium state of the system.

Generalizations

We again restrict the discussion to time-invariant (conservative) systems. If we vary the trial trajectory q(t) in (7), with no variation in end positions qA and qB but allowing a variation in end-time T, the corresponding variations δS , δW , δE¯ and δT for an arbitrary trial trajectory are seen to be related by

δS + E¯δT = δW − TδE¯ .  (8)

Next one can show (Gray et al. 1996a) that the two sides of (8) separately vanish for variations around a true trajectory. The left side of (8) then gives δS + EδT = 0, since E¯ = E (a constant) on a true trajectory for conservative systems; this is called the unconstrained Hamilton principle. It can be written in the standard form for a variational relation with a relaxed constraint

δS=λδT ,

where λ is a constant Lagrange multiplier, here determined as λ = −E (negative of the energy of the true trajectory). If we constrain T to be fixed for all trial trajectories, then δT = 0 and we have (δS)_T = 0, the usual Hamilton principle. If instead we constrain S to be fixed we get (δT)_S = 0, the so-called reciprocal Hamilton principle.

The right side of (8) gives δW − TδE¯ = 0, which is called the unconstrained Maupertuis principle; it can also be written in the standard form of a variational principle with a relaxed constraint, i.e. δW = λδE¯, where λ = T (duration of the true trajectory) is a constant Lagrange multiplier. If we constrain E¯ to be fixed for the trial trajectories, we get (δW)_E¯ = 0, which is a generalization of Maupertuis’ principle (4); we see that the constraint of fixed energy in (4) can be relaxed to one of fixed mean energy. If instead we constrain W to be fixed, we get

(δE¯)_W = 0 ,

which is called the reciprocal Maupertuis principle. In these various generalizations of Maupertuis’ principle, conservation of energy is a consequence of the principle for time-invariant systems (just as it is for Hamilton’s principle), whereas conservation of energy is an assumption of the original Maupertuis principle.

In all the variational principles discussed here, we have held the end-positions q_A and q_B fixed. It is possible to derive additional generalized principles (Gray et al. 2004) which allow variations in the end-positions. A word on notation may be appropriate in this regard: the quantities δS, δW, δT and δE¯ denote unambiguously the differences in the values of S etc. between the original and varied trajectories, and q(t) and q(t) + δq(t) denote the original and varied trajectory positions at time t. In considering a generalized principle involving a trajectory variation which includes an end-position variation of, say, q_B, one needs a more elaborate notation (Whittaker 1937, Papastavridis 2002) in order to distinguish between the variation in position at the end-time t_B of the original trajectory, i.e. δq_B ≡ δq(t = t_B = T), and the total variation in end-position Δq_B, which includes the contribution due to the end-time variation δt_B ≡ δT if it is nonzero, i.e. Δq_B = δq_B + q̇_B δT. Since we consider only variational principles with fixed end-positions in this review (i.e. Δq_B = 0), we do not need to pursue this issue here.

As we shall see in the next section and in Section 10, the alternative formulations of the action principles we have considered, particularly the reciprocal Maupertuis principle, have advantages when using action principles to solve practical problems, and also in making the connection to quantum variational principles. We note that reciprocal variational principles are common in geometry and in thermodynamics (see Gray et al. 2004 for discussion and references), but their use in mechanics is relatively recent.

Practical Use of Action Principles

Just as in quantum mechanics, variational principles can be used directly to solve a dynamics problem, without employing the equations of motion. This is termed the direct variational or Rayleigh-Ritz method. The solution may be exact (in simple cases) or essentially exact (using numerical methods), or approximate and analytic (using a restricted and simple set of trial trajectories). We illustrate the approximation method with a simple example and refer the reader elsewhere for other pedagogical examples and more complicated examples dealing with research problems (Gray et al. 1996a, 1996b, 2004). Consider a one-dimensional quartic oscillator, with Hamiltonian

H = p²/2m + (1/4)Cx⁴ . (9)

Unlike a harmonic oscillator, the frequency ω will depend on the amplitude or energy of motion, as is evident in Fig.1. We wish to estimate this dependence. We consider a one-cycle trajectory and for simplicity we choose x=0 at t=0 and at t=T (the period 2π/ω). As a trial trajectory we take

x(t)=Asinωt ,(10)

where the amplitude A is regarded as known and where we treat ω as a variational parameter; we will vary ω such that an action principle is satisfied. For illustration, we use the reciprocal Maupertuis principle (δE¯)W=0 discussed in the preceding section, but the other action principles can be employed similarly. From the definitions, we find the mean energy E¯ and action W over a cycle of the trial trajectory (10) to be

E¯ = (ω/4π)W + 3CW²/(32π²m²ω²) , (11)

W = πmωA² . (12)

Treating ω as a variational parameter in (11) and applying (∂E¯/∂ω)W=0 gives

ω = (3CW/4πm²)^(1/3) . (13)

Substituting (13) in (11) gives for E¯

E¯ = (1/2)(C/m²)^(1/3) (3W/4π)^(4/3) . (14)

Eq. (13) can be combined with (12) or (14) to give

ω = (3C/4m)^(1/2) A = (2CE¯/m²)^(1/4) , (15)

i.e. a variational estimate of the frequency as a function of the amplitude or energy. The frequency increases with amplitude, confirming what is seen in Fig.1.

This problem is simple enough that the exact solution can be found in terms of an elliptic integral (Gray et al. 1996b), with the result ω_exact/ω_approx = 2^(3/4) π Γ(3/4) / [Γ(1/2) Γ(1/4)] ≈ 1.0075. Thus the approximation (15) is accurate to 0.75%, and can be improved systematically by including terms B sin3ωt, D sin5ωt, etc., in the trial trajectory x(t).
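The estimate (15) is easy to check numerically. The sketch below (Python standing in for a hand calculation; the choices m = C = A = 1 are arbitrary) computes the exact frequency by quadrature of the period integral and compares it, at the same energy, with the variational estimate and with the quoted Gamma-function ratio:

```python
import math

# Quartic oscillator H = p^2/2m + C x^4/4, with m = C = 1 and amplitude A = 1
# (so the exact energy is E = C A^4/4 = 1/4).
m, C, A = 1.0, 1.0, 1.0
E = 0.25 * C * A**4

# Exact period: T = 4*sqrt(2m/C)*(1/A) * I, where (substituting x = A sin u)
# I = integral_0^{pi/2} du / sqrt(1 + sin(u)^2), which is nonsingular.
N = 100_000
I = 0.0
for k in range(N):
    u = (k + 0.5) * (math.pi / 2) / N          # midpoint rule
    I += 1.0 / math.sqrt(1.0 + math.sin(u)**2)
I *= (math.pi / 2) / N
T_exact = 4.0 * math.sqrt(2.0 * m / C) * I / A
omega_exact = 2.0 * math.pi / T_exact

# Variational estimate (15) at the same energy: omega = (2 C Ebar / m^2)^(1/4).
omega_var = (2.0 * C * E / m**2) ** 0.25
ratio = omega_exact / omega_var

# Closed-form ratio quoted in the text.
ratio_gamma = (2 ** 0.75) * math.pi * math.gamma(0.75) / (math.gamma(0.5) * math.gamma(0.25))
print(ratio, ratio_gamma)   # both ~ 1.0075
```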

Direct variational methods have been used relatively infrequently in classical mechanics (Gray et al. 2004) and in quantum field theory (Polley and Pottinger 1988). These methods are widely used in quantum mechanics (Epstein 1974, Adhikari 1998), classical continuum mechanics (Reddy 2002), and classical field theory (Milton and Schwinger 2006). They are also used in mathematics to prove the existence of solutions of differential (Euler-Lagrange) equations (Dacorogna 2008).

Relativistic Systems

The Hamilton and Maupertuis principles, and the generalizations discussed above in Section 7, can be made relativistic and put in either Lorentz covariant or noncovariant forms (Gray et al. 2004). As an example of the relativistic Hamilton principle treated covariantly, consider a particle of mass m and charge e in an external electromagnetic field with a four-potential having contravariant components Aα=(A0,Ai)≡(ϕ,Ai) , and covariant components Aα=(A0,−Ai)≡(ϕ,−Ai) , where ϕ(x) and Ai(x) (for i=1,2,3) are the usual scalar and vector potentials respectively. Here x=(x0,x1,x2,x3) denotes a point in space-time. A Lorentz invariant form for the Hamilton action for this system is (Jackson 1999, Landau and Lifshitz 1962, Lanczos 1970)

S = m∫ds + e∫A_α dx^α . (16)

The sign of the Lagrangian and corresponding action can be chosen arbitrarily since the action principle and equations of motion do not depend on this sign; here we choose the sign of Lanczos (1970) in (16), opposite to that of Jackson (1999). An advantage of the choice of sign of Lagrangian L implied by (16), as discussed briefly by Gray et al. (2004) and in detail by Brizard (2009) who relates this advantage to the consistent choice of sign of the metric (given just below), is that the standard definitions of the canonical momentum and Hamiltonian can be employed – with the other choice unorthodox minus signs are required in these definitions (Jackson 1999). A disadvantage of our


Equations (A.11) and (A.13) provide a way of determining the compressive pressure ps(0) and the volume fraction φ(0) at the bottom of the tube from the variation of the equilibrium pellet height with centrifugal acceleration.

Buscall & White (1987) suggested an approximate solution to equations (A.10) and (A.13). It allows the compressive pressure and the volume fraction at the bottom of the tube to be calculated from the variation in the steady-state pellet height as a function of centrifugal acceleration:

(A.14)

(A.15)

For a set of initial volume fractions φ0 and heights h0 in the tube, we have determined the variation of heq with centrifugal acceleration, as shown in Fig. 8. The required quantity is then defined as the slope of that curve for a given set of data.

Green, Eberl & Landman (1996) have shown that this variation of heq with acceleration, when normalized with respect to initial conditions, follows a polynomial law. Here the best fit is obtained with a polynomial function of the type:

(A.16)

with a0 = −0.671; a1 = 0.427; a2 = −4.68 × 10⁻²; a3 = 1.39 × 10⁻³

Applying equation (A.16) to each set of data (heq, acceleration), we could estimate this slope for a large range of initial conditions (φ0, h0).

References

Buscall, R. & White, L.R. (1987). The consolidation of concentrated suspensions. J. Chem. Soc. Faraday Trans. I., 83, 873-891.

Dullien, F.A.L. (1979). Porous Media: Fluid Transport and Pore Structure. New York: Academic Press.

Green, M.D., Eberl, M., & Landman, K.A. (1996). Compressive yield stress of flocculated suspensions: determination via experiment. AIChE J., 42, (8), 2308

Green, M.D., & Boger, D.V. (1997). Yielding of suspensions in compression. Ind. Eng. Chem. Res., 36, 4984-4992.

DeKretser, R.G, Scales, P.J., & Boger, D.V. (1997). Improving clay-based tailing disposal: a case study on coal tailings. AIChE J., 43, 1894-1903.

Landman, K.A., White, L.R., & Buscall, R. (1988). The continuous-flow gravity thickener: steady state behavior. AIChE J., 34, 239.

McCarthy, A.A., Gilboy, P., Walsh, P.K., & Foley, G. (1999). Characterisation of cake compressibility in dead-end microfiltration of microbial suspensions. Chem. Eng. Comm., 173, 79-90.

Meireles, M., Clifton, M.J., & Aimar, P. (2002). Filtration of yeast suspensions: experimental observations and modelling of dead-end filtration with a compressible cake. Desalination, 147, 19-23.

Miller, K.T., Melant, R.M., & Zukoski, C.F. (1996). Comparison of compressive yield response of aggregated suspensions: pressure filtration, centrifugation and osmotic consolidation. J. Am. Ceram. Soc., 79(10), 2545-2556.

Nakanishi, K., Tadokoro, T., & Matsuno, R. (1987). On the specific resistance of cakes of microorganisms. Chem. Eng. Comm., 62, 187-201.

Nomura, S. (1989). Studies on filtration mechanism in cross-flow microfiltration. Master’s Thesis, Chem. Eng., University of Tokyo

Ogden, G.E., & Davis, R.H. (1990). Experimental determination of the permeability and relative viscosity for fine latexes and yeast suspensions. Chem. Eng. Comm., 91, 11-28.

Ofsthun, N.J. (1989). Crossflow membrane filtration of cell suspensions. Ph. D. Thesis, Massachusetts Institute of Technology, Cambridge, Ma.

Piron, E., René, F., & Latrille, E. (1995). A cross flow filtration model based on integration of the mass transport equation. J. Membr. Sci.,108, 57-70.

Redkar, S.G., & Davis, R.H. (1993). Crossflow microfiltration of yeast suspensions in tubular filters. Biotechnol. Prog. 9, 625-634.

Rushton, A., & Khoo, E. (1977). The filtration characteristics of yeast. J. Appl. Chem. Biotechnol., 27, 99-109.

Ruth, B.F. (1935). Studies in filtration. III. Derivation of general filtration equations. Ind. Eng. Chem., 27, 708-715.

Smith, A.E., Zhang, Z., & Thomas, C.R. (2000). Wall material properties of yeast cells: Part 1. Cell measurements and compression experiments. Chemical Engineering Science, 55, 2031-2041.

Smith, A.E., Moxham, K.E. & Middelberg, A.P.J. (1998). On uniquely determining cell-wall material properties with the compression experiment. Chemical Engineering Science, 53, 3913-3922.

Smith, A.E., Moxham, K.E. & Middelberg, A.P.J. (2000). Wall material properties of yeast cells: Part 2. Analysis. Chemical Engineering Science, 55, 2043-2053.

Sorensen, P.B., Moldrup, P., Hansen, J.A.A. (1996). Filtration and expression of compressible cakes. Chemical Engineering Science, 51(6), 967-979.

Starov, V., Zhdanov, V., Meireles, M., & Molle, C. (2001). Viscosity of concentrated suspensions: influence of cluster formation. Advances in Colloid and Interface Science, 96, 279-293.

Succi, S. (2001). The Lattice Boltzmann Equation for Fluid Dynamics and Beyond, Oxford: Clarendon Press.

Tiller, F.M. (1975). Compressible cake filtration. In K.J. Ives, The Scientific Basis of Filtration, Leyden: Noordhoff.

Tiller, F.M., Yeh, C., & Leu, W.F. (1987). Compressibility of particulate structures in relation to thickening, filtration and expression, a review. Sep. Sci. Technol., 22, 1037-

Zydney, A.L., Saltzman, W.M., & Colton, C.K. (1989). Hydraulic resistance of red cell beds in an unstirred filtration cell.


1) The natural representation of a curve, c = c(s), satisfies the condition |dc/ds| = 1, where s is the natural parameter for the curve.

a) Describe in words and a sketch what this condition means.

b) Demonstrate that the following vector function (3.6) is the natural representation of the circular helix (Fig. 1) by showing that it satisfies the condition |dc/ds| = 1.

c(s) = a cos(s/w) ex + a sin(s/w) ey + (b s/w) ez , where w = √(a² + b²) (1)

c) Use (1) and MATLAB to plot a 3D image of the circular helix (a = 1, b = 1/2). An example is shown in Figure 1. Describe and label a and b.

Figure 1. Circular helix with unit tangent (red), principal normal (green), and binormal (blue) vectors at selected locations.

2) An arbitrary representation of a curve, c = c(t), satisfies the condition |dc/dt| = ds/dt, where t is the arbitrary parameter and s is the natural parameter for the curve.

a) Demonstrate that the following vector function (3.2) is an arbitrary representation of the circular helix by showing that it satisfies this condition.


Using your result from part b) for t(t) write the equation for the unit tangent vector, t(s), as a function of the natural parameter. Use this equation and MATLAB to plot a 3D image of a set of unit tangent vectors on the circular helix (a = 1, b = 1/2) as in Figure 1.

3) The curvature vector, scalar curvature, and radius of curvature are three closely related quantities (Fig. 3.10) that help to describe a curved line in three-dimensional space.

a) Derive equations for the curvature vector, k(s), the scalar curvature, κ(s), and the radius of curvature, ρ(s), for the natural representation of the circular helix (1).

b) Show how these equations reduce to the special case of a circle.

c) Derive an equation for the unit principal normal vector, n(s), for the circular helix as given in (1).

d) Use MATLAB to plot a 3D image (Fig. 1) of a set of unit principal normal vectors on the circular helix (a = 1, b = 1/2). Describe the orientation of these vectors with respect to the circular helix itself, and the Cartesian coordinates.

e) Derive an equation for the unit binormal vector, b(s), for the circular helix (1). This is the third member of the moving trihedron.

f) Use MATLAB to plot a 3D image (Fig. 1) of a set of unit binormal vectors on the circular helix (a = 1, b = 1/2).
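A numerical cross-check of problems 1 and 3 is straightforward (Python in place of the MATLAB plots; the standard natural parametrization c(s) = (a cos(s/w), a sin(s/w), b s/w), w = √(a²+b²), is assumed for equation (1)). Finite differences confirm |dc/ds| = 1 and the known helix curvature κ = a/(a²+b²):

```python
import math

a, b = 1.0, 0.5
w = math.sqrt(a * a + b * b)

# Assumed natural representation of the circular helix, equation (1).
def c(s):
    return (a * math.cos(s / w), a * math.sin(s / w), b * s / w)

def deriv(f, s, h=1e-5):
    """Central finite difference of a vector-valued function."""
    p, q = f(s + h), f(s - h)
    return tuple((pi - qi) / (2 * h) for pi, qi in zip(p, q))

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

s0 = 0.3
t_vec = deriv(c, s0)                        # unit tangent t(s) = dc/ds
k_vec = deriv(lambda s: deriv(c, s), s0)    # curvature vector k(s) = d2c/ds2
kappa = norm(k_vec)                         # scalar curvature

print(norm(t_vec), kappa)   # ~ 1.0 and ~ a/(a^2 + b^2) = 0.8
```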

4) If c = c(t) is the arbitrary parametric representation of a curve, then a general definition of the scalar curvature is given in (3.26) as:

κ(t) = |dc/dt × d²c/dt²| / |dc/dt|³ (3)

a) Show how this relationship may be specialized to plane curves lying in the (x, y)-plane where c(t) = cxex + cyey and the components are arbitrary functions of t.

b) Further specialize this relationship for the plane curve lying in the (x, y)-plane where the parameter is taken as x instead of t, so one may write cx = x and cy = f(x) such that c(x) = x ex + f(x) ey, and the curvature is:

κ(x) = |d²f/dx²| / [1 + (df/dx)²]^(3/2) (4)

c) Evaluate the error introduced in the often-used approximation κ(x) ≈ |d²f/dx²| by plotting the following ratio as a function of the slope, df/dx, in MATLAB:

κ_approx / κ = [1 + (df/dx)²]^(3/2) (5)

Develop a criterion to limit errors to less than 10% in practical applications.
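The 10% criterion of part c) can be worked out in closed form (a sketch, assuming the ratio in (5) is the reconstructed (1 + (df/dx)²)^(3/2); Python in place of the MATLAB plot). Setting the ratio equal to 1.1 and solving for the slope:

```python
import math

# Ratio of the flat-curve approximation |f''| to the true plane-curve
# curvature |f''| / (1 + f'^2)^(3/2), as a function of the slope s = df/dx.
def ratio(s):
    return (1.0 + s * s) ** 1.5

# Slope at which the approximation overestimates kappa by exactly 10%:
# (1 + s^2)^(3/2) = 1.1  =>  s = sqrt(1.1^(2/3) - 1)
slope_10 = math.sqrt(1.1 ** (2.0 / 3.0) - 1.0)
print(slope_10)   # ~ 0.256, i.e. a slope of about 14 degrees
```

So errors stay below 10% as long as |df/dx| is less than roughly 0.26.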

5) If c = c(t) is the arbitrary parametric representation of a curve, then a general definition of the scalar torsion is given in (3.50) as:

τ(t) = [(dc/dt × d²c/dt²) · d³c/dt³] / |dc/dt × d²c/dt²|²

Both these stochastic terms introduce a strong local error of O(Δt^(3/2)) and, as a consequence, a strong global error of O(Δt) (see the discussion in Section 4.8.1). Finally consider the last error term:

This stochastic term introduces a strong local error of O(Δt) and a strong global error of O(Δt^(1/2)). This last error term dominates and determines the strong order of convergence of the Euler scheme.

For weak order convergence many realizations are generated and averaged to determine an approximation of E[h(X_T)] (see the definition of weak order of convergence in Section 4.8.1):

Because of the averaging procedure all random error terms cancel out and vanish for an increasing number of realizations. As a result, for weak order of convergence only the first deterministic error term has to be taken into account, resulting in a weak order of convergence of the Euler scheme of O(Δt). This implies that if we use the Euler scheme and generate many tracks, then the individual tracks are only half-order accurate (strong convergence), while for example the results on the mean and variance of the tracks are first-order accurate (weak convergence). This is caused by the fact that the stochastic errors in the track-wise computations cancel out in computing ensemble quantities like the mean and variance.

Exercise 4.9

Consider the same Ito SDE as in Exercise 4.8. Now we use the Euler scheme to compute the mean of Xt using 1000 samples and compare the result with the exact mean.

ex9
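The program file ex9 is not reproduced here; a minimal sketch of the experiment (Python in place of MATLAB; the values b = 0.5, Δt = 0.01 and 2000 paths are arbitrary choices) applies the Euler scheme to the SDE of Exercise 4.8, dX = (b²/2)X dt + bX dW with X₀ = 1, whose exact mean is E[X_T] = exp(b²T/2):

```python
import math, random

random.seed(1)
b, T, dt = 0.5, 1.0, 0.01
nsteps, npaths = int(T / dt), 2000

# Euler scheme for dX = (b^2/2) X dt + b X dW (the Ito SDE whose exact
# solution is X_t = exp(b W_t)); estimate E[X_T] over many paths.
total = 0.0
for _ in range(npaths):
    x = 1.0
    for _ in range(nsteps):
        dW = random.gauss(0.0, math.sqrt(dt))
        x += 0.5 * b * b * x * dt + b * x * dW
    total += x
mean_euler = total / npaths

mean_exact = math.exp(0.5 * b * b * T)   # E[exp(b W_T)] = exp(b^2 T / 2)
print(mean_euler, mean_exact)
```

With more paths the statistical fluctuation shrinks and the remaining bias is the O(Δt) weak error.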

Stokes’ Theorem

It states that the circulation of a vector field A around a closed path is equal to the integral of its curl, ∇×A, over the surface bounded by this path. It may be noted that this equality holds provided A and ∇×A are continuous on the surface.

Let us consider an area S that is subdivided into large number of cells as shown in the figure below.

Let the k-th cell have surface area ΔSk and be bounded by the path Lk, while the total area is bounded by the path L. As seen from the figure, if we evaluate the sum of the line integrals around the elementary areas, there is cancellation along every interior path and we are left with the line integral along the path L. Therefore we can write,
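Since the figure and the final formula are not reproduced here, a quick numerical check of the theorem can stand in (a sketch; the field A = (−y, x, 0) on the unit disk is an arbitrary choice, with ∇×A = (0, 0, 2)):

```python
import math

# Stokes' theorem check for A = (-y, x, 0) on the unit disk:
# circulation around the unit circle vs the flux of curl A = (0, 0, 2).
n = 100_000
circ = 0.0
for k in range(n):
    th = 2 * math.pi * (k + 0.5) / n
    x, y = math.cos(th), math.sin(th)
    dx, dy = -math.sin(th), math.cos(th)      # dl = (dx, dy) dtheta
    circ += (-y * dx + x * dy) * (2 * math.pi / n)

flux = 2.0 * math.pi * 1.0 ** 2               # (curl A)_z = 2 times disk area pi r^2
print(circ, flux)   # both ~ 6.2832
```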


(10)

a) Derive the general equations for the tangent plane, P, to this sphere.

b) Evaluate your equation for the particular case of the point (φ, θ) = (π/2, π/2) and explain how your result matches (or not) your intuition.

c) Use MATLAB to plot the sphere and a portion of the tangent plane at the point (φ, θ) = (π/2, π/2). Also, plot the tangent plane at the point (φ, θ) = (−π/4, π/4).

8) Given a general parametric representation of a surface s(u, v), where u and v are the parameters, the unit normal vector to the surface is defined in (3.73) as:

(11)

a) Derive the equation for the unit normal vector, N, for the sphere of radius a with the parametric representation:

(12)

b) Show that your equation for N obeys the condition N = −s/a, where s(φ, θ) is the position vector for a point on the sphere from an origin at the center of the sphere.

c) Given orientation data from a field measurement of strike and dip (s, s) of a bedding surface, show how these angles would be converted to the trend and plunge (n, n) of the normal to that bed. Then show how these angles are used to compute the components of the unit normal vector for the bedding surface.

9) The coefficients of the first fundamental form are used to calculate arc lengths of curves. The coefficients of the first fundamental form are defined in (3.85) as:

E = su · su , F = su · sv , G = sv · sv , where su = ∂s/∂u and sv = ∂s/∂v (13)

The arc length of a curve c[u(t), v(t)] on a surface, s(u,v) is defined in (3.89) as:

L = ∫ √( E (du/dt)² + 2F (du/dt)(dv/dt) + G (dv/dt)² ) dt (14)

As an example consider the parametric representation of the elliptic paraboloid:

(15)

Figure 2. Elliptic paraboloid with unit normal vectors.

a) Consider the u-parameter curve c(u, 0.7) and calculate the arc length of this parabola from u = -1m to u = +1m.

b) Use MATLAB to plot the elliptic paraboloid (15) and the parabolic curve c(u, 0.7).

10) The coefficients of the first fundamental form may be used to calculate surface area (Fig. 3.26) given the parametric representation of a surface. The general equation for the surface area in terms of the parametric representation s(u,v) and the coefficients of the first fundamental form is given in (3.95) as:

A = ∫∫ √(EG − F²) du dv (16)

Consider the following representation of the sphere of radius a:

(17)

a) Derive an equation for the surface area of the sector of the sphere within the range 0 ≤ φ ≤ π/4 and −π < θ ≤ π, and evaluate this for a = 1 m.

b) Plot this sector of the sphere in 3D using MATLAB.
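A sketch of part a) using the first fundamental form (Python in place of MATLAB). The integration range 0 ≤ φ ≤ π/4 over the full azimuth is an assumed reading of the garbled range in the problem statement; for the sphere, E = a², F = 0, G = a² sin²φ:

```python
import math

# Surface area of a spherical sector via the first fundamental form:
# dA = sqrt(E*G - F^2) dphi dtheta = a^2 sin(phi) dphi dtheta.
a = 1.0
nphi = 2000
dphi = (math.pi / 4) / nphi
area = 0.0
for i in range(nphi):
    phi = (i + 0.5) * dphi                                 # midpoint rule in phi
    area += a * a * math.sin(phi) * dphi * (2 * math.pi)   # theta integrates to 2*pi

# Closed form for comparison: 2*pi*a^2*(1 - cos(pi/4)).
area_exact = 2 * math.pi * a * a * (1 - math.cos(math.pi / 4))
print(area, area_exact)   # both ~ 1.8403 (square meters for a = 1 m)
```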

11) The general equation for the unit normal vector to a surface with the parametric representation s(u,v) is given in (3.73) as:

(18)

As an example consider the surface with the parametric representation:

(19)

a) Use MATLAB to plot a 3D illustration of this surface as a wire frame or other suitable graph and describe the shape. This is the hyperbolic paraboloid (Fig. 3.29c).

b) Derive an equation for the unit normal vector, N(u, v), of the hyperbolic paraboloid given in (19). Compare your equation to (3.75), which is the unit normal vector of the elliptic paraboloid.


From the figure we see (more or less) that the Euler scheme is O(Δt) in the weak sense. Repeat the experiment to demonstrate that the effect of statistical fluctuations is very large. This shows that a huge number of samples is required before the O(Δt^(1/2)) errors cancel out by the averaging and become relatively small compared to the remaining O(Δt) errors.

Consider again the error term that dominated the strong order of convergence of the Euler scheme:

and apply Ito’s differential rule to the integrand:

or:

and substitute the result in the Taylor expansion (20):

From this result we see that a more accurate scheme for scalar stochastic differential equations has been obtained:

(21) .

This scheme is called the Milstein scheme and is O(Δt) in the strong sense for scalar equations. For vector systems it is generally only O(Δt^(1/2)) (except for very special differential equations, when its accuracy is O(Δt) as in the scalar case). In the weak sense the Milstein scheme has the same order of convergence as the Euler scheme.

Exercise 4.10

Consider again the Ito SDE of Exercise 4.8. In the next program the strong results of the Euler scheme are compared with the results of the Milstein scheme.

ex10

From the figure we see that the Milstein scheme is indeed O(Δt) in the strong sense and that this scheme is much more accurate than the Euler scheme.

It is very hard to develop higher-order numerical schemes for stochastic differential equations. In general it is more efficient to use the Euler scheme or the Milstein scheme with a smaller time step. To improve weak convergence, the use of Richardson extrapolation with the Euler scheme is often the most efficient approach (see Chapter 2).
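The Euler/Milstein comparison of Exercise 4.10 can be sketched as follows (Python in place of the program ex10; the test SDE dX = bX dW with exact solution X_t = exp(bW_t − b²t/2), and the values b = 1, Δt = 0.01, 500 paths, are arbitrary choices). The Milstein correction term is (1/2) g g′ (ΔW² − Δt) = (1/2) b² X (ΔW² − Δt):

```python
import math, random

random.seed(2)
b, T, dt = 1.0, 1.0, 0.01
nsteps, npaths = int(T / dt), 500

# Strong (pathwise) error of Euler vs Milstein for dX = b X dW, X_0 = 1.
err_euler = err_milstein = 0.0
for _ in range(npaths):
    xe = xm = 1.0
    W = 0.0
    for _ in range(nsteps):
        dW = random.gauss(0.0, math.sqrt(dt))
        W += dW
        xe += b * xe * dW                                      # Euler
        xm += b * xm * dW + 0.5 * b * b * xm * (dW * dW - dt)  # Milstein
    exact = math.exp(b * W - 0.5 * b * b * T)
    err_euler += abs(xe - exact)
    err_milstein += abs(xm - exact)
err_euler /= npaths
err_milstein /= npaths
print(err_euler, err_milstein)   # Milstein error is much smaller
```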

4.9 Random walk models for pollution transport

For the prediction of the transport of pollutants in coastal waters or ground water the well-known advection diffusion equation can be used. Consider for example the two-dimensional case:

(22)

where C(x,y,t) is the concentration, H(x,y,t) is the water depth, u(x,y,t) and v(x,y,t) are the water velocities in respectively the x and y directions, and D(x,y) is the dispersion coefficient.

Another way to model pollution transport processes is by means of a random walk model for the position of individual particles of the pollutant. These models are of the type (10). By simulating the position for a very large number of particles the spreading of the pollutant can be described. In this case the particle distribution will be equivalent to the probability density function of the particle position. As a result the particle concentration will be equivalent to the solution of the Fokker-Planck equation.

We can now relate the advection-diffusion type models with the particle models by interpreting the advection diffusion equation as a Fokker-Planck equation. Substituting HC=p in equation (22) and rearranging terms yields:

As a result the underlying particle model with the equation just derived as Fokker-Planck equation is:

.

.

where the two driving noise terms are independent Wiener processes.
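The equivalence between the particle model and the advection-diffusion equation is easy to demonstrate in a simplified setting (a sketch; a 1D domain with constant depth, constant velocity u and constant dispersion coefficient D is assumed, so the particle model reduces to dX = u dt + √(2D) dW and the cloud should have mean uT and variance 2DT after time T):

```python
import math, random

random.seed(3)
u, D = 1.0, 0.5          # constant velocity and dispersion coefficient
T, dt = 1.0, 0.01
nsteps, nparticles = int(T / dt), 5000

# 1D random-walk particle model dX = u dt + sqrt(2 D) dW.
xs = []
for _ in range(nparticles):
    x = 0.0
    for _ in range(nsteps):
        x += u * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    xs.append(x)

mean = sum(xs) / nparticles
var = sum((x - mean) ** 2 for x in xs) / nparticles
print(mean, var)   # ~ 1.0 and ~ 1.0 (u*T and 2*D*T)
```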

Exercise 4.11

In the next program the spreading of a pollutant in the Dutch coastal waters is simulated with a particle model. Here we used a realistic geometry and a flow field that has been computed by a numerical shallow water flow model.

cd c:\watbook\progsde\ex11d

load progr

ex11

Show the impact of the dispersion coefficient (file: parkarin.m) on the simulation results.

Problems

4.1 Show that the Wiener process is a Gaussian process.

4.2 Show that the Wiener process is a Markov process.

4.3 Prove the rule (7).

4.4 Prove the rule (8).

Hint: use the relation:

4.5 Prove the rule (9).

4.6 Show that

2 FORMULATION

Consider the Ito stochastic differential equation (10). To gain insight into the probability of exceedance of the process Xt we need to know the probability density function of Xt. This function can be obtained by solving the Fokker-Planck equation, also known as the Kolmogorov forward equation:

(18)

where:

p(x, t | x0, t0) is the probability density function of Xt at time t, given X_{t0} = x0 at t0. The initial condition for equation (18) is:

Using Bayes’ rule it is easy to include the uncertainty due to the initial condition and to compute the probability distribution p(x,t) of Xt:

where p(x0, t0) is the probability distribution of the initial condition X_{t0} at t0.

The Fokker-Planck equation is a deterministic partial differential equation that in general has to be solved numerically. For vector systems with dimension larger than, say, 3 this is very time consuming. In this case the probability density can be determined more efficiently by generating a large number of tracks of the underlying SDE.
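The track-generation alternative is easy to illustrate on a case where the Fokker-Planck solution is known in closed form (a sketch; the Ornstein-Uhlenbeck equation dX = −aX dt + b dW, whose transition density is Gaussian with mean x0·e^(−at) and variance (b²/2a)(1 − e^(−2at)), is an assumed example, not one from the text):

```python
import math, random

random.seed(7)
aa, bb = 1.0, 1.0            # dX = -aa*X dt + bb*dW  (Ornstein-Uhlenbeck)
x0, T, dt = 2.0, 1.0, 0.005
nsteps, npaths = int(T / dt), 4000

# Many Euler tracks should reproduce the Gaussian Fokker-Planck moments.
xs = []
for _ in range(npaths):
    x = x0
    for _ in range(nsteps):
        x += -aa * x * dt + bb * random.gauss(0.0, math.sqrt(dt))
    xs.append(x)

mean = sum(xs) / npaths
var = sum((x - mean) ** 2 for x in xs) / npaths
mean_exact = x0 * math.exp(-aa * T)
var_exact = bb * bb / (2 * aa) * (1 - math.exp(-2 * aa * T))
print(mean, mean_exact, var, var_exact)
```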

Example 4.9

Consider again the SDE from Example 4.6:

The Fokker-Planck equation for this SDE is:

The initial condition for this equation is:

It is easy to verify that in this case the solution of the Fokker-Planck equation is:

4.8 Numerical approximation of stochastic differential equations

4.8.1 Order of convergence of a numerical scheme

Consider first the deterministic equation:

We can approximate this equation numerically with the Euler scheme:

where Δt is the time step. Recall that the order of convergence of a numerical scheme for a deterministic differential equation is defined as follows (see Chapter 2):

Definition: The order of convergence is j if there exist positive constants K and δ such that, for a fixed end time T = NΔt,

|x(T) − xN| ≤ K Δt^j

for all Δt ≤ δ.

We know that the local error of the Euler scheme is O(Δt²) (see Chapter 2). The global error EN for fixed T can be found easily by computing:

So for deterministic models the global error is O(Δt), since we make N = T/Δt steps, each contributing a local error of O(Δt²).
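The first-order global error of the deterministic Euler scheme can be sketched numerically (the test problem dx/dt = −x, x(0) = 1, is an arbitrary choice). Halving the step should roughly halve the error at the end time:

```python
import math

# Deterministic Euler for dx/dt = -x, x(0) = 1, integrated to T = 1.
def euler_error(dt, T=1.0):
    n = int(round(T / dt))
    x = 1.0
    for _ in range(n):
        x += -x * dt
    return abs(x - math.exp(-T))

e1 = euler_error(0.001)
e2 = euler_error(0.0005)
print(e1 / e2)   # ~ 2, confirming global error O(dt)
```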

Now consider the stochastic case:

with the Euler scheme introduced in Section 4.5:

or, with t_n = nΔt:

First we have to generalize the definition of the order of convergence:

Definition: The strong order of convergence is j if there exist positive constants K and δ such that, for a fixed end time T = NΔt,

E|X_T − X_N| ≤ K Δt^j

for all Δt ≤ δ.

Example 4.10

Consider the vector Ito SDE:

with zero initial condition. As a result we have:

From Example 4.4 we know the exact solution of this equation:

Using the Euler scheme to approximate the solution of the SDE results in:

Using the results of Example 4.3 we can derive an expression for the variance of the global error:

The variance of the error is O(Δt), so that the strong order of convergence of the Euler scheme in this example is only O(Δt^(1/2)).

From Example 4.10 we see that one step of the Euler scheme introduces a local error with variance O(Δt²):

This implies a local error of O(Δt). For the deterministic case this would imply a global error of N·O(Δt) = O(1), i.e. no convergence at all! However, thanks to the fact that the Wiener process has independent increments, the local errors are all independent of each other. This means that we do not have to add up all the local errors of O(Δt), but can compute the variance of the global error EN as follows:

which implies that the standard deviation of the error is:

As a result, the global error is O(Δt^(1/2)) and we still have convergence.

Exercise 4.8

Consider again the Ito SDE from Exercise 4.6:

The exact solution of this equation is X_t = e^(bW_t). In the next program the Euler scheme is used to compute a numerical approximation of X_t for a number of different time steps.

ex8

In the figure the mean of the absolute error is shown. In order to determine the mean of the error, 100 samples have been generated and averaged. From the figure we see that the Euler scheme is indeed O(Δt^(1/2)) in the strong sense. Repeat the experiment a number of times to show that the results are sensitive to statistical fluctuations.
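Since the program ex8 is not reproduced here, a minimal sketch of the experiment follows (Python in place of MATLAB; b = 1, a fine step of 0.0025 and 400 paths are arbitrary choices). The same Brownian path is used at two step sizes, Δt and 4Δt; strong order 1/2 predicts the coarse error to be about √4 = 2 times the fine error:

```python
import math, random

random.seed(4)
b, T = 1.0, 1.0
dt_f = 0.0025                 # fine step; coarse step is 4*dt_f
nf, npaths = int(T / dt_f), 400

# Strong error of Euler for dX = (b^2/2) X dt + b X dW (exact: exp(b W_t)).
err_f = err_c = 0.0
for _ in range(npaths):
    dWs = [random.gauss(0.0, math.sqrt(dt_f)) for _ in range(nf)]
    exact = math.exp(b * sum(dWs))
    xf = 1.0
    for dW in dWs:                       # fine Euler path
        xf += 0.5 * b * b * xf * dt_f + b * xf * dW
    xc, dt_c = 1.0, 4 * dt_f
    for k in range(0, nf, 4):            # coarse path on the same Brownian motion
        dW = sum(dWs[k:k + 4])
        xc += 0.5 * b * b * xc * dt_c + b * xc * dW
    err_f += abs(xf - exact)
    err_c += abs(xc - exact)
err_f /= npaths
err_c /= npaths
print(err_c / err_f)   # should be about 2
```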

Convergence in the strong sense is a track-wise approach: the exact track X_t is approximated as accurately as possible by a numerical track X_n. However, for many practical Monte Carlo simulation problems we are not interested in very accurate individual tracks, for instance when we want to compute moments of the probability density function of X_T or determine the probability of exceedance. For these problems we need accurate results for the moments or exceedance probabilities. Therefore we also describe a weaker form of convergence.

Definition: The weak order of convergence is j if there exist positive constants K and δ such that, for a fixed end time T = NΔt,

|E[h(X_T)] − E[h(X_N)]| ≤ K Δt^j

for all Δt ≤ δ and for all functions h with polynomial growth.

Example 4.11

If we take h(x,t)=x the definition of weak order convergence reduces to:

In this case we use the realizations of X_T to determine the mean at time T, and we evaluate the accuracy of the numerical scheme in computing this quantity. We do not evaluate the accuracy of the underlying tracks. If h(x, t) = x² we have:

and we evaluate the accuracy of the numerical scheme in computing the second moment.

4.8.2 Stochastic Taylor expansion

For deterministic differential equations the Taylor series expansion is an important method to analyze the order of convergence of a numerical scheme. Let us now study the stochastic case and derive a stochastic version of the Taylor expansion. Consider first again the Ito differential rule introduced in Section 4.6 for scalar systems:

(19)

where the operators L0 and L1 are defined as:

Ito’s differential rule holds for arbitrary (sufficiently smooth) functions φ(x, t). So we can apply the rule also to the functions f and g:

which results in:

and:

which results in:

Substituting these results in the stochastic equation (11) yields:

(20)

where we have assumed that the functions f and g are sufficiently smooth. Equation (20) is the stochastic Taylor expansion. By applying Ito’s rule again to the various integrands, higher-order terms of the expansion can be obtained.

The first terms of the stochastic Taylor expansion represent the stochastic Euler scheme discussed in Section 4.8.1:

or, with t_n = nΔt:

By analyzing the error terms of equation (20) the order of convergence of the Euler scheme can be determined heuristically. Consider first the error term:

This deterministic error term introduces a local error of O(Δt²) and, as a consequence, a global error of O(Δt). For the two stochastic terms we have:

In order to understand the usefulness of Ito’s lemma let us find the differential equation for the second moment of the Ito SDE:

Here, φ(x) = x² (for calculating the second moment) and hence

therefore, applying Ito’s differential lemma we get (note we substitute for dXt from the above Ito SDE)

Expanding, simplifying and taking expectation we get

We used the fact that E(dWt²) = dt above, and note that the middle term becomes zero because dWt is independent of Xt², hence cov(Xt², dWt) = 0 and E(dWt) = 0. Once the second-moment equation is known and the initial condition is given, we can determine the mean and the variance using the moment equation. In the case of the pure Wiener process studied earlier, the second-moment equation is:

This is a linear function of time, as seen from the results obtained from the numerical integration.
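The linear growth E[W_t²] = t is simple to verify by sampling (a sketch; 20000 samples at t = 1 and t = 2 are arbitrary choices, using the fact that W_t is Gaussian with variance t):

```python
import math, random

random.seed(5)
n = 20000

# Second moment of the pure Wiener process: E[W_t^2] = t, linear in time.
m2_1 = sum(random.gauss(0.0, math.sqrt(1.0)) ** 2 for _ in range(n)) / n   # t = 1
m2_2 = sum(random.gauss(0.0, math.sqrt(2.0)) ** 2 for _ in range(n)) / n   # t = 2
print(m2_1, m2_2)   # ~ 1.0 and ~ 2.0
```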

Exercise 4.6

Recall that x(t) = e^(bt) is the solution of the deterministic differential equation:

where b is a deterministic constant. However, if we have a Wiener process Wt, what is the differential equation satisfied by the solution Xt = e^(bWt)? Apply the Ito differential rule to φ(w) = e^(bw):

which results in:

Therefore, when Wt is a standard Wiener process, the Ito differential equation for Xt has an extra dt term compared to the deterministic result. Using the program below you can generate some samples of solutions of this differential equation.

ex6

In the figure above the exact track, generated as in Exercise 4.3, is compared to the track that has been obtained by approximating the stochastic differential equation just derived using the Euler scheme. Show, by using the program, that the extra dt term in the Ito equation is essential.
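Since the program ex6 is not included here, the "essential dt term" can be sketched directly (Python in place of MATLAB; b = 1, Δt = 0.005 and 300 paths are arbitrary choices). Dropping the (b²/2)X dt drift term solves a different SDE, so the pathwise error against exp(bW_t) stays O(1) instead of shrinking with Δt:

```python
import math, random

random.seed(6)
b, T, dt = 1.0, 1.0, 0.005
nsteps, npaths = int(T / dt), 300

# Euler integration of X_t = exp(b W_t) with and without the Ito drift term.
err_with = err_without = 0.0
for _ in range(npaths):
    x1 = x2 = 1.0
    W = 0.0
    for _ in range(nsteps):
        dW = random.gauss(0.0, math.sqrt(dt))
        W += dW
        x1 += 0.5 * b * b * x1 * dt + b * x1 * dW   # correct Ito equation
        x2 += b * x2 * dW                           # dt term dropped
    exact = math.exp(b * W)
    err_with += abs(x1 - exact)
    err_without += abs(x2 - exact)
err_with /= npaths
err_without /= npaths
print(err_with, err_without)   # error without the dt term is far larger
```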

A stochastic differential equation can also be defined in the Stratonovich sense, using the Stratonovich integral definition. The relation between the Ito and Stratonovich stochastic differential equations is given below:

If a physical process Xt can be described by the Ito equation:

then the same process can also be described by the Stratonovich equation:

(16)


The general solution of a linear partial differential equation is a linear combination of all linearly independent solutions of the equation, with as many arbitrary functions as the order of the equation; a partial differential equation of order 2 has 2 arbitrary functions. A particular solution of a differential equation is one that does not contain arbitrary functions or constants. A homogeneous linear partial differential equation has the interesting property that if u is a solution then any scalar multiple cu, where c is a constant, is also a solution. Any equation of the type F(x, y, u, c1, c2) = 0, where c1 and c2 are arbitrary constants, which is a solution of a first-order partial differential equation, is called a complete solution or a complete integral of that equation. An equation F(φ, ψ) = 0, involving an arbitrary function F connecting two known functions φ and ψ of x, y and u, and providing a solution of a first-order differential equation, is called a general solution or general integral of that equation. It is clear that in some sense a general solution provides a much broader set of solutions than a complete solution. However, a general solution may be derived once a complete solution is known.

Very often ux = ∂u/∂x, uy = ∂u/∂y, uxx = ∂²u/∂x², uxy = ∂²u/∂x∂y and uyy = ∂²u/∂y² are respectively denoted by p, q, r, s and t.

In this notation the general form of partial differential equation of first-order is

F(x,y,u,p,q)=0 (11.4)

The general second-order partial differential equation is of the form

F(x,y,u,p,q,r,s,t)=0 (11.5)

A partial differential equation is said to be quasilinear if it is linear in all the highest-order derivatives of the dependent variable. The most general form of a quasilinear second-order equation is

A(x,y,u,p,q) uxx + B(x,y,u,p,q) uxy + C(x,y,u,p,q) uyy +f(x,y,u,p,q)=0 (11.6)

A partial differential equation of first order is called semilinear if it is linear in the principal part, namely the terms involving first derivatives: thus, for A ux + B uy = C, the left-hand side, which contains all the derivatives, is linear in u, in that A and B depend on x and y alone; C, however, may depend nonlinearly on u. A semilinear partial differential equation of second order is of the form

A uxx + 2B uxy + C uyy = f(x, y, u, ux, uy), (11.7)

where A, B, C are functions of x and y.

11.2. Classification of Partial Differential Equations

We have seen the classification of partial differential equations into linear, quasilinear, and semilinear equations.

1 INTRODUCTION The problem of determining the solution of (11.8) satisfying initial conditions (11.9)-(11.10) is known as the initial-value problem. Here "initial values" usually refers to the data assigned at y = y0. If initial values are prescribed along some curve in the (x,y) plane, that is, if one seeks a solution of equation (11.8) subject to values prescribed on some curve, the problem is called the Cauchy problem, and these conditions are called Cauchy data. The two names are in fact synonymous.

Example 11.3 (a) ut = uxx, 0 < x < l, t > 0,

u(x,0) = cos x, 0 ≤ x ≤ l

is an initial-value problem.

(b) Suppose that Γ is a curve in the (x,y) plane; we define Cauchy data to be the prescription of u on Γ. It is convenient to write this boundary condition in the parametric form

x = x0(s), y = y0(s), u = u0(s), for s1 ≤ s ≤ s2.

The problem of finding a solution of

A(x,y,u) ux + B(x,y,u) uy = C (11.10)

subject to these data is a Cauchy problem.

(c) Let us consider the equation

A(x,y) uxx + B(x,y) uxy + C(x,y) uyy = F(x, y, u, ux, uy). (11.11)

Let (x0, y0) denote points on a smooth curve Γ in the (x,y) plane. Also let the parametric equations of this curve be

x0 = x0(τ), y0 = y0(τ),

where τ is a parameter.

We suppose that two functions f(τ) and g(τ) are prescribed along the curve Γ. The Cauchy problem is now one of determining the solution u(x,y) of Equation (11.11) in the neighbourhood of the curve Γ satisfying the Cauchy conditions

u = f(τ), ∂u/∂n = g(τ)

on the curve Γ, where n is the direction of the normal to Γ which lies to the left of Γ in the counterclockwise direction of increasing arc length. The functions f(τ) and g(τ) are the Cauchy data.

The solution of the Cauchy problem is a surface, called an integral surface, in the (x,y,u) space passing through a curve having Γ as its projection in the (x,y) plane and satisfying ∂u/∂n = g(τ), which represents a tangent plane to the integral surface along Γ.

Types of Boundary Conditions

The boundary conditions on partial differential equation (11.6) fall into the following three categories:

(i) Dirichlet boundary conditions (also known as boundary conditions of the first kind), when the values of the unknown function u are prescribed at each point of the boundary ∂Ω of a given domain Ω on which (11.6) is defined.

(ii) Neumann boundary conditions (also known as boundary conditions of the second kind), when the values of the normal derivative of the unknown function u are prescribed at each point of the boundary ∂Ω.

(iii) Robin boundary conditions (also known as boundary conditions of the third kind, or mixed boundary conditions), when the values of a linear combination of the unknown function u and its normal derivative are prescribed at each point of the boundary ∂Ω.

Example 11.4 (i) utt = k uxx, 0 < x < l, t > 0,

u(x,0) = f(x), ut(x,0) = g(x), 0 ≤ x ≤ l,

with the values of u prescribed at x = 0 and x = l. It is a Dirichlet boundary value problem.

(ii) utt = k uxx, 0 < x < l, t > 0,

u(x,0) = f(x), ut(x,0) = g(x), 0 ≤ x ≤ l,

with the values of ux prescribed at x = 0 and x = l. It is an example of a Neumann boundary value problem.

(iii) utt = k uxx, 0 < x < l, t > 0,

u(x,0) = f(x), ut(x,0) = g(x), 0 ≤ x ≤ l,

with a linear combination of u and its normal derivative prescribed at x = 0 and x = l. It is a Robin boundary value problem.

The second-order equation

A uxx + 2B uxy + C uyy = f(x, y, u, ux, uy) (11.13)

is said to be hyperbolic, parabolic, or elliptic according as B² - AC > 0, B² - AC = 0, or B² - AC < 0.

The equation

A φx² + 2B φx φy + C φy² = 0 (11.14)

is called the characteristic equation of the partial differential equation (11.13). Solutions of (11.14) are called the characteristics.

Example 11.5 Examine whether the following partial differential equations are hyperbolic, parabolic, or elliptic.

(i) uxx + x uyy + 4u = 0

(ii) uxx + y uyy = 0

(iii) y² uxx - uyy = 0

(iv) uxx + x² uyy = 0

(v) x uxx + 2x uxy + y uyy = 0

Solution (i) A = 1, B = 0, C = x, so B² - AC = -x. Thus the equation is elliptic if x > 0, hyperbolic if x < 0, and parabolic if x = 0.

(ii) A = 1, B = 0, C = y, so B² - AC = -y > 0 if y < 0, and so the equation is hyperbolic if y < 0, elliptic if y > 0, and parabolic if y = 0.

(iii) A = y², B = 0, C = -1.

B² - AC = y² > 0 for all y ≠ 0. Therefore the equation is hyperbolic.

(iv) A = 1, B = 0, C = x², so B² - AC = -x² < 0 for x ≠ 0, and the equation is elliptic.

(v) A = x, B = x, C = y, so B² - AC = x² - xy = x(x - y); the equation is hyperbolic where x(x - y) > 0, parabolic where x(x - y) = 0, and elliptic where x(x - y) < 0.

P(x,y) φx(x,y) + Q(x,y) φy(x,y) = f(x,y)

Hence, Eq (11.21) is reduced to

P(x,y)ux+Q (x,y) uy =R(x,y) (11.22)

where P,Q,R in (11.22) are not the same as in (11.21). The following theorem provides a method for solving (11.22) often called Lagrange’s Method.

Theorem 11.1 The general solution of the linear partial differential equation of first order

Pp + Qq = R, (11.23)

where p = ∂u/∂x, q = ∂u/∂y, and P, Q and R are functions of x, y and u,

is F(φ, ψ) = 0, (11.24)

where F is an arbitrary function and φ(x,y,u) = c1 and ψ(x,y,u) = c2 form a solution of the auxiliary system of equations

dx/P = dy/Q = du/R. (11.25)

Proof: Let φ(x,y,u) = c1 and ψ(x,y,u) = c2 satisfy (11.25); then the equations

φx dx + φy dy + φu du = 0

and

dx/P = dy/Q = du/R

must be compatible, that is, we must have

P φx + Q φy + R φu = 0.

Similarly we must have

P ψx + Q ψy + R ψu = 0.

Solving these equations for P, Q, and R, we have

P / [∂(φ,ψ)/∂(y,u)] = Q / [∂(φ,ψ)/∂(u,x)] = R / [∂(φ,ψ)/∂(x,y)], (11.26)

where ∂(φ,ψ)/∂(y,u) = φy ψu - φu ψy ≠ 0 denotes the Jacobian.

Let F(φ,ψ) = 0. By differentiating this equation with respect to x and y, respectively, we obtain the equations

Fφ (φx + φu p) + Fψ (ψx + ψu p) = 0,

Fφ (φy + φu q) + Fψ (ψy + ψu q) = 0,

and if we now eliminate Fφ and Fψ from these equations, we obtain the equation

p ∂(φ,ψ)/∂(y,u) + q ∂(φ,ψ)/∂(u,x) = ∂(φ,ψ)/∂(x,y).

2 FORMULATION Consider

P(x,y) ux + Q(x,y) uy + f(x,y) u = R(x,y). (11.21)

We can eliminate the term in u from (11.21) by substituting u = v e^(-φ(x,y)), where φ(x,y) satisfies the equation

P(x,y) φx + Q(x,y) φy = f(x,y). (11.27)

Substituting from equations (11.26) into the equation at the end of the proof above, we see that F(φ,ψ) = 0 is a general solution of (11.23). The solution can also be written as

φ = g(ψ) or ψ = h(φ).

Example 11.7 Find the general solution of the partial differential equation y²u p + x²u q = y²x.

Solution: The auxiliary system of equations is

dx/(y²u) = dy/(x²u) = du/(y²x). (11.28)

Taking the first two members we have x² dx = y² dy, which on integration gives x³ - y³ = c1. Again taking the first and third members,

we have x dx = u du,

which on integration gives x² - u² = c2.

Hence, the general solution is

F(x³ - y³, x² - u²) = 0.
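As a numerical cross-check (a sketch added here, not part of the original text): treating the auxiliary system dx/(y²u) = dy/(x²u) = du/(y²x) as ODEs in a parameter s, the two combinations x³ - y³ and x² - u² found above should remain constant along every characteristic curve:

```python
# Integrate the characteristic system of Example 11.7,
#   dx/ds = y^2 u,  dy/ds = x^2 u,  du/ds = y^2 x,
# with a classical RK4 step, and watch the two first integrals.

def rhs(s):
    x, y, u = s
    return (y*y*u, x*x*u, y*y*x)

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5*h*ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5*h*ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + h*ki for si, ki in zip(s, k3)))
    return tuple(si + h*(a + 2*b + 2*c + d)/6.0
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (1.0, 2.0, 3.0)      # x^3 - y^3 = -7,  x^2 - u^2 = -8 initially
for _ in range(1000):
    state = rk4_step(state, 1e-4)
x, y, u = state
print(x**3 - y**3, x**2 - u**2)   # both essentially unchanged
```

Both invariants hold to within the integrator's tiny truncation error, which is exactly what "solution of the auxiliary system" means geometrically.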

11.3.3 Charpit’s Method for solving nonlinear Partial Differential Equation of First-Order

We present here a general method for solving non-linear partial differential equations. This is known as Charpit’s method.

Let

F(x, y, u, p, q) = 0 (11.29)

be a general nonlinear partial differential equation of first order. Since u depends on x and y, we have

du = ux dx + uy dy = p dx + q dy, (11.30)

where p = ux = ∂u/∂x, q = uy = ∂u/∂y.

If we can find another relation between x,y,u,p,q such that

f(x,y,u,p,q)=0 (11.31)

then we can solve (11.29) and (11.31) for p and q and substitute them in equation (11.30). This will give the solution provided (11.30) is integrable.

To determine f, differentiate (11.29) and (11.31) with respect to x and y; eliminating the second-order derivatives of u from the four resulting relations (11.32)-(11.35) leads to Charpit's auxiliary system of equations

dp/(Fx + pFu) = dq/(Fy + qFu) = du/(-pFp - qFq) = dx/(-Fp) = dy/(-Fq). (11.36)

3 SOLUTION

An integral of these equations, involving p or q or both, can be taken as the required relation (11.31). The values of p and q determined from (11.29) and (11.31) will then make (11.30) integrable.

Example 11.8 Find a complete solution of the partial differential equation

p²x + q²y = u. (11.38)

Solution: Let p = ∂u/∂x, q = ∂u/∂y, and write F(x, y, u, p, q) = p²x + q²y - u = 0.

The auxiliary system of equations is

dp/(p² - p) = dq/(q² - q) = du/(-2(p²x + q²y)) = dx/(-2px) = dy/(-2qy), (11.39)

which we obtain from (11.36) by putting in the values of Fx, Fy, Fu, Fp and Fq and multiplying by -1 throughout the auxiliary system. In parametric form the first and fourth expressions in (11.39) give

dx = -2px dt, dp = (p² - p) dt,

so that d(p²x) = 2px dp + p² dx = -2p²x dt. From the second and fifth expressions we get, in the same way, d(q²y) = -2q²y dt. Hence

d(p²x)/(p²x) = d(q²y)/(q²y).

Taking integrals of all terms we get

ln|x| + 2 ln|p| = ln|y| + 2 ln|q| + ln c,

or p²x = cq²y, where c is an arbitrary constant. (11.40)

Solving (11.38) and (11.40) for p and q we get cq²y + q²y - u = 0, so that

(c+1)q²y = u,

q = √(u/((c+1)y)),

p = √(cu/((c+1)x)).

Equation (11.30) takes the following form in this case:

du = √(cu/((c+1)x)) dx + √(u/((c+1)y)) dy,

or

√((c+1)/u) du = √(c/x) dx + dy/√y.

By integrating this equation we obtain

√((c+1)u) = √(cx) + √y + c1, that is, (c+1)u = (√(cx) + √y + c1)².

This is a complete solution.
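As a symbolic cross-check (using SymPy, which the original does not use; the worked steps above are consistent with the equation p²x + q²y = u and the complete solution (c+1)u = (√(cx) + √y + c1)²), the solution can be substituted back into the equation:

```python
import sympy as sp

# Verify that u = (sqrt(c*x) + sqrt(y) + c1)**2 / (c + 1)
# satisfies x*p**2 + y*q**2 = u, with p = u_x and q = u_y.
x, y, c, c1 = sp.symbols('x y c c1', positive=True)
u = (sp.sqrt(c*x) + sp.sqrt(y) + c1)**2 / (c + 1)
p, q = sp.diff(u, x), sp.diff(u, y)
residual = sp.simplify(x*p**2 + y*q**2 - u)
print(residual)  # 0
```

The residual simplifies to zero for all positive x, y and all values of the two parameters c and c1, confirming that this two-parameter family is indeed a complete integral.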

11.3.4 Solutions of special type of partial differential equations

(i) Equations containing p and q only

Let us consider a partial differential equation of the type

F(p,q)=0 (11.41)

The auxiliary system of equations of Charpit's method (Equation (11.36)) takes the form

dp/0 = dq/0 = du/(-pFp - qFq) = dx/(-Fp) = dy/(-Fq),

since Fx = Fy = Fu = 0 here. It is clear that p = c is a solution of these equations. Putting this value of p in (11.41) we have

F(c, q) = 0, (11.42)

so that q = G(c), where c is a constant.

Then observing that

du=cdx+G(c) dy

we get the solution u=cx +G(c) y+c1,

where c1 is another constant.

Example 11.9 Solve p2+q2=1

Solution: Here F(p,q) = p² + q² - 1, and the auxiliary system of equations is

dp/0 = dq/0 = du/(-2(p² + q²)) = dx/(-2p) = dy/(-2q).

Using dp = 0, we get p = c and q = √(1 - c²), and these two combined with du = p dx + q dy yield

u = cx + √(1 - c²) y + c1, which is a complete solution.
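A short SymPy check (an illustration added here; the complete solution is taken to be u = cx + √(1 - c²)y + c1, as derived above) that this two-parameter family really satisfies p² + q² = 1:

```python
import sympy as sp

# u = c*x + sqrt(1 - c**2)*y + c1 should give u_x**2 + u_y**2 = 1.
x, y, c, c1 = sp.symbols('x y c c1')
u = c*x + sp.sqrt(1 - c**2)*y + c1
lhs = sp.simplify(sp.diff(u, x)**2 + sp.diff(u, y)**2)
print(lhs)  # 1
```

Since u_x = c and u_y = √(1 - c²), the left-hand side collapses to c² + (1 - c²) = 1 identically in c.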

Using du = dx/p (from the third and fourth members, since p² + q² = 1), with p = c we get du = dx/c. Integrating the equation we get u = x/c + c1, that is, cu = x + cc1.

Also du = dy/q, where q = √(1 - c²); integrating this equation we get √(1 - c²) u = y + c2.

Replacing cc1 and c2 by -a and -b respectively, and eliminating c between cu = x - a and √(1 - c²) u = y - b, we get

u² = (x - a)² + (y - b)².

This is another complete solution.

(ii) Clairaut equations

The infinity of normals passing through a fixed point generates a cone known as the normal cone. The corresponding tangent planes to the integral surfaces envelope a cone known as the Monge cone. In the case of a linear or a quasilinear equation, the normal cone degenerates into a plane, since each normal is perpendicular to a fixed line. Consider the equation ap + bq = c, where a, b, and c are functions of x, y, and u. Then the direction (p, q, -1) is perpendicular to the direction ratios (a, b, c). This direction is fixed at a fixed point. The Monge cone then degenerates into a coaxial set of planes known as the Monge pencil. The common axis of the planes is the line through the fixed point with direction ratios (a, b, c). This line is known as the Monge axis.


11.4 Solutions of Linear Partial Differential Equation of Second Order with Constant Coefficients

11.4.1 Homogeneous Equations

Let Dx = ∂/∂x and Dy = ∂/∂y.

We are looking to solve equations of the type

(Dx² + k1 DxDy + k2 Dy²) u = 0, (11.45)

where k1 and k2 are constants. (11.45) can be written as

F(Dx, Dy) u = 0. (11.46)

The auxiliary equation of (11.45) (compare with Section 5.5) is Dx² + k1 DxDy + k2 Dy² = 0, treated as a quadratic in Dx/Dy. Let the roots of this equation be m1 and m2, that is, Dx = m1 Dy and Dx = m2 Dy; then equation (11.45) can be written as

(Dx - m1 Dy)(Dx - m2 Dy) u = 0. (11.47)

This implies

(Dx - m2 Dy) u = 0, or p - m2 q = 0.

The auxiliary system of equations for p - m2 q = 0 is of the type

dx/1 = dy/(-m2) = du/0.

This gives us dy = -m2 dx, that is, y + m2 x = constant, so that u = φ(y + m2 x) solves this factor; similarly u = ψ(y + m1 x) solves the other.
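A small SymPy check (an illustration added here, with k1 = 0 and k2 = -1 chosen so the auxiliary equation m² - 1 = 0 has roots m = ±1) that the factored solutions do satisfy the original equation:

```python
import sympy as sp

# k1 = 0, k2 = -1 gives u_xx - u_yy = 0; roots m1 = 1, m2 = -1,
# so u = f(y + x) + g(y - x) should solve it for arbitrary f, g.
x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')
u = f(y + x) + g(y - x)
residual = sp.simplify(sp.diff(u, x, 2) - sp.diff(u, y, 2))
print(residual)  # 0
```

Both second derivatives equal f'' + g'', so the residual vanishes for arbitrary twice-differentiable f and g, exactly as the factorization predicts.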


Here we consider some questions, presented by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as follows.

NEWTONIAN AND LAGRANGIAN

ABSTRACT Binomial coefficients (n choose k): the number of ways in which you can choose k elements from a set of n elements. This equals n! / (k! (n-k)!).

Catalan numbers (1, 2, 5, 14, 42, …): The number of ways you can divide a polygon with N sides into triangles, using non-intersecting diagonals (a triangle has 1 way, a rectangle has 2 ways, a pentagon has 5 ways, a hexagon has 14 ways, and so on). The Catalan numbers can be computed using the formula Cn = (2n)! / (n! (n+1)!).

Fibonacci numbers (1, 1, 2, 3, 5, 8, …): A series in which the first two numbers are 1 and each subsequent number is the sum of the preceding two numbers.

Hexagonal numbers (1, 6, 15, 28, …): Numbers that can be represented as the number of points on the perimeter of a hexagon with a constant number of points on each edge. These are given by the formula n(2n - 1).

Pentatope numbers (1, 5, 15, 35, 70, …) A figurate number (a number that can be represented by a regular geometric arrangement of equally spaced points) given by:

Ptopn = (1/4)Tn(n+3) = (1/24) n (n+1) (n+2) (n+3)

for tetrahedral number Tn. Note: pentatopes are 4-dimensional analogs of tetrahedra.

Sierpinski’s triangle: a famous fractal formed by repeatedly connecting the midpoints of triangles.

Tetrahedral numbers (1, 4, 10, 20, …): a figurate number formed by placing discrete points in a tetrahedron (triangular base pyramid). The formula is given by: n(n+1)(n+2)/6.

Triangular numbers (1, 3, 6, 10, …): The number of dots you need to form a triangle.
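The closed forms listed above can be checked directly. The following sketch generates the opening terms of each sequence (the Catalan formula Cn = C(2n, n)/(n+1) is the standard one, supplied here because the original formula did not survive extraction):

```python
from math import comb

# Closed forms for the figurate sequences above (n = 1, 2, 3, ...)
def catalan(n):     return comb(2*n, n) // (n + 1)
def hexagonal(n):   return n * (2*n - 1)
def pentatope(n):   return n * (n + 1) * (n + 2) * (n + 3) // 24
def tetrahedral(n): return n * (n + 1) * (n + 2) // 6
def triangular(n):  return n * (n + 1) // 2

def fibonacci(k):
    """First k Fibonacci numbers, starting 1, 1."""
    seq, a, b = [], 1, 1
    for _ in range(k):
        seq.append(a)
        a, b = b, a + b
    return seq

print([catalan(n) for n in range(1, 6)])      # [1, 2, 5, 14, 42]
print([hexagonal(n) for n in range(1, 5)])    # [1, 6, 15, 28]
print([pentatope(n) for n in range(1, 6)])    # [1, 5, 15, 35, 70]
print([tetrahedral(n) for n in range(1, 5)])  # [1, 4, 10, 20]
print([triangular(n) for n in range(1, 5)])   # [1, 3, 6, 10]
print(fibonacci(6))                           # [1, 1, 2, 3, 5, 8]
```

Each printed prefix matches the terms quoted in the list above.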



Here we consider, as presented by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, the following.

DEFINITION FOR LAGRANGIAN

ABSTRACT

In considering the future of teacher education at the present time, I believe that it is relevant to consider the wider social and political context in which schools and institutions of teacher education are placed at this time. In particular I wish to draw attention to what Prime Minister Tony Blair had to say in his speech to the 1998 Labour Party Conference, where he argued that:

The centre-left may have lost in the battle of ideas in the 1980s, but we are winning now. And we have won a bigger battle today: the battle of values. The challenge we face has to be met by us together: one nation, one community.

When a young black student, filled with talent, is murdered by racist thugs, and Stephen Lawrence becomes a household name not because of the trial into his murder but because of inquiry into why his murderers are walking free, it isn’t just wrong: it weakens the very bonds of decency and respect we need to make our country strong. We stand stronger together.

But where is Mr. Blair’s vision of ‘the battle of values’ when it comes to education policy? Whilst accepting a need to improve levels of achievement, I want to argue that there is a lack of vision in relation to ‘values’ in education at the present time. Further I propose that the reasons for this lie in part in the recent social, political and historical context in relation to the National Curriculum and also, in the perspective of some of the key government agencies, such as the TTA. In particular there are problems about the way in which the National Standards for teacher education have been prescribed.

A SHORT OVERVIEW OF THE RECENT HISTORICAL CONTEXT (1)

Consideration of the recent changes that have taken place in teacher education cannot be made in isolation from those happening in the National Curriculum for schools in England and Wales, just as any future changes to the school curriculum will imply corresponding changes for teacher education. Following a systematically orchestrated campaign from right wing pressure groups throughout the 1980s, political intervention in the school curriculum reached a high point at the Conservative Party conference in 1988, with the famous statement from Prime Minister Margaret Thatcher:

Children who needed to count and multiply were learning anti-racist mathematics – whatever that might be.

It was in such a climate that the proposals were put to the Secretary of State for Education, Kenneth Baker, on the composition of the mathematics curriculum. These proposals stated that it was unnecessary to include any ‘multicultural’ aspects in any of the attainment targets. This position was supported by arguing that those proposing such an approach with a view to raising the self-esteem of ethnic minority pupils and to improving mutual understanding and tolerance between races were afflicted with an attitude that was ‘misconceived and patronising’. Tooley’s (1990) support for such a position and his associated critique of arguments put by those in the mathematics education community he labelled as ‘multiculturalists’ is both misleading and flawed in several respects. He misleads by his mischievous suggestion that the ‘multiculturalists’ wished to dictate to teachers: e.g. he asserts that the failure to ‘compel “multicultural” examples’ is not ‘a great handicap’ of the National Curriculum. In fact the pressure at that time was in precisely the opposite direction, and compulsion was never part of the agenda of the so-called ‘multiculturalists’ in the first place. In his reflections on this context Woodrow (1996) has pointed out that the concerns of teachers, following the Swann Report and the tragedy leading up to the MacDonald report, have been dissipated as a result of the introduction of a National Curriculum in which there is ‘no internationalism … no celebration of a pluralist culture and no sense of diversity’. Tooley also creates a false dichotomy between those teachers ‘who prefer to raise the political consciousness of their pupils, rather than their mathematical attainment’. Can it not be the case that teachers of mathematics can do both? Is there not a case to be made for considering the contribution teachers of mathematics might make in terms of citizenship and democracy?

Set against this background it is not surprising that the debate around issues of social justice and equal opportunities in the classroom came to wither on the vine during the last decade in England and Wales. Further it is not surprising that schools are now seen, by the African and Caribbean Network of Science and Technology (ACNST), to be failing black pupils in the ‘status and power’ subjects of science, mathematics and technology (Ghouri, 1998b). It does seem that Tooley’s (1990) expectation that the National Curriculum proposals ‘have the potential to tackle that problem’ of underachievement has proved to be unfounded. Rather, the ACNST research points towards the lack of role models for young black people, e.g. ‘black British scientists’. It does seem that Tooley is also mistaken in his view that the use of exotic stereotypes such as the San people of the Kalahari desert is an appropriate and sufficient level of response to this problem.

THE PROBLEM (1)

My starting point in the debate is to agree with those critics such as Sir Herman Ousley and the ACNST that indeed there is a problem with both the National Curriculum for schools and with the principles and practices underpinning the system of teacher education at this time. In particular, I argue that the heart of the problem is the lack of a shared sense of purpose about the aims of education in this country, which has given rise to conflicting interpretations by government agencies that appear to contradict each other.

It would seem that the issue of the lack of a shared sense of purpose is recognised by the Qualifications and Curriculum Authority (QCA) in identifying working practices on policy formulation as an area in need of reform and in drawing attention to the wider international context. In relation to the nat

1 INTRODUCTION

The lack of a shared sense of purpose in our education system stands in sharp contrast to the sense of consensus that can be seen in other educational systems. For example with regard to the notion of Didaktik in the German and Scandinavian traditions the overall aim of the education system is that of ‘Gebildete’ which can be broadly translated as ‘educated personality’. This means, for example, fostering a sense of egalitarianism and having a curriculum that relates to the central problems of living and is relevant to the key problems of society. There is an emphasis here on attitudes and values, which seems to be singularly lacking from the UK context. This aspect has been highlighted by Moon (1998) who argues that ‘standards do not exist in a vacuum’ and that the imposition of standards without values ‘can easily become standardisation, the very process that a vibrant and dynamic culture has to avoid’. He highlights the educational systems of Scotland, Germany, France, USA and South Africa to illustrate the willingness in these countries to ‘link the education of teachers to a values system’. So for example, the Scottish model, which was developed following extensive consultation, includes in its guidelines ‘a set of attitudes that have particular power in that they are communicated to those being taught’. Included in these is:

• a commitment to views of fairness and equality of opportunity as expressed in multi-cultural and other non-discriminatory policies.

In the South African context the task of reconstructing the education system was preceded by widespread consultation over values, which led to the identification of five core ‘socio-political’ values and five core ‘pedagogical’ values. The former consist of ‘democracy, liberty, equality, justice and peace’ and the latter are made up of ‘relevance, learner-centredness, professionalism, co-operation and collegiality and innovation’. With regard to equality and justice in particular there is specific reference to equity, redress, affirmative action and the removal of gender and racial bias. The contrast with the situation in England and Wales at the present time is stark.

THE WAY FORWARD (1)

In my view the issues that need to be addressed for the future fall under three categories of need:

• for the development of a shared sense of purpose about the aims and values of education, as these relate to both schools and teacher education

• to reform working practices between the various stakeholders in the way in which policy is developed

• to reconceptualise the notion of teacher competence as currently set out in the National Standards

The first of these has been the main focus of this paper. However the starting point for such a project would be a key factor in developing such a shared sense of purpose. A rightful concern would be around the question of ‘Whose values?’ and also of the threat of the imposition of an authoritarian agenda. However one might look to the communitarian agenda for a starting point and in particular to Etzioni (1995) who argues that we might start with those values that are widely shared. These include that ‘the dignity of all persons ought to be respected, that tolerance is a virtue and discrimination abhorrent and that peaceful resolution of conflicts is superior to violence’.


Here we consider some aspects, presented by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, concerning the following questions.

DEFINITION AND NOTIONS OF LAGRANGIAN

ABSTRACT

1. Area of a Triangle = bh/2

2. Pythagorean Theorem: Euclid’s Windmill Proof

3. Pythagorean Theorem: Chinese Proof (or perhaps the Indian mathematician Bhaskara’s)

4. Pythagorean Theorem: President Garfield’s Trapezoid Proof

5. The Distance Formula: derived from Pythagorean Theorem!

6. Fermat’s Last Theorem: x^n + y^n = z^n has no positive integral solutions for n > 2.

Proven recently by Andrew Wiles (omitted here for lack of room in the margin).

7. Hypotenuses in a “square root spiral” are of length sqrt 2, sqrt 3, sqrt 4, sqrt 5,…

(Inductive proof)

8. The square root of 2 is irrational.

a. (p/q)² = 2 ⇒ p² = 2q²; then apply the Fundamental Theorem of Arithmetic: the left-hand side has an even number of prime factors, the right-hand side an odd number. QED.

b. (p/q)² = 2 ⇒ p² = 2q² ⇒ p is even ⇒ … ⇒ q is even, QED.

9. “Nearly all” real numbers are irrational!

a. The integers are countable (as are evens, primes, powers of 10, …)

b. Integer pairs – Z2 – are countable (dovetailing!)

c. Integer triplets, etc – Z3 , Z4,… – are countable.

d. Rationals are countable.

e. Algebraics are countable.

f. Reals are not countable (diagonalization!)

g. Thus, “nearly all” reals are irrational (even non-algebraic, hence transcendental!)

7. The Circle Problem and Pascal’s Triangle

a. How many intersections of chords connecting N vertices?

b. How does this relate to Pascal’s Triangle?

8. Patterns in Pascal’s Triangle (see http://www.kosbie.net/lessonPlans/pascalsTriangle/)

a. Simple Patterns

i. Natural Numbers (1,2,3,4…)

ii. Triangular Numbers (1,3,6,10,…)

iii. Binomial Coefficients (nCk) Pascal’s Binomial Theorem

iv. Tetrahedral Numbers (1,4,10,20,…)

v. Pentatope Numbers (1,5,15,35,70…)

b. More Challenging Patterns

i. Powers of 2 (2,4,8,16,…)

ii. Hexagonal Numbers (1,6,15,28,…)

iii. Fibonacci Numbers (1,1,2,3,5,8,…) Prove This!

iv. Sierpinski’s Triangle

v. Catalan Numbers (1,2,5,14,42,…) Prove This!

vi. Powers of 11 (11, 121, 1331, 14641,…)

9. Applications of the Binomial Theorem

a. Find the coefficient of x^3 in (x + 5)^3

b. Prove: nC0 + nC1 + … + nCn = 2^n (Hint: 2 = 1 + 1, so what does 2^n equal?)
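Part (b)'s identity follows from setting x = a = 1 in the binomial expansion, and part (a)'s coefficient falls out of the same expansion; a small numerical check of both:

```python
from math import comb

# Row n of Pascal's triangle sums to 2**n, since 2**n = (1 + 1)**n
# is the binomial expansion of (x + a)**n at x = a = 1.
for n in range(8):
    assert sum(comb(n, k) for k in range(n + 1)) == 2**n

# Part (a): the x**3 term of (x + 5)**3 is comb(3, 0) * x**3 * 5**0,
# so its coefficient is 1.
print(comb(3, 0) * 5**0)  # 1
```

The loop passing silently for n = 0..7 is the numerical face of the hint 2 = 1 + 1.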

Summer Math Series: Week 4

10. π = C/D (by observation, since Babylonian times, when π ≈ 3.125)

11. Area of a Regular Polygon = ½hQ (½ × apothem × perimeter)

12. Archimedes’ Proof that A = πr2

a. Approximate circle with inscribed (2n)-gons

b. Rephrasing of argument on page 93:

i. Apolygon = ½hQ, but:

1. As sides → ∞, h → r (apothem → radius)

2. As sides → ∞, Q → C (perimeter → circumference)

ii. So:

As sides → ∞, Apolygon = ½hQ → ½rC = Acircle

(area of polygon → area of circle)

c. Last step (p. 96): combine:

i. A = ½rC with C = 2πr, giving A = ½r(2πr) = πr²
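Archimedes' limiting argument can be replayed numerically (a sketch with a unit circle; the polygon area is computed exactly as ½ × apothem × perimeter, as in items 11 and 12):

```python
import math

# Inscribed regular n-gon in a circle of radius r:
#   apothem   h = r*cos(pi/n)   ->  r   as n grows
#   perimeter Q = 2*n*r*sin(pi/n) -> C = 2*pi*r as n grows
# so the area (1/2)*h*Q tends to pi*r**2.
def polygon_area(n, r=1.0):
    h = r * math.cos(math.pi / n)
    Q = 2 * n * r * math.sin(math.pi / n)
    return 0.5 * h * Q

for n in (6, 24, 96, 6144):
    print(n, polygon_area(n))
```

The areas increase toward π (for r = 1) from below, exactly the monotone squeeze Archimedes exploited with his doubling of sides.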

1 INTRODUCTION

A high-quality mathematics program is essential for all students and provides every student with the opportunity to choose among the full range of future career paths. Mathematics, when taught well, is a subject of beauty and elegance, exciting in its logic and coherence. It trains the mind to be analytic – providing the foundation for intelligent and precise thinking.

To compete successfully in the worldwide economy, today’s students must have a high degree of comprehension in mathematics. For too long schools have suffered from the notion that success in mathematics is the province of a talented few. Instead, a new expectation is needed: all students will attain California’s mathematics academic content standards, and many will be inspired to achieve far beyond the minimum standards.

These content standards establish what every student in California can and needs to learn in mathematics. They are comparable to the standards of the most academically demanding nations, including Japan and Singapore – two high-performing countries in the Third International Mathematics and Science Study (TIMSS). Mathematics is critical for all students, not only those who will have careers that demand advanced mathematical preparation but all citizens who will be living in the twenty-first century. These standards are based on the premise that all students are capable of learning rigorous mathematics and learning it well, and all are capable of learning far more than is currently expected. Proficiency in most of mathematics is not an innate characteristic; it is achieved through persistence, effort, and practice on the part of students and rigorous and effective instruction on the part of teachers. Parents and teachers must provide support and encouragement.

The standards focus on essential content for all students and prepare students for the study of advanced mathematics, science and technical careers, and postsecondary study in all content areas. All students are required to grapple with solving problems; develop abstract, analytic thinking skills; learn to deal effectively and comfortably with variables and equations; and use mathematical notation effectively to model situations. The goal in mathematics education is for students to:

Develop fluency in basic computational skills.

Develop an understanding of mathematical concepts.

Become mathematical problem solvers who can recognize and solve routine problems readily and can find ways to reach a solution or goal where no routine path is apparent.

Communicate precisely about quantities, logical relationships, and unknown values through the use of signs, symbols, models, graphs, and mathematical terms.

Reason mathematically by gathering data, analyzing evidence, and building arguments to support or refute hypotheses.

Make connections among mathematical ideas and between mathematics and other disciplines.

The standards identify what all students in California public schools should know and be able to do at each grade level. Nevertheless, local flexibility is maintained with these standards. Topics may be introduced and taught at one or two grade levels before mastery is expected. Decisions about how best to teach the standards are left to teachers, schools, and school districts.

The standards emphasize computational and procedural skills, conceptual understanding, and

Adopted by the California State Board of Education December 1997

problem solving. These three components of mathematics instruction and learning are not

separate from each other; instead, they are intertwined and mutually reinforcing.

Basic, or computational and procedural, skills are those skills that all students should learn to use routinely and automatically. Students should practice basic skills sufficiently and frequently enough to commit them to memory.

Mathematics makes sense to students who have a conceptual understanding of the domain. They know not only how to apply skills but also when to apply them and why they should apply them. They understand the structure and logic of mathematics and use the concepts flexibly, effectively, and appropriately. In seeing the big picture and in understanding the concepts, they are in a stronger position to apply their knowledge to situations and problems they may not have encountered before and readily recognize when they have made procedural errors.

The mathematical reasoning standards are different from the other standards in that they do not represent a content domain. Mathematical reasoning is involved in all strands.

The standards do not specify how the curriculum should be delivered. Teachers may use direct instruction, explicit teaching, knowledge-based, discovery-learning, investigatory, inquiry based, problem solving-based, guided discovery, set-theory-based, traditional, progressive, or other methods to teach students the subject matter set forth in these standards. At the middle and high school levels, schools can use the standards with an integrated program or with the traditional course sequence of algebra I, geometry, algebra II, and so forth.

Schools that uti

2 FORMULATION

The standards for grades eight through twelve are organized differently from those for kindergarten through grade seven. Strands are not used for organizational purposes because the mathematics studied in grades eight through twelve falls naturally under the discipline headings algebra, geometry, and so forth. Many schools teach this material in traditional courses; others teach it in an integrated program. To allow local educational agencies and teachers flexibility, the standards for grades eight through twelve do not mandate that a particular discipline be initiated and completed in a single grade. The content of these disciplines must be covered, and students enrolled in these disciplines are expected to achieve the standards regardless of the sequence of the disciplines.

Mathematics Standards and Technology

As rigorous mathematics standards are implemented for all students, the appropriate role of technology in the standards must be clearly understood. The following considerations may be used by schools and teachers to guide their decisions regarding mathematics and technology:

Students require a strong foundation in basic skills. Technology does not replace the need for all students to learn and master basic mathematics skills. All students must be able to add, subtract, multiply, and divide easily without the use of calculators or other electronic tools. In addition, all students need direct work and practice with the concepts and skills underlying the rigorous content described in the Mathematics Content Standards for California Public Schools so that they develop an understanding of quantitative concepts and relationships. The students’ use of technology must build on these skills and understandings; it is not a substitute for them.

Technology should be used to promote mathematics learning. Technology can help promote students’ understanding of mathematical concepts, quantitative reasoning, and achievement when used as a tool for solving problems, testing conjectures, accessing data, and verifying solutions. When students use electronic tools, databases, programming language, and simulations, they have opportunities to extend their comprehension, reasoning, and problem-solving skills beyond what is possible with traditional print resources. For example, graphing calculators allow students to see instantly the graphs of complex functions and to explore the impact of changes. Computer-based geometry construction tools allow students to see figures in three-dimensional space and experiment with the effects of transformations. Spreadsheet programs and databases allow students to key in data and produce various graphs as well as compile statistics. Students can determine the most appropriate ways to display data and quickly and easily make and test conjectures about the impact of change on the data set. In addition, students can exchange ideas and test hypotheses with a far wider audience through the Internet. Technology may also be used to reinforce basic skills through computer-assisted instruction, tutoring systems, and drill-and-practice software.

The focus must be on mathematics content. The focus must be on learning mathematics, using technology as a tool rather than as an end in itself. Technology makes more mathematics accessible and allows one to solve mathematical problems with speed and efficiency. However, technological tools cannot be used effectively without an understanding of mathematical skills, concepts, and relationships. As students learn to use electronic tools, they must also develop the quantitative reasoning necessary to make full use of those tools. They must also have opportunities to reinforce their estimation and mental math skills and the concept of place value so that they can quickly check their calculations for reasonableness and accuracy.

Technology is a powerful tool in mathematics. When used appropriately, technology may help students develop the skills, knowledge, and insight necessary to meet rigorous content standards in mathematics and make a successful transition to the world beyond school. The challenge for educators, parents, and policymakers is to ensure that technology supports, rather than replaces, the learning of mathematics.

Here indeed we have something important, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, as follows:

INTERPOLATION FOR LAGRANGIAN

ABSTRACT

11. Derive the Quadratic Formula by Completing the Square (coming soon: cubics, quartics!)

12. The Locker Problem (coming soon: Fundamental Thm of Arithmetic, number theory)

13. Prove the Binomial Theorem (by induction!) (coming soon: more number theory!)

a. Hint: Using the construction method of Pascal’s Triangle, find a recursive definition of nCk

b. Application: prove: nC0 + nC1 + … + nCn = 2^n
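The recursion in 13a and the identity in 13b can be checked mechanically before proving them; a minimal sketch (the function name `nCk` is mine):

```python
from math import comb

def nCk(n, k):
    """Recursive definition read off Pascal's Triangle: every entry is the
    sum of the two entries directly above it."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return nCk(n - 1, k - 1) + nCk(n - 1, k)

for n in range(8):
    # every entry matches the closed-form binomial coefficient ...
    assert all(nCk(n, k) == comb(n, k) for k in range(n + 1))
    # ... and each row sums to 2^n: nC0 + nC1 + ... + nCn = 2^n
    assert sum(nCk(n, k) for k in range(n + 1)) == 2 ** n
print("rows 0-7 verified")
```

The recursion nCk = (n-1)C(k-1) + (n-1)Ck is exactly the rule used to build each row of the triangle from the row above.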

Summer Math Series: Week 3

14. Pascal’s Triangle and Pascal’s Binomial Theorem

a. nCk = kth value in nth row of Pascal’s Triangle! (Proof by induction)

b. Rows of Pascal’s Triangle = coefficients in (x + a)^n. That is: (x + a)^n = nC0 x^n + nC1 x^(n-1) a + … + nCn a^n

15. The Circle Problem and Pascal’s Triangle

a. How many intersections of chords connecting N vertices?

b. How does this relate to Pascal’s Triangle?
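For 15a, each interior crossing of two chords picks out exactly 4 of the N vertices, so the count is C(N, 4). A brute-force check, assuming the points are in general position (the helper name is mine):

```python
from itertools import combinations
from math import comb

def chord_intersections(n):
    """Count interior crossings among all chords of n points on a circle
    in general position: chords (a,b) and (c,d) cross iff exactly one of
    c, d lies between a and b around the circle."""
    chords = list(combinations(range(n), 2))
    crossings = 0
    for (a, b), (c, d) in combinations(chords, 2):
        if len({a, b, c, d}) < 4:
            continue  # chords sharing an endpoint do not cross inside
        if (a < c < b) != (a < d < b):
            crossings += 1
    return crossings

# Each crossing corresponds to one 4-subset of the vertices, so the answer is C(n, 4)
for n in range(4, 9):
    assert chord_intersections(n) == comb(n, 4)
print("crossings match C(n, 4) for n = 4..8")
```

This is the relation to Pascal's Triangle asked for in 15b: C(N, 4) is the 4th entry of row N.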

16. Patterns in Pascal’s Triangle (see http://www.kosbie.net/lessonPlans/pascalsTriangle/)

a. Simple Patterns

i. Natural Numbers (1,2,3,4…)

ii. Triangular Numbers (1,3,6,10,…)

iii. Binomial Coefficients (nCk) Pascal’s Binomial Theorem

iv. Tetrahedral Numbers (1,4,10,20,…)

v. Pentatope Numbers (1,5,15,35,70…)

b. More Challenging Patterns

i. Powers of 2 (2,4,8,16,…)

ii. Hexagonal Numbers (1,6,15,28,…)

iii. Fibonacci Numbers (1,1,2,3,5,8,…) Prove This!

iv. Sierpinski’s Triangle

v. Catalan Numbers (1,2,5,14,42,…) Prove This!

vi. Powers of 11 (11, 121, 1331, 14641,…)
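Two of the patterns above can be verified mechanically; a sketch (the helper `diagonal_sum`, my name for the shallow-diagonal sums, is an assumption about which diagonals are meant):

```python
from math import comb

def diagonal_sum(n):
    """Sum along a shallow diagonal of Pascal's Triangle: C(n,0) + C(n-1,1) + ..."""
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

# Fibonacci numbers 1, 1, 2, 3, 5, 8, ... appear as shallow-diagonal sums
fib = [1, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])
assert [diagonal_sum(n) for n in range(10)] == fib

# Powers of 11: row n read as digits gives 11^n while all entries are single digits
for n in range(5):
    assert int("".join(str(comb(n, k)) for k in range(n + 1))) == 11 ** n
print("Fibonacci diagonals and powers of 11 verified")
```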

17. Applications of the Binomial Theorem

a. Find the coefficient of x^3 in (x + 5)^3

b. Prove: nC0 + nC1 + … + nCn = 2^n (Hint: 2 = 1 + 1, so what does 2^n equal?)


14. Archimedes’ Approximation of π

a. Inscribe regular polygons, repeatedly doubling the number of sides (n-gon → 2n-gon)
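A numerical sketch of the doubling idea: starting from an inscribed hexagon, the half-perimeter approaches π. Archimedes worked with exact rational bounds; this floating-point version only illustrates the convergence:

```python
from math import sqrt

# Start from a regular hexagon inscribed in a unit circle (side length 1).
# If s is the side of the inscribed n-gon, the side of the 2n-gon is
# sqrt(2 - sqrt(4 - s*s)); the half-perimeter n*s/2 approaches pi.
sides, s = 6, 1.0
for _ in range(5):                      # 6 -> 12 -> 24 -> 48 -> 96 -> 192
    s = sqrt(2.0 - sqrt(4.0 - s * s))
    sides *= 2
    print(sides, "sides: pi ~", sides * s / 2)
assert abs(sides * s / 2 - 3.141592653589793) < 1e-3
```

Archimedes stopped at 96 sides, which already pins π between 3 10/71 and 3 1/7.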

15. Newton’s Binomial Theorem

a. Generalization to fractional and negative powers:

b. (P + PQ)^(m/n) = P^(m/n) + (m/n)AQ + ((m - n)/(2n))BQ + ((m - 2n)/(3n))CQ + …

where A, B, C, … represent the immediately preceding terms,

so B = (m/n)AQ, C = ((m - n)/(2n))BQ, …

c. After some algebra:

(1 + Q)^(m/n) = 1 + (m/n)Q + ((m/n)((m/n) - 1)/2) Q^2 + ((m/n)((m/n) - 1)((m/n) - 2)/(3·2)) Q^3 + …

d. That is: (1 + Q)^α = Σ_k C(α, k) Q^k, where C(α, k) = α(α - 1)⋯(α - k + 1)/k! and α = m/n

16. Applications of Newton’s Binomial Theorem

a. 1/(1 + x)^3 = 1 - 3x + 6x^2 - 10x^3 + 15x^4 - …

b. sqrt(1 - x) = 1 - (1/2)x - (1/8)x^2 - (1/16)x^3 - (5/128)x^4 - …

c. So, sqrt(7) = 3 sqrt(1 - 2/9): a fast approximation for square roots!

d. Also cube roots, etc., since (1 - x)^(1/3) can be expanded this way, too…
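The square-root trick in item c can be checked numerically with a truncated binomial series (the helper name and the choice of 12 terms are mine):

```python
def binom_series(alpha, x, terms=12):
    """Partial sum of Newton's binomial series for (1 + x)^alpha, using the
    term recurrence t_{k+1} = t_k * (alpha - k) / (k + 1) * x."""
    total, t = 0.0, 1.0
    for k in range(terms):
        total += t
        t *= (alpha - k) / (k + 1) * x
    return total

# sqrt(7) = 3*sqrt(1 - 2/9); the series converges fast because |x| = 2/9 is small
approx = 3.0 * binom_series(0.5, -2.0 / 9.0)
assert abs(approx - 7 ** 0.5) < 1e-9
print("sqrt(7) ~", approx)
```

The same routine with alpha = 1/3 handles cube roots, and with alpha = -3 reproduces the alternating series in item a.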

Summer Math Series: Week 5

17. Newton’s Calculus (“Fluxions” from De Analysi; see Dunham pp. 171-3)

a. f(x) = x^2 ⇒ f′(x) = 2x

i. What this means graphically (max/min of f(x) = x^2)

b. f(x) = a g(x) ⇒ f′(x) = a g′(x)

c. f(x) = g(x) + h(x) ⇒ f′(x) = g′(x) + h′(x)

d. f(x) = x^a ⇒ f′(x) = a x^(a-1)

e. General derivative of a polynomial:

f(x) = a0 x^0 + … + an x^n ⇒ f′(x) = a1 x^0 + 2 a2 x^1 + 3 a3 x^2 + … + n an x^(n-1)
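The polynomial rule in item e translates directly into code; a minimal sketch (the coefficient-list representation and the function name are mine):

```python
def poly_derivative(coeffs):
    """coeffs[k] is the coefficient of x^k in f; returns the coefficients of f'.
    Implements: f(x) = a0 + a1 x + ... + an x^n  =>  f'(x) = a1 + 2 a2 x + ..."""
    return [k * a for k, a in enumerate(coeffs)][1:]

# f(x) = 1 + 3x + 5x^2 + 2x^3  =>  f'(x) = 3 + 10x + 6x^2
assert poly_derivative([1, 3, 5, 2]) == [3, 10, 6]
```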

f. Homework:

i. Prove the Product Rule:

f(x) = g(x) h(x) ⇒ f′(x) = g′(x)h(x) + h′(x)g(x)

ii. Prove the Chain Rule:

f(x) = g(h(x)) ⇒ f′(x) = g′(h(x)) h′(x)

iii. Prove the Quotient Rule:

f(x) = g(x) / h(x) ⇒ f′(x) = (g′(x)h(x) - h′(x)g(x)) / h(x)^2

Hint: rewrite as f(x) = g(x) h(x)^(-1) and use the Product and Chain Rules.
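All three homework rules can be sanity-checked numerically with a central finite difference before proving them; a sketch (the test point and tolerances are my choices):

```python
from math import sin, cos

def deriv(f, x, h=1e-6):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

g, dg = sin, cos                      # g and its known derivative
h_, dh = cos, lambda x: -sin(x)       # h and its known derivative
x = 0.7

# Product Rule: (g h)' = g' h + h' g
assert abs(deriv(lambda t: g(t) * h_(t), x) - (dg(x) * h_(x) + dh(x) * g(x))) < 1e-8
# Chain Rule: g(h(x))' = g'(h(x)) h'(x)
assert abs(deriv(lambda t: g(h_(t)), x) - dg(h_(x)) * dh(x)) < 1e-8
# Quotient Rule: (g/h)' = (g' h - h' g) / h^2
assert abs(deriv(lambda t: g(t) / h_(t), x)
           - (dg(x) * h_(x) - dh(x) * g(x)) / h_(x) ** 2) < 1e-8
print("all three rules hold numerically at x = 0.7")
```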

g. Some other useful derivatives:

i. f(x) = cos(x) ⇒ f′(x) = -sin(x)

ii. f(x) = sin(x) ⇒ f′(x) = cos(x)

iii. f(x) = e^x ⇒ f′(x) = e^x

iv. Homework: Find the derivatives of tan(x), cot(x), sec(x), csc(x)

h. Integral calculus and Newton’s Physics

i. Constant acceleration: a(t) = a0 (for free fall, a0 = 9.8 m/s^2)

ii. v′(t) = a(t) ⇒ v(t) = a0 t + v0

1. Why did Newton assume that v′(t) = a(t)?

2. Why does this imply that v(t) = a0 t + v0?

3. How fast is a free-falling object moving after 5 seconds?

iii. s′(t) = v(t) ⇒ s(t) = ½ a0 t^2 + v0 t + s0

1. Why did Newton assume that s′(t) = v(t)?

2. Why does this imply that s(t) = ½ a0 t^2 + v0 t + s0?

3. How far did that free-falling object travel in 5 seconds?

i. Find local maxima and minima by solving f′(x) = 0
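The two numerical questions in h.ii.3 and h.iii.3 follow by plugging into the formulas just derived; free fall from rest is assumed:

```python
# Free fall from rest (v0 = 0, s0 = 0) with constant a0 = 9.8 m/s^2, using
# v(t) = a0*t + v0 and s(t) = (1/2)*a0*t^2 + v0*t + s0 from ii and iii above.
a0, v0, s0 = 9.8, 0.0, 0.0

def v(t):
    return a0 * t + v0

def s(t):
    return 0.5 * a0 * t ** 2 + v0 * t + s0

print("speed after 5 s:", v(5.0), "m/s")       # about 49 m/s
print("distance after 5 s:", s(5.0), "m")      # about 122.5 m
```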

Sure, here we see some questions, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, concerning the following.

Also, indeed, we see that these appear, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu note, for some aspects, as follows.

LAGRANGIAN OPERATOR AND APPLICATIONS

ABSTRACT

A closer examination, which we cannot carry out in more detail here, of the peculiar structure of our mind, with this openness inherent in it, leads us first of all to think that behind that mystery, the inexpressible, what seems mysterious to me, in Wittgenstein’s words, which grounds our knowledge, there cannot be nothing, for nothing gives rise to nothing; rather, there must be something that exists, though with a form of existence very different from the one we ourselves experience. Some of the implications of this transcendental experience can perhaps be summarized as follows.

The “Caius Iacob” International Mathematics and Informatics Competition, organized … The award ceremony took place in the Conference Hall of the “Aurel …” University.

Since α″ is always normal to M, the dot product of α″ with any vector in T_pM is zero. In particular, α″ · α′ = 0. Let s(t) be the speed of α at t, so s(t)^2 = α′(t) · α′(t). Differentiating with respect to t,

2 s(t) s′(t) = 2 α″(t) · α′(t) = 0.    (1)

This shows that s(t) is a constant function, and so a geodesic must be a constant-speed curve. In fact, we can parameterize any geodesic so that it is unit-speed.

Given an orthogonal coordinate patch x in a geometric surface M, geodesics can be defined by differential equations called, appropriately, the geodesic equations. Consider a curve α in M. Express α(t) = x(u(t), v(t)). Then α′ = u′ x_u + v′ x_v, and so the requirement that α″ be everywhere normal to M reduces to a pair of second-order differential equations in u(t) and v(t): the geodesic equations.    (2)

Here we consider important aspects, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, as follows:

APPLICATIONS OF LAGRANGIAN FOR NONHOLONOMIC QUESTIONS

ABSTRACT

Theorem 13 (Rolle’s theorem):

Let f be continuous on [a, b] and differentiable on (a, b). If f(a) = f(b), then there exists at least one number c in (a, b) at which f′(c) = 0.

[Intuitions:]

(1) f takes values greater than f(a) somewhere in (a, b)

(2) f takes values less than f(a) somewhere in (a, b)

(3) f is constant on [a, b]

If (3) holds (figure), f is constant in [a, b]. Thus, f′(x) = 0 for every x in (a, b). If
(1) or (2) holds, suppose f takes on some values greater than f(a) in (a, b) (the other case is similar). Intuitively, there is then a number c in (a, b) such that f(c) = M, where M is the maximum value of f in [a, b]. Then, f′(c) = 0.

◆

Theorem 14 (mean-value theorem):

Let f be continuous on [a, b] and differentiable on (a, b). Then there exists at least one number c in (a, b) at which

f′(c) = (f(b) - f(a)) / (b - a).

[justifications of theorem 14:]

Consider the secant line through (a, f(a)) and (b, f(b)),

g(x) = f(a) + ((f(b) - f(a)) / (b - a)) (x - a).

Then, let h(x) = f(x) - g(x). Since

h(a) = h(b) = 0,

by Rolle’s theorem, there is a number c in (a, b) such that h′(c) = 0, that is, f′(c) = (f(b) - f(a)) / (b - a).

◆

The fundamental theorem of calculus is the core of calculus. The following theorem makes precise the connection between derivatives and definite integrals.

Thus, on a given contour the functions are discontinuous, with discontinuities of the first kind. In this case one considers a smooth contour such that the functions defined on it do not vanish except at a finite number of points.

Another problem which is open, and which refers to nonholonomic constraints, is that of noncommutative geodesics in the motion of a system of material points. Under certain conditions in the above, the temperature and the thermal conductivity appear, then the relaxation time, and also the density and the specific heat.

The results above lead us to the use of transformations which leave Cauchy-type integrals invariant and which coincide with the Lagrangian subjected to the optimization of the nonholonomic constraints (Compact Lecture Notes, 2000). One can study the stability of the classes of waves and oscillations which obey nonholonomic constraints. The character of these methods is to understand in detail their properties when suitable admissibility conditions are satisfied, as in the case of integral equations. This is all the more evident when the function contains continuous coefficients for the given data, with singularities integrable on the contour, left invariant by the transformations, and lying in a restricted class of functions.

Thus there arise problems in the solution of questions connected with nonholonomic deformations. In this case, to obtain the nonholonomic constraints, a domain of motion must be taken into account; that is, one must consider the dynamics of systems of material points or fluid particles, where one considers two arbitrary arcs composing the boundary of the domain of motion, whose points are regular.

These can also take place in the context of the Korteweg-de Vries equation. A model of corrosion and capillarity, and also of diffusion, is proposed; from this evidently follows the problem of compact constraints, of the nonholonomic connections referring to a circumference, in which one may speak of conservation laws and of a hierarchy of the various dynamical systems. In virtue of the above we can associate a complex potential, denoted phi, which satisfies boundary value problems of Riemann-Hilbert type whose solutions are the generating function of the conservation laws.


Theorem 15 (fundamental theorem of calculus):

Let f be continuous on [a, b]. If F is any antiderivative of f on [a, b], then

∫_a^b f(x) dx = F(b) - F(a).

[justifications of theorem 15:]

Since f is continuous on [a, b], the integral ∫_a^b f(x) dx exists. Let

a = x_0 < x_1 < … < x_n = b

be a partition of [a, b]. Then, by the mean value theorem,

F(x_i) - F(x_{i-1}) = f(c_i)(x_i - x_{i-1}), where c_i is in (x_{i-1}, x_i) and i = 1, …, n. Thus,

F(b) - F(a) = Σ [F(x_i) - F(x_{i-1})] = Σ f(c_i)(x_i - x_{i-1}) → ∫_a^b f(x) dx.

Example 3:

Calculate .

[solutions:]

Since the antiderivative of is , by the fundamental theorem of calculus,
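The integrand of Example 3 did not survive transcription; as an illustration of the theorem, assume f(x) = x^2 on [0, 1], whose antiderivative is F(x) = x^3/3, and compare a Riemann sum with F(1) - F(0):

```python
def riemann_sum(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x ** 2
F = lambda x: x ** 3 / 3        # an antiderivative of f

# Fundamental theorem of calculus: the integral equals F(1) - F(0) = 1/3
assert abs(riemann_sum(f, 0.0, 1.0) - (F(1.0) - F(0.0))) < 1e-9
print("Riemann sum matches F(1) - F(0) = 1/3")
```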

Here we see and consider some aspects, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, as follows:

CYLINDRICAL COORDINATES FOR LAGRANGIAN

ABSTRACT

Write the equation in cylindrical coordinates and then graph the equation.

C)

Write the equation in spherical coordinates and graph it.

D)

Use rectangular coordinates and a triple integral to find the volume of a right circular cone of height . Now repeat this using cylindrical coordinates. Which method is easier?

2. Now suppose an ice cream cone is bounded below by the same equation of the cone given in exercise 1 and bounded above by the sphere . Plot the ice cream cone using rectangular coordinates over the same intervals as exercise 1a. Find the volume of the ice cream cone using spherical coordinates.
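The specific cone equation in exercise 1 did not survive transcription; assuming a right circular cone of base radius a and height h (z running from 0 up to h(1 - r/a)), the cylindrical-coordinate integral can be checked numerically against the known formula V = (1/3)πa²h:

```python
from math import pi

# Assumed cone: base radius a, height h, surface z = h*(1 - r/a) in cylindrical
# coordinates. The volume element is r dz dr dtheta, so
#   V = int_0^{2pi} int_0^a int_0^{h(1 - r/a)} r dz dr dtheta
#     = 2*pi * int_0^a h*(1 - r/a)*r dr.
a, h = 2.0, 3.0
n = 200000
dr = a / n
V = 2 * pi * sum(h * (1 - r / a) * r * dr
                 for r in ((i + 0.5) * dr for i in range(n)))
assert abs(V - pi * a ** 2 * h / 3) < 1e-6    # agrees with V = (1/3) pi a^2 h
print("cone volume ~", V)
```

The z-integral is trivial in cylindrical coordinates, which is why this method is easier than the rectangular triple integral.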


Here we consider some aspects, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, as follows:

EQUIVALENCE FOR LAGRANGIAN AND DIFFERENTIALS

ABSTRACT

Spherical:

Transformations


Differential length vectors

Del Operator:

Green’s Theorem

Divergence Theorem

Stokes’ Theorem

Dielectric Material Properties:

Here we consider what appears, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, as follows:

LAGRANGIAN AND ELECTROMAGNETIC OPERATOR AND FORMULAS

ABSTRACT

Spherical:

Differential length vectors

Del Operator:

Green’s Theorem

Divergence Theorem

Stokes’ Theorem

Dielectric Material Properties:

Magnetic Material Properties:

Displacement Field:

Magnetic Field Intensity:

Electric Field for a point charge, q

Magnetic Field for a ‘point’ current, dI

Lorentz Force Equation:

Ohm’s Law:

Maxwell’s Equations Integral Form:


Maxwell’s Equations Point Form:

Boundary Conditions:

Electric and Magnetic Potentials:

Stored Energy:

Poynting Vector:

General Wave Equations:

Plane Wave:

Magnetic Vector Potential:

Hertzian Dipole Antenna:

Long Dipole Antenna: (Far field)

2 Antenna Array: (Form Factor)

2 Group Array: (Form Factor)

Index

Section 1. Basic concepts and basic mathematics

1.1. History

For now (until I can write my own version)

Copied from http://history.hyperjeff.net/electromagnetism.html

Sketches of a History of

Classical Electromagnetism

(Optics, Magnetism, Electricity, Electromagnetism)

Antiquity: Many things are known about optics: the rectilinearity of light rays; the law of reflection; transparency of materials; that rays passing obliquely from a less dense to a more dense medium are refracted toward the perpendicular of the interface; general laws for the relationship between the apparent location of an object in reflections and refractions; the existence of metal mirrors (glass mirrors being a 19th century invention).

ca 300 BC: Euclid of Alexandria (ca 325 BC – ca 265 BC) writes, among many other works, Optics, dealing with vision theory and perspective.

Convex lenses in existence at Carthage.

1st cent BC: Chinese fortune tellers begin using loadstone to construct their divining boards, eventually leading to the first compasses. (Mentioned in Wang Ch’ung’s Discourses weighed in the balance of 83 B.C.)

1st cent: South-pointing divining boards become common in China.

2nd cent: Claudius Ptolemy (ca 85 – ca 165) writes on optics, deriving the law of reflection from the assumption that light rays travel in straight lines (from the eyes), and tries to establish a quantitative law of refraction.

Hero of Alexandria writes on the topics of mirrors and light.

ca 271: True compasses come into use by this date in China.

6th cent: (China) Discovery that loadstones could be used to magnetize small iron needles.

11th cent: Abu Ali al-Hasan ibn al-Haitam (Alhazen) (965-1039) writes Kitab al-manazir (translated into Latin as Opticae thesaurus Alhazeni in 1270) on optics, dealing with reflection, refraction, lenses, parabolic and spherical mirrors, aberration and atmospheric refraction.

(China) Iron magnetized by heating it to red hot temperatures and cooling while in south-north orientation.

1086: Shen Kua’s Dream Pool Essays make the first reference to compasses used in navigation.

1155-1160: Earliest explicit reference to magnets per se, in Roman d’Enéas. (see reference)

1190-1199: Alexander Neckam’s De naturis rerum contains the first western reference to compasses used for navigation.

13th cent: Robert Grosseteste (1168-1253) writes De Iride and De Luce on optics and light, experimenting with both lenses and mirrors.

Roger Bacon (1214-1294) (student of Grosseteste) is the first to try to apply geometry to the study of optics. He also makes some brief notes on magnetism.

Pierre de Maricourt, a.k.a. Petri Pergrinus (fl. 1269) writes Letter on the magnet of Peter the Pilgrim of Maricourt to Sygerus of Foucaucourt, Soldier, the first western analysis of polar magnets and compasses. He also demonstrates in France the existence of two poles of a magnet by tracing the directions of a needle laid on to a natural magnet.

Witelo writes Perspectiva around 1270, treating geometric optics, including reflection and refraction. He also reproduces the data given by Ptolemy on optics, though was unable to generalize or extend the study.

Theodoric of Freiberg (d ca 1310), working with prisms and transparent crystalline spheres, formulates a sophisticated theory of refraction in raindrops which is close to the modern understanding, though it did not become very well known. (Descartes presents a nearly identical theory roughly 450 years later.)

Eyeglasses, convex lenses for the far-sighted, first invented in or near Florence (as early as the 1270s or as late as the late 1280s – concave lenses for the near-sighted appearing in the late 15th century).

Here we consider important aspects, as Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu say, as follows:

PARTIAL DIFFERENTIAL EQUATION FOR LAGRANGIAN

ABSTRACT


…implying that ‘time’ might be expected to exist, and might be expected to play a part in the equation.

However, if we consider any ordinary ‘time based’ experiment, e.g. assessing the dynamics of a stone falling from a tower, we invariably find that anyone conducting the experiment drops a stone and compares the stone’s motion to the motion of a steadily rotating hand on a numbered dial (or some other variant).

They will also typically refer to the position of the hand with the symbol ‘t’ and refer to it as ‘time’.

While it is extremely useful and logical to compare complex motion to an example of simple regular motion, the fact that a hand has been designed and manufactured to move steadily forwards, and forwards only, does not count as a proof that there is an extra ‘thing’ in the universe that also constantly moves steadily forwards.

Unless proven otherwise this is only an assumption: IF there is time, THEN a clock shows its passing, but just making a machine that displays motion and calling it a clock in no way proves the existence of time.

August 27, 1996

In July and the first part of August there was a workshop on Mathematical Problems of Quantum Gravity at the Erwin Schrödinger Institute in Vienna, run by Peter Aichelburg and Abhay Ashtekar. One of the goals of this workshop was to gather together people working on the loop representation of quantum gravity and have them tackle some of the big open problems in this subject. For some time now, the most important outstanding problem has been to formulate the Wheeler-DeWitt equation in a rigorous way by making the Hamiltonian constraint into a well-defined operator. Just before the workshop began, Thomas Thiemann put four papers aimed at solving this problem onto the preprint archive gr-qc [1]. This led to quite a bit of excitement as the workshop participants began working through the details. A personal account of the workshop as a whole can be found on my website [2]; here I wish only to give an introduction to Thiemann’s work. In the interests of brevity I will not attempt to credit the many people whose work I allude to.

An interesting feature of Thiemann’s approach is that while it uses the whole battery of new techniques developed in the loop representation of quantum gravity, in some respects it returns to earlier ideas from geometrodynamics. Recall that in geometrodynamics à la Wheeler and DeWitt, the basic canonically conjugate variables were the 3-metric and the extrinsic curvature. The idea was to quantize these, making them into operators acting on wavefunctions on the space of 3-metrics, and then to quantize the Hamiltonian and diffeomorphism constraints and seek wavefunctions annihilated by these quantized constraints. However, this program soon became regarded as dauntingly difficult for various reasons, one being the non-polynomial nature of the Hamiltonian constraint, which involves the scalar curvature of the 3-metric and inverse powers of the square root of its determinant. It is often difficult to quantize non-polynomial expressions in the canonically conjugate variables and their derivatives, and the offending factor is not even an entire function of the 3-metric!

In the 1980’s Ashtekar found a new formulation of general relativity in which the canonically conjugate variables are a densitized complex triad field E and a chiral spin connection A. When all the constraints are satisfied, these are related to the original geometrodynamical variables as follows: E encodes the 3-metric, while the real part of A is built from the Levi-Civita connection of the 3-metric and its imaginary part from the extrinsic curvature. In terms of these new variables the Hamiltonian constraint appears polynomial in form, reviving hopes for canonical quantum gravity.

Actually, in this formulation one works with the densitized Hamiltonian constraint H̃, given schematically by tr(F [E, E]), where F is the curvature of A, and the trace and commutator are interpreted by treating the internal labels as Lie algebra indices. Clearly H̃ is a polynomial in E, A, and their derivatives. However, it is related to the original Hamiltonian constraint H by H̃ = √(det q) H, so in a sense the original problem has been displaced rather than addressed. It took a while, but it was eventually seen that many of the problems with quantizing H̃ can be traced to this fact (or technically speaking, the fact that it has density weight 2).

A more immediately evident problem was that because A is complex-valued, the corresponding 3-metric is also complex-valued unless one imposes extra ‘reality conditions’. The reality conditions are easy to deal with in the Riemannian theory, where the signature of spacetime is taken to be ++++. There one can handle them by working with a real densitized triad field and an SU(2) connection given not by the above formula but by a purely real combination of the Levi-Civita connection and the extrinsic curvature.

In the physically important Lorentzian theory, however, no such easy remedy is available.

Despite these problems, the enthusiasm generated by the new variables led to a burst of work on canonical quantum gravity. Many new ideas were developed, most prominently the loop representation. In the Riemannian theory, this allows one to rigorously construct a Hilbert space of wavefunctions on the space of connections on space. The idea is to work with graphs embedded in space, and for each such graph to define a Hilbert space of wavefunctions depending only on the holonomies of the connection along the edges of the graph. Concretely, if the graph has n edges, the holonomies along its edges are summarized by a point in SU(2)^n, and the Hilbert space we get is L^2(SU(2)^n), defined using Haar measure. If one graph is contained in a larger graph, then its Hilbert space is contained in that of the larger graph, and these inclusions are consistent. We can thus form the union of all these Hilbert spaces and complete it to obtain the desired Hilbert space.

One can show that this Hilbert space has a basis of ‘spin networks’, given by graphs with edges labeled by representations of SU(2), i.e., spins, and vertices labeled by vectors in the tensor product of the representations labeling the incident edges. One can also rigorously quantize geometrically interesting observables such as the total volume of space, obtaining operators on this Hilbert space. The matrix elements of these operators can be explicitly computed in the spin network basis.

Thiemann’s approach applies this machinery developed for the Riemannian theory to Lorentzian gravity by exploiting the interplay between the Riemannian and Lorentzian theories. He takes as his canonically conjugate variables an SU(2) connection A and a real densitized triad field E, and takes as his Hilbert space the space defined above. This automatically deals with the reality conditions, as in the Riemannian case. Then he writes the Lorentzian Hamiltonian constraint in terms of these variables, and quantizes it to obtain a densely defined operator, modulo some subtleties we discuss below. Interestingly, it is crucial to his approach that he quantizes the Hamiltonian constraint H rather than the densitized Hamiltonian constraint H̃. This avoids the regularization problems that plagued attempts to quantize H̃.

He writes the Lorentzian Hamiltonian constraint in terms of A and E in a clever way, as follows. First he notes that the Lorentzian constraint differs from the Riemannian Hamiltonian constraint by a term built from the extrinsic curvature, where the commutators and traces involved are taken in su(2). Then he notes that the extrinsic curvature, and the co-triad itself, can be written as Poisson brackets involving V, the total volume of space (which is assumed compact). This observation lets him get rid of the terrifying factors of 1/√(det q).

Even if one is not troubled by how arbitrary choices of paths and loops prevent one from achieving a representation of the Dirac algebra, one may really be troubled by the assumption, built into the loop representation, that the basic fields are not well-defined operator-valued distributions. Ultimately, the validity of this assumption can only be known through its implications for physics. The great virtue of Thiemann’s work is that it brings us closer to figuring out these implications.

Lagrangian and Hamiltonian mechanics — A short introduction

4.1 Generalized momentum and Hamiltonian

In Hamiltonian mechanics we use generalized momentum in place of velocity as a coordinate. The generalized momentum is defined in terms of the Lagrangian and the coordinates (q, q̇):

p = ∂L/∂q̇. (21)

The Hamiltonian is defined in terms of the Lagrangian as follows:

H(p, q) = p q̇ - L(q, q̇) (22)

where q̇ in the above equation is replaced with a function of (p, q) by solving the defining equation (21) with respect to q̇. We note that this task may be rather hard if the dependence of L on q̇ is complicated. Fortunately, in most interesting situations, L is quadratic in q̇ (i.e. T, the kinetic energy, is a quadratic function of the velocity). Thus, the equation for q̇ in terms of (p, q) is linear.

4.2 Equations of motion

The Lagrangian equation of motion (5) becomes a pair of equations known as the Hamiltonian system of equations:

q̇ = ∂H/∂p,

ṗ = - ∂H/∂q. (23)

The second equation of this system is easy to explain. It is simply the Lagrangian equation (5), rewritten in terms of p.

The first equation can be explained by using duality. For the sake of this argument we need to assume that L(q, q̇) is strictly convex as a function of q̇. It is sufficient to assume that the Hessian

∂²L/∂q̇² (24)

is positive definite. We note that for Newtonian equations this matrix is m I. The function H in this case is the Legendre transform of L, i.e.

H(p, q) = sup_q̇ [p q̇ - L(q, q̇)] (25)

where the supremum is taken over all q̇. One can show that L is a Legendre transform of H as well, i.e.

L(q, q̇) = sup_p [p q̇ - H(p, q)]. (26)

In particular, the supremum in (26) is attained for

q̇ = ∂H/∂p.

Liouville’s theorem for non-Hamiltonian systems

The equations of motion of a system can be cast in the generic form

ẋ = ξ(x)

where, for a Hamiltonian system, the vector function ξ would be

ξ = (∂H/∂p, - ∂H/∂q)

and the incompressibility condition would be a condition on ξ:

∇·ξ = 0.

A non-Hamiltonian system, described by a general vector function ξ(x), will not, in general, satisfy the incompressibility condition. That is: ∇·ξ ≠ 0.

Non-Hamiltonian dynamical systems are often used to describe open systems, i.e., systems in contact with heat reservoirs or mechanical pistons or particle reservoirs. They are also often used to describe driven systems or systems in contact with external fields.

The fact that the compressibility does not vanish has interesting consequences for the structure of the phase space. The Jacobian, which satisfies

dJ/dt = κ(x_t) J

will no longer be 1 for all time. Defining the compressibility κ = ∇·ξ, the general solution for the Jacobian can be written as

J(x_0; t) = exp(∫_0^t κ(x_s) ds).

Note that J(x_0; 0) = 1 as before. Also, note that the exponent is an integral along a trajectory. Thus, assuming κ can be expressed as the total time derivative of some function, which we will denote W, i.e., κ = dW/dt, the Jacobian becomes

J(x_0; t) = exp(W(x_t) - W(x_0)).

Thus, the volume element in phase space now transforms according to

dx_t = J(x_0; t) dx_0 = e^{W(x_t) - W(x_0)} dx_0

which can be arranged to read as a conservation law:

e^{-W(x_t)} dx_t = e^{-W(x_0)} dx_0.

Thus, we have a conservation law for a modified volume element, involving a “metric factor” e^{-W(x)}. Introducing the suggestive notation √g = e^{-W}, the conservation law reads √g(x_t) dx_t = √g(x_0) dx_0. This is a generalized version of Liouville’s theorem. Furthermore, a generalized Liouville equation for non-Hamiltonian systems can be derived which incorporates this metric factor. The derivation is beyond the scope of this course; however, the result is

∂(f √g)/∂t + ∇·(f √g ξ) = 0.

We have called this equation the generalized Liouville equation. Finally, noting that √g = e^{-W} satisfies the equation conjugate to that of J, i.e.,

d√g/dt = - κ √g,

the presence of √g in the generalized Liouville equation can be eliminated, resulting in

∂f/∂t + ξ·∇f = 0, i.e. df/dt = 0,

which is the ordinary Liouville equation from before. Thus, we have derived a modified version of Liouville’s theorem and have shown that it leads to a conservation law for f equivalent to the Hamiltonian case. This, then, supports the generality of the Liouville equation for both Hamiltonian and non-Hamiltonian based ensembles, an important fact considering that this equation is the foundation of statistical mechanics.

2.1 The Lagrangian

In Lagrangian mechanics we start by writing down the Lagrangian of the system

L = T – U (1)

where T is the kinetic energy and U is the potential energy. Both are expressed in terms of the coordinates (q, q̇), where q is the position vector and q̇ is the velocity vector.

2.2 The Lagrangian of the pendulum

An example is the physical pendulum (see Figure 1).

Figure 1: The configuration space of the pendulum

The natural configuration space of the pendulum is the circle. The natural coordinate on the configuration space is the angle θ. If the mass of the ball is m and the length of the rod is l then we have

T = (1/2) m (l θ̇)^2 (2)

U = - m g l cos θ (3)

Thus, the Lagrangian in coordinates (θ, θ̇) is

L = (1/2) m (l θ̇)^2 + m g l cos θ. (4)

2.3 Equations of motion

In Lagrangian mechanics the equations of motion are written in the following universal form:

d/dt (∂L/∂q̇) = ∂L/∂q. (5)

2.4 Pendulum–Equations of motion

For example, for the pendulum we have:

∂L/∂θ̇ = m l^2 θ̇, (6)

∂L/∂θ = - m g l sin θ. (7)

Thus, the equations of motion are written as

d/dt (m l^2 θ̇) = - m g l sin θ. (8)

This equation can be written as the second-order equation

m l^2 θ̈ = - m g l sin θ (9)

or in the traditional way

θ̈ = - (g/l) sin θ. (10)
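Equation (10) can be integrated numerically; a sketch using semi-implicit Euler (step size and initial conditions are my choices), checking that the pendulum's energy stays nearly constant:

```python
from math import sin, cos

# Integrate equation (10), thetadd = -(g/l) sin(theta), with semi-implicit
# Euler (a symplectic method), and check that the pendulum's energy
# E = (1/2) m (l*omega)^2 - m g l cos(theta) stays nearly constant.
g, l, m = 9.8, 1.0, 1.0
theta, omega = 1.0, 0.0        # initial angle (radians) and angular velocity
dt = 1e-4

def energy(theta, omega):
    return 0.5 * m * (l * omega) ** 2 - m * g * l * cos(theta)

E0 = energy(theta, omega)
for _ in range(100000):        # 10 simulated seconds
    omega += -(g / l) * sin(theta) * dt
    theta += omega * dt
assert abs(energy(theta, omega) - E0) < 5e-3
print("energy drift after 10 s:", abs(energy(theta, omega) - E0))
```

Updating omega before theta makes the scheme symplectic, which is why the energy only oscillates instead of drifting.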

2.5 The meaning of dot

We should emphasize that q̇ has a dual meaning. It is both a coordinate and the derivative of the position. This traditional abuse of notation should be resolved in favor of one of these interpretations in every particular situation.

2.6 Lagrangian vs. Newtonian mechanics

In Newtonian mechanics we represent the equations of motion in the form of Newton’s second law:

m q̈ = f(q, t) (11)

where f(q, t) is the force applied to the particle.

This equation is identical to the equation obtained from the Lagrangian representation if f(q, t) is a conservative field, i.e. it has a potential. A potential is a function U(q, t) such that

f(q, t) = - ∂U/∂q. (12)

Indeed, the Lagrangian can be written as

L = (1/2) m q̇^2 - U(q, t). (13)

According to (5) the equations of motion reduce to (11).


Example 5(a): How many ways can you arrange 30 people on a ferris wheel with 30 seats?

Answer: 29!, since it is a circular permutation.

Example 5(b): How many ways can you arrange 5 people on a ferris wheel with 6 seats?

Answer: 5!, because it is a circular permutation of essentially 6 things (5 people and one empty seat), and there are (n-1)! circular permutations of n things. Here n = 6. If there were 4 people and 2 empty seats the answer would no longer be 5!, since the 4 people are different but the 2 empty seats are indistinguishable. For example, let a, b, c, and d be the 4 people and e1 and e2 be the empty seats. There is no real difference between a b c d e1 e2 and a b c d e2 e1, but if you gave the answer (6-1)! = 5! you would count these identical permutations twice. The answer for this situation would be 5!/2! = 5·4·3 = 60.
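Both ferris-wheel counts can be confirmed by brute force, identifying rotations by canonicalizing each arrangement to its least rotation (the function name is mine):

```python
from itertools import permutations

def circular_arrangements(items):
    """Count distinct circular seatings: canonicalize each permutation by
    its lexicographically least rotation, so rotations are identified."""
    seen = set()
    for p in permutations(items):
        seen.add(min(p[i:] + p[:i] for i in range(len(p))))
    return len(seen)

# 5 people on 6 seats: one empty seat, all 6 "things" distinct -> (6-1)! = 120
assert circular_arrangements(["a", "b", "c", "d", "e", "_"]) == 120
# 4 people and 2 indistinguishable empty seats -> 5!/2! = 60
assert circular_arrangements(["a", "b", "c", "d", "_", "_"]) == 60
print("both ferris wheel counts confirmed")
```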

Example 6: How many permutations are there of the word “repetition”?

Answer: 10!/(2! 2! 2!), from the formula for permutations of a multiset, n!/(n_1! n_2! ⋯ n_k!).

Notes on Combinations with Repetitions: Combinations with repetitions involve problems like:

How many ways can you place r identical items into n boxes?

The formula is: C(n+r-1, r). For example, how many ways can you put 4 identical balls into 2 boxes?

You could have 4 in the first box and 0 in the second.

3 1

2 2

1 3

0 4

Notice that since the balls are all identical we only care how many are in each box. So the answer here is 5.

The formula gives us 5 as an answer as well: r = 4, n = 2, and C(n+r-1, r) = C(4+2-1, 4) = C(5, 4) = 5!/(4!·1!) = 5.

Why is this formula correct? How is this a problem of choosing r things out of n+r-1 things? This is another example where you have to think of the problem in a different way, with pictures, in order to understand the solution.

Imagine that the 4 identical items are drawn as dots: · · · · . If we draw a vertical bar somewhere among these 4 dots, the picture can represent a unique assignment of the balls to the boxes. For example:

|···· = 0 balls in first box, 4 balls in second box

·|··· = 1 ball in first box, 3 balls in second box

··|·· = 2 balls in first box, 2 balls in second box

···|· = 3 balls in first box, 1 ball in second box

····| = 4 balls in first box, 0 balls in second box

Since there is a correspondence between dot/bar pictures and assignments of balls to boxes, we can count the dot/bar pictures to get an answer to our balls-in-boxes problem. (This works because the correspondence is one-to-one, but we will get into that later.)
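The correspondence can be checked mechanically. A sketch in plain Python: choose positions for the n−1 bars among the r+n−1 symbols, decode each dots-and-bars picture into box counts, and compare the total with C(n+r−1, r):

```python
from itertools import combinations
from math import comb

def ball_assignments(r, n):
    """Decode every dots-and-bars picture into a tuple of box counts."""
    results = []
    for bars in combinations(range(r + n - 1), n - 1):
        counts, prev = [], -1
        for b in bars:
            counts.append(b - prev - 1)  # dots between consecutive bars
            prev = b
        counts.append(r + n - 2 - prev)  # dots after the last bar
        results.append(tuple(counts))
    return results

assignments = ball_assignments(4, 2)
print(assignments)                             # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
print(len(assignments) == comb(4 + 2 - 1, 4))  # True
```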


Here we consider some aspects following the ideas of Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as follows:

BESSEL FUNCTIONS AND LAGRANGIAN

ABSTRACT

The coefficient of t^n is then

J_n(x) = \sum_{s=0}^{\infty} \frac{(-1)^s}{s!\,(n+s)!} \left(\frac{x}{2}\right)^{n+2s}. (6.5)

This series form exhibits the behavior of the Bessel function Jn for small x. The results for J0, J1, and J2 are shown in Fig.6.1. The Bessel functions oscillate but are not periodic.

Figure 6.1 Bessel functions J_0(x), J_1(x), and J_2(x).
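The series can be checked numerically. A minimal sketch in plain Python, truncating the sum (`scipy.special.jv` would serve as an independent reference, but it is not assumed here); the check uses J_0(0) = 1 and the standard three-term recurrence J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x):

```python
import math

def Jn(n, x, terms=40):
    """Truncated series J_n(x) = sum_s (-1)^s / (s! (n+s)!) * (x/2)^(n+2s)."""
    return sum((-1) ** s / (math.factorial(s) * math.factorial(n + s))
               * (x / 2) ** (n + 2 * s) for s in range(terms))

print(Jn(0, 0.0))  # J_0(0) = 1
x = 2.5
# three-term recurrence J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x), checked at n = 1
print(abs(Jn(0, x) + Jn(2, x) - (2 / x) * Jn(1, x)) < 1e-12)  # True
```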

Eq. (6.5) actually holds for n < 0 as well, giving

J_{-n}(x) = \sum_{s=0}^{\infty} \frac{(-1)^s}{s!\,(s-n)!} \left(\frac{x}{2}\right)^{2s-n}. (6.6)

Since the series for J_v(x) begins with the power x^v, J_v(0) = 0 for v > 0. Thus, on a finite interval [0, a], when \alpha_{vm} is the mth zero of J_v (i.e. J_v(\alpha_{vm}) = 0), we have

\int_0^a J_v(\alpha_{vm}\rho/a)\, J_v(\alpha_{vn}\rho/a)\, \rho\, d\rho = 0 \quad \text{if } m \neq n. (6.49)

This gives us orthogonality over the interval [0, a].

• Normalization

The normalization result may be written as

\int_0^a \left[J_v(\alpha_{vm}\rho/a)\right]^2 \rho\, d\rho = \frac{a^2}{2}\left[J_{v+1}(\alpha_{vm})\right]^2. (6.50)

• Bessel series

If we assume that the set of Bessel functions J_v(\alpha_{vm}\rho/a) (v fixed, m = 1, 2, …) is complete, then any well-behaved function f(\rho) may be expanded in a Bessel series

f(\rho) = \sum_{m=1}^{\infty} c_{vm} J_v(\alpha_{vm}\rho/a), \quad 0 \le \rho \le a. (6.51)

The coefficients c_{vm} are determined by using Eq. (6.50),

c_{vm} = \frac{2}{a^2 \left[J_{v+1}(\alpha_{vm})\right]^2} \int_0^a f(\rho)\, J_v(\alpha_{vm}\rho/a)\, \rho\, d\rho. (6.52)
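Both (6.49) and (6.50) can be spot-checked numerically for v = 0 on [0, 1]. A sketch in plain Python using a truncated series for J_0 and J_1, a trapezoid rule, and the first two zeros of J_0 (the numerical values of the zeros are well-known constants, not taken from the text above):

```python
import math

def bessel_j(n, x, terms=40):
    # truncated series J_n(x) = sum_s (-1)^s / (s! (n+s)!) * (x/2)^(n+2s)
    return sum((-1) ** s / (math.factorial(s) * math.factorial(n + s))
               * (x / 2) ** (n + 2 * s) for s in range(terms))

def trapezoid(f, a, b, n=4000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

a1, a2 = 2.404825557695773, 5.520078110286311  # first two zeros of J_0

orth = trapezoid(lambda r: bessel_j(0, a1 * r) * bessel_j(0, a2 * r) * r, 0.0, 1.0)
norm = trapezoid(lambda r: bessel_j(0, a1 * r) ** 2 * r, 0.0, 1.0)
print(abs(orth) < 1e-4)                               # orthogonality, Eq. (6.49)
print(abs(norm - 0.5 * bessel_j(1, a1) ** 2) < 1e-4)  # normalization, Eq. (6.50)
```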

• Continuum form

If a → ∞, then the series forms may be expected to go over into integrals. The discrete roots \alpha_{vm} become a continuous variable \alpha. A key relation is the Bessel function closure equation

\int_0^\infty J_v(\alpha\rho)\, J_v(\alpha'\rho)\, \rho\, d\rho = \frac{\delta(\alpha - \alpha')}{\alpha}. (6.59)

Figure 6.3 Neumann functions N_0(x), N_1(x), and N_2(x).

6.3 Neumann function, Bessel function of the second kind, N_v(x)

From the theory of differential equations it is known that Bessel's equation has two independent solutions. Indeed, for nonintegral order v we have already found two solutions and labeled them J_v(x) and J_{-v}(x), using the infinite series (Eq. (6.5)). The trouble is that when v is integral, Eq. (6.8) holds and we have but one independent solution. A second solution may be developed by the method of Section 3.6. This yields a perfectly good solution of Bessel's equation but is not the usual standard form.

Definition and series form

As an alte

3. SOLUTION

(Here the variable is being replaced by x for simplicity.) Often this is written as

I_v(x) = i^{-v} J_v(ix). (6.91)

Series form

In terms of infinite series this is equivalent to removing the (-1)^s sign in Eq. (6.5) and writing

I_v(x) = \sum_{s=0}^{\infty} \frac{1}{s!\,(s+v)!} \left(\frac{x}{2}\right)^{2s+v}. (6.92)

The extra normalization i^{-v} cancels the i^v from each term and leaves I_v(x) real. For integral v this yields

I_n(x) = I_{-n}(x). (6.93)

Recurrence relations

The recurrence relations satisfied by I_v(x) may be developed from the series expansions, but it is easier to work from the existing recurrence relations for J_v(x). Let us replace x by -ix and rewrite Eq. (6.90) as

J_v(x) = i^v I_v(-ix). (6.94)

Then Eq. (6.10) becomes

i^{v-1} I_{v-1}(-ix) + i^{v+1} I_{v+1}(-ix) = \frac{2v}{x}\, i^v I_v(-ix).

Replacing x by ix, we have a recurrence relation for I_v(x),

I_{v-1}(x) - I_{v+1}(x) = \frac{2v}{x} I_v(x). (6.95)

Equation (6.12) transforms to

I_{v-1}(x) + I_{v+1}(x) = 2 I_v'(x). (6.96)
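A quick numerical sanity check of (6.95) and (6.96), sketched in plain Python from the series (6.92) with v = 1 (the derivative in (6.96) is approximated by a central difference):

```python
import math

def bessel_i(v, x, terms=40):
    # truncated series I_v(x) = sum_s (x/2)^(2s+v) / (s! (s+v)!)
    return sum((x / 2) ** (2 * s + v) / (math.factorial(s) * math.factorial(s + v))
               for s in range(terms))

x, v, h = 1.7, 1, 1e-6
lhs = bessel_i(v - 1, x) - bessel_i(v + 1, x)
print(abs(lhs - (2 * v / x) * bessel_i(v, x)) < 1e-12)  # Eq. (6.95)

deriv = (bessel_i(v, x + h) - bessel_i(v, x - h)) / (2 * h)  # I_v'(x)
print(abs(bessel_i(v - 1, x) + bessel_i(v + 1, x) - 2 * deriv) < 1e-8)  # Eq. (6.96)
```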

From Eq. (6.93) it is seen that we have but one independent solution when v is an integer, exactly as for the Bessel functions J_n. The choice of a second, independent solution of Eq. (6.108) is essentially a matter of convenience.

We choose to define a second solution in terms of the Hankel function H_v^{(1)} by

K_v(x) = \frac{\pi}{2}\, i^{v+1} H_v^{(1)}(ix). (6.97)

The factor i^{v+1} makes K_v(x) real when x is real. Using Eqs. (6.60) and (6.90), we may transform Eq. (6.97) to

K_v(x) = \frac{\pi}{2}\, \frac{I_{-v}(x) - I_v(x)}{\sin v\pi}, (6.98)

analogous to Eq. (6.60) for N_v(x). The choice of Eq. (6.97) as a definition is somewhat unfortunate in that K_v(x) does not satisfy the same recurrence relations as I_v(x). To avoid this annoyance other authors have included an additional factor of \cos v\pi. This permits the modified function to satisfy the same recurrence relations as I_v(x), but it has the disadvantage of making the function vanish whenever \cos v\pi = 0, that is, at half-odd-integer v.

To put the modified Bessel functions I_v(x) and K_v(x) in proper perspective, we introduce them here because:

1. These functions are solutions of the frequently encountered modified Bessel equation.

2. They are needed for specific physical problems such as diffusion problems.

Figure 6.4 Modified Bessel functions

6.6 Asymptotic behavior

Frequently in physical problems there is a need to know how a given Bessel or modified Bessel function behaves for large values of the argument, that is, its asymptotic behavior. Using the method of steepest descent studied in Chapter 2, we are able to derive the asymptotic behavior of the Hankel functions (see page 450 in the textbook for details) and related functions:


We consider some further aspects following the ideas of Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as follows:

BESSEL FUNCTIONS AND LAGRANGIAN

ABSTRACT


Eq. (6.5) actually holds for n < 0 as well, giving

J_{-n}(x) = \sum_{s=0}^{\infty} \frac{(-1)^s}{s!\,(s-n)!} \left(\frac{x}{2}\right)^{2s-n}. (6.6)

Since the terms for s < n (corresponding to the negative integer argument of (s-n)!) vanish, the series can be considered to start with s = n. Replacing s by s + n, we obtain

J_{-n}(x) = \sum_{s=0}^{\infty} \frac{(-1)^{s+n}}{s!\,(s+n)!} \left(\frac{x}{2}\right)^{2s+n} = (-1)^n J_n(x). (6.7)



Here we see some important aspects, as presented by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, for the following:

LEBESGUE THEOREM AND INTEGRATION AS PRINCIPLES FOR LAGRANGIAN

ABSTRACT

First, the above are real in the case of discrete sets that contain isolated points.


These also appear on my blog, as presented by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, in connection with:

LEBESGUE INTEGRATION AND THEOREM FOR DISCRETE SETS

ABSTRACT

Of course, one wishes to find out the dependence of pressure on equilibrium temperature when two phases coexist.

Along a phase transition line, the pressure and temperature are not independent of each other, since the system is univariant; that is, only one intensive parameter can be varied independently.
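For context (a standard thermodynamic result, not taken from the text above): along the coexistence line, the slope is given by the Clausius–Clapeyron equation,

```latex
\frac{dP}{dT} = \frac{L}{T\,\Delta v},
```

where L is the latent heat of the transition and \Delta v the change in specific volume between the two phases.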


It is observed that here we must pause, as say Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, with the following:

LEBESGUE, CANTOR AND DEDEKIND METHODS FOR SUCTIONS

ABSTRACT

… I realise that in this undertaking I place myself in a certain opposition to views widely held concerning the mathematical infinite and to opinions frequently defended on the nature of numbers.

At the end of May 1884 Cantor had the first recorded attack of depression. He recovered after a few weeks but now seemed less confident. He wrote to Mittag-Leffler at the end of June [3]:-

… I don’t know when I shall return to the continuation of my scientific work. At the moment I can do absolutely nothing with it, and limit myself to the most necessary duty of my lectures; how much happier I would be to be scientifically active, if only I had the necessary mental freshness.

At one time it was thought that his depression was caused by mathematical worries and as a result of difficulties of his relationship with Kronecker in particular. Recently, however, a better understanding of mental illness has meant that we can now be certain that Cantor’s mathematical worries and his difficult relationships were greatly magnified by his depression but were not its cause (see for example [3] and [21]). After this mental illness of 1884 [3]:-

… he took a holiday in his favourite Harz mountains and for some reason decided to try to reconcile himself with Kronecker. Kronecker accepted the gesture, but it must have been difficult for both of them to forget their enmities and the philosophical disagreements between them remained unaffected.

1 INTEGRATION

While teaching there, Dedekind developed the idea that both rational and irrational numbers could form a continuum (with no gaps) of real numbers, provided that the real numbers have a one-to-one relationship with points on a line. He said that an irrational number would then be that boundary value that separates two especially constructed collections of rational numbers.

Dedekind perceived that the character of the continuum need not depend on the quantity of points on a line segment (or continuum) but rather on how the line submits to being divided. His method, now called the Dedekind cut, consisted in separating all the real numbers in a series into two parts such that each real number in one part is less than every real number in the other.
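A Dedekind cut can be made concrete. Below is a small sketch: the hypothetical predicate `in_lower_cut` (a name chosen for illustration) defines the cut for √2 using only rational arithmetic, and we check on a sample of rationals that everything in the lower part lies below everything outside it:

```python
from fractions import Fraction

def in_lower_cut(q):
    """Lower part of the cut defining sqrt(2): negatives plus rationals with q^2 < 2."""
    return q < 0 or q * q < 2

# every sampled rational in the lower part is less than every one outside it
samples = [Fraction(n, d) for n in range(-8, 9) for d in range(1, 6)]
lower = [q for q in samples if in_lower_cut(q)]
upper = [q for q in samples if not in_lower_cut(q)]
print(max(lower) < min(upper))  # True: the cut separates the sample cleanly
```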


We continue the above discussion, following Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, concerning:

DEDEKIND CONCEPTS AND CANTOR

ABSTRACT

Though his ideas were not supported much in his homeland, Cantor managed to gain interest from the mathematical world internationally. When his notion of set theory was recognized worldwide, new areas such as topology and measure theory were explored. These developments showed Cantor's work to be of great importance. After one of his many stays locked in a hospital for the mentally ill, Cantor wrote a dialogue between a master and his pupil in which the master argues that Joseph of Arimathea had fathered Christ.

To his death Cantor remained a devout Christian, having been raised by a Lutheran father who had ingrained in his son two things above all others: rely on God, and be successful.


There are more situations connected with further aspects, as noted by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as follows:

LEBESGUE INTEGRATION AND EXAMPLES, FUBINI THEOREM FOR DOUBLE INTEGRALS. ZAMM, ZAMP

ABSTRACT

So, there are a couple of things to note here:

1. The Riemann integral involves partitioning the interval to be integrated over without any regard to the function being integrated; that is, whether you were integrating f or some entirely different function g, you wouldn't partition the interval any differently.

2. The elements of any partition of the Riemann integral are intervals of finite length.

The Lebesgue integral changes these two features:

1. We’ll use information about the function being integrated to help us select partitions and

2. The elements of our partition need not be intervals of finite length; they just need to be measurable sets.
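The contrast can be illustrated with a toy computation. The sketch below partitions the range of f (not its domain) into horizontal bands, weights each band's lower value by the approximate measure of its preimage, and recovers ∫₀¹ x² dx ≈ 1/3. Here the "measure" of each preimage is estimated by uniform sampling, a crude stand-in for true Lebesgue measure:

```python
def lebesgue_sum(f, a, b, levels=100, samples=10000):
    """Lower Lebesgue-style sum: partition the range of f into bands."""
    h = (b - a) / samples
    vals = [f(a + (i + 0.5) * h) for i in range(samples)]
    lo, hi = min(vals), max(vals)
    band = (hi - lo) / levels
    total = 0.0
    for k in range(levels):
        y0 = lo + k * band
        y1 = hi if k == levels - 1 else lo + (k + 1) * band
        # approximate measure of {x : y0 <= f(x) < y1} (top band closed above)
        measure = h * sum(1 for v in vals
                          if y0 <= v < y1 or (k == levels - 1 and v == hi))
        total += y0 * measure
    return total

print(lebesgue_sum(lambda x: x * x, 0.0, 1.0))  # close to 1/3
```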

1 INTRODUCTION

One of the most fundamental concepts in Euclidean geometry is that of the measure of a solid body in one or more dimensions. In one, two, and three dimensions, we refer to this measure as the length, area, or volume of the body, respectively. In the classical approach to geometry, the measure of a body was often computed by partitioning that body into finitely many components, moving around each component by a rigid motion (e.g. a translation or rotation), and then reassembling those components to form a simpler body which presumably has the same area. One could also obtain lower and upper bounds on the measure of a body by computing the measure of some inscribed or circumscribed body; this ancient idea goes all the way back to the work of Archimedes at least. Such arguments can be justified by an appeal to geometric intuition, or simply by postulating the existence of a measure that can be assigned to all solid bodies, and which obeys a collection of geometrically reasonable axioms. One can also justify the concept of measure on "physical" or "reductionistic" grounds, viewing the measure of a macroscopic body as the sum of the measures of its microscopic components.

With the advent of analytic geometry, however, Euclidean geometry became reinterpreted as the study of Cartesian products R^d of the real line R. Using this analytic foundation rather than the classical geometrical one, it was no longer intuitively obvious how to define the measure of a general subset of R^d; we will refer to this (somewhat vaguely defined) problem of writing down the "correct" definition of measure as the problem of measure. (One can also pose the problem of measure on other domains than Euclidean space, such as a Riemannian manifold, but we will focus on the Euclidean case here for simplicity.)

To see why this problem exists at all, let us try to formalise some of the intuition for measure discussed earlier. The physical intuition of defining the measure of a body to be the sum of the measure of its component "atoms" runs into an immediate problem: a typical solid body would consist of an infinite (and uncountable) number of points, each of which has a measure of zero; and the product 0 · ∞ is indeterminate. To make matters worse, two bodies that have exactly the same number of points need not have the same measure. For instance, in one dimension, the intervals [0, 1] and [0, 2] are in one-to-one correspondence (using the bijection x ↦ 2x from [0, 1] to [0, 2]), but of course [0, 2] is twice as long as [0, 1]. So one can disassemble [0, 1] into an uncountable number of points and reassemble them to form a set of twice the length.

Of course, one can point to the infinite (and uncountable) number of components in this disassembly as being the cause of this breakdown of intuition, and restrict attention to just finite partitions. But one still runs into trouble here for a number of reasons, the most striking of which is the Banach-Tarski paradox, which shows that the unit ball in three dimensions can be disassembled into a finite number of pieces (in fact, just five pieces suffice), which can then be reassembled (after translating and rotating each of the pieces) to form two disjoint copies of the ball . (The paradox only works in three dimensions and higher, for reasons having to do with the property of amenability; see this blog post for further discussion of this interesting topic, which is unfortunately too much of a digression from the current subject.)

Here, the problem is that the pieces used in this decomposition are highly pathological in nature; among other things, their construction requires use of the axiom of choice. (This is in fact necessary; there are models of set theory without the axiom of choice in which the Banach-Tarski paradox does not occur, thanks to a famous theorem of Solovay.) Such pathological sets almost never come up in practical applications of mathematics. Because of this, the standard solution to the problem of measure has been to abandon the goal of measuring every subset of R^d, and instead to settle for only measuring a certain subclass of "non-pathological" subsets of R^d, which are then referred to as the measurable sets. The problem of measure then divides into several subproblems:

These topics are not taught at the Faculty of Mathematics in Bucharest, because there are no teachers for them; I learned them at Duke University and elsewhere.


In the following we consider where one can start, as noted by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, with:

DEDEKIND AND CONVERGENCE

ABSTRACT

Average value of a function

To find the average value of a function of two variables, let's start by looking at the average value of a function of one variable. Note that, over the interval [a, b], the integral \int_a^b f(x)\,dx gives the total area of the region under the curve. We could also get the total area of the region by treating the region as a rectangle of length b - a and height equal to the average value \bar{f} of the function.

Thus,

\int_a^b f(x)\,dx = (b - a)\cdot \bar{f}.

Rearranging this formula, we see that

\bar{f} = \frac{1}{b - a}\int_a^b f(x)\,dx.

We can perform a similar "trick" for functions of two variables. The volume under f is given by \iint_R f(x,y)\,dA. But we could treat this volume as a solid whose cross section is shaped like R and whose height is the average value of f over the region R:

\bar{f} = \frac{1}{\text{Area}(R)}\iint_R f(x,y)\,dA.

To envision this, think of building the volume under z=f(x,y) as a solid mass of wax. Trap the wax inside a tube whose cross-section looks like R. As the wax melts, it will eventually form a solid whose height is equal to the average value of f on the region R.


How do we calculate the area of a region? The area of a region R will be the same as the integral of a uniform density of 1 over the region. Thus,

\text{Area}(R) = \iint_R 1\,dA.

Volume under a surface

Just like the integral \int_a^b f(x)\,dx gives the area between y = 0 and y = f(x) from x = a to x = b, the integral \iint_R f(x,y)\,dA gives the total volume of the solid which lies between z = 0 and z = f(x,y) with a cross section shaped like R.
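The average-value formula for two variables can be illustrated numerically. A sketch with a midpoint-rule double sum over a rectangle; for f(x, y) = xy on [0, 1] × [0, 1], the average is 1/4:

```python
def average_value(f, a, b, c, d, n=200):
    """Midpoint-rule estimate of the average of f over the rectangle [a,b] x [c,d]."""
    hx, hy = (b - a) / n, (d - c) / n
    total = sum(f(a + (i + 0.5) * hx, c + (j + 0.5) * hy)
                for i in range(n) for j in range(n))
    return total * hx * hy / ((b - a) * (d - c))

avg = average_value(lambda x, y: x * y, 0, 1, 0, 1)
print(avg)  # 0.25 up to rounding: the average of xy over the unit square is 1/4
```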


Here we go on with something important, as noted by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, with the following:

LAGRANGIAN AND CONTINUITY. LEBESGUE

ABSTRACT

As indicated, set-theoretic assumptions and procedures already inform Dedekind’s Stetigkeit und irrationale Zahlen. In particular, the system of rational numbers is assumed to be composed of an infinite set; the collection of arbitrary cuts of rational numbers is treated as another infinite set; and when supplied with an order relation and arithmetic operations on its elements, the latter gives rise to a new number system. Parallel moves can be found in the sketches, from Dedekind’s Nachlass, of how to introduce the integers and the rational numbers. Once more we start with an infinite system, here that of all the natural numbers, and new number systems are constructed out of it set-theoretically (although the full power set is not needed in those cases).


We recall that more elaborate works are very important, as observed by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, as in the case of:

FERMAT AND DARBOUX THEOREM. LAGRANGIAN

ABSTRACT

The limit comparison test

(1) The equation x² – 8y² = 16 represents which conic section?

(a) circle (b) parabola (c) hyperbola (d) ellipse

(2) The equation x² + y² = 8 represents a circle with a radius of:

(a) 4 (b) 8 (c) 2√2 (d) 2

(3) If (x – 3)² + (y + 5)² = 9 is the equation of a circle, the coordinates of the center and the length of the radius are:

(a) (3, -5); 9 (b) (-3, 5); 9 (c) (-3, 5); 3 (d) (3, -5); 3

(4) For the graph of which equation is x = 2 an equation of the axis of symmetry?

(a) x² – 4x – 6 = y

(b) 3x² + 6x – 8 = y

(c) x² + 2x – 3 = y

(d) 4x² – 2x + 10 = y

(5) Which equation represents a circle with a center at (7, 0) and radius of 4?

(a) (x – 7)² + y² = 16

(b) x² + (y – 7)² = 2

(c) (x – 7)² + y² = 4

(d) x² + (y – 7)² = 8
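Question (4) can be verified directly: for y = ax² + bx + c the axis of symmetry is x = −b/(2a). A quick check over the four answer choices (coefficient pairs transcribed from the options above):

```python
def axis_of_symmetry(a, b):
    # vertex x-coordinate of the parabola y = a x^2 + b x + c
    return -b / (2 * a)

choices = {"(a)": (1, -4), "(b)": (3, 6), "(c)": (1, 2), "(d)": (4, -2)}
print([name for name, (a, b) in choices.items() if axis_of_symmetry(a, b) == 2])
# ['(a)']
```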


Most aspects appear in the case of documented evidence, such as the Coriolis force and central forces; as noted by Prof. Dr. Mircea Orasanu and Prof. Horia Orasanu, these go as follows:

LEBESGUE PROPERTIES, JORDAN MEASURE AND LAGRANGIAN

ABSTRACT

The teaching and learning of mathematics at all levels is naturally closely related to the assessment of student achievement. Nevertheless, assessing mathematical modelling is not easy to accomplish. The more complicated and open a problem is, the more complicated it is to assess the solution, and if one adds the component of available technology, assessment becomes even more complicated. Similar problems are inherent at the course level for the evaluation of programmes with application and modelling components.

Issue 8a:

There seem to be many indications that the assessment modes traditionally used in mathematics education are not fully appropriate to assess students’ modelling competency.

1 . INTRODUCTION

Many technological devices are available today, and many of them are highly relevant for applications and modelling. In a broad sense these technologies include calculators, computers, the Internet, and all computational or graphical software, as well as all kinds of instruments for measuring, for performing experiments, and for solving all kinds of daily-life problems. These devices provide not only increased computational power but broaden the range of possible approaches to teaching, learning and assessment. Moreover, the use of technology is in itself a key competence in today's society. On the other hand, the use of calculators and computers may also bring inherent problems and risks.

– In what way do usual evaluation procedures for mathematical programmes carry over to programmes that combine mathematics with applications and modelling?

– What counts as success when evaluating outcomes from a modelling programme? For example, what do biologists, economists, industrial and financial planners, medical practitioners, etc., look for in a student's mathematical modelling abilities? How does one establish whether a student has achieved these capabilities?


In more cases important treasures are possible, as for fundamental problems; as noted by Prof. Dr. Mircea Orasanu and Prof. Drd. Horia Orasanu, more follows:

MEASURE JORDAN AND LEBESGUE CONCERN LAGRANGIAN

ABSTRACT

Dedekind also replaced Kummer's "ideal numbers" by his "ideals" – set-theoretically constructed objects intended to play the same role with respect to unique factorization. Dedekindian ideals are infinite subsets of the number fields in question, or of the ring of integers contained in them, which again gives his approach an infinitary character. (An ideal I in a ring R is a subset such that the sum and difference of any two elements of I and the product of any element of I with any element of R are also in I.) This led him to introduce other fruitful notions, such as that of a "module". A particular problem Dedekind struggled with for quite a while, in the sense of finding a fully satisfactory solution, was to specify a suitable version not only of the notion of "integer", but also of "prime number". Both Kronecker, with his divisor theory, and Dedekind, with his ideal theory, were able to obtain immediate results. Each theory had a strong influence on later developments – Dedekind's by shaping the approaches to modern algebra (field theory, ring theory, etc.) of Hilbert, Emmy Noether, and B.L. van der Waerden.

1. INTRODUCTION

Not infrequently this opposition is debated in terms of which approach is “the right one”, with the implication that only one of them but not the other is legitimate. (This started with Dedekind and Kronecker. We already noted that Dedekind explicitly rejected constructivist strictures on his work, although he did not rule out corresponding projects as illegitimate in themselves. Kronecker, with his strong opposition to the use of set-theoretic and infinitary techniques, went further in the other direction.) But another question is more basic: Can the contrast between the two approaches to algebraic number theory, or to mathematics more generally, be captured more sharply and revealingly; and in particular, what is its epistemological significance?

An initial, rough answer to that question is already contained in our discussion so far: Dedekind’s approach is set-theoretic and infinitary, while Kronecker’s is constructivist and finitary. However, that leaves us with a deeper problem: What exactly is it that a set-theoretic and infinitary methodology allows us to accomplish that Kronecker’s doesn’t, and vice versa? The specific strength of a Kroneckerian approach is often summed up thus: It provides us with computational, algorithmic information (Edwards 1980, 1990, Avigad 2008). But what is the characteristic virtue of a Dedekindian approach? Here an articulate answer is harder to find, especially one that is philosophically satisfying.


In these cases we obviously consider some ideas, as observed by Prof. Dr. Mircea Orasanu and Prof. Drd. Horia Orasanu, as follows:

PROBLEM OF CLAIRAUT AND LAGRANGE, CANTOR

ABSTRACT

Euler rejected the concept of the infinitesimal in its sense as a quantity less than any assignable magnitude and yet unequal to 0, arguing that differentials must be zeros, and dy/dx the quotient 0/0. Since for any number α, α · 0 = 0, Euler maintained that the quotient 0/0 could represent any number whatsoever [24]. For Euler qua formalist, the calculus was essentially a procedure for determining the value of the expression 0/0 in the manifold situations in which it arises as the ratio of evanescent increments. Philosophically, Euler was a thoroughgoing synechist. Rejecting Leibnizian monadism, he favoured the Cartesian doctrine that the universe is filled with a continuous ethereal fluid, and upheld the wave theory of light over the corpuscular theory propounded by Newton.

1 INTRODUCTION

The leading practitioner of the calculus, indeed the leading mathematician of the 18th century, was Leonhard Euler [23] (1707–83).

But in the mathematical analysis of natural phenomena, Euler, along with a number of his contemporaries, did employ what amount to infinitesimals in the form of minute, but more or less concrete “elements” of continua, treating them not as atoms or monads in the strict sense—as parts of a continuum they must of necessity be divisible—but as being of sufficient minuteness to preserve their rectilinear shape under infinitesimal flow, yet allowing their volume to undergo infinitesimal change. This idea was to become fundamental in continuum mechanics.

While Euler treated infinitesimals as formal zeros, that is, as fixed quantities, his contemporary Jean le Rond d'Alembert (1717–83) took a different view of the matter. Following Newton's lead, he conceived of infinitesimals or differentials in terms of the limit concept, which he formulated by the assertion that one varying quantity is the limit of another if the second can approach the other more closely than by any given quantity. D'Alembert firmly rejected the idea of infinitesimals as fixed quantities, and saw the idea of limit as supplying the methodological root of the differential calculus. Philosophers, however, were not fettered by such constraints.

The philosopher George Berkeley (1685–1753), noted both for his subjective idealist doctrine of esse est percipi and his denial of general ideas, was a persistent critic of the presuppositions underlying the mathematical practice of his day (see Jesseph [1993]). His most celebrated broadsides were directed at the calculus, but in fact his conflict with the mathematicians went deeper. But at the Faculty of Mathematics in Bucharest these are unknown, and the works of Lazar Dragos and Ene Horia are wrong, as are all works edited at the Faculty of Mathematics in Bucharest.


There are some especially important points, as observed by Prof. Dr. Mircea Orasanu and Prof. Drd. Mircea Orasanu, as follows concerning:

DEDEKIND AND CANTOR FOR LEIBNITZ

ABSTRACT

of a single uncomputable real number.

This is modern pure mathematics going beyond parody. Future generations are going to shake their heads in disbelief that we happily swallowed this kind of thing without even a trace of resistance, or at least disbelief.

But this is 2016, and the start of a New Year! I hope you will join me in an exciting venture to expose some of the many logical blemishes of modern pure mathematics, and to propose some much better alternatives — theories that actually make sense. Tell your friends, spread the word, and let’s not be afraid of thinking differently. Happy New Year.


1 INTRODUCTION

L2 is the second order language whose only non-logical constant is a unary functional constant. L2 contains individual, set, n-ary relational (n >= 2) and n-ary functional (n >= 1) variables. A mono-unary algebra is an algebra of the form B = (B,hB) where B is a non-empty set and hB is a unary function on B. Each Dedekind algebra is a mono-unary algebra. Each mono-unary algebra is interpreted in L2 in the “standard” way. The (second order) theory of a mono-unary algebra is the set of all sentences in L2 true on the algebra. Two mono-unary algebras are (second order) equivalent iff they have the same second order theory. A familiar application of the axiom of substitution shows that there are Dedekind algebras which are equivalent but of different cardinalities (hence, non-isomorphic).

A mono-unary algebra is (second order) characterizable provided there is a subset of the theory of the algebra all of whose models are isomorphic. Such an algebra is (second order) finitely characterizable provided there is a finite subset of the theory of the algebra all of whose models are isomorphic. Since there are Dedekind algebras of different cardinalities which are equivalent, there are Dedekind algebras which are not characterizable. It follows from Theorem 1 that there are also Dedekind algebras of the same cardinality which are equivalent but not isomorphic. For each cardinal , let + be the cardinal successor of . Let be a cardinal greater than or equal to (20)+. The number of isomorphism types of Dedekind algebras of cardinality is strictly greater than 20 by Theorem 1. Since L2 is countable, there are Dedekind algebras of cardinality which are equivalent but not isomorphic.

From the perspective of Theorem 1, the problem of characterizing a Dedekind algebra is that of fixing its configuration signature in L2, in the sense of finding a set of sentences of L2 all of whose models are Dedekind algebras with the given configuration signature. As configuration signatures are cardinal-valued functions, it is natural to consider which cardinals are characterizable in L2. The cardinal κ is (second-order) characterizable provided there is a sentence φ of L2 such that a mono-unary algebra is a model of φ iff the domain of the algebra is of cardinality κ. All non-zero finite cardinals are characterizable, as is the cardinal ℵ₀. Further, if the cardinal κ is characterizable, then κ⁺ and 2^κ are also. Notice that the cardinal zero is not characterizable, since the domain of a mono-unary algebra is non-empty. The configuration signature of a Dedekind algebra B is said to be (second-order) describable provided, for each n in ω, the value of the signature at n is either zero


Now further aspects appear here in connection with fundamental problems, as observed by prof. dr. Mircea Orasanu and prof. drd. Horia Orasanu, and followed in more detail below.

IMPORTANT DEDEKIND PROPERTIES, LEBESGUE AND CALCULUS

ABSTRACT

Finally, two further remarks. (i) Concerning the role of the term e^{imt} in the derivation of the Schrödinger equation from the Klein–Gordon equation, the author of the question should give more details about which reference he is using to study this. Nevertheless, I guess it must come from the expression of the time-evolution operator using the relativistic Hamiltonian, U(t) = e^{i√(p² + m²) t}. In this case, the non-relativistic limit is ∝ e^{imt + i p² t / 2m}, and so the first term can be ignored, since it only gives an overall phase to the quantum state. (ii) As
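The non-relativistic limit above rests on truncating √(p² + m²) ≈ m + p²/2m for p ≪ m. A quick stdlib-only numeric check (illustrative, not from the original answer) confirms the truncation and shows that its error falls off like p⁴/(8m³):

```python
# Check sqrt(p^2 + m^2) ~ m + p^2/(2m) for p << m. The leading correction is
# -p^4/(8 m^3), which justifies splitting the evolution phase into the overall
# e^{imt} factor plus the non-relativistic e^{i p^2 t / 2m} piece.
import math

m = 1.0
for p in (0.4, 0.2, 0.1):
    exact = math.sqrt(p * p + m * m)
    approx = m + p * p / (2 * m)
    err = exact - approx
    predicted = -p**4 / (8 * m**3)
    print(f"p={p}: error={err:.2e}, predicted {predicted:.2e}")
```

Halving p shrinks the error by roughly 16x, as the p⁴ scaling predicts.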


Now, in more cases, Dedekind results can be combined with others; as observed by prof. dr. Mircea Orasanu and prof. drd. Horia Orasanu, new results can be followed, so that:

DEDEKIND PRESENCE IN CONFORMAL TRANSFORMS

ABSTRACT

The final columns of Tables 1 and 2 show the posterior probabilities that ρ is less than or equal to the given values of K. These have been estimated as the corresponding proportions of simulated values ρ_k in N = 10,000 simulations.

The results emphasise the difference between inference about the C/E ratio ρ and the C/E acceptability curve. Looking at Table 2, where the proportion of patients discharged is the outcome measure, we see how the probability that anakinra is acceptable for a given threshold cost K is quite different from the probability that ρ is less than or equal to K. Suppose that the actual value of K for a decision maker (such as a health care provider) is 95,000 Dfl per patient surviving and discharged at 28 days. The probability that the incremental cost-effectiveness ratio is less than or equal to 95,000 is 0.9, on which basis it might seem that anakinra is preferred to placebo. However, the C/E acceptability curve shows that the probability that it is acceptable (i.e. preferred to placebo) is only 0.74.

In fact, no matter what value of K is appropriate, the probability of acceptability on the discharge outcome never exceeds 0.812. In this example, the maximal probability of acceptability occurs as K tends to infinity, and is therefore the probability that anakinra is more effective (in terms of proportion of patients discharged at 28 days) than placebo, regardless of cost. However, this depends on the sample data, and in other instances the maximal point on the C/E acceptability curve may occur at K = 0 (corresponding to the probability that treatment 2 is less costly than treatment 1) or at some intermediate value.
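The gap between the two probabilities can be reproduced in a small Monte Carlo sketch. The numbers below are synthetic stand-ins, not the anakinra trial data; the point is only that when the incremental effect can be negative, the event ρ = Δc/Δe ≤ K no longer coincides with the acceptability event K·Δe − Δc > 0.

```python
# Draw simulated (incremental cost dc, incremental effect de) pairs and compare
# P(rho <= K) with the acceptability probability P(K*de - dc > 0). When de < 0
# and dc > 0, rho is negative (so rho <= K holds) yet the treatment is NOT
# acceptable, which is exactly why the two summaries disagree.
import random

random.seed(1)
N = 10_000
draws = [(random.gauss(50_000, 40_000),   # synthetic incremental cost dc
          random.gauss(0.6, 0.5))          # synthetic incremental effect de
         for _ in range(N)]

K = 95_000
p_ratio = sum((dc / de) <= K for dc, de in draws if de != 0) / N
p_accept = sum(K * de - dc > 0 for dc, de in draws) / N
print(f"P(rho <= K) = {p_ratio:.3f},  P(acceptable) = {p_accept:.3f}")
```

With these synthetic parameters the ratio-based probability exceeds the acceptability probability, mirroring the 0.9 versus 0.74 pattern reported for the discharge outcome.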

Willan and O’Brien [6], using the Fieller interval, found a 95% confidence interval from 108,280 to 55,856 for the survival outcome. The corresponding Bayesian 95% interval is from -94,830 to 84,774.

1 INTRODUCTION

The Bayesian approach to inference about ρ is more robust than either the Fieller or Bonferroni intervals, in the sense that it never produces ‘intervals’ covering the whole line. The intervals suggested here are in fact always simple finite intervals, although they are not necessarily the narrowest that could be calculated. Highest posterior density intervals [21] would be narrower, and since the posterior distribution of ρ can be bimodal, they can then take the form of the union of two disjoint, but finite, intervals.

The bootstrap method also produces a sample of simulated values of ρ and constructs an approximate confidence interval by ordering them in the same way as proposed above. Superficially, therefore, it appears comparable with the Bayesian solution. However, it should be remembered that, given a sufficiently large simulation size N, the Bayesian solution is an interval with exactly the stated probability of containing ρ. The bootstrap interval has only approximately the stated confidence no matter how large N is; it becomes exact only as the original sample size n becomes very large.
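The interval construction by ordering simulated values can be sketched as follows (synthetic draws, not the trial data): an equal-tailed 95% interval simply takes the 2.5% and 97.5% order statistics of the simulated sample.

```python
# Equal-tailed 95% interval from N ordered simulated values of rho: take the
# 2.5% and 97.5% order statistics. The draws here are synthetic placeholders.
import random

random.seed(2)
N = 10_000
rho = sorted(random.gauss(20_000, 30_000) for _ in range(N))

lo = rho[int(0.025 * N)]       # 2.5% order statistic
hi = rho[int(0.975 * N) - 1]   # 97.5% order statistic
print(f"95% interval: ({lo:.0f}, {hi:.0f})")
```

The same ordering step underlies both the Bayesian and bootstrap intervals discussed above; the difference lies in what the simulated values represent (posterior draws versus resampled estimates), not in the arithmetic.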
