A Django application is really just a Python package with a few conventionally named modules. Most apps will not need all of the modules described below, but it’s important to follow the naming conventions and code organization because it will make your application easier to use. Following these conventions gives you a common model for understanding and building the various pieces of a Django application. It also makes it possible for others who share the same common model to quickly understand your code, or at least have an idea of where certain parts of code are located and how everything fits together. This is especially important for reusable applications. For examples, I highly recommend browsing through the code of applications in django.contrib, as they all (mostly) follow the same conventional code organization.

models.py

models.py is the only module that’s required by Django, even if you don’t have any code in it. But chances are that you’ll have at least one database model, signal handler, or perhaps an API connection object. models.py is the best place to put these because it is the one app module that is guaranteed to be imported early. This also makes it a good location for connection objects to NoSQL databases such as Redis or MongoDB. Generally, any code that deals with data access or storage should go in models.py, except for simple lookups and queries.

managers.py

Model managers are sometimes placed in a separate managers.py module. This is optional, and often overkill, as it usually makes more sense to define custom managers in models.py. However, if there’s a lot going on in your custom manager, or if you have a ton of models, it might make sense to separate the manager classes for clarity’s sake.

admin.py

To make your models viewable within Django’s admin site, create an admin.py module with ModelAdmin objects for each necessary model. These can then be autodiscovered if you use the admin.autodiscover() call in your top level urls.py.
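The custom-manager idea above can be sketched without Django at all. The sketch below is a framework-free illustration of why you might split query logic into its own class: `ArticleManager` and the list-of-dicts "table" are stand-ins for a real `Manager` subclass and a database, not Django's API.

```python
# Framework-free sketch of the managers.py idea: query logic that would
# normally live on a Django Manager subclass is kept in its own class,
# separate from the model definition.  Names here are illustrative.

class ArticleManager:
    """Stand-in for a custom Manager: encapsulates common queries."""

    def __init__(self, rows):
        self._rows = rows  # stand-in for the database table

    def published(self):
        # The equivalent Django manager method would return a filtered
        # QuerySet; here we just filter the in-memory rows.
        return [row for row in self._rows if row["published"]]


rows = [
    {"title": "First", "published": True},
    {"title": "Draft", "published": False},
]
manager = ArticleManager(rows)
print([row["title"] for row in manager.published()])  # -> ['First']
```

The point of the separation is the same as in a real app: views and models call `manager.published()` instead of repeating the filter everywhere.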
views.py

View functions (or classes) have three responsibilities: handling the incoming request, rendering a template, and returning a response. If a view function is doing anything else, then you’re doing it wrong. There are many things that fall under request handling, such as session management and authentication, but any code that does not directly use the request object, or that will not be used to render a template, does not belong here. One valid exception is sending signals, but I’d argue that a form or models.py is a better location. View functions should be short and simple, and any data access should be primarily read-only. Code that updates data in a database should be either in models.py or in the save() method of a form. Keep your view functions short and simple – this will make it clear how a specific request will produce a corresponding response, and where potential bottlenecks are. Speed has business value, and the easiest way to speed up code is to make it simpler. Do less, and move the complexity elsewhere, such as forms.py.

Use decorators generously for validating requests. require_GET, require_POST, or require_http_methods should go first. Next, use login_required or permission_required as necessary. Finally, use ajax_request or render_to from django-annoying so that your view can simply return a dict of data that will be translated into a JSON response or a RequestContext. It’s not unheard of to have view functions with more decorators than lines of code, and that’s ok because the process flow is still clear, since each decorator has a specific purpose. However, if you’re distributing a pluggable app, then do not use render_to. Instead, use a template_name keyword argument, which will allow developers to override the default template name if they wish. This template name should be prefixed by an appropriate subdirectory. For example, django.contrib.auth.views uses the template subdirectory registration/ for all its templates. This encourages template organization to mirror application organization.
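The decorator stacking described above can be illustrated without Django. In this sketch the dict-based request and both decorator bodies are stand-ins I made up for illustration – the real equivalents are django.views.decorators.http.require_POST and django.contrib.auth.decorators.login_required – but the composition works the same way: each layer checks one thing and either rejects the request or passes it along.

```python
from functools import wraps

# Framework-free sketch of stacked view decorators.  Each wrapper
# validates one aspect of the "request" (a plain dict here) and either
# short-circuits with an error response or calls the next layer.

def require_post(view):
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if request.get("method") != "POST":
            return {"status": 405}          # method not allowed
        return view(request, *args, **kwargs)
    return wrapper

def login_required(view):
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if not request.get("user"):
            return {"status": 302, "location": "/login/"}  # redirect
        return view(request, *args, **kwargs)
    return wrapper

@require_post
@login_required
def update_profile(request):
    # The view body stays short: the decorators did the validation.
    return {"status": 200}

print(update_profile({"method": "GET"})["status"])                  # 405
print(update_profile({"method": "POST"})["status"])                 # 302
print(update_profile({"method": "POST", "user": "kim"})["status"])  # 200
```

Note the ordering mirrors the advice above: the method check is the outermost decorator, so it runs first.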
If you have lots of views that can be grouped into separate functionality, such as account management vs. everything else, then you can create separate view modules. A good way to do this is to create a views subpackage with separate modules within it. The comments contrib app organizes its views this way, with the user-facing comment views in views/comments.py and the moderator-facing moderation views in views/moderation.py.

decorators.py

Before you write your own decorators, check out the http decorators, admin.views.decorators, auth.decorators, and annoying.decorators. What you want may already be implemented, and if not, you’ll at least get to see a bunch of good examples of how to write useful decorators. If you do decide to write your own decorators, put them in decorators.py. This module should contain functions that take a function as an argument and return a new function, making them higher order functions. This enables you to attach many decorators to a single view function, since each decorator wraps the function returned from the next decorator, until the final view function is reached. You can also create functions that take arguments, then return a decorator. So instead of being a decorator itself, this kind of function generates and returns a decorator based on the arguments provided. render_to is such a higher order function: it takes a template name as an argument, then returns a decorator that renders that template.

middleware.py

Any custom request/response middleware should go in middleware.py. Two commonly used middleware classes are AuthenticationMiddleware and SessionMiddleware. You can think of middleware as global view decorators, in that a middleware class can pre-process every request or post-process every response, no matter what view is used.

urls.py

It’s good practice to define urls for all your application’s views in their own urls.py. This way, these urls can be included in the top level urls.py with a simple include call.
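The decorator-factory pattern described above (a function that takes arguments and returns a decorator, as render_to does) can be sketched in plain Python. The "rendering" here is a made-up stand-in string, not django-annoying's actual implementation; only the shape of the factory is the point.

```python
from functools import wraps

# Sketch of a decorator *factory* in the style of render_to: the outer
# function takes an argument (the template name) and returns the actual
# decorator, which in turn wraps the view.

def render_to(template_name):
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            context = view(*args, **kwargs)  # the view returns a plain dict
            # A real implementation would render the template with this
            # context; we just describe what would happen.
            return f"rendered {template_name} with {sorted(context)}"
        return wrapper
    return decorator

@render_to("profile.html")
def profile_view(request):
    return {"user": "kim"}

print(profile_view({}))  # rendered profile.html with ['user']
```

Because render_to(…) runs at decoration time, each view gets its own closure over its template name – which is exactly why one factory can serve many views.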
Naming your urls is also a good idea – see django.contrib.comments.urls for an example.

forms.py

Custom forms should go in forms.py. These might be model forms, formsets, or any kind of data validation and transformation that needs to happen before storing or passing on request data. The incoming data will generally come from a request QueryDict, such as request.GET or request.POST, though it could also come from url parameters or view keyword arguments. The main job of forms.py is to transform that incoming data into a form suitable for storage, or for passing on to another API. You could have this code in a view function, but then you’d be mixing data validation and transformation in with request processing and template rendering, which just makes your code confusing and more deeply nested. So the secondary job of forms.py is to contain complexity that would otherwise be in a view function. Since form validation is often naturally complicated, this is appropriate, and it keeps the complexity confined to a well defined area. So if you have a view function that’s accessing more than one variable in request.GET or request.POST, strongly consider using a form instead – that’s what they’re for!

Forms often save data, and the convention is to use a save method that can be called after validation. This is how model forms behave, but you can do the same thing in your own non-model forms. For example, let’s say you want to update a list in Redis based on incoming request data. Instead of putting the code in a view function, create a Form with the necessary fields, and implement a save() method that updates the list in Redis based on the cleaned form data. Now your view simply has to validate the form and call save() if the data is valid. There should generally be no template rendering in forms.py, except for sending emails. All other template rendering belongs in views.py. Email template rendering and sending should also be implemented in a save() method.
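The validate-then-save convention described above can be sketched without Django. A real form would subclass django.forms.Form; here the class, its field handling, and the in-memory list standing in for the Redis list are all illustrative assumptions – only the is_valid()/cleaned_data/save() flow is the point.

```python
# Framework-free sketch of the "form with a save() method" convention:
# validation cleans the raw request data, and save() is only called
# once the data is known to be valid.

class AppendItemForm:
    def __init__(self, data):
        self.data = data            # raw request data (e.g. request.POST)
        self.cleaned_data = {}
        self.errors = {}

    def is_valid(self):
        item = (self.data.get("item") or "").strip()
        if not item:
            self.errors["item"] = "This field is required."
        else:
            self.cleaned_data["item"] = item
        return not self.errors

    def save(self, storage):
        # The view never touches storage directly; the form does.  In the
        # Redis example above, this is where the list update would live.
        storage.append(self.cleaned_data["item"])


items = []  # stand-in for the Redis list
form = AppendItemForm({"item": "  apples "})
if form.is_valid():
    form.save(items)
print(items)  # ['apples']
```

The view that uses this form stays three lines long: construct, validate, save.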
If you’re creating a pluggable app, then the template name should be a keyword argument so that developers can override it if they want. The PasswordResetForm in django.contrib.auth.forms provides a good example of how to do this.

tests.py

Tests are always a good idea (even if you’re not doing TDD), especially for reusable apps. There are two places that Django’s test runner looks for tests:
- doctests in models.py
- unit tests or doctests in tests.py
You can put doctests elsewhere, but then you have to define your own test runner to run them. It’s often easier to just put all non-model tests into tests.py, in either doctest or unittest form. If you’re testing views, be sure to use Django’s TestCase, as it provides easy access to the test client, making view testing quite simple. For a complete account of testing Django, see Django Testing and Debugging.

backends.py

If you need custom authentication backends, such as using an email address instead of a username, put these in backends.py. Then include them in the AUTHENTICATION_BACKENDS setting.

signals.py

If your app defines signals that others can connect to, signals.py is where they should go. If you look at django.contrib.comments.signals, you’ll see it’s just a few lines of code with many more lines of comments explaining when each signal is sent. This is about right, as signals are essentially just global objects; what’s important is how they are used, and in what context they are sent.

management.py

The post_syncdb signal is a management signal that can only be connected to within a module named management.py. So if you need to connect to the post_syncdb signal, management.py is the only place to do it.

feeds.py

To define your own syndication feeds, put the subclasses in feeds.py, then import them in urls.py. Custom Sitemap classes can also go in feeds.py; Sitemap subclasses are often fairly simple. Ideally, you can just use GenericSitemap and bypass custom Sitemap objects altogether.
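The "signals are just global objects" observation above can be made concrete with a minimal stand-in. This Signal class is not django.dispatch.Signal (the real one supports senders, weak references, and more); it only shows the connect/send contract that a signals.py module exposes to other apps.

```python
# Minimal stand-in for a signal object, to show the connect/send
# contract of a signals.py module.  Illustrative only.

class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Call every connected receiver and collect (receiver, result)
        # pairs, roughly mirroring the real API's return shape.
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]


# Module-level signal object, as in django.contrib.comments.signals.
comment_was_posted = Signal()

def notify_moderator(sender, comment, **kwargs):
    return f"moderate: {comment}"

comment_was_posted.connect(notify_moderator)
results = comment_was_posted.send(sender="comments", comment="hi")
print(results[0][1])  # moderate: hi
```

As the text says, the object itself is trivial; the documentation of when and with what arguments it is sent is what matters.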
context_processors.py

If you need to write custom template context processors, put them in context_processors.py. A good case for a custom context processor is exposing a setting to every template. Context processors are generally very simple, as they only return a dict with no more than a few key-value pairs. And don’t forget to add them to the TEMPLATE_CONTEXT_PROCESSORS setting.

templatetags

The templatetags subpackage is necessary when you want to provide custom template tags or filters. If you’re only creating one templatetag module, give it the same name as your app. This is what django.contrib.humanize does, among others. If you have more than one templatetag module, then you can namespace them by prefixing each module with your app’s name followed by an underscore. And be sure to create __init__.py in templatetags/, so Python knows it’s a proper subpackage.

management/commands

If you want to provide custom management commands that can be used through manage.py or django-admin.py, these must be modules within the commands/ subdirectory of a management/ subdirectory. Both of these subdirectories must have an __init__.py to make them Python subpackages. Each command should be a separate module whose name will be the name of the command. This module should contain a single class named Command, which must inherit from BaseCommand or a BaseCommand subclass. For example, django.contrib.auth provides two custom management commands: changepassword and createsuperuser. Both of these commands are modules of the same name within django.contrib.auth.management.commands. For more details, see creating Django management commands.
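The management-command convention above can be sketched in plain Python. BaseCommand here is a stand-in for django.core.management.base.BaseCommand, and the command name and handle() body are invented for illustration; the conventions that are real are the module name becoming the command name and the single class named Command.

```python
# Sketch of the management/commands convention.  Imagine this module
# living at management/commands/expire_sessions.py, invoked as:
#   python manage.py expire_sessions

class BaseCommand:
    """Stand-in for django.core.management.base.BaseCommand."""
    help = ""
    def handle(self, *args, **options):
        raise NotImplementedError

class Command(BaseCommand):
    # The class MUST be named Command for discovery to work.
    help = "Delete expired sessions (illustrative only)."

    def handle(self, *args, **options):
        expired = options.get("expired", [])
        return f"deleted {len(expired)} sessions"


print(Command().handle(expired=["a", "b"]))  # deleted 2 sessions
```

The real BaseCommand also provides argument parsing and output helpers, but the module-name-to-command-name mapping is the piece this sketch demonstrates.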
http://streamhacker.com/tag/admin/
Kisan Patel

LINQ has the great ability to query any source of data: collections of objects (in-memory data, like an array), SQL databases, or XML files. We can easily retrieve data from any object that implements the IEnumerable<T> interface. LINQ to Objects queries operate on in-memory collections of objects. The core types that support LINQ are defined in the System.Linq and System.Linq.Expressions namespaces.

Let’s write a first LINQ example in C#.

    using System;
    using System.Linq;

    namespace ConsoleApp
    {
        class Program
        {
            static void Main(string[] args)
            {
                string[] names = { "Kisan", "Devang", "Ravi", "Ujas", "Karan" };

                var shortNames = from name in names
                                 where name.Length <= 5
                                 select name;

                foreach (var name in shortNames)
                    Console.WriteLine(name);
            }
        }
    }

Output of the above C# program:

    Kisan
    Ravi
    Ujas
    Karan

Here, we have filtered a list of names to select only the ones whose length is less than or equal to five characters. LINQ queries start with the keyword from. The from clause targets any instance of a class that implements the IEnumerable interface. These queries look similar to a SQL statement, although their style is a bit different. The sample expression we have defined consists of a selection command, select name. Between the from clause and the select clause you can find join, where, orderby, and into. These are all query operators, and over 50 operators are defined. In short, the more complex the operation you want to perform on data, the more benefit you get from using LINQ to Objects instead of traditional iteration techniques.
http://csharpcode.org/blog/linq-to-object/
Axioms from Interactions of Actors Theory

Nick Green
Real Time Study Group, Department of Computer Science, University of Wales, Cardiff

Keywords: Cybernetics, Self-organisation, Interaction, Actors, Evolution, Learning, Spin

Abstract

While working with clients in the last years of his life Gordon Pask produced an axiomatic scheme for his Interactions of Actors Theory, which is a development of his well known Conversation Theory. These axioms are interpretable as a general theory of self-organisation and are discussed as characteristic of field concurrence and as part of the second order cybernetics canon. An application to population density is reported, supported by both kinematic and kinetic simulation. Implications for cardiovascular anti-coagulation therapy and planetary evolution are discussed.

Introduction

Gordon Pask's Interactions of Actors Theory (IA) was developed from his earlier Conversation Theory (CT). The key ideas are presented as the writer encountered them. A cosmological picture emerges of elementary, interacting self-organising processes that evolve to produce life and concepts in our brains. Pask regards concepts, in general, as any persistent self-productive looping process in any state of matter. CT requires the concurrent existence of at least two distinct precursor concepts to generate a third. These concepts resonate like chemical tautomers, making them analogical to each other. CT is transformed to IA by the taking of duals, and interacting Actors are produced which support participants in conversations of bounded duration. The central object is the stable concept triple, which takes the form of the Borromean link. The potential of this form as a concurrent computing element and a model of continuity is discussed. The self-organising forces exerted by the concept triple are the subject of the axioms, which are discussed in detail.
Lastly, serial and concurrent experimental work is reported which applies Pask's Last Theorem (PLT), a theorem about differences and forces. Experiment confirms the view that weak forces lead to dense forms of self-organisation in ekistics (the theory of settlements), cardiovascular anti-coagulation therapy and planetary evolution. It will be noted, for example, that diatomic hydrogen cannot form unless Pask imperative forces act on the atoms. In his later years Pask had effectively shown IA to be a theory of Cybernetics, hence confirming the von Foerster soubriquet for him "Mister Cybernetics", the "cybernetician's cybernetician" (von Foerster, 1995). In 1994, while consulting with clients [1], Gordon Pask made a clear statement of axioms or properties governing his Conversation Theory, CT (Pask, 1976), the means by which concepts are shared, and its underlying support, Interaction of Actors Theory, IA (Pask, 1993). As Bateson (Bateson, 1980) has it, Nature and Mind are one. To Pask, also, Nature's tendency to self-organisation produces concepts (Pask, 1996), whether in a star or a brain. A formal statement of these axioms has never been made. By describing how IA developed, and the names and properties of the axioms as discussed with Pask, in due course a more formal paper and scheme might be produced.

P-Individuals and M-Individuals

There are two parts to an interacting participant or Actor. First, the P-individual, which is a dynamic, productive and incidentally reproductive, adaptive, evolving and learning collection or entailment mesh of concepts. Dawkins' memes (Dawkins, 1976) are a restricted form of P-individual. Second, the M-individual, which is a mechanical or biological medium, e.g. a computer, a brain or a star, which supports the P-individual and the strains its concepts produce. One P-individual may inhabit many M-individuals, and many P-individuals may be interpreted in a single M-individual (Pask, 1975).
To Pask, Nature's eternal interactions, with associated P-individuals interpreted in M-individuals, could be regarded as conversations between participants when constrained by beginnings and ends. The constraints on the second order paradigm of Cybernetics, where one participant may be a carbon-based life form designated observer and part of the system, are made plain with the axioms. These axioms apply to all participants, whether observer or not. Pask (Pask, 1990) won an award from Old Dominion University, Virginia for his Process/Product complementarity principle: "Every process produces a product, every product is produced by a process", e.g. electromagnetic waves and photons or, in the case of CT, applied concepts and their descriptions. This may be written

Ap(Con_z(T)) => D_z(T)

where => means produces, z is a participant and T the current topic or concept, from which all others can unfold as thought proceeds from within the domain or entailment mesh of the participants. So the Description, D, of Concept, T, is produced by the Application, Ap, operator on the Conversation, Con, operator on T. The dual character is denoted <Ap(Con_z(T)), D_z(T)>. Physically, D is a hard carapace of repulsive force. P- and M-individuals are also dual process/product pairs. Pask wrote this as <P-individual, M-individual>, denoting an Actor or participant to interaction or conversation. Ap, Con and D are operators in Pask's protolanguage Lp, or protologic as he sometimes called it.

Concepts

A concept is derived by the conjunction, ANDing, of at least two concurrent concepts. This satisfies the marked state requirement of Petri information transfer. Dr Bernard Scott (personal communication) points out that this is equivalent to the "irreducible to binary form" relations emphasised by Peirce, Korzybsky and McCulloch. This may be shown graphically as in Figure 1. Here, for example, we might be speaking of Newton's Second Law [2].
T1 could stand for the concept of Force, with T2 and T3 being mass and acceleration. To understand the modern theory of force more deeply we might prefer the topics or concepts to be assigned as sparticle, knot and string. Something more trivial seeming might be dog, lead and walk. Clearly, in all these examples, a third concept can be derived from any pair. There is a strict analogy between any concept pair, distinguished by differences and similarities defined by the third concept. Pask (Pask, 1976) called this "local cyclicity", and this can be demonstrated graphically as in Figure 2. In the case of the two examples taken from classical and supersymmetry force theory, we can demonstrate consistency and coherence with, for example, dimensional analysis, or probing into the observations that led to the construction of the concepts. In the case of the dog, the walk and the lead, a supporting hypothesis may also be constructed. Further concepts are entailed, supporting the disambiguation of dog, walk and lead. One might show that leads are used to take dogs for walks rather than, say, connect them to an electricity supply. Supporting hypotheses of this kind are implicit in the scientific examples and can also be made explicit. The terms assigned to the topic numbers must, at least, be defined unambiguously for the appropriate context. Thence addition of concepts leads to potentially very large entailment meshes representing the concepts of one or many participants to a conversation, delimited by beginnings and ends, supported by an interaction which is eternal.

Figure 1: T1 is derived from T2 and T3.
Figure 2: Local cyclicity of a concept triple.

Interactions of Actors

It was from consideration of local cyclicity [3] in Conversation Theory that Interaction of Actors Theory was derived.
Pask summoned me to the Athenaeum, one of his clubs, to show me with great enthusiasm some large drawings he had made, charmingly indifferent to the club rule: no papers in the bar. Duality, and the taking of duals, is a widely used technique in simplifying, for example, graph theory and electrical circuit theory. Calculations can sometimes be simplified. In graph theory nodes can be interchanged with arcs, and in circuit theory inductances with capacitances and current sources with voltage sources. Further, we find a three pointed "star" of resistances is electrically equivalent to a triangle of resistances. Electrical engineers call this Star-Delta duality [4]. Pask took the dual with the bidirectional arcs of local cyclicity and the topic or concept nodes. Thus, in one elegant transformation, Conversation Theory was made dynamic. The old derivations, the sticks of what he called his "stick and ball model" (as in Figures 1 and 2), became formally circular, closed loop processes. The topic or concept nodes became the intersections of the fields produced by the closed, looping processes. He asserted forces acted between the circular processes and that they existed in stable triples. The Begins and Ends of the application of concepts in Conversation Theory had been replaced by eternal, evolving kinetic interactions between organisationally closed and informationally open concept loops: toruses with carapaces that maintain a boundary, a distinction. An Actor is anything that acts as a result of an interaction or a transfer of meaningful information. The axioms or properties below apply to both Conversation and Interactions of Actors Theory, with the exception that conversations of P-individuals have Begins and Ends and may be interrupted by other conversations and so be nested. Actors and their M-individuals interact eternally.

Concept resonance

The closed toroidal processes which comprise the concepts of P-individuals exist as stable triples in which any pair is analogous to the other and distinguished by the third. Any two concepts may generate the third because of their resonant similarities and differences. The resonance produced by an incident field produces an output radiative field. Knot Theory was a matter of some concern to Pask. He coined the term "tapestry", making his entailment mesh structure of the concepts of a participant coherent with knot theory. Whilst loops are always permitted, indeed are the nature of concepts, the intersections of CT become crossings in IA, where crossings up and down with loops define knots. The crossing up or down rule of the knots or links was not decided, but a recursive and nested Borromean form seemed most likely. This seems coherent with the superstring theory interpretation of force. Here knots in strings produce the sparticles and thence bosons of current force theory, e.g. Pierre van Baal and Andreas Wipf (van Baal & Wipf, 2001), who postulate that Hopf linked strings (two intersecting loops) produce force. A suitably energetic hadron collider experiment, e.g. Utpal Chattopadhyay and Pran Nath (Chattopadhyay & Nath, 2001), may be able to decide if the Hopf or Pask's Borromean model is most respectable. A smaller new experiment by Long et al (Long et al, 2003), using planar oscillators at 100 µm lengths, has examined some of string theory's predictions of the need for extra dimensions. In some cases this would result in deviations from Newton's inverse square law at distances less than 1 mm. None were found. The twelve arcs of the Borromean Ring form and the equilateral prismatic tensegrity of Buckminster Fuller were under study at the time I joined Pask Associates in 1993. The tensegrity provided a tractable force model comprising three repulsions and nine attractions corresponding to the diagonal rods and tension strings.
In due course these were configured as elements of a potential concurrent computer. Pask wanted a machine in which the three struts of the prismatic tensegrity were periodically excited with make-and-break circuits feeding solenoids. Tiny electric motors attached to each strut were more successful and less liable to collapse. The phasor sum of these vibrations produces a fourth frequency that is a mechanical four wave mixing analogue [5] (Shih, 1987). We had no theory of desirable frequencies. A variety of resonant frequencies could be inferred, but a full vibrational analysis with strain gauges and spectrograph was an obvious next step. Pask felt inspiration, it seemed to me, holding the machine as its three axes vibrated and, as usual, fell apart. He was clearly looking for unusual phenomena. With the help of the Royal Institution a higher precision device was made with six equal mass, 15 cm length mild steel rods. There was potential danger here without a safety cage, and thankfully Pask appeared to lose interest in pursuing this, having made his statement, as it were, of the nature of concept triples as computing elements. This device mimics the phase conjugate mirror. Two waves are mixed in a non-linear medium. Any input wave is amplified if the two "pumping" waves are sufficiently intense, phase conjugated and reflected back, converging to the input source. Phase conjugate oscillator technology is crucial to some to make practical beam weapons. The fact that the incident wave is phase conjugated has led to the radiated wave being described as "time reversed". One can share in Pask's delight at interpreting in rough hardware his concept triples as a dynamic computing component where the memory requirement was formally embodied as a highly restricted method of travelling backwards in time. Further precision work and miniaturisation is required to create a practical device.
Confirmation that output is phase conjugated with the input wave could be a first step in validating the prototype prismatic tensegrity computer. Estimating its capacity as a storage medium for analogue waveforms for given implementation technologies might be a next step.

Figure 3: Borromean Ring model of a stable concept triple.
Figure 4: Plan of the equilateral prismatic tensegrity computing element, showing optical isomerism, an enantiomorphic pair. A short proof of the 30° angle of twist is elusive.
Figure 5: Isometric view.

Concept kinematics and kinetics

Concepts nest recursively in triples within each other. As drawn by Pask they form tori, like wires in a multi-core electric cable [6]. Repulsive forces are exerted by the concepts, generating a carapace or hard protective shell around them, as in Figure 6. These forces further distinguish the domain of Interactions of Actors Theory from Conversation Theory in that Pask asserted IA to be a kinetic theory and CT to be kinematic, because of its implicit begins and ends. Forces cause thoughts to change, and we have the Last Theorem, as Pask called it, which states "Like concepts repel, unlike concepts attract".

Pask's Last Theorem (PLT)

This statement is intended to embody all forces: weak, strong, gravitational and electric or magnetic, which give rise to the self-organising character we see in Nature. Here '+' directs into the not void (or something) and '-' means orthogonal to and "deflects similar concepts". This deflective force is "bearing the clockwise or anticlockwise signature of the process which creates it" (Pask, 1993, p. 44). In a most difficult part of the IA manuscript a prepositional operator mesh is introduced, with ontologies and analogies. Ap becomes permissive application and & (or IM) imperative application. There is a discussion of unfoldment, Un, as mentation and action in this context. Un is identified with '-' and seeking out further concepts in its neighbourhood to apply.
The '+' process may be clockwise or anticlockwise in orientation and may not always close or "eat its own tail", but may disappear into the void (Pask, 1993, p. 77). On page 78 he says "No aether, electromagnetic or not, is needed only orthogonality, as in the electrical and magnetic components of an electromagnetic field and, here, represented by '+' and '-'". After stating PLT for the first time to me (Green, 2001) I asked Pask "What kind of force?" After a considered pause he said "Just a force".

Self-organisation

The properties of the axioms can be investigated and appreciated as characteristic of interaction in field concurrent n-body systems, like the Newtonian solar system or the electrically charged protons and electrons of an atom or molecule. The outcomes of interaction are interpreted in the participant Actors. Interpretation of Pask's theory of force as producing self-organisation is ongoing. Some not so obvious considerations from the classical dynamics of forces should be pointed out. First, stable closed elliptical orbits are rare in n-body simulations with random initial conditions but equal masses. Choreographies, by contrast, are common when masses are equal, e.g. Cris Moore, Santa Fe [7], Carlos Simó (~2000, but undated pre-print from Paris Observatory). These trajectories form braids and are stable in that a small perturbation produces a precession only. This may imply braids are frequent in liquids and gases between like polarised molecules. For serial computing simulation tools see Acheson (Acheson, 1997). Asymmetric attractive, sticky, aggregative generation producing a non-uniform Power Law or Bakian 1/f distribution of masses (Bak, 1997) is probably necessary for the stable seeming elliptical orbits we all expect. Bak's condition of Self-Organised Criticality can be met with the non-linear aggregation of attractive uniform aggregates or Actors (see later for more; Witten and Sander, 1981). The mechanism of the stickiness (or valency) is worth further study and may be partly tractable under serial digital simulation.

Figure 6: The carapace of repulsive force around a section of a closed loop concept process (from Pask, 1993).

Poincaré proved circular orbits in n-body systems have zero probability. Later Kolmogorov conjectured orbital toroids were feasible under perturbation in non-dissipative systems. The proofs due to Arnold and Moser in 1962 and 1963 produced the famous KAM Theorem [8]. Henceforth in this paper "circular" process implies a KAM compliant process. Whilst the laws of electric or magnetic attraction and repulsion are familiar, recall again the closed curved Newtonian process. There is repulsion without electromagnetic law in the resistance of a gyroscope to a couple. Laithwaite's [9] apparently notorious observation, while sitting on a rotating office chair with a spinning bicycle wheel in his hand, that rotating the seat in one direction made the wheel lighter and in the other made the wheel heavier, should be recalled. The behaviour of the asymmetric Tippe (sometimes spelt Tippy) Top [10] should be considered. Here is a counter example to whatever dynamical minimisation principles (e.g. of Fermat, Maupertuis or Hamilton) is operating. The top turns upside down at some critical frequency, reversing its direction of spin and achieving an equilibrium, but raising its centre of gravity to do so. Conventional minimisation principles assert themselves as the top slows down past the critical frequency and it topples, reversing spin again, with the centre of gravity minimising its potential energy as intuition and simple Newtonian dynamics demands. A picture of Bohr demonstrating the inverting Tippe Top to Fermi [11] is held by the American Institute of Physics.
Prigogine's Group 12 believes non-linear interaction produces self-organisation in irreversible dissipative processes. Simultaneous three body collisions are under investigation and spin interaction in quantum systems, for example. For Pask a spin reversed concept was formally different and by PLT attractive. Stable triples exhibit residual clockwise/anticlockwise parity. Lastly, it is worth noting a relatively new phenomenon discovered by David Acheson (Acheson, 1993) and discussed in Acheson (Acheson, 1997). The stiffening of a soft carapace, as IA might interpret it, has been demonstrated. Around 30 inches of ordinary household flexible plastic covered curtain wire stands up straight when vertical periodic excitations are applied to the base with a mechanical vibrator at a resonant frequency. Acheson calls this gravity defying effect 13 "Not quite the Indian Rope Trick".

Interactions of Actors Axioms

Context; Perspective
Responsible, Respectable
Amity
Agreement; Agreement-to-disagree (ATD)
Purpose; Unity not uniformity
Faith
Beginnings and Ends (CT); Eternally interacting (IA)
Similarity and Difference
Adaptation; Evolution; Generation
Kinetic (IA); Kinematic (CT)
Conservation of Meaningful Information Transfer, both Permissive (Ap) and Imperative Application (Im)
Informational openness and Organisational closure
Void and Not-Void

Exactly how to treat these axioms is the subject of some discussion. Pask's form of words for each axiom is particularly useful in maximising applicability to the so-called soft sciences. The aim is to be able to make statements about interacting people of form equivalent in robustness to those of conventional physical science: age of the Earth, the water will take ten minutes to boil and so on.
There is no implied order of priority in these axioms. They exist concurrently as restrictions on the behaviour of all participants in their interactions and the forces acting, be they strings, sub-atomic particles, atoms, molecules, plasmas, gases, liquids, solids, plants or animals. The setting up of counting, and in particular state counting or variety, may prove an interesting first challenge to elegance and chosen notation. The apparent simplicity of the Similarities and Differences in Actors Axiom is what may be deployed. It should be noted that a difference is the feedback of first order cybernetics. Beer once remarked to me that often people in conversation would make a statement or proposal and invite criticism or responses from others by saying "Give me some feedback". This he said was, with an emphatic pause, "not correct". Beer said no more and in the context of a formal model he may have a point. It would be correct in the context of the Difference axiom in CT/IA where "teach back", the feedback to which Beer objected, is judged, implied or used explicitly to confirm a new difference acquired or successful act of learning or meaningful information transfer. This is also described as the execution of a model in a modelling facility shared by participants, within the Cognitive Reflector form of CT, to demonstrate an interpretation of the newly acquired difference by a participant. This was Pask practising as philosophical mechanic (Pask, 1993, p. 83). These are more or less formal routine, but imperative, components of transactions in everyday conversation. Cognitive pathologies result from their incorrect operation e.g. vacuous holism "Can't see the trees for the woods" or pathological serialism "Can't see the woods for the trees" rather than the desirable balance of local and global versatile cognitive style. The nonsense talked about micromanagement today reflects these pathologies.
The dangerously fallacious (vacuous holist) belief that knowledge of detail prevents proper management led to a Railtrack Board with no engineers 14. The lack of proper performance of track maintenance sub-contractors seemed unnoticed by the Board. There were tragic consequences with seven dead outside London at Potters Bar on May 10th 2002. In 2003 investigative reporting in the London Evening Standard showed track maintenance sub-contractor costs out of control. BBC investigations disclosed a similar pattern at London Underground. A therapeutic dose of Viable System Theory can cure but participants get attached to their pathologies. They are surrounded with repulsive forces. They have to "want to change" as the psychotherapist might say. Presenting an unwelcome difference the professional cybernetician may be confronted with Dr Elisabeth Kübler-Ross' five stages of grief: "denial, anger, depression, bargaining and acceptance" (Kübler-Ross, 1969). Through enquiry CT/IA can help to externalise, compare and contrast participants' perspectives in a given context, facilitate self-discovery and establish more versatile styles of cognition and learning. Diagrams or entailment meshes showing the dependencies of participant perspectives on concepts can assist this process. These n-ary relations can unfold from a head node, topic or concept and, in IA or dynamic Lp, they can depict forces acting between concepts through PLT. Amity and Faith have caused much concern. Amity simply means availability for interaction. It is a nearest neighbour criterion for least noticeable difference or distinction in an observation.
In the human context we may say "willingness" to interact, where will implies a shared Purpose or Unity which is not Uniformity. This in turn may lead us to use the term love, certainly where Generative interaction (or aggregative growth) is implied. Graham Barnes' (Barnes, 1994) celebrated primer linking Psychotherapy to second order cybernetics, "Justice, Love and Wisdom", can be seen in greater relief and in all its unbounded applicability when this clarification is made. Justice is defined here as "reflective balance", wise homeostasis applied in the manner of Rawls (Rawls, 1973) or, indeed, Rescher (Rescher, 1966). Faith is a property of the duration of an assertion and persists until a contradictory counter example is found. It is the method of argument used to establish these axioms. These axioms only hold if no counter examples can be found in the context of Interaction. This does not imply completeness. New axioms may be found and existing axioms may be condensed into a single more powerful axiom. To interact or do a proper experiment, for example, faith is required. We cannot forecast the outcome, whatever we may claim, otherwise the experiment or interaction would not be worth conducting. It may be seen as a kind of concurrent Halting Problem or unknown signal identification with noise. It is never clear how long to cancel or average noise before a useful signal may emerge. Responsible, Respectable distinguishes classical and quantum observability and controllability. A respectable Actor is classically observable, can be heard, seen etc. A potentially responsible Actor may require excitation or heating to fulfil a Heisenberg condition of respectability or observability. More usually excitation is needed to produce some desirable characteristic response. Pask saw these induced excitatory stresses and strains as analogous to the tautomeric forms of structural chemistry.
The IA forms are more facile in application and the ethical status of observation can emerge naturally, despite the somewhat unnatural Victorian resonance. The Home Office Minister appeals to us to instruct our children to show respect, which is fine if reciprocated by his Department's accountability, which is another form of the observability criterion. Simply put, lack of accountability is formally not respectable. With the Kalman (1961) 15 approach a system is controllable if there is some finite set of inputs which can produce a desired state. Observability of a given internal state requires a finite number of observations from which that state can be estimated. Given the Conant-Ashby (Conant & Ashby 1970) Model Regulator Theorem we can use requisite variety and insist observability precedes controllability to make a proper model. The transfers of meaningful information implied require agreement or reproducibility according to the IA axioms. The Kalman criteria help us see what might be available for further rigour in the responsible (perhaps better response-able) and respectable (perhaps better respect) axiom definitions. There is an ethical position associated with these axioms that, surprisingly, stands up to further examination. When a test mass is released into an n body system it follows a path of minimum action (putting on one side what appears to be the transient maximum of the Tippy Top phenomenon). In a society we can interpret this as optimum action and therefore locally and globally optimum (Bounais, 2002 and Pineau, 2002).
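The Kalman criteria admit a compact illustration. The sketch below is my own toy example, not drawn from Kalman's paper: for a two-state "double integrator" (state = position and velocity, with force driving velocity) the standard rank tests on [C; CA] and [B, AB] show that measuring position makes the state observable, while measuring velocity alone does not.

```python
def rank2(M):
    """Rank of a 2x2 matrix with small integer entries (exact arithmetic)."""
    a, b = M[0]
    c, d = M[1]
    if a * d - b * c != 0:
        return 2
    return 1 if any(x != 0 for row in M for x in row) else 0

def observable(A, C):
    """Kalman test: the pair (A, C) is observable iff [C; CA] has full rank."""
    CA = [sum(C[0][k] * A[k][j] for k in range(2)) for j in range(2)]
    return rank2([C[0], CA]) == 2

def controllable(A, B):
    """Kalman test: the pair (A, B) is controllable iff [B, AB] has full rank."""
    AB = [sum(A[i][k] * B[k][0] for k in range(2)) for i in range(2)]
    return rank2([[B[0][0], AB[0]], [B[1][0], AB[1]]]) == 2

# Double integrator: state = (position, velocity); the input is a force.
A = [[0, 1], [0, 0]]
B = [[0], [1]]
```

Here `observable(A, [[1, 0]])` holds but `observable(A, [[0, 1]])` does not: a velocity-only sensor can never recover absolute position, which is the sense in which observability must precede controllability in building a proper model.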
The ethical implications of local/global optimisation or minimisation of action, the transformation of imperative to permissive or discretionary application, should be considered further. It may not be widely agreed that altruism is a property of matter, but that language supports it cannot be denied. Can language be applied without altruism? Suppression of language, censorship, stealth, privacy are the beginnings of the short termism or restricted local optimisation, which is not altruistic and not respectable or responsible. If we speak of meaningful information transfer between self-organising systems then we may expect an ethical content. An application to the longer term, a eudemonic utility, as Stafford Beer might put it, as opposed to a hedony - the relatively short term gain which has recently led to pension funds being plundered on a wide scale. Appropriate real time or concurrent engineering could make fraud of this kind impossible or, at least, far less likely to succeed. We might say of a Dawkins' gene that it cannot be selfish unless it can keep secrets. The ethical content of interaction, whilst implicit for Interacting Actors, seems to lessen as Permissive Ap and Begins and Ends enter the picture. We note the notion of "optimal" is difficult in physical nature but where optimal trajectory in an n body system asserts itself, in real time and kinetically, there is implicit optimisation and thus ethical minimisation of action. In serial digital kinematic simulation accurate serial 16 digital computation with its begins and ends is implicitly error prone. The meaning of optimal in this sense might change with intention or purpose e.g. it may be quicker, more ethical with action minimised, to get to Mars by waiting for a better planetary alignment.
Oddly, perhaps, determining the applicability of ethics or optimisation of action and purpose might yield fundamental theorems in concurrent computing. The failure to distinguish serial/parallel and pseudo concurrent (task scheduling) from true field concurrence is paralysing to development in computing. Shannon Information Theory's dependence on the Ergodic Hypothesis encourages this 17. Turing's definition of Computability - what a bank clerk can do with a pencil, some paper and unambiguous instructions - similarly assumes seriality is competent for Universality. Ashby's more fundamental measure of Variety is not, in principle, sequentially constrained. The one line demonstration of Requisite Variety via the simple integration of bandwidth implies seriality. However the substitution of parallel communication channels presents no difficulty. Note Kolmogorov (Kolmogorov, 1956) shows the equivalence of Shannon Theory to continuous signals when observed with bounded accuracy. The substitution of parallel digital channels by true field concurrent (implicitly synchronised and dependent) continuous fields renders no difference to which Shannon, Turing or Ashby might object 18. Thus Variety can be applied to the true concurrent case also. Chaitin in personal communication confirms knowledge of Requisite Variety as a teenager. The purely Turing sequential proof of his Algorithmic Information Theory 19, that the lengths of incompressible algorithms are bounded by their incompressible output, follows freely from Ashby's Law of Requisite Variety. We have constraints exposed here that might yet yield a concurrent theory.
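Ashby's Law of Requisite Variety lends itself to direct enumeration, with no assumption of seriality about the system being regulated. The toy below is my own construction (the 9-disturbance outcome table is hypothetical, not from Ashby): a regulator with only 3 responses cannot reduce 9 disturbances to fewer than 9/3 = 3 distinct outcomes, and here it attains exactly that bound; only when regulator variety matches disturbance variety does the outcome collapse to a single state.

```python
def attainable_outcome_variety(disturbances, responses, outcome):
    """Greedy check of Ashby's Law of Requisite Variety.

    For each disturbance the regulator picks the response that maps it to
    the lowest-labelled outcome; the variety (count) of surviving outcomes
    can never fall below |disturbances| / |responses|."""
    survivors = set()
    for d in disturbances:
        survivors.add(min(outcome(d, r) for r in responses))
    return len(survivors)

# Hypothetical toy table: outcome = (d + r) mod 9.
D = range(9)
R = (0, 3, 6)                    # only 3 responses available
variety = attainable_outcome_variety(D, R, lambda d, r: (d + r) % 9)
```

With the full response set `range(9)` the same table collapses every disturbance to outcome 0: variety in the regulator absorbs variety in the disturbance, and only that.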
Pask was adamant that Data Compression did not imply automatic theory building, at least in the concurrent case. We seem to have the necessary ingredients for a concurrent theory. How to proceed is not decided. Permissive and Imperative application of a stress-producing concept may be distinguished by freedom of choice: when the strains of concepts internal to a P-individual lead to autonomous action rather than action forced by another participant. We see clearly how delicate the balance between permissive and imperative application necessarily is. Perhaps like a random number generator permissive application is an ideal. Freedom of choice evolves, argues Daniel Dennett (Dennett, 2003). Can permissive Ap ever be eternal? Further the hierarchy of force: weak, strong, electric and gravitational may yield to a nested homeostatic model of the balance of imperative forces. But what exactly is a permissive application? The outcome of what we call autonomous thought, no doubt, but is free will implied? The "no doppelgangers" clause so often applied by Pask implies no two concepts are the same because they are formed in unique contexts with unique perspectives. This provides an actor with unique responsibility. Is this enough to guarantee free will? Wittgenstein says free will exists because future actions cannot be known now (Tractatus 5.1362), a position shared by Pask in his indeterminacy of foci of thought or attention and the indeterminacy of truncation and unfoldment given its origin (Table 4 Pask, 1993). We might say free will is environmentally determined. When thought forces are weakly perceived, distant and outside the nearest neighbourhood context, they form a background of noise. A stochastic resonance may occur, however, lifting out a concept under enfoldment. This will ensure unique action but not freedom of choice.
Our willingness to accept responsibility for freedom of choice is seen as evidence by Pask (p. 83 Pask, 1993) of sentience and eternal mystery. Could it be permissive application is only possible in contemplating the eternal and infinite? But that position uses a variety 20 argument that Pask would not approve, at least in our private research dealings. Avoiding responsibility by claiming imperative constraints on choice or free will, which are, in fact, permissive, may be at the heart of much management failure today. The Beer model of Autonomy (Beer 1972) may provide a practically tenable position adumbrated by D. J. Stewart's (Stewart, 2000) Tern Theory and its intervention ratios with their supporting dimensions of imparity. An equilibrium between imperatives would seem a necessary initial condition for permissive Ap. Pask claimed Void and Not Void, or "something" as not void may be called, were required distinctions in IA. A connected void of some kind distinguishes an M-Individual. In CT although voids may be invoked they are not required. The begins and ends of CT may be seen as abstractions of the void/not void. If begins and ends are real on/off phenomena with characteristic rise times or even actual void/not void crossings an interesting question arises. Is Gibbs ringing seen around the begins and ends when the closed, periodic processes in a signal are counted with Fourier spectral analysis (i.e. when Con z (T) is applied with Ap in a CT with begins and ends)? Voids are small with suitable dimensions around 10⁻³⁵ m., the Planck length 21. Are they found, perhaps, at the centre of superstrings? Are they the wormholes that join a Deutschian multiverse (Deutsch, 1997)? These are the tests they must pass to be part of a relevant physical canon.
Recall Concepts are made up of voids and "something" with '+' force (thought) directing into the not void. At the level of CT the Gibbs effect, an exponentially decaying distribution of high frequency components (the circular processes that must be summed to produce a steep rise time, an "on"/"off" or begin/end), may be seen. Pask claims indeterminacy around the truncation and selection of an unfoldment force, '-', (mentation) allowing some smoothing and reduction of these undesirable frequency components. At the level of Interaction of Actors, in which the void is a necessary component, the convolution of a 0, 1 void/not void support with a test signal can cancel the Gibbs ringing. Careful choice of initial conditions can yield the necessary phase cancellation with a test signal 22. The possible elimination of CT's Gibbs phenomenon by a wavelet convolution with the discrete punctured void/not void space that IA supports and requires underlines the depth of Pask's approach. This constitutes an outline proof of the value and power of the Begin/End, Void/Not Void axioms distinguishing CT and its IA support 23. Agreement can be seen as trajectories between bodies in phase lock coupling or copying as in Wiener self-organisation (Wiener, 1961, p. 201), the "attraction of frequencies" as he called it in the second edition. It is interesting to note that this attraction only occurs in the parallel-coupled case and a serial coupling leads to instability or a spectrum of many frequencies.
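The Gibbs ringing invoked above is easily exhibited numerically. The sketch below is a standard construction, not taken from Pask: the Fourier partial sums of a ±1 square wave overshoot the jump by roughly 9% of its size however many circular (sinusoidal) components are summed, the peak tending to (2/π)Si(π) ≈ 1.179 rather than decaying.

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of a +/-1 square wave: sum of 4/(pi*k)*sin(kx), k odd."""
    return sum(4.0 / (math.pi * k) * math.sin(k * x)
               for k in range(1, 2 * n_terms, 2))

def gibbs_overshoot(n_terms=200, samples=2000):
    """Peak of the partial sum just after the jump at x = 0.

    The first (largest) ripple sits near x = pi/N where N is the highest
    harmonic kept, so we sample a fine grid covering that neighbourhood."""
    xs = [math.pi * i / (samples * n_terms) for i in range(1, samples)]
    return max(square_wave_partial_sum(x, n_terms) for x in xs)
```

Doubling `n_terms` narrows the ripple but leaves its height essentially unchanged, which is the point of the begin/end question: sharp on/off edges cost a persistent overshoot in any finite sum of periodic processes.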
A deeper understanding of PLT immediately suggests itself wherein repulsion may cause unfoldment of concepts (mentation thence action or thought) but attraction may simply cause aggregation, learning or addition of a new concept. The push and pull of thought, as it were. We might also notice that in his last paper Pask (Pask, 1996) used the von Foerster (von Foerster, 1981) redundancy measure, R, of self-organisation - itself motivated by the McCulloch condition of Redundancy of Potential Command:

R = 1 - H/Hmax

where H is current bandwidth or entropy and Hmax peak bandwidth or entropy. This condition, the minimisation of bandwidth and the maximisation of autonomy, motivates learning and the acquisition through agreement of new and useful concepts. The usual Paskian distinctions from CT/IA of similarity and difference apply here. The philosophical mechanic's cognitive reflector protocol can be applied to produce externalisation, in a medium shared by at least two participants, of a behaviour induced by one participant as a result of a communication act. The response by the second participant, with whom there also exists the analogical dependence of meaningful information transfer or communication, constitutes agreement. Agreement to Disagree is clearly the recognition by a pair of participants that this condition has not been achieved. It denotes the differences between participants. Begins and ends are implicit here. These bounding singularities are indicated here by a rapid change in radius of curvature of a trajectory or an encounter with the void (possible Gibbs effect and frequencies dependent on the magnitude of the force applied). The precise nature of this interface where forces produce begins and ends needs further work. Adaptation, Generation and Evolution are deeply intertwined, as indeed are all the axioms, but key features can be distinguished. Adaptation is a dynamical change, perhaps elastic in character.
From Ashby (Ashby, 1952) adaptation is the propensity to re-establish equilibrium or homeostasis. If asked to give an example of a simple adaptive system Pask once offered a "cushion". It seems quite reasonable to suggest Newton's Third Law, "To every action there is an equal and opposite reaction", is a manifestation of the adaptive principle. In the n body environment Le Chatelier's Principle seems best to capture the adaptive principle: "A system will shift its equilibrium to oppose external change". The communication of difference seems a pre-condition for adaptation to occur. The interpretation of this phenomenon in terms of PLT, in which an attractive difference produces an identity maintaining repulsion to a difference, needs further work. Generation is seen as growth by attraction and thus aggregation. There is a fuller discussion of this with supporting experimental work at the end of this paper. Evolution requires meaningful information transfer or learning. The System Four of Beer's Viable System Model (Beer, 1972) embodies this most clearly but the inheritance of phylogenetic learning is more primitive. The ontogenetic learning of the Viable System comes after the evolution of a neuroanatomy 24. For this generation is clearly required. It is interesting to note Pask's embodiment of Beer's autonomic response as an imperative application of self-organising forces. System Five ensures the more permissive applications of System Four are bounded by the Identity Imperative. Alfred Wallace (Wallace, 1858), whose letter to Darwin caused publication of "Origin of Species" 25, described evolution as a negative feedback process.
In the penultimate paragraph of the paper to the Linnean Society Wallace states:

The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.

In his paper the frequent use of the term variety, as in "variety of species", and, as above, the term "unbalanced deficiency" (as extinction producing violation of homeostasis) are striking to the cybernetician. Bateson (Bateson, 1976) calls this "probably the most powerful thing that'd been said in the 19th Century". Ten years later Maxwell (Maxwell, 1868) wrote "On Governors" but the discussion is restricted to Watt, Jenkin, Thomson, Foucault and Siemens who all produced centrifugal regulatory devices. Maxwell seemed unaware of Wallace's claims for the centrifugal governor; unifying electricity, magnetism and light was achievement enough, after all. Nowadays we speak of negative feedback. The term was not used until 1927 when Dr Harold Black 26 used a phase shifting loop back, patented in 1937, to cancel amplifier valve noise for the American Bell transcontinental telephone system. The usefulness of an Evolutionary axiom cannot be denied provided it is interpreted as a negative feedback bound on the viability of a growing population and its associated extinctions, begins and ends. The Difference axiom can thus be applied with additional distinctions to Generation and Adaptation to yield Evolution.
Per Bak's challenge (failed by John Holland's Genetic Algorithm) to calculate a distribution of species extinction might be an interesting problem to attempt with IA assumptions. The use of "viability" was strongly endorsed by Pask (Pask, 1993, p. 70) with an affirmation of the depth of Beer's Viable System theory (along with his C.T., Lp and I.A.). "Foundationally Cybernetic in type as competent and very general if not universal theories" he wrote, "against a background of badly considered, pretentious and often meaningless general theories, greatly publicised and advertised by empty rhetoric." Many years ago Beer asked me after lunch what I wanted. I said "Freedom". I asked him what he wanted. He said he wanted to understand "cosmic repulsion" and we discussed Mach's Principle (the determination of local momentum relative to global momentum and the possible rotations of the universe). Now repulsion is embodied in PLT and the carapace theorem of autonomy. They may have claimed not to know much about each other's work but they certainly worked in the same garden. The unification of their approach is still elusive. However Beer (p. 187 Beer, 1994) takes us through the remarkable fact, Figure 7, that joining the vertices of the orthogonal intersection of three golden rectangles produces an icosahedron. After pointing out that "true Borromean circles" (Linstrom & Zetterstrom, 1991) are impossible Dr Peter Cromwell remarks that the boundaries of the golden rectangles are linked in the Borromean manner 27. A sketch proof would let the vertices of the rectangles define the external radii of an elliptical toroid then let the three orthogonal penetrations be such that the internal major axis equals, or is greater than, the external minor axis. This produces the Borromean link shown in Figure 8.
One link must be opened and closed to achieve this (as a Paskian '+' force questing for closure). Thus Pask Concept Triples and Beer Icosahedral Syntegrity have a topological equivalence supported in a void/not void with some minimum number of void/not void crossings to permit closure. A host of new results and applications seems possible. Outside a void/not void support the transformation destroys half the vertices of the golden rectangles and a triangular tiling of a symmetrical Borromean ring produces an octahedron. ("Boiling the Platonic kettle"? - the Platonic icosahedron was associated with water and the octahedron with air.) Most profoundly the orthogonal form of the Borromean 28 stable concept triple can be seen as a cybernetic statement of continuity around a void. A general theory of pain/pleasure regulation or algedonics (Beer, 1972) might help as an example of the evolution, differentiation and specialisation of function in self-organisation. This might help us define life, for example. Stafford Beer (Beer, 1998) suggests this in the urging of our profession to deepen its understanding of the role of hormones. "Molecules of Emotion" as Candace Pert, the pioneer of endorphin receptor chemistry, calls them in her book of that title. Could it be hormones support a permissive interface to the operation of an imperative force as, for example, when flooded with adrenaline in choosing a destination for fight or flight? 29. Pask observed "Meaning is Emotion". He regarded his work as potentially generative of Beer's Viable System Model (Pask, G. 1993, p. 70). Mapping Figure 8 into an orthogonal three dimensional tessellation plane as an automaton with six nearest neighbours will produce an icosahedron.
Context and Perspective can be easily defined as the neighbourhood and your position in it and can be literally interpreted with no special caveats. Derrida's criticism of J. L. Austin (who holds speech acts depend on context), that context changes as utterances are made 30, is routinely handled in IA/CT by teach back differences. One might urge, as Obi-Wan Kenobi in Star Wars, "Feel the Force, Jacques." A change in context means a change in resultant force vector, a test of sensitivity of the critic and his least noticeable difference. Getting used to the idea that there is real force in communication is a challenge to us all. From the early work in CT participants in conversation shared an entailment mesh pruned or unfolded in ways different for each participant with their unique contexts and, indeed, perspectives. This is a powerful first step in hermeneutics and the potential use of the axioms as a toolkit for teaching Criticism; indeed the Critical Theory and Hermeneutics community should be advised of their depth and apparent robustness. The miracle of the communication of a similarity to a constructivist might well yield to the context and perspective differences of the achievement of agreement. A more rigorous definition of context is now possible thanks to the work of Jones et al (Jones, 2002). We are all bound to the context of the Universe (or Multiverse - to comply with Deutsch). In defining our context we are seeking which concepts or parameters are applicable. Jones et al's Gamma test measures the variance of nearest neighbours in hyperspace for continuous smooth models. When this variance is at a minimum we have a definition of the useful context. Outside the context forces produce background noise only. This is an extremely important result for Time Series analysis. It can select which Actors or parameters determine outcomes and are hence part of the local context.
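A minimal one-dimensional version of the Gamma test can be sketched as follows (the details, such as the number of neighbours regressed over, are my choices for illustration, not Jones et al's): the half mean-squared output difference over k-nearest-neighbour pairs, gamma(k), is regressed on the mean-squared input distance, delta(k); the intercept as delta tends to zero estimates the variance of the noise on the output, i.e. the least error any smooth model of the data could attain.

```python
def gamma_test(xs, ys, kmax=10):
    """Estimate output-noise variance from nearest-neighbour statistics."""
    n = len(xs)
    # For each point, all point indices sorted by input distance
    # (index 0 is the point itself, at distance zero).
    order = [sorted(range(n), key=lambda j, i=i: abs(xs[i] - xs[j]))
             for i in range(n)]
    deltas, gammas = [], []
    for k in range(1, kmax + 1):
        d = g = 0.0
        for i in range(n):
            j = order[i][k]              # k-th nearest neighbour of i
            d += (xs[i] - xs[j]) ** 2
            g += 0.5 * (ys[i] - ys[j]) ** 2
        deltas.append(d / n)
        gammas.append(g / n)
    # Least-squares regression gamma = slope * delta + intercept.
    md = sum(deltas) / kmax
    mg = sum(gammas) / kmax
    slope = (sum((d - md) * (g - mg) for d, g in zip(deltas, gammas)) /
             sum((d - md) ** 2 for d in deltas))
    return mg - slope * md               # intercept: the noise-variance estimate
```

On a smooth noise-free series the intercept comes out near zero; adding noise of variance σ² to the outputs raises it to roughly σ², regardless of the (unknown) smooth function underneath, which is what licenses its use as a stopping criterion for model training.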
One application is in establishing the least obtainable error by a neural net after training on a data set. Effectively this is a kind of solution to the Halting Problem for neural nets training on smooth data. It avoids the problem of over-training by advising when training can stop.

Figure 8 The Orthogonal Form of the Borromean link - isometric view.

Unity without uniformity can be interpreted as an expression of identity, the System Five of Beer's VSM, wherein distinct parts cohere persistently. It is thus also an expression of Purpose. Purpose is the convergent or perceived outcome of a process; it may be another word for product. Nevertheless purpose, the product of a difference in a control or metasystem, is important in the emergence of Cybernetics and a key to its interdisciplinary success. The recognition that control is a reflexive process, a conversation or interaction, wherein who is meta cannot be established except where purpose or identity is maintained, is part of the CT/IA paradigm. The purpose of self-organisation to minimise heat loss with redundancy, intelligent thermal lagging as it were, is coherent with Le Chatelier and the state changes of plasmas, gases, liquids and solids as they cool. So we see IA is also a theory of process and its product dual. To underline this in discussion with a Whitehead scholar I suggested the monumental classic "Process and Reality" could be retitled "Product and Reality". This was stoutly, if somewhat vacuously, resisted. Analysis of participant beliefs can be painful using I.A. The blend of facile and profound can surprise and delight but criticism is probably most effective if restricted to utterances that violate the axioms. The question of identity is not dealt with in IA except in establishing the repulsive carapace of the successfully self-organised system. The System Three and Two processes of Beer's viable system are not distinguishable.
Adaptation is implicit throughout the circular process, but identity only comes as the requisite force produces a particular unfolding. Whilst there may be no doppelgangers, an attractive difference is implied (PLT: "...unlike concepts attract"). However when two participants are attracted and meet, a hard carapace separates them. This can only be explained by unfoldment: I may be an expert on force, you may be an expert on mass. We both share a great deal of similar concepts yet we still retain our hard repulsive shells. Reductio ad absurdum, we both share the identical concept of a hard shell, so by a tautology we repel. This does not constitute an adequate exposition of the mechanism. There is a great deal about orthogonal forces producing action under unfoldment in Pask (Pask, 1993) which may need further analysis to resolve this question. Interestingly ATD (Agreement to Disagree) maintains attractive differences. The term distinction is widely known from Spencer-Brown (Spencer-Brown, 1969) and might be included for its power in setting up closure and minimal difference. We know, for example, A ≠ B, but what that means except as denoting a minimal difference is not clear. It is enough, however, to make it a required axiom, though it may more properly be called a value, albeit subject to careful definition. It may be enough to regard it as the boundary of void/not void; indeed the wavelet model of void crossing can be seen as analogous to the Primary Arithmetic. Here the magnitude of the force would define the Primary Algebra. The unification of the memory model in equations of the second degree is obviously required. Distinguishing A, B as the two pumps in the non-linear medium may satisfy the four wave mixing model. The carapace distinguishes Pask Self-Organisation more than anything else.
The repulsive force at the surface of the stresses and strains comprising an entailment mesh is produced as an invariant aspect of unfoldment of a concept which itself is compliant with PLT. Since Pask asserted that this structure could be found in stars, we have to identify how that might be in solids, liquids, gases and plasmas (the earth, water, air and fire of antiquity). We are accustomed to regard hardness as a property of the solid state or condensed matter. However in all cases of the uncondensed states of matter there exist circular convection currents. For existence these require gravity fields and variations in density. The anisotropy of diffusion in these cases is enough to regard the orthogonal force at the boundaries of a convection or circulation cell as the repulsive carapace force. The propensity to knot can be seen in vorticity, the powerful vertical currents at the centre of a tornado (whirlpool, waterspout, cyclone etc). This is the '+' force of Pask self-organisation. In general, shear or curvature in fluid or gas flow induces relative vorticity. Clearly more rigour is required at the Planck scale of the sparticle and M-Brane, but Borromean orbits are postulated for certain nucleons (Bhasin, 1999), e.g. ¹¹Li, where two neutrons are thought to form a "Borromean Halo" 31. A fuller investigation of Borromean linking and their star/delta supports is indicated. A sixty-degree architecture for void/not void might be hypothesised. The close packing of spheres would seem to demand it. Sal Torquato et al (2000) however cast doubts on our ability to do this with rigour. He shows that experiment still cannot confirm Kepler's conjecture (part of Hilbert's 18th problem), now proved by Thomas Hales, outlined in Hales (Hales, 2000) 32, of 74.05% packing (π/3√2), but unconfirmed by experiment, which persistently finds around a 64% packing fraction.
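The two packing fractions quoted can be checked directly; a trivial sketch, in which the 64% figure is simply the experimental value cited in the text, not something computed:

```python
import math

# Kepler's conjecture, proved by Hales: the densest sphere packing
# (face-centred cubic / hexagonal close packing) fills pi/(3*sqrt(2))
# of space, i.e. about 74.05%.
kepler_fraction = math.pi / (3 * math.sqrt(2))

# Randomly poured ("random close packed") spheres persistently stop
# at about 64%, the experimental figure Torquato discusses.
random_close_packing = 0.64

shortfall = kepler_fraction - random_close_packing  # ~0.10
```

The gap between the provable optimum and what pouring actually achieves is the point of Torquato's caution.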
Torquato reminds us how difficult even the simplest seeming scientific endeavour can be. Error inhabits all that we do. The presence of John Conway will be seen in the "many valuable discussions" which lead to this fascinating paper from Princeton that characterises the study of randomness as "still in its infancy". At the macro level, looking down on a pan of boiling water, the hexagonal tiling of Rayleigh-Bénard convection cells (Narasimhan, 1999) 33 can be seen. In noting the 60 degree geometry supporting the adjacency of three cells, we see a stable triple. Acheson (Acheson, 1990 p. 313) offers Drazin and Reid's method of heating a 2mm layer of corn oil in a clean pan sprinkled with cocoa as another means of observing this phenomenon. Olive oil and Bisto powder work just as well. The circular dynamic character of a convection cell only asserts when the buoyancy force (caused by the decrease in density of the hot liquid, plasma or gas) is greater than the force of viscous drag and rate of decrease of temperature. The Rayleigh number characterises this condition of what we can now regard as IA self-organisation in non-condensed matter. The Rayleigh number is a dimensionless quantity defined Ra = gβΔT L³/κν, where β is the coefficient of thermal expansion, ΔT the temperature difference between the hot and cold ends separated by distance L, κ the thermal diffusivity and ν the kinematic viscosity of the fluid. When Ra > 1708 convection will occur and an organisationally closed, informationally open stable actor or concept can exist. This is one precise condition for self-organisation in gases and liquids. Continuity and incompressibility are assumed, but despite this the work done in convection is close to a Carnot Cycle, a definition of entropy, information and the von Foerster redundancy measure of Self-Organisation. Further refinements are necessary to deal with (self-organising) surface tension forces at a free surface.
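As an illustrative sketch, the Ra > 1708 criterion can be evaluated numerically. The fluid properties below are rough textbook values for water near room temperature; they are my assumption, not figures from the paper:

```python
def rayleigh_number(g, beta, dT, L, kappa, nu):
    """Ra = g * beta * dT * L**3 / (kappa * nu)  (dimensionless)."""
    return g * beta * dT * L ** 3 / (kappa * nu)

# Approximate properties of water near 20 C (assumed values):
g = 9.81        # gravitational acceleration, m/s^2
beta = 2.1e-4   # coefficient of thermal expansion, 1/K
kappa = 1.4e-7  # thermal diffusivity, m^2/s
nu = 1.0e-6     # kinematic viscosity, m^2/s

# A 5 mm layer heated 1 K from below:
ra_5mm = rayleigh_number(g, beta, 1.0, 5e-3, kappa, nu)
# A 1 mm layer with the same heating:
ra_1mm = rayleigh_number(g, beta, 1.0, 1e-3, kappa, nu)
```

With these numbers ra_5mm is about 1.8 × 10³, just over the 1708 threshold, so a convection cell (an organisationally closed, informationally open "actor" in the text's terms) can form; ra_1mm is only about 15, and the thinner layer stays conductive. The cubic dependence on L is why the threshold is so sharp.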
The sixty degrees architecture of convection cells also tries to assert itself geomorphologically. The phenomenon of columnar jointing, seen with the optimal but very rapid cooling of lava, can form hexagonally sectioned columns. Possibly the most prominent example is at the Giant's Causeway in County Antrim, Northern Ireland. The equilateral sixty-degree architecture was regarded by Pask as a characteristic of a minimising spatial self-organisation, see also Hales (Hales, 2000). This planar "hexatic" phase, as it can be known, is seen, for example, within the 12 nearest neighbour close packing of spheres, benzene and graphite. With a "pentatic" component local tiling in three dimensions is enabled, producing Beer icosahedral syntegrity, Buckminster Fuller tensegrity and the fullerenes. How might these structures assert themselves in planetary ecology, for example, in carbon, oxygen or sulphur cycles? Where are the forces working in Lovelock's Gaia Principle or plant and animal internal homeostats? The Gibbs Chemical Potential, µ, could be of assistance here, µ = (∂G/∂n) at constant temperature and pressure, where G is the Gibbs free energy, p is the pressure and T temperature in degrees Kelvin 34. When the number of particles, n, is replaced with generalised space co-ordinates, or distance, an actual force results. There could be interesting implications for the chemistry of complex systems, in cell biochemical pathways, for example. The triple structure, though not apparently assertive in the valency of elements, does assert when molecular space is triangulated in the Delaunay-Voronoi manner, here using the delta dual of the concept triple 35. This might suggest a single distinction may customarily be shared by three pairs of different concepts. Note, however, Terrestrial Chemistry is a special case, often at 25 °C and one atmosphere pressure.
These are just two parameters out of many to comply with the Weak Anthropic Principle required for observers with a carbon-based consciousness, John Barrow and Frank Tipler (Barrow and Tipler, 1986). The exochemistry of Harvard's William Klemperer 36 provides an intriguing environment for matter in the Universe. In space, chemistry occurs around 10 K and 10⁻¹⁶ atmospheres. Here hydrogen occurs as H rather than H₂. Such prototypically organised forms as xogen (protonated carbon dioxide, HOCO⁺), dicarbide (C₂), ketene (H₂CCO) and cyanoethynyl (C₃N) are claimed to be identified by NASA. It is time to reconsider a Miller-Urey 37 type experiment at lower temperature and pressure. In terrestrial chemical self-organisation the pressure and temperature parameters imply, in IA terms, a greater density of imperative force equilibria. Thus in space we see the new exotic forms self-selecting and persisting. The terms Organisational Closure and Informational Openness (OCIO) are nowadays widely accepted as defining of a system, e.g. Maturana and Varela (Maturana and Varela, 1980). The IA scheme is coherent with this approach; indeed Pask's (Pask, 1993) Chapter 7 "Interknitting" is largely given over to an analysis of this approach, illustrated by analysis of the Rituals of the Tsembaga. OCIO is supported but not required given a suitable supply of Actors who self-organise. One suspects that Pask is making a ritual nod at the profession and discharging his duty, considering his findings to be more deeply applicable. Similarly Pask's acceptance of Second Order Cybernetics is trivial, achieved simply by labelling one of his participants observer in his standard IA or CT scheme 38. The IA axioms are widely applicable.
The applicability of an early version of Pask's work to psychotherapy, for example, of both individuals and organisations, is shown by Barnes (Barnes, 1994). The IA scheme is designed for concurrent application so it should shed light on concurrent computation. Various hardware configurations continue to be considered. No confusion with quantum computing should be permitted, where current proposals limit concurrence to the quantum registers in an emulation of the conventional serial digital computer. IA was in part a mission to Artificial Intelligence (AI) research to map a physics of self-organisation into common sense. It may yet come to the rescue if AI is to become more than attempts at advanced von Neumann machine programming. Sceptics may say "Why speak of 'Respect' when you mean observable, why speak of 'Responsibility' when you mean interactive response or 'Faith' when you mean tenacity or duration of observation?" No great damage is done, we hope, but applicability is greatly facilitated. In particular the putting of the so-called "soft sciences", psychology, sociology, politics etc. and law, on a sounder basis. One capable of producing sharp values (as distinct from Zadeh fuzzy values) of a precision equal to any obtainable in the so called hard sciences of Physics or Chemistry. But remember the error bars on forecasting the boiling time of the kettle, of the age of the earth and, indeed, of the close packing of spheres. We recognise the need to make assumptions and restrictions on idealisations equivalent to those made as the Euclidean becomes the Newtonian, thence the relativistic, quantum and non-linear in science. The violence that poverty, scarcity and ignorance or lack of requisite difference breeds should be easily demonstrable from the Ekistic Density effect (see below). The means by which housing might be designed to encourage permissive application of self-organising force might be made formal.
IA can offer a deeper understanding of the intractability of "schizophrenia" to Double Bind Theory, the Public Health implications of hypocrisy, its implications for secrecy about sexual behaviour, for example, and the multiple risks of irresponsible unaccountability. A Bourbakian approach to axioms for Cybernetics may begin to be feasible. For IA to make a contribution, the prepositional anti-mesh linked by an analogy mesh to a characterising and ontology mesh of the Actor (Table 3, Pask, 1993) needs a deal more work! There seem to be immediate applications in putting Criticism on a more rigorous basis, indeed making Criticism into a science. The hermeneutic requirements of Post Modernism seem fully satisfied. Others may care to challenge that. The applications of these axioms to Beer's Principles of Organisation (Beer 1979, 1985) and Pask's own "Properties of Self-Organised Systems and their consequences for a company" (Green, 2001, footnote 11) may also form an interesting objective for future research. De Zeeuw's observation (de Zeeuw 2001) that IA makes it possible to search for Theory suggests Pask's goal of a dynamic proto-theory may have been achieved. As de Zeeuw so wittily observed at CybCon 2002, IA enables one to say things like "Love makes the World go round." The axioms provide a route to a robust seeming proof. One wonders where this might lead us. It seems apt to identify constraints on the applicability of IA both to practical questions and to the analysis of others' approaches to what is seen as the fundamental in Nature.

Applying IA: The Groningen experiment

In 1995 Orit Kaufman, a student of John Frazer at the Architecture Association where Pask was Senior Tutor, wanted to apply IA to her contributory study of Evolutionary Ekistics in Groningen. We unexpectedly showed relatively weaker IA attraction forces lead to denser housing settlements. Subsequently the result was applied in cardiology and planetary accretion.
It turned out to be a classic piece of Cybernetics. The book of the Groningen Experiment is to be dedicated to the memory of Gordon Pask (Frazer 2001). Generation in general, pace the subtleties and specialisations of meiotic and mitotic cellular reproduction, is primarily a sticky, attractive, aggregative process. This is elegantly captured in the Diffusion Limited Aggregation (DLA) model of Tom Witten and Len Sander (Witten & Sander, 1981). Sticky random walkers are released onto a fixed attractive seed from a boundary on a tessellation plane. These forces are local only and have a range of one cell. This clearly models self-organising growth. Ramified structures are seen as in moulds, brains, the nervous and circulatory systems of animals, the xylem vessels of plants, roads, railways, rivers and streams. These can be seen as aggregations of random Y shaped forms embodied in the "star" dual of the stable concept triple. With Kaufman, serial, digital kinematic DLA simulation programs 39 were written and we constructed copper/copper sulphate electrochemical cells for concurrent kinetic simulation. From the cell we produced growth of dendritic connected settlements of copper actors at point and plate cathodes, illustrated in Kaufman (Kaufman, 1996) and Frazer (Frazer, 2001). For more elaborate electrochemical DLA techniques see Harrison (Harrison, 1995). The electrochemical simulation uses a plate sacrificial copper anode (cut from copper pipe) at up to 30 volts in around M/4 aqueous copper sulphate solution. Electrodes were separated by 10-15 cm in a solution about 0.5 cm deep. A river or canal bank was simulated by a plate cathode; a cross roads, well or spring by a point cathode. Kaufman's electron micrographs (p. 138, Kaufman 1996) confirmed self-similar aggregation.
Our interesting discovery came with the fuzzification of the Witten and Sander algorithm. We introduced a repulsive force parameter. In the digital simulation a colliding random walker or participant is only permitted to stick if a random number chosen from the half-open interval [0, 1) falls below a probability of sticking threshold, fixed for a given experimental run. Further, we identify the probability of sticking as the probability of attraction and we define an identity for the simulation:

P(Attraction) = 1 - P(Repulsion)

Figure 9 shows the plot obtained with the modified Witten and Sander algorithm on a serial (kinematic) machine with the star dual, plan view map of participants initially attracted to a single point roughly at the centre of each mesh. In the kinetic analogue electrochemical model, Figure 10, counter ions make a more complex model of operating force, but once the electrochemical potential threshold is exceeded, the lower the voltage the less sticky or attractive migrating ions are. In both kinetic and kinematic simulation we found smaller, denser structures formed with less attractiveness (or more repulsive forces, or less amity). At this point Kaufman discovered Batty and Longley (Batty & Longley, 1994).

Figure 9: 7000 participants settled with no repulsion, P(stick) = 1.0, and 7000 participants with 90% repulsion, P(stick) = 0.1. The participant density is more than 4 times greater.

Figure 10: Copper DLA grown in 35 minutes at 30 volts in a 2.5 amp current-limited, 5 cm diameter, 1 cm deep cell with circular anode and central point cathode; at right, a five times enlarged view grown over five hours at around 3 volts.

Their work, though primarily with Dielectric Breakdown Models, used a potential parameter corresponding to our kinetic voltage or kinematic stickiness and produced population density differences similar to those seen with our simple DLAs. Others, planning experts, had seen that this approach did indeed model the growth and evolution of one dwelling to a hamlet, thence to a village, a town and a city. In settlement theory or Ekistics terms, the more repulsive the short range force between settling actors, the more similar are the actors by PLT, and the more dense the resulting settlement. The result seemed entirely counter intuitive but confirmation by social observation is trivial. The financially or conceptually poor tend to live in higher density housing than the educated rich, whether urban or rural. "Rummaging" or shallow tunnelling behaviour explains the result: repeated random walking attempts to find sticky attraction in a local neighbourhood. The conceptually rich and variegated arrive and settle at once, being most attractive and sticky. As with all randomly choosing participants, some will walk back beyond the source radius and then be excluded and Ended with their viability expired. Whilst walking persists, actors demonstrate Faith in achieving their Purpose. The Eternal nature of the settled actors should also be noted. This proved to be a delightful and satisfying result, elegantly demonstrated both in a serial machine (kinematic) and in concurrent (kinetic) analogue simulation. P(stick) probability is interpreted as a Similarity/Difference metric from PLT. There is also an interpretation in terms of Amity where P(stick) = Amity. The void/not void is the unoccupied/occupied cell on, in this case, the orthogonal tessellation (the random walking participant has four choices of direction). A hexagonal or hexatic tessellation (the walker has six choices) produces no distinctive difference yet seen in this context.
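The fuzzified Witten-Sander procedure described above can be sketched as follows. This is an illustrative reconstruction, not the Kaufman programs: the lattice conventions, release radius and kill radius are my assumptions, and p_stick must be greater than zero.

```python
import math
import random

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))   # orthogonal tessellation

def dla(n_particles, p_stick, rng_seed=1):
    """Fuzzified Witten-Sander DLA on a square lattice.

    Walkers are released on a circle just outside the aggregate and
    take 4-neighbour random steps.  On touching the cluster a walker
    sticks only if a random number from [0, 1) falls below
    p_stick = P(Attraction) = 1 - P(Repulsion); otherwise it keeps
    "rummaging" for a place to stick.  Walkers straying far beyond
    the release radius are Ended, their viability expired.
    """
    rng = random.Random(rng_seed)
    occupied = {(0, 0)}                      # the fixed attractive seed
    r_max = 0                                # current cluster radius
    while len(occupied) < n_particles + 1:
        ang = rng.uniform(0.0, 2.0 * math.pi)
        x = int((r_max + 5) * math.cos(ang))
        y = int((r_max + 5) * math.sin(ang))
        while True:
            if x * x + y * y > (r_max + 20) ** 2:
                break                        # walker Ended
            if any((x + dx, y + dy) in occupied for dx, dy in STEPS):
                if rng.random() < p_stick:   # threshold test
                    occupied.add((x, y))
                    r_max = max(r_max, int(math.hypot(x, y)) + 1)
                    break
            dx, dy = rng.choice(STEPS)
            if (x + dx, y + dy) not in occupied:
                x, y = x + dx, y + dy        # never walk into the cluster
    return occupied

def mean_radius(cluster):
    """Compactness measure: mean distance of settled cells from the seed."""
    return sum(math.hypot(x, y) for x, y in cluster) / len(cluster)
```

Growing, say, 150 participants at P(stick) = 1.0 and again at P(stick) = 0.1 shows the second cluster is markedly more compact (smaller mean radius for the same population), reproducing in miniature the denser-settlement result reported for 7000 participants.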
A more complex context can be created for the growing community of actors or participants embodying concepts when each cell is assigned an attracting or repelling force of some appropriate magnitude rather than void or not/void only. The other tool, corresponding to Batty and Longley's spatial parameter, restricts the release of walking participants to particular locations: from a road, path or city gate, for example. This restricts the perspective of potential Actor settlers. The terrain in the Groningen Topography was modelled in this way (p. 110, Kaufman, 1996). Later the findings of this model were applied to congestive heart disease and, in particular, anti-coagulation therapy. This therapy aims to reduce the stickiness of arteriosclerotic plaques, which are the aggregates in this case. The model suggested that while the incidence of heart attacks may be reduced, the incidence of fatal cerebrocardiac events might be increased. This is because of the reduced ease of digestion of the denser displaced plaque aggregate that is thought to be the pathological agent in stroke or thrombosis. A recent large Swedish study (Odén & Fahlén 2002) supports this hypothesis by concluding that lower doses of anti-coagulant reduce excess mortality. Sufferers might consider Pauling Therapy 40, which seems to exhibit only benign side effects. This was the first experimental work undertaken demonstrating aspects of attraction and repulsion in self-organisation. Pask, though unwell, took a lively interest. Very early on he worked in electrochemistry to demonstrate concept growth in which a self-repairing aspect to dendrites was demonstrated, Pask (1959). Reflecting today on planetary evolution, we see that a weakly attractive stickiness force is enough to make the dense rock bodies of planets by aggregation of principally siliceous dusts.
Barrow and Tipler (Barrow & Tipler, 1986) p. 288 state: "The size of bodies like stars, planets and even people are neither random nor the result of any progressive selection process, but simply the manifestations of the different strengths of the various forces of Nature. They are the examples of possible equilibrium states between competing forces of attraction and repulsion." This is most helpful to our central theses, but IA suggests that weak force selection or permissive Ap might operate when stronger imperative forces are in equilibrium. The fractionation processes that produce variations in isotopic ratios and atomic abundances in stars and planets, for example, can be seen as naturally selective without any loss of understanding. This mechanism needs more work, but whilst concepts are internal and endogenous forces operate, IA can with good reason speak of perspective and context bound, respectable, responsible selection. Relatively weak forces are doing these primary organisational tasks. For Klemperer, too, Astrochemistry is dominated by weak van der Waals forces rather than the stronger forces of Pauling covalency. Indeed in outer space, without interaction from a dust support, hydrogen atoms will be blown apart by the energy released with the formation of transient H₂. Here IA suggests imperative "background" force equilibria are required to select conditions suitable for the stable formation of ordinary diatomic hydrogen. At the time of the Groningen work DLA structures were thought to be scale invariant, but recent work by Benoit Mandelbrot (Mandelbrot et al, 2002) has established possible non-linear scale invariance. A shift in fractal dimension from 1.67 to 1.71 has been demonstrated for aggregates of 10⁵ and 10⁸ particles. This may imply, in the IA model we are developing, that absolutely different geometries exist for sticky non-voids and larger aggregates.
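Fractal dimensions of the kind quoted are commonly estimated by box counting, the method named later in the text for the copper aggregates. A minimal sketch, where the grid sizes and the two sanity-check point sets are my own choices rather than anything from the Mandelbrot or Kaufman work:

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a set of lattice points.

    Counts N(s), the number of s-by-s boxes containing at least one
    point, then fits the slope of log N(s) against log(1/s).
    """
    logs, logn = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}
        logs.append(math.log(1.0 / s))
        logn.append(math.log(len(boxes)))
    # least-squares slope of log N against log(1/s)
    ms, mn = sum(logs) / len(logs), sum(logn) / len(logn)
    num = sum((a - ms) * (b - mn) for a, b in zip(logs, logn))
    den = sum((a - ms) ** 2 for a in logs)
    return num / den

# Sanity checks: a straight line has dimension 1, a filled square 2.
line = [(x, 0) for x in range(256)]
square = [(x, y) for x in range(64) for y in range(64)]
```

Applied to the occupied cells of a DLA cluster, the same routine gives a value between these extremes, which is what makes the reported shift from 1.67 to 1.71 measurable at all.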
Differences arise, for example, when sticky, attractive particles make stars, planets, meteorites, dusts, bubbles 41, foams, sols, aerosols, plants, animals, indeed, the entailment and actor meshes of CT and IA. The Mandelbrot team may be making considerable demands on serial kinematic digital computation to measure the fractal dimension for larger structures. A "gedanken" two thousand processor parallel kinematic machine with one 10 GHz processor for each random walking step would require of order ten million years run time for a 10²² aggregate DLA simulation. The kinetic electrochemical aggregates weighed about 1 to 0.1 gram, implying ~5×10²¹ participants and a run time of 20 minutes to a few hours at lower voltages. Box-counting determination of the fractal dimension of the copper aggregates would require electron and surface tunnelling microscopy, but the procedure makes large structures tractable. For the simulation of an earth sized planet, aggregation of some 10⁵⁰ atoms is involved, providing yet more motivation for innovation.

Concluding Comments

Such is Pask's IA legacy to us. He discovered a mechanics for self-organisation and a cosmological epistemology. He invented a new approach to the application of science. Unexpectedly his theory showed us the repulsive force of self-organisation could explain why the density of housing becomes high for the information poor, how anti-coagulation therapy can kill and why weak forces are sufficient to build dense planets. We have the tools to understand more clearly the evolution of physical and chemical form and function, and deep axioms for a more natural human interaction.

Notes

1 Hydro Aluminium, part of Norsk Hydro, Norway's largest company, for Dr Bjorn-Erik Dahlberg, Senior Vice President Human Resources and Environment, Health and Safety. The theme was Organisations as Living Organisms and Ecologically Sustainable Innovation.
2 An analysis of Newton's Laws of dynamics from the IA cybernetic perspective might suggest the first law is a law of homeostasis, the second a law of stable concept triples, the third a law of negative feedback.
3 My thanks to Dr Bernard Scott for pointing this out.
4 We will use both star and delta representations of concept triples as appropriate. A further but as yet unexplored route from Computational Geometry might be to allow the resonance concept triple nodes to define an "empty circle", a unique closed process defined by the stresses and consequent strain separations of all three concept nodes.
5 There is a great deal about four wave mixing, phase conjugation, non-linear optics etc. on the web and elsewhere. Ib Bang has a helpful introduction to this remarkable phenomenon. Reflection at a spherical mirror will produce "time reversal" but without the potential for amplification implicit in wave mixing. There is also a PowerPoint presentation by Dr David McClelland of Australian National University.
6 Both fractal and "prismatic" recursive packing schemes for Borromean links are discussed by Slavik Jablan in "Are Borromean rings so rare?" in the e-Journal Visual Mathematics Vol 2 No 4 2000.
7 Applet with Moore and Simo's choreographies.
8 - Arnold - MoserTheorem.html and see also note 20.
9
10 .com/physics/TippeTop.html
11
12
13 Hear this in Acheson's own words from a radio interview in February 1998 by Thames Valley FM at tview.html
14 Health and Safety Executive Report: "Railtrack's decision-taking can only be strengthened by shifting the balance back in the direction of professional engineers who understand how to maintain and operate the system." Michael Fabricant MP.
15 "Controllability and Observability: Tools for Kalman Filter Design" Southall et al.
16 Note parallel computation is precisely equivalent to serial. Prof. Aaron Sloman suggests a proof by merging processes and increasing clock speed (discussed in personal communication).
17 The ergodic hypothesis may be stated thus: the distribution of states in an n body system at a given time is the same as the typical distribution in member bodies over their history. It is an assumption of kinetic smooth (non-kinematic, non-digital, no begins and ends or steep gradients implied) serial/parallel equivalence to concurrence. Its applicability continues to be discussed.
18 Unless, as Pask pointed out in the mid seventies in a Brunel seminar, they were aware that strictly Shannon needed synchronisation or redundant tricks, e.g. "flag waving" (as found in computing device interfaces), to correctly frame and hence decode symbol streams. This is key to understanding how clocking and the interrupt render the serial/parallel machine crucially insensitive to fields. Compare this with the resonant response of the prismatic tensegrity element.
19
20 The space around points of homeostasis or equilibrium is bounded by singularities whose encoding leads to infinite varieties. It is invariably the interaction of these singularities that is the source of interesting behaviour. Freeman Dyson (personal communication) denied singularities exist. Pask was sympathetic to this in adopting the variety measure (Green, 2001, note 11) in his axioms for self-organising companies. On p. 63 (Pask, 1993) the distinction singularity in an analogy is seen as a potential generator of requisite variety. Consider taking moments around a pivot at equilibrium. A ±ε can determine a clockwise or anti-clockwise moment, the "Tipping-Point". The problem lies in a finite bounding for the coding of the ε that may become infinitesimal in the digital machine. Perhaps part of the Art of Cybernetics is devising approaches that overcome this restriction to our methods. In the field concurrent computer, magnitudes are represented by field intensities and an ε will scale with all other parameters, permitting singularities to exist without pathology.
21 Nikolay Kosinov (Kosinov, 2001) in a virtuoso paper reports 12 methods of calculating the Planck length.
22 Newcomers to this field may find Barbara Burke Hubbard's (Hubbard, 1996) "The World According to Wavelets" a helpful overview of the new wavelet approach to Signal Processing, incorporating an approach to Heisenberg Uncertainty and Quantum Mechanics. The remarks on p. 114 about the KAM Torus theorem and Weierstrass' faith in Dirichlet's n body series converging, not shared by Birkhoff and Poincaré, are a helpful introduction to turn of the last century n body research.
23 My thanks to John Adams, ex-Pask Associates, for an extraordinarily powerful Pask aphorism: a proof of causality depends on begins and ends. The converse is also true: causality cannot be demonstrated when interaction is eternal. We are distinguishing finite and infinite communication. In mathematics counting is not seen as requiring interactive communication. There is great potential here for establishing a cybernetics producing a more rigorous treatment of nature than mathematics as we know it today.
24 But a case can be made for, e.g., a rock feeling pain. Consider an impact in the region of the elastic limit. The Thomson EMF produced by the impact would correspond to the pain signal. We might postulate a distributed System Four (Development) function to repair the crack by concretion if water is present. The rock's thermostasis will define its ability to resist icing that can promote fracture.
25 ww.literature.org/authors/darwin-charles/the-origin-of-species/index.html
26 A Brief History of Feedback Control, from Chapter 1: Introduction to Modern Control Theory, in: F. L. Lewis, Applied Optimal Control and Estimation, Prentice-Hall, 1992. heorem.net/theorem/lewis1.html
27
28 My thanks to Ben Laurie for introducing me to ideal, tight knots, which are made with minimal string length.
At - 05.001 - sandt2.png he shows the stadium, "waisted" Borromean link produced via simulated annealing. Could this define a form for Pask's void?
29 At - cohen.html Simon Baron-Cohen in his review in Nature 410, 520 (2001) of Dylan Evans' monograph "Emotion" OUP 2001 ISBN 019285453x claims to have classified "1000 discrete emotions into 23 mutually exclusive categories". A challenge to the cybernetics of emotion. Candace Pert's "Molecules of Emotion" was published by Scribner in 1997. Estimates vary but some claim more than 300 neurotransmitter substances and some 50 hormones in humans.
30 Discussed in Paul Cilliers' "Complexity and Postmodernism" p. 55 et seq. Routledge 1998 ISBN 0415152879
31 in/pramana/sept99/i1.pdf
32
33
34 Philip Candela prefers µ = (∂U/∂n), where U is the internal energy, which he argues is more fundamental. du/pages/facilities/lmdr/chmpot.htm This is not usual; compare a standard approach in e.g. Atkins "Physical Chemistry" OUP 5th Edition 1994 page 170.
35 5.html
36 Primarily a spectroscopist, his 1997 Lecture to the Royal Institution was eye opening to the real universe of Chemistry and self-organising molecular phenomena. Note spectroscopic analysis of human interaction is potentially deeply insightful, for example, analysing Pask's versatility in learning.
37 Miller-Urey produced amino acids from a hypothetical primitive reducing, oxygen free terrestrial atmosphere.
38 In his monograph "Rheostasis" Morovsky discusses the setting of fixed points in homeostasis in general. Thus the student or experimenter in the lab with his rheostat becomes a second order cybernetician.
39 For downloadable demonstration DLA programs with variable stickiness parameter for PC compatibles.
40
41 Joseph Plateau (1801-1883) found rules for soap film bubbles requiring three films to meet to form an edge, always at an angle of 120°, and with four films meeting at the tetrahedral angle (109° 28′) to form a point.
This three-dimensional tiling may coexist with Pask's recursive concept within a concept. Formal energy minimisation was not proved until 1973 by Jean Taylor, in Annals of Mathematics (2) 103 (1976), no. 3, 489-539. There may be a basis here for a real-time field concurrent solution to the Travelling Salesman Problem.

References

Acheson, D. J. (1997) "From Calculus to Chaos" Oxford University Press, Oxford, ISBN 0-19-850257-7
Acheson, D. and Mullin, T. (1993) Nature Vol 366 pp. 215-216 (1993)
Acheson, D. J. (1990) "Elementary Fluid Dynamics" Clarendon Press, Oxford, ISBN 0-19-859679-0
Ashby, W. R. (1952) "Design for a Brain" Chapman Hall, London
van Baal, P. and Wipf, A. (2001) "Classical Gauge Vacua as Knots" Physics Letters Vol B515 (2001) pp. 181-184 -th/pdf/0105/010514 1.pdf
Bak, P. (1997) "How Nature Works: The Science of Self-Organised Criticality" Oxford University Press, Oxford, ISBN 038798738
Barrow, J. D. and Tipler, F. J. (1986) "The Anthropic Cosmological Principle" Oxford University Press, Oxford, ISBN 0198519494
Barnes, G. (1994) "Justice Love and Wisdom" Inform Lab 1994, ISBN 9531760179
Bhasin, V. S. (1999) "A three body approach to study the structural properties of 2-n halo nuclei and the search for Efimov states" Pramana Journal of Physics, Indian Academy of Sciences, Vol 53, No 3, September 1999, pp. 567-575
Bateson, G. (1980) "Mind and Nature - A Necessary Unity" Bantam Books, ISBN 1572734345
Bateson, G. (1976) "For God's Sake, Margaret" CoEvolutionary Quarterly, June 1976, Issue no. 10, pp. 32-44
Batty, M. and Longley, P. A. (1994) "Fractal Cities: A Geometry of Form and Function" Academic Press, London
Beer, S. (1998) "On the Nature of Models: Let us Now Praise Famous Men and Women, from Warren McCulloch to Candace Pert" Informing Science Vol 2 No 3 1998 -82.pdf
Beer, S. (1994) "Beyond Dispute" John Wiley, ISBN 0-471-94451-3
Beer, S. (1985) "Diagnosing the System for Organisations" John Wiley, Chichester, ISBN 0-471-90675-1
Beer, S. (1979) "Heart of Enterprise" Wiley, Chichester, ISBN 0471275999
Beer, S. (1972) "Brain of the Firm" 2nd Edition 1981, Wiley, Chichester, ISBN 047194839X
Bounais, M. (2002) "Universe as a self-observable, self-ethical, life-embedding mathematical system" Kybernetes 2002, Vol 31, No 9/10, pp. 1236-1248
Chattopadhyay, U. and Nath, P. (2001) "Upper Limits on Sparticle Masses from g-2 and the Possibility for Discovery of Supersymmetry at Colliders and in Dark Matter Searches" Physical Review Letters 86 (2001) 5854-5857 -ph/pdf/0102/0102157.pdf
Chaitin, G. J. (1990) "Algorithmic Information Theory" Cambridge University Press, Cambridge
Conant, R. C. and Ashby, W. R. (1970) "Every Good Regulator of a System must be a Model of that System" International Journal of Systems Vol 1 No 2 pp. 89-97
Dawkins, R. (1976) "The Selfish Gene" Second Edition 1989, Oxford University Press, Oxford, ISBN 0192860925
Deutsch, D. (1997) "The Fabric of Reality" Alan Lane, The Penguin Press, ISBN 0713990619
Dennett, D. C. (2003) "Freedom Evolves" Allen Lane, London, ISBN 0713993991
Foerster von, H. (1995) "Ethics and Second-Order Cybernetics", Stanford Humanities Review Vol 4 No 2 -2/text/foerster.html
Foerster von, H. (1981) "Observing Systems", Varela, Francisco ed., InterSystems, Seaside, CA
Frazer, J. H. (2001) "The cybernetics of architecture: a tribute to the contribution of Gordon Pask" Kybernetes Vol 30 No 5/6 pp. 641-651
Green, N. (2001) "On Gordon Pask" Kybernetes Vol 30 No 5/6 2001 pp. 673-682
Hales, T. C. (2000) "Cannonballs and Honeycombs" Notices of the American Mathematical Society Vol 47 No 4 pp. 440-449 -hales.pdf
Harrison, A. (1995) "Fractals in Chemistry" Oxford University Press, Oxford, ISBN 019855767
Hubbard, B. B. (1998) "The World According to Wavelets" AK Peters, ISBN 1568810725
Jones, A. J. (2002) "A proof of the Gamma test" Proceedings of the Royal Society Series A Vol 458 pp. 2759-2799. ac.uk/user/Antonia.J.Jones/GammaArchive/IndexPage.htm
Kalman, R. E. and Bucy, R. E. (1961) "New Results in Linear Filtering and Prediction Theory" American Society of Mechanical Engineers Journal Vol 82 pp. 193-196
Kaufman, O. (1996) "The Projects Review 1995-1996" p. 110 & p. 138, Architectural Association 1996, ISBN 1870890663
Kolmogorov, A. N. (1956) "On the Shannon Theory of Information Transmission in the Case of Continuous Signals" Institute of Electrical and Electronic Engineers Transactions on Information Theory, Vol IT-2 pp. 102-108, Sept. 1956
Kosinov, N. (2001) "The New Formulas to Calculate Planck Units" Анатолия, Vol 6 pp. 176-179
Kübler-Ross, E. (1969) "On Death and Dying" Touchstone, 1997, ISBN 0684842238
Lindstrom, B. and Zetterstrom, H.-O. (1991) "Borromean circles are impossible", American Mathematical Monthly 98 No 4 pp. 340-341
Long, J.C., Chan, H.W., Churnside, A.B., Gulbis, E.A., Varney, M.C.M. and Price, J.C. (2003) "Upper limits to submillimeter-range forces from extra space-time dimensions" Nature Vol 421 pp. 922-925
Mandelbrot, B., Kohl, B. and Aharony, A. (2002) "Angular Gaps in Radial Diffusion-Limited Aggregation: Two Fractal Dimensions and Nontransient Deviations from Linear Self-Similarity" Physical Review Letters 88, 055501 (2002)
Maturana, H. and Varela, F. (1980) "Autopoiesis and Cognition" Kluwer, Cologne, ISBN 9027710155
Maxwell, J. C. (1868) "On Governors" Proceedings of the Royal Society Vol 16, pp. 270-283
Mrosovsky, N. (1990) "Rheostasis" Oxford University Press, New York, ISBN 0195061845
Narasimhan, A. (1999) "Rayleigh-Benard Convection: Physics of a widespread phenomenon," Resonance, Vol 4 No 6 pp. 82-90
Odén, A. and Fahlén, M. (2002) "Oral anticoagulation and risk of death: a medical record linkage study" British Medical Journal Vol 325 pp. 1073-1075 (9 November)
Pask, G. (1959) "Physical Analogues to the Growth of a Concept" in "Mechanisation of Thought Processes" ed. Uttley, A., London, National Physical Laboratory Symposium 1958, HMSO 1959, Vol 2 pp. 877-922
Pask, G. (1975) "Conversation, Cognition and Learning," Elsevier, London
Pask, G. (1976) "Conversation Theory: Applications in Education and Epistemology" Elsevier, London
Pask, G. (1990) "Complementarity in the Theory of Conversations" in Nature, Cognition and System, Carvalho, M. (Ed.), Proceedings of Baden Baden Symposium 1990, Reichsuniversitiet, Groningen
Pask, G. (1993) "Interactions of Actors, Theory and some Applications" Gordon Pask and Gerard de Zeeuw, Volume 1 "Outline and Overview" (Draft, last edit April 1993, still unpublished)
Pask, G. (1996) "Heinz von Foerster's Self-Organisation, the Progenitor of Conversation and Interaction Theories" Gordon Pask, Systems Research Vol 13, No 3, 1996, pp. 349-362
Pineau, P.-O. (2002) "An Ethical Behaviour Interpretation of Optimal Control", in Optimal Control and Differential Games - Essays In Honor Of Steffen Jørgensen, Zaccour, G. (ed.), Boston: Kluwer Academic Publishers, 2002 .pdf
Rawls, J. (1973) "A Theory of Justice" Oxford University Press, Oxford, ISBN 0674000781
Rescher, N. (1966) "Distributive Justice" Bobbs-Merrill, New York, 1966
Shih, C. (1987) "Theory of Four Wave Simulated Brillouin Scattering" Society for Optical Engineering (SPIE) Vol 739 pp. 26-31
Simó, C. (~2000) "New families of Solutions in N-Body Problems" Institut de Mecanique Celeste, Observatoire de Paris. Pre-print
Spencer-Brown, George (1969) "Laws of Form" 3rd Edition 1979, Dutton, New York, ISBN 0-525-47544-3
Stewart, D. J. (2000) "The ternary analysis of work and working organisations" Kybernetes Vol 28 No 5/6 pp. 689-701
Torquato, S., Truskett, T. M. and Debenedetti, P. G. (2000) "Is Random Close Packing of Spheres Well Defined?" Physical Review Letters Vol 84 No 10 pp. 2064-2067 -176.pdf
Wallace, A. (1858) "On the Tendency of Varieties to depart indefinitely from the Original Type" Linnean Society 1858 u/PBIO/darwin/dw05.html
Wiener, N. (1961) "Cybernetics: or Control and Communication in the Animal and the Machine" 2nd edition, M.I.T. Press, Cambridge, Massachusetts, ISBN 026273009
Witten, T. and Sander, L. (1981) "Diffusion-limited aggregation, a kinetic critical phenomenon" Physical Review Letters, Vol 47, 1981, pp. 1400-1403
de Zeeuw, G. (2001) "Interaction of Actors Theory" Kybernetes Vol 30, No 7/8, 2001, pp. 971-983

Tippy or Tippe Top, some references to studies: Weisstein, Eric d.wolfram.com/physics/TippeTop.html

A downloadable Threebody simulator, THREEBP.BAS at see also Acheson's Vortex Research on curious self-organising vortex "leapfrogging" and his remarks on Kelvin's early knot theoretic "vortex atom" theory in "Elementary Fluid Dynamics" OUP 1990 pp. 168-172, including a Borromean Ring from Lord Kelvin (1869) Transactions Royal Society of Edinburgh 25, pp. 217-260, reprinted in "Knots and Applications" ed. Louis Kauffman, World Scientific 1995, ISBN 981-02-2030-8

Choreographies demonstration applet of the recent work of Simo, Chenciner and Montgomery

Further discussion of Pask and his IA axioms ongoing at k

Acknowledgments

Figure 3 from Rob Scharein. Department of Chemistry, New York University, New York, NY 10003, USA: Professor Seeman's lab claims synthesis via circular DNA. Many Reidemeister moves are required on the many crossings of the product to establish planar isotopy with the Borromean knot.

Figures 4 and 5 redrawn from Captain Ian P. Stern, MSc Thesis 1999, Department of Mechanical Engineering, University of Miami. Stern analyses prisms with up to six struts (90° twist) for "self deployment".

Figure 7 is redrawn from Dr Peter Cromwell, Department of Pure Mathematics, University of Liverpool, P.O. Box 147, Liverpool L69 3BX, England. It also appears in his book "Polyhedra" CUP 1999, ISBN 0-521-66405-5.

Figure 8 is from Paul Bourke.
Many thanks to Paul Bourke, who is a visualisation researcher at Swinburne University of Technology, Melbourne, Australia. Thanks to Dr Ranulph Glanville and Dr Bernard Scott for their help and encouragement. Dr Alex Andrew, Dr D.J. Stewart and Dr Christian Haan have made most helpful criticisms and suggestions. Mr John Rochey-Adams contributed the very interesting and significant Pask aphorism on Causality.

Kybernetes: The International Journal of Systems & Cybernetics, Vol. 33 No. 9/10, 2004, © Emerald Group Publishing Limited, 0368-492X
https://www.techylib.com/en/view/bistredingdong/axioms_from_interactions_of_actors_theory_nick_green
Hello everybody and Happy New Year!!! I'm writing to you because I'm feeling really confused!! It's my first time using Maestro boards and servos, and I would appreciate it if somebody can help me! Below, as you can see, I have the program which moves two servos if you push the button. I would like to add two more servos to my program: the first one I would like to move continuously in one direction, and the second one I would like to move 45 degrees forward and backward continuously. My problem is that I can't do all the movements together; I can only do them separately.

goto main_loop # Run the main loop when the script starts (see below).

# This subroutine returns 1 if the button is pressed, 0 otherwise.
# To convert the input value (0-1023) to a digital value (0 or 1) representing
# the state of the button, we make a comparison to an arbitrary threshold (500).
# This subroutine puts a logical value of 1 or a 0 on the stack, depending
# on whether the button is pressed or not.
sub button
  0 get_position
  500 less_than
  return

# This subroutine uses the BUTTON subroutine above to wait for a button press,
# including a small delay to eliminate noise or bounces on the input.
sub wait_for_button_press
  wait_for_button_open_10ms
  wait_for_button_closed_10ms
  return

# Wait for the button to be NOT pressed for at least 10 ms.
sub wait_for_button_open_10ms
  get_ms # put the current time on the stack
  begin
    # reset the time on the stack if it is pressed
    button
    if
      drop get_ms
    else
      get_ms over minus 10 greater_than
      if
        drop return
      endif
    endif
  repeat

# Wait for the button to be pressed for at least 10 ms.
sub wait_for_button_closed_10ms
  get_ms
  begin
    # reset the time on the stack if it is not pressed
    button
    if
      get_ms over minus 10 greater_than
      if
        drop return
      endif
    else
      drop get_ms
    endif
  repeat

# An example of how to use wait_for_button_press is shown below:
# Uses WAIT_FOR_BUTTON_PRESS to allow a user to step through
# a sequence of positions on servo 1.
main_loop:
begin
  wait_for_button_press
  8000 1 servo
  8000 2 servo
  3000 delay
  6000 1 servo
  6000 2 servo
  4000 1 servo
  4000 2 servo
  3000 delay
  6000 1 servo
  6000 2 servo
repeat

sub frame
  wait_for_button_press
  1 servo
  return

I'm waiting for news from you! Kind Regards, Panos
https://forum.pololu.com/t/connect-2-servos-more-in-the-exist-program/18588
/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/30/... etc

#include "SPI.h"
#include "Adafruit_WS2801.h"

int dataPin = 3;  // Yellow wire on Adafruit Pixels
int clockPin = 5; // Green wire on Adafruit Pixels
Adafruit_WS2801 strip = Adafruit_WS2801(100, dataPin, clockPin);

int const pixels = 100;
int a[pixels]; // brightness
int r[pixels];
int g[pixels];
int b[pixels];
int mode = 0;
int currentLED = 0;

void setup() {
  strip.begin();
  Serial.begin(9600);
  clear();
  strip.show();
}

void loop() {
  Serial.print(mode);
  Serial.print("/");

  // drawing
  switch (mode) {
    case 0:
      colorWipe(255, 220, 255, 30);
      break;
  }

  // update the entire strand
  for (int i; i < pixels; i++) {
    strip.setPixelColor(i, Color(r[i] * (a[i] / 255),
                                 g[i] * (a[i] / 255),
                                 b[i] * (a[i] / 255)));
  }

  // show the changes
  strip.show();
}

void clear() {
  for (int i = 0; i < pixels; i++) {
    r[i] = 0;
    b[i] = 0;
    g[i] = 0;
    a[i] = 0;
  }
}

// fill the dots one after the other with said color
void colorWipe(byte myA, byte myR, byte myG, byte myB) {
  a[currentLED] = myA;
  r[currentLED] = myR;
  g[currentLED] = myG;
  b[currentLED] = myB;
  currentLED++;
  if (currentLED > pixels) {
    currentLED = 0;
  }
}

/* Helper functions */

// Create a 24 bit color value from R,G,B
uint32_t Color(byte r, byte g, byte b) {
  uint32_t c;
  c = r;
  c <<= 8;
  c |= g;
  c <<= 8;
  c |= b;
  return c;
}

Very interesting, I never would have thought but it does make sense.

Your "mode" variable is declared after the arrays. This makes it vulnerable to being overwritten if you write past the end of the array, accidentally or on purpose. Try declaring "mode" before the arrays.

currentLED++;
if (currentLED > pixels) {
  currentLED = 0;
}

Unlike global variables, local variables are not initialized, unless YOU cause that to happen. It's better to get in the habit of initializing every variable - local or global. It's also better to minimize the use of global variables. pixels and currentLED should be passed to colorWipe(). One final note: a, r, g, and b never hold values that are larger than a byte, do they? You could save 20% of SRAM by making the type of those arrays byte, instead of int.

Rather than cover up the bug, why not fix it? Here is the code concerned:

Should I just declare the arrays inside setup() where I initialize them? I'm not sure I fully understand local vs global; would I declare these variables in setup() to make them local?

Where do you initialize a, r, g, and b in setup()? I don't see that happening.

Yes, but then they would be local to setup(), where they are not used, rather than local to loop(), where they are used.

Maybe I'm unclear on what initialize means. I thought clear() - which sets every value in the array to 0 - would be considered initializing it. Wouldn't I just have to use a similar for loop to initialize the arrays? But if I declare them in loop then won't they be declared again every frame?

clear() may be called from setup(), but the initialization is done IN clear(), not IN setup(). But, it's easier to just leave them global.

Variable Seems to Change on It's Own
http://forum.arduino.cc/index.php?topic=142253.msg1069303
Namespace

A namespace is a prefix, suffix or infix to a key. It can be used in special cases to group closely related keys, or as an additional qualifier for keys. The colon character (':') is used as the separator of namespaces in key names.

The possible benefits of introducing namespaced keys must be weighed against their disadvantages. Grouping of closely related keys in namespaces helps to separate this group of keys, avoids naming clashes, and provides a higher-level context for a particular key. Namespaces as qualifiers are used when an attribute (such as a language code) is applicable to a wide range of unrelated main keys. Technically both concepts are treated to a large extent the same - key names containing a colon separator are strings of characters just like any other key.

Storing a value in a manner similar to namespace syntax, i.e. key:suffix=value where the suffix equals some variable value and serves no grouping purpose, does not mean that the namespace concept is utilized; it's just a colon-delimited suffix.

Example namespace uses

- addr=* tags to contain part of an address
- contact=* tags
- Date namespace - a date namespace suffix has been suggested (in Comparison of life cycle concepts) to specify the temporal validity of tags, e.g. "amenity:1835-1965 = school", "name:1933-1945 = Adolf-Hitler-Straße". The suffix comes as the last part of the key (e.g. after a language code suffix). Warning: this syntax is relatively common, but it is just part of a proposal and, technically, it is not a proper namespace.
- generator:output=* to provide information about the power output of an electricity generating plant.
- is_in=* - an old namespace that is still present in the database
- :lanes suffix to add lane-specific information.
- Lifecycle prefix - it has been suggested (in Comparison of life cycle concepts) to use a prefix such as "proposed:", "construction:", "disused:", "abandoned:" or "demolished:" to tag features in a special state, e.g. "construction:aerialway=gondola". For highways and railways a different tagging is the de facto standard, e.g. "highway=construction + construction=motorway". See Comparison of life cycle concepts.
- Multilingual names - language code suffixes are in use for many keys, such as "name:ro=..." indicating the Romanian name of a feature. See Map internationalization and Multilingual names for some uses. The suffix comes immediately after the main key.
- parking:lane=* and parking:condition=* to provide information about parking lanes for highway=*
- source=* to indicate the source of all tags or of one specific tag
- traffic:hourly=* to indicate traffic density; can be used as a namespace.
- Forward & backward, left & right suffixes to indicate that a tag only applies in one direction or on one side of the way.

Nomenclature

- namespace describes the whole concept (i.e. this page). When used as a prefix namespace, the word prefix is often omitted.
- prefix is the usage of a namespace in front of a key (used in Tags, Lifecycle_prefix, addr).
- suffix is the usage of a qualifier after a key (used in Tags, Conditional_restrictions, Key:name, Lanes).
- subkey is used in two contexts:
  - as an additional key which further describes a feature (used in Key:waste, Key:motorboat)
  - for describing a suffix (used in Lanes, Key:phone, Key:addr, Class:bicycle and Template:Tag/doc).
- infix is used very rarely (Tags, Talk right/left)

This wiki's software also uses the concept of namespaces, but this is unrelated to OSM namespaces.

Consuming namespaces

At a basic level within the system, a key with a namespace will just be stored and treated as any other free-form text string (a string which just happens to contain a colon character). Many consumers of OSM data will treat keys like this.

Consuming applications often match on keys they are interested in, and any unrecognised keys are ignored. This may indeed be the desired effect of a namespace. Namespaces can be used to separate out certain types of specialist information, side-lining this data away from the 'core' map data, to make it clearer that only more specialist consumers will be interested in it.

Over-namespacing

Namespacing is a great way to structure the data scheme, but it can also cause trouble for some data consumers; this is called over-namespacing.

- Project-related namespaces: it can be tempting to namespace a key just to avoid clashing with other data, instead of trying to integrate with existing schemes; this is a bad habit. OSM is a multi-scheme database, which means that every tag relates to more than one scheme and more than one use of the data, so it's important to integrate with the schemes already in use to maximise the curation of the data.
- Over-namespacing leads to inconsistency in the database: if we have projectfoobar:name=xxx and name=xxx, in many cases one will be updated and not the other. The simpler and more generic the key, the more it will be used, and the more curated it will be.
- Over-namespacing leads to a disseminated data scheme: for example, someone interested in VHF channel data will have to look for the harbour:VHF_channel key, plus seamark:harbour:VHF_channel, plus VHF_channel, plus lock:VHF_channel, plus vhf to collect the data. Using only the vhf key should be enough: the OSM object being tagged already tells us whether the data relates to the harbour, the lock, or something else.

Currently not displayed on some maps

Feel free to improve the rendering of namespaced features for the 'standard' map style appearing on the OpenStreetMap.org front page.
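To make the "consuming applications match on keys" point concrete, here is a rough Python sketch (not an official OSM tool) of splitting a key on its first colon. Note that whether the first segment is really a grouping prefix or just a main key carrying a suffix qualifier (as in name:ro) depends on the key itself, not on the syntax:

```python
# Hypothetical helper: split an OSM key on the first colon.
def split_namespace(key):
    prefix, sep, rest = key.partition(":")
    if not sep:
        return (None, key)  # plain key, no colon at all
    return (prefix, rest)

# Hypothetical helper: collect all tags whose key sits under a given prefix.
def keys_in_namespace(tags, prefix):
    return {k: v for k, v in tags.items()
            if ":" in k and k.split(":", 1)[0] == prefix}

tags = {
    "addr:street": "Main Street",
    "addr:housenumber": "7",
    "name": "Bakery",
    "name:ro": "Brutarie",
}

print(split_namespace("addr:street"))   # ('addr', 'street')
print(keys_in_namespace(tags, "addr"))
```

A consumer interested only in addressing can collect the addr group this way and ignore everything else, which is exactly the side-lining effect described above.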
https://wiki.openstreetmap.org/wiki/Namespaces
Hey, Scripting Guy! How can I get a list of all the upcoming meetings that have been scheduled by a specific person (namely, my manager)? — GH Hey, GH. Before we tackle this question we’d like to reassure anyone who looked out their window recently and saw pigs flying; that’s to be expected. Likewise, any of our readers who happen to be residents of Hades might have been alarmed by the recent cold snap down there. Trust us; there’s nothing to be alarmed about. Pigs flying, hell freezing over – those things are bound to happen when the Scripting Son draws a walk in his last at-bat of the high school baseball season. That’s right, after 18 games and 60-some at-bats the Scripting Son finally drew a walk, and in his last at-bat of the season to boot. Not that this is particularly unusual. Three years ago, for example, he was on an all-star team that toured Japan. During that time he played in 14 games and walked once: in his final at-bat. That same year he went the entire regular season (nearly 40 games) without walking at all; oddly enough, in his first playoff game, he then walked three times. Last year the Scripting Son played in 60 games and walked four times. Etc., etc. Needless to say, the Scripting Son doesn’t like to walk. So what difference does this make to you, GH? Quite a bit, believe it or not. After all, the Scripting Guy who writes this column has always said, “Write a script that retrieves a list of upcoming appointments that have been scheduled by a specific person? 
When pigs fly!" Well, in that case:

Const olFolderCalendar = 9

Set objOutlook = CreateObject("Outlook.Application")
Set objNamespace = objOutlook.GetNamespace("MAPI")
Set objFolder = objNamespace.GetDefaultFolder(olFolderCalendar)
Set colItems = objFolder.Items

strFilter = "[Organizer] = 'Ken Myer'"
Set colFilteredItems = colItems.Restrict(strFilter)

For Each objItem In colFilteredItems
    If objItem.Start > Now Then
        Wscript.Echo "Meeting name: " & objItem.Subject
        Wscript.Echo "Meeting date: " & objItem.Start
        Wscript.Echo "Duration: " & objItem.Duration & " minutes"
        Wscript.Echo "Location: " & objItem.Location
        Wscript.Echo
    End If
Next

Before we explain how this works we need to go turn up the heat; for some reason it seems a little colder here than it usually is. OK, that's better. As you can see, we start out by defining a constant named olFolderCalendar and setting the value to 9; we'll use this constant to tell Outlook which folder (the Calendar folder) we want to work with. Next we create an instance of the Outlook.Application object, then connect to the MAPI namespace (which happens to be the only namespace we can connect to). We then use the GetDefaultFolder method to bind to the Calendar folder in Outlook:

Set objFolder = objNamespace.GetDefaultFolder(olFolderCalendar)

That was pretty easy, wasn't it? Our next step is to create an object reference to the folder's Items property:

Set colItems = objFolder.Items

What does that do for us? That grabs a collection of all our appointments (that is, everything in the Calendar folder) and stashes it in a variable named colItems. And yes, you're right: we're not interested in all the items in the Calendar folder, are we? Instead, we're only interested in upcoming meetings that have been organized by our manager, Ken Myer. Somehow we need to filter our collection. But how? Well, here's one suggestion: why not apply a filter?

strFilter = "[Organizer] = 'Ken Myer'"
Set colFilteredItems = colItems.Restrict(strFilter)
Filters are made up of two parts: a property name (enclosed in square brackets) and a property value (in this case, a value enclosed in single quote marks, because we are dealing with a string). We want to limit our data to meetings arranged by Ken Myer; that is, meetings where the Organizer property is equal to Ken Myer. Hence the two parts of our filter: [Organizer] and ‘Ken Myer’. In line 2, we then call the Restrict method to apply this filter to our collection. That’s going to create a new collection (named colFilteredItems) that contains information only about those meetings organized by Ken Myer. That’s almost what we need. However, this sub-collection will contain all the meetings organized by Ken Myer, including those that have already taken place. Because GH is only interested in upcoming meetings, we need to weed out the meetings that have already taken place. In theory, we could do that by making a fancier filter. However, filtering on dates can be a bit complicated; therefore, we decided to take the easy way out. Rather than apply a double filter – one that limits returned data to meetings organized by Ken Myer, provided that those meetings haven’t been held yet – we applied a filter that returns all the meetings organized by Ken Myer. We then set up a For Each loop to walk through the complete collection of meetings. And what’s the first thing we do in that loop? Check to see if the meeting has already been held: If objItem.Start > Now Then If the meeting’s Start time is later than the current date and time (which we can determine using VBScript’s Now function) that means that the meeting hasn’t taken place yet. 
Therefore, we go ahead and echo back the meeting's name, start time, duration, and location; that's what this block of code is for:

Wscript.Echo "Meeting name: " & objItem.Subject
Wscript.Echo "Meeting date: " & objItem.Start
Wscript.Echo "Duration: " & objItem.Duration & " minutes"
Wscript.Echo "Location: " & objItem.Location

And then we loop around and repeat the process with the next meeting in the collection. One thing to watch out for here. Because of the way Outlook stores recurring appointments, those appointments might not show up in your output. That's due, in large part, to the fact that the start date for a recurring appointment might be long-since past. Because of that, you might need to use two scripts to make sure you get the desired information: the script we just showed you, and a second script designed to retrieve the recurring appointments organized by Ken Myer. What might that second script look like? It might look a little like this:

Const olFolderCalendar = 9

Set objOutlook = CreateObject("Outlook.Application")
Set objNamespace = objOutlook.GetNamespace("MAPI")
Set objFolder = objNamespace.GetDefaultFolder(olFolderCalendar)
Set colItems = objFolder.Items

strFilter = "[IsRecurring] = TRUE AND [Organizer] = 'Ken Myer'"
Set colFilteredItems = colItems.Restrict(strFilter)

For Each objItem In colFilteredItems
    Set objPattern = objItem.GetRecurrencePattern
    If objPattern.PatternEndDate > Now Then
        Wscript.Echo "Meeting name: " & objItem.Subject
        Wscript.Echo "Duration: " & objItem.Duration & " minutes"
        Wscript.Echo "Location: " & objItem.Location
        Wscript.Echo "Recurrence type: " & objPattern.RecurrenceType
        Wscript.Echo "Start time: " & objPattern.StartTime
        Wscript.Echo "Start date: " & objPattern.PatternStartDate
        Wscript.Echo "End date: " & objPattern.PatternEndDate
        Wscript.Echo
    End If
Next
Probably not; at least when it comes to baseball the Scripting Son is impatient, he’s impetuous, and he just goes out there and acts without thinking. Wonder where he learned all that from …. Join the conversationAdd Comment thanks i want a faq any girl indian or foran woman
https://blogs.technet.microsoft.com/heyscriptingguy/2007/05/08/how-can-i-list-all-the-meetings-scheduled-by-a-specified-person/
SQL::Bibliosoph - A SQL Statements Library

    use SQL::Bibliosoph;

    my $bs = SQL::Bibliosoph->new(
        dbh     => $database_handle,
        catalog => [ qw(users products <billing) ],

        # enables statement benchmarking and debug
        # (0.5 = logs queries that take more than half a second)
        benchmark => 0.5,

        # enables debug using Log::Contextual
        debug => 1,

        # enables memcached usage
        memcached_address => '127.0.0.1:11322',

        # enables memcached usage (multiple servers)
        memcached_address => ['127.0.0.1:11322','127.0.0.2:11322'],
    );

    # Using dynamically generated functions. Wrapper functions
    # are automatically created on module initialization.
    # A query should look something like:

    --[ get_products ]
    SELECT id,name FROM product WHERE country = ?

    # Then ...
    my $products_ref = $bs->get_products($country);

    # Forcing numbers in parameters
    # Query:
    --[ get_products ]
    SELECT id,name FROM product WHERE country = ? LIMIT #?,#?

    # Parameter ordering and repeating
    # Query:
    --[ get_products ]
    SELECT id,name FROM product
     WHERE 1? IS NULL OR country = 1?
       AND price > 2? * 0.9
       AND price < 2? * 1.1
     LIMIT #3?,#4?

    # then ...
    my $products_ref = $bs->get_products($country, $price, $start, $limit);

    # The same, but with an array-of-hashes result (add h_ at the beginning)
    my $products_array_of_hash_ref = $bs->h_get_products($country, $price, $start, $limit);

    # To get a prepared and executed statement handle, append '_sth':
    my $sth = $bs->get_products_sth($country, $price, $start, $limit);

    # Selecting only one row (add row_ at the beginning)
    # Query:
    --[ get_one ]
    SELECT name,age FROM person WHERE id = ?;

    # then ...
    my $product_ref = $bs->row_get_one($product_id);

    # Selecting only one value (same query as above)
    my $product_name = $bs->row_get_one($product_id)->[0];

    # Selecting only one row, but with a HASH ref result
    # (same query as above) (add rowh_ at the beginning)
    my $product_hash_ref = $bs->rowh_get_one($product_id);

    # Inserting a row, with an auto_increment PK.
    # Query:
    --[ insert_person ]
    INSERT INTO person (name,age) VALUES (?,?);

    # then ...
    my $last_insert_id = $bs->insert_person($name, $age);

    # Useful when no primary key is defined
    my ($dummy_last_insert_id, $total_inserted) = $bs->insert_person($name, $age);

Note that last_insert_id is only returned when using MySQL (undef in other cases). When using another engine you need to call another query to get the last value. For example, in PostgreSQL you can define:

    --[ LAST_VAL ]
    SELECT lastval()

and then call LAST_VAL after an insert.

    # Updating some rows
    # Query:
    --[ age_persons ]
    UPDATE person SET age = age + 1 WHERE birthday = ?

    # then ...
    my $updated_persons = $bs->age_persons($today);

Memcached usage

    # Memcached queries are only generated for hash, multiple-row results
    # (h_QUERY), using the "ch_" prefix.
    my $products_array_of_hash_ref = $bs->ch_get_products(
        {ttl => 10},
        $country, $price, $start, $limit);

    # To define a group of queries (for later simultaneous expiration) use:
    my $products_array_of_hash_ref = $bs->ch_get_products(
        {ttl => 3600, group => 'product_of_'.$country},
        $country, $price, $start, $limit);

    my $products_array_of_hash_ref = $bs->ch_get_prices(
        {ttl => 3600, group => 'product_of_'.$country},
        $country, $price, $start, $limit);

    # Then, to force a refresh of the two previous queries the next time
    # they are called, just use:
    $bs->expire_group('product_of_'.$country);

SQL::Bibliosoph is a SQL statement library engine that allows you to clearly separate SQL statements from Perl code. It is currently tested on MySQL 5.x, but it should be easily ported to other engines. The catalog files are prepared at initialization, for performance reasons. The use of prepared statements also helps to prevent SQL injection attacks. SQL::Bibliosoph supports bind parameters in statement definitions and bind parameter reordering (see SQL::Bibliosoph::CatalogFile for details). All functions throw 'SQL::Bibliosoph::Exception::QuerySyntaxError' on error. The error message is 'SQL ERROR' plus the MySQL error reported by the driver.
The error message is 'SQL ERROR' plus the MySQL error reported by the driver.

dbh
    The database handle. For example:

        my $dbh = DBI->connect($dsn, ...);
        my $bb  = SQL::Bibliosoph->new(dbh => $dbh, ...);

catalog
    An array ref containing filenames with the queries. These files should use the SQL::Bibliosoph::CatalogFile format (see its perldoc for details). The suggested extension for these files is 'bb'. A name can be preceded with a "<", forcing the catalog to be opened in "read-only" mode. In this mode, UPDATE, INSERT and REPLACE statements will not be parsed. Note that calling a SQL procedure or function that actually modifies the DB is still allowed! All the catalogs will be merged; be careful with namespace collisions. The statements will be prepared at module construction.

catalog_str
    Allows you to define a SQL catalog using a string (not a file). The queries will be merged with catalog files (if any).

constants_from
    In order to use the same constants in your Perl code and your SQL modules, you can declare a module using the `constants_from` parameter. Constants exported by that module (using @EXPORT) will be replaced in all catalog files before SQL preparation. The module must be in the @INC path. Note: constants_from() is ignored for 'catalog_str' queries (sorry, not implemented yet).

delayed preparation
    Do not prepare all the statements at startup; they will be prepared individually, when they are used for the first time. Defaults to false (0).

benchmark
    Use this to enable query profiling. The elapsed time (in milliseconds) will be printed with Log::Contextual after each query execution, if the time is greater than `benchmark` (which must be given in SECONDS, and can be a floating point number).

debug
    Enables debug output (prints each query and its arguments; very useful during development).

bibliosoph: n. person having deep knowledge of books; bibliognostic.

SQL::Bibliosoph by Matias Alejo Garcia (matiu at cpan.org) and Lucas Lain. Contributors: Juan Ladetto, WOLS.

Copyright (c) 2007-2010 Matias Alejo Garcia. All rights reserved.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SQL::Bibliosoph is free Open Source software. IT COMES WITHOUT WARRANTY OF ANY KIND.

See also SQL::Bibliosoph::CatalogFile. In the distribution you can find:

* Examples
* VIM syntax highlighting definitions for bb files
* CTAGS examples for indexing bb files

You can also find the vim and ctags files in the /etc subdirectory.

Latest version at:

This module has been tested with MySQL, PostgreSQL and SQL Server. Migration to other DB engines should be simply accomplished. If you would like to use Bibliosoph with another DB, please let me know and we can help you if you do the testing.
#include "HalideRuntime.h"

Routines specific to the Halide OpenCL runtime.

Definition in file HalideRuntimeOpenCL.h.

Definition at line 19 of file HalideRuntimeOpenCL.h.

Set the platform name for OpenCL to use (e.g. "Intel" or "NVIDIA"). The argument is copied internally. The OpenCL runtime will select a platform that includes this as a substring. If never called, Halide uses the environment variable HL_OCL_PLATFORM_NAME, or defaults to the first available platform.

Halide calls this to get the desired OpenCL platform name. Implement this yourself to use a different platform per user_context. The default implementation returns the value set by halide_set_ocl_platform_name, or the value of the environment variable HL_OCL_PLATFORM_NAME. The output is valid until the next call to halide_set_ocl_platform_name.

Halide calls this to get the desired OpenCL device type. Implement this yourself to use a different device type per user_context. The default implementation returns the value set by halide_set_ocl_device_type, or the environment variable HL_OCL_DEVICE_TYPE. The result is valid until the next call to halide_set_ocl_device_type.

Halide calls this to get the additional build options for OpenCL to use. Implement this yourself to use different build options per user_context. The default implementation returns the value set by halide_opencl_set_build_options, or the environment variable HL_OCL_BUILD_OPTIONS. The result is valid until the next call to halide_opencl_set_build_options.

Set the underlying cl_mem for a halide_buffer_t. This memory should be allocated using clCreateBuffer. The device and host dirty bits are left unmodified.

Disconnect a halide_buffer_t from the memory it was previously wrapped around. Should only be called for a halide_buffer_t that halide_opencl_wrap_device_ptr was previously called on.
Frees any storage associated with the binding of the halide_buffer_t and the device pointer, but does not free the cl_mem. The dev field of the halide_buffer_t will be NULL on return. Return the underlying cl_mem for a halide_buffer_t. This buffer must be valid on an OpenCL device, or not have any associated device memory. If there is no device memory (dev field is NULL), this returns 0. Returns the offset associated with the OpenCL memory allocation via device_crop or device_slice.
Educational Methods and Theories (Parent Category: Education)

Educational paradigms may include student, subject, and societal influences. Educational philosophies may be progressive, personal, public, and other types.

Subcategories: E-Learning, Learning Theories, Research Methodology, Teaching Resources

Q: What does a teacher use in the classroom which is 10 letters long?
A: Vocabulary.

Q: NASPE identifies the primary goal of assessment as the documentation of learning rather than the enhancement of learning?
A: True.

Q: Identify opportunities for child-initiated play within each activity?
A: how thick is a sheet of paper

Q: How has adult education contributed to andragogy?
A: It hasn't.

Q: Describe the cultural diversity of the people who live in your community?
A: Your neighbours.

Q: How do you operate a cellular phone?
A: Every model of mobile phone has its own functions, invoked with different keying. Get a copy of the Operator's Guide/Manual to learn how to use it. The manufacturer's website has online copies available.

Q: Why should you have discipline?
A: We should have discipline because it will help us stay focused and right-minded in life, and it will help us with interviews for a job or anything else important in life.

Q: Advantages and disadvantages of hydraulic machines?
A: order texas birth certificate online

Q: What is the developmental process of a second language?
A: how to remove stubborn engine cover gaskets

Q: The knowledge or awareness of your own cognitive processes is?
A: Metacognition.

Q: What is the difference between a dual-trace CRO and a dual-beam CRO?
A: In a dual-beam oscilloscope, two separate electron beams are used to produce the different waveforms. In a dual-trace oscilloscope, the same beam is used to produce two different waveforms.

Q: How do you use multimedia correctly in English teaching?
A: The first thing is not to use multimedia as a means to provide lists of items for students to copy down. The best use is to provide a simulation or example of the concept you want them to learn. For example, with multimedia you can sequentially add or delete parts of the whole so that...

Q: What are the effects of missing breakfast on academic performance?
A: Lack of brain response; it will make the child sleepy and dysfunctional.

Q: What are AV aids in teaching?
A: Audio-visual aids. These can be films, overhead drawings, PowerPoint, or pictures that add to the discussion and understanding of the topic.

Q: What is the role of discipline in maintaining quality in education?
A: Classroom discipline is an important part of managing the classroom. It has much less to do with punishment than with making everyone in the class feel included and able to understand the material. Harry Wong was one of the early writers on how to do this, and you can find him on YouTube.

Q: Evaluate the statement "there are no basic differences between developmental and remedial reading; if any do exist, they are not of kind but degree."
A: Developmental reading skills are necessary for a student to progress from a non-reader to a reader; these are things such as gross and fine motor movement, identification of letters, and being able to hear and recognize a sequence of sounds on the way to understanding words. Remedial teaching...

Q: What is a classroom-management discipline process (strategies, curriculum development)?
A: This is curriculum that keeps students engaged and busy throughout the teaching period. The term I learned was "Active Learning," where activities range from lecture, small-group/peer study, writing, and other activities, roughly divided into 15-minute periods. Although it sounds like extra...

Q: What are the remedies for poor performance in senior secondary school mathematics?
A: A symptom of poor performance may be found in classroom and homework completion. Another is the stress of threatened failure. If individuals go over classwork and understand the steps, they are likely to do well with the homework and will be able to pass the tests. Individual tutoring, ...

Q: How do behavioral and cognitive theories differ in their views of how environmental variables influence learning?
A: Mental processes.

Q: Where do you get pictures of Van Mahotsav?
A: On the Internet. The truth is I don't even know who you're talking about.

Q: What is the Islamic concept of knowledge?
A: Muslims have to get as much knowledge as they can. There is no such thing as enough knowledge.

Q: Should students be paid for good grades?
A: Yes, because they try to do their best. Also, it will help them in life with rewards. We should at least give the kids $15.

Q: The third level of theory is called?
A: Situation relating.

Q: What grades equal a 3.0 GPA with 6 classes?
A: 1 A, 4 Bs, 1 C; 2 As, 2 Bs, 2 Cs; 2 As, 3 Bs, 1 D; 3 As, 3 Cs; 3 As, 1 B, 1 C, 1 D; 3 As, 2 Bs, 1 F; 4 As, 2 Ds; 4 As, 1 C, 1 F; or 6 Bs.

Q: What are some tips for caregivers of children with Down syndrome?
A: 1. Find a local organization that helps with this. 2. Come up with ways that the parent and the child can both learn and spend quality time together.

Q: Can you include a private school balance in bankruptcy?
A: Assuming the "balance" is a normal debt owed to the private school for services rendered, it can be discharged like any other unsecured debt, at least in Florida.

Q: Why was teaching invented?
A: Inventing (building something new or making something better) is often thought of as a purely creative act. In fact, invention demands much more than a vision. It requires previous knowledge and new information; the ability to observe, analyze, and identify problems; and the utilization of critical...

Q: Psychologists who carefully watch the behavior of chimpanzee societies in the jungle are using a research method known as?
A: Naturalistic observation.

Q: What outward symbol were Jews required to display?
A: During World War II, those of the Jewish faith were required to display the Star of David (the 6-pointed star).

Q: What is the effect of school location on the academic performance of students?
A: There are many ways in which school location can impact performance. For example, a school in an area of the country with inclement weather can significantly harm a student's ability to learn.

Q: Discuss different profit theories?
A: Please can you help list and explain the various theories of profit that we have in economics? I am a postgraduate student of the University of Lagos, Nigeria, studying ECN 845: Advanced Micro-economics.

Q: What would a 2.6 on a 5.0 grade scale equal on a 4.0 grade scale?
A: (2.6 / 5) * 4 = 2.08

Q: What is self-assessment?
A: A self-assessment is how you perceive yourself in your job or other aspects of your life. One might compare oneself to others in the same job and set goals for self-improvement.

Q: What is the purpose of a strategic intervention material?
A: This learning package is intended to supplement your classroom learning while you work independently.

Q: What is the log i value?
A: I know the answer is arctan(x), but how about breaking it into partial fractions by writing 1/((1 - ix)(1 + ix))?...

Q: Doubly circular linked list Java code?
A:

    class Node {
        public Node next;
        public Node previous;
        public int item;

        public Node(int item) {
            this.item = item;
        }

        public Node(int item, Node previous) {
            this.item = item;
            this.previous = previous;
        }
    }

    public class DoublyLinkList {
        public static void main(String[] args) {
            // TODO Auto-generated method stub
        }
    }

Q: On what does a memoir focus?
A: A memoir focuses on the recollections and memories of the author writing it.

Q: What are the stages in implementing portfolio assessment?
A: The first stage in implementing portfolio assessment is to identify teaching goals. The next stage is to introduce the idea of portfolio assessment and provide examples. Then, a specification of portfolio content should be done. The fourth stage is to give clear and detailed guidelines on how to...
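Returning to the doubly circular linked list question above: the posted code declares the Node class but never links anything. A hedged sketch of how those nodes could actually be wired into a circle and traversed (the class names here are invented for the example):

```java
// Sketch: build and walk a circular doubly linked list.
// Node mirrors the class from the answer above, redeclared so this compiles alone.
class Node {
    Node next, previous;
    int item;
    Node(int item) { this.item = item; }
}

public class CircularDemo {
    public static void main(String[] args) {
        Node head = new Node(1);
        Node b = new Node(2);
        Node c = new Node(3);

        head.next = b; b.previous = head;
        b.next = c;    c.previous = b;
        // Close the circle: last links forward to head, head links back to last.
        c.next = head; head.previous = c;

        // Walk the ring exactly once, starting (and stopping) at head.
        Node cur = head;
        do {
            System.out.print(cur.item + " ");
            cur = cur.next;
        } while (cur != head);
        System.out.println(); // prints: 1 2 3
    }
}
```

The do/while guard (`cur != head`) is the idiomatic stop condition for a circular list, since there is no null tail to test against.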
Q: Should school hours be longer?
A: Some people believe school hours should be longer so children have more time to learn. Some believe that school hours are already too long.

Q: What grade equivalent is Kumon C141?
A: It differs to some extent depending on how the grading system works in the schools in your area, but generally it will be anywhere from grades 1-3. Level C works on solidifying addition/subtraction and introduces multiplication. Source: I work there.

Q: Why do you need early childhood education?
A: To understand how children develop mentally, cognitively, and physically. Even if you don't plan on becoming a teacher, you'll most likely be a parent, and this class can prepare you for that as well.

Q: What are the disadvantages of portfolio assessment?
A: 1. 2.

Q: Why should you use visual aids?
A: A visual aid supplements words with pictures, charts, graphs, or other visual information. They are important because they help the audience understand and remember, increase audience interest, and act as notes or reminders for the speaker.

Q: How is argon used in the production of light bulbs?
A: Argon does not react. :)

Q: What is the main function of education?
A: The main function of education is to prepare children to become the best members of society that they can be. This includes teaching them skills that will carry over into the workplace, and being an informed citizen.

Q: What things are lucky?
A: Numbers can be lucky. Rabbits' feet. Four-leaf clovers.

Q: What is the difference between instructional material and teaching aids in education?
A: Instructional material (or media) presents a complete body of information and is largely self-supporting rather than supplementary in the teaching-learning process, while teaching aids are supplementary and not self-supporting; a teacher can only use them to make his or her points...

Q: What research methodology requires researchers to gather data and information that can be converted to numbers for statistical analysis?

Q: Fifth-grade summation notation problem: what is the answer if the upper limit of summation is 6 and the lower limit of summation (the index of summation) is 2, i.e. the sum of n for n = 2 to 6? What is n? Please answer.

Q: How might web-based learning activities cause engagement of all three learning styles?
A: You're seeing the words and pictures as well as hearing the information, plus you're doing something when you click on the various links and icons.

Q: Why do kids think Answers.com is the solution to all homework problems?
A: Because kids are lazy and crazy, except me.

Q: Why do auditory learners benefit from reading aloud?
A: Auditory learners tend to remember what they hear.

Q: Give two reasons why a teacher would not provide a student with a copy of a database.
A: Well, maybe he doesn't have one, or maybe he's not allowed to.

Q: Should children be used in medical research?
A: No, except in cases where there is no cure for their condition, a new experimental treatment has shown promise, and the parents elect to see if the experimental procedure might help. In that respect the child is part of medical research.

Q: How do you find the reduction factor of a tangent galvanometer?
A: We find it by varying the current flowing through it, measuring the deflection each time, and then using the formula k = I / tan(theta).

Q: Can a quantitative study have independent and dependent variables?
A: Yes. The presumed cause is the independent variable and the presumed effect is the dependent variable. Variability in the dependent variable is presumed to depend on variability in the independent variables. It is used more as a direction of influence rather than a cause-and-effect scenario...

Q: What are the factors affecting the academic performance of elementary pupils?
A: Lack of proper care from the parents; poor background; lack of attention paid to the pupil's (emotional) problems; inadequate balanced diet.

Q: When was the start of Emilio Aguinaldo's term?
A: start of term of Emilio Aguinaldo

Q: Free download of answers to the SMU MBA assignment paper of 2010?
A: yes

Q: What types of media are there?
A: Movies, books, the internet, TV, magazines, cell phones, etc.

Q: What is one of the most positive reasons students decide to attend college?
A: Better-paying jobs for attaining a higher education.

Q: What type of research method was used to conduct the Tuskegee experiment?
A: Statistical.

Q: Are field trips important?
A: Field trips are important for several reasons. They reinforce material covered in the classroom and create a new learning environment. Field trips also strengthen the teacher-student bond.

Q: What is the concept of educational achievement?
A: Knowledgeable satisfaction. The concept of educational achievement is believing you understand and see things the way they were meant to be from any teachings, the basics of every detail of every thesis. Others will say a diploma will hold you in higher standing.

Q: What is an example of the deductive method of teaching?
A: Possibility of correlation.

Q: What are some negative descriptive words about homework?
A: Uh... evil, the killing of trees...

Q: What colleges offer a PhD in public communication?
A: You can obtain this information by using the site's College MatchMaker search engine, or click on the related links section below to go directly to the site. You can research colleges and universities by name, by programs of study, or by geographical...

Q: The rule states that each slide should have a maximum of seven lines and each of these lines should have a maximum of seven words?
A: This rule applies to material that you show to your audience while giving a talk. Your presentation should not distract from your talk; therefore only the main keywords should be shown. Otherwise people end up reading your presentation slides instead of listening to what you say.

Q: Why is school punishment not good?
A: If you misbehaved at school or were truant, then you should expect consequences. It does take concern and skill to provide experiences that children will find both unpleasant and educational. Writing a paper about the behavior and having to keep writing it until the truth is told; isolation...

Q: What are the weaknesses and strengths of Daedalus?
A: Daedalus was VERY intelligent. He was an inventor, an architect, a scientist, and lots of other things. He was also spiteful; he was angry because he was trapped, and he was angry at anyone who thought they were better or smarter than him.

Q: How do you control labor cost in a hotel?
A: please send me a message on how you control labor cost in a hotel

Q: What are the issues and concerns about curriculum innovations?
A: as of now, nothing

Q: What is the process by which human beings obtain useful quantitative information about the different physical aspects of the Earth?
A: Measurement.

Q: How do you get a username and password on Study Island?
A: Previous answer: "First of all, you have to be an elementary-aged student, and second of all, your teachers make it. Also, at first your password will be 'student' but you can change it if you want." You do not have to be an elementary student. Study Island is available for any student K-12 in all 50 states...

Q: What is the humidity of a snowstorm?
A: A snowstorm is characterized by strong sustained winds of at least 56 kilometers per hour. Humidity during snowstorms is near 100 percent.

Q: What common mistakes do people in your society make during speaking, listening, reading, and writing English?
A: One very common mistake in every segment of language learning (speaking, reading, and writing) in our society is that people want to learn through translation. They don't think in the language; instead, ...

Q: What are the Federal handicapping condition codes?
A: Federal Disabilities Census Codes:
01 - Intellectual Disability
02 - Hearing Impairment
03 - Deaf
04 - Speech or Language Impairment
05 - Visual Impairment
06 - Emotional Disability
07 - Orthopedic Impairment
08 - Other Health Impairments
09 - Specific Learning Disabilities
10 - Multiple Disabilities
12 - Deaf...

Q: Is corporal punishment legal in Japan?
A: It is not legal in schools, but it is legal outside of them, with the exception of some cities, which have banned the practice by local law.

Q: Montessori versus Christian education?
A: Both! There are several Christian/Montessori schools around the country. Your child gets a fully Christian education while the school remains on Montessori principles.

Q: What is another word for post-mortem?
A: There are several words that can be used as a substitute for post-mortem. Some of the common ones include autopsy and necropsy, among others.

Q: Why are teaching and learning important in curriculum?
A: They help students to learn. It is difficult to learn new things and get an education if no one is teaching or learning the information.

Q: What benefits will a school derive if its curricular programs are accredited?
A: The benefit that schools get by accrediting their curricular programs is that they will teach up-to-date methods. The learners are imparted with the most recent skills.

Q: What is another name for the research method that is referred to as participant observation?
A: Fieldwork.

Q: What is the importance of land-survey knowledge to your discipline as a quantity surveyor?
A: land survey and quantity surveyor are different

Q: What is the best Catholic high school in Rhode Island?
A: Prout. It has the IB (International Baccalaureate) program, and it is a very small school, with a total of around 600 students, which allows for more one-on-one attention. It is looked upon prestigiously by colleges and is the only Catholic high school in Rhode Island with the IB program; this is the highest...

Q: What is supported curriculum?
A: It is the curriculum for which there are complementary instructional materials available, such as textbooks, software, and multimedia resources.

Q: What are some good lesson plans for Easter?
A: Do a study in Exodus of the Passover Lamb and show why Jesus fulfills all the requirements. Here are a few places to start: ... If the...

Q: What are projected and non-projected teaching aids?
A: What is meant by teaching aid

Q: What disciplinary measures were used to tackle discipline problems a few decades ago?
A: In the 1950s and '60s, the most-used discipline methods were writing sentences, standing in the corner with your face to the wall, and paddling. In fact, it was traditional at the school I attended for the teacher to show the class her (almost all teachers were...

Q: Experimental results must be qualitative and open to interpretation: true or false?
A: False.

Q: A research method that can establish a causal link is?
A: Experimental.

Q: What is the biography of President Emilio F. Aguinaldo?
A: yes!

Q: What is the causal-comparative research method?
A: It is one of the 9 basic methods in research.

Q: What are the factors affecting the academic performance of undergraduates?
A: Some factors that affect a college student's academic performance in a negative way are as follows (the positive effect would be the opposite): No vision. Some students do not have a clearly articulated picture of the future they intend to create for themselves. Thus, they may take programs of...

Q: What is associative play?
A: This is when a child is interested in the people playing but not in the activity they are doing. A substantial amount of interaction is involved, although the activities are not coordinated.

Q: Carlene did 6 of her 10 homework examples correctly. What fractional part of her homework examples did she complete correctly?
A: 3/5

Q: What is the importance of land-survey knowledge to your discipline as a quantity surveyor?
A: Land survey is important because we must know the position of the land before the process of constructing a building begins.
If you've ever wanted to know how to install Kubernetes and join a node to a master, here's how to do this with little to no frustration on Ubuntu.

Unless you've had your head buried in a pile of Cat5 cables, you know what Kubernetes is. For those who are neck deep in cabling, Kubernetes is an open-source system used for automating the deployment, scaling, and management of containerized applications. Kubernetes is enterprise-ready and can be installed on various platforms. However (isn't there always a however?), the installation of Kubernetes can sometimes be a challenge. That's why I'm here. I want to show you how you can easily and quickly install Kubernetes on Ubuntu, initialize your master, join a node to your master, and deploy a service on the cluster. I'll be demonstrating this with the Ubuntu platform (specifically one instance of Ubuntu Server 16.04 and one of Ubuntu Desktop 17.10).

Installing dependencies

The first thing you must do is install the necessary dependencies. This will be done on all machines that will join the Kubernetes cluster. The first piece to be installed is apt-transport-https (a package that allows using https as well as http in apt repository sources). This can be installed with the following command:

sudo apt-get update && sudo apt-get install -y apt-transport-https

Our next dependency is Docker. Our Kubernetes installation will depend upon this, so install it with:

sudo apt install docker.io

Once that completes, start and enable the Docker service with the commands:

sudo systemctl start docker
sudo systemctl enable docker

You're now ready to install Kubernetes.

Installing Kubernetes

Installing the necessary components for Kubernetes is simple. Again, what we're going to install below must be installed on all machines that will be joining the cluster. Our first step is to download and add the key for the Kubernetes install.
Back at the terminal, issue the following command:

sudo curl -s | apt-key add

Next add a repository by creating the file /etc/apt/sources.list.d/kubernetes.list with the following content:

deb kubernetes-xenial main

Save and close that file. Install Kubernetes with the following commands:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

SEE: Special report: The cloud v. data center decision (free PDF) (TechRepublic)

Disable swap

In order to run Kubernetes, you must first disable swap. To do this, issue the command:

sudo swapoff -a

To make that permanent (otherwise swap will re-enable every time you reboot), issue the command:

sudo nano /etc/fstab

In the fstab file, comment out the swap entry by adding a leading # character:

#/swap.img none swap sw 0 0

Save and close that file.

Initialize your master

With everything installed, go to the machine that will serve as the Kubernetes master and issue the command:

sudo kubeadm init

When this completes, you'll be presented with the exact command you need to join the nodes to the master (Figure A).

Figure A: The master is ready to be joined by the nodes.

Before you join a node, you need to issue the following commands (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploying a pod network

You must deploy a pod network before anything will actually function properly. I'll demonstrate this by installing the Flannel pod network. This can be done with two commands (run on the master):

sudo kubectl apply -f
sudo kubectl apply -f

Issue the command sudo kubectl get pods --all-namespaces to see that the pod network has been deployed (Figure B).

Figure B: Our Flannel pod network is ready.
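The manual fstab edit above can also be scripted with sed. A hedged sketch, exercised here on a demo copy rather than the real /etc/fstab (adapt the path and double-check the match before using it on a live system):

```shell
# Work on a demo copy so a typo can't break the real system.
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > /tmp/fstab.demo

# Comment out any line that declares a swap filesystem.
sed -i '/ swap / s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
# UUID=abcd / ext4 defaults 0 1
# #/swap.img none swap sw 0 0
```

On a real system the equivalent one-liner would be `sudo sed -i '/ swap / s/^/#/' /etc/fstab`, which accomplishes the same thing as the nano edit described above.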
To do this, go to the node's terminal and issue the command:

sudo kubeadm join --token TOKEN MASTER_IP:6443

Where TOKEN is the token you were presented with after initializing the master and MASTER_IP is the IP address of the master. Once the node has joined, go back to the master and issue the command sudo kubectl get nodes to see that the node has successfully joined (Figure C).

Figure C: Our node has joined the master.

Deploying a service

At this point, you are ready to deploy a service on your Kubernetes cluster. To deploy an NGINX service (and expose the service on port 80), run the following commands (from the master):

sudo kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
sudo kubectl expose deployment nginx-app --port=80 --name=nginx-http

If you go to your node and issue the command sudo docker ps -a, you should see the service listed (Figure D).

Figure D: Our service has been deployed.

Your Kubernetes cluster is ready

You now have a basic Kubernetes cluster, consisting of a master and a single node. Of course you can scale your cluster by installing and adding more nodes. With these instructions, that should now be easy peasy.

Also see

- How to become a developer: A cheat sheet (TechRepublic)
- 20 quick tips to make Linux networking easier (free PDF) (TechRepublic)
- Securing Linux policy (Tech Pro Research)
- The battle between real open source vs. faux open source heats up (ZDNet)
- Best cloud services for small businesses (CNET)
- Microsoft Office vs Google Docs Suite vs LibreOffice (Download.com)
- Linux, Android, and more open source tech: Must-read coverage (TechRepublic on Flipboard)
Java reports time zone incorrectly during CDT (US Daylight Saving Time)

Bug Description

Binary package hint: sun-java5-bin

The following source code (provided to me by a developer on the FreeGuide-TV project) demonstrates the problem:

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class TimeTester {
    public static void main( String[] args ) {
        Calendar calendar = new GregorianCalendar();
        Date trialTime = new Date();
        calendar.setTime(trialTime);
        System.out.println("ZONE_OFFSET: " + calendar.get(Calendar.ZONE_OFFSET) / (60 * 60 * 1000));
        System.out.println("DST_OFFSET: " + calendar.get(Calendar.DST_OFFSET) / (60 * 60 * 1000));
    }
}

When run using Sun Java, the DST_OFFSET is 0, even though my time zone is currently on Central Daylight Time, so it should be 1. I also tried alternatives to Sun Java. Blackdown Java has the same problem, but GNU Java and Kaffe produce the correct results. Unfortunately, Sun Java is the only package that has been able to run FreeGuide-TV for me so far, which is what I was using when I discovered this bug.

Seems to work for me here. Setting the date to June 1, 1997 as a test case worked. 2006/06/08 (the date this was filed) works as well. Considering how long this has sat open with no attention, and my inability to reproduce the problem, I'm going to close the bug report. If you feel this is in error and can provide a specific example (date) where Dapper/Feisty's Sun Java doesn't work, then please reopen the bug report.

I can still reproduce this. I'm really sorry, but I've done it on an Edgy system, just to be difficult. I'd be really surprised if it's not the same on both Dapper and Feisty.

balaaman@
java version "1.5.0_06"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05)
Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode, sharing)

balaaman@
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
public class TimeTester {
    public static void main( String[] args )
    ...
}

balaaman@
Wed Mar 7 10:33:36 GMT 2007

balaaman@
ZONE_OFFSET: 0
DST_OFFSET: 0

balaaman@
Sun Jun 1 10:33:00 BST 1997

balaaman@
Sun Jun 1 10:33:02 BST 1997

balaaman@
ZONE_OFFSET: 0
DST_OFFSET: 0

I would expect the final line above to read:

DST_OFFSET: 1

Right? Thanks, Andy

Re-opening since I can still reproduce. Please show what you did differently to make this not happen.

Happily. I'm in time zone CST, and I ran the following code:

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class TimeTester {
    public static void main( String[] args ) {
        Calendar calendar = new GregorianCalendar(1997, Calendar.JUNE, 1);
        // Date trialTime = new Date();
        // calendar.setTime(trialTime);
        System.out.println("ZONE_OFFSET: " + calendar.get(Calendar.ZONE_OFFSET) / (60 * 60 * 1000));
        System.out.println("DST_OFFSET: " + calendar.get(Calendar.DST_OFFSET) / (60 * 60 * 1000));
    }
}

Which printed out:

ZONE_OFFSET: -6
DST_OFFSET: 1

You are explicitly setting the time in the constructor for GregorianCalendar. Do you get the same results when you set your system date and then use the default constructor for GregorianCalendar?
Java fails to report DST as 1 when my system clock is set to summer, and I am in the UK (UTC and British Summer Time).

I am also affected by this bug in continental Europe. I have two Kubuntu installations; one is an Edgy installation with sun-java5, the other one is Feisty with sun-java6. I tried the sample application DateTest from http://

Edgy with Java 5 shows the correct time (currently GMT offset +1 and dstSavings +1) and the correct time zone "id=Europe/Amsterdam". Feisty with Java 6, however, is affected by this bug (or at least a bug very similar to this one). The time is off by one hour because the DST savings are not taken into account. Java by default uses a time zone called "id=GMT+01:00" instead of "id=Europe/Amsterdam", unless I run

java -Duser.timezone=Europe/Amsterdam DateTest

instead of simply

java DateTest

So somehow Java 6 under Feisty is getting the wrong time zone from the system. I checked /etc/timezone, which correctly reads "Europe/Amsterdam", and date also reports the correct time. I switched to time zone America/Chicago using KDE's application for setting the time and time zone, but the time zone in Java changed to "id=GMT-06:00", not to "id=America/Chicago".

Another workaround is to set the environment variable TZ (for time zone, obviously), e.g. by executing

export TZ=`cat /etc/timezone`

If TZ is set, Sun's Java always uses its value under Linux. See

This solution also works for me (creating a link to /usr/share/ bug against Dapper, but it seemed to go away with Edgy (DST was not in effect when I installed Edgy and the problem did not appear when DST started in March). However, it returned when I installed Feisty beta recently, and it didn't matter whether I used sun-java5 or sun-java6.

Thanks for this solution. It will at least allow me to use the applications that depend on it until a better solution is provided. Allen Crider

Christian Assig:
> expects to find a symbolic link here, and the time zone detection fails
> if Java encounters a regular file instead of the link.
> Setting to confirmed because Allen can reproduce the behaviour

I updated my Edgy installation with sun-java5 to Feisty with sun-java6 after Feisty's final release. This machine is still not affected by this bug. My other machine now shows the wrong time and time zone again, probably because the symbolic link was overwritten during a package upgrade with a regular file.

I have found out that even without the symbolic link, Java detects the time and time zone correctly when I choose a time zone that has no DST at the moment, e.g. Australia/Perth or Asia/Shanghai.

Unfortunately, I don't know yet what the difference between the two installations causing this bug might be. I already did a directory diff on the /etc/java and /etc/java-6-sun folders of both systems, but they are identical. /etc/localtime is identical on both machines as well. Any hints regarding what else I could compare would be appreciated.

Please read http://

Mike Green wrote:
> Please read
> http://
> particularly question 9. The local timezone database, localtime links,
> and the TZ environment variable do not have anything to do with the
> problem (according to Sun).

That doesn't quite make sense. Java may have its own copy of the timezone database, but it must use something from the user or system to determine the timezone on the machine where it is running. And given the behavior we've seen, my guess is that it uses the environment variable TZ if it is set, or the /etc/localtime file in some manner otherwise. Or it may be using another library that depends on those to determine the timezone. I haven't had the time to try to track down the Java source code and determine what it uses, and I'm afraid I'm not going to be able to anytime soon. Allen Crider

> That doesn't quite make sense. Java may have its own copy of the
> timezone database, but it must use something from the user or system to
> determine the timezone on the machine where it is running.
It makes sense if you consider that JVMs run on many platforms that handle timezone information in their own way. Perhaps there are individual Java classes/methods that use the host OS timezone environment. From what I am seeing in that Sun FAQ, you can't depend on that, which is why they provide the tzupdater tool. Basically they embed a local copy of the official timezone database. You also have to consider that there might be multiple copies of JREs/JDKs running on individual machines. All I know is that even after successfully updating my timezone database, Java was still not functioning correctly; I have to run the tzupdater tool every time.

Well, since it appears that the Sun Java package is not going to be updated for Dapper users, I might as well post my workaround. The Sun tzupdater package requires a registered account to download it, and I have no idea what licensing restrictions it is distributed under. I created this package to update all of my servers.

1) Download the tzupdater zip archive from http://
2) Extract the attached Debian source package: tar xvfz sun-java5-
3) Move the downloaded zip file into the extracted directory: mv tzupdater-*.zip sun-java5-tzupdater
4) Edit sun-java5-
5) Build the package:
   cd sun-java5-tzupdater
   dpkg-buildpackage -b -rfakeroot

You now have a deb package in the parent directory. Installing it on a box that already has the Dapper-provided Sun Java package will update the currently installed Java timezone database. The Sun tzupdater alters the already existing package, so any updates or reinstalls of the original Dapper package will require the updater to be reinstalled. The postinst detects if Java is present and if the currently installed Java even needs updating.../

Possible ways to solve this bug that I could imagine would be to remove /usr/share/

Christian Assig wrote:
> /usr/share/
> all systems. So I guess whether or not this bug appears depends on the
> order in which Java cycles through the files in /usr/share/
> it finds the symbolic link to /etc/localtime first, the detection fails
> because the time zone cannot be derived from the path
> /usr/share/
> first, the detection succeeds.
>
> Possible ways to solve this bug that I could imagine would be to remove
> /usr/share/
> applications/
> correct order (bad) or implement a check in
> j2se/src/
> /usr/share/
> links.

Given the above information, I see three other possible solutions:

1. Add a check in TimeZone_md.c to have the second or third check be an attempt to read /etc/timezone (I don't like this because I'd rather not have Ubuntu-specific modifications to the Java source if possible)

2. Modify whichever package( file to /etc/localtime to create a symbolic link instead (I see no reason why this should be a problem; I've encountered no problems as a result of replacing the file with a symbolic link manually)

3. Modify the same package( (I don't like this as well as 2 simply because I hate to see another directory added to /etc just to satisfy Sun Java when there is a better solution)

After a little searching, at least one program that would need to be modified if solution 2 or 3 is adopted is /usr/sbin/tzconfig. Another thing that should be looked at is the installation scripts for the package tzdata, as it was an update to that package that recently removed the link I had created manually and forced me to recreate it. And I don't know whether all of the GUI administration tools that allow a user to change the timezone are wrappers around tzconfig or if they have another method ...

Look guys, according to Sun (http://

"NOTE: The Java platform's time zone data is completely independent from your operating system's time zone data. Therefore, you do not need to update your operating system for the Java platform to work correctly."
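To make the detection-order problem described above concrete, here is a sketch of the kind of scan being discussed: find a file under the zoneinfo directory whose bytes match /etc/localtime and derive the zone ID from its relative path. This is an illustration written for this discussion, not Sun's actual TimeZone_md.c code; note that which match is found first depends on directory traversal order, which is exactly the nondeterminism the thread complains about:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Optional;
import java.util.stream.Stream;

public class ZoneInfoScan {

    // Returns the relative path (e.g. "Europe/Amsterdam") of the first
    // regular file under zoneinfoDir whose contents are byte-identical
    // to the given localtime file, if any.
    static Optional<String> guessZone(Path localtime, Path zoneinfoDir) throws IOException {
        byte[] wanted = Files.readAllBytes(localtime);
        try (Stream<Path> files = Files.walk(zoneinfoDir)) {
            return files
                    .filter(Files::isRegularFile)
                    .filter(p -> {
                        try {
                            // Compare file contents, not names.
                            return Arrays.equals(Files.readAllBytes(p), wanted);
                        } catch (IOException e) {
                            return false; // unreadable entries are skipped
                        }
                    })
                    .map(p -> zoneinfoDir.relativize(p).toString())
                    .findFirst();
        }
    }
}
```

If a symbolic link back to /etc/localtime sits inside the scanned tree, a scan like this can stumble on it first and fail to derive a usable zone name, which matches the behaviour reported in the thread.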
From everything I have read, it appears that after 1.4.0, Java uses its own internal Olson database. It seems to me you might be barking up the wrong tree by looking into utility classes and such... Why not just take a look at what Sun is saying you have to do to fix the problem?

@Mike: that is also exactly the behaviour I could reproduce. Let me clarify again what Sun's statement means: when a country decides to change the way it handles DST (such as Western Australia did last year or the U.S. did this year), you get these changes to the DST attributes of your time zone into your Java virtual machine by updating Java; you do not have to update your operating system's time zone database. But this can only work if the Java VM knows in which time zone it is, which it does in the way I have described.

@Allen: Solution 2 may be a problem. The reason for /etc/localtime not being a link is that /usr/share/zoneinfo might be mounted from a different partition than /etc. So it may be possible at boot time that a program wants to read /etc/localtime before /usr/share/zoneinfo is mounted, which would fail if /etc/localtime is a symbolic link to /usr/share/

Regarding solutions 2 and 3: I already tried to rename /usr/sbin/tzconfig. KDE's time configuration tool is still able to change the time zone if /usr/sbin/tzconfig does not exist, so unfortunately it does not suffice just to change /usr/sbin/tzconfig; at least the KDE code would have to be changed, probably other packages as well.

I should have thought about the possibility of having /usr/share/zoneinfo on a different partition. And I had doubts about changing tzconfig being a good solution, as it did not appear to be easy to wrap. It would be much easier (at least for me) to reproduce the needed functionality in whatever programming language an application was written in than wrapping tzconfig. I'm beginning to believe there is not going to be a nice long-term solution to this problem unless Sun changes their code.
As I see it right now, we're coming down to just a few somewhat practical solutions, all with drawbacks:

1. Modify the source for sun-java, either to add a new rule to look for the file /etc/timezone if TZ is not set or to modify the way the rule for /etc/localtime being a regular file works. If Sun were to incorporate this change into their source, it would be the best solution, but I don't like it otherwise. This probably comes down to whether Ubuntu is handling time zones differently than other Linux distributions (I haven't run any other distributions since installing Breezy and don't remember how others did it before), and if so, whether it is big enough for Sun to incorporate an Ubuntu-specific fix.

2. Modify the installation script(s) for sun-java to provide one of the things being looked for. That would probably come down to either creating the file /etc/sysconfig/ I've never looked at /etc/sysconfig/ current systems), I don't know if that could be done in a manner that wouldn't break the next time the user changed time zones, although I doubt if many users do that very often. And I really haven't kept up with whether there is a standard way for setting an environment variable for all users. The way I would have done this in the past would have been to add it to /etc/bash.bashrc or a similar file. This has the disadvantage that it could possibly be lost the next time an update replaced /etc/bash.bashrc.

3. Change the name of the java executable and create a wrapper script that would set needed environment variables.

4. Modify applications that use this feature to work around it. For example, the only application I use that has been affected by this bug is FreeGuide. TZ could be set in the script /usr/bin/freeguide. I'm not crazy about this solution, but it should work. OTOH, I don't know what impact this might have on users that are using a different JVM.
However, the last time I tried (when I first discovered this bug on Dapper), I was unable to get FreeGuide to run at all with any other available JVM. I still can't get it to work with GCJ, but I haven't tried installing any other JVMs.

I wish I had a better idea, but I can't think of anything else that would work for everyone. For now, I can live with manually replacing /etc/localtime with a link, although if it isn't solved, I'll probably fall back on adding TZ to my .bashrc (workable on my personal system that doesn't have any other users). Someone else will have to decide whether it is a big enough problem for other users to implement a system level fix. ... ?

Mike Green wrote:
>>?

It sounds like we need to divide this into two bugs. My problem has a lot more to do with the localtime/TZ situation. Allen

> It sounds like we need to divide this into two bugs. My problem has a
> lot more to do with the localtime/TZ situation.

As far as the localtime/TZ situation, why not just handle it during the install of the jdk/jre? Determine if /etc/localtime is a link or not and warn the user that they need to copy over whatever localtime is linked to if they have a separate mount for /usr, or set TZ somewhere...

> It sounds like we need to divide this into two bugs. My problem has a
> lot more to do with the localtime/TZ situation.

I agree. Because of the title of this bug, I suggest we keep track of the time zone detection in this bug, and someone reports a new bug for the problem regarding deprecated time zone data in Java's time zone database.

> As far as the localtime/TZ situation, why not just handle it during the install of the jdk/jre?

Pretty simple: a lot of people have Ubuntu on their laptops and travel from time zone to time zone (like me, e.g.). By just determining the time zone during the package installation, Java would not be updated once you change the time zone settings of the operating system, at least not until you install a new Java package.
> Determine if /etc/localtime is a link or not and warn the user that they need to copy over whatever localtime is linked to if they have a separate mount for /usr, or set TZ somewhere...

As described above, this does not work either. If you change /etc/localtime to a symbolic link during the installation of the Java package, it might be changed back by any program included in Ubuntu that makes any kind of changes to the time zone settings. If /etc/localtime is a regular file (that is the current situation), you may encounter the bug described here.

Some thoughts about Allen Crider's suggestions from 2007-05-05:

1. The time zone detection already checks if /etc/localtime is a regular file or a symbolic link, so why not make it check if the files in /usr/share/zoneinfo are regular files? Of course, this could cause trouble if someone decides to change all files in /usr/share/zoneinfo to symbolic links for some reason. Simply reading /etc/timezone if it exists would be my personal favourite.

2. Like I said before, I guess there are quite a lot of people using Ubuntu on laptops that change time zones quite often. So I think if you want to change the Ubuntu environment to make Sun's current implementation detect the time zone correctly, you will have to change all applications in Ubuntu that modify time zones to either set the environment variable TZ or to create /etc/sysconfig/

3. Very ugly, but it could work. Like I wrote before, you would have to put something like export TZ=`cat /etc/timezone` into the wrapper scripts for all Java VM executables.

4. A Java VM is supposed to know the correct time and time zone, and it should not be the duty of any single Java-based application to implement a workaround. So I guess this is just a workaround, not a bug fix at all.

A similar bug has already been reported at Sun. It is marked as being in progress, but I have no idea how long it might take until Sun takes care of it.
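To illustrate what the application-level workaround criticized in point 4 above would actually look like in code — a program that reads /etc/timezone itself and fixes its own default zone — here is a sketch. The class name and the decision to fail soft when the file is missing are my own choices, not something proposed verbatim in the thread:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.TimeZone;

public class TimeZoneFix {

    // Reads a zone name such as "Europe/Amsterdam" from a file in
    // /etc/timezone format and returns the matching TimeZone.
    static TimeZone zoneFromFile(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return TimeZone.getTimeZone(reader.readLine().trim());
        }
    }

    public static void main(String[] args) {
        try {
            // Overrides whatever zone the JVM auto-detected at startup.
            TimeZone.setDefault(zoneFromFile("/etc/timezone"));
        } catch (IOException e) {
            // Not a Debian-style system; keep the detected default.
        }
        System.out.println("Default zone: " + TimeZone.getDefault().getID());
    }
}
```

The thread is right that this is a per-application workaround rather than a fix, but it shows how little code the /etc/timezone lookup requires.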
See http://

Another fix for this bug might be to rewrite the scanning of /usr/share/zoneinfo to continue if a file is found that is identical to /etc/localtime, but whose name does not correspond to a time zone known to Java.

People were mentioning writing the correct timezone information to /etc/sysconfig/

ZONE="America/
UTC=false
ARC=false

This doesn't seem like such a horrible idea. There may be even other (non-Ubuntu) pieces of software that use the /etc/sysconfig/

I use Java pretty extensively on Ubuntu servers/ As far as I can tell, what timezone Java thinks is correct appears to be random per machine (a quick check of 10 workstations appears to find a high level of variance depending on what Ubuntu release they are running, if they were upgraded from older releases, and what architecture they are running). I would strongly recommend a solution be found, and not wait for Sun to save the day (which I've done in the past with not much luck). I don't have these problems on my Red Hat servers. Java is a very common server application, and it should really *work* on Ubuntu servers, even if that means living with a non-ideal workaround (e.g. writing out /etc/sysclock/

Joe Kislo schrieb:
> I would strongly recommend a solution be found, [...]

Sure, you can recommend that, but read the license for what a distributor is allowed to do, and attach a patch which fixes the problem and is conforming to the license.

Matthias: Are you referring to what a distributor can do for Sun's JVM? The solution I have put onto my Ubuntu servers would require no changes to Java. It would probably require changing tzconfig to write the timezone out to a second place (/etc/sysconfig)

Arik Kfir wrote:
>)
>

I haven't looked at much of the JVM source, so I have no idea how difficult this would be. I would object to doing it just because I wouldn't expect the Ubuntu team to have to maintain a patch specific to Ubuntu.
I would want any changes made to go into the Sun code where it would be maintained as part of the upstream version of the JVM. And as Java is supposed to be cross-platform and provide better security, there may be a good reason for it not depending on the operating system for the timezone database. However, I don't know whether such a change would fix the problem I reported anyway. The problem I have is that the JVM is sometimes unable to determine which time-zone it is in. I figured out at one time what the algorithm was that the JVM uses to determine the correct time-zone, and it wouldn't work consistently with Ubuntu/Kubuntu, at least not while DST was in effect. For now, I've gotten around the problem by adding export TZ=`cat /etc/timezone` to /etc/profile. I'm not convinced that that is a good solution, but there are problems with every other solution that has been suggested. Allen Crider

Allen Crider wrote:
> I would want any changes made to go into the Sun code where it
> would be maintained as part of the upstream version of the JVM.

Yes, you are right. I was only asking because I thought maybe it would be a small patch, of which Ubuntu manages quite a few, in which case it would not be too unusual. If it's a major undertaking - I agree wholeheartedly.

> And as Java is supposed to be cross-platform and provide better
> security, there may be a good reason for it not depending on the
> operating system for the timezone database.

I'm afraid I don't agree with that - the Java platform does not guarantee EXACT runtime results regardless of the platform - they (JVM architects) are not naive and we all know that there are, and always will be, differences. For example, the Java2D module uses DirectX on Windows, and OpenGL or X11 on Linux/UNIX. This is a rational design - all operating systems provide pretty much the same set of services, only in different APIs - Java simply wants to abstract these different APIs, and does so very well.
Time & Date is simply another API the OS provides, IMO. Of course, that's a rant I should save for Sun... ;-)

> However, I don't know whether such a change would fix the problem I
> reported anyway. The problem I have is that the JVM is sometimes unable
> to determine which time-zone it is in.

I think it will solve it, because finding out which time-zone you are in should be done by calling one of the shared libraries the OS provides that does this (I'm assuming there is such a lib...) rather than duplicating the code, which undoubtedly has already been written by someone. Anyway - it's just an opinion and as I said, I agree this should be directed to Sun and not Ubuntu. Cheers.

Please fix this bug. We are deploying Ubuntu in our environment, and have many Java developers. All have to create a symlink to /usr/share/ Thank you

I'm on the latest version of Ubuntu 9.10 with Java 1.6.0_15-b03 and I ran into the exact same problem. Had to create a symlink for /etc/localtime. According to Sun this is fixed in 1.6.0_18-b02. Would it be possible to get the repository updated with the newest version (I looked around and couldn't find a way to help out with this) so that this bug could be closed already. Thank you.

Looks like this was fixed in Java 6u18! http://

==> 6456628 java classes_util_i18n (tz) Default timezone is incorrectly set occasionally on Linux (Sun Bug #6456628 is now listed as "Fix Shipped")

Java 6u20 is available for Ubuntu 10.04 as of today!

Upgradeable: sun-java6-bin sun-java6-javadb sun-java6-jdk sun-java6-jre sun-java6-plugin sun-java6-source

# java -version
java version "1.6.0_16"
Java(TM) SE Runtime Environment (build 1.6.0_16-b01)
Java HotSpot(TM) Server VM (build 14.2-b01, mixed mode)

# apt-get update
# sudo apt-get install sun-java6-bin sun-java6-javadb sun-java6-jdk sun-java6-jre sun-java6-plugin sun-java6-source
Reading package lists... Done
Building dependency tree
Reading state information...
Done
Suggested packages: sun-java6-demo openjdk-6-doc sun-java6-fonts
The following packages will be upgraded:
  sun-java6-bin sun-java6-javadb sun-java6-jdk sun-java6-jre sun-java6-plugin sun-java6-source
6 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 84.7MB of archives.
After this operation, 6,291kB of additional disk space will be used.
....

# java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Java HotSpot(TM) Server VM (build 16.3-b01, mixed mode)

### To test: ###

# ls -l /etc/localtime
-rw-r--r-- 1 root root 3519 2010-04-11 16:40 /etc/localtime

(I've also still got the /usr/share/

# vi ShowDate.java
====SNIP=====
import java.util.Date;

public class ShowDate {
    public static void main(String[] arg) {
        // Print statement reconstructed; it was truncated in the original
        // post, but the transcript below shows the current date printed.
        System.out.println(new Date());
    }
}
====SNIP=====

# javac ShowDate.java
# java ShowDate
# date -u
Thu Apr 22 16:30:32 UTC 2010
# date
Thu Apr 22 12:30:34 EDT 2010
# java ShowDate
Thu Apr 22 12:30:44 EDT 2010
# date -s 20100301
Mon Mar 1 00:00:00 EST 2010
# java ShowDate
Mon Mar 01 00:00:05 EST 2010
# date -s 20100315
Mon Mar 15 00:00:00 EDT 2010
# java ShowDate
Mon Mar 15 00:00:02 EDT 2010
# sudo date -s 20100101
Fri Jan 1 00:00:00 EST 2010
# java ShowDate
Fri Jan 01 00:00:01 EST 2010

Can anyone else confirm the issue is resolved?

Re: "Java 6u20 is available for Ubuntu 10.04 as of today!" Just a note, I forgot that I had enabled the Canonical Partner repo on this system. That is where this version of sun-java6-* is available from. So to get it, you'll need to add this to your apt sources:

deb http://

Obviously I cannot reproduce this at the current time (since it involves daylight saving) but I have been searching the Java bug tracker to try and see if I could find it reported there. Unfortunately there are quite a few bugs that may or may not be related. It is difficult for me to determine since I'm not that adept at Java. I would advise the original reporter or the developers that found the bug to do the following:

1.
Try to determine if there is any reason to believe that this should be related to this distribution, e.g. does either of the classes involved use any methods outside Java to determine the timezone, like say the system date or something of that sort. This could be supplemented by testing the code on another distro.

2. If it turns out that this is purely an internal Java issue, then go upstream and see if you can figure out if this bug has already been filed there. If it hasn't, then file it and link to it in this thread.

During my searches I found out that there are some "similar" bugs reported in the TimeZone class. But I couldn't quite figure out if that class is being used by the GregorianCalendar.

The Java bug tracker search is here:
http://bugs.sun.com/bugdatabase/index.jsp
http://bugs.sun.com/bugdatabase/search.do?process=1&category=&bugStatus=&subcategory=&type=bug&keyword=calender+dst

and if searched with keywords like "calendar DST" it returns the following hits: http://

Happy hunting
https://bugs.launchpad.net/ubuntu/+source/sun-java5/+bug/49068
I'm trying to create a cumulative list, but my output clearly isn't cumulative. Does anyone know what I'm doing wrong? Thanks

import numpy as np
import math
import random

l = []
for i in range(50):
    def nextTime(rateParameter):
        return -math.log(1.0 - random.random()) / rateParameter
    a = np.round(nextTime(1/15), 0)
    l.append(a)
np.cumsum(l)
print(l)

The cumulative sum is not taken in place, you have to assign the return value:

cum_l = np.cumsum(l)
print(cum_l)

You don't need to place that function in the for loop. Putting it outside will avoid defining a new function at every iteration and your code will still produce the expected result.
https://codedump.io/share/wJjS9Vj9pxdr/1/cumulative-sum-list
Thanks Debajit.

Debajit's Dynamic CRM Blog

This one feature that I am going to pen down here I have personally been longing for quite some time now. So before going into the HOW part of it, let's understand the WHY part of it.

When do I need to show a Lookup dialog programmatically? Well, the answer is: on numerous occasions. For example, you may need to throw up a lookup dialog on change of a field on the form, or on click of a button on a web resource. All this time we have achieved this, but not in a supported way. Probably we ended up using some method of an internal namespace. But all these are unsupported and mere workarounds to this perennial problem.

Well, no more messing around. Microsoft has finally brought in the So let's see how it works. Let's take a not so good example here. Let's say whenever… View original post 296 more words
https://passion4dynamics.com/tag/open-lookup-dialog-using-javascript-crm/
How to: Group Results by Contiguous Keys (C# Programming Guide)

Updated: July 20, 2015

For the latest documentation on Visual Studio 2017 RC, see Visual Studio 2017 RC Documentation.

The following example shows how to group elements into chunks that represent subsequences of contiguous keys. For example, assume that you are given the following sequence of key-value pairs:

A: We, A: Think, A: That, B: Linq, C: Is, A: Really, B: Cool, B: !

The following groups will be created in this order:

We, Think, That
Linq
Is
Really
Cool, !

The solution is implemented as an extension method that is thread-safe and that returns its results in a streaming manner. In other words, it produces its groups as it moves through the source sequence. Unlike the group or orderby operators, it can begin returning groups to the caller before all of the sequence has been read. Thread-safety is accomplished by making a copy of each group or chunk as the source sequence is iterated, as explained in the source code comments. If the source sequence has a large sequence of contiguous items, the common language runtime may throw an OutOfMemoryException.

The following example shows both the extension method and the client code that uses it.

using System;
using System.Collections.Generic;
using System.Linq;

namespace ChunkIt
{
    // Static class to contain the extension methods.
    static class MyExtensions
    {
        // The ChunkBy implementation is elided at this point in the original article.
    }

    // A simple named type is used for easier viewing in the debugger. Anonymous types
    // work just as well with the ChunkBy operator.
    public class KeyValPair
    {
        public string Key { get; set; }
        public string Value { get; set; }
    }

    class Program
    {
        // The source sequence.
        public static IEnumerable<KeyValPair> list;

        // Query variable declared as class member to be available
        // on different threads.
        static IEnumerable<IGrouping<string, KeyValPair>> query;

        static void Main(string[] args)
        {
            // Initialize the source sequence with an array initializer.
            list = new[]
            {
                new KeyValPair{ Key = "A", Value = "We" },
                new KeyValPair{ Key = "A", Value = "Think" },
                new KeyValPair{ Key = "A", Value = "That" },
                new KeyValPair{ Key = "B", Value = "Linq" },
                new KeyValPair{ Key = "C", Value = "Is" },
                new KeyValPair{ Key = "A", Value = "Really" },
                new KeyValPair{ Key = "B", Value = "Cool" },
                new KeyValPair{ Key = "B", Value = "!" }
            };

            // Create the query by using our user-defined query operator.
            query = list.ChunkBy(p => p.Key);

            // ChunkBy returns IGrouping objects, therefore a nested
            // foreach loop is required to access the elements in each "chunk".
            foreach (var item in query)
            {
                Console.WriteLine("Group key = {0}", item.Key);
                foreach (var inner in item)
                {
                    Console.WriteLine("\t{0}", inner.Value);
                }
            }
            Console.WriteLine("Press any key to exit");
            Console.ReadKey();
        }
    }
}

To use the extension method in your project, copy the MyExtensions static class to a new or existing source code file and, if it is required, add a using directive for the namespace where it is located.

LINQ Query Expressions
Classification of Standard Query Operators by Manner of Execution
https://msdn.microsoft.com/en-us/library/cc138361.aspx
Hi there,

All I want to do is read the jokes from a text file (using Scanner) and then write them out to three different files with three different extensions (fileName1.obj, fileName2.dat, fileName3.txt); that's part A.

Part B: Create a class called FileWatcher.

• This class can be given several filenames that may or may not exist, corresponding to .dat, .txt and .obj files.
• Include all the filenames which correspond to the associated file type from part A above.
• For example FileWatcherObj should include all filenames created with the suffix .obj from part A.

A thread of execution should be started for each filename. Each thread will periodically check for the existence of its file. Enter in "exit" when all the file names you are watching have been entered. Be sure to enter a file that does not exist. If the file appears, the thread will write a message indicating the file that was found to the console.

• Then, it will randomly display a joke in its separate JFrame thread properly titled for the classification.
• Each joke should display for two seconds for every 5 words in the joke. Jokes should be only less than 3 sentences.

Also create another thread that checks to see if all the jokes are displayed at least twice. When all have been displayed, terminate the execution.

Hint: Create a static ThreadGroup object in a class (ReadFile) so you can refer to the thread group object with the class name for all the different threads.

static ThreadGroup fileWatchers=new ThreadGroup("File Watchers");

Put all the FileWatcher threads in a thread group so you can terminate the group of threads as a whole by calling their interrupt() method that is a condition in the run() method's loop.
ReadFile.fileWatchers.interrupt();

Help me please

import java.util.*;
import java.io.*;

public class Joking{
    public static void main(String[] args) throws IOException{
        File jokesDat = new File("jokesDat.dat");
        File jokesChar = new File("jokesChar.txt");
        File jokesObj = new File("jokesObj.obj");
        int i = 0;
        String[] joke = new String[100];
        Scanner scanner = new Scanner(new File("jokes.txt"));
        try{
            // hasNextLine() matches the nextLine() call below, and the
            // bounds check prevents an ArrayIndexOutOfBoundsException
            // when the file has more than 100 lines.
            while (scanner.hasNextLine() && i < joke.length){
                joke[i] = scanner.nextLine();
                i++;
                //System.out.println("Joke #" + i + " is " + joke[i-1]);
            }
        }catch(Exception e){
            System.out.println("Reading failed");
            System.exit(0);
        }
        int jokeCount = i; // number of jokes actually read

        DataOutputStream outDat = null;
        DataOutputStream outTxt = null;
        DataOutputStream outObj = null;
        try{
            jokesDat.createNewFile();
            jokesChar.createNewFile();
            jokesObj.createNewFile();
            outDat = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(jokesDat)));
            outTxt = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(jokesChar)));
            outObj = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(jokesObj)));
        }catch(Exception e){
            System.out.println("Creation failed");
            System.exit(0);
        }
        try{
            // Write only the jokes that were read; looping to joke.length
            // would pass null entries to writeUTF and throw a
            // NullPointerException.
            for (i = 0; i < jokeCount; i++){
                outDat.writeUTF(joke[i]);
                outTxt.writeUTF(joke[i]);
                outObj.writeUTF(joke[i]);
            }
        }catch(Exception e){
            System.out.println("Writing failed");
        }
        finally{
            outDat.close();
            outTxt.close();
            outObj.close();
        }
    }
}
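Since the question is really about part B, here is a minimal sketch of the ThreadGroup-based watcher the hint describes. The static group mirrors the hint, but the polling interval, the watch() helper, and the console message are my own assumptions, not the assignment's required design (the JFrame joke display is left out):

```java
import java.io.File;

public class FileWatcher implements Runnable {

    // Shared group, as in the hint, so every watcher can be stopped at
    // once with FileWatcher.fileWatchers.interrupt();
    static ThreadGroup fileWatchers = new ThreadGroup("File Watchers");

    private final String fileName;

    public FileWatcher(String fileName) {
        this.fileName = fileName;
    }

    public void run() {
        File file = new File(fileName);
        // Poll until the file exists or this thread is interrupted.
        while (!Thread.currentThread().isInterrupted()) {
            if (file.exists()) {
                System.out.println("Found file: " + fileName);
                return;
            }
            try {
                Thread.sleep(250); // polling interval is an arbitrary choice
            } catch (InterruptedException e) {
                return; // a group-wide interrupt() ends the watch
            }
        }
    }

    // Convenience helper: start one watcher thread inside the group.
    public static Thread watch(String fileName) {
        Thread t = new Thread(fileWatchers, new FileWatcher(fileName));
        t.start();
        return t;
    }
}
```

Each filename from part A gets its own thread via watch(), and the whole group can be shut down with a single fileWatchers.interrupt() once every joke has been displayed twice.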
https://www.daniweb.com/programming/software-development/threads/264977/io-files-and-multithreading
The C# language is the "default" language of the .NET framework (the FCL is written in C#). It is a .NET-only, case-sensitive, and fully object-oriented language that in many ways resembles the C++, Delphi, and Java programming languages. If you've read the C++ portions of this book or if you already know C++ or Java, you'll have no trouble learning the C# language, since it is syntactically closest to these two languages. To learn more about the C# language, let's create a C# console application project. When you select the Console Application item to create a C# console application project, the IDE will first ask you to name the project (see Figure 29-1) and then will automatically save all project files to disk. The code generated for the C# console application is displayed in Listing 29-2.

Figure 29-1: Creating a C# console application

Listing 29-2: The source code of a C# console application

using System;

namespace Project1
{
    /// <summary>
    /// Summary description for Class.
    /// </summary>
    class Class
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main(string[] args)
        {
            //
            // TODO: Add code to start application here
            //
        }
    }
}

Besides the project code, the list of references in a project is extremely important. In order to use a specific class, we must first reference the assembly in which the class is implemented. For instance, to use the Console class (declared in the System namespace), we have to reference the System assembly. If you look at the References node in the Project Manager (see Figure 29-2), you'll notice that the console application already references not only the most important System assembly but also the System.Data and System.XML assemblies.

Figure 29-2: Project references

To reference other assemblies, right-click the References node in the Project Manager and select Add Reference to display the Add Reference dialog box (see Figure 29-3).
The Add Reference dialog box allows you to reference standard and custom .NET assemblies.

Figure 29-3: The Add Reference dialog box

The purpose of the first directive in the code, the using directive, is to allow us to use the types in a specific namespace without having to specify the namespace. In this case, we can use the Console class and its methods without having to specify the System namespace before the Console class:

// without using System;
System.Console.WriteLine("Hello from C#!");

// with using System;
Console.WriteLine("Hello from C#!");

The using directive can also be used to create class or namespace aliases, which reduce typing even more. For instance, here's how you can create an alias for the System.Console class and use the class through the alias:

using System;
using Con = System.Console;

namespace Project1
{
    class Class
    {
        [STAThread]
        static void Main(string[] args)
        {
            Con.WriteLine("Hello from C#");
            Con.WriteLine("Press any key to continue...");
            Con.ReadLine();
        }
    }
}

Each project also gets its own namespace. The syntax of a C# namespace is:

namespace namespaceName { }

or

namespace name1.name2.nameN { }

If you don't like the generated namespace name, you can either change it manually or use Rename Namespace refactoring. When naming namespaces, you should use the CompanyName.TechnologyName format:

namespace Wordware.InsideDelphi { }

Since C# is completely object-oriented, the source code of the console application also contains a class and a static method called Main. The Main method is the most important method in a C# project, as it is the entry point of the program. The summary comments in the code are special comments that can be used by the compiler to generate XML documentation for the project. To have the compiler generate the documentation, check Generate XML documentation in the Code generation group box on the Project Options dialog box, as shown in Figure 29-4.
Generated XML documentation for a simple console application is displayed in Figure 29-5.

Figure 29-4: C# Project Options dialog box

Figure 29-5: XML documentation generated by the compiler

Variables in C# are declared as they are in C++: data type followed by an identifier and a semicolon. Variables declared in C++ can be initialized, but they don't have to be; if they aren't initialized, they contain random values. In C#, you can declare a variable, but you cannot use it until you initialize it:

static void Main(string[] args)
{
    int x;
    // Error: Use of unassigned local variable 'x'
    Console.WriteLine(x);
}

You can initialize a variable in C# at the same time you declare it or, of course, by using the assignment operator after the variable is declared:

int x = 1;
int y;
y = 2;

Constants in C# are defined with the reserved word const, followed by the data type, identifier, assignment operator, and value:

const int ConstValue = 2005;

Explicit typecasts in C# are written as they are in C++, by writing the target data type in parentheses before the value. All objects in the .NET framework can be converted to a string by calling the ToString() method. Here's an example of an explicit typecast and the usage of the ToString() method:

static void Main(string[] args)
{
    int x = (int)1.2;
    /* since 2005 is an int object, we can call its ToString() method */
    string s = 2005.ToString();
    string s2 = x.ToString();
    Console.WriteLine("x = {0}", x);
    Console.WriteLine("s = {0} and s2 = {1}", s, s2);
    Console.ReadLine();
}

The above example also illustrates how to use the WriteLine method to output several values to the console. The {0} and {1} parts of the string are placeholders for values, just like %d and %s are placeholders for values in the Delphi Format function. The .NET framework supports two categories of types: value types and reference types. Value types are derived from the System.ValueType class; they are allocated on the stack and directly contain their data.
Value types in the .NET framework are numeric data types; Boolean, Char, and Date types; enumerations; and structures (records). Reference types are derived directly from the System.Object class, contain a pointer to the data allocated on the heap, and are managed by the framework's garbage collector. Reference types are strings, arrays, classes, and delegates (covered in the next chapter). The .NET framework also allows us to convert value types to reference types and reference types to value types. The process of converting a value type to a reference type is known as boxing. The process of converting a reference type to a value type is known as unboxing. When boxing occurs, a new object is allocated on the managed heap and the variable's value is copied into the managed object stored on the heap. The following code illustrates when boxing occurs. In this case, boxing is done implicitly by the C# compiler:

static void Main(string[] args)
{
    char c = 'A';
    object o = c; /* implicit boxing */
    Console.WriteLine(o);
    Console.ReadLine();
}

Here's the same code in Delphi for .NET, which also results in implicit boxing of the c variable:

program Boxing;

{$APPTYPE CONSOLE}

var
  c: Char = 'A';
  o: System.Object;
begin
  o := c; { implicit boxing in Delphi for .NET }
  Console.WriteLine(o);
  Console.ReadLine();
end.

The .NET SDK includes an extremely useful tool that allows us to view the IL code produced by a .NET compiler — the IL Disassembler (ILDASM). ILDASM can show us how things work in the .NET framework, which is great, because the .NET framework, unlike Delphi's RTL and VCL, doesn't ship with its source code. In this case, we can use ILDASM to confirm that boxing actually occurs in the object o = c line. The ildasm.exe file is located in the Program Files directory under \Microsoft.NET\SDK\v1.1\Bin.
When you run it and load the appropriate assembly (in this case the project's .exe file), ILDASM shows not only the IL code but also the namespaces, classes, types, and methods to which the code belongs. Figure 29-6 shows what ILDASM looks like and also shows the disassembled C# version of the above code that contains an implicit box operation.

Figure 29-6: Using ILDASM to view assembly contents

Unboxing occurs when a reference type is explicitly typecast to a value type. The unboxing operation first checks whether the typecast value is a boxed value and then copies the value from the object instance to a value type variable. The following C# code shows when boxing and unboxing operations occur, and Figure 29-7 shows the results:

Figure 29-7: Boxing and unboxing

static void Main(string[] args)
{
    int x = 2005;
    object o = x;   /* box */
    int y = (int)o; /* unbox */
    Console.WriteLine(x);
    Console.ReadLine();
}

The C# language provides us with the if and switch statements for testing conditions. The syntax of both statements and the relational and logical operators that are used with them are the same in C++ and C#. Although syntactically the same, the switch statement in C# differs from the C++ switch statement. The C# switch statement doesn't allow fall-through (all cases must be followed by the break statement) and it supports string cases. The following listing shows both statements in action and also shows how to use the System.Convert class to convert types in .NET (in this case, how to convert a string to an integer).
Listing 29-3: if and switch statements

using System;

namespace Wordware.InsideDelphi
{
    class Conditions
    {
        [STAThread]
        static void Main(string[] args)
        {
            Console.Write("Enter a number: ");
            string userValue = Console.ReadLine();
            // convert string to int using the Convert class
            int num = Convert.ToInt32(userValue);
            if ((num < 1) || (num > 5))
                Console.WriteLine("Invalid number");
            else
            {
                /* C# switch doesn't allow fall through */
                switch (num)
                {
                    case 1:
                        Console.WriteLine("One");
                        break;
                    case 2:
                        Console.WriteLine("Two");
                        break;
                    case 3:
                        Console.WriteLine("Three");
                        break;
                    default:
                        Console.WriteLine("Four or five");
                        break;
                }
            }
            Console.ReadLine();
        }
    }
}

Arrays in C# are reference types, and because of that, they cannot be declared as simply as variables of primitive types like integer. To declare an array in C#, the following syntax is used:

data_type[] array_name;

To actually use an array, it must be instantiated with the reserved word new:

data_type[] array_name = new data_type[nr_of_elements];

Arrays can also be automatically initialized using the following syntax:

type[] array = new type[nr] {val1, val2, valn};

The following listing illustrates both how to declare and automatically initialize a one-dimensional array in C# and how to loop through the array using all four C# iteration statements: for, while, do, and foreach (the C# version of Delphi's for-in loop). The for, while, and do loops work as they do in C++, so only the foreach loop needs to be described. Like Delphi's for-in loop, the foreach loop is used to loop through arrays and collections. The syntax of the C# foreach loop is:

foreach (data_type identifier in array_or_collection) statement;

The listing also shows how to determine the length of the array using its Length property. C# arrays inherit the Length property from the System.Array class.
Listing 29-4: Arrays in C#

using System;

namespace Wordware.InsideDelphi
{
    class Arrays
    {
        [STAThread]
        static void Main(string[] args)
        {
            int[] arr = new int[10] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
            int i;

            /* for */
            for (i = 0; i < arr.Length; i++)
                Console.WriteLine(arr[i]);

            /* while */
            i = 0;
            while (i < arr.Length)
                Console.WriteLine(arr[i++]);

            /* do..while */
            i = 0;
            do {
                Console.WriteLine(arr[i++]);
            } while (i < arr.Length);

            /* foreach */
            foreach (int x in arr)
            {
                Console.WriteLine(x);
            }

            Console.ReadLine();
        }
    }
}

When you're building .NET applications, you can either declare an array as you always do, or you can create an instance of the System.Array class. To instantiate the System.Array class, you need to call its CreateInstance method and pass the element type and the number of elements you want the array to have. When calling the CreateInstance method, you can't directly pass a type like string to the method; you need to use the typeof operator (TypeOf in Delphi for .NET), which returns the required System.Type object that describes a data type. Listing 29-5 shows how to create an instance of the System.Array class in Delphi for .NET.

Listing 29-5: Instantiating and using the System.Array class

program Project1;

{$APPTYPE CONSOLE}

uses
  SysUtils;

var
  arr: System.Array;
  s: string;
  i: Integer;
begin
  { array[0..9] of string }
  arr := System.Array.CreateInstance(TypeOf(string), 10);
  arr[0] := 'Using the ';
  arr[1] := 'System.Array ';
  { the SetValue method can be used to assign a value to an element }
  arr.SetValue('class...', 2);

  for s in arr do
  begin
    { if s <> '' can also be used }
    if s <> nil then
      Console.Write(s);
  end;

  { the GetValue method can also be used to read element values }
  for i := 0 to Pred(arr.Length) do
    Console.Write(arr.GetValue(i));

  Console.ReadLine();
end.
Besides the System.Array class, which can be used to create arrays with a known number of elements, we can also use the ArrayList class, which allows us to dynamically increase or decrease the number of elements in the array. The ArrayList class exists in the System.Collections namespace. Listing 29-6 shows how to use the ArrayList class in Delphi for .NET.

Listing 29-6: Using the ArrayList class

program Project1;

{$APPTYPE CONSOLE}

uses
  System.Collections;

var
  list: ArrayList = ArrayList.Create;
  i: Integer;
begin
  list.Add('Item 1');
  list.Add('Item 2');
  list.Add('Item 3');

  for i := 0 to Pred(list.Count) do
    Console.WriteLine(list[i]);

  Console.ReadLine();
end.

Since C# is entirely object-oriented, it doesn't allow developers to create global methods, that is, methods that don't belong to a class. Delphi for .NET supports global methods, but only syntactically; the Delphi for .NET compiler compiles units as classes and global routines as their methods. The syntax of a C# method is:

return_type method_name(parameter_list) { }

To see how to create and use methods in C# and to illustrate how easy it is to create cross-language applications in .NET, let's create a simple class library (a .NET assembly with the .dll extension) and then use it in a Delphi for .NET console application. To create a C# class library, double-click the Class Library item in the C# Project category. The only difference between a C# console application project and the class library project is that the compiler will produce a .dll, not an .exe file. The HelloClass class in the library has two methods: a normal and a static method. The standard ConsoleHello() method illustrates how to create a method that can only be called when an instance of a class is created. The static method StaticConsoleHello() illustrates how to create a static method that can be called without having to instantiate the class first. For instance, the WriteLine method that we've used constantly is a static method.
If it weren't, we would have to create an instance of the Console class first and then call the WriteLine() method through the instance. Since C# methods are private by default, both methods need to be marked as public in order to use them outside of the class. Listing 29-7 shows the source code of the entire class library.

Listing 29-7: A very simple C# class library

using System;

namespace TestLib
{
    public class HelloClass
    {
        private const string MSG = "Hello from C#.";
        private const string STATIC = "Static Hello from C#.";

        /* constructor currently does nothing */
        public HelloClass() { }

        /* a simple C# method */
        public void ConsoleHello()
        {
            Console.WriteLine(MSG);
        }

        /* can be called without creating a HelloClass instance */
        public static void StaticConsoleHello()
        {
            Console.WriteLine(STATIC);
        }
    }
}

After you've compiled the class library, create a new Delphi for .NET console application project. To use the C# TestLib class library in the console application, right-click the References node in the Project Manager window to reference it. To add a reference to a custom assembly, use the Browse button in the lower part of the Add Reference dialog box, as shown in Figure 29-8.

Figure 29-8: Adding a custom assembly reference

When you add an assembly reference in a Delphi for .NET application, the IDE adds a {%DelphiDotNetAssemblyCompiler} directive that references the assembly to the source code:

{%DelphiDotNetAssemblyCompiler '..\testlib\bin\debug\TestLib.dll'}

When you add a custom assembly to either a C# or a Delphi for .NET project, the IDE checks the Copy Local option. When Copy Local is checked (see Figure 29-9), the assembly is copied to the directory of the executable file when the executable is compiled. Local copies of custom assemblies greatly reduce development and deployment issues because everything (except the standard assemblies from the .NET runtime) is stored in the application directory.
Figure 29-9: Custom assemblies are copied to the application directory

If you don't know which namespaces, types, or classes are available in an assembly, you don't have to launch ILDASM; you can double-click the assembly (in this case, TestLib.dll) in the Project Manager to display its contents.

Figure 29-10: Viewing an assembly in the IDE

Finally, to use the HelloClass class, you only have to add the TestLib namespace to the uses list. Listing 29-8 shows the entire console application that illustrates how to call both normal and static methods.

Listing 29-8: Using HelloClass from the C# TestLib class library

program UseTestLib;

{$APPTYPE CONSOLE}
{%DelphiDotNetAssemblyCompiler '..\testlib\bin\debug\TestLib.dll'}

uses
  TestLib;

var
  HC: HelloClass;
begin
  { static methods can be called without an instance }
  TestLib.HelloClass.StaticConsoleHello;

  HC := HelloClass.Create;
  HC.ConsoleHello;
  { no need to free the HC object; it gets destroyed by the garbage collector }

  Console.ReadLine;
end.

Now that you know how to build applications using both the C# and Delphi for .NET languages, let's create a Delphi for .NET package (a Delphi .NET DLL) with a unit that contains a single global procedure, and then use it in a C# console application. The rationale for building a package with a global procedure is to see Delphi for .NET compiler magic — to see how the compiler constructs namespaces, classes, and methods from units and procedures. When you create a new Delphi for .NET package, add to it a new unit named About.pas and then create the following procedure:

unit About;

interface

procedure ShowAbout;

implementation

procedure ShowAbout;
begin
  Console.WriteLine('Built with Delphi for .NET.');
end;

end.

After you create the procedure, compile the package to create the necessary DLL file and then open it with ILDASM (see Figure 29-11). The compiler uses the unit name to both create the namespace and name the class.
The namespace created by the compiler has the UnitName.Units format, so in this case, it's About.Units. The class name in this case is About. The ShowAbout procedure from the unit is converted into a public static method of the About class.

Figure 29-11: Delphi for .NET compiles units as classes.

All these internal changes have no effect on Delphi for .NET code that uses the package and the ShowAbout procedure from the unit. To use the Delphi for .NET package in a Delphi for .NET application, you have to reference the package, and then you can use the unit and the procedure as if they were part of the current project:

program Project1;

{$APPTYPE CONSOLE}
{%DelphiDotNetAssemblyCompiler '..\delphinet_firstlib\DelphiFirstLib.dll'}

uses
  About;

begin
  ShowAbout;
  ReadLn;
end.

Although Delphi allows you to use the unit and the procedure as you do in Delphi for Win32, the compiler has to play around with the emitted CIL code and call the appropriate method of the appropriate class (see Figure 29-12).

Figure 29-12: CIL code emitted for the ShowAbout procedure call

When you have to use a Delphi for .NET package in another language, like C# or VB.NET, you need to reference the package and import the appropriate namespace manually. Listings 29-9A and 29-9B show how to use the Delphi for .NET package in a C# and a VB.NET console application (you can find the VB.NET console application project item in the Other Files category on the Tool Palette). VB.NET applications can be built in Delphi because vbc.exe (the VB.NET compiler) is included in the .NET Framework SDK.
Listing 29-9A: Using a Delphi for .NET package in a C# application

using System;
/* About.Units is the namespace, the final About is the class name */
using DelphiClass = About.Units.About;

namespace CSharpUsesDelphi
{
    class UserClass
    {
        [STAThread]
        static void Main(string[] args)
        {
            DelphiClass.ShowAbout();
            Console.ReadLine();
        }
    }
}

Listing 29-9B: Using a Delphi for .NET package in a VB.NET application

Imports System
Imports DelphiClass = About.Units.About

Module VBNETUser
    Sub Main()
        DelphiClass.ShowAbout()
        Console.ReadLine()
    End Sub
End Module

Methods can accept no, one, or many parameters, and they can return no, one, or many values. As in Delphi, methods in C# can be overloaded, can be static, and can accept a variable number of parameters. By default, methods in C# are private (in C++ they are also private, but in Delphi they are public). When you want to create a method that accepts no parameters, write an empty pair of parentheses, as in Listing 29-10.

Listing 29-10: A method without parameters

public void NoParams()
{
    return; /* you can but don't have to write return */
}

When you want to create a method that accepts a single parameter, declare it inside the parameter list's parentheses as you would a variable, but without the semicolon:

Listing 29-11: A method that accepts a single parameter by value

public void OneStringParam(string UserName)
{
    System.Console.WriteLine(UserName);
}

When you want to create a method that accepts several parameters, separate the parameters with commas:

Listing 29-12: Accepting a larger number of parameters

public void SeveralParams(int One, string Two, char Three) { }

To return a single value from a method, use the function syntax:

public int RetIntegerSum(int One, int Two)
{
    return One + Two;
}

To modify values passed as parameters (if they are not constant values), have the method accept parameters by reference using the reserved word ref. C# ref parameters are equivalent to Delphi's var parameters (see Listing 29-13).
Listing 29-13: Passing parameters by reference

public void PassByReference(ref string Name, ref bool Changed)
{
    if (Name == "")
    {
        Name = "The string can't be empty.";
        Changed = true;
    }
}

You won't be able to test the PassByReference method in the Main method because non-static methods cannot be called in a static method. To test the PassByReference method, either declare the method as static or create an instance of the class inside the Main method and then call the PassByReference method on that instance. When you have a method that accepts ref parameters, you also have to use the reserved word ref when you call the method, as shown in Listing 29-14.

Listing 29-14: Calling a method that accepts parameters by reference

using System;

namespace Parameters
{
    class ParamsClass
    {
        public void PassByReference(ref string Name, ref bool Changed)
        {
            if (Name == "")
            {
                Name = "The string can't be empty.";
                Changed = true;
            }
        }

        [STAThread]
        static void Main(string[] args)
        {
            ParamsClass pc = new ParamsClass();
            bool b = false;
            string s = "";
            pc.PassByReference(ref s, ref b);
            Console.WriteLine(s); /* writes "The string ... */
            Console.ReadLine();
        }
    }
}

C# also supports out parameters, which, like Delphi out parameters, allow us to pass uninitialized parameters and modify the passed value inside the method. When calling a method that accepts an out parameter, the reserved word out must also be used.

Listing 29-15: C# out parameters

public void ModifyValues(out string Text)
{
    Text = "Initializing...";
}

[STAThread]
static void Main(string[] args)
{
    ParamsClass pc = new ParamsClass();
    bool b = false;
    string s = "";
    pc.PassByReference(ref s, ref b);
    Console.WriteLine(s); /* writes "The string ... */

    /* uninitialized string variable */
    string noInit;
    pc.ModifyValues(out noInit); /* no error! */
    Console.WriteLine(noInit);
    Console.ReadLine();
}

If you've read the Delphi for Win32 portions of this book, you'll know that objects in Delphi don't have to be passed as var parameters in order to change their fields. If you do pass objects as var parameters, nothing bad will happen, but you'll unnecessarily be passing a pointer to a pointer. The same is true for C# ref parameters. If you need to pass an object to a method, pass it by value, as shown in Listing 29-16.

Listing 29-16: Passing objects to methods

using System;

namespace Parameters
{
    class FieldClass
    {
        public string TestField = "";
    }

    class ParamsClass
    {
        public void AcceptAnObject(FieldClass fc)
        {
            fc.TestField = "Changed in the AcceptAnObject method.";
        }

        [STAThread]
        static void Main(string[] args)
        {
            ParamsClass pc = new ParamsClass();

            /* object parameters don't have to be ref */
            FieldClass fld = new FieldClass();
            Console.WriteLine(fld.TestField); // empty string
            pc.AcceptAnObject(fld);           // change fld.TestField
            Console.WriteLine(fld.TestField);

            Console.ReadLine();
        }
    }
}

There is no special syntax involved in method overloading in C#. To create an overloaded version of a method, you have to create a method with the same name but with a different parameter list.

Listing 29-16: Overloaded methods

/* overloaded method */
public void AcceptAnObject(FieldClass fc)
{
    fc.TestField = "Changed in the AcceptAnObject method.";
}

/* overloaded method */
public void AcceptAnObject(FieldClass fc, string s)
{
    fc.TestField = s;
}

To have a C# method accept a variable number of parameters, have it accept an array and mark the array parameter with the reserved word params.
Listing 29-17: Passing a variable number of parameters to a method

public void VariableParamNum(params string[] names)
{
    foreach (string name in names)
        Console.WriteLine(name);
}

[STAThread]
static void Main(string[] args)
{
    /* passing a variable number of parameters to a method */
    pc.VariableParamNum("Anders");
    pc.VariableParamNum("Danny", "Allen");
    pc.VariableParamNum("Michael", "Lino", "Steve", "David");
    Console.ReadLine();
}

In C#, enumerations are declared with the reserved word enum, which has the following syntax (you can end the declaration with a semicolon, but it isn't required like it is in C++):

enum enumeration_name {enumerator_list}

To use an enumerated value in C#, you have to write the fully qualified name of the value, that is, the enumeration name followed by the dot operator and the enumerated value.

Listing 29-18: C# enumerations

using System;

namespace Enumerations
{
    enum Days {Monday, Tuesday, Wednesday, Thursday, Friday}
    enum Weekend {Saturday = 10, Sunday = 20}

    class EnumerationClass
    {
        [STAThread]
        static void Main(string[] args)
        {
            Console.WriteLine(Days.Monday);         // Monday
            Console.WriteLine((int)Days.Monday);    // 0
            Console.WriteLine(Weekend.Sunday);      // Sunday
            Console.WriteLine((int)Weekend.Sunday); // 20
            Console.ReadLine();
        }
    }
}

C# exception handling is very similar to Delphi exception handling. C# allows you to catch exceptions in a try-catch block and to protect resource allocations with the try-finally block. Unlike Delphi, C# allows you to catch multiple exceptions after a single try block and lets you add a finally block after one or more exception handling blocks. All exceptions in Delphi are derived from the Exception class (the SysUtils unit). In C#, all exceptions are derived from the System.Exception class. In Delphi for .NET, the Exception type is mapped to the FCL's System.Exception class. As in Delphi, all exceptions are objects, and you can use the properties of the exception object to find out more about the exception.
To catch all exceptions that a piece of code can throw ("raise" in Delphi parlance), write a "plain" try-catch block:

private void CatchAllExceptions()
{
    try
    {
        int i = 2, j = 0;
        Console.WriteLine("Result = {0}", i / j);
    }
    catch /* all exceptions */
    {
        Console.WriteLine("I caught an error, but I " +
            "have no idea what happened.");
        Console.ReadLine();
    }
}

To catch specific exceptions, use the following syntax:

try { }
catch (AnException) { }

Listing 29-19 shows how to catch specific exceptions in C#. The try block is followed by two catch blocks. The first catch block tries to catch the DivideByZeroException, which gets thrown when you try to divide a number by zero. The second catch block tries to catch all other exceptions (except the DivideByZeroException) that might be thrown by code from the try block.

Listing 29-19: Catching several exceptions

private void CatchSpecificException()
{
    try
    {
        int i = 2, j = 0;
        Console.WriteLine("Result = {0}", i / j);
    }
    catch (System.DivideByZeroException)
    {
        Console.WriteLine("Cannot divide by zero!");
        Console.ReadLine();
    }
    catch (System.Exception)
    {
        Console.WriteLine("Something wrong happened.");
        Console.ReadLine();
    }
}

To catch an exception and use the exception object, use the following syntax:

catch (AnException ExceptionInstance)

Use the reserved word throw to throw or rethrow an exception. When you want to rethrow an exception, write the throw reserved word followed by a semicolon. When you want to throw an exception, use the following syntax:

throw new ExceptionName();

Listing 29-20 shows how to use exception object instances and how to throw and rethrow exceptions in C#. The result of the code displayed in Listing 29-20 is displayed in Figure 29-13.
Figure 29-13: Throwing and rethrowing exceptions in C#

Listing 29-20: Using exception objects, throwing and rethrowing exceptions

using System;

namespace csharp_exceptions
{
    class Exceptions
    {
        private void ThrowOneForFun()
        {
            // pass the text to the exception's Message property
            throw new System.Exception("Catch me if you can!");
        }

        private void Rethrow()
        {
            Exceptions exceptions = new Exceptions();
            try
            {
                ThrowOneForFun();
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception \"{0}\" " +
                    "caught and rethrown in Rethrow().", e.Message);
                throw; /* rethrow the exception */
            }
        }

        [STAThread]
        static void Main(string[] args)
        {
            Exceptions exc = new Exceptions();
            try
            {
                exc.Rethrow();
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception with message \"{0}\" " +
                    "caught after rethrow in Main().", e.Message);
                Console.ReadLine();
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadLine();
        }
    }
}
https://flylib.com/books/en/1.228.1.175/1/
I have a very simple Java class that does nothing else but:

public class TestMain {
    public static void main(String[] args) {
        System.out.println("Running!");
        System.exit(1111);
    }
}

packed into a TestOSX.jar file. While on Windows I can run the above snippet and show that %ERRORLEVEL% has the expected value, I get a different outcome on OS X. Given test.sh containing:

#!/bin/bash
"/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java" -jar TestOSX.jar
wait $!
updater_exit_val=$?
echo $updater_exit_val

I always print 0. Setup: OS X 10.11.1, Oracle Java 8 u60. What trivial detail am I missing here?

You do not send your java process to the background with &. Thus wait is executed after the java process exits. It can't find the process you try to wait for, because it already exited, and it gives return code 0 because of that. $? returns the return code of the last command (in your case wait). You can either remove wait from your script, or send your java process to the background by adding & at the end of the java command line.
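Applying that fix, the asker's script would look like the sketch below. Here `sh -c 'exit 42'` stands in for the java invocation so the behaviour can be seen without the jar; the 42 is an arbitrary placeholder status:

```shell
#!/bin/bash
# Background the process with '&' so that $! is set and
# "wait $!" actually has a job to wait on.
sh -c 'exit 42' &     # stand-in for: java -jar TestOSX.jar
wait $!
updater_exit_val=$?   # now holds the background job's exit code
echo "$updater_exit_val"
```

This prints 42 instead of 0. One caveat: POSIX exit statuses are 8-bit, so `System.exit(1111)` would surface as 1111 % 256 = 87 on OS X, not 1111.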
http://www.devsplanet.com/question/35282281
Tutorial introduction

This tutorial will be a basic demonstration of calling routines written in assembly from your C code. We will use NASM to compile our assembly code and GCC to compile our C code.

The assembly code

For the purpose of this tutorial, our assembly routine will be very simple: it will add two integers passed as parameters and return the result.

add.asm

; make the add function visible to the linker
global _add

; prototype: int __cdecl add(int a, int b)
; desc: adds two integers and returns the result
_add:
    mov eax, [esp+4]   ; get the 1st parameter off the stack
    mov edx, [esp+8]   ; get the 2nd parameter off the stack
    add eax, edx       ; add the parameters, return value in eax
    ret                ; return from sub-routine

We are using the __cdecl (default for C/C++) calling convention, so we must take some things into account...

We preserve the stack pointer; this is because the stack is cleaned up by the caller, so we don't want to mess around with the stack pointer inside the routine. The return value is returned in the EAX register. Leading underscores are added because by default GCC will add these to function calls. We could use GCC compiler flags to change this behaviour, but for simplicity's sake, we will just add them in here.

The C code

The C code will simply be a basic main function that calls our add function.

main.c

#include <stdio.h>

/*
 * declaring add as extern tells the compiler that the definition
 * can be found in a separate module
 */
extern int add(int a, int b);

int main()
{
    int ret = add(10, 20);
    printf("add returned %d\n", ret);
    return 0;
}

Compilation

Compiling the above code requires the use of object files; the format that we will use is ELF, since NASM and GCC both understand this format. The first thing we must do is compile our assembly code using NASM. We will compile add.asm into an object file called add.o using the following command.

nasm -f elf -o add.o add.asm

Now you should have an ELF file called add.o in your working directory.
Next we need to compile our C code, main.c. We use GCC to do this with the following command:

    gcc -c main.c -o main.o

Now you should have two object files, add.o and main.o. For the final step, we link these object files together to create our final executable. To do this we use GCC again, which in turn will invoke LD for us. The following command creates our executable:

    gcc -o test_asm add.o main.o

You should now have an executable file called test_asm which calls our assembly routine and displays the result:

    add returned 30

Conclusion

Thanks for reading my tutorial, I hope you got something out of it.

Ryan
http://www.dreamincode.net/forums/topic/232120-calling-assembly-routines-from-c/
Note: This article was written using the first CTP of TAP (released 2010-10-28). There's an updated version for .NET 4.5 RTM here[^]

This article is about the new TAP - the Task-based Asynchrony Pattern - and concentrates on the support for progress reporting. I explain a simple program that gets text from a server asynchronously. As I show each layer of the solution, I explain the implementation and framework support. I will assume you already know how async and await work.

Firstly, I will show you how simple this has become for application developers. Then I will go under the hood and explore the implementation of a TAP method with progress reporting. Progress support was not really addressed in the framework in .NET 4, but appears to be front-and-centre in this CTP. It may just be wishful thinking, but I hope that all the new TAP methods in .NET 5 will support progress reporting (and cancellation).

I have hosted a page on my web server that shows a short list of trite quotes, generated randomly from a list I obtained. The page takes 7 seconds to generate a list of 8 quotes. Each quote is flushed to the network as it is generated, one per second. The requirement is to write an application that consumes this service and shows both the current progress percentage and all the quotes as soon as they arrive from the ether.

I wrote a WinForms app as the client. Here is the entire code for the main Form:

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            Shown += async ( s, e ) => { txtResult.Text = await DownloadAsync() + "Done!"; };
        }

        async Task<string> DownloadAsync()
        {
            using ( var wc = new WebClient() )
            {
                var progress = new EventProgress<DownloadStringTaskAsyncExProgress>();
                progress.ProgressChanged += ( s, e ) =>
                {
                    progressBar.Value = e.Value.ProgressPercentage;
                    txtResult.Text += e.Value.Text;
                };
                return await wc.DownloadStringTaskAsyncEx( @"", progress );
            }
        }
    }

There is quite a lot going on here.
I will explain each piece in turn.

    Shown += async ( s, e ) => { txtResult.Text = await DownloadAsync() + "Done!"; };

I am using a lambda to handle this event. Notice that you can use async and await here too, not just for named methods.

The method that does the work, DownloadAsync(), eventually returns a string. When this method completes, the handler just appends "Done!" to the result and shows that. This is how I know the whole process has finished.

    return await wc.DownloadStringTaskAsyncEx( @"", progress );

The work is done by an extension method on the WebClient class: DownloadStringTaskAsyncEx. This is also a TAP method, so I can use await to yield control while it is executing.

It takes a URL and returns a string - all well and good. But it also takes an object called progress as a parameter. This is the new pattern for progress reporting (at least in this CTP).

I'll fudge a little bit here and gloss over the implementation of the progress object. It uses some new classes in the CTP and deserves a section to itself. Just assume I have some progress object:

    var progress = new EventProgress<DownloadStringTaskAsyncExProgress>();

All I need to tell you now is that it has a ProgressChanged event that is raised by the TAP method at suitable points during its execution. This event will be raised on the UI thread, and the EventArgs object will contain information about the current progress. So, all I need to do is add a handler that updates my UI controls:

    progress.ProgressChanged += ( s, e ) =>
    {
        progressBar.Value = e.Value.ProgressPercentage;
        txtResult.Text += e.Value.Text;
    };

Well, now we have our client code, so now for the fun bit... The TAP method DownloadStringTaskAsyncEx doesn't exist in the CTP, so I had to write it.
There is an extension method on WebClient:

    public static Task<string> DownloadStringTaskAsync(
        this WebClient webClient,
        Uri address,
        CancellationToken cancellationToken,
        IProgress<DownloadProgressChangedEventArgs> progress );

However, the DownloadProgressChangedEventArgs class only reports the progress percentage and doesn't give access to the stream buffer, so it doesn't meet the requirements.

There are a couple of extension methods in the CTP that are just right, though. One takes a WebClient and returns a Stream, and the other reads the stream asynchronously:

    public static Task<Stream> OpenReadTaskAsync( this WebClient webClient, string address );
    public static Task<int> ReadAsync( this Stream source, byte[] buffer, int offset, int count );

I used these two methods to write the implementation of DownloadStringTaskAsyncEx, with progress reporting that includes the result text as it arrives. Here is the entire source:

    class DownloadStringTaskAsyncExProgress
    {
        public int ProgressPercentage { get; set; }
        public string Text { get; set; }
    }

    static class WebClientExtensions
    {
        public static async Task<string> DownloadStringTaskAsyncEx(
            this WebClient wc,
            string url,
            IProgress<DownloadStringTaskAsyncExProgress> progress )
        {
            var buffer = new byte[ 1024 ];
            var bytes = 0;
            var all = String.Empty;
            using ( var stream = await wc.OpenReadTaskAsync( url ) )
            {
                int total = -1;
                Int32.TryParse( wc.ResponseHeaders[ HttpResponseHeader.ContentLength ], out total );
                for ( ; ; )
                {
                    int len = await stream.ReadAsync( buffer, 0, buffer.Length );
                    if ( len == 0 ) break;
                    string text = wc.Encoding.GetString( buffer, 0, len );
                    bytes += len;
                    all += text;
                    if ( progress != null )
                    {
                        var args = new DownloadStringTaskAsyncExProgress();
                        args.ProgressPercentage = ( total <= 0 ? 0 : ( 100 * bytes ) / total );
                        args.Text = text;
                        progress.Report( args ); // calls SynchronizationContext.Post
                    }
                }
            }
            return all;
        }
    }

This TAP method also happens to be an async method.
This is perfectly correct, and it allowed me to use the TAP forms of WebClient.OpenRead and Stream.Read. This means that there is no blocking in the method, and so it is safe to execute on the UI thread.

One interesting detail is that I must create a new instance of DownloadStringTaskAsyncExProgress each time I call progress.Report(). This is because the ProgressChanged event is fired by using SynchronizationContext.Post to get on the right thread. If I tried to reuse a single progress-data object, there would be a race condition between the event handlers and the next call to Report().

That's all there is to it. The caller creates an IProgress<T> object and passes it in. All the TAP method has to do is call Report().

So what is this magic progress object? There are a few new types we need to look at here. The signature for DownloadStringTaskAsyncEx actually takes an IProgress<T>. This is a new interface defined in the CTP's AsyncCtpLibrary.dll assembly.

    namespace System.Threading
    {
        // Summary:
        //     Represents progress of an asynchronous operation.
        //
        // Type parameters:
        //   T:
        //     Specifies the type of the progress data.
        public interface IProgress<T>
        {
            // Summary:
            //     Reports that progress has changed and provides the new progress value.
            //
            // Parameters:
            //   value:
            //     The new progress value.
            void Report( T value );
        }
    }

It only has one method: void Report( T value ). Remember, an object that implements this interface is passed into the TAP method. That implementation can call the Report method of this object when it wants to report progress. Makes sense, yes?

Now I need to create the object, so I need a class that implements the interface. Fortunately, there is an implementation of IProgress<T> in the CTP: EventProgress<T>.
Here it is:

    namespace System.Threading
    {
        // Summary:
        //     Provides an implementation of IProgress(Of T) that raises an event for each
        //     reported progress update.
        //
        // Type parameters:
        //   T:
        //     Specifies the type of data provided with a reported progress update.
        public sealed class EventProgress<T> : IProgress<T>
        {
            public EventProgress();

            // Summary:
            //     Occurs whenever a progress change is reported.
            public event EventHandler<EventArgs<T>> ProgressChanged;

            // Summary:
            //     Creates the progress object from the specified delegate.
            //
            // Parameters:
            //   handler:
            //     The delegate to invoke for each progress report.
            //
            // Returns:
            //     The initialized progress object.
            public static EventProgress<T> From( Action<T> handler );

            // This is captured in the constructor from SynchronizationContext.Current
            private readonly SynchronizationContext m_synchronizationContext;

            void IProgress<T>.Report( T value )
            {
                ...
                m_synchronizationContext.Post( o => ProgressChanged( this, value ), null );
                ...
            }
        }
    }

I have edited the code above to highlight the interesting bits. Basically, when you instantiate this class, it captures the current thread's SynchronizationContext. Then, each time Report is called from inside the TAP method, it raises the ProgressChanged event on the right thread.

Notice that SynchronizationContext.Post is used. This is why you would get a race condition between previous events being handled and subsequent calls to Report if you reused your value objects (instances of T) in your TAP method.

Also, there is a bug in the implementation of the static factory method, From( Action<T> handler ), so you can't use it in this CTP.
Notice the definition of the event in the class above:

    public event EventHandler<EventArgs<T>> ProgressChanged;

The EventHandler<TEventArgs> delegate is already in the framework:

    public delegate void EventHandler<TEventArgs>( object sender, TEventArgs e ) where TEventArgs : EventArgs;

Notice the constraint - the type parameter must derive from EventArgs. There is a new class in the CTP which helps: EventArgs<T>. It's quite simple, but it keeps EventHandler happy:

    namespace System
    {
        // Summary:
        //     Provides an EventArgs type that contains generic data.
        //
        // Type parameters:
        //   T:
        //     Specifies the type of data included with the event args.
        [DebuggerDisplay( "Value = {m_value}" )]
        public class EventArgs<T> : EventArgs
        {
            private readonly T m_value;

            // Summary:
            //     Initializes the event arguments.
            //
            // Parameters:
            //   value:
            //     The state to include with the event arguments.
            public EventArgs( T value )
            {
                m_value = value;
            }

            // Summary:
            //     Get the state included with the event arguments.
            public T Value
            {
                get { return m_value; }
            }
        }
    }

So that ties it all together. The value object can be of any type, as it is wrapped in an EventArgs<T> before the EventProgress<T>.ProgressChanged event is raised.

As I have shown, the new Task-based Asynchrony Pattern makes this sort of code much easier to write (and read). Consumers of TAP methods are almost trivial. Even if you have to write a method yourself, it is still quite simple if you know the patterns. The async and await keywords just work. Anders and the team have done a great job.

The arduous bit for Microsoft will be implementing TAP versions of all blocking methods in the framework. I hope and expect they will provide overloads that support progress reporting using IProgress<T>. If they do, application developers will be able to give their users more information about what an app is doing in an increasingly asynchronous world.

Well, that's it.
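A closing note for readers on .NET 4.5 or later (per the update linked at the top): the released framework kept this shape but renamed the pieces. IProgress<T> moved to the System namespace, and Progress<T>, whose constructor takes a callback and posts it via the captured SynchronizationContext, took the place of EventProgress<T> and its event. The client code above would then become something like this sketch (untested against the article's extension method, which you would still supply yourself):

```csharp
// Sketch against the released .NET 4.5 API: Progress<T> captures the
// current SynchronizationContext and invokes the handler on it, much
// as EventProgress<T> raised ProgressChanged in the CTP.
var progress = new Progress<DownloadStringTaskAsyncExProgress>( p =>
{
    progressBar.Value = p.ProgressPercentage;
    txtResult.Text += p.Text;
} );

string result = await wc.DownloadStringTaskAsyncEx( @"", progress );
```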
I hope you have enjoyed reading this article.
https://www.codeproject.com/articles/129447/progress-reporting-in-c-5-async?fid=1597463&df=10000&mpp=10&noise=1&prof=true&sort=position&view=quick&spc=relaxed&fr=11
Determining insert – Creating a Chatbot with Deep Learning, Python, and TensorFlow p.4

Welcome to part 4 of the chatbot with Python and TensorFlow tutorial series. Leading up to this, we've gotten our data and begun to iterate through it. Now we're ready to begin building the actual logic for inputting the data.

Text tutorials and sample code: Source

Comment List

- I have a query: when you are checking for a better score (find_existing_score(parent_id)), this function looks in our created parent_reply database for an already-added comment with a better score. But you are ignoring that 1,000 comments sit in a transaction that hasn't been committed to the database yet, so there is a scenario in which records end up with child comments answering the same parent comment with a lower score.

- It's giving me this error: json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 9 (char 8). I already tested a lot of datasets and the error doesn't change - I mean, it does change the line and the character, but the error itself always appears. How can I solve this?

- Hey guys, even after running with admin privileges I get "PermissionError: [Errno 13] Permission denied". I also changed the security properties to full control. Any other tips?

- up

- I keep getting this error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 441: character maps to <undefined>

- Hello! Great video as always! On some research, I found out that the comments with parent_id starting with "t3_" are direct comments on the submission and won't have any parent comment. So we can save some time by just checking whether that is the case before trying to find the parent body or the existing score. I hope this helps some of the potato owners like myself out there... haha!

- Over 2 years since the last response, but I'll give this a go @sentdex: def find_parent(pid): SyntaxError: invalid syntax

- I wonder what the comment "score" means?
- Hey, I know this tutorial is a little bit older, but I'm trying it out right now and I get some errors. Can someone check how to fix them? I uploaded it to GitHub; maybe someone can help me. Thanks :D

- I have this error after fixing buffer to buffering:
  Traceback (most recent call last):
  File "C:/Users/user/Desktop/ml/Chatbot Database.py", line 46, in <module>
  for row in f:
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\encodings\cp1252.py", line 23, in decode
  return codecs.charmap_decode(input,self.errors,decoding_table)[0]
  UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 22: character maps to <undefined>
  Any suggestions? Thanks.

- Great explanation of the dataset that we use in the bot. Hopefully more in the future - maybe a dataset that we can use for parking. Respect for all your videos.

- I was getting the error "FileNotFoundError: [Errno 2] No such file or directory:" and I couldn't open the file, so I stored the file in the same folder as the program and changed the code to this - with open("RC_2007-09","r") - and it worked. I'm a beginner, so I hope this may work with the next steps.

- Hello, nice tutorials, but can you make one video just introducing the Reddit dataset? For example, what is the parent_id in the dataset, what is the created_utc, etc. Thank you again, very nice tutorials.

- Import your dataset with pandas and get rid of all this SQL stuff.

- What is this score in this program? I am not able to understand it.

- Anybody else getting PermissionError: [Errno 13] Permission denied when running chatbot_database.py? Anyone?

- comment_id = row['name'] KeyError: 'name'

- with open("C:/Users/<user>/Desktop/ChatBOTpy-TF/RC_2008-01", 'w') as f: PermissionError: [Errno 13] Permission denied: 'C:/Users/<user>/Desktop/ChatBOTpy-TF/RC_2008-01' - can someone tell me how to resolve this problem?
- return result[0] IndentationError: unindent does not match any outer indentation level - it's showing this error in find_parent().

- Hey, for some reason the file path I used is returning a Unicode error, because Windows uses backslashes. I can't find a way to fix this. Does anyone know what to do? Thanks!

- When I tried to run it I got this error, any thoughts?
  File "<ipython-input-3-169a334970e5>", line 38
  with open("C:UsersMichaelDesktopDavid_Morrow{}RC_{}".format(timeframe.split('-')[0], timeframe), buffering=1000) as f:
  SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

- I am sorry, but I seem to be having issues and can't find a way around them. Maybe I set it up wrong, but I am going to find some tutorial for beginners. Thanks for the help.

- I ran this exact code and I get this error:
  File "<ipython-input-6-1ea30de32c9a>", line 10
  c.execute(""""CREATE TABLE IF NOT EXITS parent_reply(parent_id TEXT PRIMARY KEY, comment_id TEXT UNIQUE, parent TEXT, comment TEXT, subreddit TEXT, unix INT, score INT)"""")
  SyntaxError: EOL while scanning string literal
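Many of the errors quoted above (the cp1252 UnicodeDecodeError, the truncated \UXXXXXXXX escape, and the four-quote CREATE TABLE string) reduce to two issues: Windows file encoding/paths and malformed triple-quoted SQL. A minimal sketch of the usual fixes follows; it writes a tiny stand-in file so that it is self-contained, and the file name and table schema simply follow the tutorial's conventions:

```python
import sqlite3
import tempfile, os

# Write a tiny stand-in for the Reddit dump so the sketch is runnable.
path = os.path.join(tempfile.gettempdir(), "RC_sample")
with open(path, "w", encoding="utf-8") as f:
    f.write('{"body": "caf\u00e9"}\n')

# Open with an explicit encoding so Windows doesn't fall back to cp1252.
with open(path, encoding="utf-8", errors="replace", buffering=1000) as f:
    first_line = f.readline()

# Triple-quoted SQL needs exactly three quotes on each side
# (and note the spelling: IF NOT EXISTS, not EXITS).
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS parent_reply
             (parent_id TEXT PRIMARY KEY, comment_id TEXT UNIQUE,
              parent TEXT, comment TEXT, subreddit TEXT,
              unix INT, score INT)""")
```

For real Windows paths, a raw string such as r"C:\Users\me\RC_2015-01" (or forward slashes) avoids the "truncated \UXXXXXXXX escape" SyntaxError.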
https://openbootcamps.com/determining-insert-creating-a-chatbot-with-deep-learning-python-and-tensorflow-p-4/
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

> To that end, I'm soliciting opinions on what we should document for each
> public class, variable, and function. Just an overview? Each parameter?
> Something in between those two? Keep in mind that we can release an
> encyclopedia if we want for 3.1. Also keep in mind that the more we want
> documented, the more help I'll need (my wrists get sore enough from typing
> already :-).

I'd be happy if class data members and public member functions were documented, as well as types. That seems sufficient for the time being. I'd like to start simple, and then work up to something more complicated. I'd also like to avoid undue complication and needless complexity if at all possible...

It seems to be a sensible thing to try to get the formatting right on a relatively simple class before marking up everything. I suggest std::string. This means basic_string.h, std_string.h, basic_string.tcc, and char_traits.h would have to be documented. Does this sound like a good starting place to you?

> Initially I'd like to drop placeholder text in everywhere we plan on
> documenting for 3.0; that way people can easily see whether a particular
> thing has been documented yet or not. Contributors then don't have to
> ask where/what/whether a thing should be documented.

Not quite sure I understand you. It seems to me that another approach would be to just document bits correctly, and the undocumented bits are not done. Once the style sheet and formatting are ok'd, then we can ask for volunteers to do specific files or groups of files. Even four people could probably do the whole library in a week, I think.

I'm planning on volunteering to help you: it's going to take a lot of work and I'd hate for one person to have to do it all.

-benjamin
http://gcc.gnu.org/ml/libstdc++/2001-04/msg00411.html
Not one of my C books teaches me this, but can I ask: as I have to specify both the array size and the length of each string, do I use a two-dimensional array to input, sort and output the strings? I assume I cannot use a single one, as all I can do with that is specify the array size. I had a quick mess around with a one-dimensional array without the sort method, and I'm already running into problems with a seg fault as I haven't specified the string size. I also find it incredible that my book expects me to know how to do this when not once did it mention sorting string-type arrays, only integer and float using bubble and selection sort.

Code:

#include <stdio.h>

#define ARRAY_SIZE 3

/*main function - begins program execution -----------------------------------*/
int main ( void )
{
    char names[ ARRAY_SIZE ];
    int i;

    printf("Enter three names: ");
    for ( i = 0; i < ARRAY_SIZE; i++ )
    {
        scanf("%30s", &names[ i ]);
    }

    printf("\n\nYou entered\n\n");
    for ( i = 0; i < ARRAY_SIZE; i++ )
    {
        printf("%c\n", names[ i ]);
    }

    getchar(); /*freeze console output window*/

    return 0;  /*return value from int main*/
}
https://cboard.cprogramming.com/c-programming/92446-sorting-list-entered-words-printable-thread.html
Introduction

So, what is scope? Scope establishes which variables, functions and objects are available or accessible in some particular section of your code during runtime. What this means is that not everything that is visible to you is "visible" to the rest of your JavaScript code.

This is called the Principle of Least Privilege: it states that it's better not to give full access to everyone that is involved in a system. Users should only have access to the things they need to fulfill their purpose at a given time. If everyone has full access, you won't know who caused the error when something fails. In programming languages, this principle is applied by restricting which parts of your code have access to which resources.

There are two different types of scopes in JavaScript:

- Global Scope
- Local Scope

Let's look at a very simple example:

var global_variable = "I am a global variable";

function aFunction() {
  var local_variable = "I am a local variable";
}

As you can see, we have two variables. The first one is outside of any function, and thus exists in the global scope. The second one is inside the function aFunction() and exists in the local scope of said function.

Any variable declared outside of a function is a global variable, and is created within the global scope. All scripts and functions in your JavaScript code will have access to the global scope. Of course, there is only ONE global scope per JavaScript file, and it is created as soon as runtime begins (we will learn more about this process in the next tutorial).

By contrast, any variable declared inside of a function is a local variable, and they are created within the local scope of the function. These variables are only available within their local scope, and thus any functions OUTSIDE of that scope will not be able to access them.
Let's take our former example and expand it a little bit:

var global_variable = "I am a global variable";

function aFunction() {
  var local_variable = "I am a local variable";
  console.log(global_variable); //"I am a global variable"
  console.log(local_variable); //"I am a local variable"
}

aFunction();
console.log(global_variable); //"I am a global variable"
console.log(local_variable); //ReferenceError: local_variable is not defined

This script will print the following in the console:

"I am a global variable"
"I am a local variable"
"I am a global variable"
ReferenceError: local_variable is not defined

Why is that? Well, global_variable is in the global scope, so it can be accessed by any other part of the script (including, of course, aFunction()); local_variable is instead defined inside of aFunction(), so it only lives WITHIN the function.

When we call aFunction(), the function has access to the global scope and to its own local scope, and so it is able to print both variables. But when we try to access local_variable from outside of the function, it throws a ReferenceError, because that variable is not accessible from that part of the script (in fact, when console.log() tries to find it, that variable doesn't even exist).

Function arguments

Function arguments (parameters) basically work as local variables inside of a function, so they can only be accessed within the function where they are created:

function aFunction(first_argument, second_argument) {
  console.log(first_argument + second_argument);
}

aFunction(1, 2); //3
console.log(first_argument + second_argument); //ReferenceError: first_argument is not defined

Block Statements

By contrast, block statements like if and switch conditions or for and while loops do not create their own scopes.
This means that variables that are defined within a block statement will live inside of the scope that the block statement was created in:

if (true) {
  var if_statement = "If";
}

switch (1 + 2) {
  case 3:
    var switch_statement = "Switch";
}

for (var i = 0; i < 1; i++) {
  var for_loop = "For";
}

var j = true;
while (j) {
  var while_loop = "While";
  j = false;
}

console.log(if_statement); //"If"
console.log(switch_statement); //"Switch"
console.log(for_loop); //"For"
console.log(while_loop); //"While"

Here, the four variables are accessible from the global scope, because all of them were created within block statements that ran inside of the global scope. There is a way around this, the const and let keywords, but we will talk about them in a later tutorial.

Namespaces

One cool thing about local scopes is that they allow us to create multiple namespaces. Think of a namespace as a "container of unique names". Two objects cannot have the same name inside of a namespace, but objects with the same name can exist throughout different namespaces. This means that variables with the same name can be used in different functions:

var var1 = 1;
var var2 = 2;

function aFunction() {
  var var1 = 3;
  var var2 = 4;
  console.log(var1 + var2);
}

function otherFunction() {
  var var1 = 5;
  var var2 = 6;
  console.log(var1 + var2);
}

aFunction(); //7
otherFunction(); //11
aFunction(); //7
console.log(var1 + var2); //3

In this example, the console is going to print 7, 11, 7, and 3. Notice that calling otherFunction() did not override the variables that we declared in aFunction(). Also, even after calling both functions, the global variables remained intact. This is because the variables exist only within their own local scopes. The lifetime of JavaScript's variables starts when they are declared, and local variables are deleted when the function where they exist is completed.

Having said that, variables with the same name in the SAME scope will result in a name collision.
The last variable to be declared will override all previously declared variables that have the same name:

var var1 = 1;
var var1 = 2;
var var1 = 3;
var var1 = 4;

console.log(var1); //4

Be careful when naming variables in the global scope! If you create them using the same name as a built-in function/object, you could redefine the original function/object and cause some trouble. For example:

var console = 1;
alert(console); //1
console.log(console); //TypeError: console.log is not a function

Here, I am redefining console. Notice that alert(console) will work (it will alert 1) because it is simply printing the variable named console. But console.log() will not work, because we have redefined console and now it is a variable containing an integer instead of a built-in complex object.

Lexical Scope

When creating nested functions, the inner functions will always have access to the variables, functions and objects defined in their parent functions. This is known as Lexical Scope or Static Scope. We say that the child functions are lexically bound to their parent functions' execution contexts (think "scopes" for now).

Nonetheless, parent functions don't have access to their child functions' variables, because lexical scope only works forward, not backward. This has to do with the way in which JavaScript constructs execution contexts. We will explain execution context in detail in the next tutorial.
For the time being, we can understand this concept with an example:

function firstFunction() {
  var var1 = 1;
  //Only var1 is accessible from this scope.

  function secondFunction() {
    var var2 = 2;
    //var1 and var2 are accessible from this scope.
    console.log(var1 + var2); //3

    function thirdFunction() {
      var var3 = 3;
      //var1, var2, and var3 are accessible from this scope.
      console.log(var1 + var2 + var3); //6
    }

    thirdFunction();
  }

  secondFunction();

  //console.log will throw a ReferenceError because var2 and var3 are not accessible from here.
  console.log(var1 + var2 + var3); //ReferenceError
}

firstFunction();

As you can see, firstFunction() only has access to var1. secondFunction() has access to var1 and var2; and thirdFunction() has access to all three variables. When we access var1 and var2 from within secondFunction(), the console prints 3, as var1 was created in secondFunction's parent scope (firstFunction()).

Accessing var1 and var2 is also possible from within thirdFunction(), because firstFunction() and secondFunction() are parents of thirdFunction(). Nevertheless, var2 and var3 are not accessible from firstFunction(), because lexical scope only works forward: when the script tries to access var2 and var3 from firstFunction(), the variables simply don't exist and JavaScript throws a ReferenceError.

The var keyword

Have you noticed that in all of our examples all variable declarations have started with the var keyword? What happens when we don't use it? Well, when JavaScript finds a non-declared variable (a variable without the var keyword that can't be found within the current scope), it starts to look for it in the parent functions' scopes. If it finds it, it simply redefines the variable; if it DOES NOT find it, then it creates it for you in the global scope. This is dangerous, because if you are not careful you could be creating variables in unexpected scopes that will cause your script to have unexpected behaviors.
Look at this script:

var var1 = 1;
var var2 = 2;

function aFunction() {
  var1 = 3;
  var var2 = 4;
  var3 = 5;
  console.log(var1 + var2 + var3);
}

aFunction(); //12
console.log(var1 + var2); //5
console.log(var3); //5

var1 and var2 are first created in the global scope. The problems start when we define aFunction():

- var1 inside aFunction() doesn't have the var keyword, and thus JavaScript starts searching for it in the parents' scopes. It is found in the global scope, and its value is reassigned (from 1 to 3).
- var2 inside aFunction() is declared as usual, so a new local variable is created (which will stop existing when aFunction() is completed).
- var3 also doesn't have the var keyword, so JavaScript looks for it. var3 is not found, and so it is created in the global scope.

And so, when we call console.log(var1 + var2), the console prints 5 (not 3, as we would expect), because var1 has been reassigned. Surprisingly (not really), console.log(var3) does not throw a ReferenceError, because var3 was created in the global scope, and now it is accessible from everywhere in the script.

Security

Some programming languages give you the option of declaring public, private and protected scopes to "hide" certain aspects of your code from users. JavaScript does not allow the definition of public and private scopes, but we can simulate them using closures, which will be a topic for another tutorial.

Scope and Context

It is important not to mistake the meanings of scope, context, and execution context. Scope is what we have been discussing in this tutorial. Context is used to refer to the value of this in a particular section of your script (this will be discussed in the tutorials regarding Object-Oriented Programming). Execution context is the process by which scopes are created and this is assigned (which will be discussed in the next tutorial).

Conclusion

We hope that the importance scopes have when writing JavaScript code is now clear to you.
Scopes are a powerful tool to organize your code, but when used carelessly, they can become a terrible headache. Throughout the next tutorials, we will explore scopes deeper, so that you are able to always use them in your favor. Read on to continue your journey to become a JavaScript Grand Master!
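As a parting example, here is a small preview of the closures mentioned in the Security section: a function's local scope acting as a "private" namespace that exposes only what we choose to return (a sketch ahead of that future tutorial).

```javascript
// A function scope acting as a "private" namespace:
// only what we return is reachable from outside.
function makeCounter() {
  var count = 0; // not reachable from outside makeCounter
  return {
    increment: function () {
      count += 1;
      return count;
    }
  };
}

var counter = makeCounter();
console.log(counter.increment());    // 1
console.log(counter.increment());    // 2
console.log(typeof counter.count);   // "undefined" - the variable stays hidden
```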
https://www.commonlounge.com/discussion/5eecccc4eec5433c82bcefefe719600b
7.3: Structures in C++ (cont'd)

Page ID
34673

2) When deriving a struct from a class/struct, the default access specifier for a base class/struct is public. When deriving a class, the default access specifier is private. For example, program 3 fails to compile while program 4 works fine.

// Program 3
#include <iostream>
using namespace std;

class Base {
public:
    int x;
};

class Derived : Base { }; // is equivalent to class Derived : private Base {}

int main()
{
    Derived d;
    d.x = 20; // compiler error because inheritance is private
    return 0;
}

So, if we use a struct instead of a class, we get a different result:

// Program 4
#include <iostream>
using namespace std;

class Base {
public:
    int x;
};

struct Derived : Base { }; // is equivalent to struct Derived : public Base {}

int main()
{
    Derived d;
    d.x = 20; // works fine because inheritance is public
    return 0;
}

Adapted from: "Structure vs class in C++" by roopkatha, Geeks for Geeks is licensed under CC BY-SA 4.0
https://eng.libretexts.org/Courses/Delta_College/C_-_Data_Structures/07%3A_Linked_Lists/7.03%3A_Structures_in_C_Continued
Bcfg2 maintains compatibility with a wide range of Python versions – currently 2.4 through 3.2. This requires some (often significant) compatibility interfaces. This page documents the compatibility library, Bcfg2.Compat, and its contents. Note that, due to limitations in Sphinx (the Bcfg2 documentation tool), this documentation is not automatically generated; Compat.py should always be considered the authoritative source.

There are several basic types of symbols found in Bcfg2.Compat: To use the compatibility libraries, simply import them as such:

from Bcfg2.Compat import StringIO, all

The individual symbol import is preferred over just importing Bcfg2.Compat as a whole, since in the future we will be able to remove some items from the library, and this makes that process easier. A wildcard import is definitely discouraged.

Bcfg2.Compat defines the following symbols: The following symbols are imported to provide compatibility with Python 3. In cases where the newer symbol has also been backported to Python 2, the older symbol will be used unless otherwise noted. This is to ensure that functions or modules with radically different behavior (e.g., input()) do not cause unexpected side-effects.

The following symbols are imported or defined to provide compatibility with Python 2.4 (and occasionally 2.5). Be sure to read the notes below, since some of these implementations may be feature-incomplete.

The walk_packages implementation for Python 2.5 is feature-complete. The implementation for Python 2.4 is not. Differences:

The wraps implementation for Python 2.4 is a no-op. It does not attempt to copy the docstring or other details from the original function to the wrapped function.

hashlib is available for Python 2.4, but it is not part of the standard base. If it is installed, it will be used. If you are doing something fancy with MD5 sums, you may need to determine which object is in use, since they are not equivalent.
For the majority of simple cases – finding the MD5 sum of a string – they are equivalent enough.

collections.MutableMapping provides a subset of the functionality of UserDict.DictMixin; that is, any object that is written to work with MutableMapping will also work with DictMixin, so you should write classes with MutableMapping in mind. collections.MutableMapping is available in Python 2.6+, and will be used if available.

ast.literal_eval() is a safe version of eval() that will only allow declaration of literal strings, ints, lists, dicts, etc. This was introduced in Python 2.6, and as such Python 2.4 uses the plain-old eval().

The following functions, classes, and symbols are provided for other miscellaneous reasons.

In Python 2, base64.b64encode() and base64.b64decode() expect strings and return strings. In Python 3, they expect bytes and return bytes. For Python 3, Bcfg2.Compat provides b64encode and b64decode that transparently encode strings into bytes, then decode bytes into strings, so that those functions can be used identically to their use in Python 2.

In Py3K, object.__cmp__() is no longer magical, so this mixin can be used to define the rich comparison operators from __cmp__ – i.e., it makes __cmp__ magical again.

In Py3k, the unicode() class is not defined, because all strings are unicode. Bcfg2.Compat defines unicode as equivalent to str() in Python 3.

Convert a decimal number describing a POSIX permissions mode to a string giving the octal mode. In Python 2, this is a synonym for oct(), but in Python 3 the octal format has changed to 0o000, which cannot be used as an octal permissions mode, so we need to strip the 'o' from the output. I.e., this function acts like the Python 2 oct() regardless of what version of Python is in use.
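The octal-mode conversion described in the last paragraph can be sketched as follows (the function name oct_mode here is illustrative, reconstructed from this description; Compat.py remains the authoritative source):

```python
def oct_mode(mode):
    """Return an octal permissions-mode string for a decimal mode.

    On Python 2, oct(0o644) already gives '0644'; on Python 3 it
    gives '0o644', so the 'o' must be stripped.  This is a sketch
    based on the documentation above, not a copy of the authoritative
    Bcfg2 Compat.py implementation.
    """
    return oct(mode).replace('o', '')

print(oct_mode(0o644))  # -> 0644 on both Python 2 and Python 3
```

A caller can then use the result directly wherever a permissions-mode string is expected, without branching on the Python version.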
http://docs.bcfg2.org/dev/development/compat.html
An Exception thrown by classes in the idaeim namespace.

#include <Exception.hh>

This class is the root of the idaeim hierarchy of classes used to signal types of Exceptions, associated with an unrecoverable error condition. Each idaeim module is expected to provide a set of one or more exception subclasses that more specifically identify exception occurrences within the module.

For all Exceptions a message string is provided that describes the reason for the Exception. The message method will return this message, while the what method will prepend the Exception class ID to the message.

Gets the user-provided caller_ID and message string.
Class identification name with source code version and date.
https://pirlwww.lpl.arizona.edu/software/idaeim/Utility/classidaeim_1_1Exception.html
Up to Design Issues. I face these problems day to day, and like many geeks, am driven by the urge to make the boring things in life happen automatically, with the computer helping more effectively. There are lots of things I can do with N3 rules -- but I'd like to have a nice user interface to it which hides as much technology beneath the surface as possible. I'd like as many non-geeks as possible to be able to use the same tools. Let's take one example. I took a bunch of photos of a local soccer team, once when they played Wayland, and once when they played Arlington. I loaded them all into iPhoto. I wanted to burn a CD for the team of the best of the bunch. I also want to be able to find them later. On the first day, I didn't take any other photos, so the simplest thing was to make a 'smart folder' (actually 'smart Album' in iPhoto) which had in it by definition the photos taken on that day. The smart folder allows you to specify a combination (and or or) of a number of constraints such as time, keyword, text and rating. I called this one Soccer vs Wayland. On the second day, I took other photos as well, so the smart folder was going to be more complicated. So instead, I just found all the photos, selected them, and dumped them in a new plain folder Soccer vs Arlington. These, of course, one would represent in RDF as classes -- but we'll get into that later. Ok, so here's where we get into wish-list territory. 1) At that point, I wanted to be able to make a virtual folder Soccer, and make the two folders subfolders. (There used to be a photo processing tool called Retriever which would handle hierarchical classifications well, but I lost track of it.) This would indicate that anything in either of the two Soccer subfolders was a member of the Soccer folder -- or was tagged 'soccer' if you like. In fact, you can make a smart folder Soccer consisting of all the things which are either in Soccer vs Wayland or Soccer vs Arlington.
You have to make it as a smart folder, which is not as intuitive, but works fine. It doesn't give me the nice hierarchical user interface. Actually I now want to associate some exportable re-usable data. The folder names are essentially my local tags. Exporting them doesn't help much. Suppose, for example, I want to geotag the photos, so that I can find them on a map, or people interested in sports at the given field could find them. The current user interface allows me to select all the photos in one folder and apply keywords and metadata to them, as a batch operation. It is actually useful that the data is carefully stored in each photo, but it is sad that the fact that the metadata (such as a comment about the game) was applied to everything in the folder is lost. I'd like to be able to associate the random tag name I just made up with properties to be applied to each of the things tagged. Suppose at the user interface we introduce a label. A label is a set of common metadata that I want to apply to things at once. The user interface could really milk the label metaphor, by representing a label as a box with a hole in the end with a bit of string. It clashes perhaps with the folder metaphor. If we use both, then I'd like to be able to drop a label on a folder, and let all the things in the folder inherit the labeled properties. I'd like to see for each photo firstly what properties it has, but secondarily which labels and hence folders the properties came from. The essential thing about a label is that as I build it, I am prompted to use shared ontologies. They could be group ontologies which others have exported, they could be globally understood ontologies like time and place, and email address of a person depicted. As I create the label from an (extendable) set of options in menus, and using drag and drop and other user interface tricks for noting relationships, I am creating data which will be much more useful than the tag.
The tag then I can slap on very easily. The hope is then that by making label creation something which is low cost, because I have to do it only once and can apply it many times, the incentive for me @@

In this section we leave the user interaction and discuss the way in which labels can be exchanged in RDF under the covers. This, of course, is important for interoperability. A label can be expressed in many ways in bits on the wire. The label describes a set of things, which in RDF is a class*. Information about the class and the things in it -- the things labeled -- can be given in various ways. As a rule, it could look like

{ ?x a soc:SoccerWaylandPhoto }
=> { ?x geo:approxLocation [ geo:lat 47; geo:long 78 ];
        foaf:depicts soc:ourTeam }.

A label is a fairly direct use of OWL restrictions:

soc:SoccerWaylandPhoto rdfs:subClassOf
    [ a owl:Restriction;
      owl:onProperty geo:approxLocation;
      owl:hasValue [ geo:lat 47; geo:long 78 ] ],
    [ a owl:Restriction;
      owl:onProperty foaf:depicts;
      owl:allValuesFrom soc:ourTeam ].

(Let's not discuss the modeling of depiction here, but rather elsewhere.) This is very much the sort of thing OWL is designed for. There is one trap which one must beware of. Remember that the label is a concept. It is a class. It isn't a photo. The label may have been created by someone, at a particular time, but that person and that time have nothing to do with the creator and time of a photo which is so labeled. You cannot write

soc:SoccerWaylandPhoto
    geo:approxLocation [ geo:lat 47; geo:long 78 ];
    foaf:depicts soc:ourTeam.

It is possible to make special label terms which are used only for labels:

soc:SoccerWaylandPhoto
    LAB:approxLocation [ geo:lat 47; geo:long 78 ];
    LAB:depicts soc:ourTeam.

and have some metadata like

foaf:depicts ex:labelPredicate LAB:depicts.
geo:approxLocation ex:labelPredicate LAB:approxLocation.

and a general rule like

{ ?x a ?lab. ?lab ?p ?z. ?p ex:labelPredicate ?q } => { ?x ?q ?z }.

or

{ ?lab ?p ?z. ?p ex:labelPredicate ?q }
=> { ?lab rdfs:subClassOf
       [ a owl:Restriction; owl:onProperty ?q; owl:hasValue ?z ] }.

These methods are more or less inter-convertible. There are various communities which understand OWL and N3 rules, which may find those forms most convenient.

The architecture of this system, then, is that tags are initially local to the user. Anyone can use any word to tag anything they want. Labels are used to associate meaning with them, but the tag itself is local. Mapped into RDF, tags are classes in a local namespace. They can of course be shared. Tagging things with other people's tags attributes to them the properties associated with those tags, if any. Some people may define tags with rather loosely defined meaning, and no RDF labels, in which case others will be less inclined to use those tags.

When one combines a selection expression of a 'smart folder' with a label, then the result is a form of rule which is restricted to one variable. This can be expressed in OWL as a subclass relationship between restrictions. A lot of information can be expressed as rules, but finding an intuitive user interface to allow lay users to express their needs with rules has been a stumbling block. These smart folder and label metaphors, combined, could be a route to solving this problem*.

There are many systems which use selection rules to define virtual sets of things. There are probably lots which use an abstraction equivalent to labels. One system which effectively uses labels is (I think) described as 'semantic folders' (@@link Lassila and Deepali), to be published. There is a language for labels being defined, as it happens, by the Web Content Labeling (WCL) Incubator Group at W3C. The final form of expression has not been decided.

The concept of a label as a preset set of data which is applied to things and classes of things provides an intuitive user interface for an operation which should be simple for untrained users.
Newman, R., Tag Ontology Design, 2005-03-29.
Stefano Mazzochi, Folksologies: de-idealizing ontologies, 2005-05-05.
Tom Gruber, Where the Social Web Meets the Semantic Web, Keynote, ISWC 2006 (video).
W3C Content Label Incubator Group.
Dan Connolly, Some test cases from WCL/POWDER work in N3.

*We do not here discuss the difference between rdfs:Class and owl:Class.

How could other variables be added? Other variables can be expressed as paths from the base variable, and paths can be selected from a menu-like tree, and so on. The tabulator has a user interface for selecting a subgraph for a query. The smart folder selection panel could have the option of adding another similar panel for an item connected by a search path.

Up to Design Issues

Tim BL
http://www.w3.org/DesignIssues/TagLabel
NFS over RDMA screencast

The NFS over RDMA project is coming in for landing. This project updates the Solaris NFS/RDMA implementation to match the relevant IETF specifications, and to realize the level of performance expected from using NFS over this transport mechanism. Just WHAT level of performance is expected, you ask?? Check out the screencast developed by one of our external Open Solaris developers from Ohio State University: Not only does this screencast show off the expected capabilities of the new NFS/RDMA implementation, but the fact that most of the work for the project and the screencast was done by community developers is WAY COOL!

Posted at 11:13AM Mar 05, 2008 by Don Traub in General | Comments[1]

Connectathon 08 is now open for business

Registrations are now being taken for Connectathon 08. Please see: For details. This year, we expect there to be a lot of buzz around pNFS testing. The NFSv4.1 spec is moving forward, and we hope to see all the vendors bring their implementations to test against. We also expect CIFS testing to continue, and look for participants in iSCSI, NDMP, and even SSH. We look forward to seeing you there! Oh yea, and we'll have cool t-shirts too!

Posted at 03:24PM Feb 04, 2008 by Don Traub in General |

The Solaris NFS Team is Hiring!

The Solaris NFS Team is hiring! We're looking for an intern to join our development team in Austin, TX. The ideal candidate can start part-time prior to working full-time throughout the summer. Please refer to: For details on the position. We also just posted a newgrad (or junior engineer) position, which is posted at: Please share with anyone you know who may be interested and qualified!

Posted at 01:54PM Jan 28, 2008 by Don Traub in General |

NFSv4 Mirror Mounts and a CEO who cares

First, let me congratulate the NFS team on delivering Mirror Mounts! This greatly simplifies how NFS clients view and manage a server's namespace. From Tom Haynes' blog: Mirror Mounts. Gotta love it.
I guess this project is reflective of the geographically distributed nature of the team - one team, many locations. Now, for fear of being accused of copious brown-nosing... With all of Sun's recent announcements about innovation in storage software within Solaris, getting an email from one's CEO congratulating and encouraging our team on the progress made goes a long way to keeping morale high. Such an email from a CEO could easily be content and emotion-free (e.g., "Thanks."). Well whoopdeedoo for us. But that's not Jonathan. Even if his emails are brief, they're sincere and specific. He really does care about what his staff is up to. Thank you, Jonathan, it is appreciated! Go Rockies! Red Sox fans, meet Matt Holliday. For those at Fenway Park sitting in the outfield, be sure to bring your gloves. You're gonna need them.

Posted at 03:43PM Oct 23, 2007 by Don Traub in General |

Japanese version of pNFS screencast now available

We are pleased to announce that a Japanese translation of the pNFS screencast is now available at: This screencast provides an overview of pNFS, along with a demonstration of our pNFS prototype. We encourage you to view the demo, and visit our OpenSolaris project page at: Also, we just posted new source code and BFU archives at: release notes for this post can be found at: Let us know if you're downloading the code, as we'd love to get your feedback!

Posted at 04:01PM Aug 10, 2007 by Don Traub in General |

CIFS Client Alpha Now Available!

Our CIFS Client project team just reached a great milestone yesterday evening, delivering our Alpha code drop to Open Solaris! You can download the packages from: There are associated release notes that you should check out when downloading this code. The CIFS Client allows Solaris to natively mount a CIFS share from Windows, Samba, or any other server exporting CIFS shares. Once mounted on Solaris, the files and directories are accessed through standard Solaris I/O interfaces just like any other file system.
The Alpha drop provides read-only access to the data, but writes and other capabilities are coming very soon. We encourage you to download it, and send feedback to smbfs-discuss@opensolaris.org.

Posted at 11:02AM Jul 20, 2007 by Don Traub in General | Comments[1]

pNFS Open Solaris code drop now posted!

I'm pleased to announce that we have posted the most recent NFSv4.1 pNFS source and BFU archives, along with Release Notes and a pNFS "How-To" guide, to our pNFS Open Solaris project page. Downloads: Release Notes: pNFS "How-To" guide: This code drop follows a successful Bakeathon the week of June 11th, at our engineering site in Austin, Texas. Beyond enjoying the best BBQ in the land, we gathered community participants to perform NFSv4.1 interoperability testing, much like we do at Connectathon. We flushed out some bugs both in implementations and the spec, and continued to demonstrate leadership both in implementation and the community. If you have any questions, please send an email to: nfsv41-discuss@opensolaris.org

Posted at 10:51AM Jun 26, 2007 by Don Traub in Personal | Comments[1]

3 years of Blogging@Sun, what's changed since then?

Happy Birthday bloggers! 3 years, wow, how time has flown. Since we started blogging at Sun, what's changed? What impact has it had on Sun and the community? Does anyone actually read these? Am I just talking to myself? Will I get spammed by putting my name out there? Will I get fired if I talk about what I'm doing? What do I write about, anyway? So many questions like this I've heard, and guess what: It has made a difference, people do read these, yes, I sometimes talk to myself, no, I don't get spammed by blogging, and I haven't gotten fired yet. Better yet, there is TONS to talk about at Sun! The greatest revelation is that we are all in marketing.
Of course, there's company confidential information that we need to keep under wraps, but doing Solaris development in the open now translates to a level of transparency many of us are unaccustomed to. In fact, that expected level of transparency makes many people very nervous about sharing. However, once someone does their first blog (following guidelines, of course), they quickly get a sense of how much they have to contribute by sharing via blogs. Better yet, seeing people read one's blog, and getting affirming remarks back, boosts morale and one's sense of purpose. I'm never more amped at work than after talking with a customer. Blogging provides a similar adrenaline high, but more importantly, it's a critical venue to get information out to those who need it. So, if you're reading this, THANK YOU! We appreciate the opportunity to share what's going on!

Posted at 02:21PM Apr 27, 2007 by Don Traub in General | Comments[1]

NFSv4 Namespace Extensions requirements documents are now posted

Rob Thurlow just posted requirements docs for our NFS Mirror Mount and Referrals projects on our OpenSolaris NFS Namespace Extensions page: We invite the community to have a look at these, provide feedback, and consider contributing to the effort.

Posted at 12:39PM Apr 27, 2007 by Don Traub in General |

pNFS screencast now available

Hopefully by now you've heard about pNFS (Parallel NFS). Well, now you can see what we're talking about! Our team just created a screencast for pNFS, that includes several slides covering an overview of pNFS, design goals, and a demonstration of our pNFS prototype. The screencast can be accessed from the page pNFS Demo link. pNFS is still in the early stages of planning and development, so command syntax and such will evolve over time, but this should give you a good feel of what's coming.

Posted at 08:49AM Apr 24, 2007 by Don Traub in General |

Dude, where's my data?

Apparently, it's out in the parking lot!
Project blackbox just rolled up, and was so dominating that even the snowstorm predicted for Denver cowered away in fear of the great and powerful box. We're all lined up to tour the Blackbox today, and we look forward to the public and press coming by to check it out. We'll have a FROSUG booth there as well. Come on buy (by)! More information on Project Blackbox can be found here.

Posted at 07:26AM Apr 13, 2007 by Don Traub in General |

Life's a Beach, Countdown to Mexico!

Where we're going on our summer vacation. It's the Barcelo Colonial & Tropical Beach Resort in Rivera Maya.

Posted at 02:26PM Apr 12, 2007 by Don Traub in Personal |

Hello World! Announcing Storage at OpenSolaris.org

Rise and shine. Today, we launched the Open Solaris Storage site. Check it out! It has links to the Open Solaris project pages, along with pointers to related communities. The one project I neglected to mention in one of my last blogs was the opensourcing of WebNFS, now referred to as Yet-Another-NFS (YANFS). WebNFS has been around for years, but never gained much attention. Recently, though, several major customers have shown interest in this technology, so we've made it available via java.net. Within the community, we plan to expand YANFS beyond its current implementation.

Posted at 08:56AM Apr 10, 2007 by Don Traub in General |

A Programmer's Frustration

A coworker sent this to me a while back. Ever have one of those days?

Posted at 01:19PM Apr 09, 2007 by Don Traub in Humor |

Coffee, the 5th major food group

Riddle me this, Batman: Q: What do you get when you mix a manager with a Peet's 4-shot latte right after a brisk morning run at the gym? A: 2 blogs in one day!! Actually, guilt took over, as it's been way too long since I've posted. SoOOOOOOooooo, what's new in NFS and CIFS land? Lots! Pour yourself a hot cup of joe, and read on...
We've fired up several new OpenSolaris projects:

- NFS Server in non-Global Zones - We just fired up this project, with the intent of seeking input on what the use-cases are for supporting this. This work is not staffed, but if there are engineering resources within the community that would like to pick this up, let us know!
- NFSv4 Namespace Extensions - This project delivers Mirror Mounts and Referrals. We have working prototypes that we tested at Connectathon '07. We'll be posting a requirements doc shortly to prompt discussion and communicate progress.
- NFS RDMA transport update and performance analysis - This project will update the OpenSolaris implementation of RPC for infiniband transports. The work is being done primarily by students at Ohio State University. This is an excellent example of the community contributing to OpenSolaris. OSU has made great progress, and we're working with them to get this work delivered this Spring. Watch the project page for further developments.
- Sharemgr - This wasn't an OpenSolaris project, but we recently delivered it into build 53 of Nevada. It delivers a much-improved and greatly simplified administrative model for managing NFS shares. Check out Doug McCallum's blog for details.
- In-kernel Sharetab - While also not an OpenSolaris project, Tom Haynes's blog discusses the details of this work, which just went into Nevada. This is one more step in simplifying Solaris administration.
- NFSv4.1 pNFS - This project delivers an implementation of pNFS, which is coming out of the NFSv4 IETF Working Group. The specification is nearing completion. We have a prototype implementation today, which we recently tested at Connectathon. Watch the OpenSolaris project page, as we'll be updating it shortly with documentation and prototype code.
- CIFS Client - This project will create a virtual filesystem for Solaris to provide a CIFS/SMB client which can connect to machines exporting CIFS/SMB shares.
We have many operations working, and expect to deliver this into Nevada this summer, with hopes to backport to a Solaris 10 Update shortly thereafter. We'll be posting the latest bits on the project page shortly, so grab them, kick the tires, and let us know what you think.

- Dtrace Provider for NFS - This project introduces a new DTrace provider that instruments NFSv4 clients and servers. The probes and their arguments represent the NFSv4 protocol. Not much progress on this over the last few months, but we're getting some engineering resources back onto this to crank this out.

That's all for now. Tune in next time for my next caffeine rush. C'ya.

Posted at 10:56AM Apr 05, 2007 by Don Traub in General |
http://blogs.sun.com/dtraub/
#include "dmx.h"
#include "dmxsync.h"
#include "dmxgc.h"
#include "dmxgcops.h"
#include "dmxpixmap.h"
#include "dmxfont.h"
#include "gcstruct.h"
#include "pixmapstr.h"
#include "migc.h"

Create the GC on the back-end server.
Free the pGC on the back-end server.
Change the clip rects for a GC.
Set the values in the graphics context on the back-end server associated with pGC's screen.
Copy a GC's clip rects.
Copy pGCSrc to pGCDst on the back-end server associated with pGCSrc's screen.
Create a graphics context on the back-end server associated with pGC's screen.
Destroy a GC's clip rects.
Destroy the graphics context pGC and free the corresponding GC on the back-end server.
Initialize the GC on pScreen, which currently involves allocating the GC private associated with this screen.
Validate a graphics context, pGC, locally in the DMX server and recompute the composite clip, if necessary.
http://dmx.sourceforge.net/html/dmxgc_8c.html
Rendering XML Documents in Author Mode

The structure of an XML document and the required restrictions on its elements and their attributes are defined with an XML schema. This makes it easier to edit XML documents in a visual editor. For more information about schema association, see the Associate a Schema to a Document section. The Author mode renders the content of the XML documents visually, based on a CSS stylesheet associated with the document.

Associating a Stylesheet with an XML Document

The rendering of an XML document in the Author mode is driven by a CSS stylesheet that conforms to version 2.1 of the CSS specification from the W3C consortium. Some CSS 3 features, such as namespaces and custom extensions, are also supported. Oxygen XML Editor also supports stylesheets coded with the LESS dynamic stylesheet language. You can read more about associating a CSS to a document in the section about customizing the CSS of a document type. If a document has no CSS association or the referenced stylesheet files cannot be loaded, a default one is used. A warning message is also displayed at the beginning of the document, presenting the reason why the CSS cannot be loaded.

Figure: Document with no CSS association default rendering

Selecting and Combining Multiple CSS Styles

Oxygen XML Editor provides a Styles drop-down menu on the Author Styles toolbar that allows you to select one main (non-alternate) CSS style and multiple alternate CSS styles. An option in the preferences can be enabled to allow the alternate styles to behave like layers and be combined with the main CSS style. This makes it easy to change the look of the document.

Figure: Styles Drop-down Menu in a DITA Document
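As a sketch of one standard association mechanism, a CSS file can be referenced from the document prolog with the W3C xml-stylesheet processing instruction (the file names here are illustrative; Oxygen also provides its own association options, described in the customization section):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="article.css"?>
<article>
  <title>Rendered with the associated CSS</title>
</article>
```

When such a document is opened in Author mode, the referenced stylesheet drives the visual rendering, falling back to the default stylesheet if article.css cannot be loaded.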
http://www.oxygenxml.com/doc/versions/18/ug-editor/topics/rendering-xml-author-mode.html
7 responses to “GNOME 3.8 for openSUSE 12.3 – Call for Testers”

Hi Do,
It works. I am using your repository as recommended for the test. I used YaST for the update and nothing has been deleted. However, YaST did not pick the right version of GTK3 at the beginning and I had to do that manually. Several branding packages have been downgraded from the distribution to the repository. That is all I have noticed up to now. Once again, thanks a lot for the job done. openSUSE is unbreakable. Have a lot of fun…

Works fine on MacbookAir5,2 running oS 12.3 + Kernel 3.8.6 + GS38 + Packman. One branding package of oS 12.2 (where did that come from?) needed conflict solving by uninstalling it; otherwise dependencies were just fine. As usual, great work.

Tomasz April 11th, 2013 at 17:55
Hi, I have several issues after switching to 3.8 from your repository:

1. Gnome-documents won’t start. It shows the following error message:

tomek@linux-1fbp:~> gnome-documents
JS ERROR: !!! Exception was: Error: Requiring WebKit, version none: Requiring namespace ‘Gtk’ version ‘2.0’, but ‘3.0’ is already loaded
JS ERROR: !!! message = ‘”Requiring WebKit, version none: Requiring namespace ‘Gtk’ version ‘2.0’, but ‘3.0’ is already loaded”‘
JS ERROR: !!! fileName = ‘”/usr/share/gnome-documents/js/edit.js”‘
JS ERROR: !!! lineNumber = ’20’
JS ERROR: !!! stack = ‘”@/usr/share/gnome-documents/js/edit.js:20

2. Gnome-sudoku won’t start. It prints the following message:

(…) e = number_box.SudokuNumberBox(upper = self.group_size)
File “/usr/lib/python2.7/site-packages/gnome_sudoku/number_box.py”, line 133, in __init__
self.set_property(‘events’, Gdk.EventMask.ALL_EVENTS_MASK)
NotImplementedError: Setting properties of type ‘GdkEventMask’ is not implemented

3. Firefox doesn’t start. Error message is:

(firefox:4540): GLib-GIO-ERROR **: Settings schema ‘org.freedesktop.Tracker.FTS’ does not contain a key named ‘min-word-length’
Pułapka debuggera/breakpoint (trace/breakpoint trap)

4.
Some elements of gnome-shell are not translated (“Type to search” in the application launcher, “Frequent” at the bottom of the application launcher).

Tomasz April 11th, 2013 at 17:59
I also found that Liferea is crashing on startup, but I’m not sure if it wasn’t like that before upgrading.

(liferea:7500): GLib-GObject-CRITICAL **: g_param_spec_internal: assertion `!(flags & G_PARAM_STATIC_NAME) || is_canonical (name)’ failed
Naruszenie ochrony pamięci (segmentation fault)

Dustin Falgout April 12th, 2013 at 05:44
I sure wish I would have found this thread yesterday.. I’m just about out of the woods now though. I did a ‘dup’ with the dimstar repo when it was built because I just had to have the latest update and couldn’t wait two more days LOL.. so yeah, obviously that broke my system and I spent a day trying to fix it instead of just reverting back (that’s how you learn, right? Lol). I did get it unbroken yesterday, although I still had a lot of random issues. When I saw the announcement that the stable repo was published, I jumped on and fired up another ‘dup’….. to the Tumbleweed:Gnome repo! I copy and pasted and I guess my fingers went too fast. So as we speak I’m downloading yet another 500 or so packages, but the correct ones this time. Anyway, just thought I’d share and that my experience would bring a chuckle or two.. Maybe even help a few people, who knows. I will let you know how it ends up, still 136 packages to go. Thanks for all that you guys do to make openSUSE the best distribution out there!

2 Trackbacks / Pingbacks

[…] on Dominique’s (a.k.a. DimStar’s) blog, they explain how to install it. We only have to run the following commands as root in […]

[…] Dominique Leuenberger recently wrote that they were “working hard to push GNOME 3.8 as an addo… […] - Eliasse Diaite April 10th, 2013 at 19:22
http://dominique.leuenberger.net/blog/2013/04/gnome-3-8-for-opensuse-12-3-take-2/
Ok, I've finally created a simple example. Unfortunately, I'm working from home today and having trouble getting the file on my local machine to send as an attachment.

You need to make sure includes are turned on with the following:

AddOutputFilter BUCKETEER;INCLUDES .html

Simply create a file in your docroot, say 'if.html', and put in the following:

BeforeIf<!--#if expr="$X" -->preIfBlock<cntl-F>postIfBlock<!--#else -->ElseBlock<!--#endif -->AfterIf

(replace the <cntl-F> with a control F :)

You'll see that (incorrectly) the first part of the text block in the 'if' part is spit out but not the second part... and (correctly) the text of the 'else' part.

Ron

Cliff Woolley wrote:
> On Thu, 10 Jul 2003, Ron Park wrote:
>
>> We're trying to come up with nice simple test cases to
>> show the problems (all ours involve a proprietary module
>> and mod_proxy but they should be recreatable with some
>> carefully crafted file sizes).
>
> The file sizes don't even need to be carefully crafted if you use
> mod_bucketeer to strategically break up the buckets and brigades, right?
>
> --Cliff
http://mail-archives.apache.org/mod_mbox/httpd-dev/200307.mbox/%3C3F0F25EF.2050000@cnet.com%3E
Ok, so I'm writing a text preprocessor which seems pretty easy... identify certain tags from a source file and format the output file accordingly. Basically it's supposed to work like this: The first character is read in main. If a character is == '<', then Brace is called to determine what tag needs processing. Inside Brace is where I'm having the problems, as I haven't declared input and output within that function. When I re-declare input and output in Brace it doesn't work properly, as it starts reading from the beginning of the file again, which causes the switch to always result in a default. I guess I have absolutely no idea how to handle this situation. Any and all help is very much appreciated. Just for the record, I have no problem making an exact copy of the source file in the destination file, and I know that Brace is called correctly (debugging), but it doesn't get past that switch correctly. Here's what I have so far:

Code:
#include <iostream>
#include <fstream>
#include <cctype>

void Brace(char);
void Title(char);

using namespace std;

int main()
{
    ifstream input;
    ofstream output;
    input.open("source.dat");
    output.open("destination.out");
    char character;
    input >> character;
    while (input)
    {
        switch (character)
        {
        case '<':
            Brace(character);
            output << character;
            break;
        default:
            output << character;
            break;
        } //end switch
        input >> character;
    } //end while
    return 0;
} //end main

void Brace(char character)
{
    input >> character;
    switch (character)
    {
    case 'T':
    case 't':
        Title(character);
        break;
    default:
        output << character;
        break;
    } //end switch
    return;
} //end Brace

void Title(char character)
{
    input >> character;
    if (character == '>')
    {
        output << character;
    } //end if
    while (character != '<')
    {
        input >> character;
        output << toupper(character);
    } //end while
    return;
} //end Title
http://cboard.cprogramming.com/cplusplus-programming/35811-help-text-files-cplusplus.html
I keep a file of code I like. When looking for inspiration, I read through the file. It’s short; I often re-write the samples to be nothing but the minimally inspiring thought. Here is the first snippet:

def self.run(user)
  new(user).run
end

Think on it for yourself before I explain what it means to me. I’d like to hear what it means to you — leave a long comment here before you keep reading.

To me, it’s a reminder of how to write a beautiful class method: instantiate the class, then call a method on the instance. Look at it from the perspective of the person calling the class method. When you call a class method you want one of two things: either you want to construct an instance of the class itself (.new, or perhaps .new_from_file, .new_for_widescreen, and .new_from_json), or you want convenience.

Think of the class methods you’ve seen, or have written. If they are not in the above style, they might look more like this:

class ImageUploader
  def self.run(xpm)
    @@dimensions = geometry_for(xpm)
    @@color_palette = colors_for(xpm)
    svg = generate_svg(xpm)
  end

  def self.geometry_for(xpm)
    # ...
  end

  def self.colors_for(xpm)
    # ...
  end

  def self.generate_svg(xpm)
    # ...
  end
end

What a mess of an object. An abuse of the singleton pattern, where it wasn’t even intended. Class variables being used in an especially not-thread-safe way, plus a jumble of code that is all exposed. It is daunting to extend because it is a tricky thought process to understand the full implications of even using it.

When dealing with an object you want a small interface. As a user you want the fewest number of options, and the one with the best abstraction; as a developer you want to hide as much of the implementation as possible, giving you full freedom to change the internals. The best object is one with no exposed methods at all. The above pattern gives you that. You call .run and pass a user, and it takes care of the rest.

If the default constructor changes its arity, the instance method (#run) changes its name, or the object is re-written in C and needs to do pointer arithmetic first: you are protected.

The snippet has explicit names for things: run and user. This brings to mind the command pattern, and especially a command pattern for dealing with users. Perhaps something to kick off the backend signup process. The command pattern is a quick way to start reducing a god class (PDF); pushing various bits of User into the command objects.

The simplicity of the snippet is a reminder to use abstractions on the same “level”. Create an instance and call a method on that; perhaps in the instance’s #run method, it will instantiate a few more objects and call a method on those; and so on. Short methods all the way down, explained with clear but concise names.

This snippet happens to be in Ruby, an inspiration unto itself. A part of the power behind the command pattern is in Ruby’s duck typing. Let’s say this is passed to a method that expects to call run, passing a user. In doing so, I know that I can fake it in a test with any object that responds to run:

class FakeSignup
  def initialize(should_succeed = true)
    @should_succeed = should_succeed
  end

  def run(user)
    unless @should_succeed
      raise "I am supposed to fail"
    end
  end
end

The idea of passing SignUp around makes me think of queues: you can add the SignUp class to a background job runner to get an asynchronous workflow from within Rails. You could spawn a co-routine, passing SignUp and a user object. Once you’ve been inspired by the snippet, a world of concurrency opens up.

So that’s what I think of when I see my first snippet from my collection of inspirational code. What do you see?
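Putting the pieces of the post together, the whole pattern fits in a few lines. This SignUp class is a hypothetical command object assembled from the post's fragments, not code that appears in it:

```ruby
# The class method builds an instance and delegates to it, so the
# caller gets convenience while the internals stay hidden.
class SignUp
  def self.run(user)
    new(user).run
  end

  def initialize(user)
    @user = user
  end

  def run
    # Kick off the backend signup process for @user here.
    "signed up #{@user}"
  end
end
```

Callers only ever see SignUp.run(user), so the constructor's arity and the instance method's internals are free to change without touching call sites.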
https://thoughtbot.com/blog/meditations-on-a-class-method
I tried to write my own implementation of the strchr() method. It now looks like this:

char *mystrchr(const char *s, int c)
{
    while (*s != (char) c) {
        if (!*s++) {
            return NULL;
        }
    }
    return (char *) s;
}

I believe this is actually a flaw in the C Standard's definition of the strchr() function. (I'll be happy to be proven wrong.)

(Replying to the comments, it's arguable whether it's really a flaw; IMHO it's still poor design. It can be used safely, but it's too easy to use it unsafely.)

Here's what the C standard says:

char *strchr(const char *s, int c);

The strchr function locates the first occurrence of c (converted to a char) in the string pointed to by s. The terminating null character is considered to be part of the string.

Which means that this program:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "hello";
    char *p = strchr(s, 'l');
    *p = 'L';
    return 0;
}

even though it carefully defines the pointer to the string literal as a pointer to const char, has undefined behavior, since it modifies the string literal. gcc, at least, doesn't warn about this, and the program dies with a segmentation fault.

The problem is that strchr() takes a const char* argument, which means it promises not to modify the data that s points to -- but it returns a plain char*, which permits the caller to modify the same data.

Here's another example; it doesn't have undefined behavior, but it quietly modifies a const qualified object without any casts (which, on further thought, I believe has undefined behavior):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char s[] = "hello";
    char *p = strchr(s, 'l');
    *p = 'L';
    printf("s = \"%s\"\n", s);
    return 0;
}

Which means, I think, (to answer your question) that a C implementation of strchr() has to cast its result to convert it from const char* to char*, or do something equivalent.

This is why C++, in one of the few changes it makes to the C standard library, replaces strchr() with two overloaded functions of the same name:

const char *strchr(const char *str, int character);
char *strchr(char *str, int character);

Of course C can't do this. An alternative would have been to replace strchr by two functions, one taking a const char* and returning a const char*, and another taking a char* and returning a char*. Unlike in C++, the two functions would have to have different names, perhaps strchr and strcchr.

(Historically, const was added to C after strchr() had already been defined. This was probably the only way to keep strchr() without breaking existing code.)

strchr() is not the only C standard library function that has this problem. The list of affected functions (I think this list is complete but I don't guarantee it) is:

void *memchr(const void *s, int c, size_t n);
char *strchr(const char *s, int c);
char *strpbrk(const char *s1, const char *s2);
char *strrchr(const char *s, int c);
char *strstr(const char *s1, const char *s2);

(all declared in <string.h>) and:

void *bsearch(const void *key, const void *base, size_t nmemb, size_t size,
              int (*compar)(const void *, const void *));

(declared in <stdlib.h>).

All these functions take a pointer to const data that points to the initial element of an array, and return a non-const pointer to an element of that array.
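The two-function alternative described above can be written in C today, just not under the standard name. The names strchr_c / strchr_m below are made up for this sketch:

```c
#include <stddef.h>

/* const in, const out: safe to use with string literals and const arrays. */
static const char *strchr_c(const char *s, int c)
{
    while (*s != (char) c) {
        if (*s == '\0')
            return NULL;
        s++;
    }
    return s; /* no cast needed: the return type matches the argument */
}

/* non-const in, non-const out: the cast is confined to the one place
   where it is provably harmless, because the argument really was
   modifiable to begin with. */
static char *strchr_m(char *s, int c)
{
    return (char *) strchr_c(s, c);
}
```

This mirrors the C++ overload pair: a cast still exists, but only inside strchr_m, where the input pointer is already non-const, so the caller can never accidentally obtain a modifiable pointer into const data.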
https://codedump.io/share/EqkTrcEmkUHr/1/how-does-strchr-implementation-work
vanilla.Slider width/height

- jesentanadi last edited by gferreira

Hi, I'm updating an old script and I'm having some problems with Slider direction. Wondering if it's related to the comments on this commit and if there's a good way around it for now. This code:

self.w.xHeightSlider = Slider((x+2, row2+22, 194, 25)...

produces this: But if I flip the width/height, it looks like my x,y coordinates also don't work: Seems like something like self.w.xHeightSlider = Slider((x+2, row2+22, 194, 195)) works, but I'd have to decrease the y value a lot: I can't set the width to -10 because there are more UI elements next to the slider. Thanks!

hi. I also came across this bug some time ago… you can temporarily fix it like this:

self.w.slider.getNSSlider().setVertical_(True) # or False

which version of macOS are you running? which version of RF3? I think this problem happens only in macOS 10.12 (??) – it’s working fine for me on macOS 10.13 using the latest 3.2 beta. some background info: How can I make a vertical slider?

- jesentanadi last edited by jesentanadi

@gferreira That works, thanks! I'm on macOS 10.13, but running RF3.1. I'm still confused about why, in your case, it works without the explicit setVertical() if self._isVertical() returns True when w > h.

@jesentanadi good point. looking closer into it, I see that the Slider gets a different orientation if the width is specified as a positive or negative integer:

from vanilla import Window, Slider

class SliderDemo1(object):
    # width as negative number (relative to parent object)
    # slider is horizontal = CORRECT
    def __init__(self):
        self.w = Window((200, 100), title='demo 1')
        self.w.slider = Slider((10, 10, -10, -10))
        self.w.open()

class SliderDemo2(object):
    # width as positive number (absolute value)
    # slider is vertical = WRONG!
    def __init__(self):
        self.w = Window((200, 100), title='demo 2')
        self.w.slider = Slider((10, 10, 180, -10))
        self.w.open()

SliderDemo1()
SliderDemo2()

I think it really is a bug, and we need to reopen issue #58…

- jesentanadi last edited by jesentanadi

@gferreira hmm, I'm getting a horizontal slider in Demo1—although as a user, I think this is what I would expect, since I'm specifying the width first. In both these cases, it seems right that the current _isVertical() would return False when w == h (when both are -10) and True when 180 > -10, so if _isVertical() returns w < h instead, then the Slider would be horizontal in both cases. I've opened an issue, so we'll see.

hmm, I'm getting a horizontal slider in Demo1

you are right — it was a typo :) (fixed in the code sample, thanks!) I get the same results (macOS 10.13 / RF 3.2b):

- jesentanadi last edited by

@gferreira Thanks for helping!
https://forum.robofont.com/topic/528/vanilla-slider-width-height
Hi, I have been working on the Windows Phone platform for 2 months. I am working with the Visual Studio 2012 IDE and .NET Framework 4.5. I would like to open a discussion on MAC addresses. I have used the DeviceExtendedProperties class to get the device unique ID. I would like to know whether the same class can be used to extract the MAC address. If it is not possible, would you please tell me another method or API for retrieving the MAC address?

Also, I have found that ManagementClass can be used for getting the MAC address. For that I have used the System.Management namespace. But I am getting the following error:

Error: The type or namespace name 'Management' does not exist in the namespace 'System' (are you missing an assembly reference?)

I couldn’t find the System.Management dll in my environment. Where can I download the dll file from? Any kind of help would be greatly appreciated.

thanks,
Aswathy.
http://developer.nokia.com/community/discussion/showthread.php/241574-Retrieving-Mac-Address-of-Windows-Phone?p=921641
Pandas is an open-source Python library that consists of multiple modules for high-performance, easy-to-use data structures and data analysis tools. The pandas module is named pandas and can be imported into a Python script, application, or interactive terminal with “import pandas”. But what is “import pandas as pd”, the form that is very popular amongst Python developers and that even the examples in the official pandas documentation use?

Install pandas Library

In order to import and use the pandas library, it should first be installed; the installation method differs between operating systems.

Install pandas For Ubuntu, Mint, Debian:

sudo apt install python3-pandas

Install pandas For Fedora, CentOS, RHEL:

sudo yum install python3-pandas

Install pandas with pip Command:

pip3 install pandas

import pandas as pd

The pandas module can be imported by using import pandas. But then, every time we use one of the pandas module's functions, we have to spell out the pandas name. This can be tedious, so an alias can be specified for the imported module; by convention the alias pd is used for pandas.

import pandas as pd

Use pandas with the pd Alias

The pd alias is used to access pandas module functions. In the following example we will read a CSV file with the pandas module's read_csv() method. The pd alias serves as a shortcut for the pandas module.

import pandas as pd

data = pd.read_csv("users.csv")

Set a Different Alias Than pd

Even though the alias pd is very popular and widely recognized as the pandas module alias, we can use whatever alias we want. In the following example we will set p as the pandas module alias instead of pd.

import pandas as p

data = p.read_csv("users.csv")
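One detail worth making explicit: `import pandas as pd` is plain Python name binding, not anything pandas-specific. The same mechanics can be demonstrated with a standard-library module; json is used here only so the sketch runs without pandas installed:

```python
import json
import json as j  # "as" just binds the same module object to another name

# Both names refer to one and the same module object.
assert j is json

# The alias supports exactly the same attribute access.
assert j.loads('{"rows": 3}') == {"rows": 3}
```

Exactly the same holds for pd and pandas: after `import pandas as pd`, the names `pd.read_csv` and `pandas.read_csv` refer to the same function object.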
https://pythontect.com/what-is-import-pandas-as-pd/
NAME¶
getgid, getegid - get group identity

SYNOPSIS¶
#include <unistd.h>
#include <sys/types.h>

gid_t getgid(void);
gid_t getegid(void);

DESCRIPTION¶
getgid() returns the real group ID of the calling process.
getegid() returns the effective group ID of the calling process.

ERRORS¶
These functions are always successful.

CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, 4.3BSD.

SEE ALSO¶
getresgid(2), setgid(2), setregid(2), credentials(7)

COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
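The page above carries no EXAMPLES section; a minimal sketch of calling both functions (my addition, not from the manual itself) looks like this:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Both calls are always successful, so no errno check is required. */
static void show_group_ids(void)
{
    gid_t rgid = getgid();   /* real group ID */
    gid_t egid = getegid();  /* effective group ID */
    printf("real gid: %ld, effective gid: %ld\n", (long) rgid, (long) egid);
}
```

For an ordinary (non-set-group-ID) program the two values are normally equal; they differ when the executable has the set-group-ID bit set, as described in credentials(7).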
https://manpages.debian.org/unstable/manpages-dev/getgid32.2.en.html
Bruno Haible <address@hidden> writes:

>> * Commit the following patch. It consists of verbatim (from CVS)
>> libc/argp/argp* (except argp-test.c) and new gnulib/m4/argp.m4,
>> gnulib/modules/argp. This builds on my Debian Linux box. (My
>> savannah account is "jas" if you want me to commit it.)
>
> OK, I've committed the lib/* files for you, and also the module
> description. I've tweaked the title of the description because there
> is no argp() function.

Thanks.

> With m4/argp.m4 I see two problems:
>
> - The module doesn't define a function argp(), therefore _FUNC_ARGP is
>   inappropriate. Can you call the macro AC_ARGP or gl_ARGP ?

I used AC_ARGP below. Is there a policy about the gl_ prefix? I found it
slightly ugly, and it doesn't seem to be used consistently.

> - The code is compiled and used even on glibc systems, which
>   unnecessarily increases executable size. Can you arrange to
>   not compile the files on a glibc system?

I'm using AM_CONDITIONAL below. While testing this, I noticed that the
getopt and xalloc packages (which argp depends on) have the same
problem. More to come...

Thanks.

m4/argp.m4:

# argp.m4
AC_DEFUN([AC_ARGP],
[
  AC_CHECK_HEADERS(argp.h)
  AC_CHECK_FUNCS(argp_parse)
  AM_CONDITIONAL(ARGP, test x$ac_cv_func_argp_parse = xno)
  if test $ac_cv_func_argp_parse = no; then
    gl_PREREQ_ARGP
  fi
])

# Prerequisites of lib/argp*.c.
AC_DEFUN([gl_PREREQ_ARGP],
[
  AC_CHECK_HEADERS_ONCE(sysexits.h)
])

modules/argp:

--- argp.~1.1.~	Tue Jun 10 13:29:45 2003
+++ argp	Tue Jun 10 19:01:03 2003
@@ -19,18 +19,21 @@
 Depends-on:
 alloca
 getopt
+strchrnul
+sysexits

 configure.ac:
-AC_FUNC_ARGP
+AC_ARGP

 Makefile.am:
+if ARGP
+endif

 Include:
 "argp.h"

 Maintainer:
 Simon Josefsson, glibc

lib/sysexits.h:

/*
 *
 */

modules/sysexits:

Description:
SYSEXITS.H: Exit status codes for system programs.

Files:
lib/sysexits.h

Depends-on:

configure.ac:

Makefile.am:
lib_SOURCES += sysexits.h

Include:
"sysexits.h"

Maintainer:
Simon Josefsson, glibc

lib/argp-eexst.c:

--- argp-eexst.c.~1.1.~	Tue Jun 10 13:19:46 2003
+++ argp-eexst.c	Tue Jun 10 19:06:34 2003
@@ -1,5 +1,5 @@
 /* Default definition for ARGP_ERR_EXIT_STATUS
-   Copyright (C) 1997 Free Software Foundation, Inc.
+   Copyright (C) 1997, 2003 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
    Written by Miles Bader <address@hidden>.
@@ -22,7 +22,11 @@
 #include <config.h>
 #endif

+#if defined _LIBC || defined HAVE_SYSEXITS_H
 #include <sysexits.h>
+#else
+#include "sysexits.h"
+#endif

 #include "argp.h"
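For readers not fluent in automake: the AM_CONDITIONAL(ARGP, ...) above defines a conditional that is true when argp_parse is missing, so a Makefile.am can list the replacement sources only in that case. Roughly like this (a sketch, not the actual gnulib fragment; the file names are guessed from gnulib's argp module):

```makefile
if ARGP
lib_SOURCES += argp-ba.c argp-eexst.c argp-fmtstream.c argp-help.c argp-parse.c
endif
```

On glibc systems the conditional is false, the files are never compiled, and programs use the argp built into the C library, which addresses Bruno's executable-size concern.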
http://lists.gnu.org/archive/html/bug-gnulib/2003-06/msg00095.html
- 0.23 ... Add 'T' config option.
           Add metadata to InlineX::C2XS Makefile.PL.
           Add Context.pm and demos/context.
           Add PROTOTYPES and PROTOTYPE config options
           Remove t/t_using.t (broken by Inline-C-0.62)

- 0.17 ... Add PREREQ_PM config option

- 0.16 ... Add config option '_TESTING' (and tests)
           Inline pre-requisite version now 0.46_01
           Add config option 'USE'
           Substitution regex added to t_makefile_pl_pre.t and t_makefile_pl.t
           - to cater for recent ActivePerls

- 0.15 ... Add config options EXPORT_ALL, EXPORT_OK_ALL and EXPORT_TAGS_ALL

- 0.14 ... Requires Inline-0.45 or later.
           Add tests for the use of ParseRegExp.pm.
           'USING' now takes either a string or array reference as its value
           (as does the Inline::C equivalent).
           'LIBS' and 'TYPEMAPS' can now optionally be assigned as a (space
           delimited) string or an array reference (same as 'INC'). Previously
           'LIBS' and 'TYPEMAPS' had to be assigned as an array reference.
           'INC' (in the generated Makefile.PL) no longer automatically
           includes the cwd.
           'TYPEMAPS' (in the generated Makefile.PL) no longer automatically
           specifies the standard perl typemap. (Doing so was annoying and,
           afaict, unnecessary.)

- 0.13 ... Change the test for the locatability of a specified typemap from
           a '-e' test to a '-f' test.
           Add C2XS-Cookbook.pod
           Minor correction to WRITE_PM
           The C source code can now be alternatively provided by either a
           'CODE' or 'SRC_LOCATION' key.

- 0.12 ... Provide access to ParseRegExp.pm. (Untested, because
           ParseRegExp.pm is broken - see for patches to ParseRegExp.pm)
           Can now write a .pm file for you as well

- 0.11 ... Can now provide the optional "4th" (config options) argument
           without *also* having to provide a "3rd" (build directory)
           argument. ie If the third argument is a hash reference, it's
           assumed to contain config options - otherwise it sets the build
           directory.
           Check that only valid config options have been passed via the
           "config options" argument.
           Catch the error (and die) if a non-existent typemap is specified
           in the TYPEMAPS config option.
           If the specified build_dir does not exist, die(). Previously, a
           warning was issued and the files written to the current working
           directory.

- 0.10 ... Add coverage for the PREFIX and BOOT options.
           Add coverage for CCFLAGS, LDDLFLAGS, MYEXTLIB and OPTIMIZE (and
           test that they get passed to the generated Makefile.PL)
           Also check that the CC, LD and MAKE parameters are passed on to
           the generated Makefile.PL.
           Not sure how to utilise FILTERS. (I won't do anything with it
           unless requested.)

- 0.09 ... Rewrite the t_makefile_pl test script. (It's now not a very
           conclusive test ... still needs further work ... though I think
           the WRITE_MAKEFILE_PL functionality is operating correctly.)
           Add coverage (currently untested) for the CC, LD, and MAKE
           options.
           TODO: Add coverage for BOOT, CCFLAGS, FILTERS, LDDLFLAGS,
           MYEXTLIB, OPTIMIZE and PREFIX options.

- 0.08 ... Rename the module into the InlineX namespace (previously named
           Inline::C2XS)
           Add coverage for VERSION, LIBS, BUILD_NOISY and WRITE_MAKEFILE_PL
           options.

- 0.07 ... Add coverage for AUTOWRAP, TYPEMAPS, INC and AUTO_INCLUDE.
           The cpp2xs() function is no longer supported by this module.
           Use Inline::CPP2XS instead.

- 0.06 ... Add a demos/cpp folder with a CPP demo.

- 0.05 ... Add Inline::CPP to XS support with the cpp2xs() function.

- 0.04 ... Now hooks into the Inline::C routines that parse the code and
           write the XS file. The c2xs() sub now takes an optional 3rd
           argument (the directory into which the XS file gets written).

- 0.03 ... more bugfixes

- 0.02 ... bugfixes

- 0.01 ... born
https://metacpan.org/changes/distribution/InlineX-C2XS
The Swift programming language has an abundance of features to help developers code efficiently. Souroush Khanlou live codes a Sudoku puzzle solver that highlights the sequence and collection protocols framework, Swift error handling, and deep copy using structs.

Introduction

My name is Souroush Khanlou, and I maintain my blog, where I write a lot about programming. You may also know me from the Fatal Error podcast that I host with Chris Dzombak. Today we’re going to be doing live coding of a Sudoku solver. The rules are simple: you are given a 9x9 grid of boxes, and some of them contain a number - any number from one to nine. Your task is to fill in the empty boxes with the numbers one through nine.

Naive Sudoku Solver

To start, our representation of a Sudoku board looks like this:

let boardString = "..3.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.3.."

We will build a better representation, but first I will run you through what I already have set up, and then we will get to writing the solver code. The core modeling component is the Cell. Fundamentally, a cell can be one of many values. Initially, the values will be the numbers one through nine. If the cell is “settled”, that means that we know what the value is. In that case, it’s going to have exactly one value in it. However, it might also have more values in it, because we are not sure what the correct value should be. Instead, the Cell contains information about which values it can and cannot have.
Everything else is built on top of this Cell:

import Foundation

public struct Cell: Equatable, CustomStringConvertible {
    public let values: [Int]

    public var isSettled: Bool {
        return values.count == 1
    }

    public static func == (lhs: Cell, rhs: Cell) -> Bool {
        return lhs.values == rhs.values
    }

    public var description: String {
        if isSettled, let first = values.first {
            return String(first)
        }
        return "_"
    }

    public static func settled(_ value: Int) -> Cell {
        return Cell(values: [value])
    }

    public static func anything() -> Cell {
        return Cell(values: Array(1...9))
    }
}

A Cell has some helper methods. The description method will print the number value of the cell if it’s settled; otherwise, it will just print an underscore character. There are two convenience initializers. The settled function creates a new settled cell from a single input integer. The other, anything, creates a cell that represents any number in the range one through nine inclusive.

The Board is built from the Cell. The board is an array of cells which represents a row in the board, and then an array of rows is the whole board. The Board contains a method to initialize the grid from that ugly boardString I mentioned earlier. The method looks at each character in the string. If the character is a dot, it’ll return Cell.anything(). If it’s a valid integer, it will create a settled cell at that location. Any other character means malformed input, and in that case, I’ll just crash the app by calling fatalError():

public init(boardString: String) {
    let characters = Array(boardString.characters)
    self.rows = (0..<9)
        .map({ rowIndex in
            return characters[rowIndex*9..<rowIndex*9+9]
        })
        .map({ rawRow in
            return rawRow.map({ character in
                if character == "." {
                    return Cell.anything()
                } else if let value = Int(String(character)) {
                    return Cell.settled(value)
                } else {
                    fatalError()
                }
            })
        })
}

Board also includes a number of helper methods.
There is a method to get every cell flattened into a long array, so we don’t have them in the array-of-arrays representation:

public var cells: [Cell] {
    return Array(rows.joined())
}

The Board can print itself using its description method. It has a method which returns true if all of the cells are settled, which means the board is solved:

public var isSolved: Bool {
    return self.cells.all({ $0.isSettled })
}

We also have a method row which takes a Cell’s index and returns its row. Note that a Cell’s index is between 0 and 80 (remember that the board is a 9x9 grid of Cells). Similarly, there’s a column method which returns the column for that index. There’s also a minigrid method which returns the mini-grid containing that index. These three methods are used by canUpdate, which takes an integer value between 1 and 9 and a Cell’s index. It checks to see if I can update the Cell at that index to that value. Lastly, there is an update function which updates the values for a given cell. You can update to an array of possible values. But if you update to exactly one value, it will make sure that it can update that by calling canUpdate.

Solving

The way I solve Sudoku puzzles is to run through all of the cells, find an empty cell, and check the row, column, and the mini-grid to see if there’s only one possible value that can be placed there. If that’s the case, I update the cell to that value. Then I repeat this until the board is solved. This cannot solve every puzzle but is a good place to start.

The solve method has a while loop that runs until board.isSolved returns true. Inside that while loop, it iterates through every cell in the board until it finds one that is not settled. If board.canUpdate returns true, then that number is valid for that cell:

for cell in board.cells {
    if (cell.isSettled) { continue }

    let possibleValues = (1...9).filter({ value in
        return board.canUpdate(index: Int, toValue: value)
    })
}
...
The issue with this is that we do not have the index. The easiest way to get at this is to use the enumerated function: for (offset, cell) in board.cells.enumerated(). It returns an offset and a cell. But enumerated is not exactly the right thing to do in this situation. Enumerated returns a number that starts from zero and increases by one every time. For arrays, it corresponds to the index, but for other types of collections, this is not always the case. If you called enumerated on an ArraySlice, the first element will have an offset of zero, even though its index may be two, or wherever that ArraySlice starts. It’s not exactly right, so I extended Collection to add a method withIndexes. withIndexes returns an index instead of an offset. The cells can now be updated by index. If the array is empty, then this technique can’t solve this board, so we return from the function immediately:

if possibleValues.isEmpty {
    return
}

The last thing we do is update the board to those values that we found were valid for that index:

try? board.update(index: index, values: possibleValues)

Brute Force Sudoku Solver

The previous technique does not solve every board. Consider this example:

let boardString = "4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......"

If I let the solver run on this, it will go into an infinite loop. I initially believed that all Sudoku puzzles can be solved this way, but upon doing more research found this post by Peter Norvig about sophisticated strategies for solving Sudoku puzzles. It highlights a number of strategies, but coding all the different strategies would involve a lot of work. This is why I will write a brute force solver for this problem.

Suppose a given Sudoku board has 60 unset cells. Each of those 60 cells could have one of nine possible values. Typically, brute forcing 9^60 possibilities could take hundreds of billions of years, but because we narrowed which values are possible using our first, naive solver, the set of possibleValues is likely much smaller than 9.
The first step in brute forcing is to call the naive solver, and fill in the cells with all of their possible values. In an unsettled cell, pick the first one of those values, and see if the board is solvable. If it is, then that was the correct choice; otherwise, try the second possible value for that cell, and so on until the puzzle is solved. We’re going to try every possible value, with a few optimizations that will be added later.

First, we check if the puzzle is solved by the naive method. If that’s the case, we can just return from the solve method. Once we’ve tried that, we’ll check our cells for the first one that isn’t settled:

let optional = board.cells.first(where: { !$0.isSettled })

At this point, we know that optional is never going to be nil, because the board is still unsolved, so we must have some cell that’s not settled:

guard let cell = optional else { return }

Next, check all the possible values that this cell could be. Grab all the values in the cell with cell.values, and for each of those values create a scratch pad to test that the current value is the correct value for this box:

for value in cell.values {
    var copy = board
    try copy.update(index: index, values: [value])
    ...

Once we have that copy, we can try to update the copy. Update is a throwing function, so we need to wrap it and catch our error. If the error is a consistency error, then continue. Now we have a new board, and a cell that we’ve updated to something that we think might be right - we still need to solve the rest of the board.
The best way to do that is to run the exact same thing again, and see if you generate any consistency errors through recursion:

for value in cell.values {
    var copy = board
    do {
        try copy.update(index: index, values: [value])
        print(copy)
    } catch {
        if error is ConsistencyError { continue }
    }
    let solver = Solver(board: copy)
    solver.bruteForce()
    if solver.isSolved {
        self.board = solver.board
        return
    }
}

We can make an optimization that will help this code run faster. Suppose you are working with a big tree. The root, or top, of the tree is the first choice that you make. That first choice has some number of potential options, each of which represents a branch. Each of those branches then represents the next choice you will make about which cell to use. As you go through this tree, you sometimes update a value and get a consistency error. This means that everything below that branch cannot be a correct solution, so you throw it away. If we can throw out more of this tree earlier, and higher up, then we solve the problem faster.

Making this improvement does not take much effort. Sort the cells according to how many potential values they have, and start with the one with the smallest number:

let optionalTuple = board.cells
    .withIndexes()
    .sorted(by: { left, right in
        return left.1.values.count < right.1.values.count
    })
    .first(where: { !$0.1.isSettled })

This ensures that the first thing we check has a very small number of potential options, and if one of those options ends up being incorrect, we can throw away a large part of the tree, which leads to a faster computation. You can see that the board starts to fill in from the bottom, sometimes from the middle, and does not necessarily always fill in from the top. This is how you know the optimized algorithm is working.

Error Handling

I like this problem so much because it takes advantage of a few Swift features that are worth noting, in particular, errors.
When you have a system that is synchronous and has failure modes, it can be nice to describe those failure modes with something like our consistency error, then catch those errors later and deal with them in special ways as needed.

Sequences and Collections

Swift has an excellent sequence and collection system. For example, we can do slicing, flatMapping, mapping, filtering, etc. While there are some rougher edges to Swift, and it's still a very new language, the sequence and collection handling is very rich and helps you solve problems. Sequences and collections give you all these interesting and useful tools that you can use to solve the problem more easily. And if you were at Playgrounds in Melbourne or try! Swift Tokyo, you saw my talk about sequences and collections and you know how much I love that system in Swift.

Struct Deep Copies

Lastly, I want to bring up Swift's struct system and the way that it copies values into new references, which makes copying trivially easy. If you want to do this in Objective-C, you have to implement copyWithZone, then make a new object with allocWithZone, copy the value into the right-hand side, and return it. In Swift, it's simply:

var copy = board

You can explore the full Sudoku solver source code repository on GitHub.

About the content

This talk was delivered live in September 2017 at try! Swift NYC. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.
https://academy.realm.io/posts/try-swift-nyc-2017-souroush-khanlou-spontaneous-swift-sudoku-solving/
How to use JNBridgePro with new and emerging languages

Over the last few months, we've given examples of how to use JNBridgePro to bridge between various "emerging languages" (for example, Python – here and here, Groovy, and Clojure) and "legacy" Java or .NET code. But what if you encounter a brand new language that you want to integrate with your Java or .NET code? How would you approach this problem? Here are some guidelines.

The "new" language must have an implementation on the Java or .NET platforms. Since JNBridgePro bridges between Java and .NET (or, more precisely, between the Java Runtime and the .NET Common Language Runtime), the new language must be implemented on one of these platforms.

If you want to call from the new language to your Java or .NET code, the "new" language must allow for calls to other binaries that run on the same platform. If the language has been implemented on the Java platform (like Jython or Clojure), it needs a way to call classes in external Java binaries; a CLR-based language likewise needs a way to call methods in external .NET DLLs (here are instructions for NetCOBOL).

Next, make sure that JNBridgePro has been properly configured. If you're calling from a .NET-based application (in whatever language), you can place configuration information in the application configuration file in the usual way, or you can call the JNBRemotingConfiguration.specifyRemotingConfiguration() API using whatever mechanism your language allows to call methods in external .NET DLLs. Similarly, if your calling language is JVM-based, configure JNBridgePro through a call to DotNetSide.init() as you would if you were starting in Java, again using whatever mechanism the language uses to call external Java binaries.

Watch out for languages that load classes, and run their initializers, earlier than you might expect. Recall that this happened in our Jython-calling-.NET example. As we saw in that blog post, the problem can be avoided by using package import statements, which don't cause classes to be loaded. In general, you will need to be aware of your language's behavior when it comes to loading classes and executing initializers.
If this happens before JNBridgePro can be configured, it can cause errors.

There are lots of new languages out there, and many are based on the .NET and Java platforms. Over the past few months, as a way to help alleviate developer fatigue, we've provided examples of how JNBridgePro can be used to support interoperability between some of these new languages and Java or .NET. Based on what we discovered while putting together those posts, we've assembled this framework, so you can approach any novel interoperability scenario involving a new JVM- or CLR-based language. We will be posting new language interoperability scenarios in the coming months, but in the meantime, we hope this framework will be useful for any new interoperability situations you might encounter. Are you planning any integrations using new JVM- or CLR-based languages? If so, let us know.
https://jnbridge.com/blog/how-to-use-jnbridgepro-with-new-and-emerging-languages
Copies a string into the value of a Slapi_Value structure.

#include "slapi-plugin.h"
int slapi_value_set_string(Slapi_Value *value, const char *strVal);

This function takes the following parameters:

value - Pointer to the Slapi_Value structure in which to set the value.
strVal - The string containing the value to set.

This function returns one of the following:

0 if the value is set.
-1 if the pointer to the Slapi_Value is NULL.

This function sets the value of the Slapi_Value structure by duplicating the string strVal. If the pointer to the Slapi_Value is NULL, nothing is done and the function returns -1. If the Slapi_Value already contains a value, it is freed from memory before the new one is set. When you are no longer using the Slapi_Value structure, you should free it from memory by calling slapi_value_free().
http://docs.oracle.com/cd/E19424-01/820-4810/aaipg/index.html
Subject: Re: [boost] version conflicts: is there a solution?
From: Lewis Hyatt (lhyatt_at_[hidden])
Date: 2009-06-24 23:08:42

Artyom <artyomtnk <at> yahoo.com> writes:

> I just want to remind that there is a script I had written
> that renames the boost namespace and macros, preventing namespace collisions
> of two different versions of Boost.
>
> So take a look at it, and give feedback, so I would be able to
> submit it officially to Boost.
>
> Artyom

Thank you all for the helpful comments; this is a vexing problem indeed. I think the renaming script looks like my best bet, I will check out bcp as well.

-Lewis

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2009/06/153407.php
Provided by: grass-doc_7.6.1-3build1_all

GRASS GIS Database

A GRASS GIS Database is simply a set of directories and files with a certain structure which GRASS GIS works with efficiently. A Location is a directory with data related to one geographic location or a project. All data within one Location has the same cartographic projection. A Location contains Mapsets, and each Mapset contains data related to a specific task, user, or smaller project. Within each Location, a mandatory PERMANENT Mapset exists which can contain commonly used data within a Location, such as base maps. The PERMANENT Mapset also contains metadata related to the Location, such as its projection. When GRASS GIS is started, it connects to a Database, Location, and Mapset specified by the user.

Fig. 1: GRASS GIS Database structure as visible to the user

GRASS GIS Database

All data for GRASS GIS must be in a GRASS GIS Database, which is a directory (visible on the disk) containing subdirectories which are GRASS Locations. A user can have one or more Databases on the disk. Typically users have one directory called grassdata in their home directory. In a multi-user environment, users often have a grassdata directory mounted as a network directory (network file system). For teams, a centralized GRASS DATABASE would be defined in a shared network file system (e.g. NFS). GRASS GIS Databases can be safely copied or moved as any other directories.

Don't be confused with (relational) databases, which are used in GRASS GIS to hold attribute data and might be part of the GRASS GIS Database. From the user's point of view, a GRASS GIS Database with all its data in it is similar to, e.g., a PostGIS database, as it stores all information inside in a specific format and is accessible by specific tools. The GRASS GIS Database is in GRASS GIS often called GISDBASE or DATABASE.

GRASS Locations

A Location is a directory which contains GRASS Mapsets, which are its subdirectories. All data in one Location has the same projection (coordinate system, datum).
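On disk, this structure is nothing more than nested directories. A minimal sketch (the Database and Location names here are invented, and real Mapsets also contain files such as WIND written by GRASS itself):

```shell
# One Database directory ("grassdata"), one Location per project/projection
# ("nc_project"), and inside it the mandatory PERMANENT Mapset plus a
# per-user Mapset.
mkdir -p grassdata/nc_project/PERMANENT
mkdir -p grassdata/nc_project/user1
ls grassdata/nc_project
```

In practice you never create these directories by hand; GRASS GIS creates them through the startup screen or the grass command, which also writes the projection metadata into PERMANENT.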
Each Location must contain a Mapset called PERMANENT. Typically, a Location contains all data related to one project or a geographic area (geographic location or region). Alternatively, a Location can simply contain all data in a given projection. GRASS Locations can be safely copied or moved as any other directories. A compressed Location is usually what GRASS users exchange with each other when they want to share a lot of data. For example, GRASS GIS sample data are provided as Locations.

Don't be confused with location as a place (file or directory) in a file system. The word location in GRASS Location refers to a location or area on Earth (or whatever is applicable). Users and programmers familiar with relational databases such as PostgreSQL can view a Location as an individual database inside the system, or a storage area which would be equivalent to the GRASS GIS Database. Mapsets in a Location are like namespaces or schemas inside a database.

GRASS Mapsets

Mapsets contain the actual data, mostly geospatial data, referred to as maps in GRASS GIS. Mapsets are a tool for organizing maps in a transparent way, as well as a tool for isolating different tasks to prevent data loss. GRASS GIS is always connected to one particular Mapset. GRASS GIS modules can create, modify, change, or delete data only in the current Mapset. By default, only the data from the current Mapset and the PERMANENT Mapset are visible. Using the g.mapsets module or the GUI, other Mapsets can be made visible and seamlessly accessible. All data are available for reading when the Mapset is specified explicitly; for example, to access the map streets in Mapset new_highway, a user can use streets@new_highway. For maps which are in the current or PERMANENT Mapsets, or in Mapsets set as visible (accessible), there is no need to use the @mapset syntax.

Mapsets are used to store maps related to one project, a smaller project, a specific task, an issue, or subregions.
In a multi-user environment, when a team works together on one project, Mapsets support simultaneous access of several users to the maps stored within the same Location. Besides access to his or her own Mapset, each user can also read maps in the PERMANENT Mapset, and in other users' Mapsets when set. However, each user can modify or remove only the maps in his or her own Mapset.

Besides the geospatial data, a Mapset holds additional data such as color tables (managed e.g. by r.colors) and the current computational region's extent and resolution, stored in a file called WIND and managed by g.region.

Mapsets can be copied and moved as directories, however only when it is clear that the projections of both Locations (as reported by g.proj) match each other. Since this is sometimes hard to establish, it is recommended to use r.proj or v.proj to reproject the data. The files and directories should not be moved or modified directly, but only using GRASS GIS tools.

The role of the PERMANENT Mapset

When creating a new Location, GRASS GIS automatically creates a special Mapset called PERMANENT where the core data for the Location are stored. Since the maps in the PERMANENT Mapset are visible from all the other Mapsets, it can be used to store the base maps (base cartography) and data common to all projects, or needed for different analyses done in separate Mapsets. In a multi-user environment, data in the PERMANENT Mapset can only be added, modified or removed by the owner of the PERMANENT Mapset; however, they can be accessed, analyzed, and copied into their own Mapset by the other users. The PERMANENT Mapset is useful for providing general spatial data (e.g. an elevation model), accessible but write-protected to all users who are working in the same Location as the database owner. To manipulate or add data to PERMANENT, the owner can start GRASS GIS and choose the relevant Location and the PERMANENT Mapset.
The PERMANENT Mapset also contains the DEFAULT_WIND file, which holds the default computational region's extent and resolution values for the Location (which all Mapsets will inherit when they are created). Users have the option of switching back to the default region at any time.

Importing, exporting and linking data

GRASS GIS works only with data which are imported into a GRASS Database, so all data needs to be imported, e.g. by r.in.gdal or the highly convenient r.import, before the actual analysis. Data in a GRASS Database can be exported using, for example, r.out.gdal in the case of raster maps. For cases when import is not desirable, an option to link external data exists. The projection of the linked data must match the Location's projection, otherwise the external data cannot be linked. (Linking data in a different projection is not allowed, as it would require on-the-fly reprojection, which could cause inconsistencies in the data.) For example, the module r.external links external raster data, so that the data are accessible in the GRASS Database as standard raster maps. Similarly, for newly created maps, r.external.out sets up a format and directory where the actual data will be stored; however, in the GRASS Database the data will be created as standard maps.

Starting GRASS GIS using GUI

After launching GRASS GIS, the startup window will open (Fig. 2).

Fig. 2: GRASS GIS startup window

The startup window provides these functions:

1 Selecting the GRASS GIS Database directory.
2 Selecting the Location (e.g. a project or area). See the Location Wizard (4) for creating new Locations.
3 Selecting the Mapset (a subproject or task). Creating a new Mapset requires only a name.
4 The Location Wizard for creating new Locations based, for example, on an existing georeferenced file or EPSG code.
5 Download a sample Location from the Internet.
6 Start GRASS GIS once you have selected an existing Location and Mapset or defined a new one.
The graphical user interface wxGUI will open and provide you with a menu system, map visualization tool, digitizer, and more.

Starting GRASS GIS using command line

GRASS GIS can be started with a given Database, Location and Mapset from the command line. For example, the following will start in a given Mapset with only the command line interface:

grass76 --text ~/grassdata/mylocation/mymapset

And the following will create the given Location with the projection given by the EPSG code, and it will start the default interface (GUI or command line):

grass76 -c EPSG:5514:3 ~/grassdata/mylocation

See the grass command manual for more details.

Creating a New Location with the Location Wizard

The wxGUI graphical user interface provides a graphical Location Wizard which lets you easily create a new Location for your own data. You will be guided through a series of dialogues to browse and select predefined projections or to define custom projections. The most convenient way of using the Location Wizard is creating a new Location based on a georeferenced file, such as a Shapefile or GeoTIFF, or by selecting the corresponding EPSG projection code. In the case of using a georeferenced file, you are asked whether the data itself should be imported into the new Location. The default region is then set to match the imported map. After defining a new Location, wxGUI starts automatically. If data were already imported, you can add them into the Layer Manager now and display them. More data can be imported into the Location, e.g. using the import options in the File menu in the Layer Manager, or r.import.

See also

GRASS GIS 7 Reference Manual
GRASS GIS 7 startup program manual page
Importing data on GRASS Wiki
r.import, v.import, r.external, v.external, r.proj, v.proj

Last changed: $Date: 2018-09-05 07:59:43 +0200 (Wed, 05 Sep 2018) $

© 2003-2019 GRASS Development Team, GRASS GIS 7.6.1 Reference Manual
http://manpages.ubuntu.com/manpages/eoan/man1/grass_database.1grass.html
- Domain Name Services
- The Function of DNS
- Examples of Name Resolution
- Using the MMC
- Summary
- Troubleshooting

The Need for DNS

Domain Name Services (DNS) enable us to use human-friendly names for our computers. Even though the network uses numbers to identify each machine on a network, DNS enables people to think of computers in terms of names; the DNS service then maps those names to numeric addresses. DNS is used only with the Internet Protocol (IP). DNS is critical to Active Directory (AD) because it is used to find Domain Controllers (DCs) and services on Domain Controllers such as Lightweight Directory Access Protocol (LDAP), Kerberos, and the Global Catalog. When a client needs to authenticate, it issues a DNS request for a nearby Active Directory Domain Controller. The DNS server then replies with the IP address and other information about the DC. In addition, when a DC needs to replicate with other DCs, it uses DNS to find the IP address of the DC. When we use Active Directory tools to add, subtract, or modify an Active Directory object, we use DNS to find an LDAP server running on a DC near us. Without DNS, Active Directory almost completely ceases to function.

The history of DNS began in the early 1980s. For the first few years, the Internet relied on a static text file called a hosts file, which was updated frequently and could be downloaded to an Internet-connected machine on a regular basis. Obviously, this did not scale beyond hundreds or thousands of hosts. The first DNS Request for Comments (RFC) appeared in 1984. Since then, DNS has been the standard methodology for name resolution on the Internet.

TIP An enormous amount of public domain information about DNS can be found at. Internet Request for Comments (RFCs) are considered the authoritative works on any Internet-related protocol or service.
DNS is conceptually a very simple service, akin to a phone directory. Just as a person with a phone directory can translate a name into a phone number, DNS accepts a fully qualified domain name (FQDN) and returns a 32-bit IP address. This is called a forward lookup. Or, it can accept an IP address and return an FQDN, which is called a reverse lookup. The entire process is known as name resolution.

CAUTION The first step in installing DNS or Active Directory is planning. Do not begin implementation of production DNS servers until your DNS and Active Directory namespaces have been planned and decided on.
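The phone-directory analogy can be made concrete with a toy lookup table. The hostnames and addresses below are invented examples, and a real resolver queries a distributed hierarchy of DNS servers rather than a local dictionary; this only sketches the idea of the two lookup directions:

```python
# Toy model of name resolution: a forward zone maps FQDNs to IPv4
# addresses, and the reverse zone is simply the inverted mapping.

forward_zone = {
    "dc1.corp.example.com": "192.168.10.5",
    "www.example.com": "192.168.10.80",
}

reverse_zone = {ip: name for name, ip in forward_zone.items()}

def forward_lookup(fqdn):
    """Forward lookup: FQDN -> 32-bit IP address (dotted quad)."""
    return forward_zone[fqdn]

def reverse_lookup(ip):
    """Reverse lookup: IP address -> FQDN."""
    return reverse_zone[ip]
```

An Active Directory client locating a Domain Controller uses the forward direction; tools that turn a logged IP address back into a hostname use the reverse direction.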
http://www.informit.com/articles/article.aspx?p=130969&amp;seqNum=7
Disk Quota Provider

The Windows Disk Quota provider allows administrators to control the amount of data that each user stores on an NTFS file system. The provider can log events when users are near their quota, and deny further disk space to users who exceed their quota. The Event Log provider can be used with the Disk Quota provider to track quota issues. The __Win32Provider instance name is "DskQuotaProvider".

Windows XP: The Disk Quota provider is not available.

As an instance, method, and event provider, the Disk Quota provider implements the standard IWbemProviderInit interface and the following IWbemServices methods:

The Disk Quota provider also implements the following provider framework interface:

The Wmipdskq.mof file contains the Disk Quota provider, and association and registration classes. You can access the Disk Quota provider classes only in the root\cimv2 namespace. The Disk Quota provider supports the following classes:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa390365.aspx
Your official information source from the .NET Web Development and Tools group at Microsoft.

It should be no surprise that JScript Documentation Comments power much of what you see in JScript IntelliSense in VS2008. Perhaps the most useful of these comments is the "reference" comment. The "reference" comment allows you to "see" functions and objects outside of your current file in the completion list. I've gotten quite a few questions about this feature, so I wanted to provide a reference for when and how to use it. There are four usage patterns:

Referencing Another JS File

/// <reference path="path-to/another-script.js" />

If the "path" attribute points to another JS file, any objects or functions defined inside that file (or in a file referenced by that file) will show up in IntelliSense. Yes, this implies a transitive closure. The intent was to reduce the number of references you would need on any given file. Thus, don't be surprised if more scripts show up to the party than you invited!

Referencing a Web Service

/// <reference path="path-to/wcf-service.svc" />
/// <reference path="path-to/asmx-service.asmx" />

If the "path" attribute points to a web service (either WCF or ASP.NET), any objects or functions defined in the generated proxy for that service will show up in IntelliSense. Web service proxies in JScript are an ASP.NET AJAX feature. Therefore it is critical that you reference "MicrosoftAjax.js" before you reference any web services.

Referencing a Web Page

/// <reference path="path-to/default.aspx" />

If the "path" attribute points to a page (an ASPX, HTML, or Master), IntelliSense will behave as if you were on the page. This mode is really just syntactic sugar and behaves equivalently to manually copying each script reference from the page over to your script file. The only difference is that you will not see any inline script blocks reflected in IntelliSense. However, you will be able to reference any elements with an ID on the page.
Any scripts or elements included via a Master page will also be reflected. This mode is useful for scripts that are meant to be paired exclusively with a markup page, i.e. in a "code-beside" fashion.

Referencing an Embedded Resource

/// <reference name="resource-name" />
/// <reference name="resource-name" assembly="assembly-name" />

We recognized that scripts will frequently be embedded inside an assembly. To reference such a script, set the "name" attribute to the resource name, and set the "assembly" attribute to the assembly name. The "assembly" attribute is optional. If left out, System.Web.Extensions will be assumed as the assembly. This is why referencing "MicrosoftAjax.js" does not require an "assembly" attribute.

Other Hints

Here are a few other subtle tips:

Hope this helps!

Jeff King
Program Manager
Visual Studio Web Tools

Not directly related, but is there a way to filter the JavaScript IntelliSense to not show IE-only syntax? IOW, I have a <div id='myDiv' /> in my page, I do not want window.myDiv to appear. Does VS2008 support this? If not, consider this a feature request :)

Hi Brock Allen: unfortunately it's going to have to be a feature request. :) It's behaved like that longer than I can remember, and we didn't make any changes to that code this version.

It would be nice if this actually worked for classes in the namespaces created with Type.registerNamespace, which Microsoft told us to use when working with Microsoft AJAX. It would also be nice if the web controls that embed JavaScript files into the page would also be included in the IntelliSense. Without these two things, it's hard for an API developer to actually make any use of any of this and expose the JS methods to the developer.

SharpGIS, in a couple of days Visual Studio 2008 RTM will be shipped. And Scott Guthrie promised us that support for most JS frameworks will be included.
So I believe him :)

I can't get the IntelliSense to work for anything except the built-in JavaScript objects. I've tried both with the ScriptManager, regular script tags, and using the reference tag within a js file. Any ideas why? I'm using the RTM Team Suite trial version.

Is it possible to use <reference> on a virtual directory? I have websites with several virtual directories in order to share JavaScript. The structure is mainly for ease of development. I have tried pretty well every construct I can think of, and dragging the JavaScript file from the Solution Explorer inserts what looks like the correct path, but the IntelliSense just does not pick up the content.

Thanks for the explanation. A little off topic: any idea why IntelliSense would not show documentation comments? I get color coding and IntelliSense on method names and parameter names, but the comments are not shown. Anything for me to look at? Thanks!

Adding /// <reference name="MicrosoftAjax.js"/> gives me perfect IntelliSense in an external script file. But how do I get the same experience in an inline script block?

<script type="text/javascript">
/// <reference name="MicrosoftAjax.js"/>

doesn't do it. Neither does

<asp:ScriptManager <Scripts> <asp:ScriptReference

Thanks, Magnus

Yeah, I don't see IntelliSense for $get, $find, etc. in an inline script block inside an aspx. It works fine in an external js using reference name=microsoftajax.js. Please advise.

Hi Wahy: Do you have an asp:ScriptManager on your page? That is required to see $get, $find, etc.

Hi Magnus Markling: an explicit reference to "MicrosoftAjax.js" is not needed, as the ScriptManager will add that for you by default. If it's still not working, please contact me at jking-at-microsoft-dot-com and we can examine your specific case. Thanks!

Hi DJames: While editing the active document, any Doc Comments defined there will not show up.
You will only be able to see Doc Comments defined in other (referenced) files. This is a VS2008 limitation. We will see what we can do in the next version. Thanks!

Hi Johan Nordberg: If you are still seeing issues, please contact me at jking-at-microsoft-dot-com and we'll investigate further. Thanks!

Hi SharpGIS: Namespaces created with Type.registerNamespace will be reflected in IntelliSense if they come from an external file. Namespaces in the local file created with Type.registerNamespace will not appear, because we do not evaluate that code. It's pretty difficult to infer such things, but we'll see what we can do in the next version. Dynamically registered scripts are not detectable by the IntelliSense engine, since they only resolve at runtime. This is one more thing that's difficult to infer, but (again) we'll see what we can do in the next version. Thanks!
http://blogs.msdn.com/b/webdev/archive/2007/11/06/jscript-intellisense-a-reference-for-the-reference-tag.aspx
2009/7/20 Chris Anderson <jchris@apache.org>:
> Devs,
>
> I've just committed a patch (r795687) that adds the ability to filter
> _changes requests with a JavaScript function.
>
> The function signature is:
>
> function(doc, req, userCtx) {
>   return (true or false);
> }
>
> When it returns true (or something truthy, like a non-empty string or
> a non-zero number), the change is passed along to the user, otherwise
> it is skipped.
>
> The filter functions are stored on design documents under the
> "filters" field. The current best source of documentation is the
> changes.js test.
>
> To query changes with a filter, the syntax is like:
>
> GET /db/_changes?filter=ddocname/filtername
>
> The biggest problem with this patch is that it uses a JavaScript OS
> process per connected filtered listener. Fixing this is an
> optimization as it won't affect the API, which is why I'm comfortable
> committing this.
>
> I'd appreciate some review to make sure the implementation is on the
> right track.
>
> Cheers,
> Chris

Implementation seems good to me and tests pass. For userCtx filtering, I guess a way would be needed to filter all changes without passing any parameter, to forbid all changes from being read. Maybe by adding a main validate_changes on top of a design doc? Same args, but this function would be applied to all changes. What do you think about it?

- benoit
http://mail-archives.apache.org/mod_mbox/couchdb-dev/200907.mbox/%3Cb7cd8ed10907200019q4cd6014fm7243d9bd4da053ba@mail.gmail.com%3E
I tried out the recently released and very nice binary library and got tripped up by laziness. Much simplified, my problem was:

data Bad = Bad

instance Binary Bad where
  put _ = return ()
  get = do
    fail "this is bad"
    return Bad

Here I expected decoding to Bad to fail, but to my surprise:

ghci> decode (encode Bad) :: Bad
Bad

From the example, it is clear that my brain has the following law for fail built in:

fail err >> m === fail err

Some testing reveals that this law is violated not only by the Get monad, but also by State, Reader, and Writer. (There was a discussion about the strictness of the state monad a while back, I can't recall if this came up...)

So, where am I going wrong? Am I mixing up fail and mzero? Are there other laws for fail that actually hold?

/ Ulf
http://article.gmane.org/gmane.comp.lang.haskell.libraries/6162
using tiles without struts

Hi, I am trying to make an application using Tiles 2.0. The description of my web.xml is as follows: tiles... tiles-servlet tiles-api tiles-core tiles-jsp. When I try to run

Related threads: struts tiles implementation; Some errors in Struts; tiles using struts2; I am not getting Problem (RMI); Getting an error; Tiles in jsp; Struts Tiles; redirect with tiles; Session management using tiles; Developing Simple Struts Tiles Application; I/O Program output error; struts tiles framework; Getting Error - Development process; Tiles - Struts; xml and xsd; Im not getting validations; XML DOM error; Getting all XML Elements; i got an error while compile this program manually; Program Error - WebSevices; getting result in table dynamically; Login form in Struts2 version 2.3.16; java struts error; struct program; Struts - Jboss - I-Report; getting dropdown values using apache commons in servlet; Produces XML file but format not correct for storing data using JSP and XML; tomcat server start up error; Java - Struts; XML error message; java and xml problem; getting a problem in execution; Getting File path error; program code for login page in struts by using eclipse; accessing xml using java
http://www.roseindia.net/tutorialhelp/comment/3696
Introduction

In this post I'm going to cover the basics of creating and publishing a gem using the bundle gem command provided by Bundler. We're going to use bundler to create a gem template for us. We'll then take that skeleton gem, add some functionality to it, and publish it for all the world to use.

For the purposes of this tutorial I need a very simple example of something which you could conceivably want to release as a gem. How about a simple Sinatra web app which tells you the time? Sure, that'll work. We'll call it Didactic Clock. In order to make this server implementation need more than a couple of lines of code we'll add the requirement that the clock tells you the time in a verbose form like "34 minutes past 4 o'clock, AM".

Preparing to create a gem

A great way to create and test gems in a clean environment is to use the awesome rvm and in particular rvm's awesome gemset feature. I assume you're already set up with rvm. If not, go get set up now!

First off we'll create a separate gemset so that we can create and install our gem in a clean environment and be sure that someone installing our gem will have all the dependencies they need provided to them. We're going to be creating a gem called didactic_clock, so we'll name our gemset similarly. We'll create the gemset and start using it by executing:

    rvm gemset create didactic_clock
    rvm gemset use didactic_clock

From now on I'll assume we're always using this clean-room gemset.

Creating the skeleton

First let's install bundler into our gemset:

    gem install bundler

Now we'll ask bundler to create the skeleton of a gem. In this tutorial we're going to be creating a gem called didactic_clock.
We'll ask bundler to create a skeleton for a gem with that name by calling:

    bundle gem didactic_clock

You should see some output like:

    create didactic_clock/Gemfile
    create didactic_clock/Rakefile
    create didactic_clock/.gitignore
    create didactic_clock/didactic_clock.gemspec
    create didactic_clock/lib/didactic_clock.rb
    create didactic_clock/lib/didactic_clock/version.rb
    Initializating git repo in /Users/pete/git/didactic_clock

Modifying our gemspec

Bundler creates a basic .gemspec file which contains metadata about the gem you are creating. There are a few parts of that file which we need to modify. Let's open it up and see what it looks like:

    # -*- encoding: utf-8 -*-
    $:.push File.expand_path("../lib", __FILE__)
    require "didactic_clock/version"

    Gem::Specification.new do |s|
      s.name        = "didactic_clock"
      s.version     = DidacticClock::VERSION
      s.platform    = Gem::Platform::RUBY
      s.authors     = ["TODO: Write your name"]
      s.email       = ["TODO: Write your email address"]
      s.homepage    = ""
      s.summary     = %q{TODO: Write a gem summary}
      s.description = %q{TODO: Write a gem description}

      s.rubyforge_project = "didactic_clock"

      s.files         = `git ls-files`.split("\n")
      s.test_files    = `git ls-files -- {test,spec,features}/*`.split("\n")
      s.executables   = `git ls-files -- bin/*`.split("\n").map{ |f| File.basename(f) }
      s.require_paths = ["lib"]
    end

You can see that Bundler has set up some sensible defaults for pretty much everything. Note how your gem version information is pulled out of a constant which Bundler was nice enough to define for you within a file called version.rb. You should be sure to update that version whenever you publish any changes to your gem. Follow the principles of Semantic Versioning. Also note that there are some TODOs in the authors, email, summary, and description fields. You should update those as appropriate. Everything else can be left as is for the time being.
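One reason semantic version strings work well with RubyGems is that versions are compared segment by segment rather than as plain strings. A quick illustration (this snippet is mine, not part of the original post), using the Gem::Version class that ships with RubyGems:

```ruby
require "rubygems" # Gem::Version is part of RubyGems itself

# Versions compare numerically per segment, not lexically:
a = Gem::Version.new("1.9.0")
b = Gem::Version.new("1.10.0")

puts a < b  # true, since 10 > 9 numerically (as a string "1.10.0" would sort lower)

# Prerelease versions sort before the corresponding release:
puts Gem::Version.new("2.0.0.pre") < Gem::Version.new("2.0.0")  # true
```

This is why bumping the segment that matches the kind of change you made (patch, minor, major) gives dependency resolution something meaningful to work with.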
Adding a class to our lib

We'll start by creating a TimeKeeper class which will report the current time in the verbose format we want the Didactic Clock server to use. To avoid polluting the client code's namespace it is important to put all the classes within your gem in an enclosing namespace module. In our case the namespace module would be DidacticClock, so we're creating a class called DidacticClock::TimeKeeper. Another convention which is important to follow when creating gems is to keep all your library classes inside a folder named after your gem. This avoids polluting your client's load path when your gem's lib path is added to it by rubygems. So taking both of these conventions together we'll be creating a DidacticClock::TimeKeeper class in a file located at lib/didactic_clock/time_keeper.rb. Here's what that file looks like:

    module DidacticClock
      class TimeKeeper
        def verbose_time
          time = Time.now
          minute = time.min
          hour = time.hour % 12
          meridian_indicator = time.hour < 12 ? 'AM' : 'PM'
          "#{minute} minutes past #{hour} o'clock, #{meridian_indicator}"
        end
      end
    end

Adding a script to our bin

We want users of our gem to be able to launch our web app in sinatra's default http server by just typing didactic_clock_server at the command line. In order to achieve that we'll add a script to our gem's bin directory. When the user installs our gem the rubygems system will do whatever magic is required such that the user can execute the script from the command line. This is the same magic that adds the spec command when you install the rspec gem, for example.
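As a side note, the formatting logic is easy to sanity-check without booting the server. Here is a quick sketch of mine (not part of the original post) that mirrors the method above but takes the time as a parameter so it can be pinned down:

```ruby
# Mirror of DidacticClock::TimeKeeper#verbose_time, parameterized on the
# time so we can feed it a fixed value instead of Time.now.
def verbose_time(time)
  minute = time.min
  hour = time.hour % 12
  meridian_indicator = time.hour < 12 ? 'AM' : 'PM'
  "#{minute} minutes past #{hour} o'clock, #{meridian_indicator}"
end

# 16:34 -> 16 % 12 = 4, and 16 >= 12 so PM:
puts verbose_time(Time.new(2010, 11, 1, 16, 34))  # "34 minutes past 4 o'clock, PM"
```

Incidentally, this also surfaces a quirk of the logic: at a few minutes past midnight it reports "0 o'clock, AM", since hour % 12 is zero then.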
So we'll save the following to bin/didactic_clock_server:

    #!/usr/bin/env ruby
    require 'sinatra'
    require 'didactic_clock/time_keeper'

    # otherwise sinatra won't always automagically launch its embedded
    # http server when this script is executed
    set :run, true

    get '/' do
      time_keeper = DidacticClock::TimeKeeper.new
      return time_keeper.verbose_time
    end

Note that we require in other gems as normal, we don't require rubygems, and that we don't do any tricks with relative paths or File.dirname(__FILE__) or anything like that when requiring in our TimeKeeper class. Rubygems handles all that for us by setting up the load path correctly.

Adding a dependency

Our little web app uses Sinatra to serve up the time, so obviously we need the Sinatra gem installed in order for our own gem to work. We can easily express that dependency by adding the following line to our .gemspec:

    s.add_dependency "sinatra"

Now Rubygems will ensure that sinatra is installed whenever anyone installs our didactic_clock gem.

Building the gem and testing it locally

At this point we're done writing code. Bundler created a git repo as part of the bundle gem command. Let's check in our changes to the git repo. git commit -a should do the trick, but obviously feel free to use whatever git-fu you prefer.

Now we're ready to build the gem and try it out. Make sure you're still in the clean-room gemset we created earlier, and then run:

    rake install

to build our didactic_clock gem and install it into our system (which in our case means installing it into our didactic_clock gemset). If we run gem list at this point we should see didactic_clock in our list of gems, along with sinatra (which will have been installed as a dependency).

Now we're ready to run our app by calling didactic_clock_server from the command line. We should see sinatra start up, and if we visit it we should see our app reporting the time in our verbose format. Victory!

Publishing our gem

The last step is to share our creation with the world.
Before we do that you'll need to set up rubygems in your system to publish gems. The instructions at rubygems.org are easy to follow.

Bundler provides a rake publish task which automates the steps you would typically take when publishing a version of your gem, but it's fairly opinionated in how it does so. The task will tag your current git commit, push from your local git repo to some upstream repo (most likely in github), and then finally build your gem and publish your .gem to rubygems.org. If you don't have an upstream repo configured then you'll probably get an error like:

    rake aborted!
    Couldn't git push. `git push 2>&1' failed with the following output:

    fatal: No destination configured to push to.

So, now would be the time to set up an upstream repo. Doing that with github is really straightforward. Once you have your local git repo configured with an upstream repo you can finally publish your gem with rake publish. Now anyone who wants to install your gem can do so with a simple gem install command. Congratulations! Fame and fortune await you!

Conclusion

Hopefully I've shown that creating and publishing a well-behaved gem is pretty simple. The didactic_clock sample I created is up on github, and of course the gem is published on rubygems.org and can be installed with gem install didactic_clock.
http://blog.thepete.net/2010/11/creating-and-publishing-your-first-ruby.html
by Peter Daukintis

Recently, I found myself needing to animate a ListBox item in response to its underlying bound data changing in a Windows Phone 7 application. Seems like this would be a fairly common requirement. I should add that a satisfying solution would be one that would adhere to an MVVM approach, be testable, and maintain a separation between view and data. Anyway, I had to do a little bit of digging around to get a solution, so it may save someone else some time.

I started out creating a new project using the 'Windows Phone Application' template. I then cobbled together a view model like so (the sample items in the collection are just placeholders):

    using System.Collections.ObjectModel;
    using System.ComponentModel;

    namespace ListBoxDataChange
    {
        public class ViewModel
        {
            public ViewModel()
            {
                Data = new ObservableCollection<MyItem>
                {
                    new MyItem { Name = "Item one" },
                    new MyItem { Name = "Item two" },
                };
            }

            public ObservableCollection<MyItem> Data { get; set; }
        }

        public class MyItem : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            public void OnPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            private string _name;
            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name != value)
                    {
                        _name = value;
                        OnPropertyChanged("Name");
                    }
                }
            }
        }
    }

For testing purposes only I have created an ObservableCollection of a custom type (MyItem) which implements INotifyPropertyChanged on its Name property. This will ensure that the Silverlight data-binding mechanism will detect both changes to the collection, i.e. additions, deletions, etc., and also changes to the contained data.

The next step to set the test up is to create the phone UI. To do this I opened the solution in Expression Blend for WP7 Beta and dragged a list box into the ContentGrid provided by the project template. Right-click the list box and use the Auto Size > Fill function to expand the list box to fit its container. Next, build the solution to ensure that our data objects are available to Blend for binding.
Then select the PhoneApplicationPage in the Objects and Timelines panel and find its DataContext property in the Properties pane. Click on the small square next to the property and select Data binding to launch the 'Create Data Binding' dialog. Select the option 'CLR Object' and choose the ViewModel class created earlier.

This binds the view model to the page, so we now need to bind the view model's collection to the ListBox ItemsSource property. To do this in Blend, right-click the ListBox in the Objects and Timelines panel and select 'Data Bind ItemsSource to Data…'. In the ensuing dialog choose the DataContext tab and select the Data property here to complete the binding. This will result in the following, since we have not yet edited the ListBoxItem data template.

So now we need to edit the template: first right-click the list box and select 'edit additional templates', then 'edit generated items (itemtemplate)', and choose 'create empty'. This puts Blend into template editing mode and easily allows you to see in place the changes you are making. Add a TextBlock to the empty Grid and bind its Text property to the MyItem Name property. This can be done by locating the Text property in the Properties panel, clicking the small square, selecting data binding, and selecting the Name property.

So, now the scene is set, we can add a storyboard. With the item template selected we can click on the '+' to create a new storyboard. The content of the storyboard is not relevant, so I just animated the scale of the TextBlock.

Now, with the TextBlock selected, navigate to the Assets panel and click on Behaviors. Then click and drag a ControlStoryboardAction onto the TextBlock and examine its properties in the Properties panel. So, if we just had a trigger that would fire when the data in the collection changes, then it looks like we might be able to wire all of this up here in Blend.
One problem is that we don't have one, but File > New Item > Trigger will at least get us started. The Expression Blend samples found here have some examples of custom triggers which are pretty similar. So, I looked at the DataEventTrigger sample and simplified it to produce a DataChangedTrigger. The original sample creates a new dependency property which it binds to the property you want to detect the changes on. This allows it to register a callback when the property changes, and I simply invoke any attached actions when this callback is called.

So after modifying and building the code I can click on the 'New' button (next to the TriggerType) and select my new DataChangedTrigger. In its properties I can type the name of the binding, in this case 'Name' (the property on which we wish to detect the changes). Now I select the storyboard I made earlier.

To test it works I then wired up a click event handler on a button on the user interface which would make changes to the underlying data, like so:

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        ViewModel model = LayoutRoot.DataContext as ViewModel;
        if (model != null)
        {
            int index = counter++ % model.Data.Count;
            model.Data[index].Name = "Data has changed" + counter;
        }
    }

And indeed, the storyboard was triggered each time my data changed.

5 thoughts on "WP7 ListBoxItem Animation on bound data changes (MVVM)"

I'm pretty new to Blend and Windows Phone. I tried to reproduce this, but I'm stuck at changing that trigger. Could you maybe explain this a little bit in more detail, please?

hi beavearony, You can see the project I used here. The code won't work with the current WP7 SDK as this project was created with the CTP version, but the code for the trigger should work fine.

Hi, I was wondering what needs to happen to make this work with the current Win Phone SDK. You mentioned a CTP version that it does work with.
Any idea on how I can accomplish, well, I need to start an animation on a list box item (or not) based on the contents of the data for each list item. Thanks for any suggestions you might have. Regards, Mike

Great stuff, been butting my head against a wall for a few days trying to work this one out.
https://peted.azurewebsites.net/wp7-listboxitem-animation-on-bound-data-changes-mvvm/
Troubleshooting

These are some common issues you may run into while setting up React Native. If you encounter something that is not listed here, try searching for the issue in GitHub.

Port already in use

The React Native packager runs on port 8081. If another process is already using that port, you can either terminate that process, or change the port that the packager uses.

Terminating a process on port 8081

Run the following command to find the id for the process that is listening on port 8081:

    sudo lsof -i :8081

Then run the following to terminate the process:

    kill -9 <PID>

On Windows you can find the process using port 8081 using Resource Monitor and stop it using Task Manager.

Using a port other than 8081

You can configure the packager to use a port other than 8081.

If you added React Native manually to your project, make sure you have included all the relevant dependencies that you are using, like RCTText.xcodeproj, RCTImage.xcodeproj. Next, the binaries built by these dependencies have to be linked to your app binary. Use the Linked Frameworks and Binaries section in the Xcode project settings. More detailed steps are here: Linking Libraries.

If you are using CocoaPods, verify that you have added React along with the subspecs to the Podfile. For example, if you were using the <Text />, <Image /> and fetch() APIs, you would need to add these in your Podfile:

In the project's build settings, User Search Header Paths and Header Search Paths are two configs that specify where Xcode should look for #import header files specified in the code. For Pods, CocoaPods uses a default array of specific folders to look in. Verify that this particular config is not overwritten, and that none of the folders configured are too large. If one of the folders is a large folder, Xcode will attempt to recursively search the entire directory and throw the above error at some point.
To revert the User Search Header Paths and Header Search Paths build settings to their defaults set by CocoaPods, select the entry in the Build Settings panel and hit delete. It will remove the custom override and return to the CocoaPod defaults.

No transports available

React Native implements a polyfill for WebSockets. These polyfills are initialized as part of the react-native module that you include in your application through import React from 'react'. If you load another module that requires WebSockets, such as Firebase, be sure to load/require it after react-native.

If you are hitting the limit on inotify file watches, run this command in your terminal window:

    echo fs.inotify.max_user_watches=582222 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
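Returning to the "port already in use" issue at the top of this page: the condition is easy to detect programmatically before launching the packager. Here is a small stand-alone Python sketch of mine (not from the React Native docs) that probes whether anything is listening on a port:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        # connect_ex returns 0 on a successful connection, an errno otherwise
        return probe.connect_ex((host, port)) == 0

# Occupy a port the way a running packager would:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

print(port_in_use(port))        # True, the port is taken while the server is up
server.close()
```

The same probe against port 8081 before starting the packager tells you whether you need the lsof/kill dance above.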
http://facebook.github.io/react-native/docs/0.42/troubleshooting
NAME

  mount, umount - Mount or unmount a file system

SYNOPSIS

  #include <sys/mount.h>

  int mount(
          int type,
          char *mnt-path,
          int mnt_flags,
          caddr_t data );

  int umount(
          char *mnt-path,
          int umnt_flag );

PARAMETERS

type
  Defines the type of the file system. The types of recognized file systems are:
  -  Reserved for third-party file systems. See NOTES for information about support for third-party file systems.
  -  For internal use only.
  -  For internal use only.
  -  For internal use only.
  -  Compact Disk File System (see cdfs(4))
  -  Distributed File System (layered product)
  -  Versatile Disk File System (see dvdfs(4))
  -  DCE Episode File System (layered product)
  -  File Descriptor File System (used by streams)
  -  File on File Mounting File System (used by streams)
  -  Memory File System (RAM disk)
  -  Advanced File System (AdvFS)
  -  Network File System, Version 2 protocol
  -  Network File System, Version 3 protocol
  -  PC File System
  -  /proc File System (used by debuggers)
  -  System V File System
  -  Berkeley's UNIX File System

mnt-path
  Points to a null-terminated string that contains the appropriate pathname.

mnt_flags
  Specifies which semantics should be used when accessing the file system. One or more of the following flags might be valid, depending on the file system type and flag combination:
  -  Cause all files in the mounted AdvFS fileset to use atomic-write data logging. (See the description of the adl argument for the mount command's -o option in mount(8).)
  -  For internal use only.
  -  For UFS, flush data asynchronously rather than synchronously. For information about the advantages and risks of using this flag, see the discussion of the delayed keyword for the mount command's -o option in mount(8).
  -  For internal use only.
  -  Allow an AdvFS fileset to be mounted as a domain volume even though it has the same AdvFS domain ID as a fileset that is already mounted.
  -  Allow the file system to be exported for both read and write access.
  -  Allow the file system to be exported for read-only access.
  -  For UFS and AdvFS, extend the size of the file system to use all the available storage space in a revised partition. The file system must be already mounted in order to use this option.
  -  For internal use only.
  -  Forcibly mount the file system, even if it is unclean.
  -  In a cluster, enable cluster partitioning, which restricts use of the file system to the member that mounts it. This flag cannot be used on a file system that is already mounted. This flag is automatically set when mounting a UNIX file system (UFS) for read-write access and when mounting an in-memory file system (MFS).
  -  For internal use only.
  -  All new files and directories inherit the group ID of the parent directory. When this flag is not specified, the following SVID III semantics apply: If the parent directory's mode bits include IS_GID, then the group ID of the new file or directory is the parent directory's group ID. If the parent directory's mode bits do not include IS_GID, then the group ID of the new file or directory is the process group ID of the creating process.
  -  For internal use only.
  -  For internal use only.
  -  Obsolete; not used.
  -  For internal use only.
  -  Mark the file access time changes made for reads of regular files in memory, but do not flush them to disk until other file modifications occur. This behavior does not comply with industry standards and is used to reduce disk writes for applications with no dependencies on file access times.
  -  Do not allow access from the file system to either block- or character-special devices.
  -  Do not allow files to be executed from the file system.
  -  Do not honor setuid or setgid bits on files when executing them.
  -  For AdvFS and UFS, enable quotas on the file system.
  -  Obsolete; not used.
  -  For AdvFS and UFS, enable an alternate smooth sync policy wherein dirty UBC pages are flushed to disk after the smoothsync_age period, but only if they are idle for the smoothsync_age period.
     By default, dirty UBC pages are written to disk after the smoothsync_age period, regardless of whether they are still being modified. This policy can be applied only to dirty pages in the file system cache (UBC); dirty pages mapped into virtual memory are always flushed to disk after the smoothsync_age period, even if they are still being modified. The smoothsync_age system attribute can be configured by means of the /sbin/sysconfig command. See sys_attrs_vfs(5) and sysconfig(8) for information about the smoothsync_age attribute and /sbin/sysconfig command, respectively.
  -  For AdvFS and UFS, cause all writes to be written to disk as well as to the buffer cache before the function performing the write operation returns. By default, write operations to disk are done asynchronously of write operations to the buffer cache.
  -  For AdvFS and UFS, prevent excessive asynchronous I/O from overloading the device queue. This flag has no effect if M_SYNCHRONOUS is applied to the file system.
  -  For internal use only. See M_THROTTLE.
  -  The mount operation is being performed on an already mounted file system. This flag allows mount attributes to be changed without unmounting and remounting the file system. The attributes that can be changed for a mounted file system are restricted by most types of file system software. For example, for most types of file systems, you cannot change the access mode from read-write to read-only if the file system is already mounted. For UFS or AdvFS, M_UPDATE is typically specified without M_RDONLY to change a file system that had been mounted read-only to read-write. If M_UPDATE is used in a cluster environment, it is important to remember that while AdvFS filesets can be mounted read-write and be accessible to all cluster members, UFS file systems must be mounted read-only to be available to all cluster members. For UFS, any attempt to use M_UPDATE on a file system that is already mounted read-only and accessible to all cluster members will fail.
data
  Points to a structure that contains the type-specific parameters to mount.

umnt_flag
  May be 0 (zero) or a flag that performs a fast unmount, causing remote file systems to be unmounted without notifying the server.

NOTES

The file specified by the data parameter cannot be a directory file; otherwise either file may be of any type. To call either the mount() or umount() function, the calling process must have superuser privilege.

Two mount() functions are supported by Tru64 UNIX: the BSD mount() and the System V mount(). The default mount() function is the BSD mount() documented on this reference page. The operating system does not support the System V lmount() function.

Third-party file systems do not have type constants defined in the <sys/mount.h> file. For these file systems, functionality has been added to the mount() function to allow an application to query by using the file system's name string to obtain the corresponding type numeric value. The type numeric value obtained from the first mount() call can then be used in a second mount() call to mount the third-party file system. To use the type query functionality, call mount() with type as -1, mnt-path as NULL, mnt_flag as 0, and data pointing to the address of a vfsops_fsname_args structure. This structure is defined in the <sys/mount.h> file and contains two fields; the first field must be set to the file system name string to search for and the second field is a return index. If the specified name string is found, the function returns the corresponding type numeric value into the structure's return index field.

The mount() function supports mount-point argument pathnames of up to MNAMELEN, which includes the null terminating character. MNAMELEN can be up to 90 characters long, including the null terminating character.

RETURN VALUES

The mount() function returns 0 (zero) if the file system was successfully mounted. Otherwise, -1 is returned.
The mount can fail if the mnt-path parameter does not exist or is of the wrong type. For AdvFS, the mount can fail if the domain or fileset (or both) specified in the data parameter does not exist or is inaccessible.

The mount can also fail under the following conditions:

- The file system is invalid or not installed.
- A component of the mnt-path parameter does not exist.
- The specified mnt-path is not a directory.
- A pathname contains a character with the high-order bit set, or the file system name in the query by name functionality is invalid.
- Another process currently holds a reference to the mnt-path parameter.
- The file system is not clean and M_FORCE is not set.
- The mnt-path parameter points outside the process's allocated address space.
- The process is attempting to mount on a multilevel child directory.

The following errors can occur for a UFS file system mount:
http://nixdoc.net/man-pages/Tru64/man2/mount.2.html
# 5. Extras

At this point, you already have a working full-featured serverless API, well done! 🎉 NestJS is a very comprehensive framework, and there could be a lot more use cases to cover for your specific needs. I encourage you to dive into the NestJS documentation to learn more about the techniques and tools you can use. If you have more time and feel like it, here are some extra points that I found interesting to cover, especially if you want to build enterprise apps. Note that each of these extra parts is entirely independent, so you can skip to the one you are the most interested in or do them in any order 😉.

# Add data validation

It is a best practice to check and validate any data received by an API. What do you think would happen if you called your story creation endpoint without providing any data? Let's try!

```sh
curl -X POST -d ""
```

Whoops! A new story is created, but with all of its entity properties left empty 😱. We might want to make sure a new story has its animal field set and either a description or an image provided. NestJS provides a built-in ValidationPipe that enforces validation rules for received data payloads, thanks to annotations provided by the class-validator package. To use it, you have to create a DTO (Data Transfer Object) class on which you declare the validation rules using annotations. First, you need to install the required packages:

```sh
npm install class-validator class-transformer
```

Then create the file src/stories/story.dto.ts:

```typescript
import { IsNotEmpty, IsOptional } from 'class-validator';

export class StoryDto {
  @IsNotEmpty()
  animal: string;

  @IsOptional()
  description: string;

  @IsOptional()
  createdAt: Date;
}
```

It looks a lot like our Story entity, but this time you define only the properties that are expected in the request payload. That's why there is no imageUrl property here: it will be set by the controller only if an image file is uploaded. The annotations @IsNotEmpty() and @IsOptional() describe which properties can be omitted and which ones must be set in the payload.
You can see the complete list of provided decorators here.

Now open src/stories/stories.controller.ts and change the type of the data parameter of your POST function to StoryDto:

```typescript
...
async createStory(
  @Body() data: StoryDto,
  @UploadedFile() file: UploadedFileMetadata,
): Promise<Story> {
...
```

Finally, open src/main.azure.ts and enable ValidationPipe at the application level, to ensure all endpoints get data validation:

```typescript
const app = await NestFactory.create(AppModule);
app.setGlobalPrefix('api');
app.useGlobalPipes(new ValidationPipe());
```

Start your server with npm run start:azure and run the previous curl command again. This time you should properly receive an HTTP error 400 (bad request).

Pro tip

By default, detailed error messages will be automatically generated in case of a validation error. You can also specify a custom error message in the decorator options, for example:

```typescript
@IsNotEmpty({ message: 'animal must not be empty' })
animal: string;
```

You can also use special tokens in your error message, or use a function for better granularity. See the class-validator documentation for more details.

What about our other constraint, which is to have either a description or an image file provided? Since the imageUrl information is not directly part of the DTO, we cannot use it for validation. As the imageUrl property is set in the controller, that's where you have to perform manual validation. You can use the manual validation methods of the class-validator package for that. This time, it's your turn to finish the job!

- Ensure that either description or imageUrl is not empty, using manual validation.
- Ensure that description length is at most 240 characters.
- Ensure that animal is either set to cat, dog or hamster, using annotations.
- Ensure that createdAt is a date if provided, using annotations.

You can read more on data validation techniques in the NestJS documentation.
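To make the manual-validation exercise concrete, here is a plain-TypeScript sketch of the two checks on description and imageUrl. This is hand-written logic, not the class-validator API, and the function and type names are illustrative:

```typescript
// Illustrative names; plain logic, not the class-validator API.
interface StoryContent {
  description?: string;
  imageUrl?: string;
}

// Returns the list of validation error messages (empty when valid).
function validateStoryContent(story: StoryContent): string[] {
  const errors: string[] = [];
  const hasDescription = (story.description ?? '').trim().length > 0;
  const hasImage = (story.imageUrl ?? '').trim().length > 0;

  // Either a description or an image must be provided.
  if (!hasDescription && !hasImage) {
    errors.push('either description or imageUrl must be provided');
  }
  // Descriptions are capped at 240 characters.
  if (hasDescription && (story.description as string).length > 240) {
    errors.push('description must be at most 240 characters');
  }
  return errors;
}
```

In the real controller, you would run such checks after deciding on imageUrl and throw a BadRequestException when the returned list is not empty.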
# Enable CORS

If you try to access your API inside a web application from your browser, you might encounter an error like this one:

This error occurs because browsers block HTTP requests from scripts to web domains different than the one of the current web page, to improve security. To bypass this restriction, your API must explicitly allow cross-origin requests from your website's domain, which you can configure with the Azure CLI:

```sh
az functionapp cors add \
  --name funpets-api \
  --resource-group funpets \
  --allowed-origins <your-website-url>
```

If you want to allow any website to use your API, you can use * instead of the website URL. In that case, be careful: Azure Functions will auto-scale to handle the workload if millions of users start using it, but so will your bill!

# Enable authorization

By default, all Azure Functions triggered by HTTP are publicly available. That is useful for a lot of scenarios, but at some point you might want to restrict who can execute your functions, in our case your API. Open the file main/function.json. In the function's bindings, notice that authLevel is set to anonymous. It can be set to one of these 3 values:

- anonymous: no API key is required (default).
- function: an API key specific to this function is required. If none is defined, the default one will be used.
- admin: a host API key is required. It will be shared among all functions from the same app.

Now change authLevel to function, and redeploy your function:

```sh
# Don't forget to change the name with the one you used previously
func azure functionapp publish <your-funpets-api> --nozip
```

Then try to invoke your API again:

```sh
curl https://<your-funpets-api>.azurewebsites.net/api/stories -i
```

You should get an HTTP status 401 error (Unauthorized). To call a protected function, you need to either provide the key as a query string parameter in the form code=<api_key>, or provide it with the HTTP header x-functions-key.
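As a small sketch of those two options, a hypothetical client-side helper could prepare the request URL or headers like this (the helper name is ours, not part of Azure Functions or NestJS):

```typescript
// Illustrative helper: prepares either the query-string or the
// header form of the function key for an HTTP client call.
function withFunctionKey(
  url: string,
  key: string,
  mode: 'header' | 'query' = 'header',
): { url: string; headers: Record<string, string> } {
  if (mode === 'query') {
    // code=<api_key> appended to the query string
    const separator = url.includes('?') ? '&' : '?';
    return { url: `${url}${separator}code=${encodeURIComponent(key)}`, headers: {} };
  }
  // x-functions-key header form
  return { url, headers: { 'x-functions-key': key } };
}
```

You would then pass the returned url and headers to whatever HTTP client you use.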
You can either log in to portal.azure.com and go to your function app, or follow these steps to retrieve your function API keys:

```sh
# Retrieve your resource ID
# Don't forget to change the name with the one you used previously
az functionapp show --name <your-funpets-api> \
  --resource-group funpets \
  --query id

# Use the resource ID from the previous command
az rest --method post --uri "<resource_id>/host/default/listKeys?api-version=2018-11-01"
```

You should see something like this:

```json
{
  "functionKeys": {
    "default": "functionApiKey=="
  },
  "masterKey": "masterApiKey==",
  "systemKeys": {}
}
```

Then try to invoke your API again, this time with the x-functions-key header set to your function API key:

```sh
curl https://<your-funpets-api>.azurewebsites.net/api/stories -i \
  -H "x-functions-key: <your_function_api_key>"
```

This time the call should succeed! Using authorization levels, you can restrict who can call your API; this can be especially useful for service-to-service access restrictions. However, if you need to manage finely who can access your API, with endpoint granularity, you need to implement authentication in your app.

# Write tests

Your API might currently look fine, but how can you ensure it has as few bugs as possible, and that you won't introduce regressions in the future? Writing automated tests is not the most fun part of development, but it's a fundamental requirement for developing robust software applications. It helps to catch bugs early, prevent regressions and ensure that production releases meet your quality and performance goals. The good news is NestJS has you covered to make your testing experience as smooth as possible. When you bootstrapped the project using the nest CLI, the Jest and SuperTest frameworks were set up for you. Each time you run the nest generate command, unit test files are also created for you with the extension .spec.ts. There are 5 NPM scripts dedicated to testing in your package.json file:

- npm test: runs unit tests once.
- npm run test:watch: runs unit tests in watch mode; it will automatically re-run tests as you make modifications to the files. It is perfectly suited for TDD.
- npm run test:cov: runs unit tests and generates a coverage report, so you can know which code paths are covered by your tests.
- npm run test:debug: runs unit tests with the Node.js debugger enabled, so you can add breakpoints in your code editor and debug your tests more easily.
- npm run test:e2e: runs your end-to-end tests.

Now run the npm test command. Oops, it seems that the src/stories/stories.controller.spec.ts test is failing 😱!

# Add module and providers mocks

If you look at the stack trace, you can see that the reason is that the @nestjs/typeorm and AzureStorageModule services cannot be resolved. That's expected: when running unit tests, you want to isolate the code you are testing as much as possible, and for that each test file provides its own module definition:

```typescript
beforeEach(async () => {
  const module: TestingModule = await Test.createTestingModule({
    controllers: [StoriesController],
  }).compile();

  controller = module.get<StoriesController>(StoriesController);
});
```

The module created with Test.createTestingModule does not import AzureTableStorageModule and AzureStorageModule, so that's why their providers cannot be resolved. Instead of importing them right away to fix the issue, we should write mocks for the providers we use.

# Mock @nestjs/azure-storage

Let's start by mocking what we use in the @nestjs/azure-storage module, using the jest.mock(<module>) helper function. Add this code just after the imports:

```typescript
jest.mock('@nestjs/azure-storage', () => ({
  // Use Jest automatic mock generation
  ...jest.genMockFromModule('@nestjs/azure-storage'),
  // Mock interceptor
  AzureStorageFileInterceptor: () => ({
    intercept: jest.fn((context, next) => next.handle())
  })
}));
```

For simple modules, using jest.mock(<module>) would be enough to generate mocks automatically according to the module interface.
But in our case, AzureStorageFileInterceptor needs to be mocked manually as it is a bit trickier: it must return an object with a method intercept(context, next) that needs to call next.handle() so it does not break the chain of interceptor calls. So we provide our own version of the @nestjs/azure-storage module mock, using the jest.genMockFromModule(<module>) helper to automatically generate mocks for everything except AzureStorageFileInterceptor. For AzureStorageFileInterceptor, we manually reproduce a minimal implementation. Using the jest.fn() method here creates a mock function. Thanks to that, we can later change its implementation in a specific test if needed.

Then add AzureStorageService to the testing module providers list:

```typescript
beforeEach(async () => {
  const module: TestingModule = await Test.createTestingModule({
    controllers: [StoriesController],
    providers: [AzureStorageService]
  }).compile();

  controller = module.get<StoriesController>(StoriesController);
});
```

And complete the missing import:

```typescript
import { AzureStorageService } from '@nestjs/azure-storage';
```

# Mock @nestjs/typeorm

We also need to mock the storiesRepository service injected in our controller using @InjectRepository(Story), but how do we do that? This time we do not need to mock the entire module, but only this specific service.
We can still use Jest automatic mock generation:

```typescript
// Add this code after the imports
const mockRepository = jest.genMockFromModule<any>('typeorm').MongoRepository;
```

Its injection token is generated dynamically, so we need to add a custom provider to our testing module to reproduce the same behavior:

```typescript
beforeEach(async () => {
  const module: TestingModule = await Test.createTestingModule({
    controllers: [StoriesController],
    providers: [
      AzureStorageService,
      { provide: getRepositoryToken(Story), useValue: mockRepository },
    ],
  }).compile();

  controller = module.get<StoriesController>(StoriesController);
});
```

Pro tip

We had to look at the implementation of the @InjectRepository() annotation to find out that it uses the method getRepositoryToken() internally. Unfortunately, that's something you sometimes have to do to be able to mock modules properly.

Don't forget to add the missing imports:

```typescript
import { getRepositoryToken } from '@nestjs/typeorm';
import { Story } from './story.entity';
```

Now run npm test again; this time the tests should succeed!

# Complete test suite

Hold on, now that we have solved the mock issue, it's time to write more tests 😃! Try to add:

- Unit tests for your controller in src/stories/stories.controller.ts.
- End-to-end tests for your endpoints in tests/app.e2e-spec.ts.

Also take a look at the report generated by npm run test:cov to see your test coverage. If you are not familiar with Jest, you might want to take a look at the documentation. For end-to-end tests, HTTP assertions are made using the SuperTest library. You can also find examples and more information in the NestJS documentation.

Solution: see the code for extras
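Beyond the NestJS specifics, the core unit-testing idea is framework-free: hand the code under test a fake dependency and assert on the recorded interactions. A minimal sketch with illustrative names (not the workshop's actual classes):

```typescript
// Illustrative names; a framework-free sketch of testing with a fake
// repository, independent of Jest and NestJS.
interface StoryRecord {
  animal: string;
  description?: string;
}

interface StoryRepositoryLike {
  save(story: StoryRecord): Promise<StoryRecord>;
}

class StoriesServiceSketch {
  constructor(private readonly repository: StoryRepositoryLike) {}

  createStory(story: StoryRecord): Promise<StoryRecord> {
    return this.repository.save(story);
  }
}

// Fake repository that records every saved story for later assertions.
function makeFakeRepository(): { repo: StoryRepositoryLike; saved: StoryRecord[] } {
  const saved: StoryRecord[] = [];
  return {
    saved,
    repo: {
      save: async (story: StoryRecord) => {
        saved.push(story);
        return story;
      },
    },
  };
}
```

A test then builds the fake, injects it, exercises the service, and checks the saved array; the custom provider with getRepositoryToken() above is doing exactly this kind of substitution inside NestJS's dependency injection.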
https://black-cliff-0123f8e1e.azurestaticapps.net/step5/
Joy of Clojure – In the Books!

Chouser and I have finished our book, The Joy of Clojure. Actually, it is still due for a heavy editing pass and a technical review1, but the content is there and the final phases are underway. Amazon has a listed date of November 30th, but we hope it'll be available before then. No promises I suppose. We've updated the official Table of Contents also. Some of the more obvious changes are as follows:

- A foreword2
- An introductory chapter
- "Putting Things Together" sections
- Much more information about namespaces
- A tale about the influence of Ho-Hos® on the design of Scheme
- Records
- A section about locking
- A section about debugging
- An annotated bibliography
- An index (man that was painful)

The sections that were previously listed were rolled into the dialogue of other sections (where appropriate). There is still work to be done:

- A small section about "Getting Clojure"3
- Auxiliary content
- The obvious changes from the technical and editorial reviews
- Layouts
- Source code made available

Thanks to all for the feedback (please keep it coming — there's still time).

:f

1. I would mention the reviewer except I'm not sure if he wants to be revealed. I'll let him comment here if he so desires. In any case, we are very excited to work with him. ↩
2. We hit the jackpot on the foreword. It's a secret at the moment, but we are very excited and honored. ↩
3. We are leaning toward an "official" Joy of Clojure distro using David Edgar Liebke's cljr. ↩

One Comment

Mike K.: I thought the whole book was about "Getting Clojure" :-) (Aug 5th, 2010)
http://blog.fogus.me/2010/08/05/joy-of-clojure-in-the-books/
The documentation states floating-point operations are fully reentrant. Does that apply to interrupt routines as well?

Floating-point operations the compiler generates code for (+ - * /) are fully reentrant, but only a few functions in math.h are reentrant. Those that are not reentrant must be protected from interrupts. One way this can be accomplished is to put a wrapper that disables and re-enables interrupts around the floating-point math calls. For example:

```c
#include <math.h>

#pragma disable  /* Disable interrupts for this function */

float ISR_sin (float x)
{
    return (sin(x));
}
```

Last Reviewed: Friday, July 15, 2005
http://www.keil.com/support/docs/34.htm
* Right if you "remove" then the DTD puts them back ;) Pesky DTDs.... ;o)

Thanks for that Thomas, it's much clearer now. I didn't appreciate that the spec defines that those attributes should be included. I had just got as far as experimenting and finding that an SVG file without those attributes included 'worked', but is obviously not compliant SVG. Thanks again guys for the help, cheers, Dylan

________________________________

From: thomas.deweese@kodak.com [mailto:thomas.deweese@kodak.com]
Sent: 11 April 2008 12:11
To: batik-users@xmlgraphics.apache.org
Cc: batik-users@xmlgraphics.apache.org
Subject: Re: Attrbutes Generated For USE tag

Hi Dylan,

"Dylan Browne" <dbrowne@mango-solutions.com> wrote on 04/07/2008 05:31:00 AM:

> I have a question about the attributes that are added into my "use"
> tag when I generate it. Basically I am attempting to 'streamline' my
> SVG to reduce file size, so any attributes that are not required I'd
> like to strip out.

So the attributes you are concerned about are added because the DTD says they should be added. You can detect them on output by checking 'getSpecified' on the Attribute Node (it will return false for these).

> I was wondering if there was a way I could remove the inclusion of
> the xlink namespace etc, as these are defined in the root of my SVG document?

Not really, the SVG specification says they must be present in the DOM.

> I've tried doing a removeAttributeNS but either this is not possible
> or I am attempting to reference them by the wrong namespace. I've
> tried variations on something like this:

Right, if you "remove" them then the DTD puts them back ;) We should probably filter these attributes on output in our various writing utilities, however I've been reluctant to do that since they shouldn't hurt anything downstream (although they do increase file size).
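The getSpecified() tip from the thread can be sketched as a small helper: when writing out the document, keep only author-specified attributes instead of trying to remove the DTD-defaulted ones from a validated DOM (where the parser restores them). The class name here is ours; the DOM calls are standard org.w3c.dom:

```java
import org.w3c.dom.Attr;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;

// Illustrative helper class, not part of Batik.
public class SpecifiedFilter {

    // true if the author set this attribute; false if the DTD defaulted it
    public static boolean shouldSerialize(Attr attr) {
        return attr.getSpecified();
    }

    // Drop DTD-defaulted attributes from an element, e.g. on a detached
    // copy just before writing; on a live validated DOM they may reappear.
    public static void stripDtdDefaults(Element el) {
        NamedNodeMap attrs = el.getAttributes();
        for (int i = attrs.getLength() - 1; i >= 0; i--) {
            Attr a = (Attr) attrs.item(i);
            if (!a.getSpecified()) {
                el.removeAttributeNode(a);
            }
        }
    }
}
```

A custom serializer would call shouldSerialize() per attribute, which sidesteps the "remove and the DTD puts it back" cycle entirely.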
http://mail-archives.apache.org/mod_mbox/xmlgraphics-batik-users/200804.mbox/%3C3CBFCFB1FEFFA841BA83ADF2F2A9C6FA191A1D@mango-data1.Mango.local%3E
```cpp
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>  // for malloc
using namespace std;

vector<vector<float>> func(int M) {
    // res = matrix size MxM
    vector<vector<float>> res;
    float* buffer = static_cast<float*>(malloc(M * M * sizeof(float)));
    res.reserve(M);
    for (int i = 0; i < M; i++) {
        res.emplace_back(buffer + i * M, buffer + (i + 1) * M);
        // res[i] = compute_the_matrix();
    }
    return res;
}
```

I'm required to make a function that uses vector<vector<float>> to represent a matrix. However, it's inefficient because the rows might be at different locations in memory, while a good matrix should have all its elements in a contiguous block. To achieve this, I malloc a contiguous block of memory, then initialize the vectors from this block. Is this method safe, and will the vectors free memory correctly when they are destructed? Another situation I can think of is if there's an exception in res[i] = compute_the_matrix();, then we have a memory leak.

Edit: I think this code performs the copy constructor instead of the move constructor, so it's not what I'm looking for. So, how can I make a vector that is contiguous in memory?

Solution:

The code doesn't do what you think it does. The line

```cpp
res.emplace_back(buffer + i * M, buffer + (i + 1) * M);
```

creates a new std::vector<float> to add to res. This std::vector<float> will allocate its own memory to hold a copy of the data in the range [buffer + i * M, buffer + (i + 1) * M), which also causes undefined behavior because you never initialized the data in this range. So, in the end, you are not using the memory you obtained with malloc at all for the vectors. That memory is simply leaked at the end of the function. You can't specify what memory a vector<vector<float>> should use at all. There is simply no way to modify its allocation strategy.
What you can do is either use a vector<float> instead, holding the matrix entries linearly indexed in a single vector, or use a vector<vector<float, Alloc1>, Alloc2> where Alloc1 and Alloc2 are some custom allocator types for which you somehow specify the allocation behavior so that the storage layout is closer to what you want (although I doubt that the latter can be done nicely here or is worth the effort over just using the linear representation).
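A minimal sketch of the linear representation suggested above: one contiguous std::vector<float> of size M*M, with element (row, col) stored at index row * M + col. The class name is illustrative:

```cpp
#include <cstddef>
#include <vector>

// Illustrative wrapper: a square MxM matrix stored in one contiguous
// buffer, with (row, col) mapped to row * M + col.
class Matrix {
public:
    explicit Matrix(int m)
        : m_(m), data_(static_cast<std::size_t>(m) * m, 0.0f) {}

    float& at(int row, int col) {
        return data_[static_cast<std::size_t>(row) * m_ + col];
    }
    const float& at(int row, int col) const {
        return data_[static_cast<std::size_t>(row) * m_ + col];
    }

    // All M*M elements live in one contiguous block, and the vector
    // frees it automatically (no malloc, no leak on exceptions).
    float* data() { return data_.data(); }
    int dim() const { return m_; }

private:
    int m_;
    std::vector<float> data_;
};
```

Because the storage is a single std::vector, copying, moving, and destruction all behave correctly with no manual memory management.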
https://devsolus.com/2022/06/15/c-using-vectorvector-to-represent-matrix-with-continuous-data-buffer/
/ Published in: ActionScript 3

You can replace the library item with whatever you want, or draw a shape in ActionScript to use. TweenMax is used in this snippet, get it here: and thank Jack Doyle for being great by donating.

```actionscript
// Right click the movieclip you wish to use in the library and
// change the linkage name to "Dot" or create your own custom class.

// TweenMax is used in this snippet, get it here:
// and thank Jack Doyle for being great by donating.

import com.greensock.TweenMax;
import com.greensock.easing.*;

var dotLength : int = 870; // the length that you'd like the dot to be

// place this anywhere, this creates your dot; it takes three
// parameters: length + x,y of line
createDotLine( dotLength, 50, 100 );

function createDotLine( dL:int, sX:int, sY:int ) : void
{
    var dotSprite : Sprite = new Sprite();
    dotSprite.y = sY;
    dotSprite.x = sX;
    addChild(dotSprite);

    while ( dotSprite.width <= dL )
    {
        var w : int = dotSprite.width;
        var d : Sprite = new Dot();
        d.x = w;
        dotSprite.addChild( d );
    }

    for ( var i : int = 0; i < dotSprite.numChildren; i++ )
    {
        var dS = dotSprite.getChildAt( i );
        var plusMinusY : int;
        if ( i % 2 == 0 )
        {
            plusMinusY = dS.y - 50;
        } else {
            plusMinusY = dS.y + 50;
        }

        TweenMax.from( dS, .7, { alpha:0, y:plusMinusY, x:dS.x - 40, delay:i * .01, ease:Expo.easeInOut } );
    }
}
```

Note: The import statements shown are for an old version of TweenMax. The package structure has changed in the latest version. It should now be:

```actionscript
import com.greensock.*;
import com.greensock.easing.*;
```

When I run your code I get the following compile error:

Scene 1, Layer 'Layer 1', Frame 1, Line 18 1180: Call to a possibly undefined method Dot.

Shouldn't there be a Dot class as well?

The Dot would be your library item with the linkage Dot, or you could make your own custom class... I'll update the snippet with a comment to reflect that.
that would be nice. waiting for that.
http://snipplr.com/view/29610/create-dotted-line-made-of-any-shapemc/
I can send MMS messages with Python. But how can I receive an MMS?

Similar to SMS? For example ():

```python
import inbox, appuifw, e32

def message_received(msg_id):
    box = inbox.Inbox()
    appuifw.note(u"New message: %s" % box.content(msg_id))
    app_lock.signal()

box = inbox.Inbox()
box.bind(message_received)

print "Waiting for new SMS messages.."
app_lock = e32.Ao_lock()
app_lock.wait()
print "Message handled!"
```

Last edited by DrivingMobileInnovation; 2008-04-24 at 19:00.

Gargi Das - Forum Nokia Python Wiki - Learn Python at

For example, when I receive an MMS, I just want to automatically save the included image to a (jpeg) file and the message text to a separate (txt) file.

Hi DrivingMobileInnovation, again, I am sad to say that in PyS60 there is no support for receiving an MMS in the manner you explained. What you can do is design some other algorithm or code to save the incoming MMS. Feel free to give feedback.

Gargi Das - Forum Nokia Python Wiki - Learn Python at
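The text-saving part of the question can at least be sketched independently of the Symbian-specific inbox binding. This is illustrative code in modern Python syntax (PyS60's Python 2.2 would use open/close and explicit text.encode('utf-8')); the function name is ours and the inbox-like object is passed in:

```python
# Illustrative sketch: save a message's text content to a plain-text
# file. `box` is any object exposing a content(msg_id) method, like the
# inbox.Inbox object shown above (which is Symbian-only).
def save_message_text(box, msg_id, path):
    """Write the text content of message `msg_id` to `path`."""
    text = box.content(msg_id)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    return path
```

On a real device, this would be called from the message_received callback with the inbox object, so each incoming message's text lands in its own file.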
http://developer.nokia.com/Community/Discussion/showthread.php/132268-MMS-receiver
This is the documentation for older versions of Odoo (formerly OpenERP). See the new Odoo user documentation. See the new Odoo technical documentation.

Module development

Introduction

Module Structure

The Modules

- Introduction
- Files & Directories
- __openerp__.py
- __init__.py
- XML Files
- Actions
- Menu Entries
- Reports
- Wizards
- Profiles

Modules - Files and Directories

All the modules are located in the server/addons directory. The following steps are necessary to create a new module:

- create a subdirectory in the server/addons directory
- create a module description file: __openerp__.py
- create the Python file containing the objects
- create .xml files that load the data (views, menu entries, demo data, ...)
- optionally create reports, wizards or workflows.

The Modules - Files And Directories - XML Files

XML files located in the module directory are used to modify the structure of the database. They are used for many purposes, among which we can cite:

- initialization and demonstration data declaration,
- views declaration,
- reports declaration,
- wizards declaration,
- workflows declaration.

The general structure of OpenERP XML files is detailed further in the XML Data Serialization section. Look here if you are interested in learning more about initialization and demonstration data declaration XML files. The following sections are only related to XML specific to actions, menu entries, reports, wizards and workflows declaration.

Python Module Descriptor File __openerp__.py

version: The version of the module, on 2 digits (1.2 or 2.0).

description: The module description (text) including documentation on how to use your modules.

init_xml: List of .xml files to load when the server is launched with the "--init=module" argument. Filepaths must be relative to the directory where the module is. The OpenERP XML File Format is detailed in this section.

data: List of .xml files to load when the server is launched with the "--update=module" argument.
Filepaths must be relative to the directory where the module is. The OpenERP XML File Format is detailed in this section.

demo: List of .xml files to provide demo data. Filepaths must be relative to the directory where the module is. The OpenERP XML File Format is detailed in this section.

installable: True or False. Determines if the module is installable or not.

images: List of .png files to provide screenshots, used on.

active: True or False (default: False). Determines the modules that are installed on database creation.

test: List of .yml files to provide YAML tests.

Example

Objects

All OpenERP resources are objects: menus, actions, reports, invoices, partners, ... OpenERP is based on an object relational mapping of a database to control the information. Object names are hierarchical, as in the following examples:

- account.transfer : a money transfer
- account.invoice : an invoice
- account.invoice.line : an invoice line

Generally, the first word is the name of the module: account, stock, sale. Other advantages of an ORM:

- simpler relations: invoice.partner.address[0].city
- objects have properties and methods: invoice.pay(3400 EUR),
- inheritance, high level constraints, ...

It is easier to manipulate one object (for example, a partner) than several tables (partner address, categories, events, ...)

PostgreSQL

The ORM of OpenERP is built on top of PostgreSQL. It is thus possible to query the objects used by OpenERP using the object interface or by directly using SQL statements. But it is dangerous to write or read directly in the PostgreSQL database, as you will bypass important steps like constraints checking or workflow modification.

Note: The Physical Database Model of OpenERP

Pre-Installed Data

Data can be inserted or updated into the PostgreSQL tables corresponding to the OpenERP objects using XML files.
The general structure of an OpenERP XML file is as follows:

```xml
<?xml version="1.0"?>
<openerp>
  <data>
    <record model="model.name_1" id="id_name_1">
      <field name="field1"> "field1 content" </field>
      <field name="field2"> "field2 content" </field>
      (...)
    </record>
    <record model="model.name_2" id="id_name_2">
      (...)
    </record>
    (...)
  </data>
</openerp>
```

Field contents are strings that must be encoded as UTF-8 in XML files. Let's review an example taken from the OpenERP source (base_demo.xml in the base module):

```xml
<record model="res.company" id="main_company">
  <field name="name">Tiny sprl</field>
  <field name="partner_id" ref="main_partner"/>
  <field name="currency_id" ref="EUR"/>
</record>

<record model="res.users" id="user_admin">
  <field name="login">admin</field>
  <field name="password">admin</field>
  <field name="name">Administrator</field>
  <field name="signature">Administrator</field>
  <field name="action_id" ref="action_menu_admin"/>
  <field name="menu_id" ref="action_menu_admin"/>
  <field name="address_id" ref="main_address"/>
  <field name="groups_id" eval="[(6,0,[group_admin])]"/>
  <field name="company_id" ref="main_company"/>
</record>
```

This last record defines the admin user:

- The fields login, password, etc. are straightforward.
- The ref attribute allows filling relations between the records:

```xml
<field name="company_id" ref="main_company"/>
```

The field company_id is a many-to-one relation from the user object to the company object, and main_company is the id of the record to associate.

- The eval attribute allows putting some Python code in the XML: here the groups_id field is a many2many. For such a field, "[(6,0,[group_admin])]" means: remove all the groups associated with the current user and use the list [group_admin] as the new associated groups (and group_admin is the id of another record).

- The search attribute allows finding the record to associate when you do not know its xml id. You can thus specify a search criterion to find the wanted record.
The criteria is a list of tuples of the same form as for the predefined search method. If there are several results, an arbitrary one will be chosen (the first one):

```xml
<field name="partner_id" search="[]" model="res.partner"/>
```

This is a classical example of the use of search in demo data: here we do not really care about which partner we want to use for the test, so we give an empty list. Notice the model attribute is currently mandatory.

Record Tag

Description

The addition of new data is made with the record tag. This one takes a mandatory attribute: model. Model is the object name where the insertion has to be done. The record tag can also take an optional attribute: id. If this attribute is given, a variable of this name can be used later on, in the same file, to make reference to the newly created resource ID. A record tag may contain field tags. They indicate the record's field values. If a field is not specified, the default value will be used.

Example

```xml
<record model="ir.actions.report.xml" id="l0">
  <field name="model">account.invoice</field>
  <field name="name">Invoices List</field>
  <field name="report_name">account.invoice.list</field>
  <field name="report_xsl">account/report/invoice.xsl</field>
  <field name="report_xml">account/report/invoice.xml</field>
</record>
```

Field tag

The attributes for the field tag are the following:

- name : mandatory; the field name
- eval : optional; a Python expression indicating the value to add
- ref : reference to an id defined in this file
- model : model to be looked up in the search
- search : a query

Function tag

A function tag can contain other function tags.

- model : mandatory; the model to be used
- name : mandatory; the function's name
- eval : should evaluate to the list of parameters of the method to be called, excluding cr and uid

Example

```xml
<function model="ir.ui.menu" name="search" eval="[[('name','=','Operations')]]"/>
```

Getitem tag

Takes a subset of the evaluation of the last child node of the tag.
- type : mandatory; int or list
- index : mandatory; int or string (a key of a dictionary)

Example

Evaluates to the first element of the list of ids returned by the function node:

```xml
<getitem index="0" type="list">
  <function model="ir.ui.menu" name="search" eval="[[('name','=','Operations')]]"/>
</getitem>
```

i18n

Improving Translations

- Translating in Launchpad: Translations are managed by the Launchpad Web interface. Here, you'll find the list of translatable projects. Please read the FAQ before asking questions.
- Translating your own module

Changed in version 5.0.

Contrary to the 4.2.x version, the translations are now done by module. So, instead of a unique i18n folder for the whole application, each module has its own i18n folder. In addition, OpenERP can now deal with .po [1] files as import/export format. The translation files of the installed languages are automatically loaded when installing or updating a module. OpenERP can also generate a .tgz archive containing well organised .po files for each selected module.

Processes

Views

Technical Specifications - Architecture - Views

Views are a way to represent the objects on the client side. They indicate to the client how to lay out the data coming from the objects on the screen. There are two types of views:

- form views
- tree views

Lists are simply a particular case of tree views. A same object may have several views: the first defined view of a kind (tree, form, ...) will be used as the default view for this kind. That way you can have a default tree view (that will act as the view of a one2many) and a specialized view with more or less information that will appear when one double-clicks on a menu item. For example, the products have several views according to the product variants. Views are described in XML. If no view has been defined for an object, the object is able to generate a view to represent itself. This can limit the developer's work but results in less ergonomic views.
Usage example¶

When you open an invoice, here is the chain of operations followed by the client:

- An action asks to open the invoice (it gives the object's data (account.invoice), the view, and the domain (e.g. only unpaid invoices)).
- The client asks the server (with XML-RPC) what views are defined for the invoice object and what data it must show.
- The client displays the form according to the view.

To develop new objects¶

The design of new objects is restricted to the minimum: create the objects and optionally create the views to represent them. The PostgreSQL tables do not have to be written by hand because the objects are able to automatically create them (or adapt them in case they already exist).

Reports¶

OpenERP uses a flexible and powerful reporting system. Reports are generated either in PDF or in HTML. Reports are designed on the principle of separation between the data layer and the presentation layer. Reports are described in more detail in the Reporting chapter.

Wizards¶

Workflow¶

The objects and the views allow you to very simply define new forms, lists/trees and interactions between them. But that is not enough: you must define the dynamics of these objects. A few examples:

- a confirmed sale order must generate an invoice, according to certain conditions
- a paid invoice must, only under certain conditions, start the shipping order

The workflows describe these interactions with graphs. One or several workflows may be associated with the objects. Workflows are not mandatory; some objects don't have workflows.

Below is an example workflow used for sale orders. It must generate invoices and shipments according to certain conditions. In this graph, the nodes represent the actions to be done:

- create an invoice,
- cancel the sale order,
- generate the shipping order, ...

The arrows are the conditions:

- waiting for the order validation,
- invoice paid,
- click on the cancel button, ...
The squared nodes represent other workflows:

- the invoice
- the shipping

OpenERP Module Descriptor File : __openerp__.py¶

Normal Module¶

- description : The module description (text).
- init_xml : List of .xml files to load when the server is launched with the "--init=module" argument. Filepaths must be relative to the directory where the module is. The OpenERP XML File Format is detailed in this section.
- update_xml : List of .xml files to load when the server is launched with the "--update=module" argument. Filepaths must be relative to the directory where the module is. The OpenERP XML File Format is detailed in this section.
- installable : True or False. Determines whether the module is installable or not.
- active : True or False (default: False). Determines which modules are installed on database creation.

Example¶

Profile Module¶

The purpose of a profile is to initialize OpenERP with a set of modules directly after the database has been created. A profile is a special kind of module that contains no code, only dependencies on other modules.

In order to create a profile, you only have to create a new directory in server/addons (you should call this folder profile_modulename), in which you put an empty __init__.py file (as every directory Python imports must contain an __init__.py file), and a __openerp__.py whose structure is as follows:

{
"name": "Name of the Profile",
"version": "Version String",
"author": "Author Name",
"category": "Profile",
"depends": [list of the modules to install with the profile],
"demo_xml": [],
"update_xml": [],
"active": False,
"installable": True,
}

Example¶

Here's the code of the file server/bin/addons/profile_manufacturing/__openerp__.py, which corresponds to the manufacturing industry profile in OpenERP.
{
"name": "Manufacturing industry profile",
"version": "1.1",
"author": "Open",
"category": "Profile",
"depends": ["mrp", "crm", "sale", "delivery"],
"demo_xml": [],
"update_xml": [],
"active": False,
"installable": True,
}

Module creation¶

Getting the skeleton directory¶

You can copy __openerp__.py and __init__.py from any other module to create a new module in a new directory. As an example, on Ubuntu:

$ cd ~/workspace/stable/stable_addons_5.0/
$ mkdir travel
$ sudo cp ~/workspace/stable/stable_addons_5.0/hr/__openerp__.py ~/workspace/stable/stable_addons_5.0/travel
$ sudo cp ~/workspace/stable/stable_addons_5.0/hr/__init__.py ~/workspace/stable/stable_addons_5.0/travel

You will need to give yourself permissions over that new directory if you want to be able to modify it:

$ sudo chown -R `whoami` travel

You got yourself the directory for a new module there, and a skeleton structure, but you still need to change a few things inside the module's definition...

Changing the default definition¶

To change the default settings of the "travel" module, get yourself into the "travel" directory and edit __openerp__.py (with gedit, for example, a simple text editor; feel free to use another one):

$ cd travel
$ gedit __openerp__.py

The file looks like this:

{
"name" : "Human Resources",
"version" : "1.1",
"author" : "Tiny",
"category" : "Generic Modules/Human Resources",
"website" : "",
"description": """
Module for human resource management. You can manage:
* Employees and hierarchies
* Work hours sheets
* Attendances and sign in/out system

Different reports are also provided, mainly for attendance statistics.
""",
'author': 'Tiny',
'website': '',
'depends': ['base', 'process'],
'init_xml': [],
'update_xml': [
'security/hr_security.xml',
'security/ir.model.access.csv',
'hr_view.xml',
'hr_department_view.xml',
'process/hr_process.xml'
],
'demo_xml': ['hr_demo.xml', 'hr_department_demo.xml'],
'installable': True,
'active': False,
'certificate': '0086710558965',
}

You will want to change whichever settings you feel are right and get something like this:

{
"name" : "Travel agency module",
"version" : "1.1",
"author" : "Tiny",
"category" : "Generic Modules/Others",
"website" : "",
"description": "A module to manage hotel bookings and a few other useful features.",
"depends" : ["base"],
"init_xml" : [],
"update_xml" : ["travel_view.xml"],
"active": True,
"installable": True
}

Note that the "active" field becomes True.

Changing the main module file¶

Now you need to update the travel.py script to suit the needs of your module. We suggest you follow the Flash tutorial for this or download the travel agency module from the 20 minutes tutorial page.

The documentation below overlaps the next two steps in this wiki tutorial, so just consider it as a help and head towards the next two pages first...

The travel.py file should initially look like this:

from osv import osv, fields

class travel_hostel(osv.osv):
    _name = 'travel.hostel'
    _inherit = 'res.partner'
    _columns = {
        'rooms_id': fields.one2many('travel.room', 'hostel_id', 'Rooms'),
        'quality': fields.char('Quality', size=16),
    }
    _defaults = {
    }
travel_hostel()

Ideally, you would copy that bunch of code several times to create all the entities you need (travel_airport, travel_room, travel_flight). This is what will hold the database structure of your objects, but you don't really need to worry too much about the database side. Just filling in this file will create the system structure for you when you install the module.

Customizing the view¶

You can now move on to editing the views. To do this, edit the custom_view.xml file.
It should first look like this:

<openerp>
<data>
<record model="res.groups" id="group_compta_user">
<field name="name">grcompta</field>
</record>
<record model="res.groups" id="group_compta_admin">
<field name="name">grcomptaadmin</field>
</record>
<menuitem name="Administration" groups="admin,grcomptaadmin" icon="terp-stock" id="menu_admin_compta"/>
</data>
</openerp>

This is, as you can see, an example taken from an accounting system (French people call accounting "comptabilité", which explains the compta bit).

Defining a view is defining the interfaces the user will get when accessing your module. Just defining a bunch of fields here should already get you started on a complete interface. However, due to the complexity of doing it right, we recommend, once again, that you download the travel agency module example from this link.

Next you should be able to create different views using other files to separate them from your basic/admin view.

Action creation¶

Linking events to actions¶

The available types of events are:

- client_print_multi (print from a list or form)
- client_action_multi (action from a list or form)
- tree_but_open (double click on the item of a tree, like the menu)
- tree_but_action (action on the items of a tree)

To map an event to an action:

<record model="ir.values" id="ir_open_journal_period">
<field name="key2">tree_but_open</field>
<field name="model">account.journal.period</field>
<field name="name">Open Journal</field>
<field name="value" eval="'ir.actions.wizard,%d'%action_move_journal_line_form_select"/>
<field name="object" eval="True"/>
</record>

If you double-click on a journal/period (object: account.journal.period), this will open the selected wizard (id="action_move_journal_line_form_select"). You can use a res_id field to allow this action only if the user clicks on a specific object.
<record model="ir.values" id="ir_open_journal_period">
<field name="key2">tree_but_open</field>
<field name="model">account.journal.period</field>
<field name="name">Open Journal</field>
<field name="value" eval="'ir.actions.wizard,%d'%action_move_journal_line_form_select"/>
<field name="res_id" eval="3"/>
<field name="object" eval="True"/>
</record>

The action will be triggered if the user clicks on the account.journal.period n°3.

When you declare a wizard, report or menu, the ir.values creation is automatically made with these tags:

- <wizard... />
- <menuitem... />

So you usually do not need to add the mapping yourself.
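Since the __openerp__.py descriptors shown earlier in this document are plain Python dictionaries, they can be sanity-checked with a few lines of Python. The helper below is purely hypothetical (it is not part of OpenERP) and only verifies the keys discussed in the descriptor section above:

```python
# Hypothetical helper (not part of OpenERP): check that a module
# descriptor dictionary carries the keys discussed above.

REQUIRED_KEYS = {"name", "version", "depends", "installable"}

def check_descriptor(descriptor):
    """Return a list of problems found in a __openerp__.py style dict."""
    problems = []
    for key in REQUIRED_KEYS - set(descriptor):
        problems.append("missing key: %s" % key)
    if not isinstance(descriptor.get("depends", []), list):
        problems.append("depends must be a list of module names")
    return problems

# The manufacturing profile descriptor from this document:
profile = {
    "name": "Manufacturing industry profile",
    "version": "1.1",
    "author": "Open",
    "category": "Profile",
    "depends": ["mrp", "crm", "sale", "delivery"],
    "demo_xml": [],
    "update_xml": [],
    "active": False,
    "installable": True,
}

print(check_descriptor(profile))  # []
```

A descriptor with a string instead of a list for depends, for example, would be flagged before the server ever tries to load the module.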
explode_view 1.0.4

An open-source Flutter package that enables developers to quickly enhance the UI of their applications and easily get started with Flutter animations. The UI has been inspired by Redmi's uninstall-application animation. This project contains the Flutter animation features required to complete an amazing Flutter application. Explore how ExplodeView is made through this blog.

Index #

Installing #

1. Depend on it #

Add this to your package's pubspec.yaml file:

explode_view: ^1.0.4

2. Install it #

You can install packages from the command line:

with pub:

$ pub get

with Flutter:

$ flutter packages get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

3. Import it #

Now in your Dart code, you can use:

import 'package:explode_view/explode_view.dart';

How To Use #

Let's get this animation #

For the explosion animation in the app, the user simply has to add ExplodeView as a child of any widget, such as a Stack.

Example code:

ExplodeView(
  imagePath: 'assets/images/abc.png', // path where the image is stored
  imagePosFromLeft: 120.0, // set x-coordinate for image
  imagePosFromTop: 300.0, // set y-coordinate for image
);

For more info, please refer to the main.dart in example.

Algorithm #

The algorithm used to build this project is as follows: on clicking the image, the image shakes for some time and then disappears, generating random particles in the image area; the particles scatter outward with a fading transition and finally disappear from the screen. The colors of the particles are decided by the colors of the pixels of the image, which provides the effect of breaking the image into pieces.

For more info, please refer to explode_view.dart.
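The particle-coloring step described in the algorithm above can be sketched in a few lines. The snippet below is illustrative Python only (the actual package does this in Dart against real image bytes): each particle simply takes the color of the pixel at its spawn position.

```python
# Illustrative sketch of the particle-color sampling step described above:
# given the image's pixel grid, each particle takes the color of the pixel
# at its randomly chosen spawn position. (The real package does this in Dart.)
import random

def sample_particle_colors(pixels, n_particles, seed=0):
    """pixels: 2D list of (r, g, b) tuples; returns n_particles colors."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    height, width = len(pixels), len(pixels[0])
    colors = []
    for _ in range(n_particles):
        x = rng.randrange(width)
        y = rng.randrange(height)
        colors.append(pixels[y][x])
    return colors

# A tiny 2x2 "image": red, green / blue, white.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
print(sample_particle_colors(image, 4))
```

Because every particle color comes from an actual pixel, the scattered particles visually read as fragments of the original image.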
Documentation #

| Dart attribute   | Datatype | Description                                    | Default Value |
| :--------------- | :------- | :--------------------------------------------- | :-----------: |
| imagePath        | String   | The string which gives the path to the image.  |   @required   |
| imagePosFromLeft | double   | The distance from the left edge of the screen. |   @required   |
| imagePosFromTop  | double   | The distance from the top edge of the screen.  |   @required   |

Bugs/Requests #

If you encounter any problems feel free to open an issue. If you feel the library is missing a feature, please raise a ticket on GitHub and I'll look into it. Pull requests are also welcome.

License #

ExplodeView is licensed under the MIT License. View license.

[1.0.4] - 2019-12-09 - Formatted the code
[1.0.3] - Fixed health issues in pub.dev
[1.0.2] - Updated the code
[1.0.1] - Updated the README file
[1.0.0] - Stable release, solving maintenance issues in pub.dev

example #

An example application for the explode_view library:

explode_view: ^1.0.4

import 'package:explode_view/explode_view.dart';
GameFromScratch.com

In this tutorial we are now going to implement a web app end to end using YUI and hosted in Node. It isn't actually going to do anything much, but it will illustrate everything you need to create a client and server in JavaScript using YUI App and Node. Over time we will add more bells and whistles, but there is actually everything you need here to create a fully functioning web application.

After looking into it a little closer, I found that YUI is incredibly well documented, but when it comes to actually implementing the YUI App Framework in a real-world environment, there are gaps of information missing. Generally all you are going to find is samples with a gigantic HTML file where all of the script is located in a single file, which obviously isn't good form nor very maintainable. Unfortunately, when you actually set about splitting your YUI App into separate files and templates, you are completely on your own! In fact, this tutorial may be the first one on the subject on the internet; I certainly couldn't find one.

That said, there are still some great resources I referred to over and over while figuring this stuff out. First and foremost was the App Framework link I posted earlier. It effectively illustrates using models and views, just not how to organize them across files in a complex project. The GitHub contributor app was another great sample, but it, again, was effectively one giant HTML file. Finally there is the photosnear.me application's source code up on GitHub. It is a full YUI App/NodeJS sample and is certainly worth reading, but as an example for people learning it is frankly a bit too clever. Plus it renders templates out using Node, something I wanted to avoid.

Alright, let's do a quick overview of how program flow works. Don't worry, it's actually a lot less complicated than it looks. It is initially over-engineered for what it ends up accomplishing.
However, in the end you basically have the bones of everything you need to create a larger, more complex application and a code structure that will scale with your complexity.

This (in the image to the left) is the file hierarchy that we are about to create. In our root directory are two files: server.js, which is our primary NodeJS application, while index.html is the heart of our web application and where the YUI object is created. Additionally, in a folder named scripts we create a pair of directories, models and views. Models are essentially your application's data, while views are used to display your data. Finally, within the views folder we have another folder, templates, which is where our Handlebars templates reside.

If you have done any PHP or ASP coding, templates are probably familiar to you already. Essentially they are used to dynamically insert data into HTML; it will make more sense shortly.

We are going to implement one model, person.js, which stores simple information about a person, and one view, person.View.js, which is responsible for displaying the person's information in the browser, and does so using person.Template.

Now let's actually take a look at how it all works, in the order it is executed. First we need our Node based server, which is going to serve the HTML to the browser (and do much much more in the future). Create a new file named server.js:

var express = require('express'),
    server = express.createServer();

server.use('/scripts', express.static(__dirname + '/scripts'));

server.get('/', function (req, res) {
    res.sendfile('index.html');
});

server.get('*', function (req, res) {
    res.redirect('/#' + req.url, 302);
});

server.listen(process.env.PORT || 3000);

Essentially we are creating an express powered server. The server.use() call enables our server to serve static (non-dynamic) files that are located in the /scripts folder and below.
This is where we serve all of our javascript and template files from; if we didn't add this call, we would either need to manually map each file, or we would get a 404 when attempting to access one of these files on the server.

Next we set our server up to handle two particular requests. If you request the root of the website ( / ), we return our index.html file; otherwise we redirect all other requests back to the root with the URL appended after a hash tag. For more details, read this, although the truth is we won't really make much use of it. Finally we start our server listening on port 3000 (or process.env.PORT if hosted).

Amazingly enough, these 9 lines of code provide a fully functioning, if somewhat basic, web server. At this point you can open a browser and browse to your new server, once you start it, that is. Starting the server is as simple as running node server.js from your command line. This assumes you have installed NodeJS and added its directory to your PATH environment variable, something I highly recommend you do.

Now that we have our working server, let's go about creating our root webpage, index.html:

<!DOCTYPE html>
<html>
<head>
    <title>GameFromScratch example YUI Framework/NodeJS application</title>
</head>
<body>
<script src=""></script>
<script src="/scripts/models/person.js"></script>
<script src="/scripts/views/person.View.js"></script>
<script>
    YUI().use('app', 'personModel', 'personView', function (Y) {
        var app = new Y.App({
            views: {
                personView: {type: 'PersonView'}
            }
        });

        app.route('/', function () {
            var person = new Y.Person();
            this.showView('personView', {model: person});
        });

        app.render().dispatch();
    });
</script>
</body>
</html>

The most important line here is the YUI seed call, where we pull in yui-min.js; at this point we have access to the YUI libraries. Next we link in our model and view, which we will see shortly. Ideally you would move these to a separate config file at some point in the future as you add more and more scripts.
These three lines cause all of our javascripts to be included in the project. The YUI().use call is the unique way YUI works: you pass in what parts of the YUI library you want to access, and it creates an object with *JUST* that stuff included, in the form of the Y object. In this case, we want the YUI App class (and only it!) from the YUI framework, as well as our two classes, personModel and personView, which we will see in a bit more detail shortly. If you use additional YUI functionality, you need to add it in the use() call.

We create our app and configure it to have a single view named personView of type PersonView. Then we set up our first (and only) route, for dealing with the URL /. As you add more functionality you will add more routes. In the event a user requests the web root, we create a person model. Next we show the personView and pass it the person model we just created. This is how you connect data and views together using the YUI App Framework.

We then render our app and call dispatch(), which causes our app URL to be routed (which ultimately causes our person model and view to be created). If you aren't used to Javascript and are used to programming languages that run top down, this might seem a bit alien to you at first. Don't worry, you get used to it eventually... mostly.

Now let's take a look at our model, person.js:

YUI.add('personModel', function (Y) {
    Y.Person = Y.Base.create('person', Y.Model, [], {
        getName: function () {
            return this.get('name');
        }
    }, {
        ATTRS: {
            name: {
                value: 'Mike'
            },
            height: {
                value: 6
            },
            age: {
                value: 35
            }
        }
    });
}, '0.0.1', {requires: ['model']});
Next we create our new class by deriving from Y.Model using Y.Base.create(); you can find more details here. We declare a single function, getName(), then a series of three attributes: name, height and age. We set our version level to '0.0.1', chosen completely at random. When inside a YUI.add() call, we specify our YUI libraries in an array named requires instead of in the YUI.use call. Otherwise, it works the same as a .use() call, creating a customized Y object consisting of just the classes you need.

Now let's take a look at the view, person.View.js.

Like person.js, we use YUI.add() to add personView to YUI for availability elsewhere. Again we used Y.Base.create(), this time to extend a Y.View. The rest that follows is all pretty horrifically hacky, but sadly I couldn't find a better way to do things the way I want.

The first horrible hack is that:this, which simply takes a copy of PersonView's this pointer, as later, during the callback, this will actually represent something completely different.

The next hack was dealing with including Handlebars templates, something no site that I could find on the web illustrates, because they are all using a single HTML file (which makes the task of including a template trivial). The problem is, I wanted to load in a Handlebars template (we will see it in a moment) in the client, and there are a few existing options, none of which I wanted to deal with. One option is to create your template programmatically using JavaScript, which seemed even more hacky (and, IMHO, defeats the entire point of templating in the first place!). You can also precompile your templates, which I will probably do later, but during development this just seemed like an annoyance. The photosnear.me site includes them on the server side using Node, something I wanted to avoid (it's a more complex process overall, and doesn't lend itself well to a tutorial). So in the end, I loaded them using Y.io.
Y.io allows you to make asynchronous networking requests, which we use to read in our template file, person.Template. Y.io provides a series of callbacks, of which we implement the complete function: we read the result in as our template, "compile" it using Y.Handlebars, then "run" the template by calling template(), passing it the data it will populate itself with. In our case, that is the name, age and height attributes from our personModel. After executing, template() contains our fully populated HTML, which we set as our view's container using the setHTML() method.

Finally, let's take a look at person.Template, our simple Handlebars template:

<div align=right>
    <img src="" alt="GameFromScratch HTML5 RPG logo" />
</div>
<p><hr /></p>
<div>
    <h2>About {{name}}:</h2>
    <ul>
        <li>{{name}} is {{height}} feet tall and {{age}} years of age.</li>
    </ul>
</div>

As you can see, Handlebars templates are pretty much just straight HTML files, with small variations to support templating. The values {{name}}, {{height}} and {{age}} are the values that are populated with data. They will look at the data passed in during the template() call and attempt to find matching values. This is a very basic example of what Handlebars can do; you can find more details here.

Now, if you haven't done so already, run your server using the command node server.js, assuming you have added node to your PATH. Then open a browser, navigate to your server, and if all went well you should see:

Granted, it's not a very exciting application, but what you are seeing here is a fully working client/server application with a model, a view and templating. There is one thing that I should point out at this point... in the traditional sense, this isn't really an MVC application: there is no C(ontroller). Or, to a certain extent, you could look at the template as the view, and the view as the controller! But don't do that, it's really quite confusing!
Just know that we have accomplished the same goals: our data layer is reusable and testable, and our view is disconnected from the logic and data. Don't worry, the benefits of all of this work will become clear as we progress, and certainly once we start adding more complexity. In the near future, we will turn it into a bit more of an application. You can download the project source code here.
Summary: To call an external command in a Python script, use any of the following methods:

- subprocess.call() function
- subprocess.run() function
- subprocess.Popen class
- os.system() function
- os.popen() function

Whether you are a developer or a system administrator, chances are automation scripts are part of your routine. Having said that, let us jump into our problem statement.

Problem: Given an external command that can run on your operating system, how do you call the command using a Python script?

Example: Say you want to ping a remote server using your operating system's ping command, all from within your Python program.

Python provides various ways to call and execute external shell commands. Without further delay, let's discuss the various methods that can be used to invoke external commands in Python, and the ideal scenarios in which to use them.

Method 1: Using The subprocess.call() Function

The subprocess module is the recommended way of invoking and executing external commands in Python. It provides a flexible way of handling the input and output of various external/shell commands and, once invoked, it spawns new processes and then obtains their return codes. You can use various functions of the subprocess module to invoke external commands from a Python script.

The call() function of the subprocess module starts an external process, waits until the command completes, and then provides a return code. The return code can then be used in the script to determine whether the command executed successfully or returned an error. Any return code other than 0 means that there was an error in execution.
Let us have a look at the following program, which uses the call() function to check whether a ping test is successful on the system:

import subprocess
return_code = subprocess.call(['ping', 'localhost'])
print("Output of call() : ", return_code)

Output:

Output of call() :  0

Method 2: Using The subprocess.run() Function

The run() function, available since Python 3.5, runs the command described by its arguments, waits for it to complete, and returns a CompletedProcess instance from which you can read the return code and, optionally, the captured output.

Method 3: Using the subprocess.Popen Class

The use of subprocess.Popen() is recommended only for advanced cases that cannot be handled by other methods like subprocess.run() or subprocess.call(). This is because, on account of its comprehensive nature, it is more complicated and difficult to manage. Nevertheless, it can prove to be instrumental in dealing with complex operations.

Let us have a look at the following code, which uses Popen() to open Microsoft Excel on Windows:

import subprocess
subprocess.Popen(r"C:\Program Files (x86)\Microsoft Office\Office12\excel.exe")

Method 4: Using The os.system(command) Function

The os module in Python provides several functions to interact with the shell and the system directly. Let us have a look at a couple of methods to invoke external commands using the os module.

The os.system() function interacts with the shell directly by passing the command to the system shell. It returns an exit code upon completion of the command; an exit code of 0 denotes successful execution.

Let us have a look at the following program, which displays the current date of the system using os.system():

import os
dt = 'date'
os.system(dt)

Output:

The current date is: 03-09-2020

You can try this yourself in our interactive online shell:

Exercise: Run the code. Why is the output different from the output on your own machine?

Method 5: Using The os.popen("Command") Function

Let us have a look at the following program, which uses popen to print a string by invoking the shell command echo:

import os
print(os.popen("echo Hello FINXTER!").read())

Output: Hello FINXTER!
Try it yourself:

Exercise: Does the command echo have to be installed on the server for this to work?

Which Method Should You Use?

- os.system(): If you only need to run a few simple commands and do not need to capture their output (it is simply printed to the console), you can use the os.system() function.
- subprocess.run(): If you want to manage the input and output of an external command, use the subprocess.run() function.
- subprocess.Popen: If you want to manage a complex external command and also continue with your own work while it runs, you might want to use the subprocess.Popen class.
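To complement Method 2 above, here is a minimal sketch of capturing a command's output with subprocess.run(). Using sys.executable as the external command is just a trick to keep the example portable across operating systems; in practice you would pass your own command list:

```python
# Minimal sketch for Method 2: subprocess.run() gives you the return
# code and, with capture_output=True, the command's stdout/stderr.
# sys.executable (the Python interpreter itself) is used here only so
# the example runs the same way on any platform.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('Hello FINXTER!')"],
    capture_output=True,   # collect stdout/stderr instead of printing them
    text=True,             # decode bytes to str
)

print("return code:", result.returncode)
print("stdout:", result.stdout.strip())
```

Note that capture_output requires Python 3.7 or later; on 3.5/3.6 you would pass stdout=subprocess.PIPE instead.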
There are hundreds of different programming languages, so which one is right for you? As technology advances and we discover more ways to use it, there are more specialized programming languages than ever. But despite all these different languages, there are only three that you need to learn to begin your career as a web developer: HTML, CSS, and JavaScript.

If you don't know where to start, this guide will help you learn a bit about each language. Today, I'll give you all three.

HTML

HTML stands for Hypertext Markup Language. We use HTML to structure a website's content into different blocks and sections, like headings, lists, paragraphs, boxes, and much more. Once you get the basics, HTML is pretty easy to use, as it breaks content down into different elements.

<html>
<head>
<title>My First Web Page</title>
</head>
<body>
<h1>Hello World</h1>
<p>This is a paragraph.</p>
</body>
</html>

CSS

If HTML is about organizing content, CSS is about styling it. With CSS, we define how lists, paragraphs, and other things look, including spacing, colors, sizes, animations, and much more. Without CSS, websites would basically be a text file or a word document. But thanks to it, websites as beautiful as Apple's are possible. CSS makes it possible to easily adapt apps, websites, and other things to different devices with different dimensions or different element requirements. CSS provides the flexibility to control how things look and how people interact with what you build, no matter where or how they use it.

h1 {
  color: blue;
  font-size: 24px;
}

JavaScript

HTML and CSS give structure and consistent style to websites and apps. JavaScript makes them fun. In short, JavaScript helps us turn a website from a simple static page with information into an interactive page that keeps users engaged. It is also responsible for all the logic and functionality in a website, powering interactive elements, dynamically loading information, and even handling game logic in your websites.
function whyLearnJavaScript() {
  alert("Because it's fun!");
}

One important thing to mention

As time progressed and technology advanced, things that once required JavaScript have become possible with CSS or HTML alone. As developers and designers get more creative, and those creations become standards, new things get added to all three, and staying up to date with the latest is the best way to succeed in this field.

Is it worth learning web dev and JavaScript?

I teach beginners to build websites and write programs with JavaScript. Here's why:

- JavaScript is a universal language: it's everywhere, and knowing it makes you more valuable no matter what you are doing.
- JavaScript experts are in demand: it is one of the best ways to start getting paid quickly.
- JavaScript experts make good money: because there are so many jobs, you can make money with JavaScript quickly.
- JavaScript is beginner-friendly.

With these three languages under your belt, you have everything you need to begin creating almost anything: websites, apps, and even games, and getting paid for it. JavaScript developers are in high demand, so don't wait; begin learning it today.

Want to begin learning HTML, CSS, and JavaScript?

Right now, join our FREE public class on Monday, February 7th, at 10 PM CET. For those in other time zones, here are a few references:

- 10 PM CET (e.g., Berlin)
- 4 PM EST (e.g., New York)
- 2:40 AM IST (e.g., India)

We'll have students from all over the world joining us. I hope you can too!

Thanks for reading!

Source: livecodestream
https://blog.lnchub.com/learn-these-to-become-a-web-developer-2/
Hello, I've just started trying to use the Python visual in Power BI, but am getting an error whenever I try to load data.

from PyQt4 import QtCore, QtGui
ImportError: DLL load failed: The specified module could not be found.

This error occurs regardless of the data I am using or the script I try to run. I am not trying to import these modules, which makes the issue more puzzling. For reference, I am using Anaconda with Python 3.5 (I believe 3.5.6). I have pyqt 4.11.4 and sip 4.16.9 installed as well. Could someone please advise on how I might troubleshoot this?

Regards,
Dave

Hi @djanez,

1. When you add the Python visual in the report, which Python script do you run? Please share the script.
2. Please run the Python script used in Power BI desktop on Power BI side to see if the same issue occurs.
3. In Power BI desktop, ensure the directory for Python has been set correctly:

Best Regards,
Qiuyun Yu

Hi @v-qiuyu-msft,

For reference, I am trying to get to the point of creating a plot of the Kaplan-Meier curve using my data, as shown in this tutorial:

1. I have tried running my own script (see below), as well as the script found in the tutorial here:.

import numpy as np
import lifelines as ll
from lifelines.estimation import KaplanMeierFitter
kmf = KaplanMeierFitter()

import matplotlib.pyplot as plt
import plotly.plotly as py
import plotly.tools as tls
from plotly.graph_objs import *

from pylab import rcParams
rcParams['figure.figsize'] = 10, 5

f = dataset.type==1
T = dataset[f]['Time']
C = dataset[f]['Event']

kmf.fit(T, event_observed=C)
kmf.plot(title='Event over Time1')

2. I am not sure what you mean "on the Power BI side". I am trying to build this visual in Python because I cannot do what I need to in Power BI using my dataset.

3.
My python directory is through Anaconda3: Please modify your script like below: import numpy as np import pandas as pd import lifelines as ll from lifelines.estimation import KaplanMeierFitter import matplotlib.pyplot as plt import plotly.plotly as py import plotly.tools as tls from plotly.graph_objs import * from pylab import rcParams kmf = KaplanMeierFitter() rcParams['figure.figsize']=10, 5 f = True T = dataset['Time'] C = dataset['Event'] kmf.fit(T, event_observed=C) kmf.survival_function_.plot(title='Event over Time1') ax = kmf.plot() ax.get_figure().savefig("myfigure.png") I solved my DLL load failure issues by replacing Anaconda3 with WinPython.
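Independently of lifelines, the Kaplan-Meier estimate the script plots can be computed by hand with the product-limit formula, which is handy for sanity-checking a plot that misbehaves inside Power BI. A minimal pure-Python sketch (the function name and structure are illustrative, not part of the lifelines API):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    times:  observed durations
    events: 1 if the event was observed at that time, 0 if censored
    Returns a list of (time, S(t)) pairs at each observed event time.
    """
    data = sorted(zip(times, events))  # order subjects by time
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # group ties at the same time point
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            survival *= (1 - deaths / n_at_risk)
            curve.append((t, survival))
        n_at_risk -= removed  # censored subjects leave the risk set too
    return curve
```

Plotting these (time, survival) pairs as a step function gives the same curve that `kmf.plot()` draws from the fitted estimator.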
https://community.powerbi.com/t5/Issues/python-script-error/idc-p/1006909
Log message: Update ruby-net-ssh to 2.10.1.rc2.

## 1.8.1

* Change license to MIT, thanks to all the patient contributors who gave their permissions.

Log message: Update ruby-net-ssh to 2.9.2.

=== 2.9.2-rc3

* Remove advertised algorithms that were not working (curve25519-sha256@libssh.org) [mfazekas]

=== 2.9.2-rc2

* number_of_password_prompts is now accepted as ssh option; by setting it 0 net-ssh will not ask for password for password auth as with previous versions [mfazekas]

=== 2.9.2-rc1

* Documentation fixes and refactoring to keepalive [detiber, mfazekas]

=== 2.9.2-beta

* Remove advertised algorithms that were not working (ssh-rsa-cert-* *ed25519 acm*-gcm@openssh.com) [mfazekas]
* Unknown algorithms are now ignored instead of failed [mfazekas]
* Asks for password with password auth (up to number_of_password_prompts) [mfazekas]
* Removed warnings [amatsuda]

=== 2.9.1 / 13 May 2014

* Fix for unknown response from agent on Windows with 64-bit PuTTY [chrahunt]
* Support negative patterns in host lookup from the SSH config file [nirvdrum]

=== 2.9.0 / 30 Apr 2014

* New ciphers [chr4]
* Added host keys: ssh-rsa-cert-v01@openssh.com ssh-rsa-cert-v00@openssh.com ssh-ed25519-cert-v01@openssh.com ssh-ed25519
* Added HMACs: hmac-sha2-512-etm@openssh.com hmac-sha2-256-etm@openssh.com umac-128-etm@openssh.com
* Added Kex: aes256-gcm@openssh.com aes128-gcm@openssh.com curve25519-sha256@libssh.org
* Added private key support for id_ed25519
* IdentiesOnly will not disable ssh_agent - fixes #148 and new fix for #137 [mfazekas]
* Ignore errors during ssh agent negotiation [simonswine, jasiek]
* Added an optional "options" argument to test socket open method [jefmathiot]
* Added gem signing (again) with new cert [delano]

=== 2.8.1 / 19 Feb 2014

* Correct location of global known_hosts files [mfischer-zd]
* Fix for password authentication [blackpond, zachlipton, delano]

Log message: Update ruby-net-ssh to 2.8.0.
=== 2.8.0 / 01 Feb 2014

* Handle ssh-rsa and ssh-dss certificate files [bobveznat]
* Correctly interpret /etc/ssh_config Authentication settings based on openssh /etc/ssh_config system defaults [therealjessesanford, liggitt]
* Fixed pageant support for Windows [jarredholman]
* Support %r in ProxyCommand configuration in ssh_config files as defined in OpenSSH [yugui]
* Don't use ssh-agent if :keys_only is true [SFEley]
* Fix the bug in keys with comments [bobtfish]
* Add a failing test for options in pub keys [bobtfish]
* Assert that the return value from ssh block is returned [carlhoerberg]
* Don't close the connection if it's already closed [carlhoerberg]
* Ensure the connection closes even on exception [carlhoerberg]
* Make the authentication error message more useful [deric]
* Fix "ConnectionError" typo in lib/net/ssh/proxy/socks5.rb [mirakui]
* Allow KeyManager to recover from incompatible agents [ecki, delano]
* Fix for "Authentication Method determination can pick up a class from the root namespace" [dave.sieh]

Log message: Update ruby-net-ssh to 2.7.0.

=== 2.7.0 / 11 Sep 2013

* Fix for 'Could not parse PKey: no start line' error on private keys with passphrases (issue #101) [metametaclass]
* Automatically forward environment variables defined in OpenSSH config files [fnordfish]
* Guard against socket.gets being nil in Net::SSH::Proxy::HTTP [krishicks]
* Implemented experimental keepalive feature [noric]

=== 2.6.8 / 6 Jul 2013

* Added support for host wildcard substitution [GabKlein]
* Added a wait to the loop in close to help fix possible blocks [Josh Kalderimis]
* Fixed test file encoding issues with Ruby 2.0 (#87) [voxik]

Log message: Update ruby-net-ssh to 2.6.5.

=== 2.6.5 / 06 Feb 2013

* Fixed path in gemspec [thanks priteau]

=== 2.6.4 / 06 Feb 2013

* Added license info to gemspec [jordimassaguerpla]
* Added public cert. All gem releases are now signed.
=== 2.6.3 / 10 Jan 2013

* Small doc fix and correct error class for PKey::EC key type [Andreas Wolff]
* Improve test dependencies [Kenichi Kamiya]
http://pkgsrc.se/security/ruby-net-ssh
I posted this to the sourceforge BTS a month or two ago but it seems to have gone unnoticed, so I figured I'd post it to the list. The basic problem is that under some versions of the java-readline wrappers, an empty string input is returned as null, which in turn breaks jython. This is a relatively serious problem since you *need* empty string inputs to mark the ends of loops/etc. The patch is simple and is included below; the indenting got a bit munged by the sourceforge BTS but the point is clear. Below is the original post to the BTS.

Ben.

Hi. If you're using a readline console and you enter a blank line (such as when you end an indented block in a for loop, etc), ReadlineConsole returns a null input string and jython breaks with an error. This all seems to happen because the java-readline wrappers return null if an empty string was input. A patch that fixes this problem is shown below. The bug was originally reported as #145613 in the Debian BTS ().

Thanks - Ben.

--- jython-2.1.0.orig/org/python/util/ReadlineConsole.java
+++ jython-2.1.0/org/python/util/ReadlineConsole.java
@@ -39,7 +39,8 @@
      **/
     public String raw_input(PyObject prompt) {
         try {
-            return Readline.readline(prompt==null ? "" : prompt.toString());
+            String line = Readline.readline(prompt==null ? "" : prompt.toString());
+            return (line == null ? "" : line);
         } catch (java.io.EOFException eofe) {
             throw new PyException(Py.EOFError);
         } catch (java.io.IOException e) {
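The effect of the patch can be shown in isolation. In this sketch, readline() is a hypothetical stand-in for the java-readline wrapper behavior the post describes (returning null on a blank line); the guard mirrors the patched raw_input():

```java
public class ReadlineGuard {
    // Hypothetical stand-in for Readline.readline(), which some versions
    // of the java-readline wrappers have return null for an empty input.
    static String readline(String prompt) {
        return null; // simulate the user entering a blank line
    }

    // Mirrors the patched raw_input(): map null back to the empty string
    // so the interpreter sees an ordinary blank line instead of crashing.
    static String rawInput(String prompt) {
        String line = readline(prompt == null ? "" : prompt);
        return (line == null ? "" : line);
    }

    public static void main(String[] args) {
        String input = rawInput(">>> ");
        System.out.println(input.isEmpty() ? "blank line" : input);
    }
}
```

Without the guard, the null would propagate into the interpreter, which is exactly the breakage the Debian bug reports.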
https://sourceforge.net/p/jython/mailman/jython-dev/thread/E17b9RA-00014x-00@localhost/
Section (2) symlink

Name

symlink, symlinkat — make a new name for a file

Synopsis

#include <unistd.h>

int symlink(const char *target, const char *linkpath);

#include <fcntl.h> /* Definition of AT_* constants */
#include <unistd.h>

int symlinkat(const char *target, int newdirfd, const char *linkpath);

DESCRIPTION

symlink() creates a symbolic link named linkpath which contains the string target. symlinkat() operates in the same way, except that linkpath is interpreted relative to the directory referred to by the file descriptor newdirfd.

RETURN VALUE

On success, zero is returned. On error, −1 is returned, and errno is set appropriately.

ERRORS

- EACCES Write access to the directory containing linkpath is denied, or one of the directories in the path prefix of linkpath did not allow search permission. (See also path_resolution(7).)
- EDQUOT The user's quota of resources on the filesystem has been exhausted. The resources could be inodes or disk blocks, depending on the filesystem implementation.
- EEXIST linkpath already exists.
- EFAULT target or linkpath points outside your accessible address space.
- EIO An I/O error occurred.
- ELOOP Too many symbolic links were encountered in resolving linkpath.
- ENAMETOOLONG target or linkpath was too long.
- ENOENT A directory component in linkpath does not exist or is a dangling symbolic link.

VERSIONS

symlinkat() was added to Linux in kernel 2.6.16; library support was added to glibc in version 2.4.

NOTES

No checking of target is done. Deleting the name referred to by a symbolic link will actually delete the file (unless it also has other hard links). If this behavior is not desired, use link(2).

SEE ALSO

ln(1), namei(1), lchown(2), link(2), lstat(2), open(2), readlink(2), rename(2), unlink(2), path_resolution(7), symlink(7)

Section (7) symlink

Name

symlink — symbolic link handling

DESCRIPTION

A (hard) link refers to a file's inode number, where an inode number is an index into the inode table, which contains metadata about all files on a filesystem. The permissions of a symbolic link cannot be changed. (Note that there are some magic symbolic links in the /proc directory tree—for example, the /proc/[pid]/fd/* files—that have different permissions.)

Handling of symbolic links by system calls and commands

Symbolic links are handled either by operating on the link itself, or by operating on the object referred to by the link.
In the latter case, an application or system call is said to follow the link.

Symbolic links used as filename arguments for system calls (among them name_to_handle_at(2), open(2), openat(2), open_by_handle_at(2)).

Symbolic links specified as command-line arguments to utilities that are not traversing a file tree. POSIX.1-2008 changed some of these rules; for example, the command chown file is included in this rule, while the command chown −R file, which performs a tree traversal, is not. (The latter is described in the third area, below.) If it is explicitly intended that the command operate on the symbolic link instead of following the symbolic link—for example, it is desired that chown slink change the ownership of the file that slink is, whether it is a symbolic link or not—the −h option should be used. In the above example, chown root slink would change the ownership of the file referred to by slink, while chown −h root slink would change the ownership of slink itself.

Unless a tree traversal is requested (that is, the −R option is not specified), the ls(1) command follows symbolic links named as arguments if the −H or −L option is specified, or if the −F, −d, or −l options are not specified. (The ls(1) command is the only command where the −H and −L options affect its behavior.) With the −L or −r option, symbolic links that refer to directories are followed.

Certain conventions are (should be) followed as consistently as possible by commands that perform file tree walks:

A command can be made to follow any symbolic links named on the command line, regardless of the type of file they reference, by specifying the −H (for half-logical) flag. This flag is intended to make the command-line name space look like the logical name space. (Note, for commands that do not always do file tree traversals, the −H flag will be ignored if the −R flag is not also specified.) For example, the command chown −HR user slink will traverse the file hierarchy rooted in the file pointed to by slink. Note, the −H is not the same as the previously discussed −h flag.

A command can be made to follow symbolic links, whether named on the command line or encountered in the tree walk, by specifying the −L (for logical) flag. This flag is intended to make the entire name space look like the logical name space.
(Note, for commands that do not always do file tree traversals, the −L flag will be ignored if the −R flag is not also specified.) For example, the command chown −LR user slink will change the owner of the file referred to by slink. If slink refers to a directory, chown will traverse the file hierarchy rooted in the directory that it references. In addition, if any symbolic links are encountered in any file tree that chown traverses, they will be treated in the same fashion as slink.

A command can be made to provide the default behavior by specifying the −P (for physical) flag. This flag is intended to make the entire name space look like the physical name space.

For commands that do not by default do file tree traversals, the −H, −L, and −P flags are ignored if the −R flag is not also specified. In addition, you may specify the −H, −L, and −P options more than once; the last one specified determines the command's behavior with respect to the −H, −L, or −P options.

To maintain compatibility with historic systems, the ls(1) command acts a little differently. If you do not specify the −F, −d, or −l options, ls(1) will follow symbolic links specified on the command line. If the −L flag is specified, ls(1) follows all symbolic links, regardless of their type, whether specified on the command line or encountered in the tree walk.

SEE ALSO

chgrp(1), chmod(1), find(1), ln(1), ls(1), mv(1), namei(1), rm(1), lchown(2), link(2), lstat(2), readlink(2), rename(2), symlink(2), unlink(2), utimensat(2), lutimes(3), path_resolution(7)
https://manpages.net/detail.php?name=symlink
elf - format of Executable and Linking Format (ELF) files

Synopsis

#include <elf.h>

Description

The header file <elf.h> defines the format of ELF executable binary files. Amongst these files are normal executable files, relocatable object files, core files, and shared libraries. The program header table and section header table offsets in the file are defined in the ELF header. The two tables describe the rest of the particularities of the file. This header file describes the above mentioned headers as C structures and also includes structures for dynamic sections, relocation sections and symbol tables.

A file's section header table lets one locate all the file's sections. The section header table is an array of Elf32_Shdr or Elf64_Shdr structures. The initial entry of the table is used for the ELF extensions for e_phnum, e_shnum, and e_shstrndx; in other cases, each field in the initial entry is set to zero. An object file does not have sections for these special cases.

Relocation is the process of connecting symbolic references with symbolic definitions. Relocatable files must have information that describes how to modify their section contents, thus allowing executable and shared object files to hold the right information for a process's program image.

The .dynamic section contains a series of structures that hold relevant dynamic linking information. The d_tag member controls the interpretation of d_un.

typedef struct {
    Elf64_Addr r_offset;
    uint64_t   r_info;
    int64_t    r_addend;
} Elf64_Rela;

Notes

ELF first appeared in System V. The ELF format is an adopted standard. The extensions for e_phnum, e_shnum and e_shstrndx respectively are Linux extensions. Sun, BSD and AMD64 also support them; for further information, look under SEE ALSO.

See Also

as(1), gdb(1), ld(1), objdump(1), execve(2), core(5)

Colophon

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/elf.5.php
The ipv6-literal.net domain is used by Vista and Longhorn to allow the entry of IPv6 addresses into UNCs rather than host names. Any domain name in this domain is not passed to DNS but is resolved by taking the host portion and converting it into an IPv6 address. For example: 3000:0:20::1 would be represented by 3000-0-20--1.ipv6-literal.net. The Samba resolver needs to be modified to support this behaviour. Let me know if you need any more info or if I can help with testing.

Shouldn't this be in the (g)libc resolver?

I don't think so. Here are my reasons:
1) It isn't a published standard, so putting it in the resolver is probably a bad idea.
2) The namespace is owned by Microsoft.

Since it is a Microsoft-specific solution that is not yet standardised, I think it is more appropriate that Samba deal with this from an interoperability perspective. Please comment on this.

One thing I didn't mention before is that you can also encode the interface identifier in the ipv6-literal.net address. This is useful for link-local addresses. For example, fe80::10%4 can be written as the UNC or domain name fe80--10s4.ipv6-literal.net.

I think we should use this only in the places where we actually use UNC names, not in the regular resolver. This demonstrates we need a generic UNC parser, which we don't appear to have at the moment.

Marking as "Feature request"

*** Bug 4571 has been marked as a duplicate of this bug. ***

(In reply to comment #7)
Agreed. This has been addressed by an NSS module written by Simo. This bug can be closed.

ah great, didn't know that. Here's the link:
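The mapping the resolver needs is purely textual, as the two examples in this report show: ':' becomes '-', the '%' before a zone ID becomes 's', and the ipv6-literal.net suffix is appended. A quick Python sketch of that mapping (illustrative only, not the Samba implementation):

```python
SUFFIX = ".ipv6-literal.net"

def to_literal_name(addr: str) -> str:
    """Encode an IPv6 address (optionally with a %zone suffix) as an
    ipv6-literal.net host name."""
    return addr.replace(":", "-").replace("%", "s") + SUFFIX

def from_literal_name(name: str) -> str:
    """Decode an ipv6-literal.net host name back to an IPv6 address.
    Safe because 's' is not a hex digit, so it can only be a zone marker."""
    if not name.endswith(SUFFIX):
        raise ValueError("not an ipv6-literal.net name")
    host = name[: -len(SUFFIX)]
    return host.replace("s", "%").replace("-", ":")

print(to_literal_name("3000:0:20::1"))
print(to_literal_name("fe80::10%4"))
```

A resolver implementing the behaviour described above would apply the decoding step locally instead of passing such names to DNS.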
https://bugzilla.samba.org/show_bug.cgi?id=4549
Thank you to everyone for all of the kind words and thoughts and prayers. Even before I was diagnosed with Stage IV melanoma, I had begun to shy away from posting here on Cprog. This is partly due...

Type: Posts; User: Dave_Sinkula

Something like this?

#include <stdio.h>

void swapbytes(void *object, size_t size)
{
    unsigned char *start, *end;
    for ( start = object, end = start + size - 1; start < end; ++start, ...

Question 6.20

Question 6.16

Many ints will also be floats. I would choose to use strtol and strtof to attempt conversion of the text. If either is successful, you know what you got. If neither is successful, odds are it was...

People who don't regularly tune in or attempt to digest a whole show from a sound bite begin to stick out like a sore thumb... ...because you just said the same thing that Rush did. And this of...

Dwelling on minutia doesn't sell your point either. I gave links if anyone cared to investigate. The site I quoted was one that I read regularly and have a "feel" for. Much the same as if you...

In the days following 9/11 or Katrina, I found news items that showed that some charities were not exactly using donations to go to the victims (cursory search). Now this is sad on its own, but...

Not quite what was said, but I'm not surprised that is the presentation. Obama Leaps into Action on Haiti. Danny Glover had an interesting response too.

I believe a number of the K&R exercises like this expect a file to be piped to the stdin.

@Anarchy: spoiler alert, but there's not a whole lotta code there.

Here is an old thread on the topic that I'd found interesting (especially page 2):

Interesting: iowahawk: Fables of the Reconstruction

I haven't tried it yet myself.
Met Office to re-examine 160 years of climate data - Times Online. Emphasis mine, though somewhat borrowed from an evil blog that provided a "quality controlled and homogenised" snippet with the...

Dead links? All of them worked for me. The opinion is what I was after, on a blog that is not a blog to be a blog, but a blog from a software guy, since at least one of the links did discuss the...

Might that not be akin to believing tobacco companies' views on tobacco? Warming blog RealClimate run by far left PR firm. ClimateGate Development: CEI Notifies NASA of Intention to Sue - Chris...

Apologies if I've missed some things already discussed -- I have a terrible skimming habit. Nuclear? Well, the "hockey stick" was certainly an "embellishment". Here are some items that have...

101 (number) - Wikipedia, the free encyclopedia

Out of curiosity, what language is the above?

Scrabble letter distributions - Wikipedia, the free encyclopedia

Question 17.3

A brief skimming reminded me of this: Incompatibilities Between ISO C and ISO C++

Cprogramming.com - C/C++ Programming Code Snippets ?

The moments pass. Then I get right back on the horse. Exactly. I start and quit several times a day.
https://cboard.cprogramming.com/search.php?s=b724cd1510a649cc52ea32ee96cb79ac&searchid=2591749
Hi all jboss gurus,

It would be highly appreciated if somebody could share their knowledge on the following questions. I did the test using jboss 2.4.8 and tomcat 4.

1) Below is my test program (there may be some typos). I am quite confused that the 2nd invocation of the "GetUserTransaction" function seems to return the same transaction context which was started by the 1st invocation. Why? Is it because the tx context is associated with the thread where it was created, so subsequent JNDI lookups of the UserTransaction will always return the tx context associated with the thread, if one exists? However, it seems to me that J2EE does not explicitly specify this requirement. Is this a unique feature of JBoss, or do all app servers behave in this way? Please comment on this.

<%!
public UserTransaction GetUserTransaction() throws NamingException {
    InitialContext ctx = new InitialContext();
    return (UserTransaction) ctx.lookup("java:comp/UserTransaction");
}
%>
<%
UserTransaction tx = GetUserTransaction();
tx.begin();
UserTransaction tx2 = GetUserTransaction();
if (tx2.getStatus() == Status.STATUS_ACTIVE) {
    out.println("active transaction");
}
%>

2) Let's say I create a bean-managed tx stateful session bean. Within the methods of the bean, instead of using "sessioncontext.getUserTransaction" to get the UserTransaction and start the tx, I use the above approach of looking up the "UserTransaction" interface via JNDI and starting a tx without ending it. Then, in another method of the same stateful session bean, I use "sessioncontext.getUserTransaction" to control the tx. Will the same tx context be returned by the invocation of "sessioncontext.getUserTransaction"? It seems to me that the EJB spec explicitly specifies that the same tx context should always be returned to a stateful bean, if one exists. Does this hold true for a mix of the JNDI approach and "sessioncontext.getUserTransaction"? Please comment on it!

thx and rgds
fox
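Regarding question 1: JTA implementations typically associate the active transaction with the calling thread, which is why a second lookup on the same thread sees the transaction begun after the first. That thread association can be modeled with a ThreadLocal; this is an illustrative sketch of the mechanism, not JBoss's actual code:

```java
public class TxAssociationDemo {
    // Model of a transaction manager that binds the active tx to the
    // current thread, the way JTA UserTransaction implementations
    // commonly track the transaction context.
    static final ThreadLocal<String> CURRENT_TX = new ThreadLocal<>();

    static void begin(String txId) { CURRENT_TX.set(txId); }

    // Every "lookup" on the same thread sees the same association.
    static String lookupUserTransaction() { return CURRENT_TX.get(); }

    public static void main(String[] args) throws InterruptedException {
        begin("tx-1");
        String first = lookupUserTransaction();
        String second = lookupUserTransaction();
        System.out.println("same context: " + first.equals(second));

        // A different thread has no association with tx-1.
        Thread other = new Thread(() ->
            System.out.println("other thread sees: " + lookupUserTransaction()));
        other.start();
        other.join();
    }
}
```

In this model, both lookups on the JSP's request thread return the same context, while a different request thread starts with no association at all.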
https://developer.jboss.org/thread/29485
The dependence on imported coal for meeting energy requirements is likely to continue in the coming years, going by the Planning Commission's estimates. The Plan panel is of the view that the country would face a shortfall of 200 million tonnes of coal by the end of the 12th Plan period in spite of its intensified efforts to ramp up production.

"The dependence on imported coal will continue unless we tap the renewable sources to augment power generation," Planning Commission advisor (energy) I.A. Khan said here on Saturday.

The demand for coal, according to him, was set to go up from the present level of 640 million tonnes to around 980 million tonnes by the end of the Plan period (2016-17), while production could reach about 795 million tonnes in the same time. "This will still leave a shortfall of close to 200 million tonnes," he said. Mr. Khan was addressing a meeting organised by the Federation of Indian Chambers of Commerce and Industry here.

Given the shortfall and other factors, the Plan panel has evolved a three-pronged strategy to tackle the possible energy deficiency. This includes making optimum utilisation of available resources, focussing on enhanced generation using renewable sources and ensuring a more energy efficient regime.

import coal from australia. get into enhanced trade agreements with australia and import coal from them.
http://www.thehindu.com/business/Industry/coal-shortage-to-be-significant-by-end-of-xii-plan/article4727980.ece
Release 2.2 of Tapestry, an open source Java web application framework, has been released. Tapestry is a component object model for building dynamic websites; it reconceptualizes web development in terms of objects, methods and properties instead of URLs and query parameters.

More info
-----------------------

Release 2.2 of the Tapestry: Java Web Components web application framework has been released and is available at SourceForge.

Tapestry has been greatly enhanced since its 2.1 release in July, 2002. Tapestry now uses the Object Graph Navigation Library (OGNL) to dynamically read and update bean properties. OGNL supports Java-like expressions, allowing behavior to be specified in a specification that used to require Java code.

New Tapestry components include a DatePicker (which allows date input via a popup JavaScript window) and the sophisticated Table component (which presents tabular results that can span multiple pages, with navigation). The ValidField component can now perform client-side validation, such as range checks on numeric fields.

Documentation has been improved, including a new quick component reference guide, which lists the name, parameters and behaviors of all Tapestry components, complete with examples of how to use them.

Tapestry components, pages and resources can now be packaged as libraries. A simple namespace system has been implemented to reference components packaged in a library.

A sophisticated Tapestry plugin for the open-source Eclipse IDE is also available as a separate project: Spindle.

Next items on the plate:
- More, and more sophisticated, components.
- Better integration with traditional servlet and JSP applications, including the ability to use a JSP instead of an HTML template.
- A streamlined, "Tapestry-lite" for new developers and new converts from JSP.

More information about Tapestry is available at the Tapestry Home Page. Tapestry is licensed under the Lesser GNU Public License.
My thanks to the growing Tapestry community for all the great help, support and feedback.

Tapestry Java Web Components Framework 2.2 Released (33 messages)
- Posted by: Howard Lewis Ship
- Posted on: October 11 2002 11:53 EDT
Tapestry Java Web Components Framework 2.2 Released
- Posted by: Robert Liu - Posted on: October 15 2002 05:32 EDT - in response to Howard Lewis Ship

I've downloaded the 2.2 doc archive--Tapestry-2.2-doc.tar.gz. When I try to extract files, an error message pops up saying "corrupted tar file". I redownloaded the archive file, same error. But I DID successfully download the program archive. Any idea? Or is the doc archive indeed corrupted?

Thanks,
Robert
Sinobest Information Technology

Corrupted Documentation? No.

Just tested. IE has trouble with .tar.gz unless you use save as. Download the file, but save it locally, and open it once complete and it'll work. Tested with Windows 2000 and WinZip.
- Posted by: Howard Lewis Ship - Posted on: October 15 2002 06:44 EDT - in response to Robert Liu

FYI: Does not run out of the box on jboss3.0.3 (tomcat & jetty)
- Posted by: Henrik Klagges - Posted on: October 15 2002 08:17 EDT - in response to Howard Lewis Ship

The documentation .tar.gz is ok - but it doesn't run "out of the box" on the jboss3.0.3/tomcat combination - nor on jboss 3.0.3/jetty:

ant-1.5.1
BUILD FAILED
file:/opt/tapestry/build.xml:487: Error while expanding /opt/jboss/server/default/deploy/jetty-plugin.sar

Cheers, Henrik
TNGtech

FYI: Does not run out of the box on jboss3.0.3 (tomcat & jetty)
- Posted by: Henrik Klagges - Posted on: October 15 2002 08:39 EDT - in response to Henrik Klagges

Here's the fix for build.xml to make it run on 3.0.3/jetty. Most examples seem to work. A couple of deployment exceptions are thrown during startup.

=== comment out the "unjar" line

<!-- <unjar src="${jboss.server.default.dir}/deploy/jetty-plugin.sar" dest="${temp.dir}"/> -->

=== change the srcdir in the copy operation as given here:

<copy todir="${tapestry.lib.dir}">
  <fileset dir="${jboss.server.default.dir}/deploy/jbossweb.sar">
    <include name="*.jar"/>
  </fileset>
</copy>

Cheers, Henrik
TNGtech

FYI: Does not run out of the box on jboss3.0.3 (tomcat & jetty)
- Posted by: Howard Lewis Ship - Posted on: October 15 2002 08:52 EDT - in response to Henrik Klagges

Thanks for the fix; keeping compatible with JBoss is a moving target, so for the meantime I've just kept the auto config compatible with 3.0.0. It's probably time to upgrade to 3.0.3 and be done with it.

FYI: Does not run out of the box on jboss3.0.3 (tomcat & jetty)
- Posted by: Greg Turner - Posted on: October 15 2002 10:16 EDT - in response to Howard Lewis Ship

Why not upgrade to JBoss 3.2? Or better yet, why not have some docs that give instructions on integrating Tapestry to JBoss?
That way, developers can integrate Tapestry with existing JBoss server configurations. The current Tapestry build creates a JBoss server configuration specific to Tapestry. IMHO, this is the wrong way to go. JBoss is the app server, the base. So why not have docs for integrating Tapestry to JBoss (or any other app server for that matter) and not integrating JBoss to Tapestry. Tapestry and JBoss[ Go to top ] Tapestry integrates into JBoss like any other framework ... just drop the necessary JARs into the lib directory. - Posted by: Howard Lewis Ship - Posted on: October 15 2002 10:41 EDT - in response to Greg Turner The build file support is to allow a completely turn-key demo to run on the user's machine. No manual instructions for copying files or editing XML, just "ant configure run-jboss". The demo involves running McKoi DB as a service and deploying .war and .ear files, and doing some special setup of the Jetty service. Better and easier to let Ant do the work. Since it's only a demo, I don't see it as a deal-breaking issue to limit users to JBoss 3.0.0. Upgrading to a later JBoss would be good in that JBoss 3.0.0 has some severe class loading problems, speedwise, that are supposedly addressed in later releases. Right now, initial loads of pages are painfully slow, mostly because of class loading. Tapestry and JBoss[ Go to top ] Thank you. My apologies. The docs do say that the build file sets up the **demo** for JBoss 3.0.0 and I have no problem with the demo only being supported by specific versions of JBoss. - Posted by: Greg Turner - Posted on: October 15 2002 13:41 EDT - in response to Howard Lewis Ship I am most curious about what "special setup of the Jetty service" is done for the demo. As I am trying to run just the Hangman demo in JBoss 3.2 (with embedded Jetty) and it's not working due to _request.getParameter(name) in RequestContext always returning null for the service name.
I think this question is best dealt with in the Tapestry Mailing List, so I will re-ask there. Tapestry / Jetty Setup[ Go to top ] Tapestry allows components to be packaged as libraries and distributed in JARs. - Posted by: Howard Lewis Ship - Posted on: October 15 2002 14:07 EDT - in response to Greg Turner A component may have assets (a general term for images, stylesheets and other resources) packaged with the classes in the JAR file. However, such resources are not visible to a client web browser. In some way, they must be "exposed" to the client web browser. Tapestry has two ways to do this; the first is the "asset" engine service, which will read resource content from files in the classpath and push the bytes down to the client. A much better way is to, as needed, extract resources and place them into a directory mapped to a web folder. The resources can then be referenced using static URLs. The special Jetty configuration is to enable this "externalizing" of private assets. Tapestry and OGNL[ Go to top ] I think the OGNL stuff in Tapestry is a godsend. I had a complex Java / Page interaction going on, and then realized that I could do the whole thing as a single line of code in the binding using OGNL. It had to do with formatting of currency values and displaying "Free" when it was zero. I simply created a format method, put a couple of ?: operators in the binding and voila, it works great. I think OGNL is an important step for Tapestry in making it easier to use but still keeping code or script out of the HTML where it really doesn't belong (one of the shortcomings of JSP and other MVC wannabes). - Posted by: Adam Greene - Posted on: October 16 2002 07:45 EDT - in response to Howard Lewis Ship Tapestry Java Web Components Framework 2.2 Released[ Go to top ] How does Tapestry fit into the big picture with respect to Java Server Faces?
- Posted by: Pratik Patel - Posted on: October 15 2002 12:28 EDT - in response to Howard Lewis Ship Tapestry and JSF[ Go to top ] That's a whole can of worms that's been done to death in a previous discussion: - Posted by: Howard Lewis Ship - Posted on: October 15 2002 12:43 EDT - in response to Pratik Patel Now that 2.2 is out, I'm starting to work on 2.3. My goal is to allow the use of JSPs instead of HTML templates. You'll be able to mix tag libraries, including JSF tag libraries, with Tapestry components (using a Tapestry tag library). It won't be perfect (using Tapestry as-is is perfect :-) ) but it may help with Tapestry adoption. I'm also looking at some ideas to simplify initial development, a more JSP-like approach to creating simple pages (by allowing simple pages to exist without a separate page specification). I'm getting some howls from the Tapestry developer list; they don't want me to dumb down the framework, but I'm investigating it anyway. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] <shameless plug> - Posted by: Geoff Longman - Posted on: October 15 2002 21:21 EDT - in response to Howard Lewis Ship Spindle 1.1beta4, the latest version of the IDE plugin for Tapestry 2.2, is out. This version integrates with the Eclipse Update Manager. </shameless plug> Looking at the traffic on the Tapestry lists, the community has grown significantly since the last release. There's been a lot of interest from WO-aware people. Geoff Tapestry Java Web Components Framework 2.2 Released[ Go to top ] Is there any demo or real website that uses Tapestry? - Posted by: Ga Sing Li - Posted on: October 16 2002 16:26 EDT - in response to Howard Lewis Ship Tapestry Java Web Components Framework 2.2 Released[ Go to top ] Demos are available on the Tapestry home page (they are actually hosted at a second site, since SourceForge doesn't support servlets).
- Posted by: Howard Lewis Ship - Posted on: October 16 2002 17:23 EDT - in response to Ga Sing Li There have been recent postings on the mailing list about live sites coming up with Tapestry, but I don't have the URLs handy. You can download the distro and JBoss 3.0.0 and run the local demos. Setup is easy and takes less than a minute. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] Tapestry 2.2 really shines. OGNL takes away a lot of the drudgery that was involved with writing silly one/two-liner Java methods. Even the earliest versions of Tapestry had good ways to package reusable components, but the library mechanism really adds polish. The Eclipse plugin (Spindle) is great, and getting better as I type. But even better - people are actually starting to pay attention to Tapestry. The traffic on the dev list is increasing noticeably, and with the moves afoot this is set to continue. - Posted by: Richard Lewis-Shell - Posted on: October 16 2002 16:44 EDT - in response to Howard Lewis Ship I have been using Tapestry for over two years now, and I really couldn't ask for anything more than it has given me so far. If you're interested in Java MVC-type web development, this project deserves a serious look - and not just a quick glance. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] Recently a member of the Tapestry developer list posted some news about a project built in Tapestry that's available for download. A copy of his post follows... - Posted by: Geoff Longman - Posted on: October 16 2002 19:12 EDT - in response to Howard Lewis Ship I spent some time checking out the screenshots (haven't had time to download & run it yet), very nice. Geoff ----------------------------------------------------------- FROM: joe DATE: 10/14/2002 13:24:16 SUBJECT: [Tapestry-developer] new Tapestry application available. Hi, We are making a new Tapestry application available for public consumption.
WOF developers who are considering making the move to Tapestry might want to take a look at this app. I can tell you that developing the UI for this application was very easy-- at least as easy as using WOF. As a WOF developer I was immediately productive in Tapestry. Hats off to Howard for maintaining such high standards of software quality. This application contains a number of reusable components (javascript popout-window links, tree browser, Grid, etc) which we will contribute to the Tapestry component library. If you see something in the UI that looks like it might be componentized and you would like to use it, give a shout and we will look at getting it into the library sooner rather than later. The application, along with some screenshots, can be found here: This is some information about the app: In a Nutshell Pixory is a "personal image server". It allows you to store your photos on your own PC but to access them, compose them into albums, and share them anywhere on the internet. Pixory was motivated by the desire to centralize the storage of, and access to, personal photo collections. But rather than centralizing them at a commercial online photography service such as Ofoto (tm), Pixory allows you to host your own photo albums by using the computers and broadband internet connection that you already own. Pixory presents a standard web interface through which you can browse and organize your photos from anywhere on your home network, or the internet at large, and share them with anyone on the internet. Highlights: * Presents a standard web, pure html, interface for all operations. Uses no plugins or proprietary extensions. * Requires no installation. Just unzip and run. * Portable, 100% pure java. Will run on any operating system that supports a 1.4 java runtime environment (JRE). * Is completely self-contained. Requires no third party packages, aside from the JRE. * All user entered album data is stored in standard XML, accessible to the user independent of Pixory.
* Simple intuitive interface. * Pixory. * Efficient. Pixory has fairly minimal hardware requirements, running well on an Intel Pentium 500 MHz. The Pixory download is only 3MB. * Can display image metadata such as the EXIF information embedded in image files by most digital cameras and scanners. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] I am a big fan of the Tapestry framework (and have made some contributions). - Posted by: Malcolm Edgar - Posted on: October 17 2002 00:38 EDT - in response to Geoff Longman The learning curve is steeper than your average Command-pattern web framework, but what you get is a more event-based, object-oriented programming model. You get great reuse through components and end up writing a lot less code. The component feature is very important. Components revolutionised GUI programming with languages like VB and Delphi. They enabled developers with domain expertise to assemble applications using components developed by GUI experts. I think the component-based approach in Tapestry (and potentially in JSF one day) provides the same level of productivity boost. For instance, application developers with no JavaScript skills can use the Tapestry JavaScript-enabled DatePicker, PropertySelection or ValidField components out of the box. When learning Tapestry it took me 3-4 days for the whole thing to gel in my head (you may do better :) also in my defence the documentation has improved since then). When approaching it, don't try to think in terms of Servlets and JSPs; think more in terms of Components and Events. Another bonus with Tapestry is the Spindle Eclipse plugin. Spindle is very good and makes it easier to develop and maintain large Tapestry applications with their Page/JWC/Java files. What I would like to see next is a drag and drop GUI editor like Delphi :) It is great to see Tapestry framework development accelerating.
<aside> I don't think the MVC design pattern should be used to describe most web frameworks; I think people are kidding themselves when they do. </aside> Tapestry-- components done right[ Go to top ] We just released an early version of a Tapestry based application which is available for download here: - Posted by: joe panico - Posted on: October 17 2002 11:47 EDT - in response to Howard Lewis Ship It's not an enterprise application but a "personal server" (really both a client and a server). Working with Tapestry was quite a pleasure. I was almost immediately productive in Tapestry (ok, I have a WebObjects background). As you can see from the screenshots, our application contains some non-trivial UI elements (like the "lightbox"). Most of the UI elements in our application are reusable components, the creation and maintenance of which is greatly facilitated by Tapestry's squeaky clean component model. These are *real* components, black boxes which only interact with the outside world through a small number of well defined parameters. And Tapestry components hierarchically compose into more complex components in a very natural way. We had to spend very little time building our UI components, even though Tapestry did not have pre-built components for our UI. In short, Tapestry components are as close to traditional event-driven components as web components get. The Tapestry model of component interactions is quite elegant. regards, Joe Panico Tapestry Java Web Components Framework 2.2 Released[ Go to top ] Ah, if only I had looked at it a year ago. It would have saved me a ton of time and energy. - Posted by: Eric Schneider - Posted on: October 18 2002 11:49 EDT - in response to Howard Lewis Ship I'd seriously suggest people spend the time to evaluate Tapestry fairly. You will quickly realize its maturity over other MVC web frameworks (including those with large followings). It's really top notch, no BS.
Tapestry Java Web Components Framework 2.2 Released[ Go to top ] I am a long time user of Tapestry. The new features of Tapestry are very exciting to us. We are using Tapestry for two different web applications in my company. - Posted by: Dorothy Gantenbein - Posted on: October 18 2002 12:13 EDT - in response to Howard Lewis Ship The first web application is for generating survey pages on the fly. What was unique about our requirements is that we needed completely dynamic forms. The set of form components is a runtime-calculated set of fields, checkboxes, radio buttons, etc. To make our problem more difficult, the layout of the individual form components is dynamic and parameterized. Tapestry handled this situation with ease, letting us combine object-oriented form components at runtime into survey pages. Tapestry also automatically handled all the posted responses and validated the responses. Finally, we had stringent performance requirements. Our benchmarks showed that generating these dynamic pages under load was extremely fast (~30 ms) using Resin as our web container. (see) The second web application is a sophisticated administration console. We needed a tree on the left and an edit pane on the right. We wanted the edit pane to change based on the selection within the tree. Finally, we wanted a menubar along the top. We already had nice looking Javascript menus and trees but we wanted to work with them in an object-oriented manner within Java (not Javascript). So, we defined Tapestry components that wrap our Javascript pieces. Now, we have a very powerful object-oriented web framework. Using Tapestry, our development time was a fraction of what we expected it to be. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] We've been using Tapestry for a while now, and must say it's far better, easier, faster, cleaner than stuffing code into JSPs...
We've used Struts for a couple of applications previously, and when I found Tapestry I presented it to my boss, and after a short evaluation period we switched to Tapestry for all new web-based applications to be produced. - Posted by: Pablo Lalloni - Posted on: October 21 2002 01:13 EDT - in response to Dorothy Gantenbein Sometimes I still have to fix or change something in those two old apps based on Struts and, man, then I keep seeing the huge difference... once you start working in Tapestry you don't want to go back to other frameworks... at least not to Struts ;) I think the 2 most excellent features of Tapestry are the reusability of components (and the little work it takes to make one) and the automatic handling of URLs in services. I don't think the integration with JSP is necessary... I'm even afraid of it... I don't want to see this framework contaminated. This integration is proposed for gaining acceptance and making it easier for newbies to catch up. Well, I think the JSP support will just make the learning period longer. Why, instead of making this Tapestry Lite to gain acceptance, not add features that make it unique and necessary when developing web apps? For example, we have an HTML template engine; why not have, say, an SVG or any other media template engine? I also must say here that the community of Tapestry is great; when I had difficulties and posted questions on the mailing list I had almost instant answers, and Howard seems to have endless patience to answer newbie questions. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] I'm hoping the newbie questions will decrease with the new tutorial Neil is putting together. - Posted by: Howard Lewis Ship - Posted on: October 21 2002 10:37 EDT - in response to Pablo Lalloni It is becoming harder for me to keep up with the traffic, which is good and bad. Fortunately, others can answer questions as well!
Still missing JSP[ Go to top ] Hi, I like the idea of having something like event-driven/traditional GUI programming in Tapestry, but I think it is really a shame that it is missing JSP integration. If you are interested in my preliminary vision of how JSP/Servlet frontend development should go, check out the - Posted by: Karl Banke - Posted on: October 18 2002 15:46 EDT - in response to Howard Lewis Ship iternum ui framework preview. Karl Still missing JSP[ Go to top ] Can you quantify *why* you miss JSP? - Posted by: Howard Lewis Ship - Posted on: October 18 2002 16:28 EDT - in response to Karl Banke I'm currently working on extending Tapestry to provide Tapestry Lite. It'll resemble JSP a little more, and serve as a transition layer for newbies. HTML templates will exist directly in the application context (instead of on the classpath). No application spec or page specs will be needed. Like JSP, there will not be a separate specification file ... you'll put component types, ids and configuration directly into HTML. Like Tapestry Professional (Traditional? anyway, the correct way of doing things), there will not be any Java in the HTML template. There will be a lot of things you can't do in Tapestry Lite. However, there'll be a clean transition to Tapestry Professional, simply by adding proper application and page specifications. You'll mix and match. What do you get in JSPs? Java code (universally accepted as a bad idea), taglibs (which can be useful, but Tapestry components are much better) and ... not much else. Lots of weird Java-code-related cruft (like <%@ page import="" %>) and the like. Long compile times when you change things. Still missing JSP[ Go to top ] You have a point from the traditional UI perspective. However, I miss JSP for reasons like these: - Posted by: Karl Banke - Posted on: October 19 2002 07:58 EDT - in response to Howard Lewis Ship - most projects are not green field.
They use some technology, server, or product that is based on JSP and requires it - taglibs are not so bad after all. - JSP provides a programming model a lot of people have grown accustomed to. That does not say it is the best possible - it is probably not. - If I tell a manager I want to use a JSP-standards-based product, I might be a lot more successful than telling her I want a product that saves all of us money and time and, uh, unfortunately it does not depend on standards-based JSP. Keep up the good work, Karl Still missing JSP[ Go to top ] <quote> - Posted by: Kimmo Eklund - Posted on: October 19 2002 11:05 EDT - in response to Karl Banke taglibs are not so bad after all. </quote> Well, they aren't that good either. I used to think that taglibs are ok.. But not anymore. I realized that user interface development really benefits from a component-based approach in terms of better quality and productivity. <quote> If I tell a manager I want to use a JSP-standards-based product, I might be a lot more successful than telling her I want a product that saves all of us money and time and, uh, unfortunately it does not depend on standards-based JSP. </quote> The fact that Tapestry runs on the Servlet API is IMHO far more important than the templating technology used. Integrating web applications at the user interface level is always difficult if applications use different frameworks. Rarely can you just glue two web user interfaces together and live happily ever after. A JSP-based application can be almost anything. It might use a proprietary framework or Struts etc., or nothing at all. There are no real application-wide 'standards'. Integrating is always easy if you are only showing stuff on a web page. But when there's actually some interaction with the user, things tend to become much more complicated. There are situations where JSP is the best choice. But there are other options too. Tapestry is *very* good at building web-based applications.
Tapestry Java Web Components Framework 2.2 Released[ Go to top ] This is an attractive framework compared to other frameworks in terms of components, templating, writing simple Java code, built-in validators, etc. - Posted by: Natesh Babu Lakshmanan - Posted on: October 19 2002 20:33 EDT - in response to Karl Banke Great work, and keep it up. But even though it is based on the Servlet API, it uses its own terminology like Visit, Engine, etc. A programmer who is used to working in JSP or Servlets has to learn all these new concepts and correlate them with standard terms like Session, Application, etc. Do you have a head-to-head comparison with JSP concepts, since this is an alternative to JSP, to reduce the learning curve? 2 Questions About Tapestry Framework[ Go to top ] 1. What are the advantages of using Tapestry over an XML/XSL approach? That is, morphing the data to XML and choosing an XSL template with which to generate the output. - Posted by: Greg Turner - Posted on: October 19 2002 19:53 EDT - in response to Howard Lewis Ship 2. I note that Tapestry supports Locales, but can Tapestry also be used for those apps where it needs to choose a template to render based on client type, for example - Mozilla vs the browser in my Treo 300. Thanks Tapestry vs. XML/XSL[ Go to top ] First off, when you talk about XML/XSL pipelines, you are largely covering just the render phase of your application. - Posted by: Howard Lewis Ship - Posted on: October 20 2002 10:19 EDT - in response to Greg Turner The important parts of Tapestry are in the round-trip logic that links rendering (of links and forms) in one request cycle to dispatching of incoming requests in the next cycle. That removes tons of code ... and thus, tons of bugs. To my mind, over-leveraging XML/XSL to support multiple devices is a red herring. When you have an application that runs on a desktop PC it's going to be completely different than an application on a cell phone.
More than just editing out some stuff and re-arranging fields, it's simply a different application. The XML/XSL approach appeals to the "ivory tower" mindset, because it appears to save effort ... but boy, that XSL can get complicated fast. To me, XML/XSL is like programming inside a case statement ... you end up having too many hidden dependencies to manage. In Tapestry terms, you'll have a large PC application and a small cell-phone application (probably inside the same WAR). Reuse occurs in the application layer (EJBs, database and beyond). The presentation layer should be kept flexible, to implement the many minor tweaks needed for good usability. Code reuse can happen with Tapestry objects .... so the PC version of a page and the cell-phone version of a page may share the same subclass of BasePage, but be different pages (with unique page specifications and templates). Tapestry vs. XML/XSL[ Go to top ] Thanks for your input. In general I agree about the shortcomings of the XML/XSL approach. Perhaps you are correct about an app on a PDA not being the same app as on the PC, perhaps not. However, the URL for an app is part of the company's brand. And a brand is something that companies spend lots of $$ to get imprinted in our brains. No company is going to want a different version of a URL for a PDA version of their site. I can browse Google or Amazon with my PDA and it's smart enough to deliver HTML appropriate to the device. I am not so quick to dismiss this bit of functionality. I want this ability in a framework. - Posted by: Greg Turner - Posted on: October 20 2002 12:19 EDT - in response to Howard Lewis Ship Tapestry vs. XML/XSL[ Go to top ] That's just a little implementation detail. You could easily have a "welcome" servlet that sniffs the client and forwards to the correct Tapestry application based on it.
- Posted by: Howard Lewis Ship - Posted on: October 20 2002 12:40 EDT - in response to Greg Turner Once you are past that first URL, users should not know or care about the URL; they just want to be able to bookmark it. Tapestry Java Web Components Framework 2.2 Released[ Go to top ] I've been using and learning Tapestry for the last 3 or so months. My previous experience includes Turbine and Velocity - as well as standard JSP/Servlets. - Posted by: Neil Clayton - Posted on: October 30 2002 03:31 EST - in response to Howard Lewis Ship Summary: I'm now hooked :-) After converting an existing JSP/Servlet web site into Tapestry (partly to learn more about the framework) it has become clear to me that it is significantly more powerful in terms of every day site creation and maintenance than other frameworks I've used. I should clarify that by saying that the site is interactive, not just static pages with little dynamic content. The Tapestry community is also excellent. I've been sitting on the mailing lists for a while now and almost every discussion is professional and helpful. It is certainly a list that a newbie can feel comfortable posting to! Howard and the team; keep up the excellent work!
http://www.theserverside.com/discussions/thread.tss?thread_id=15973
> I have tried to follow the tutorial I found here: > > Python 2.7 Tutorial > > > This is what I have done so far: > > #!/usr/bin/python > > from Tkinter import * > import Tkinter.MessageBox I figured I might as well, given how I recently had to learn about this, give you a heads up on imports. The format "from module import *" is not a very good idea: it makes your code harder to read (as the reader has to inspect each and every bare name to check whether it's actually assigned locally OR comes from the deuced *). To quote the Zen of Python: "namespaces are a honking great idea -- let's do more of those!". You can import modules with several different syntaxes: import importable import importable1, importable2, ..., importableN import importable as preferred_name Here importable is usually a module such as collections, but could be a package or a module in a package, in which case each part is separated with a dot, for example os.path. The first two syntaxes are the simplest and also the safest because they avoid the possibility of having name conflicts, since they force us to always use fully qualified names. The third syntax allows us to give a name of our choice to the package or module we are importing. Theoretically, this could lead to name clashes, but in practice the "import importable as" syntax is used to avoid them. There are some other import syntaxes: from importable import object as preferred_name from importable import object1, object2, ..., objectN from importable import (object1, object2, ..., objectN) from importable import * In the last syntax, the * means "import everything that is not private", which in practical terms means either that every object in the module is imported except for those whose names begin with a leading underscore, or, if the module has a global __all__ variable that holds a list of names, that all the objects in the __all__ variable are imported.
The from importable import * syntax imports all the objects from the module (or all the modules from the package) -- this could be hundreds of names. In the case of from os.path import *, almost 40 names are imported, including "dirname", "exists", and "split", any of which might be names we would prefer to use for our own variables or functions. For example, if we write from os.path import dirname we can conveniently call dirname() without qualification. But if further on in our code we write dirname = "." the object reference dirname will now be bound to the string "." instead of to the dirname() function, so if we try calling dirname() we will get a TypeError exception because dirname now refers to a string, and we can't call strings. However, given that Tkinter is such a huge package to begin with, I'd say that you should continue to use from Tkinter import *, but be aware of what you're doing when you type that, and that there is a certain risk of conflicts. best regards, Robert S.
https://mail.python.org/pipermail/tutor/2011-August/085004.html
GCC Moving To Use C++ Instead of C kdawson posted more than 4 years ago | from the keeping-with-the-times dept. I think it's a sign of impending apocalypse (2, Insightful) Anonymous Coward | more than 4 years ago | (#32415952) Either that, or could we be about to see the beginning of a gcc/llvm compiler arms race? As everyone forgot EGCS vs GCC back in Linux 2.x (0, Insightful) Anonymous Coward | more than 4 years ago | (#32416164) Damn, I remember how everyone was fighting with different versions of slightly different compilers. Linux kernels back in 2.0 and 2.2 were a mess, and I was maintaining Caldera OpenLinux distributions numbered 2.1 to 2.3 (those aren't Linux kernel versions, but box product revisions). Yeah, I was the sole user that Darl McBride prided himself on, just because OpenLinux was specialized for the better IPX network share support that my Fortune 10k company needed. Thank you PJ and Groklaw sandwitchcraft for the smoke and mirrors. Re:As everyone forgot EGCS vs GCC back in Linux 2. (1) Pikoro (844299) | more than 4 years ago | (#32416416) Seems odd... (2, Interesting) man_of_mr_e (217855) | more than 4 years ago | (#32415956) Re:Seems odd... (1) man_of_mr_e (217855) | more than 4 years ago | (#32415974) After a few moments of thought, the answer seems obvious, so I'm answering my own question. They will likely have a bootstrap C++ compiler written in C that is capable of compiling the full C++ compiler. Re:Seems odd... (5, Insightful) Capena (1713520) | more than 4 years ago | (#32415998) how do you get a C++ compiler working on a platform that doesn't have one? Why not bootstrap using a cross compiler? Re: ding ding winner (-1, Offtopic) Anonymous Coward | more than 4 years ago | (#32416124) Mod up. This is the correct answer to man_of_mr_e (217855)'s question. Re:Seems odd... (5, Funny) Anonymous Coward | more than 4 years ago | (#32416310) Re:Seems odd...
(5, Funny) FlyingBishop (1293238) | more than 4 years ago | (#32416624) With the nodes that insert a backdoor into the unix login program colored red. Re:Seems odd... (1) Joce640k (829181) | more than 4 years ago | (#32416008) i.e. they could use any of the current compilers for the 'bootstrap'... Re:Seems odd... (1) man_of_mr_e (217855) | more than 4 years ago | (#32416028) That would be wasteful; they could, however, strip down one of the current compilers and make it a "bare minimum" of features necessary to support the compiler. Re:Seems odd... (2, Insightful) Joce640k (829181) | more than 4 years ago | (#32416088) Thinking even harder ... they could compile GCC on another machine but set the output target as the platform they're trying to get it to run on. Then you just copy the binary across. Nope (1) Weezul (52464) | more than 4 years ago | (#32416254) They'll implement the new machine code generation routines in C++ just like now, and then cross compile gcc. Re:Seems odd... (0) Anonymous Coward | more than 4 years ago | (#32416016) GCC has always claimed that they are going the "we can bootstrap ourselves" way. Re:Seems odd... (1) Sigurd_Fafnersbane (674740) | more than 4 years ago | (#32416022) Since there are platforms for which C++ compilers exist, you can compile the compiler on one of these and then cross-compile for the target platform. This is also how you bootstrap a C compiler on a platform it is not implemented for initially. Re:Seems odd... (1) man_of_mr_e (217855) | more than 4 years ago | (#32416044) Yes, that is one way to do it, but my point was that gcc has always prided itself on being able to bootstrap itself with minimal work, and without cross compilation. Cross compilation was sort of considered to be "cheating" Re:Seems odd... (1) Sigurd_Fafnersbane (674740) | more than 4 years ago | (#32416114). Re:Seems odd...
(3, Interesting) Philip_the_physicist (1536015) | more than 4 years ago | (#32416544) (2, Interesting) Joce640k (829181) | more than 4 years ago | (#32416402) Maybe they've admitted that 'pride' is holding them back and that being able to use STL (for example) is a greater good than being able to do an initial compile on some obscure microcontroller which has a barely functioning C compiler. Re:Seems odd... (0, Redundant) Yvanhoe (564877) | more than 4 years ago | (#32416282) You generate a binary for system Y on system X thanks to a compiler compiled in system X binary format. Re:Seems odd... (0, Redundant) Cyberax (705495) | more than 4 years ago | (#32416328) Cross-compilation from a working platform. It's not like many people don't have access to a platform powerful enough now. The need to bootstrap GCC from any platform only with K&R C has evaporated long ago. Re:Seems odd... (1) cyberthanasis12 (926691) | more than 4 years ago | (#32416514) Re:Seems odd... (0) Anonymous Coward | more than 4 years ago | (#32416578) To be fair, it is right that the only real compiler (ie not a toy like Microsoft's) is built using a real language (ie not a toy like Microsoft's C#, or, heaven forbid, VB lol). C++ programmers are the best, most experience programmers around, and getting them on board the GCC project is undoubtedly a good idea. Re:Seems odd... (1) Jaydee23 (1741316) | more than 4 years ago | (#32416644) Great (2, Funny) jimmydevice (699057) | more than 4 years ago | (#32415960) C++? (0, Flamebait) Anonymous Coward | more than 4 years ago | (#32415964) C++ is the most horrible language I've ever had to write code in. I think they should have sticked to pure C. Re:C++? (1, Interesting) Anonymous Coward | more than 4 years ago | (#32415996) Re:C++? (3, Insightful) man_of_mr_e (217855) | more than 4 years ago | (#32416010):C++? 
(1) Homburg (213427) | more than 4 years ago | (#32416138) C++ is also required to be more-or-less compatible with C, and with various different pre-standard dialects of C++, which both prevents removing some of unpleasant parts, and means that new features have often had to be added in fairly baroque forms. Re:C++? (1, Insightful) Anonymous Coward | more than 4 years ago | (#32416148) I would call the "high level" part a goodness. It's overly complex, making it too complex to understand, and hard to read and write code. C is already hard, but this is because of the low level part which you can't go without, and C++ just makes it insane. And some language decisions are troubling. Abusing left and write shift with streams? I nearly puke when I see code like that. Couldn't they just have a 'write' and 'read' methods? It's confusing as to whether the code is doing shifts, or writes, or what, making it hard to read. Template metaprogramming? Macros were already a way to make everything horrible, now there are even more things that allow you to do this. While it is a useful feature, any template code is nearly incomprehensible. References for me are just more complicated pointers that only make code even harder to understand. And if you're doing high-level OO programming you should be avoiding using such constructs, just pass or return an object that carries the required information. Returning values through arguments like that is a bad idea, like all the things that references allow you to do. f(x) changing the value of x is a completely unexpected and surprising result for me. In C at least it had to be f(&x). Not to mention the use of & for references confuses the hell out of me. Everything new, including the good things, like == overloading, and the way they are implementing, contributes to the extreme complication and insanity of C++. C++ is the most horrible Object Oriented system you could add on top of C. Even Objective-C is better. Re:C++? 
(4, Insightful) jandersen (462034) | more than 4 years ago | (#32416186) (5, Insightful) r00t (33219) | more than 4 years ago | (#32416470):C++? (2, Insightful) Anonymous Coward | more than 4 years ago | (#32416058). :) Transitioning from C to C++ (1) Decollete (1637235) | more than 4 years ago | (#32415988) Linus will raise! (1, Interesting) Anonymous Coward | more than 4 years ago | (#32416012) considering he sustained that C++ is utter crap and that is why he didn't use it to develop git.... I just long for his rants... ^_^ What... (1) f3rret (1776822) | more than 4 years ago | (#32416014) How do you compile a compiler written in the language it compiles... Re:What... (3, Informative) Ckwop (707653) | more than 4 years ago | (#32416048) Enjoy [google.co.uk] Re:What... (2, Insightful) zebslash (1107957) | more than 4 years ago | (#32416056) Well, ever thought that issue also happened for a gcc written in C? Compilers come with minimal bootstrap compilers written in assembler to initiate the first compilation. Then compilers compile themselves several times until they reach a final version. Re:What... (2, Informative) Anonymous Coward | more than 4 years ago | (#32416132):What... (1) BetterThanCaesar (625636) | more than 4 years ago | (#32416064) Re:What... (1) hey (83763) | more than 4 years ago | (#32416448) The other compiler can be a previous version of the same compiler. Re:What... (1) BetterThanCaesar (625636) | more than 4 years ago | (#32416094) Re:What... (5, Funny) josgeluk (842109) | more than 4 years ago | (#32416358) Quis compilabit ipsos compilatores? ecco, tibi fixi . Out of the ashes and into C++ (4, Funny) trialcode (1400591) | more than 4 years ago | (#32416052) Re:Out of the ashes and into C++ (-1, Flamebait) Anonymous Coward | more than 4 years ago | (#32416086) boycott Israel !!! Re:Out of the ashes and into C++ (0) bonch (38532) | more than 4 years ago | (#32416140) So sarcastic, it makes my teeth itch. 
Re:Out of the ashes and into C++ (0) Anonymous Coward | more than 4 years ago | (#32416194) Great idea! This will surely help steal back users from LLVM/clang. Disregarding the sarcasm in the rest of the post, LLVM's compiler is currently implemented by replacing GCC's backend with LLVM. This means C++ code gets into the compiler and whilst GCC is "Mainline = C only" it is impossible to declare GCC+LLVM an officially supported combination. By most accounts I've read, LLVM generally produces superior code and is designed to function as a research and experimentation platform so it's an obvious choice to use that as the backend (perhaps keep the GCC specific one but de-emphasise it). On the other hand, it may not have much to do with LLVM at all and instead just be intended to allow use of classes and stronger type checking for better internal organisation. Re:Out of the ashes and into C++ (1) nitehorse (58425) | more than 4 years ago | (#32416234) You jumped right on the LLVM part, but apparently don't know what clang is. You should read about clang. [llvm.org] Re:Out of the ashes and into C++ (1) serviscope_minor (664417) | more than 4 years ago | (#32416370), really hard. Given how much more complex compilers have become, we really don't want to return to those days. GCC and the GPL have really helped in this regard. Being the only fully featured, open source C++ compiler meant that the choice was low cost, low effort and openness, versus very high cost, high effort and closedness, the vendors chose GCC. Having basically a common. good, standards compliant compiler has made my life so much easier. So yeah as someone who suffered for years under the heel of shoddy vendor compilers, I really hope that the compiler world does swing back to gcc. It'd be better for business all round. Re:Out of the ashes and into C++ (1) ZorbaTHut (126196) | more than 4 years ago | (#32416496) As opposed to the closed inaccessible LLVM/Clang combo? 
The GCC developers have shown their ability to compete with MSVC. For a while, they had the edge. They no longer do, and part of that, from what I know, is thanks to how grim and unmaintainable the GCC codebase is. Personally, I'm quite excited for something better, and I'm really excited for something better that can be embedded in other projects. From the article it is obvious (0) FithisUX (855293) | more than 4 years ago | (#32416066) Re:From the article it is obvious (1, Interesting) Rockoon (1252108) | more than 4 years ago | (#32416146) Re:From the article it is obvious (0) Mad Merlin (837387) | more than 4 years ago | (#32416230) They're allowing the STL but not custom templates. The STL alone makes C++ worth using. Re:From the article it is obvious (1, Troll) Rockoon (1252108) | more than 4 years ago | (#32416240) Re:From the article it is obvious (4, Informative) Cyberax (705495) | more than 4 years ago | (#32416372) Because ObjectiveC is a slow shit? Seriously, it might be OK for designing GUI interfaces, its dynamic nature helps there. But for compiler writing I'd prefer something: 1) Fast. 2) Typed. 3) Deterministic (no non-deterministic GC). Re:From the article it is obvious (1, Informative) Anonymous Coward | more than 4 years ago | (#32416490):From the article it is obvious (1) Cyberax (705495) | more than 4 years ago | (#32416564) I'm aware of garbage collector in GC. However, it's completely deterministic, and GCC people don't want to change it. I tried to do some GCC hacking 3 years ago before I gave up and used LLVM. Choices, choices (4, Insightful) Cee (22717) | more than 4 years ago | (#32416068) To paraphrase Einstein [wikiquote.org] : [att.com] lately. C++ can be very lean and mean indeed. As can C# (which I'm mostly using right now). Re:Choices, choices (0) Anonymous Coward | more than 4 years ago | (#32416156) High level != Simple. Ruby is quite high level, but you'd be amazed by the amount of crappy Ruby code out there.. 
Re:Choices, choices (4, Insightful) daid303 (843777) | more than 4 years ago | (#32416236) (5, Insightful) serviscope_minor (664417) | more than 4 years ago | (#32416384):Choices, choices (1) LinuxAndLube (1526389) | more than 4 years ago | (#32416518) I don't oppose C++, but You Have To Know What You Are Doing (TM). Actually, that's why I like C so much: you don't have to know what you're doing! Re:Choices, choices (0) Anonymous Coward | more than 4 years ago | (#32416388) It's not that it's bloated or slow. It's that it's retarded and misdesigned. It's as if C was reinvented with slightly different semantics for no real benefit. So, if you present FAQ, I have to counter it with FQA [yosefk.com] . Re:Choices, choices (0, Troll) Joce640k (829181) | more than 4 years ago | (#32416494) The C++ bashers are an undereducated bunch of whiners. Telling a C++ programmer to go back to C is like telling a C programmer to go back to assembly language, not going to happen. Deciding how to allocate CPU registers to get the tightest loop might be fun for a while but it simply doesn't work when you've got to write a real program. Just like C compilers which allocate registers for you (and do a pretty good job!), a C++ compiler makes coding MUCH EASIER and MORE RELIABLE by doing all the micromanagement for you. Anybody who thinks "C++ is C with a few extra things" is very, very wrong. Re:Choices, choices (0) Anonymous Coward | more than 4 years ago | (#32416592) Have you read the announcement? No? Come on! it's not like it's an article, is it? OK, OK, this is Slashdot after all... Well, the funny thing is that they have agreed to use C++, but they still have to discuss what to do with it!! I will repeat it just in case: They don't have any idea of how (or if) it will be of any help. Talk about irony... 
Incorrect headline (5, Informative) Letharion (1052990) | more than 4 years ago | (#32416098) Re:Incorrect headline (1) jimmydevice (699057) | more than 4 years ago | (#32416218) AVR-GCC (1) dohzer (867770) | more than 4 years ago | (#32416102) Will this feed through to things like AVR-GCC for Atmel AVR 8-bit microcontrollers? I wonder what changes in performance we would see. [sourceforge.net] Re:AVR-GCC (0) Anonymous Coward | more than 4 years ago | (#32416122) AVR-GCC can already compile C++, like all other GCC versions (AFAIK). The language in which the compiler itself is written won't affect users directly, unless it results in flaky behavior. And frankly, it doesn't sound like a C++ guru convention is going on over there. The devs don't know C++?? Its a C++ compiler! (5, Funny) Viol8 (599362) | more than 4 years ago | (#32416116) Are they seriously trying to suggest that the people who work on developing and maintaining a C++ compiler are novices in C++?? Sorry , am I missing something here? Re:The devs don't know C++?? Its a C++ compiler! (-1) Pinhedd (1661735) | more than 4 years ago | (#32416150) Re:The devs don't know C++?? Its a C++ compiler! (0) Anonymous Coward | more than 4 years ago | (#32416178) Wrong. GCC has a C compiler, a C++ compiler, a Object-C compiler, a Java-Compiler, a Fortran compiler and a Ada Compiler.... and also libraries and extra tools useful for those languages. Re:The devs don't know C++?? Its a C++ compiler! (5, Informative) chocapix (1595613) | more than 4 years ago | (#32416224):The devs don't know C++?? Its a C++ compiler! (1) Homburg (213427) | more than 4 years ago | (#32416228) didn't know C++. Rubbish (1) Viol8 (599362) | more than 4 years ago | (#32416260) $-linux Reading specs from Target: i486-slackware-linux Configured with: Thread model: posix gcc version 4.3.3 (GCC) That looks like gcc to me Re:The devs don't know C++?? Its a C++ compiler! 
(0) Anonymous Coward | more than 4 years ago | (#32416300) What's the hell are you talking about ?! GCC, the GNU Compiler Collection The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). Re:The devs don't know C++?? Its a C++ compiler! (1) bgarcia (33222) | more than 4 years ago | (#32416398) Parent is obviously one of the GCC developers. Re:The devs don't know C++?? Its a C++ compiler! (5, Funny) dgriff (1263092) | more than 4 years ago | (#32416452) Car analogy time! You can be an expert car mechanic without knowing how to drive. I'll get me coat... Re:The devs don't know C++?? Its a C++ compiler! (1) Ed Avis (5917) | more than 4 years ago | (#32416520) Great ! Another printout to burn (2, Funny) abies (607076) | more than 4 years ago | (#32416134) Looking at the GNU Coding Standard [gnu.org] which is used for gcc, whatever 'best practices' and style guideline they come with will make a good fireplace material [kernel.org] ... Finally! (2, Interesting) serviscope_minor (664417) | more than 4 years ago | (#32416158). Re:Finally! (1) Renegade88 (874837) | more than 4 years ago | (#32416322) Conversely, I'll keep rooting for LLVM because it's not GPL licensed. Re:Finally! (1) serviscope_minor (664417) | more than 4 years ago | (#32416422) like GCC being under the GPL. Re:Finally! (1, Informative) Anonymous Coward | more than 4 years ago | (#32416512) You're not parsing correctly. He's talking about app/whatever code written using tons of #defines to work around bugs and features in different vendors compilers. Re:Finally! (0) Anonymous Coward | more than 4 years ago | (#32416376) I once used a C compiler on CPM which was a shell script. You could see it step through the preprocessor, code generator and assembler. 
C++ flame wars (5, Funny) o'reor (581921) | more than 4 years ago | (#32416242) Here's somme ammo from bash.org [69.61.106.93] : Since they're clearly stealing ideas from clang... (2, Insightful) nitehorse (58425) | more than 4 years ago | (#32416262) Maybe while they're at it, they can add in actually-useful error messages. See [llvm.org] for some examples. It's shocking how user-hostile GCC is in comparison. Safe subset (5, Insightful) steveha (103154) | more than 4 years ago | (#32416302):Safe subset (0) Anonymous Coward | more than 4 years ago | (#32416344) Well, there is already one sick part of C++ in using something as simple as namespaces and simple OOP: C++ namemangling, which is horribly spec'ed and can lead to huge problems. For example a project of the size of gcc might suffer quite a noticable slowdown on startup (dynamic linking) because of those freaky long mangled names. Re:Safe subset (2, Interesting) Joce640k (829181) | more than 4 years ago | (#32416410) On the other hand ... having the compiler mangle the names for you instead of having to do it manually "MyClassAddFloatAndInt()" might be a win in the long term. Re:Safe subset (1) serviscope_minor (664417) | more than 4 years ago | (#32416404). There is nothing special about operator overloading. In any good language, you get used to thinking about operators as functions with a funny syntax. That removes all the mystery. Sure it is not good to overload everything up to the eyeballs, but one could say that in C it is easy to make millions of silly macros or functions. Besides, GCC makes quite heave use of GMP which is the *IDEAL* candidate for operator overloading. Re:Safe subset (1) Skapare (16644) | more than 4 years ago | (#32416474) Any language can be abused. GCC does that a lot with C already and it is widely known (GLIBC is worse, BTW). The issue might well just be that with this fact known, they want to avoid having that problem just get worse with C++. 
Re:Safe subset (1)
Joce640k (829181) | more than 4 years ago | (#32416420)

(5, Insightful)
r00t (33219) | more than 4 years ago | (#32416530)

(5, Funny)
Joce640k (829181) | more than 4 years ago | (#32416632)
Ummm... just right click the function name and select "Find all references" from the popup menu.

Operator overloading is essential (1)
Joce640k (829181) | more than 4 years ago | (#32416614)
Sure it can be abused; if you're overloading operator+ to insert records in a database you're doing it wrong. OTOH operator[], operator*, operator-> and even operator are fundamental to data processing - you can't remove them from C++ without doing a lot of harm. Anybody who advocates removal of operator overloading has completely missed the point - yes they can be evil but the good far outweighs the bad.

80's technology (0, Flamebait)
toolslive (953869) | more than 4 years ago | (#32416356)

Re:80's technology (5, Funny)
Narishma (822073) | more than 4 years ago | (#32416522)
Yeah, exactly. I don't understand why they didn't choose something modern like Ajax.

Busted! (1)
Trivial Solutions (1724416) | more than 4 years ago | (#32416426)

C, the best subset of C++ (2, Insightful)
itsybitsy (149808) | more than 4 years ago | (#32416620)
Subject line says it all. C is the best subset of C++ there is or ever will be.
I'm interested in doing some benchmarking, but am new to the concept of doing so. Any pointers? What do you guys use to measure the running time of your algorithms/functions in C? Do you use tools?

I'm certainly no expert on the matter; however, I will put my 2 cents in on how I do it (I only really need an approximation). On Linux I use time to measure the running time of my program (usually I 'benchmark' algorithms in one program by themselves -- it eliminates the need for 'internal timing'). Otherwise I'd use time() or some more accurate time function like setitimer(). However, you should be aware of caching: for example, if you time an application once and then do it again, the 2nd time is probably going to be quicker because it's cached. I'm sure someone will reinforce what I've said, or shoot it out of the sky ;)

The simple question leads to a long answer :) Benchmarking and performance improvement is an iterative process; you have to go through the same steps many times. The first thing to consider is:

* What are you trying to achieve? Is the goal to improve the performance of your code, or something you can give to customers to measure the performance of their systems? The quality of the benchmark product would depend on the chosen goal.

* Micro or macro benchmark? Do you want to measure the performance of the WHOLE application, or individual functions? There are benefits in both. In the former, you probably need some sort of script or similar to "run" the application, or a suitable "typical" input file, perhaps. In the latter case, you will have to write some code/script to perform specific functions (e.g. draw the same picture hundreds of times on the screen, output 100MB to a file, or whatever you want to achieve).
It's also worth noting that some forms of micro-benchmarks can lead to "overoptimization" - a waste of time, and sometimes also leading to an overall reduction in performance, for example due to over-inlining, where some particular function gets too large and throws other bits of code out of the cache, which harms the overall performance. After all, your "customers" will never run your code function by function; they will run the overall product - and that's what they require to run sufficiently fast.

Tools: First of all, you need something to tell you what the current performance is. In its simplest form, it's just a stopwatch and timing the application from start to finish. For less dependency on the finger-speed of the benchmarker, adding some code to the application to show the time taken and perhaps "loops per second" or "bytes per second" or such will help a lot.

Once you have the "current performance data", you obviously will want to do something about it. The first step is probably to make sure you have the right switches to the compiler. A debug build with no optimization can easily be 2-3x slower than a release build with full optimization. For large apps, it's also worth considering "optimize for size" instead of "optimize for speed" - but measure on YOUR application, because there is no set rule here. Note also that once you have changed the code sufficiently, a different compiler option may turn out to be the best, compared to the first step.

During my experiments to improve things, I use Excel to record what I've done and what the result was, and to compare improvements.

The second step is to figure out where the code is spending time. Here is where a profiler comes in. There are two basic kinds:

1. Sampling profiler. Interrupts in the system are instrumented to record where in the code it was currently executing when the interrupt was taken. Given sufficient runtime of the application, you should get a pretty good idea of where the app spent its time.
Vtune, CodeAnalyst and oprofile are tools in this range, as well as "writing your own" if you have an embedded system.

2. Compiler-helped profiler. This involves the compiler adding extra code to the application, which stores data about the execution time, and how many times it's hit, for each portion of code. I'm sure MS has such a feature, but I've never used it; in gcc it's "-pg" to generate the extra code, and you use "gprof" to get the data presented in a readable form. You can of course do this yourself too, and for "only some" functions - just create an array with entries for as many functions as you think you need, and use a macro to indicate the start and end of the function call, adding up the time taken for the function. Just be aware that adding this sort of code also slows the function down quite a bit.

So we now have some sort of histogram or "top-ten" list of where the time is spent. Take a look at the top one - it's quite often a LARGE proportion of time spent in one function, so it's usually easy. Now look at that function - how can it be improved for speed? Is it doing a linear search, where keeping a list sorted and using a binary search would make it 8, 20 or 100x faster?

Sometimes you find that the code isn't spending a lot of time anywhere, just small bits in lots and lots of functions. This makes it hard. It can be a hint that perhaps you have too many really small functions. Or that it's hard to optimize this code in general - perhaps it's already pretty well optimized?

Finally, you have to "stop somewhere". For large projects, it's almost always possible to continue "forever" to improve the performance, but you have to set a goal, or set a time-limit for your improvements. I'm sure others have other ideas.

-- Mats

I would say the easiest way is to write a simple test 'stub' program which calls the algorithms/functions a set number of times and then simply time it with a clock or wrist watch or whatever.
Then change the number of times it is called from, say, 10 to 100 to 1000 or 1,000,000 or whatever, so you get a value you can easily measure, which for example takes about 10 minutes or so. Then just divide the time by the number of iterations. Obviously there may be a slight overhead in loading the program, but that will be constant for every run. So you might get results like:

10 runs + constant = D1 seconds
100 runs + constant = D2 seconds
1000 runs + constant = D3 seconds
100,000 runs + constant = D4 seconds
etc...

So it will be fairly simple to work out what the constant value is and hence the run time accurately. Anyhow, after 10 minutes the load time will be negligible anyway. Also, this will help smooth out the results, as the run time will be affected by other activities your computer has to perform, such as processing spurious network activity etc...

matsp, thanks for the overview. I've been meaning to profile my C++ code in Xcode on a mac. I know it can be done, but haven't dug into it yet. I'm again motivated. Thanks!
Todd

On i686 and later, this measures the clock cycles spent since computer boot:

Code:
uint64_t rdtsc()
{
    uint32_t lo, hi;
    /* We cannot use "=A", since this would use %rax on x86_64 */
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return (uint64_t)hi << 32 | lo;
}

Simply subtract two consecutive calls and divide by your CPU frequency. It doesn't get more accurate.
- Language Changes Created by admin on 2008-07-03. Updated: 2012-04-13, 15:17 No language changes were introduced in 2.9.1, 2.9.1-1 and 2.9.2. Here, body and cleanup can be arbitrary expressions, and handler can be any expression which evaluates to a valid exception handler (which is: PartialFunction[Throwable, T]). No language changes were introduced in Scala 2.8.1. No language changes were introduced in 2.7.3 through 2.7.7. There are new implementations of collection classes, contributed by David MacIver: IntMap, LongMap, and TreeHashMap (immutable), ArrayStack and OpenHashMap (mutable). For example This translation works if If the object exists already, only the Self types can now be introduced without defining an alias name for It is now possible to define existential types using the new keyword It is now possible to define lazy value declarations using the new modifier Type parameters and abstract type members can now also abstract over type constructors. This allows a more precise This definition of In the code above, the field The syntax of Thus a As a special case, a partially unapplied method is now designated The new notation will displace the special syntax forms The The syntax for tuples has been changed from Analogously, for any sequence of expressions or patterns The primary constructor of a class can now be marked The support for attributes has been extended and its syntax changed . Attributes are now called annotations. The syntax has been changed to follow Java's conventions, e.g. Annotations are now serialized so that they can be read by compile-time or run-time tools. Class It is now possible to give an explicit alias name and/or type for the self reference the name It is now possible to define patterns independently of case classes, using In the example, In the second-to-last line, A new lightweight syntax for tuples has been introduced . 
For any sequence of types Analogously, for any sequence of expressions or patterns A new standard attribute A simplified syntax for functions returning Protected members can now have a visibility qualifier , e.g. where The lookup method for implicit definitions has been generalized . When searching for an implicit definition matching a type (The second clause is more general than before). Here, a class is associated with a type one would now look in the companion modules (aka static parts) of A typed pattern match with a singleton type This will match the second case and hence will print There is a new syntax for class literals : For any class type Pattern matching expressions The Variants such as is no longer supported. A However, assuming where Regular Expression Patterns The only form of regular expression pattern that is currently supported is a sequence pattern, which might end in a sequence wildcard That is, selftypes are now indicated by the new Note the definition where Most likely, the programmer forgot to supply an empty argument list As a result, the address of a closure would be printed instead of the value of Scala version 2.0 will apply a conversion from partially applied method to function value only if the expected type of the expression is indeed a function type. For instance, the conversion would not be applied in the code above because the expected type of The partial application of On the other hand, Scala version 2.0 now automatically applies methods with empty parameter lists to Scala version 2.0 also relaxes the rules of overriding with respect to empty parameter lists. 
The revised definition of matching members makes it now possible to override a method with an explicit, but empty, parameter list. Previously this definition would have been rejected. A class parameter may now be prefixed by val or var.

No language changes were introduced in 2.9.1, 2.9.1-1 and 2.9.2.

Changes in Version 2.9.0 (12-May-2011)

No language changes were introduced in Scala 2.8.1.

Changes in Version 2.8.0 (14-Jul-2010)

Scala 2.8.0 is a significantly innovative release, which contains a large number of fixes and introduces many new features.

No language changes were introduced in 2.7.3 through 2.7.7.

Changes in Version 2.7.2 (10-Nov-2008)

Changes in Version 2.7.1 (09-Apr-2008)

Change in Scoping Rules for Wildcard Placeholders in Types

A wildcard in a type now binds to the closest enclosing type application. For example, List[List[_]] is now equivalent to the existential type

  List[List[t] forSome { type t }]

In version 2.7.0, the type expanded instead to

  List[List[t]] forSome { type t }

The new convention corresponds exactly to the way wildcards in Java are interpreted.

No Contractiveness Requirement for Implicits

The contractiveness requirement for implicit method definitions has been dropped. Instead, it is checked for each implicit expansion individually that the expansion does not result in a cycle or a tree of infinitely growing types.

Existential Types

It is now possible to define existential types using the new keyword forSome.

Deprecated Features

- The old-style syntax of for-comprehensions has been deprecated.
- The requires clause has been deprecated; use { self: T => ... } instead.
- &f for unapplied methods has been deprecated; use f _ instead.

Type Constructor Polymorphism

It is now possible to abstract over type constructors.

Early Object Initialization

It is now possible to initialize some fields of an object before any parent constructors are called. This is particularly useful for traits, which do not have normal constructor parameters. For example:

trait Greeting {
  val name: String
  val msg = "How are you, " + name
}
class C extends { val name = "Bob" } with Greeting {
  println(msg)
}

In the code above, the field name is initialized before the constructor of Greeting is called. Therefore, field msg in class Greeting is properly initialized to "How are you, Bob".

Case Clauses as Function Values

A sequence of case clauses can now be used where a function value is expected. For example:

def scalarProduct(xs: Array[Double], ys: Array[Double]) =
  (0.0 /: (xs zip ys)) {
    case (a, (b, c)) => a + b * c
  }

Annotations

Instances of an annotation class inheriting from trait scala.ClassfileAnnotation will be stored in the generated class files. Instances of an annotation class inheriting from trait scala.StaticAnnotation will be visible to the Scala type-checker in every compilation unit where the annotated symbol is accessed.

Assignment Operators

It is now possible to combine operators with assignments. For example:

var x: Int = 0
x += 1

Extractors

An unapply method returns Some for a match that succeeds and None for a match that fails. Pattern variables are returned as the elements of Some. If there are several variables, they are grouped in a tuple.

Procedures

A simplified syntax for functions returning Unit has been introduced.

Type Patterns

The compiler now issues an "unchecked" warning at places where type erasure might compromise type-safety.

Standard Types

The recommended names for the two bottom classes in Scala's type hierarchy have changed as follows:

  All    ==> Nothing
  AllRef ==> Null

The old names are still available as type aliases.

Visibility Qualifier for protected

Protected members can now have a visibility qualifier, e.g. protected[<qualifier>]. In particular, one can now simulate package-protected access as in Java by writing

  protected[P] def X ...

where P is the enclosing package.

Tightened Pattern Match

A typed pattern match with a singleton type p.type now tests whether the selector value is reference-equal to p.

Multi-Line String Literals

It is now possible to write multi-line string literals enclosed in triple quotes. Example:

  """this is a
     multi-line
     string literal"""

No escape substitutions except for unicode escapes are performed in such string literals.

Class Literals

There is a new syntax for class literals: for any class type C, classOf[C] designates the run-time representation of C.

Scala in its second version is different in some details from the first version of the language. There have been several additions, and some old idioms are no longer supported. The new mixin model is explained in more detail in the Scala Language Specification.
http://www.scala-lang.org/old/index.html%3Fq=node%252F43.html
#include <InMemoryPool.h>

Inheritance diagram for InMemoryPool.

We might also want a CompactInMemoryPool, optimized for small size. More importantly, we might want to support CLUSTERING, which in this case means knowing some things about the prevalent schemas, which will help with performance. This is also true of the disk version. Basically, this is a set of sets of properties which should be clustered -- if something has one of these properties, it's very likely to have these others. Property-Cluster-Advice. One might also want this to advise of joins -- subrelations... But maybe this is another class (or two, if you count disk). This should undo all the performance loss of reification.

Definition at line 34 of file InMemoryPool.h.
http://www.w3.org/2001/06/blindfold/api/classInMemoryPool.html
%%writefile script.py
x = 10
y = 20
z = x+y
print('z is: %s' % z)

Writing script.py

%run script
z is: 30

x
10

The %gui magic enables the integration of GUI event loops with the interactive execution loop, allowing you to run GUI code without blocking IPython. Consider for example the execution of Qt-based code. Once we enable the Qt gui support:

%gui qt

import sys
from PyQt4 import QtGui, QtCore

class SimpleWindow(QtGui.QWidget):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        self.setGeometry(300, 300, 200, 80)
        self.setWindowTitle('Hello World')
        quit = QtGui.QPushButton('Close', self)
        quit.setGeometry(10, 10, 60, 35)
        self.connect(quit, QtCore.SIGNAL('clicked()'),
                     self, QtCore.SLOT('close()'))

And now we can instantiate it:

app = QtCore.QCoreApplication.instance()
if app is None:
    app = QtGui.QApplication([])

sw = SimpleWindow()
sw.show()

from IPython.lib.guisupport import start_event_loop_qt4
start_event_loop_qt4(app)

But IPython still remains responsive:

10+2
12

The %gui magic can be similarly used to control Wx, Tk, glut and pyglet applications, as can be seen in our examples.

%%writefile simple-embed.py
# This shows how to use the new top-level embed function. It is a simpler
# API that manages the creation of the embedded shell.
from IPython import embed

a = 10
b = 20
embed(header='First time', banner1='')

c = 30
d = 40
embed(header='The second time')

Writing simple-embed.py

The example in kernel-embedding shows how to embed a full kernel into an application and how to connect to this kernel from an external process.

The %logstart magic lets you log a terminal session with various degrees of control, and the %notebook one will convert an interactive console session into a notebook with all input cells already created for you (but no output).
https://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/IPython%20Kernel/Terminal%20Usage.ipynb
Hello, I've got three files: main.cpp, test.h (the header file), and test.cpp (the implementation of test.h). Both main.cpp and test.cpp #include test.h, but I don't seem to be able to get test.h to use the implementation in test.cpp (leading to "undefined reference" errors when attempting to compile). Searching Google, I've found that g++ should automatically link test.cpp to test.h, but that does not appear to be the case here... #Including test.cpp (in main.cpp) instead of test.h worked, but Google tells me this is not an especially good idea.

The contents of main.cpp:

#include <iostream>
#include "test.h"

using namespace std;

int main()
{
    Test MyClass;
    MyClass.say_hi();
    return 0;
}

The contents of test.h:

#ifndef TEST_H
#define TEST_H

class Test
{
public:
    void say_hi();
};

#endif

The contents of test.cpp:

#ifndef TEST_CPP
#define TEST_CPP

#include <iostream>
using namespace std;

void Test::say_hi()
{
    cout << "Hello, world\n";
}

#endif

G++'s output:

$ g++ -Wall -o test main.cpp
/tmp/cc9Lh1zk.o: In function `main': main.cpp:(.text+0x23): undefined reference to `Test::say_hi()'
collect2: ld returned 1 exit status

Thanks for your help...
https://www.daniweb.com/programming/software-development/threads/48678/header-files
Analyzing cross-service requests with Apache Beam

After doing a survey of current data technologies, I wanted to write a couple simple programs in each to get a better feel of how they work. I decided to start with Apache Beam as it aims to allow you to write programs to run on many of the other platforms I hope to look into, which will hopefully allow me to reuse a single program for evaluating a number of different engines.

As a toy project, I've picked analyzing cross-service requests in a hypothetical microservices architecture. I chose this problem for a couple of reasons, the first of which is that it doesn't fit particularly well with fixed-size windows, and is more accurately modeled using the session windowing strategy described in the Google Dataflow paper. It's also interesting because many companies already thread a request_id across all their requests, and it seemed like it would be an interesting validation of Apache Beam's approach if it can successfully provide real-time request tracing analytics.

The set up

Probably the easiest way to follow along is to install Docker, checkout the Git repository containing this code, and then follow these commands to get into a terminal where you can run the commands:

git clone
cd learning-beam
docker build -t "learning-beam" .
docker run -it learning-beam /bin/bash
cd /tmp
python traces.py --input logs/output.* --output output
cat output*

You can also do this with just a virtualenv, but debug at your own risk!

git clone
cd learning-beam
virtualenv env
. ./env/bin/activate
pip install apache_beam
python traces.py --input logs/output.* --output output
cat output*

Submit a pull request or drop me a note if you run into issues!

Beyond configuring the environment, the other particulars relate to the synthetic data we've generated for this analysis. We're generating log entries, represented in JSON, where each log entry is a span in our cross-service requests.
{
  "trace": "ebcf",
  "time": 0,
  "destination": {
    "service": "cache"
  }
}

As a simplifying assumption, we're using a logical clock that starts at 0 when our hypothetical infrastructure begins sending requests, and otherwise behaves as a normal clock (e.g. time 10 represents 10 seconds having passed since the infrastructure started). You can use the generate_logs.py tool to create sample logs, but it'll be easier to rely on the pregenerated contents of the logs directory, which are grouped into five second buckets.

Combining spans into a trace

The goal of this job is to convert a series of log lines in the above format into complete traces in the format of:

[ebcf] 6 spans: cache (t: 0) -> app (t: 1) -> posts (t: 2) -> ...
[b6a1] 5 spans: search (t: 0) -> db (t: 1) -> queue (t: 2) -> ...
[325a] 2 spans: cache (t: 0) -> frontend (t: 1)

The full code is up on Github, so I'll focus on two interesting parts: defining the pipeline, and writing the DoFn function to compile the spans into the span summaries. Stripping out configuration, the total pipeline definition is:

with beam.Pipeline(options=opts) as p:
    lines = p | ReadFromText(args.input, coder=JsonCoder())
    output = (lines
              | beam.Map(lambda x: (x['trace'], x))
              | beam.GroupByKey()
              | beam.ParDo(AssembleTrace())
              )
    output | WriteToText(args.output)

This shows us:

- creating a beam.Pipeline,
- reading in our logs (in their JSON format),
- transforming them into (key, value) pairs using the trace key (the request_id in this example),
- grouping the data by keys, in this case trace,
- using the AssembleTrace subclass of beam.DoFn, which we'll cover next, to compile the grouped spans into a trace,
- outputting the resulting lines to text.

(The code in Github shows us windowing the data as well, but this example doesn't make use of that functionality, so it's strictly a no-op.)

The only other important code in this example is the AssembleTrace class, so let's look there next:

class AssembleTrace(beam.DoFn):
    "DoFn for processing assembled traces."

    def fmt_span(self, span):
        "Format a span for joining."
        vals = (span['destination']['service'], span['time'])
        return "%s (t: %s)" % vals

    def process(self, element, window=beam.DoFn.WindowParam):
        "Take traces grouped by trace id and analyze the trace."
        trace = element[0]
        spans = list(element[1])
        spans.sort(key=lambda x: x['time'])
        services = " -> ".join([self.fmt_span(span) for span in spans])
        return ["[%s] %s spans: %s" % (trace, len(spans), services)]

The process method is the key, unpacking the key and grouped values, sorting the values, and then joining them into a string representation of a trace. That's all there is to it: now you have a simple Beam program that constructs request traces from logs containing the spans.

This is clearly a very simple example, and it's not constructing the spans in real-time, but altogether I was pretty impressed with how trivial it felt to write a somewhat useful program (including how few concepts I needed to understand to do it). Next up, I hope to get this running and compiling spans on top of a streaming runner (probably Apache Flink!).

Mostly I worked off these resources:

- The Apache Beam Programming Guide, which is significantly more up-to-date than the WordCount Example Walkthrough, to the extent that the walkthrough will often tell you that things don't exist in the Python SDK while the SDK documentation shows they do.
- For reading in the JSON log lines, I followed this example, and it worked as advertised.
- The Windowed wordcount example for windowing, and an example of writing a beam.DoFn subclass.

Overall, I was pretty impressed with how easy Beam was to work with once I got over my annoyance with the overriding of the | operator! I later did a Spark version of this job for a quick comparison point, and it was even shorter, and about equally simple:

import json
from pyspark.sql import SparkSession

def fmt_span(span):
    "Format a span for joining."
    vals = (span['destination']['service'], span['time'])
    return "%s (t: %s)" % vals

def build_trace(kv):
    trace, spans = kv
    spans = list(spans)
    spans.sort(key=lambda x: x['time'])
    services = " -> ".join([fmt_span(span) for span in spans])
    return "[%s] %s spans: %s" % (trace, len(spans), services)

def run():
    log_dir = 'logs/*'
    spark = SparkSession.builder.appName("RequestSessions").getOrCreate()
    lines = spark.read.json(log_dir).cache().rdd
    traces = lines.map(lambda x: (x['trace'], x)) \
                  .groupByKey() \
                  .map(build_trace)
    output = traces.collect()
    spark.stop()

if __name__ == '__main__':
    run()

I found it a bit simpler to avoid the parDo terminology from Beam, which feels like unnecessary conceptual load, but overall pretty similar.
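The trace-assembly logic itself doesn't actually depend on Beam or Spark at all. A plain-Python sketch of the same grouping, using the field names from the jobs above (the helper name assemble_traces is mine, not from the post), makes the core logic easy to test in isolation:

```python
from collections import defaultdict

def fmt_span(span):
    """Format one span as 'service (t: time)', matching the jobs above."""
    return "%s (t: %s)" % (span["destination"]["service"], span["time"])

def assemble_traces(spans):
    """Group spans by trace id and render each trace as one summary line."""
    by_trace = defaultdict(list)
    for span in spans:
        by_trace[span["trace"]].append(span)
    lines = []
    for trace, group in by_trace.items():
        group.sort(key=lambda s: s["time"])  # order spans by logical clock
        services = " -> ".join(fmt_span(s) for s in group)
        lines.append("[%s] %s spans: %s" % (trace, len(group), services))
    return lines

spans = [
    {"trace": "325a", "time": 1, "destination": {"service": "frontend"}},
    {"trace": "325a", "time": 0, "destination": {"service": "cache"}},
]
print(assemble_traces(spans)[0])
# → [325a] 2 spans: cache (t: 0) -> frontend (t: 1)
```

What Beam and Spark add on top of this is the distribution of the shuffle (GroupByKey / groupByKey) across machines; the per-group logic stays this small.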
https://lethain.com/analyzing-cross-service-requests-apache-beam/
JustLinux Forums > Programming/Scripts > Basic fork()/processes question

dogn00dles 07-30-2002, 10:50 PM

Here is the code in question:

#include <iostream>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t child_pid;
    cout<<"the main process is:"<<getpid();
    child_pd = fork();
    if(child_pid != 0){
        cout<<"this is the parent process id"<<getpid();
        cout<<"this is the child's"<<child_pid();
    else
        cout<<"this is the child process"<<getpid();
    return 0;
}

So it prints the main process pid, forks it, and says: if the child pid is zero, print both the child's and the parent's. So if it's running as the child, that prints, because the child returns "0" (and the parent returns the child pid). But I have some questions. For one, is pid_t a data type? And when I compile it, it prints everything, which confused me. Does anyone know why this is?

thanks,
-dogn00dles

The Kooman 07-30-2002, 11:52 PM

You're seeing all the messages because both the programs are running simultaneously. If you give a "wait()" in the parent to wait for the child to be over and then print your messages, then you'd see something "sensible" (not that it's not sensible now) ;)!

BTW, I think there's a typo in your code - it should be "child_pid" when you're "cout"ing it and not "child_pid()", since there's no function called "child_pid()"!

/* I prefer to write in C (as against C++) ;) */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t child_pid;

    printf("the main process is:%d\n", getpid());
    if ((child_pid = fork()) != 0) {
        printf("this is the parent process id:%d\n", getpid());
        wait();
        printf("child process is over now. This was the child's PID:%d\n", child_pid);
    } else {
        printf("this is the child process:%d\n", getpid());
    }
    return 0;
}

"man wait" for more on the "wait()" system call.
Read up Richard Stevens for even more on fork()s and wait()s ;)

As for pid_t - it's "typedef"ed to "__kernel_pid_t" in /usr/include/linux/types.h. "__kernel_pid_t" is "typedef"ed to "int" in /usr/include/asm/posix_types.h

HTH

dogn00dles 07-31-2002, 12:02 AM

Thanks, that finally makes sense now. From what I've read (I guess you could say I "know" C++, but haven't really delved into its more advanced features) C++ is pretty much C with classes (wasn't that its original name?). Is there a major reason to switch, or just personal preference?

Thanks,
dogn00des

The Kooman 07-31-2002, 12:18 AM

Well, I did work on C++ for a while and what I found out was that debugging C++ code was hell. I was maintaining some huge legacy software with millions of lines of code. To make sense of anything we needed a class hierarchy tool which was a pain to find. Then, on top of that, you gotta admit that there are some, ummm, not so smart programmers in the industry. So things do get horribly obfuscated :mad:.

The switch back to C was because my new job required it. Debugging C code is much easier, even if it's dirty ;)!

And last but not least (and I know I can start a war here ;)), in spite of all the great things people say about OOP, my experience has shown that people rarely re-use code ... esp. classes!! People spend ages designing and writing classes with a lot of generic, unused stuff (member functions, etc) in it. But as soon as somebody new takes over the project, the same cycle starts all over again!!! You might argue that the same thing happens in C, but then, in C, I don't really see too many people putting in really generic things into a module just 'coz it should look like a "good" class!!! So there's less junk resulting in a smaller footprint.

I suppose you can make out - I'm not too great a fan of OOP ;). Just plain encapsulation is enough for most of my needs till date. Have never felt the need for inheritance. ...
And "traditional" imperative languages do offer decent encapsulation!

*Phew*, I can rant :D!!!

Kinjana 08-01-2002, 02:31 AM

It prints both because the fork call creates a duplicate process. The new process consists of a copy of the address space of the original process. This allows the parent to communicate easily with its child. BOTH continue execution at the instruction after the fork system call, with one difference: the return code for the fork call is zero for the child, and the non-zero process identifier of the child is returned to the parent. Usually an execlp(...) call is used after a fork by one of the two processes to replace that process's memory space with a new program.

Modifying your code slightly...

child = fork();
if (child == 0) {
    /* child process */
    execlp("/bin/ls", "ls", NULL);
} else {
    wait(NULL);
    printf("child complete\n");
}

The major thing to recognize is that you initially get two different processes running a copy of the same program. This is an excellent way for two processes to communicate and then go their separate ways --- there is no requirement that the parent wait for the child to finish executing. They both continue.

Kinj
http://justlinux.com/forum/archive/index.php/t-57606.html
example:

class Foo
  def initialize(var)
    @instanceVariable = var
  end
end

f = Foo.new("foo")
f.   -----------> CTRL+SPACE

I suppose that after parsing, the editor is aware that "f" is an instance of class Foo. Thus code completion could provide methods of class Foo, and of classes in the inheritance tree up to Object (something like IRB does).

The situation for IRB and the editor is quite different; IRB is working on a live executing program, so it's easy for it to know the actual type of f - and the available methods on it. In the editor we need to rely on static analysis, which is possible in some cases (such as the quoted example) and not possible in others.

The synopsis of this bug is "Provide Some Basic Code Completion for Methods", and that is there today. If you try to get completion on f. today, you get to see ALL methods Ruby knows about (unless the left hand side expression is known, such as "self" or "super"). If you try to get method completion without a dot operator, you get to see inherited methods. I will work on dataflow to make this better for dotted operators, but that falls outside the "Basic" category of code completion, so I'm closing this as fixed.

Reassigning this issue to newly created 'ruby' component.

Changing target milestone of all resolved Ruby issues from TBD to 6.0 Beta 1 build.
https://netbeans.org/bugzilla/show_bug.cgi?id=91278
Scala FAQ: Can you share some examples of using tuples in Scala?

Getting started with Scala tuples:

val stuff = (42, "fish")

This creates a specific instance of a tuple called a Tuple2, which we can demonstrate in the REPL:

scala> val stuff = (42, "fish")
stuff: (Int, java.lang.String) = (42,fish)

scala> stuff.getClass
res0: java.lang.Class[_ <: (Int, java.lang.String)] = class scala.Tuple2

A tuple isn't actually a collection; it's a series of classes named Tuple2, Tuple3, etc., through Tuple22. You don't have to worry about that detail, other than knowing that you can have anywhere from two to twenty-two items in a tuple. (And in my opinion, if you have twenty-two miscellaneous items in a bag, you should probably re-think your design.)

Accessing tuple elements

You can access tuple elements using an underscore syntax. The first element is accessed with _1, the second element with _2, and so on, like this:

scala> val things = ("a", 1, 3.5)
things: (java.lang.String, Int, Double) = (a,1,3.5)

scala> println(things._1)
a

scala> println(things._2)
1

scala> println(things._3)
3.5

Use variable names to access tuple elements

When referring to a Scala tuple you can also assign names to the elements in the tuple. I like to do this when returning miscellaneous elements from a method.

To demonstrate the syntax, let's create a very simple method that returns a tuple:

def getUserInfo = ("Al", 42, 200.0)

Now we can call that method, and assign the tuple results directly to variables, like this:

val(name, age, weight) = getUserInfo

Here's what this looks like in the REPL:

scala> def getUserInfo = ("Al", 42, 200.0)
getUserInfo: (java.lang.String, Int, Double)

scala> val(name, age, weight) = getUserInfo
name: java.lang.String = Al
age: Int = 42
weight: Double = 200.0

It's shown in the REPL results, but we'll further confirm that we can indeed access the values by variable name:

scala> name
res4: java.lang.String = Al

scala> age
res5: Int = 42

scala> weight
res6: Double = 200.0

That's pretty nice. In a cool, related feature, if you only want to access some of the elements, you can ignore the others by using an underscore placeholder for the elements you want to ignore. Imagine you want to ignore the weight in our example:

scala> val(name, age, _) = getUserInfo
name: java.lang.String = Al
age: Int = 42

Or suppose you want to ignore the age and weight:

scala> val(name, _, _) = getUserInfo
name: java.lang.String = Al

Again, that's good stuff.

Iterating over a Scala tuple

As mentioned, a tuple is not a collection; it doesn't descend from any of the collection traits or classes. However, you can treat it a little bit like a collection by using its productIterator method. Here's how you can iterate over the elements in a tuple:

scala> val t = ("Al", 42, 200.0)
t: (java.lang.String, Int, Double) = (Al,42,200.0)

scala> t.productIterator.foreach(println)
Al
42
200.0

The tuple toString method

The tuple toString method gives you a nice representation of a tuple:

scala> t.toString
res9: java.lang.String = (Al,42,200.0)

scala> println(t.toString)
(Al,42,200.0)

Creating a tuple with ->

In another cool feature, you can create a tuple using this syntax:

1 -> "a"

This creates a Tuple2, which we can demonstrate in the REPL:

scala> 1 -> "a"
res1: (Int, java.lang.String) = (1,a)

scala> res1.getClass
res2: java.lang.Class[_ <: (Int, java.lang.String)] = class scala.Tuple2

You'll see this syntax a lot when creating maps:

scala> val map = Map(1->"a", 2->"b")
map: scala.collection.immutable.Map[Int,java.lang.String] = Map(1 -> a, 2 -> b)

Summary: Scala tuples

If you needed information on how to use a Scala tuple, I hope these examples have been helpful. Here are a few links to the tuple classes mentioned:
https://alvinalexander.com/scala/scala-tuple-examples-syntax/
Sorry for joining this discussion so late. Like before, I would just like to contribute how we approached the problem at Jangaroo. Some time ago, we created our own declarative, XML-based language, targeted at creating Ext JS UIs, thus called EXML. EXML is very similar to MXML. The main difference is that, to have IDE support and validation, EXML is XML-Schema-based. For every module containing EXML files, an XSD is generated, describing all available classes and their properties. EXML is translated to ActionScript, which is then compiled to JavaScript, so we use two chained compilers for EXML->AS and AS->JS.

After discovering that MXML is actually not as tied to Flex components as I used to think (I stumbled across this blog), I experimented with using MXML to define Ext JS UIs. I already have a prototype of MXML support for Jangaroo on a github branch, which uses a different approach. Things became quite complicated with EXML when we wanted to make EXML->AS generation more intelligent. The EXML->AS compiler needed to inspect ActionScript classes, but these again could be referring to ActionScript classes generated from EXML. So we have a circular dependency here, which was complex to deal with.

Thus, for my MXML prototype, I chose a different approach, namely to integrate MXML->JS compilation directly into the Jangaroo compiler, so that when the compilation process needs class acme.Foo, it looks for both acme/Foo.as and acme/Foo.mxml. If an MXML file is found, internally, the compiler still generates ActionScript code, parses it into an AST, and then hands it on to its standard JS code generator. While this may not be the most efficient solution, it provides the best reuse of software components and works great!

There is one important aspect to consider when deciding which route to take. If you, like Bernd Paradies, see JavaScript's role in this process as a machine language, it is completely valid to generate JS code from ABC.
But this is not the viewpoint we take for Jangaroo. We chose ActionScript, not Dart, TypeScript or Haxe, as the high-level language to compile to JavaScript, because it is so very similar to JavaScript. In fact, it is a superset of JavaScript, so that you can copy-paste JavaScript code into your ActionScript program and it works! When you look at Jangaroo-generated JavaScript code, it closely resembles the original ActionScript source code. We optimized the compiler as well as the runtime to give the impression that the JS code is actually the AS code that you, the developer, wrote. Every source file results in a separate JS file, which is also loaded separately in "debug mode". Even the line numbers are kept precisely. This allows for debugging directly in the browser without the need for any additional / new debugger tools. Of course, this approach would not be possible at all when generating JS code from ABC instead of from AS/AST.

We would like to provide something similar for MXML, too. So the ideal solution would be a mixture of the approaches described in this thread: combine Alex's datastructures and the AST->JS approach. This is also very similar to how Ext JS UIs are specified using nested JS object literals. The idea would be to generate AS code from MXML that contains both the datastructures and the code fragments (<fx:Script>), keeping the line numbers (if possible). Then compile the resulting AS to JS, using the AST-based method. The format could look like so (pseudo-code):

MyPanel.mxml:

01 <s:Panel xmlns:s="..."
02          title="hello world">
03   <s:Button label="click me!">
04     <s:click>
05       trace('clicked ' + event.source.label);
06     </s:click>
07   </s:Button>
08 </s:Panel>

could become something like

MyPanel.as:

01 package ... { import ...;
   public class MyPanel extends Panel {
     public function MyPanel() {
       MxmlUtil.apply(this, {
02       title: "hello world",
         children: [
03         { type: Button, label: "click me!",
04           click: function(event:MouseEvent) {
05             trace('clicked ' + event.source.label);
06           }
07         }
08       ]});}}}

When using the Jangaroo approach, this could be compiled to JavaScript, keeping the line numbers for code fragments. So if you set a JavaScript breakpoint in line 05, this would exactly correspond to line 5 of your MXML source code!

There is just one game changer that would convince me that this effort is not necessary, and that is JavaScript source map support in all major browsers. Then, the generated JavaScript code could look as ugly as you like, but the compiler would still have to provide the mapping to the original source code. This should be possible using ABC with debug information, shouldn't it?

What do you think?
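To make the MxmlUtil.apply pseudo-code above concrete, here is a tiny JavaScript sketch of what such a helper could do at runtime: copy plain-object configs onto a target, and instantiate entries of a children array from their type field. This is purely illustrative; apply, Panel, and Button are invented names, not Jangaroo's actual API.

```javascript
// Hypothetical config-application helper in the spirit of the
// MxmlUtil.apply pseudo-code; none of these names are Jangaroo's real API.
function apply(target, config) {
  for (const [key, value] of Object.entries(config)) {
    if (key === "children") {
      // Each child config names its class via `type`; instantiate it
      // and apply the remaining properties recursively.
      target.children = value.map(child => {
        const { type: Type, ...props } = child;
        return apply(new Type(), props);
      });
    } else {
      target[key] = value;
    }
  }
  return target;
}

class Panel { }
class Button { }

const panel = apply(new Panel(), {
  title: "hello world",
  children: [{ type: Button, label: "click me!" }],
});
console.log(panel.title);             // → hello world
console.log(panel.children[0].label); // → click me!
```

The interesting property of this style is the one the post argues for: because the config literal mirrors the MXML structure line by line, a compiler that preserves line numbers can map a breakpoint in the generated JS straight back to the MXML source.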
http://mail-archives.apache.org/mod_mbox/flex-dev/201212.mbox/%3CCAD8jDGJKbdpYWRdaVv8GfHfdvVJ0mB88z6+=K=A7mMaui4jH1w@mail.gmail.com%3E
Jul 01, 2011 08:42 PM | Vivi1985

Hello again,

I'm testing a database with the basics of the MVC Music Store tutorial... I created Income.cs (a model class):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace ProbandoBD.Models
{
    public class Income
    {
        public int idIncome { get; set; }
        public float countIncome { get; set; }
        public DateTime dateIncome { get; set; }
    }
}

Then I created DBCalEntities.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data.Entity;

namespace ProbandoBD.Models
{
    public class DBCalEntities
    {
        public DbSet<Income> Incomes { get; set; }
    }
}

I have a database DBCal.mdf with a table Income with the fields:

KEY int idIncome
float countIncome
DateTime dateIncome

And my connection string is:

<connectionStrings>
  <add name="DBCalEntities"
       connectionString="data source=.\SQLEXPRESS; Integrated Security=SSPI; AttachDBFilename=|DataDirectory|\DBCal.mdf; User Instance=true"
       providerName="System.Data.SqlClient"/>
</connectionStrings>

The database is working and the connection string, I think, is OK... So, when I go to my controller:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using ProbandoBD.Models;

namespace ProbandoBD.Controllers
{
    public class CalendarBDController : Controller
    {
        DBCalEntities dbCal = new DBCalEntities();

        //
        // GET: /CalendarBD/

        public ActionResult Index()
        {
            //{
            //    new Income {idIncome = 1, dateIncome = new DateTime(2011,6,30), countIncome = 70},
            //    new Income {idIncome = 2, dateIncome = new DateTime(2011,7,3), countIncome = 60},
            //    new Income {idIncome = 3, dateIncome = new DateTime(2011,7,13), countIncome = 50}
            //};

            var incomes = dbCal.Incomes.ToList();
            return View(incomes);
        }
    }
}

and run the project, that exception occurs and points to this line:

var incomes = dbCal.Incomes.ToList();

I don't know what is going on!!! Please help!!!!

Jul 01, 2011 08:45 PM | CodeHobo

DBCalEntities needs to inherit from the DbContext class:

public class DBCalEntities : DbContext
{
    public DbSet<Income> Incomes { get; set; }
}

Edit: Also check to see if the columns are nullable in the database; if they are, then you need to use nullable types in your model:

public class Income
{
    public int idIncome { get; set; }
    public float? countIncome { get; set; }
    public DateTime? dateIncome { get; set; }
}

Try adding the ? to float and DateTime to see if that fixes it. But if those database columns are null for a particular row, you will get an error unless your model has a nullable type.

Jul 01, 2011 08:58 PM | Vivi1985

Full of me!! I corrected it! Thanks! But now there's another error... (one or more validation errors were detected during the model generation)

ModelValidationException was unhandled by user code: System.Data.Edm.EdmEntityType: Entity Type 'Income' has no key defined...

And it points to the same line of code... But I did define that KEY in the database table... it's idIncome... :(

Jul 01, 2011 09:04 PM | CodeHobo

EF Code First works on convention and expects your primary key to be "somethingId"; in your case it's expecting:

public int IncomeId { get; set; }

However, since your field is idIncome, it's not able to realize that it's the primary key column. In this case you have to specifically tell it that it's the id, like so:

public class Income
{
    [Key]
    public int idIncome { get; set; }
    public float? countIncome { get; set; }
    public DateTime? dateIncome { get; set; }
}

Jul 01, 2011 09:29 PM | CodeHobo

What are the names of the tables you have? Since your DbSet is defined as (DbSet<Income> Incomes), EF is expecting a table called Incomes. If it is not called Incomes, you will have to manually map your table to the entity:

public class DBCalEntities : DbContext
{
    public DbSet<Income> Incomes { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Income>().MapSingleType().ToTable("Income"); // or whatever table name you have
    }
}

Jul 01, 2011 09:47 PM | Vivi1985

How many problems in a single line!!!!! countIncome is a float in my class and a money in my database table... It gives this error:

The 'countIncome' property on 'Income' could not be set to a 'Decimal' value. You must set this property to a non-null value of type 'Single'

I changed the attribute to Single and still get that error...

Jul 01, 2011 09:56 PM | CodeHobo

Try:

public class Income
{
    [Key]
    public int idIncome { get; set; }
    public decimal countIncome { get; set; }
    public DateTime? dateIncome { get; set; }
}

8 replies. Last post Jul 01, 2011 09:56 PM by CodeHobo.
http://forums.asp.net/t/1695951.aspx
Paper: 219 (C++ & Its Applicatio to Numerical Analysis) Time Allowed: Hrs. Note Attempt Five Question including compulsory Q.No.1. selecting at least One question from each Section Q.No.1 moveto function changes the current position to (x, y). Means if you want to move a point from the current position to a new position then you can use this function. execution are terms that describe the process of running a computer software program or command. The program in running is also known as execution.. P.No.30 iv) If x=20, y=15, z=5 then write down the output of the following expression ( x<y) && (z==5) Answer: False Section-I Data: Raw facts and figure is known as data e.g 25, ali etc File: A file is a collection of data stored in one unit, identified by a filename. It can be a document, picture, audio or video stream, data library, application, or other collection of data. The following is a brief description of each file type. A file may be a source program, pic,pdf etc. Varible: P.No.54 Q.No.3 a) Define arithmetic expression and wite down the order of precedence of operatoions. b) Define constants and discuss three different types of constants in C++. P.No. 56 Q.No.4 a) Discuss the term “cout-output stream” with examples also discuss its general syntax. P.No.92 b) Write a program in C++ to read the temperature in Fahrenheit, convert the temperature to Celsius degrees #include <iostream.h> #include<conio.h> void main() { float c; float f; b) Write a program to input two integers values and find out whether these numbers are equal or different? 
#include <iostream.h>
#include <conio.h>
void main()
{
    int a, b;
    cout << "Enter first Number: ";
    cin >> a;
    cout << "Enter 2nd Number: ";
    cin >> b;
    if (a == b)
        cout << "Numbers are equal";
    else
        cout << "Numbers are Not equal";
    getch();
}

Section-III

#include <iostream.h>
#include <conio.h>
void main()
{
    int n;
    cout << "1 to 10 Natural Number series in descending order" << endl;
    for (n = 10; n >= 1; n--)
        cout << n << "\t";
    getch();
}

Q.No.7 a) Write a program in C/C++ to find out and print the maximum value in the array: { 15, 11, 2, 6, 13, 16, 12, 4 }

#include <iostream.h>
#include <conio.h>
void main()
{
    int n[] = { 15, 11, 2, 6, 13, 16, 12, 4 };
    int max = 0;
    for (int a = 0; a <= 7; a++)
    {
        if (max < n[a])
            max = n[a];
    }
    cout << "The maximum value in the array: " << max;
    getch();
}

b) Write a program to input data into two different arrays and then to add the two arrays and store the result in a third array.

#include <iostream.h>
#include <conio.h>
void main()
{
    int n, a;
    int array1[50], array2[50], array3[50]; // fixed upper bound; a plain array cannot have a runtime size here
    cout << "Enter the length of array (max 50): ";
    cin >> n;
    cout << "Enter elements of 1st array: ";
    for (a = 0; a < n; a++)
        cin >> array1[a];
    cout << "Enter elements of 2nd array: ";
    for (a = 0; a < n; a++)
        cin >> array2[a];
    for (a = 0; a < n; a++)
        array3[a] = array1[a] + array2[a];
    cout << "Sum array: ";
    for (a = 0; a < n; a++)
        cout << array3[a] << "\t";
    getch();
}

Q.No.8 a) What type of information do the following statements provide to the compiler:

#include <stdio.h>
#include <graphics.h>
#include <conio.h>
int main()
{
    int gd = DETECT, gm;
    int x, y, radius = 80;
    initgraph(&gd, &gm, "C:\\TC\\BGI");
    /* Initialize center of circle with center of screen */
    x = getmaxx() / 2;
    y = getmaxy() / 2;
    circle(x, y, radius);
    getch();
    closegraph();
    return 0;
}

Help for the above program:
In this program, we draw a circle on screen with its centre at the middle of the screen and a radius of 80 pixels. We use the circle function of the graphics.h header file. Below is a description of the graphics functions used in this program.

initgraph: Initializes the graphics system by loading the passed graphics driver, then changes the system into graphics mode.
getmaxx: Returns the maximum X coordinate in the current graphics mode and driver.
getmaxy: Returns the maximum Y coordinate in the current graphics mode and driver.
outtextxy: Displays a string at a particular point (x, y) on screen.
circle: Draws a circle with radius r and centre at (x, y).
closegraph: Unloads the graphics drivers and sets the screen back to text mode.
https://www.scribd.com/document/368340322/MSc-Math-C-Paper-Solution-annual-12
import random
import time
import math   # needed by distanceBetween below; missing from the original listing

def chase():
    # Create an amphitheater in which our turtles will fight to the death
    amphitheater = makeWorld()

    # Create two turtles: a predator and its prey
    predator = makeTurtle(amphitheater)
    prey = makeTurtle(amphitheater)

    # Position the turtles in their starting positions.
    # The predator is aiming at the prey; the prey is pointing South-East.
    penUp(predator)
    penUp(prey)
    x1 = random.choice(range(0, amphitheater.getWidth()/4))
    y1 = random.choice(range(0, amphitheater.getHeight()/4))
    x2 = amphitheater.getWidth()/2
    y2 = amphitheater.getHeight()/2
    moveTo(predator, x1, y1)
    moveTo(prey, x2, y2)
    turnToFace(predator, x2, y2)
    turnToFace(prey, amphitheater.getWidth(), amphitheater.getHeight())

    # Have the predator chase the prey and the prey run away until the
    # predator has closed to within tooth distance.
    while distanceBetween(predator, prey) > 5:
        # The prey panics and keeps changing direction.
        evasiveAngle = random.choice(range(-90, 90))
        turn(prey, evasiveAngle)
        forward(prey, 40)

        # The predator aims at where the prey is, not where it is moving to.
        # But if it is fast enough, it won't keep going round in a circle.
        turnToFace(predator, prey.getXPos(), prey.getYPos())

        # We need to stop the predator overshooting.
        # See what happens if you take this out.
        closingDistance = int(min(distanceBetween(predator, prey), 50))

        # Close in on the victim
        forward(predator, closingDistance)

        # Pause half a second so that we can watch what happens at leisure.
        time.sleep(0.5)

# Compute the Euclidean distance between turtles. Very like the distance between
# colors, except that colors have three dimensions (RGB). Turtles live in a
# two-dimensional plane.
def distanceBetween(turtle1, turtle2):
    x1 = turtle1.getXPos()
    x2 = turtle2.getXPos()
    y1 = turtle1.getYPos()
    y2 = turtle2.getYPos()
    dist = math.sqrt((x1-x2)**2 + (y1-y2)**2)
    return dist
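The distanceBetween helper above is plain Euclidean geometry and can be sanity-checked without JES turtles. Here is a minimal standalone version taking raw coordinates (the values in the check are made up, a 3-4-5 right triangle):

```python
import math

def distance_between(x1, y1, x2, y2):
    # Euclidean distance between two points in the plane
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

# A 3-4-5 triangle: the hypotenuse should come out as exactly 5.0
print(distance_between(0, 0, 3, 4))  # → 5.0
```

The turtle version only differs in that it first pulls the coordinates out of the two turtle objects with getXPos/getYPos.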
http://coweb.cc.gatech.edu/cs1315/5233
iOS 15 console freeze issue? - momorprods

Greetings, incredible community! After migrating my iPhone to iOS 15 I noticed a few issues with my Pythonista-made apps: weird SceneView behaviours, but more critically, random freezes when running some of the console functions. As a basic illustration, the code below freezes randomly (about 1 freeze every 3-4 attempts) when I hit the « close » button (the popup does not display upon freeze):

import ui
import console

def button_action(sender):
    global view
    #view.close()
    q = console.alert('Confirm Close ?', 'So you want to close this view? So you want to close this view? So you want to close this view? ', 'OK')
    if q == 1:
        view.close()

button = ui.Button(title='Close', action=button_action)
button.y = 100
view = ui.View()
view.add_subview(button)
view.present()

Of course, this didn't occur before my iOS update. Do you also have this kind of random problem? Thanks!

@momorprods could you read this topic, perhaps it could help you.

- momorprods: Thanks a lot, this seems to help a lot. 👍
https://forum.omz-software.com/topic/7306/ios-15-console-freeze-issue
Hi all! Here is a simple example of an expandable ListView in Android. I am not going to explain the code in prose, because everything is explained inside the java file. Make sure to read the comments.

package pack.Coderzheaven;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

import android.app.ExpandableListActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.ExpandableListView;
import android.widget.SimpleExpandableListAdapter;

public class ExpandableListDemo extends ExpandableListActivity {

    @SuppressWarnings("unchecked")
    public void onCreate(Bundle savedInstanceState) {
        try {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            SimpleExpandableListAdapter expListAdapter = new SimpleExpandableListAdapter(
                this,
                createGroupList(),             // Creating group List.
                R.layout.group_row,            // Group item layout XML.
                new String[] { "Group Item" }, // the key of group item.
                new int[] { R.id.row_name },   // ID of each group item. Data under the key goes into this TextView.
                createChildList(),             // childData describes second-level entries.
                R.layout.child_row,            // Layout for sub-level entries (second level).
                new String[] { "Sub Item" },   // Keys in childData map to display.
                new int[] { R.id.grp_child }   // Data under the keys above goes into these TextViews.
            );
            setListAdapter(expListAdapter);    // setting the adapter in the list.
        } catch (Exception e) {
            System.out.println("Errrr +++ " + e.getMessage());
        }
    }

    /* Creating the HashMap for each row */
    @SuppressWarnings("unchecked")
    private List createGroupList() {
        ArrayList result = new ArrayList();
        for (int i = 0; i < 15; ++i) { // 15 groups........
            HashMap m = new HashMap();
            m.put("Group Item", "Group Item " + i); // the key and its value.
            result.add(m);
        }
        return (List) result;
    }

    /* Creating the HashMap for the children */
    @SuppressWarnings("unchecked")
    private List createChildList() {
        ArrayList result = new ArrayList();
        for (int i = 0; i < 15; ++i) { // this 15 is the number of groups (here it's fifteen)
            /* each group needs its own HashMap list. Here, for each group we have 3 sub-items */
            ArrayList secList = new ArrayList();
            for (int n = 0; n < 3; n++) {
                HashMap child = new HashMap();
                child.put("Sub Item", "Sub Item " + n);
                secList.add(child);
            }
            result.add(secList);
        }
        return result;
    }

    public void onContentChanged() {
        System.out.println("onContentChanged");
        super.onContentChanged();
    }

    /* This function is called on each child click */
    public boolean onChildClick(ExpandableListView parent, View v, int groupPosition, int childPosition, long id) {
        System.out.println("Inside onChildClick at groupPosition = " + groupPosition + " Child clicked at position " + childPosition);
        return true;
    }

    /* This function is called on expansion of the group */
    public void onGroupExpand(int groupPosition) {
        try {
            System.out.println("Group expanding Listener => groupPosition = " + groupPosition);
        } catch (Exception e) {
            System.out.println(" groupPosition Errrr +++ " + e.getMessage());
        }
    }
}

The main.xml file.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:background="@drawable/bkg">
    <ExpandableListView
        android:id="@android:id/list"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content" />
    <TextView
        android:id="@android:id/empty"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="No Items" />
</LinearLayout>

The child_row.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">
    <TextView
        android:id="@+id/grp_child"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:paddingLeft="50dip"
        android:textColor="@drawable/green" />
</LinearLayout>

The group_row.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">
    <TextView
        android:id="@+id/row_name"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:paddingLeft="30dip"
        android:textColor="@drawable/blue" />
</LinearLayout>

The strings.xml (This file contains the strings for the colors that are used for the text in the ListView)

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="app_name">ExpandableList Demo</string>
    <drawable name="white">#ffffff</drawable>
    <drawable name="blue">#2554C7</drawable>
    <drawable name="green">#347C2C</drawable>
    <drawable name="orange">#ff9900</drawable>
    <drawable name="pink">#FF00FF</drawable>
    <drawable name="violet">#a020f0</drawable>
    <drawable name="gray">#778899</drawable>
    <drawable name="red">#C11B17</drawable>
</resources>

The manifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="pack.Coderzheaven"
    android:versionCode="1"
    android:versionName="1.0">
    <application android:label="@string/app_name">
        <activity android:name=".ExpandableListDemo"
            android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Please leave your comments if the post was useful.

hi, can u send me code plz… Haresh we have send you the code.. If you didn’t get it, please leave a comment here. Could you send me the code? Hello Haresh….. You can just make new project and copy and paste the code into their respective files. I have given the file names to copy on top of each code also. Additionally you need only an image named “bkg.png” or “bkg.jpg” in your drawable folder. that’s all…….. If Still you couldn’t get it. Kindly leave a comment. We are here to help you. Great tutorial! I ‘m interested in making a n level tree any suggestions on how to do that? Thanks Hi JS…. I think you can return another expandable Listview to create an n level tree in the following function by extending BaseExpandableListAdapter class..
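Several commenters below ask how to read the value of the clicked child. Since the adapter above stores each child as a HashMap under the key "Sub Item", the click handler can fetch it back through the adapter. This is a hedged sketch, not part of the original tutorial; it only relies on the data structures built in createChildList():

```java
/* Inside ExpandableListDemo: look up the clicked child's data.
   The cast mirrors the HashMap rows built in createChildList(). */
public boolean onChildClick(ExpandableListView parent, View v,
                            int groupPosition, int childPosition, long id) {
    @SuppressWarnings("unchecked")
    HashMap<String, String> child = (HashMap<String, String>)
            parent.getExpandableListAdapter().getChild(groupPosition, childPosition);
    System.out.println("Clicked: " + child.get("Sub Item"));
    return true;
}
```

The same pattern works for the group row with getGroup(groupPosition) and the "Group Item" key.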
public View getGroupView(int groupPosition, boolean isExpanded, View convertView, ViewGroup parent) hi, you created single row listview…for that you created the chaild list view…….but my problem is i want to make main listview with two rows…..how can i plz???? @rajesh : i have explained this in one of the comments above (in reply to JS). Currently I am little busy in my company project. So I can’t produce a sample for you now. hi! i need a two row(two rows text in group list) expandable list view if psbl pls help me……. Hii Haresh, Is there any way to customize the parent row of the list, I mean the row which is opening the child row(Group Item 1)? if you want to customize group, then change the color or something in group_row.xml Which API are you compiling against? I tried this with two emulator instances, one running Android 2.2 and the other with the Google APIs for 2.2 (both level 8). When loaded, all I get is a blank screen 🙁 It will work with all API’s. I was using ANDROID 2.0 when I wrote it. Check the Logcat if you got any errors or not. Check whether you have copied everything perfectly. Drag and copy the code, don’t double click and copy. can yousend me your code or give me a link about the download it Hello wang_peng1, We have sent you the code. Please check your mail and leave us a comment. You seems to be an expert in this field, good article and keep up the good work, my buddy recommended me this. My blog: Meilleur Taux aussi Rachat de Credit immobilier icant make it correnctly. igot an error can u please send me the code please >> thankz u very much..i got the project.and it works .u seems to be very help full. i have lot of problems with android..how can i ask .. post a comment or somethig else .please tell me.. 😀 Hi ishan, Post your doubts as comments….If we have time we will solve your problem for sure….. hi, please send me the complete code . thanks Hi, i want to know that this is expandable list view or Custom view? in the application. 
I want to make some application with this much good look. Regards I just signed up to your blogs rss feed. Will you post more on this subject? Yeah sure…… Hi, please mail the code Hi, please mail the code Please give a correct mail ID Can we get dynamic data from database when we click the Group item. @padmanabhan : You can do whatever you like on group item click. There is a listener for group item in the program please check. hi, please send me the complete code . Thanks in advance. July – Please check your mail, we have mailed you the code. hi, please send me the complete code . Thanks in advance anna Hello Anna, Please check your mail we have send you the whole project. thanks for such a quick reply:) Always welcome. keep in touch. Can we fix the parent in expandable listview. Please give me any suggestion if we can’t. What are you trying to say? Can you please explain. When we scroll the expandable list view then parent also scroll with child. I want to fix parent when we scroll child. Hi, Can you tell me how to drag the child items in list2, if we have xml which is divided into 2 lists, list1 & list2.We are displaying expandable list in list1 & want to drag the child items in list2. Please tell me if u know. Regards, Neha Sorry Neha, Not sure.. I have to work it out. Hello, Can you send me the whole code. praveen2288@gmail.com Hello Praveen:- Please check your mail. We have sent you the code. Hello, Is the code public? If so, can you please send me the source code? Thx either way, have a nice day. Hi Gabriela, Please check your mail. hi, Please send me the complete code to jegadjame@gmail.com. Thanks in advance Jegadeesh Hi James, Can you please send me the code. How to get dynamic data onclick of group item, can u please give an hint. Thank you. Hello thank you for this example. I am currently trying to use the ExpandableListView in an Activity which does not extends ExpandableListActivity. I want to create a Layout which is divided in two LinearLayouts. 
The inner one holds some textViews and images. And the outer one holds the ExpandableListView. The application has one “StartActivity” but even before i open the Activity with the ExpandableListView the application crashes. Do you have some simple code or tutorial for this? Would be great. Thanks in advance. Hello Roman :- Check your inbox. we have send you the code for Expandable Lists. Hi, I’m also like to user an ExpandableListView in an Activity which is not ExpandableListActivity. I’m trying to create a layout which have expandable list view which act as multiple options list right beside regular button which suppose to start a test. How can I do that? Thanks in advanced. You can have an activity which expands ExpandableListActivity and add other views like buttons in that. No Problem. How to show main items following subitems automatically (without clicking main item) using listview in android? If I use ExpandableListActivity, I need to click on the main item to see the subitems. I want to show mainitem following subitems at one instance(click evnt is not neede.When I run my application, I want both main item and subitems must be displayed on the emulator. how can i achieve this using listview? Please help me. Hello Jyothi. The use some custom Layout. you dont need ExpandableList. Hello, Wow this is awesome, Can you please send me the full source code? have a nice day. How to use ExpandableListActivity to open another activity? Please help me. Just call startActivity and pass your activity name. Hi, Thanks for the post, could you please send me the source code. I like to try it out Good one, pls share the code. Thanks in advance. Exactly what I looking for…thanks A little late…but could I get source code too? Great post!.could you please send me the source code? Thx this tutorial realy helps my project but i’m having problem putting images in group and child rows… id like to have little thumbnails beside the button.. can you please send me some samples?? 
and i samples that group and child are manualy made.. i cant edit it since it’s looped.. thanks in advance. please reply.. my deadline is friday Hey Kent please check your email. Could u send me the code please !!! Tyvm for the useful posts, however it is not working on my computer, it suns fine and it says it was successful but i dont see screen like yours, all i see is my bkg pic, any idea? thx in advance Check your inbox. we have sent you the code. Can you please send me the code? Thanks, Mufaddal Hi, Thanks to the post. Can you send me the code please? Thanks again! Can I get the code as well? Thanks! Where to send the code? Can you send me the code? Hi. Good tutorial.. can u send the code to my mail. this is my mail id s.seshu143attherate gmaildotcom.. thank Q. Hi, This is really good tutorial and very helpful. Can anyone have an idea to hide the default expand/collapse icon? Thanks in advance, Thanks to the post, Can you send me the code please? Thanks; kumar Hi, How do we make multicolumn expandablelistview. I added the things to the hashmap and also to the XML but no luck like the way it is described for ListView. TY What do you mean by multicolumn? You can have any custom layout for the row. I think I know what you’re trying to do. I couldn’t find any documentation on doing this either, but I just kept trying different things and figured this out. My code is based on the code on this page so you should be able to figure it out. First the java code: //for each slqite column this create 4 subrow columns where my database column names are Item, QuantitySold, SoldPrice and SetPrice. This is done right before returning the child list. 
for( int n = 0 ; n < mycursor.getCount(); n++ ) { HashMap child = new HashMap(); child.put( "Sub Item", mycursor.getString(mycursor.getColumnIndex("Item")) ); child.put( "Sub Item1", mycursor.getString(mycursor.getColumnIndex("QuantitySold")) ); child.put( "Sub Item2", mycursor.getString(mycursor.getColumnIndex("SoldPrice")) ); child.put( "Sub Item3", mycursor.getString(mycursor.getColumnIndex("SetPrice")) ); secList.add( child ); mycursor.moveToNext(); } result.add( secList ); } } return result; ———————- And here is my adapter formatting: mAdapter = new SimpleExpandableListAdapter(this, createGroupList(),R .layout.expandable_list_item1,new String[] { "Group Item" }, new int[] { R.id.toprow_name }, createChildList(), R.layout.expandable_list_item2, new String[] {"Sub Item", "Sub Item1", "Sub Item2", "Sub Item3"}, new int[] { R.id.subrow_item, R.id.subrow_quantitysold, R.id.subrow_minprice, R.id.subrow_setprice}); ———————- Second my xml layout code (this is crap formatting, its 2:50am and I just figured this out so I havn't written in proper xml format). That should do it! Oops, the website grabbed my xml code. I’ll try pasting it again, but basically its just 4 textviews with the id’s I mentioned above (subrow_item, subrow_quantitysold…) with a linear layout that is set to horizontal. I specify the textview width=20dp, but height is set to wrap_content. Hi… nice tutorial… I’ve searched so many tutorials and examples about usability of expandablelistview, but I think this is the most obvious I’ve ever read… May I have the code please… can you send me the complete code via my email: top_x_classix@yahoo.com Thanks… Thanks for the very nice tutorial. Your code is so simple to understand. great job…keep it up Thanks Umesh for the encouragement. Hey Andik,you can get the code from the link here……… Hi, the tutorial is great, I got a fight of two days with tutorials lists and more lists. Now I have a couple of questions that I’m fighting with your code. 
I can not find how to change the bottom of each row according to a condition such as: if (a == 2) row.setBackgroundDrawable (R.drawable.team) elserow.setBackgroundDrawable (R.drawable.only). The other is that I need to put the first subitem is fixed, ie ITEM TITLE Item description: blabla n No. tests n Rating: bla bla item team1 item team2 the explanation is that I have the title of the race that is first rate, once deployed I have the description of the race so many items and then another level as teams. Sorry for the long text Thanks awesome code….. very userful. thanks. Great tutorail.Thanks alot for such a nice tutorial hey..I’ve downloaded the project…but i got error s on the expandablelist.java…do i need to change anything or add anything to the code to make it working? Thanks OH.. it work..thank you so much for the codes. 😀 Hi… Am retrieving data as string from data base to android like name ,address, phone. This is dynamic pull from database i want it to display on android in a list but the activity should not extend ListAcitvity as I do have edit text on same screen. I want to display the name on list and when clicked rest of the details should come , am not able to display the list pls do help me . You can simply change the ListActivity and change the way ListView is referenced. If it is a ListActivity the ListView in the xml has the id = “@android:list” or something. You can simply change this to give ur id and change the way adapter is setup. thats all. I’ll right away snatch your rss as I can not to find your email subscription hyperlink or newsletter service. Do you have any? Please allow me recognise in order that I may subscribe. Thanks. Helpful information. Fortunate me I found your web site by chance, and I’m stunned why this coincidence did not happened in advance! I bookmarked it. Great tutorial, thanks! Most tutorials always leave out something critical like the XML file. hi……….. This is very helpful for me. 
because i am new in android development. i am developing Android Digital menu system application.i want one help. how to put icons in front of the expanablelist view and how to view the list items like image format…….. please give the sample program……. Advance Thanks for our responce…… Did you download the sample project I provided below the post, there you can change the adapter for the list. I want to create a multi level list view. I just want to do a simple example about this. Just two class, one class extends Activity, one class extends BaseExpandableListAdapter. I want to know how to implement the class extends BaseExpandableListAdapter to return an expandable list view for multilevel Please help me how to do that. Pingback: ListView with Sections in android. | Coderz Heaven Hi , Can you send me the code for the simple expandablelistview..thanks! The source code is below the post. PLease check that. HI , I great thanks for the tutorial. 🙂 Pingback: ClassCastException: java.util.HashMap with android SimpleExpandableListAdapter | Software development support, software risk,bugs for bugs, risk analysis, please send me source code of this application hello jayesh patel :- You can download the project from the download link below. Awesome tutorial thanks!! anyone who uses this don’t forget to put the colours in the string.xml file (I’m not the best at reading and that cost me a lot of unnecessary debugging) 🙂 I found it very useful but only thing i found problem was with getting values at the position where i am clicking.. This has been a great help. The only thing I’m not seeing is how the adapter gets linked to the list view with the name you gave it. Hi the above tutorial is very helpful for me..thanks for giving this nice tutorial…but i have one doubts… how is creating 2 groups in this example for separate details.For eg: Group name: OrderInfo,CustomerInfo. Childname(OrderInfo): payment_method Childname(CustomerInfo): name,email. How is to do. 
Hi i have to created different groups in this example.but i can’t develop different child list is placed in different formats.please refer my code and give me solutions.now i have to created 2 groups in my code. I need to place “payment_method” and “total” inside first group,and firstname and lastname is place in second group (e.g:Groupname:OrderInfo,customerInfo, Childname(OrderInfo):payment_method,total, Childname(CustomerInfo):firstname,lastname). My code can placed Childname(OrderInfo) inside OrderInfo group, but i wish to need if i have to click customerInfo group means the firstname,lastname also displayed customerInfo group. Here first child value is placed first group is successfully finished…but i can’t do place the second child value is placed second group.please help me.how is to do. The code is here: Hello! can i have to the code please? The full android project is below the post for download. Nice one.
https://www.coderzheaven.com/2011/04/10/expandable-listview-in-android-using-simpleexpandablelistadapter-a-simple-example/?replytocom=439
In the Ionic 2 Framework, if you are facing a problem like "No provider for Device" (or Camera, Contacts, File, Bluetooth, Google Maps, Google Plus, Push Notification, etc.), then you are in the right place. This tutorial explains why the "no provider" error occurs and how to solve it.

Recently (March 21, 2017), the Ionic team updated the native plugin version from 2.x to 3.x. The purposes of the update are:

1. 100% browser development support for native plugins. You can execute most of the native plugins in your browser.
2. Improved application code bundle size.

The "no provider" error occurs because of the Ionic Native 3.x update. To solve this problem, you must install ionic-native core and add the required plugin to the ngModule providers.

Installation

If you want to use the camera plugin (or any other plugin), you must install ionic-native core before installing the other plugins. To install it, go to the root of your project and execute the command below:

npm install @ionic-native/core --save

Then install the camera plugin using:

npm install @ionic-native/camera --save

or

ionic plugin add cordova-plugin-camera

Add Plugins to Your App's Module

After installing the camera plugin, you must add the plugin to the app's ngModule in the app.module.ts file. Open app.module.ts and import the camera plugin:

import { Camera } from '@ionic-native/camera';

Then add Camera to the providers array:

providers: [
  ...
  Camera
  ...
]

Ionic/Angular 1 Support

Ionic Native 3.x won't support Ionic/AngularJS 1. So if you are using Ionic Framework version 1, then don't use the 3.x version.
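Putting the two steps together, a minimal app.module.ts could look like the sketch below. The module and component names (MyApp, AppModule) are the Ionic starter defaults, not something this tutorial specifies, so adjust them to your project:

```typescript
import { NgModule } from '@angular/core';
import { IonicApp, IonicModule } from 'ionic-angular';
import { Camera } from '@ionic-native/camera';

import { MyApp } from './app.component';

@NgModule({
  declarations: [MyApp],
  imports: [IonicModule.forRoot(MyApp)],
  bootstrap: [IonicApp],
  // Every Ionic Native 3.x plugin you use must be listed here,
  // otherwise Angular throws the "No provider" error at runtime.
  providers: [Camera]
})
export class AppModule {}
```

The key line is the providers array: each Ionic Native 3.x plugin is now a regular Angular injectable and has to be registered like any other service.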
https://ampersandacademy.com/tutorials/ionic-framework-version-2/no-provider-device-problem-ionic-2-framework-native-plugin-version-3-x
Joins and Schema Validation in MongoDB 3.2 12/07/15

Version 3.2 of the NoSQL database MongoDB introduces two new interesting features (amongst others) that I'd like to explore in this blog post.

Joins

The logical namespaces where documents are stored are called collections in MongoDB. Up to now, every type of query, aggregation and even map/reduce job operated on exactly one of these collections. In version 3.2, the aggregation framework introduces a kind of fetch join that enables you to load documents from more than one collection.

Let's assume the following schema …

… and the need to query the customers together with their orders. We use this test data set:

Our fetch join from the customers collection to the orders collection uses the new pipeline operation $lookup of the aggregation framework:

The resulting customer document holds the array of orders in the joined field orders:

Right now, the join condition is expressed on one field on each side; that may become more general in future versions (with multi-field join conditions).

Schema Validation

One very fundamental characteristic of document orientation in MongoDB was schemalessness, i.e. the absence of a validation mechanism that enforces a schema on the documents of a collection. There was neither mandatory-field checking nor type checking on the fields of a document. Now you can define a so-called validator at the collection level that can perform type checking and even semantic checks:

We define expected types for the fields name and age. That also makes them mandatory fields. For the field age we define a condition that requires the age to be >= 18. The syntax is more or less the same as with find queries. An invalid document is rejected with an error message:

You have to provide an age >= 18 to successfully become a customer:

Conclusion

The fetch joins give you a lot more freedom when designing your schema. You are no longer forced to plan purely query-oriented. It also reduces denormalization.
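The original code listings did not survive extraction. Below are minimal mongo-shell sketches of what the two snippets could look like; the join field names (_id, customer_id) are assumptions based on the surrounding prose, not recovered from the post:

```javascript
// Fetch join: embed each customer's orders in a joined "orders" array.
db.customers.aggregate([
  { $lookup: {
      from: "orders",              // the collection to join
      localField: "_id",           // field on the customers side
      foreignField: "customer_id", // field on the orders side
      as: "orders"                 // name of the joined array field
  }}
])
```

```javascript
// Validator: name must be a string, age an int >= 18.
// The operators are query-style; a missing field fails the $type
// match, which is what makes both fields effectively mandatory.
db.createCollection("customers", {
  validator: {
    name: { $type: "string" },
    age:  { $type: "int", $gte: 18 }
  }
})
```

Both snippets require a running MongoDB 3.2 instance; they are illustrations of the $lookup and validator syntax, not the post's exact listings.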
Of course, joins will eat up some of your speed. They will impact performance in MongoDB too.

Schema validation will help you ensure the semantic consistency of your data. MongoDB can now act as an additional validating instance for your business data. Validation, too, will have performance impacts.

With these two new features, MongoDB continues to provide more and more enterprise readiness. It wants to be as powerful as its relational counterparts, which have offered joins and validation since the beginning of the IT age. MongoDB is becoming an all-purpose database. Let's see how long this goes along with the basic idea behind the NoSQL movement …

All details and more new features can be found in the release notes of version 3.2.
https://blog.codecentric.de/en/2015/12/joins-schema-validation-mongodb-3-2/
The Flask Mega-Tutorial, Part IX: Pagination

This is the ninth article in the series.

Submission of blog posts

Let's start with something simple. The home page should have a form for users to submit new posts. First we define a single-field form object (file app/forms.py):

class PostForm(Form):
    post = StringField('post', validators=[DataRequired()])

Next, we add the form to the template (file app/templates/index.html):

<!-- extend base layout -->
{% extends "base.html" %}

{% block content %}
<h1>Hi, {{ g.user.nickname }}!</h1>
<form action="" method="post" name="post">
    {{ form.hidden_tag() }}
    <table>
        <tr>
            <td>Say something:</td>
            <td>{{ form.post(size=30, maxlength=140) }}</td>
            <td>
            {% for error in form.post.errors %}

The view function changes as well (file app/views.py):

from forms import LoginForm, EditForm, PostForm
from models import User, Post

The differences from the previous version are:

- We are now importing the PostForm class.
- We accept POST requests in both routes associated with the index view function, since that is how we will receive submitted posts.
- When we arrive at this view function through a form submission, we insert a new Post record into the database. When we arrive at it via a regular GET request, we do as before.
- The template now receives an additional argument, the form, so that it can render the text field.

One final comment before we continue. Notice how after we insert a new Post into the database we do this:

return redirect(url_for('index'))

We could have skipped the redirect and rendered the template directly, but then if the user hit the browser's refresh button the form would be resubmitted, creating a duplicate Post record.

Displaying blog posts

To show real posts, the index view function now reads them from the database (file app/views.py):

posts = g.user.followed_posts().all()

And when you run the application you will be seeing blog posts from the database!

The followed_posts method of the User class returns a SQLAlchemy query object that is configured to grab the posts we are interested in. Calling all() on it executes the query and returns all the posts as a list.

Pagination

Flask-SQLAlchemy makes pagination easy: the paginate method can be called on any query object. It takes three arguments:

- the page number, starting from 1,
- the number of items per page,
- an error flag. If True, when an out-of-range page is requested a 404 error will be automatically returned to the client web browser.
If False, an empty list will be returned instead of an error.

The return value from paginate is a Pagination object. The items member of this object contains the list of items in the requested page. There are other useful things in the Pagination object that we will see a bit later.

Now let's think about how we can implement pagination in our index view function. We can start by adding a configuration item to our application that determines how many items per page we will display (file config.py):

    # pagination
    POSTS_PER_PAGE = 3

It is a good idea to keep these global knobs that change the behavior of our application together in the configuration file, because then we can go to a single place to revise them all. In the final application we will of course use a much larger number than 3, but for testing it is useful to work with small numbers.

Next, let's decide how the URLs that request different pages will look. We've seen before that Flask routes can take arguments, so we can add a suffix to the URL that indicates the desired page:

    http://localhost:5000/          <-- page #1 (default)
    http://localhost:5000/index     <-- page #1 (default)
    http://localhost:5000/index/1   <-- page #1
    http://localhost:5000/index/2   <-- page #2

This format of URLs can be easily implemented with an additional route added to our view function (file app/views.py):

    from config import POSTS_PER_PAGE

    @app.route('/', methods=['GET', 'POST'])
    @app.route('/index', methods=['GET', 'POST'])
    @app.route('/index/<int:page>', methods=['GET', 'POST'])
    @login_required
    def index(page=1):
        form = PostForm()
        if form.validate_on_submit():
            post = Post(body=form.post.data, timestamp=datetime.utcnow(), author=g.user)
            db.session.add(post)
            db.session.commit()
            flash('Your post is now live!')
            return redirect(url_for('index'))
        posts = g.user.followed_posts().paginate(page, POSTS_PER_PAGE, False).items
        return render_template('index.html',
                               title='Home',
                               form=form,
                               posts=posts)

Our new route takes the page argument, and declares it as an integer.
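The arithmetic behind that paginate call can be sketched in plain Python. This is an illustrative stand-in, not Flask-SQLAlchemy's actual implementation; `paginate_list` is a hypothetical name, and the real method operates on a database query rather than a list:

```python
def paginate_list(items, page, per_page, error_out=True):
    """Return the slice of items for the requested 1-based page.

    Mirrors the documented behavior: with error_out=True an out of range
    page raises an error (Flask would turn that into a 404 response);
    with error_out=False it returns an empty list instead.
    """
    start = (page - 1) * per_page
    chunk = items[start:start + per_page]
    if not chunk and page != 1 and error_out:
        raise LookupError('page %d is out of range' % page)
    return chunk

posts = ['post %d' % i for i in range(1, 8)]   # 7 posts, 3 per page -> 3 pages
print(paginate_list(posts, 1, 3))                    # ['post 1', 'post 2', 'post 3']
print(paginate_list(posts, 3, 3))                    # ['post 7']
print(paginate_list(posts, 4, 3, error_out=False))   # []
```

With POSTS_PER_PAGE set to 3, page 2 of the query is simply rows 4 through 6 of the result set.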
We also need to add the page argument to the index function, and we have to give it a default value because two of the three routes do not have this argument, so for those the default will always be used. And now that we have a page number available to us we can easily hook it up to our followed_posts query, along with the POSTS_PER_PAGE configuration constant we defined earlier.

Note how easy these changes are, and how little code is affected each time we make a change. We are trying to write each part of the application without making any assumptions regarding how the other parts work, and this enables us to write modular and robust applications that are easier to test and are less likely to fail or have bugs.

At this point you can try the pagination by entering URLs for the different pages by hand into your browser's address bar. Make sure you have more than three posts available so that you can see more than one page.

Page navigation

We now need to add links that allow users to navigate to the next and/or previous pages, and luckily this is extremely easy to do, Flask-SQLAlchemy does most of the work for us.

We are going to start by making a small change in the view function. In our current version we use the paginate method as follows:

    posts = g.user.followed_posts().paginate(page, POSTS_PER_PAGE, False).items

By doing this we are only keeping the items member of the Pagination object returned by paginate. But this object has a number of other very useful things in it, so we will instead keep the whole object (file app/views.py):

    posts = g.user.followed_posts().paginate(page, POSTS_PER_PAGE, False)

To compensate for this change, we have to modify the template (file app/templates/index.html):

    <!-- posts is a Paginate object -->
    {% for post in posts.items %}
    <p>
        {{ post.author.nickname }} says: <b>{{ post.body }}</b>
    </p>
    {% endfor %}

What this change does is make the full Paginate object available to our template.
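Internally, such a pagination object can derive all of its navigation attributes from just the page number, the page size, and the total item count. Here is a simplified, hypothetical stand-in (the class name and constructor are mine, not Flask-SQLAlchemy's real class, which also holds the query and the items):

```python
class PageInfo:
    """Toy model of a pagination object's navigation attributes."""

    def __init__(self, page, per_page, total):
        self.page = page          # current page, 1-based
        self.per_page = per_page  # items per page
        self.total = total        # total number of items

    @property
    def pages(self):
        # total number of pages, rounding up (ceiling division)
        return max(1, -(-self.total // self.per_page))

    @property
    def has_prev(self):
        return self.page > 1

    @property
    def has_next(self):
        return self.page < self.pages

    @property
    def prev_num(self):
        return self.page - 1

    @property
    def next_num(self):
        return self.page + 1

p = PageInfo(page=2, per_page=3, total=7)  # 7 posts, 3 per page -> 3 pages
print(p.pages, p.has_prev, p.has_next, p.prev_num, p.next_num)  # 3 True True 1 3
```

The real Pagination object exposes the same kind of information, which is what the template will use to decide whether to render the navigation links.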
The members of the Pagination object that we will use are:

- has_next: True if there is at least one more page after the current one
- has_prev: True if there is at least one more page before the current one
- next_num: page number for the next page
- prev_num: page number for the previous page

With these four elements we can produce the following (file app/templates/index.html):

    <!-- posts is a Paginate object -->
    {% for post in posts.items %}
    <p>
        {{ post.author.nickname }} says: <b>{{ post.body }}</b>
    </p>
    {% endfor %}
    {% if posts.has_prev %}<a href="{{ url_for('index', page=posts.prev_num) }}"><< Newer posts</a>{% else %}<< Newer posts{% endif %} |
    {% if posts.has_next %}<a href="{{ url_for('index', page=posts.next_num) }}">Older posts >></a>{% else %}Older posts >>{% endif %}

So we have two links. First we have one labeled "Newer posts" that sends us to the previous page (keep in mind we show posts sorted by newest first, so the first page is the one with the newest stuff). Conversely, the "Older posts" link points to the next page.

When we are looking at the first page we do not want to show a link to go to the previous page, since there isn't one. This is easy to detect, because posts.has_prev will be False. We handle that case simply by showing the text of the link without the link itself. The link to the next page is handled in the same way.

Implementing the Post sub-template

Back in the article where we added avatar pictures we defined a sub-template with the HTML rendering of a single post. The reason we created this sub-template was so that we can render posts with a consistent look in multiple pages, without having to duplicate the HTML code. It is now time to use this sub-template in our index page. And, as most of the things we are doing today, it is surprisingly simple (file app/templates/index.html):

    <!-- posts is a Paginate object -->
    {% for post in posts.items %}
    {% include 'post.html' %}
    {% endfor %}

Amazing, huh?
We just discarded our old rendering code and replaced it with an include of the sub-template. Just with this, we get the nicer version of the post that includes the user's avatar. Here is a screenshot of the index page of our application in its current state:

The user profile page

We are done with the index page for now. However, we have also included posts in the user profile page, not posts from everyone but just from the owner of the profile. To be consistent the user profile page should be changed to match the index page.

The changes are similar to those we made on the index page. Here is a summary of what we need to do:

- add an additional route that takes the page number
- add a page argument to the view function, with a default of 1
- replace the list of fake posts with the proper database query and pagination
- update the template to use the pagination object

Here is the updated view function (file app/views.py):

    @app.route('/user/<nickname>')
    @app.route('/user/<nickname>/<int:page>')
    @login_required
    def user(nickname, page=1):
        user = User.query.filter_by(nickname=nickname).first()
        if user is None:
            flash('User %s not found.' % nickname)
            return redirect(url_for('index'))
        posts = user.posts.paginate(page, POSTS_PER_PAGE, False)
        return render_template('user.html',
                               user=user,
                               posts=posts)

Note that this function already had an argument (the nickname of the user), so we add the page number as a second argument.
The changes to the template are also pretty simple (file app/templates/user.html):

    <!-- posts is a Paginate object -->
    {% for post in posts.items %}
    {% include 'post.html' %}
    {% endfor %}
    {% if posts.has_prev %}<a href="{{ url_for('user', nickname=user.nickname, page=posts.prev_num) }}"><< Newer posts</a>{% else %}<< Newer posts{% endif %} |
    {% if posts.has_next %}<a href="{{ url_for('user', nickname=user.nickname, page=posts.next_num) }}">Older posts >></a>{% else %}Older posts >>{% endif %}

Final words

Below I'm making available the updated version of the microblog application with all the pagination changes introduced in this article:

Download microblog-0.9.zip.

As always, a database isn't provided, so you have to create your own. If you are following this series of articles you know how to do it. If not, then go back to the database article to find out.

As always, I thank you for following my tutorial. I hope to see you again in the next one!

Miguel

#1 Siros said :
Thank you so much again, this is very helpful.

#2 Sean said :
Great series! This is the most helpful Python/Flask tutorial I have read. Thank you very much!!

#3 Bobby said :
This tutorial is amazing... really great!! Thanks for sharing your skills with us. Two small points: your index function example above is missing "user = g.user"; it might be helpful to explain to people what the "form.hidden_tag()" does.

#4 Bobby said :
Can you tell me why my route decorators need the trailing slash to work? @app.route('/login/') I'm sure I'm doing something silly??

#5 Miguel Grinberg said :
@Bobby: hidden tags were covered in part 3 of the series. Can you expand on the "user = g.user" comment? What would that achieve?

#6 Bobby said :
In an earlier part of the tutorial, we were passing the user into the index template to say "Hello Bobby"... I guess that got dropped somewhere and I missed it. Can you tell me why my routes need the trailing "/"?
#7 Miguel Grinberg said :
@Bobby: I'm not sure why you need trailing slashes. Are you using the development web server when you run the application? The only idea I can offer is that a different web server might be redirecting requests without a trailing slash. Check in the debugging console of your browser to see what's happening. Then please let me know, as I would like to know!

#8 Dogukan Tufekci said :
@Miguel thanks for another great tutorial. You are making my life so easy in this journey to understand Flask. I noticed that posts on a user's profile are not sorted by date. So I tweaked the code this way. Not sure if there's a better way to do this:

    class User(db.Model):
        # ...
        def sorted_posts(self):
            return Post.query.filter(Post.user_id == self.id).order_by(Post.timestamp.desc())

#9 Miguel Grinberg said :
@Dogukan: you are right, I missed the sorting! You could simplify your solution a bit, the sorted_posts() method can be implemented as "return self.posts.order_by(Post.timestamp.desc())". I will update the article to include this. Thanks.

#10 Dogukan Tufekci said :
@Miguel That's much simpler indeed! Thanks!

#11 George Mabley said :
Hello, I'm not sure if this is the best article to ask this on, but it at least uses the concept. Is there a function similar to redirect which either redirects you to the current page, or refreshes the page you are on? Say I have a view that I can call on both the index and user page. If an arbitrary condition is met, I would like a flash message to occur on the page, and for the page to basically start fresh. However, I have to return something, so I must choose to redirect to url_for('index') or url_for('user'). If what I am asking is not clear, I will gladly provide some code as an example. Thank you!

#12 Miguel Grinberg said :
@George: I'm not completely sure I understand, but I think a good example of what you are asking is the login view.
Let's say the user wants to visit some password protected page, so he gets redirected to the login page. Once he enters his credentials you have two options: if the credentials are valid you have to go to the page the user wanted to visit originally, if the credentials are invalid you have to redirect back to the login page. This is implemented with an additional argument sent to the view that needs to decide where to redirect. In the login example, if the user needs to access the index page the server will redirect to /login?next=/index. If instead the user went to the profile page the redirect will be to /login?next=/user/<nickname>. The login page then has the "next" argument in the request to know where to redirect. I hope this helps.

#13 George Mabley said :
Wow, thanks for the quick reply. I think that could work, but I am still hoping there is a simpler solution. Let me try to explain with code here:. Is there not a way for flask to, if those conditions are met, redirect you to the user page if you are on the user page, and the index page if you are on the index?

#14 Miguel Grinberg said :
Ah, I think I understand it better now. I can think of two ways to handle the problem. One is similar to what I said before, you have to insert something in the request that tells the view function what is the originating page. For example, you could use a more complex route, like "/repost/<source>/<id>", so then the repost view function gets an additional argument that can be "index" or "user". The problem is that you have to build different URLs depending on what view you are in. A more sophisticated solution would be to let the client handle this via an ajax call, which does not trigger a page reload. Then it is up to the Javascript client code to stay on the same page or trigger a reload, based on instructions provided by the Ajax handler on the server. (hint: next article in the series covers ajax). Good luck!

#15 abenrob said :
@Dogukan, Miguel - how are you modifying the User view for the sorted object?
We have "posts = user.posts.paginate(page, POSTS_PER_PAGE, False)". I tried "user.sorted_posts.paginate(page, POSTS_PER_PAGE, False)" after implementing sorted_posts() in the User model, but that isn't working...

#16 Miguel Grinberg said :
@abenrob: sorted_posts is a method, you have to add the parenthesis at the end: "user.sorted_posts().paginate(page, POSTS_PER_PAGE, False)".

#17 abenrob said :
Of course. Thanks again!

#18 uldis said :
How to solve the problem when the user double clicks the "Post!" button? The post with the same content is inserted twice.

#19 Miguel Grinberg said :
@uldis: the standard solution for the double click on a form submit button is to use Javascript to disable the button when it is clicked the first time. You can achieve that simply by adding onclick="this.disabled=true;this.form.submit();" inside the submit button's input element.

#20 Napoleon Ahiable said :
Thank you so much for these amazing tutorials. YOU my friend, are my new favourite dude and I'll be hanging out with you a lot. God bless you for your kind giving heart.

#21 Saber Rastikerdar said :
Thank you for this great tutorial series.

#22 Tri said :
Miguel, great tutorial once again! Could you please walk me through how to create other users and have them follow other users to test out the functionalities we just made? Kind of like in your screenshot. Thank you!

#23 Miguel Grinberg said :
@Tri: the easiest way is for you to play different users. For example, login to the server with two different browsers, using a different OpenID on each. Then each of these users can follow the other.

#24 Tri said :
I tried that but I keep on getting errors saying, "Invalid login. Please try again", even though it says, "Hi, Tri" underneath. And even when I logged in with another OpenID, the page always says, "Hi, Tri". Then when I click on 'your profile', there's an error that says, "TypeError: must be string or buffer, not None".
I tried with many different OpenIDs and different browsers... logging out after each one, but I always get that same exact error where "Tri" is always the one that comes up. However, when I log in through Google with my email, there's no error. That's the only account that has no error, which is why I can't do multiple logins. I don't know where the problem comes from. Hopefully, you know what's going on. Thanks again.

#25 Miguel Grinberg said :
@Tri: you may have found a bug, but the information that you are giving me isn't enough for me to figure out where or why. Could you show me stack traces of the errors that you get?
http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-ix-pagination
Description

Now you've grown up, it's time to make friends. The friends you make in university are the friends you make for life. You will be proud if you have many friends.

Input

There are multiple test cases for this problem. Each test case starts with a line containing two integers N, M (1 <= N <= 100'000, 1 <= M <= 200'000), meaning that there are N persons in total (indexed from 1 to N) and M operations. Then M lines follow, each of the form "M a b" (without quotation marks) or "Q a" (without quotation marks). The operation "M a b" means that persons a and b make friends with each other (they may already be friends), while "Q a" is a query operation. Friendship is transitive, which means that if a and b are friends, and b and c are friends, then a and c are also friends. Initially you have no friends except yourself; when you are a freshman you know nobody, right? So in that case you have only one friend.

Output

For each test case, first output "Case #:", where "#" is the number of the case, starting from 1. Then for each query operation "Q a", output a single line with the number of person a's friends. Separate two consecutive test cases with a blank line, but do NOT output an extra blank line after the last one.

Sample Input

3 5
M 1 2
Q 1
Q 3
M 2 3
Q 2
5 10
M 3 2
Q 4
M 1 2
Q 4
M 3 2
Q 1
M 3 1
Q 5
M 4 2
Q 4

Sample Output

Case 1:
2
1
3

Case 2:
1
1
3
1
4

Notes

This problem has huge input and output data, please use 'scanf()' and 'printf()' instead of 'cin' and 'cout' to avoid time limit exceeded.
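The standard approach for this problem is a disjoint-set (union-find) structure in which each root also stores the size of its component, so a query "Q a" is answered as the size stored at find(a). The accepted C++ solution below uses exactly this idea; here is a small Python sketch of it (function names are mine, chosen for illustration):

```python
def make_sets(n):
    """Create n singleton sets, 1-indexed: everyone starts as their own friend."""
    parent = list(range(n + 1))
    size = [1] * (n + 1)
    return parent, size

def find(parent, x):
    """Return the root of x, compressing the path as we walk up (path halving)."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, size, a, b):
    """Merge the components of a and b, accumulating the size at the new root."""
    ra, rb = find(parent, a), find(parent, b)
    if ra == rb:
        return  # already friends; a duplicate "M a b" changes nothing
    parent[ra] = rb
    size[rb] += size[ra]

# Replaying the first sample test case:
parent, size = make_sets(3)
union(parent, size, 1, 2)
print(size[find(parent, 1)])  # 2  (query "Q 1")
print(size[find(parent, 3)])  # 1  (query "Q 3")
union(parent, size, 2, 3)
print(size[find(parent, 2)])  # 3  (query "Q 2")
```

With path compression (and, optionally, union by size), each operation runs in near-constant amortized time, which is what makes the M <= 200'000 operations per test case feasible.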
/*
Author: 2486
Memory: 952 KB
Time: 170 MS
Language: C++ (g++ 4.7.2)
Result: Accepted
*/
#include <cstdio>
#include <cstring>
#include <algorithm>
using namespace std;

const int maxn = 100000 + 5;
int par[maxn], sum[maxn];
int n, m, x, y;
char op[5];

// reset the union-find structure: everyone starts in their own set of size 1
void init(int x) {
    for (int i = 0; i <= x; i++) {
        par[i] = i;
        sum[i] = 1;
    }
}

// find the root of x, with path compression
int find(int x) {
    return par[x] == x ? x : par[x] = find(par[x]);
}

bool same(int x, int y) {
    return find(x) == find(y);
}

// merge the sets containing x and y, accumulating the component size at the root
void unite(int x, int y) {
    x = find(x);
    y = find(y);
    if (x == y) return;
    par[x] = y;
    sum[y] += sum[x];
}

int main() {
    int cases = 0;
    while (~scanf("%d%d", &n, &m)) {
        init(n);
        cases++;
        if (cases != 1) printf("\n");
        printf("Case %d:\n", cases);
        for (int i = 0; i < m; i++) {
            scanf("%s", op);
            if (op[0] == 'Q') {
                scanf("%d", &x);
                printf("%d\n", sum[find(x)]);
            } else {
                scanf("%d%d", &x, &y);
                unite(x, y);
            }
        }
    }
    return 0;
}
https://blog.csdn.net/qq_18661257/article/details/46790007