Introduction

The Maps SDK for iOS allows you to display a Google map in your iOS application. These maps have the same appearance as the maps you see in the Google Maps iOS app, and the SDK exposes many of the same features. In addition to mapping functionality, the API also supports a range of interactions that are consistent with the iOS UI model. For example, you can set up interactions with a map by defining responders that react to user gestures, such as tap and double-tap.

The key class when working with a map is GMSMapView. GMSMapView handles the following operations automatically:

- Connecting to the Google Maps service.
- Downloading map tiles.
- Displaying tiles on the device screen.
- Displaying various controls such as pan and zoom.
- Responding to pan and zoom gestures by moving the map and zooming in or out.
- Responding to two-finger gestures by tilting the viewing angle of the map.

In addition to these automatic operations, you can control the behavior and appearance of the map through the properties and methods exposed by the GMSMapView class. GMSMapView allows you to add and remove markers, ground overlays, and polylines, change the type of map that is displayed, and control what is shown on the map through the GMSCameraPosition class.

Add a Map

The basic steps for adding a map are:

- (Once) Follow the steps in Getting Started to get the SDK, obtain a key, and add the required frameworks.
- In your AppDelegate, provide your API key to the provideAPIKey: class method on GMSServices.
- Create or update a ViewController. If the map will be displayed when this view controller becomes visible, be sure to create it within the loadView method.
- Create a GMSCameraPosition object that specifies the center and zoom level of the map.
When you instantiate the GMSMapView object, you must pass the GMSCameraPosition object as a required parameter.

- Create and instantiate a GMSMapView using the GMSMapView mapWithFrame: method. If this map is to be used as the view controller's only view, CGRectZero can be used as the map's frame; the map will be resized automatically.
- Set the GMSMapView object as the view controller's view, e.g. self.view = mapView;.

The example below adds a map, centered on downtown Singapore, to an app.

Swift

import UIKit
import GoogleMaps

class DemoViewController: UIViewController {
  override func loadView() {
    let camera = GMSCameraPosition.camera(withLatitude: 1.285, longitude: 103.848, zoom: 12)
    let mapView = GMSMapView.map(withFrame: .zero, camera: camera)
    self.view = mapView
  }
}

Objective-C

#import "DemoViewController.h"
@import GoogleMaps;

@implementation DemoViewController

- (void)loadView {
  GMSCameraPosition *camera = [GMSCameraPosition cameraWithLatitude:1.285
                                                          longitude:103.848
                                                               zoom:12];
  GMSMapView *mapView = [GMSMapView mapWithFrame:CGRectZero camera:camera];
  self.view = mapView;
}

@end

Once you've followed these steps, you can further configure the GMSMapView object. For example, you can change the type of map that is displayed (e.g. to kGMSTypeSatellite) or disable indoor maps:

Swift

var mapView = GMSMapView.map(withFrame: .zero, camera: camera)
mapView.isIndoorEnabled = false

Objective-C

GMSMapView *mapView = [GMSMapView mapWithFrame:CGRectZero camera:camera];
mapView.indoorEnabled = NO;

(Floor plans that you upload yourself are visible only to users of your application.)

The map can also report the user's location:

// The myLocation attribute of the mapView may be nil
if let myLocation = mapView.myLocation {
  print("User's location: \(myLocation)")
} else {
  print("User's location is unknown")
}

Map padding affects the following:

- The camera's target will reflect the center of the padded region.
- Map controls are positioned relative to the edges of the map.
- Legal information, such as copyright statements and the Google logo, appears along the bottom edge of the map.

You can add padding around the edges of the map using the GMSMapView.padding property.
The map will continue to fill the entire container, but text and control positioning, map gestures, and camera movements will behave as if it had been placed in a smaller space. This results in the following changes:

- Camera movements via API calls or button presses (e.g., compass, My Location) are relative to the padded region.
- GMSMapView.projection returns a projection that includes only the padded region.
- UI controls are offset from the edge of the container by the specified number of points.

Padding can be helpful when designing UIs that overlap some portion of the map. For example, in the image below, the map is padded along the top and right edges. Visible map controls and legal text are displayed along the edges of the padded region, shown in green, while the map continues to fill the entire container, shown in blue. In this example, you could float a menu over the right side of the map without obscuring the map controls. To add padding to your map, create a UIEdgeInsets object and pass it to the GMSMapView.padding property.
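For illustration, a minimal padding example in Swift. The inset values are arbitrary, chosen only to match the top-and-right layout described above:

```swift
import UIKit
import GoogleMaps

// Pad the map 100 points from the top and 300 points from the right,
// leaving room for an overlaid menu. The inset values are illustrative.
let camera = GMSCameraPosition.camera(withLatitude: 1.285, longitude: 103.848, zoom: 12)
let mapView = GMSMapView.map(withFrame: .zero, camera: camera)
mapView.padding = UIEdgeInsets(top: 100.0, left: 0.0, bottom: 0.0, right: 300.0)
```

Map controls and legal text will then be inset from the padded edges, while the map itself continues to fill its frame.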
https://developers-dot-devsite-v2-prod.appspot.com/maps/documentation/ios-sdk/map?hl=de
A position paper for the W3C Workshop on Web of Services for Enterprise Computing, by Benjamin Carlyle of Westinghouse Rail Systems Australia.

The Web and traditional SCADA technology are built on similar principles and have great affinity. However, the Web does not solve all of the problems that the SCADA world faces. This position paper consists of two main sections: the first describes the SCADA world view as context for readers who are not familiar with the industry; the second consists of a series of "Tier 1" and "Tier 2" positions that contrast with the current Web. Tier 1 positions are those that have a direct and immediate impact on our business. Tier 2 positions are more general in nature and may only impact business in the longer term.

Supervisory Control and Data Acquisition (SCADA) is the name for a broad family of technologies across a wide range of industries. It has traditionally been contrasted with Distributed Control Systems (DCS), where DCS systems operate autonomously and SCADA systems typically operate under direct human control from a central location. The SCADA world has evolved to usually be a hybrid with traditional DCS systems, but its meaning has expanded further. When we talk about SCADA in the modern era, we might be talking about any system that acquires and concentrates data on a soft real-time basis for centralised analysis and operation. SCADA systems or their underlying technologies now underpin most operational functions in the railway industry. SCADA has come to mean "integration" as traditional vertical functions like train control, passenger information, traction power, and environmental control exchange ever more information. Our customers' demands for more flexible, powerful, and cost-effective control over their infrastructure are ever-increasing. Perhaps half of our current software development can be attributed to protocol development in service of our integration aims.
This figure is unacceptable, unworkable, and unnecessary. We tend to see a wide gap between established SCADA protocols and one-off protocols developed completely from scratch. SCADA protocols tend to already follow many of the REST constraints. They have limited sets of methods, identifiers that point to specific pieces of information to be manipulated, and a small set of content types. The one-off protocols tend to need more care before they can be integrated, and often there is no architectural model to be found in the protocol at all.

We used to think of software development to support a protocol as the development of a "driver" or a "Front End Processor" (FEP). However, we have come to see it consistently as a "protocol converter". SCADA systems are typically distributed, and the function of protocol support is usually to map an externally-defined protocol onto our internal protocols. Mapping from ad hoc protocols to an internally-consistent architectural style turns out to be a major part of this work. We have started to work on "taming" HTTP for use on interfaces where we have sufficient control over protocol design, and we hope to be able to achieve Web-based and REST-based integration more often than not in the future. Our internal protocols already closely resemble HTTP.

The application of REST-based integration has many of the same motivations and goals as the development of the Semantic Web. The goal is primarily to integrate information from various sources. However, it is not integration with a view to query, but with a view to performing system functions. For this reason it is important to constrain the vocabularies in use down to a set that in some way relates to system functions.

I would like to close this section with the observation that there seems to be a spectrum between the needs of the Web at large and the needs of the enterprise.
Probably all of my Tier 1 issues could be easily resolved within a single corporate boundary and continue to interoperate with other parts of the Web. The solutions may also be applicable to other enterprises; in fact, as we contract to various enterprises, I can say this with some certainty. However, it seems difficult to get momentum behind proposals that are not immediately applicable to the real Web. I will mention pub/sub in particular, which is quickly dismissed as being unable to cross firewalls easily. However, this is not a problem for the many enterprises that could benefit from a standard mechanism. Once acceptance of a particular technology is established within the firewall, crossing the firewall would seem a more straightforward proposition. Knowing that the protocol is proven may encourage vendors and firewall operators to make appropriate provisions when use cases for the technology appear on the Web at large.

My first Tier 1 issue is the use of HTTP to communicate with High Availability (HA) clusters. In the SCADA world, we typically operate with no single point of failure anywhere in a critical system. We typically have redundant operator workstations, each with redundant Network Interface Cards (NICs), and so on and so forth, all the way to a HA cluster. There are two basic ways to design the network between them: either create two separate networks for the traffic, or interconnect them. One approach yields multiple IP addresses to connect to across the NICs of a particular server; the other yields a single IP. Likewise, it is possible to perform IP takeover and have either a single IP shared between multiple server hosts, or multiple IPs. Beyond HA itself, we typically have a constraint on failover time: any single point of failure must typically be detected in less than five seconds, with a small amount of additional time allocated for the actual recovery.
Demands vary; while some customers will be happy with a ten- or thirty-second total failover time, others will demand a "bumpless" transition. The important thing about this constraint is that it is not simply a matter of a new server being able to accept new requests. Clients of the HA cluster also need to make their transition in the specified bounded time.

HTTP allows for a timeout if a request takes too long, typically around forty seconds. If this value were tuned to the detection time, we could see that our server had failed and attempt to reconnect. However, this would reduce the window in which valid responses must be returned. It would be preferable to send periodic keepalive requests down the same TCP/IP connection on which the HTTP request was established. This keepalive would allow server death detection to be handled independently of a fault that causes the HTTP server not to respond quickly or at all.

We are experimenting with configuring TCP/IP keepalives on HTTP connections to achieve HA client behaviour. The first question in such a system is when the keepalive should be sent, and when it should be disabled. For HTTP the answer is simple: when a request is outstanding on a connection, keepalives should be sent by a HA client; when no requests are outstanding, keepalives should be disabled. In general, keepalives need to be sent whenever a client expects responses on the TCP/IP connection it established. This general case affects the pub/sub model that I will describe in the next section. If pub/sub updates can be delivered down a HA client's TCP/IP connection, the client must send keepalives for the duration of its subscriptions. It is the server that must send keepalives if the server connects back to the client to deliver notifications.
Such a server would only need to do so while notification requests are outstanding, but would need to persist the subscription in a way that left the client confident that the subscription would not be lost.

Connection establishment is also an issue in a high-availability environment. A HA client must not try to connect to one IP and only move on to the others after a timeout. It should normally connect to all addresses in parallel, then drop all but the first successful connection. This process should also take place when a failover event occurs.

One of the constants in the ever-changing SCADA world is that we perform soft real-time monitoring of real-world state. That means that data can change unexpectedly, and that we need to propagate the data immediately when we detect the change. A field unit will typically test an input every few milliseconds, and on change will want to notify the central system. Loose coupling will often demand that a pub/sub model be used, rather than a push to a set of URLs configured in the device.

I have begun drafting a specification that I think will solve most pub/sub problems, with the preliminary name of SENA. It is loosely based on the GENA protocol, but has undergone significant revision to attempt to meet the security constraints of the open Web while also meeting the constraints of a SCADA environment. I would like to continue working on this or a similar protocol, helping it reach a status where it can be proposed for general use within enterprise boundaries.

We are extremely sensitive to overload problems in the SCADA world. This leads us to view summarisation as one of the core features of a subscription protocol. We normally view pub/sub as a way to synchronise state between two services, and we view the most recent state as the most valuable. If we have to process a number of older messages before we get to the newest value, latency and operator response time both increase.
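The latest-value-wins model described above can be captured in a very small data structure. The following plain-Python sketch (illustrative, not taken from any SENA draft) coalesces updates per topic, so a slow consumer always receives the newest state and buffering is bounded by the number of topics rather than the update rate:

```python
from collections import OrderedDict

class SummarisingQueue:
    """Per-topic coalescing queue: a new update for a topic replaces any
    undelivered older update, so buffering is bounded by the topic count."""

    def __init__(self):
        self._pending = OrderedDict()

    def publish(self, topic, state):
        # Replace any undelivered state for this topic with the newest value.
        self._pending.pop(topic, None)
        self._pending[topic] = state

    def next_update(self):
        # Deliver the oldest pending (topic, state) pair, or None if idle.
        if not self._pending:
            return None
        return self._pending.popitem(last=False)

queue = SummarisingQueue()
queue.publish('pump_1', 'RUNNING')
queue.publish('pump_1', 'STOPPED')   # replaces the undelivered RUNNING state
queue.publish('valve_7', 'OPEN')
print(queue.next_update())  # ('pump_1', 'STOPPED')
print(queue.next_update())  # ('valve_7', 'OPEN')
```

A consumer that falls behind simply skips the intermediate states, which is exactly the behaviour the summarisation requirement asks for.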
We are also highly concerned with situations, permanent or temporary, where state changes occur at a rate beyond what the system can adequately deal with. We dismiss with prejudice any proposal that involves infinite or arbitrary buffering at any point in the system. We also expect a subscription model to be able to make effective use of intermediaries, such as web proxies, that may participate in the subscription.

I believe that the architectural styles of the Web can be applied to the enterprise. However, local conventions need to be permitted. Special methods, content types, and other mechanisms should all be permitted where required. I anticipate that the boundary between special and general will shift over time, and that the enterprise will act as a proving ground for new features of the wider Web. Once such features are established in the wider Web, I would also expect the tide to flow back into enterprises that are doing the same thing in proprietary ways. If properly nurtured, I see the enterprise as a nursery for ideas that the Web is less and less able to experiment with itself. I suspect that the bodies that govern the Web should also be involved with ideas that are emerging in the enterprise. These bodies can help those involved with smaller-scale design keep an eye on the bigger picture.

Web Services are not a good solution space for Web architecture because they attack integration problems at too low a level. It is unlikely that two services independently developed against the WS-* stack will interoperate; that is to say, they will only interoperate if their WSDL files match. HTTP is, ironically, a higher-level protocol than the protocols that are layered on top of it. That said, we do not rule out interoperating with such systems if the right WSDL and architectural styles are placed on top of the WS-* stack.
We anticipate a "HTTP" WSDL eventually being developed for WS-*, and expect to write a protocol converter back to our internal protocols for systems that implement this WSDL. The sheer weight of expectation behind Web Services suggests that it will be simpler for some organisations to head down this path than down a path based on HTTP directly.

We view RDF as a non-starter in the machine-to-machine communications space, though we see some promise in ad hoc data integration within limited enterprise environments. Large-scale integration based on HTTP relies on clear, well-defined, evolvable document types. While RDF allows XML-like document types to be created, it presents something of an either/or dilemma: either use arbitrary vocabulary as part of your document, or limit your vocabulary to that of a defined document type. In the former case you can embed rich information into the document, but unless the machine on the other side expects this information as part of the standard information exchange, it will not be understood. It also increases document complexity by blowing out the number of namespaces in use. In practice it makes more sense to define a single cohesive document type with a single vocabulary that includes all of the information you want to express. However, in this case you are worse off than if you had started with XML. You cannot relate a single cohesive RDF vocabulary to any other without complex model-to-model transforms. In short, it is easier to extract information from a single-vocabulary XML document than from a single-vocabulary RDF document. RDF does not appear to solve any part of the system integration problem as we see it. However, again, it may assist in the storage and management of ad hoc data in some enterprises in place of traditional RDBMS technology.

We view the future of the semantic web as the development of specific XML vocabularies that can be aggregated and subclassed.
For example, the atom document type can embed the html document type in an aggregation relationship. This is used for elements such as <title>. The must-ignore semantics of atom also allow subclassing by adding new elements to atom. The subclassing mechanism can be used to produce new versions of the atom specification that interoperate with old implementations. The mechanism can also be used to produce jargonised forms of atom, rather than inventing a whole new vocabulary for a particular problem domain. We see the development, aggregation, and jargonisation of XML document types as the key mechanisms in the development of the semantic web. The graph-based model used by RDF has not yet demonstrated value in the machine-to-machine data integration space; higher-level abstractions expressed in XML vocabularies, however, are a proven technology set. We anticipate the formation of communities around particular base document types that work on resolving their jargon conflicts and folding their jargon back into the base document types. We suspect this social mechanism for vocabulary development and evolution will continue to be cancelled out in the RDF space by RDF's reliance on URI namespaces for vocabulary and by its overemphasis of the graph model.

On the subject of XML, we have some concerns over the current direction in namespaces. The selection of a parser for a document is typically based on its MIME type. Some XML documents will contain sub-documents, but there is no standard way to specify the MIME type of the sub-document. We view MIME as more fully-featured than arbitrary URIs, particularly due to the explicit subclassing mechanism available. In MIME we can explicitly indicate that a particular document type is based on XML: application/some-type+xml. Importantly, we can continue this explicit sub-typing: application/type2+some-type+xml.
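To make the mechanism concrete, here is a small Python sketch of how a processor might walk such a suffix chain back to a type it understands. Note that chained suffixes of this kind are this paper's proposal; registered MIME practice allows a single structured suffix:

```python
def fallback_chain(mime_type):
    """Expand 'application/type2+some-type+xml' into progressively more
    generic types, so a processor can fall back to the most specific
    subtype it knows how to handle."""
    major, _, subtype = mime_type.partition('/')
    parts = subtype.split('+')
    chain = [mime_type]
    while len(parts) > 1:
        parts = parts[1:]  # strip the most specific prefix
        chain.append(major + '/' + '+'.join(parts))
    return chain

print(fallback_chain('application/type2+some-type+xml'))
# ['application/type2+some-type+xml', 'application/some-type+xml', 'application/xml']
```

A dispatcher would try each entry in order against its table of known processors, which is exactly the "jargonised document handled by a standard processor" behaviour described here.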
We consider this an important mechanism in the evolution of content types, especially when jargonised documents are passed to standard processors. It is normal to expect that the standard processor would ignore any jargon and extract the information available to it as part of the standard vocabulary. While MIME also has its weaknesses, the explicit subclassing mechanism is not available in URI namespaces at all. To use the atom example again: atom has an application/atom+xml MIME type but an XML namespace of <>. We view the former as more useful than the latter in the development of the Semantic Web and in general machine-to-machine integration problems.

We regard the protection of secret data by IP-level or socket-level security measures as sufficient at this time. Secret data is known and communicated by few components of the architecture, so is usually not a scalability issue. We do not think that secret data should have a significant impact on Web architecture; however, we do view the ability to digitally sign non-secret data as a likely enabler for future protocol features.

Web technology and architectural style are proven, useful tools for systems integration, but they are incomplete. A scalable, summarising publish/subscribe mechanism is an essential addition to the suite of tools, as is a client profile for operating in High Availability environments. These tools must be defined and standardised in order to gain participation wide enough to be useful to the enterprise. We have concerns about some current trends in Web architecture. These relate to namespaces in XML, Web Services, and RDF. All of these trends appear to work against the goal of building integrated architectures from multi-vendor components. Our desired outcomes appear to be those of the Semantic Web as well, so we have some hope that these trends will begin to reverse in the future.
http://www.w3.org/2007/01/wos-papers/wrsa
Pt-I-1

Iros: Change the views/users/edit.html.erb content to

<h1>Editing user</h1>

<% form_for(@user) do |f| %>
  <%= f.error_messages %>
  <p>
    <label for="user_name">Name: </label>
    <%= f.text_field :name, :size => 40 %>
  </p>
  <p>
    <label for="user_password">Password: </label>
    <%= f.password_field :password, :size => 40 %>
  </p>
  <p>
    <label for="user_password_confirmation">Confirm: </label>
    <%= f.password_field :password_confirmation, :size => 40 %>
  </p>
  <p>
    <%= f.submit "Update" %>
  </p>
<% end %>

<%= link_to 'Show', @user %> | <%= link_to 'Back', users_path %>

gackd: The method posted by Iros is what I first did. It's the obvious easy solution. What I'm wondering is: how do you do this in a way that the user has to enter their current password to change their password? I tried adding a field to edit.html.erb the way shown above, but Rails always complained about it. I sort of got it to work with fields_for, like this:

<% fields_for :current_password do |f| %>
  <%= f.password_field :current_password %>
<% end %>

That ended up in the params hash as params[:current_password]. That sucks. I decided to see if I could use User#authenticate with @user.name and that params[][] mess to check whether the user's current password was correct, and then update to the new password provided. It was ugly but seemed to work okay. Any suggestions?

Jinyoung: To gackd: you can use the password_field_tag function.

<p>
  <%= label_tag 'Current password' %>:
  <%= password_field_tag :current_password, '', :size => 40 %>
</p>

I also suggest the following additions to Iros's code. In user.rb:

class User < ActiveRecord::Base
  # ...
  validates_presence_of :name, :password
  # ...

And in the view:

# ...
<p>
  <label for="user_name">Name: </label>
  <%= f.text_field :name, :size => 40, :value => @user.name %>
</p>
# ...

Grazybom: Iros's solution does not work for me; it changes only the view layout, but nothing happens on the server side. It does not update the hashed_password. Here is my solution.
First, I added a new method to the user model:

def self.update_user(params, salt)
  # params: {name, password_confirmation}
  user = {}
  unless params[:password_confirmation].blank?
    hashed_confirmation_password = User.encrypted_password(params[:password_confirmation], salt)
    user[:name] = params[:name]
    user[:hashed_password] = hashed_confirmation_password
    user[:salt] = salt
  end
  return user
end

and, second, I changed the update method in users_controller.rb:

def update
  @user = User.find(params[:id])
  user = User.update_user(params[:user], @user[:salt])
  return redirect_to(:action => :index) if user.blank?
  respond_to do |format|
    if @user.update_attributes(user)
      ..............................

Now it saves an updated password if the field was changed. But, and there is always a "but", I cannot find a way to make validation work if I add password and password_confirmation fields. If I create a new user with User.new, then validation (name, password) works; but if I use User.find, it does not.

syborg: Let me explain my solution. Possibly there are more efficient ones, but Convention over Configuration asks you to memorize or have a lot of documentation and/or experience, and that's not my case. So, here is what I did. First, I made a new edit.html.erb exactly like Iros's. Second, I modified one line and added another in the update method:

def update
  @user = User.find(params[:id])
  respond_to do |format|
    if (@user.update_attributes(params[:user]) if !params[:user][:password].blank?) # modified line
      flash[:notice] = "User #{@user.name} was successfully updated."
      format.html { redirect_to :action => :index }
      format.xml { head :ok }
    else
      flash[:notice] = "User #{@user.name} needs a password" # added line
      format.html { render :action => "edit" }
      format.xml { render :xml => @user.errors, :status => :unprocessable_entity }
    end
  end
end

I suppose it is not the nicest code; more than that, I'm sure there is a better way to do it.
But it works. BTW, does anybody know why "validate :password_non_blank" in the user model doesn't work here, when it did for the 'new' action?
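The check gackd describes, verifying the current password before accepting a new one, can be sketched in plain Ruby outside Rails. Everything here, including the class shape and the SHA1-of-password-plus-salt scheme, is illustrative of the approach rather than the book's actual code:

```ruby
require 'digest/sha1'

class User
  attr_reader :name, :hashed_password, :salt

  def initialize(name, password, salt)
    @name = name
    @salt = salt
    @hashed_password = User.encrypted_password(password, salt)
  end

  # Mirrors the salted-hash helper referred to in the thread; the exact
  # hashing scheme here is illustrative.
  def self.encrypted_password(password, salt)
    Digest::SHA1.hexdigest(password + salt)
  end

  # Only update the password when the supplied current password matches,
  # and the new password is confirmed and non-empty.
  def change_password(current_password, new_password, confirmation)
    return false unless User.encrypted_password(current_password, salt) == hashed_password
    return false unless new_password == confirmation && !new_password.empty?
    @hashed_password = User.encrypted_password(new_password, salt)
    true
  end
end

user = User.new('gackd', 'secret', 'some-salt')
user.change_password('wrong', 'newpass', 'newpass')   # => false, wrong current password
user.change_password('secret', 'newpass', 'newpass')  # => true, hash is updated
```

In a controller this would correspond to reading params[:current_password] (from password_field_tag, as Jinyoung suggests) and refusing the update when the check fails.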
https://pragprog.com/wikis/wiki/Pt-I-1/version/8
Still new at this, especially graphics. I'm working on a 3D graphics project, and found nice samples for mat3D.py, which is available from. The program includes this code:

"""OpenGL-based 3d surface plot"""
# last updated 12/10/2006

from OpenGL.GL import *
from OpenGL.GLUT import *
from OpenGL.Tk import *
import Tkinter
import numpy as N
import tkFileDialog

-snip-

def make_plot(self, colors=Colors):
    """Draw a plot in a Tk OpenGL render window."""
    # Some api in the chain is translating the keystrokes to this octal string
    # so instead of saying: ESCAPE = 27, we use the following.
    ESCAPE = '\033'
    # Number of the glut window.
    window = 0
    # Create render window (tk)
    f = Tkinter.Frame()
    f.pack(side='top')
    self.offset = 0.05
    self.textlength, self.xticks, self.yticks, self.zticks = self.GetFormatMaxTextLength()
    # todo: each tick own offset
    o = Opengl(width=640, height=480, double=1, depth=1)
    o.redraw = self.redraw

-snip-

which yields the following error:

Traceback (most recent call last):
  File "/home/browerg/text.py", line 35, in <module>
    o = Opengl(None, width = 400, height = 200, double = 1, depth = 1)
NameError: name 'Opengl' is not defined

Searching the web led me to text.py at. The program includes this code:

#!/usr/bin/env python
from OpenGL.GL import *
from OpenGL.Tk import *
from logo import *

-snip-

import Tkinter, sys
o = Opengl(None, width = 400, height = 200, double = 1, depth = 1)

-snip-

which gave the same error:

Traceback (most recent call last):
  File "/home/browerg/text.py", line 35, in <module>
    o = Opengl(None, width = 400, height = 200, double = 1, depth = 1)
NameError: name 'Opengl' is not defined

I'm running Python 2.5, Tk 8.4, and Tcl 8.4 on Ubuntu 7.10. Hardware is an i686 laptop. What am I missing? Thanks in advance. George

Hi, I'm the maintainer for glChess in the gnome-games package, and we're having a problem with the name of the GLU error exception class on Gentoo.
On my Ubuntu 7.10 box the following works:

>>> import OpenGL.GLU
>>> OpenGL.GLU.GLUError

but on Gentoo it fails; they expect:

>>> import OpenGL.GLU
>>> OpenGL.GLU.GLUerror

What is the correct name for the exception class? Thanks, --Robert

See:

Hi. I am using the latest PyOpenGL for Windows, and when I type "import OpenGL.GL" in Python, it says that pkg_resources is missing. I have no clue what I am supposed to do. Please can you help me?

I would like to use a single vertex buffer object to contain information about vertices, normals, and colors. According to the GL spec for glNormalPointer, glVertexPointer, and glColorPointer:

If a non-zero named buffer object is bound to the GL_ARRAY_BUFFER target (see glBindBuffer) while a normal/vertex/color array is specified, pointer is treated as a byte offset into the buffer object's data store. Also, the buffer object binding (GL_ARRAY_BUFFER_BINDING) is saved as normal/vertex/color array client-side state (GL_NORMAL_ARRAY_BUFFER_BINDING).

Is there any way to do this currently? PyOpenGL seems to expect an actual sequence for the pointer value. If there isn't a way of doing this already, I'll poke around and see if I can't make a patch to support this alternative behavior in my spare time. Any suggestions about how to go about this patch for PyOpenGL? (I'm fairly swift with Python, minimal experience with ctypes, and moderate OpenGL experience.) --Jason

Hi all, I am new to PyOpenGL and want to use 2D graphics to create rectangles to represent some information. Is there any 2D graphics library that is supported by pygame and would be a better option than using PyOpenGL? Thanks, Sibtey Mehdi

Is there a simple example in PyOpenGL that uses numpy to generate a display list of vertices? If so, where can I find it?
https://sourceforge.net/p/pyopengl/mailman/pyopengl-users/?viewmonth=200802&style=flat
When using Redux, you may have come across a scenario where you'd like one action creator to dispatch multiple actions. There are a few reasons you'd want to do this, but let's consider the problem of handling a submit event from a form. In response, you'll want to do multiple things. For example: RESET your form's view model, POST your form's data to the server, and also NAVIGATE to another route. So should you dispatch separate actions for each of these behaviours, or instead dispatch a single action which is handled by each of the applicable reducers? Actually, this is a trick question; with the appropriate store middleware, it is possible to do both at once! How does this work? Well, no matter what you pass to dispatch, it is still a single action. Even if your action is an array of objects, or a function which can then create more action objects!

Dispatching functions which dispatch actions

But wait a moment. If we were to take the example of an action creator which returns a function for the redux-thunk middleware, aren't we definitely calling dispatch multiple times?

function submit(data) {
  return function(dispatch) {
    dispatch({
      type: 'NAVIGATION/NAVIGATE',
      location: {name: 'documentEdit', params: {id: data.id}},
    })
    dispatch({
      type: 'DOCUMENT_VIEW/RESET',
      id: data.id,
    })
    dispatch({
      type: 'DOCUMENT_DATA/POST',
      data,
    })
  }
}

Indeed we are – but that doesn't change the fact that what you originally dispatched is just a single function. All that is happening here is that redux-thunk then dispatches more actions as it processes your function.
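Under the hood, neither middleware is magic. The sketch below is illustrative, using a hand-rolled store stand-in rather than the published redux-thunk or redux-multi source, but it shows the essential mechanism: each middleware inspects what was dispatched and either expands it or passes it along.

```javascript
// Middleware have the shape: store => next => action => result.

// The essence of redux-thunk: call functions, pass everything else on.
const thunk = store => next => action =>
  typeof action === 'function'
    ? action(store.dispatch, store.getState)
    : next(action)

// The essence of redux-multi: fan arrays out into individual dispatches.
// Re-dispatching through store.dispatch means nested thunks also work.
const multi = store => next => action =>
  Array.isArray(action)
    ? action.map(a => store.dispatch(a))
    : next(action)

// A toy store, just enough to exercise the middleware chain.
function createToyStore(reducer, middlewares) {
  let state = reducer(undefined, { type: '@@INIT' })
  const base = action => { state = reducer(state, action); return action }
  const store = { getState: () => state, dispatch: base }
  const chain = middlewares.map(m => m(store))
  store.dispatch = chain.reduceRight((next, mw) => mw(next), base)
  return store
}

// Record every action type the reducer sees.
const reducer = (state = [], action) => state.concat(action.type)
const store = createToyStore(reducer, [thunk, multi])

// One dispatch, three state transitions: a plain action, then a thunk
// dispatched from inside the same array.
store.dispatch([
  { type: 'DOCUMENT_VIEW/RESET' },
  dispatch => dispatch({ type: 'DOCUMENT_DATA/POST' }),
])

console.log(store.getState())
// ['@@INIT', 'DOCUMENT_VIEW/RESET', 'DOCUMENT_DATA/POST']
```

The key point: whatever shape the dispatched value takes, the store only ever receives one value per dispatch call; the middleware decide how to unfold it.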
Dispatching arrays

Things are a little more obvious if you use the redux-multi middleware to handle arrays of action objects:

function submit(data) {
  return [
    {
      type: 'NAVIGATION/NAVIGATE',
      location: {name: 'documentEdit', params: {id: data.id}},
    },
    {
      type: 'DOCUMENT_VIEW/RESET',
      id: data.id,
    },
    {
      type: 'DOCUMENT_DATA/POST',
      data,
    },
  ]
}

In this case, you're clearly only dispatching one array object, even if the action objects it contains are then individually dispatched by the appropriate middleware.

Gotchas

The one thing to be careful of is that one call to store.dispatch may now result in many calls to your store's subscribers. Sometimes this is what you actually want. But more likely, you only care about the last state from a given dispatch. In that case, batch up your listeners using the redux-batched-subscribe store enhancer:

import { createStore, applyMiddleware } from 'redux'
import reduxThunk from 'redux-thunk'
import reduxMulti from 'redux-multi'
import { batchedSubscribe } from 'redux-batched-subscribe'

// Add middleware to allow our action creators to return functions and arrays
const createStoreWithMiddleware = applyMiddleware(
  reduxThunk,
  reduxMulti,
)(createStore)

// Ensure our listeners are only called once, even when one of the above
// middleware calls the underlying store's `dispatch` multiple times
const createStoreWithBatching = batchedSubscribe(
  fn => fn()
)(createStoreWithMiddleware)

const reducer = function(state = 0, action) {
  return state + 1
}

// Create a store with our application reducer
const store = createStoreWithBatching(reducer)

store.subscribe(function() { console.log(store.getState()) })

store.dispatch([
  { type: 'FOO' },
  { type: 'BAR' },
])

// Output:
// 3

So is it OK to dispatch multiple actions from an action creator? Yes! There is absolutely nothing wrong with doing so. Don't worry, you're fine.

But what if I want to dispatch actions in listeners? Now you're getting into "let's have a good think about this" territory.
Dispatching actions in listeners can cause fun problems like infinite loops and terrible performance. But these problems aside, dispatching actions in listeners can still certainly be useful. For example, you might want to fetch data based on the route you arrive at, not the route that you think you will. So wouldn't it be great if there was a safe way to do so? And wouldn't you believe it, there actually is! My solution is a pattern called actors, and there is a guide to it coming really soon. Make sure to join my newsletter, or you might miss out! And to sweeten the deal, in return for your e-mail you'll immediately receive three print-optimised PDF cheatsheets – for React (see preview), ES6 and JavaScript promises! Looking forward to hearing from you!

Related Projects
- See multiple dispatch in action in Unicorn Standard's Redux/React Boilerplate

Hello. I have been dispatching actions from other actions in an application and so far no problems. But after reading your article I am thinking that maybe I am doing an anti-pattern. This is more or less what I do in an action:

export function(payload) {
  return function(dispatch) {
    return addRecordPromise.then(actionUpdateListRecords())
  }
}

What do you think?

Hi Ivan, Don't worry – there is nothing anti-patternish about this. Dispatching actions from action creators is OK – after all, that is why they're called action creators! It is dispatching actions after actions have been processed which is frowned upon. For example, in a Redux `subscribe` function.

Hi James, I've just read your latest articles, and thanks for sharing all this stuff! About the topic of this post and the fact that dispatching multiple actions ends up in only one render (if I get it right), how would you handle the case of fetching some data and providing info about the loading status? Let's say just displaying a spinner when the fetching starts and hiding it when it's done… If you render just once, it doesn't work.
I'm using the promise middleware of Redux, which allows us to dispatch 3 actions when fetching data: REQUEST, SUCCESS, FAILURE. I modified this middleware so that it always returns a promise, even if the action creator was not a promise. It allows me to do things like this in my component:

buyVideoHandler (assetId, priceId) {
  const { dispatch, video } = this.props
  const payload = { priceId, productId: video._id, assetId }
  dispatch(buyVideo(payload)).then(() => {
    return dispatch(closeModal())
  }).then(() => {
    return dispatch(cleanCheckout())
  }).then(() => {
    this.context.history.pushState(null, `/${this.props.locale}/${Routes.PATH.JOURNEY_VIEW}`)
  }).catch((err) => {
    debug('Error while buying video', err)
  })
}

Here buyVideo makes a call to an API endpoint; closeModal and cleanCheckout are just synchronous actions, but they still return promises so that I can chain them. I prefer chaining the actions here (instead of inside the first action) because perhaps I don't want to chain all these actions every time the buyVideo action is called, but only in this specific component. What do you think about that? Thanks, Boris.

Hi Boris, I think I could probably make this a little clearer in the article, but what the batching does is ensure that all of the actions dispatched from a single `dispatch` call in one tick result in only a single call to the handler function passed to subscribe. So in your example, most of the dispatched items will result in separate renders. Since closeModal and cleanCheckout are synchronous, they should be batched into a single render call. I believe this is the behaviour you expect?

Hi James, sorry for this late reply. You're right! The expected behavior would be just one rendering after all the actions are done. I'll try to use the redux-multi and redux-batched-subscribe middlewares for that! Thank you.

Great write up!
I'm trying to use the redux-multi method of dispatching an array of action creators, and it seems to work except for one small oddity: my redux-logger middleware is reporting that the encapsulating array is itself an action, which gets dispatched and logged as "undefined" before the actual action creators themselves get dispatched. Is there a way to have it ignore the encapsulating array?

This is another alternative: redux-batched-actions

Dan Abramov suggests handling it this way:

This may make it in:
http://jamesknelson.com/can-i-dispatch-multiple-actions-from-redux-action-creators/
dns dynamically updated

Nov 21, 2011 5:50 AM (in response to lluca40)
The underlying DNS server of Mac OS X Server is called BIND and does support this; however, Apple's admin tools do not provide a means of configuring it. So the answer for most people is no, it cannot be done. However, have a look at this article and see if it helps you.

Nov 21, 2011 9:54 AM (in response to John Lockwood)
Thanks for your answer. I had a look at the article you mention. I wasn't able to find publicView.conf.apple in Lion Server. Looking for the configuration file gave me named.conf, which should be the standard BIND conf file. I modified the entries for allow-update, from none to any. No luck. Is there something to enable on a client workstation 10.4, 10.5, 10.6, 10.7? Moreover, I don't find any log file; setting the log level to "information" gives me nothing ("the selected log file does not exist"). Thanks, Luca

Nov 22, 2011 2:07 AM (in response to lluca40)
I believe the author of that article is the same Hoffman as the MrHoffman who frequents these forums. Hopefully he will step in and answer your query further. If you don't get a reply from him in a day or two you could try posting a message with his user name in the subject.

Nov 22, 2011 5:16 AM (in response to lluca40)
Luca, what do you imagine "updating" DNS will do when a workstation is switched on? DNS is a way of assigning names to IP addresses and does not need to be updated when computers appear on the network. Perhaps you are thinking of DHCP, which assigns configuration such as an IP address to the workstations when they are turned on. You can configure DHCP to assign the IP address that corresponds to a DNS entry by telling the DHCP server what MAC address a workstation's network adapter has assigned to it. If you configure both DHCP and DNS correctly, you can address your computers by their names.
That said, Mac OS X does a pretty good job of advertising the host name (computer name) via Bonjour, so it may not be necessary for you to set this up if all your computers work via Bonjour; you just browse the network in Finder and you can see all the names. The DHCP/DNS configuration I'm describing is most useful when you need to run services that need to be accessible from outside your network, but even then simply setting a static IP for your workstation should be sufficient. It's also the standard way of doing things when working with Unix hosts.

Nov 22, 2011 5:27 AM (in response to Damon Allen Davison)
There are two different forms of Dynamic DNS. The one Luca is referring to is when you have devices on a network which get their IP address via DHCP, so the address is dynamic and changing, but you still want to use a DNS hostname for that specific machine. Normally DNS hostnames always point to the same static IP address. If you have a suitable DNS server, it can link the device on the network to that hostname and change the IP address to match any change in address issued to the device by the DHCP server. As I mentioned, the DNS server Apple use can do this, but Apple themselves do not provide a means to configure it.

For your information, the second form of Dynamic DNS is where you have a single public internet IP address which is also dynamic (issued by your ISP); it is possible to use an external service to link a hostname to that dynamic address, and then people on the Internet can use that unchanging hostname to access your computer even though your IP address is potentially changing. DynDNS.org is an example of this.

Nov 22, 2011 5:45 AM (in response to John Lockwood)
Thanks for your answers. The environment I'm speaking about is an "enterprise" one: 250 OS X machines (range 10.4-10.7). It comes from loosely connected sites, and now I'm able to get a solid intranet infrastructure.
Of course, IPs are provided via DHCP. No Bonjour; I have networks connected via routers and I need a central DNS. What I need is a machine getting its IP, network mask, gateway, etc. via DHCP and then telling my DNS its IP and its name; DNS should automatically update the related records. Dynamic DNS (e.g. dyndns.org) is not an issue in this context. Moreover, I don't really like to explicitly write MAC addresses in the DNS; I need something more flexible. Frankly, I'm quite disappointed by this choice from Apple, i.e. inhibiting record updates. I've worked almost 14 years with a Windows infrastructure (15000 workstations distributed all over Italy) working this way; I would like to achieve the same behavior. Thanks again, discussion is always useful.

Nov 22, 2011 5:57 AM (in response to lluca40)
I would agree it is disappointing Apple do not provide a built-in Dynamic DNS solution; I have actually suggested it to them in the past. However, at this point they could still be considered to not yet be addressing the Enterprise market. For what it's worth, it is perfectly possible to use Macs and Mac servers with a non-Mac DNS server. Things are getting a bit silly; it is getting to a state where, in anything other than the smallest and simplest setup, you need to use non-Mac servers, and I say this as a dedicated Mac fan. While the likely amount of sales and revenue versus the costs of server products might make Apple think it is not worth it, they are wrong on every count. Firstly, I know how they could solve the hardware side at effectively no cost while still having control (i.e. not allowing other companies to run Mac OS X Server on their hardware); secondly, the whole point of Mac servers was to provide a better solution for Macs. Windows servers, while they can be used for Mac clients, are not Mac friendly and have definitely non-Mac-friendly licensing terms.
Dec 7, 2011 2:25 AM (in response to lluca40)
I guess there isn't much to do about my problem. I found an article explaining how to couple DHCP and DNS to work together to update DNS records, but I would have to use services not provided by Lion Server. So, a final word from people more expert than me: is it true? I really can't have this service working as I need? Anyway, is a 10.7 Lion client ready to work with, let's say, a Microsoft DNS server? Is there some tuning needed to work with it? Thanks again, Luca

Dec 7, 2011 10:52 AM (in response to lluca40)
Perhaps I'm inexperienced with the whole Dynamically Updating DNS thing, but I just don't see the need for it other than ease of finding which computers are which on a network. ...and even that can be cryptic when a computer's name is e011034d2a... Top that off with an amazingly bloated DNS record set! Short answer to your question, Luca... Here's what I did in my lab. I set up a Windows server with DNS. I had it set up to allow recursion to my Mac DNS server. The Mac server was then set up to receive all records that the Windows server set up. This way will allow you to run OD on your Mac server and AD on your PC server with the Golden Triangle. The DHCP server was on the Mac, and was set up so that the first DNS entry provided went to the PC server. The second DNS entry pointed to the Mac. I didn't like it, but it worked. For some, two servers is one too many, and for others, two isn't enough. -Graham

Dec 16, 2011 7:50 AM (in response to gracoat)
I had a similar issue. I tend to change it via the command line, but all my changes were overwritten when some Lovely Innocent Operator from Heaven used the Server Admin tool and saved any DNS change...
So I did some research here: first I added these secondary servers to the "Nameservers" box inside the DNS zone (in both zones: direct and reverse). My problem was, when I enabled one "allow transfer" in one zone (for example, the direct DNS zone), ALL OTHER checkboxes got unchecked!!! (in this example, the reverse zone). So it seems to me like Server Admin is only populating the checkboxes changed in Server Admin, and it's not reading the configuration file to see if there was a previous definition with "allow-transfer" (as it does when Server Admin loads... weird). So as a workaround you can do:

1) UNCHECK all "allow transfer" checkboxes in Server Admin
2) SAVE (and quit, if paranoia is hitting you so hard)
3) CHECK all needed checkboxes (in my case, the direct and reverse zones) WITHOUT SAVING till you have checked all.
4) SAVE!

That's the way we got it done; hope that helps! And excuse my confusing explanation; it's not a good day for me, my mind is asking for beach time, hope you understand.

Mar 26, 2012 5:40 AM (in response to John Lockwood)
Hello, first of all nice to see that I'm not the only human on this planet wondering about this topic. A while ago I stopped reading Apple's official documents about server management. First of all, useful information is only available up to version 10.6. I'm also brand new to Apple's world, using it at home starting from OS version 10.7. I have one server and three clients, and yes, I have separated my home network into VLANs. When I was still in the Windows world at home, I never gave a thought to having VLANs in use, as it made no difference, but with Mac OS Server I run from one problem into the next. The problems are many: finding printers, configuring Time Machine over VLANs, configuring RADIUS clients, and so on. Bonjour should help to easily fix all these problems by working automatically.
Nice for not so experienced users, but it's such a pity that Apple does not provide any advanced tools or guides to get things done without this bloody Bonjour. This company earns so much money; why are they not developing Enterprise-suitable server tools? I can't use things like Bonjour in a multi-site company if it even causes trouble at home. The ability of Windows clients to update their own DNS records, or of DHCP servers to do this on behalf of non-Windows clients, was just great. I can't even tell how often I open the Server Admin DHCP console just to look up an IP address. I won't use third-party tools; with the amount of money I spent on those devices I expect Apple to provide the tools to make me happy. If not, maybe only M$ is your friend then. Very sorry, but true. Cheers, Robert

Jun 4, 2012 6:09 AM (in response to lluca40)
is macserver.office.internal DHCP is turned off in "Server Admin" and we use the Macports isc-dhcp implementation DNS is turnedupdate 3. Edit /opt/local/etc/dhcp/dhcpd.conf
Message was edited by: gilcelli

Jun 5, 2012 1:34 AM (in response to gilcelli)
via OS X Lion Server "Server" app to "macserver.office.internal"
- Set the internal IP address to 192.168.64.0/24, e.g. macserver.office.internal IP address: 192.168.64.100
- Set DHCP range from 192.168.64.190 to 192.168.64.250
- DHCP is turned off in "Server Admin" and we use the Macports isc-dhcp implementation
- Edit DNS settings and start:

Note that most of the files are installed in the /opt/local/ directory. The dhcpd binary is installed in /opt/local/bin/
-update
The output should be written to /var/named/dhcp-update.key. Don't forget to set permissions to "read-only" for root:

#sudo chmod ugo-w /var/named/dhcp-update.key

For example my key looks like:

#sudo cat /var/named/dhcp-update.key
key DHCP-UPDATE-KEY {
  algorithm hmac-md5;
  secret "a9hXeJ31ALVsW/19Rx9OXQ==";
};

5.
Edit /opt/local/etc/dhcp/dhcpd.conf

A good tutorial on how to set up a DHCP server with automagically updating DNS is here. So my /opt/local/etc/dhcp/dhcpd.conf looks like this:

# You need the next line or you won't actually be a DHCP server!
authoritative;

# DDNS stuff - these are the bits that get your DHCP server talking with your DNS server
ddns-update-style interim;
ddns-updates on;
ddns-ttl 600;
server-identifier macserver.local;
ddns-domainname "office.internal.";
ddns-rev-domainname "64.168.192.in-addr.arpa.";

# this is the file with your shared key in it
#include "/var/named/dhcp-update.key";
key DHCP-UPDATE-KEY {
  algorithm hmac-md5;
  secret "a9hXeJ31ALVsW/19Rx9OXQ==";
};

# this generates a client's DNS name from the hostname they give or the leased IP address
# ddns-hostname = pick-first-value(ddns-hostname, option host-name, binary-to-ascii(10,8, "-", leased-address));

# Normal DHCP stuff
option domain-name "office.internal";
option domain-name-servers 192.168.64.100;
option ip-forwarding off;
#default-lease-time 600;
#max-lease-time 7200;
# New lease-time
default-lease-time 86400;
max-lease-time 86400;

# My Network - this is the set of addresses that you're handing out
subnet 192.168.64.0 netmask 255.255.255.0 {
  range 192.168.64.190 192.168.64.250;
  option broadcast-address 192.168.64.255;
  option subnet-mask 255.255.255.0;
  option routers 192.168.64.1;
  allow unknown-clients;
  allow client-updates;
  zone office.internal. {
    primary 192.168.64.100;
    key DHCP-UPDATE-KEY;
  }
  zone 64.168.192.in-addr.arpa. {
    primary 192.168.64.100;
    # this key name matches the name you gave it in the key file
    key DHCP-UPDATE-KEY;
  }
}

6. Setup DNS Service (but don't start it yet)
- Start up DNS normally via "Server Admin" and add the primary zone: here "office.internal", and
- add the nameserver hostname: macserver.office.internal
The reverse DNS will automatically be added by "Server Admin". Save it, but don't start it yet, since we need to configure /etc/named.conf first.

7.
Edit /etc/named.conf

To allow DNS to update its hostnames, edit /etc/named.conf:
- add the dnssec key at the top of the file, like here
- add the line allow-update { key DHCP-UPDATE-KEY; };

#cat /etc/named.conf
key DHCP-UPDATE-KEY {
  algorithm hmac-md5;
  secret "a9hXeJ31ALVsW/19Rx9OXQ==";
};
options {
  directory "/var/named";
  allow-transfer { none; };
};
acl "com.apple.ServerAdmin.DNS.public" {
  localhost;
  localnets;
};
logging {
  channel _default_log {
    file "/Library/Logs/named.log";
    severity info;
    print-time yes;
  };
  category "default" { "_default_log"; };
};
view "com.apple.ServerAdmin.DNS.public" {
  zone "office.internal" IN {
    type master;
    file "db.office.internal";
    allow-transfer { none; };
    allow-update { key DHCP-UPDATE-KEY; };
  };
  zone "64.168.192.in-addr.arpa" IN {
    type master;
    file "db.64.168.192.in-addr.arpa";
    allow-transfer { none; };
    allow-update { key DHCP-UPDATE-KEY; };
  };
  allow-recursion { com.apple.ServerAdmin.DNS.public; };
};

8. Reboot and start DNS Service from the "Server Admin" app:
Reboot OS X Lion Server and check that dhcpd from Macports is running (with command /opt/local/bin/daemondo
Start DNS Service with "Server Admin" and normally it should work ;-)

Log files to watch:
DHCP: /opt/local/var/db/dhcpd/dhcpd.leases
See if you get a journal file for DNS (.jnl) in /var/named/

Hope this helps (someone)... This is the complete edit (my previous post was not saved since the Discussions site switched to maintenance mode... grrr...)

Jun 27, 2012 11:46 AM (in response to gilcelli)
Hello gilcelli, thank you first of all for your very long and good explanation of how to get this whole thing working. I see that this is not something you can do quickly with Apple's standard tools, but it is also not too complicated. Because of lack of time I have to put this off to a later date, but for sure I will try this one day (in a test environment). It's just a pity that Apple does not support this natively.
They have the Server Admin Tool for those services, and all the configuration pages in there are almost empty. It's quite confusing when you come from the Windows world and just expect those things to be present. Thanks again for this great article once more! Cheers, Robert
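For anyone verifying a setup like gilcelli's, BIND's nsupdate tool can exercise the allow-update path by hand, independent of DHCP. The host names, addresses and key below are the ones from the posts above; the test record name is my own invention. Note that older nsupdate versions default the key algorithm to hmac-md5, while newer ones want it spelled out (`key hmac-md5:DHCP-UPDATE-KEY …`). Fed a command file like this, `nsupdate ddns-test.txt` should add the record, and a .jnl journal file should then appear under /var/named/:

```
key DHCP-UPDATE-KEY a9hXeJ31ALVsW/19Rx9OXQ==
server 192.168.64.100
zone office.internal.
update add testbox.office.internal. 600 A 192.168.64.199
send
```

If the server refuses the update, the named log configured above (/Library/Logs/named.log) is the first place to look.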
https://discussions.apple.com/message/16798192
Hey, I have been trying to pass a class variable to another class but I have no clue how to do this... Here is the basic code I have been fighting with:

#include <iostream>
using namespace std;

class Text
{
public:
    char words[80] = "This is a text line";
};

class Line
{
public:
    void showtext(void)
    {
        cout << words;
    }
};

int main()
{
    Line line;
    line.showtext();
    cin.get();
    return 0;
}

But apparently I got the following errors:

ISO C++ forbids initialization of member 'text'
error: making 'text' static
invalid in-class initialization of static data member of non-integral type 'char[50]'
In member function 'void Line::showtext()':
'words' was not declared in this scope

I understand the last error (but I have no clue how to pass char 'text' to class 'Line'). But I don't understand the first 3 errors; I mean, why would it forbid initialization of 'text'? I hope you guys can help me figure out what my problem is and how to pass a variable from one class to another. Thanks in advance.
http://cboard.cprogramming.com/cplusplus-programming/117001-passing-variable-one-class-another.html
Live-testing changes in OpenShift clusters

I have been hacking on the runc container runtime. So how do I test my changes in an OpenShift cluster? One option is to compose a machine-os-content release via coreos-assembler. Then you can deploy or upgrade a cluster with that release. Indeed, this approach is necessary for testing installation and upgrades. It also seems useful for publishing modified versions for other people to test. But it is a heavyweight and time-consuming option. For development I want a more lightweight approach. In this post I'll demonstrate how to use the rpm-ostree usroverlay and rpm-ostree override replace commands to test changes in a live OpenShift cluster.

Background

OpenShift runs on CoreOS. CoreOS uses OSTree to manage the filesystem. Most of the filesystem is immutable. When upgrading, a new filesystem is prepared before rebooting the system. The old filesystem is preserved, so it is easy to roll back. So I can't just log onto an OpenShift node and replace /usr/bin/runc with my modified version.

Nevertheless, I have seen references to the rpm-ostree usroverlay command. It is supposed to provide a writable overlayfs on /usr, so that you can test modifications. Changes are lost upon reboot, but that's fine for testing. There's also the rpm-ostree override replace … command. This command works on the level of RPM packages. It allows you to install new packages or replace or remove packages. Changes persist across reboots, but it is easy to roll back to the pristine state of the current CoreOS release. The rest of this article explores how to use these two commands to apply changes to the cluster.

usroverlay via debug container (doesn't work)

I first attempted to use rpm-ostree usroverlay in a node debug pod.

% oc debug node/worker-a
Starting pod/worker-a-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.128.2
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# rpm-ostree usroverlay
Development mode enabled. A writable overlayfs is now mounted on /usr.
All changes there will be discarded on reboot.
sh-4.4# touch /usr/bin/foo
touch: cannot touch '/usr/bin/foo': Read-only file system

The rpm-ostree usroverlay command succeeded, but /usr remained read-only. The debug container has its own mount namespace, which was unaffected. I guess that I need to log into the node directly to use the writable /usr overlay. Perhaps it is also necessary to execute rpm-ostree usroverlay as an unconfined user (in the SELinux sense). I restarted the node to begin afresh:

sh-4.4# reboot
Removing debug pod ...

usroverlay via SSH

For the next attempt, I logged into the worker node over SSH. The first step was to add the SSH public key to the core user's authorized_keys file. Roberto Carratalá's helpful blog post explains how to do this. I will recap the critical bits. SSH keys can be added via MachineConfig objects, which must also specify the machine role (e.g. worker). The Machine Config Operator will only add keys to the core user. Multiple keys can be specified, across multiple MachineConfig objects; all the keys in matching objects will be included.

I don't have direct network access to the worker node. So how could I log in over SSH? I generated a key in the node debug shell, and will log in from there!
sh-4.4# ssh-keygen
...
SHA256:jAmv…NMnY root@worker-a
sh-4.4# cat ~/.ssh/id_rsa.pub
ssh-rsa AAAA…4OU= root@worker-a

The following MachineConfig adds the SSH key for user core:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: ssh-authorized-keys-worker
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa AAAA…40U= root@worker-a

I created the MachineConfig:

% oc create -f machineconfig-ssh-worker.yaml
machineconfig.machineconfiguration.openshift.io/ssh-authorized-keys created

In the node debug shell, I observed that the Machine Config Operator applied the change after a few seconds. It did not restart the worker node. My key was added alongside a key defined in some other MachineConfig.

sh-4.4# cat /var/home/core/.ssh/authorized_keys
ssh-rsa AAAA…jjNV devenv
ssh-rsa AAAA…4OU= root@worker-a

Now I could log in over SSH:

sh-4.4# ssh core@$(hostname)
The authenticity of host 'worker-a (10.0.128.2)' can't be established.
ECDSA key fingerprint is SHA256:LUaZOleqVFunmLCp4/E1naIQ+E5BpmVp0gcsXHGacPE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'worker-a,10.0.128.2' (ECDSA) to the list of known hosts.
Red Hat Enterprise Linux CoreOS 48.84.202106231817-0
Part of OpenShift 4.8, RHCOS is a Kubernetes native operating system managed by the Machine Config Operator (`clusteroperator/machine-config`).
WARNING: Direct SSH access to machines is not recommended; instead, make configuration changes via `machineconfig` objects:
---
[core@worker-a ~]$

The user is unconfined, and I can see the normal, read-only (ro) /usr mount (but no overlay):

[core@worker-a ~]$ id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
…KZ4V50/upper,workdir=/var/tmp/ostree-unlock-ovl.KZ4V50/work)

I executed rpm-ostree usroverlay via sudo.
After that, a read-write (rw) overlay filesystem is visible:

[core@worker-a ~]$ sudo rpm-ostree usroverlay
Development mode enabled. A writable overlayfs is now mounted on /usr.
All changes there will be discarded on reboot.
…TCPM50/upper,workdir=/var/tmp/ostree-unlock-ovl.TCPM50/work)

And it is indeed writable. I made a copy of the original runc binary, then installed my modified version:

[core@worker-a ~]$ sudo cp /usr/bin/runc /usr/bin/runc.orig
[core@worker-a ~]$ sudo curl -Ss -o /usr/bin/runc \

Digression: use a buildroot

The runc executable I installed in the previous step didn't work. I had built it on my workstation, against a too-new version of glibc. The OpenShift node (which was running RHCOS 4.8, based on RHEL 8.4) was unable to link runc, and therefore it could not run any container workloads. I was able to SSH in from another node and reboot, discarding the transient change in the usroverlay and restoring the node to a functional state.

All of this is obvious in hindsight: you have to build the program for the environment in which it will be executed. In my case, it was easiest to do this via Brew or Koji. I cloned the dist-git repository (via the fedpkg or rhpkg tool), created patches and updated the runc.spec file. Then I built the SRPM (.src.rpm) and started a scratch build in Brew. After the build completed, I made the resulting .rpm publicly available, so that it could be fetched from the OpenShift cluster.

override replace via node debug container

I now have my modified runc in an RPM package, so I can use rpm-ostree override replace to install it. In a debug node on the host:

sh-4.4# rpm-ostree override replace \

Downloading ''... done!
Checking out tree eb6dd3b…
…
runc 1.0.0-97.rhaos4.8.gitcd80260.el8 -> 1.0.0-98.rhaos4.8.gitcd80260.el8
Run "systemctl reboot" to start a reboot

rpm-ostree downloaded the package and prepared the updated OS.
Per the advice, the update is not active yet; I need to reboot:

sh-4.4# rpm -q runc
runc-1.0.0-97.rhaos4.8.gitcd80260.el8.x86_64
sh-4.4# systemctl reboot
sh-4.4# exit
sh-4.2#
Removing debug pod ...

After reboot I started a node debug container and verified that the modified version of runc is visible:

% oc debug node/worker-a
Starting pod/worker-a-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.128.2
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# rpm -q runc
runc-1.0.0-98.rhaos4.8.gitcd80260.el8.x86_64

And the fact that the debug container is working proves that the modified version of runc isn't completely broken! Testing the new functionality is a topic for a different post, so I'll leave it at that.

Listing and resetting overrides

rpm-ostree status --booted lists the current base image and any overrides that have been applied:

sh-4.4# rpm-ostree status --booted
State: idle
BootedDeployment:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a23adde268dc8937ae293594f58fc4039b574210f320ebdac85a50ef40220dd
    CustomOrigin: Managed by machine-config-operator
    Version: 48.84.202106231817-0 (2021-06-23T18:21:06Z)
    ReplacedBasePackages: runc 1.0.0-97.rhaos4.8.gitcd80260.el8 -> 1.0.0-98.rhaos4.8.gitcd80260.el8

To reset an override for a specific package, run rpm-ostree override reset $PKG:

sh-4.4# rpm-ostree override reset runc
Staging deployment... done
Freed: 1.1 GB (pkgcache branches: 0)
Downgraded: runc 1.0.0-98.rhaos4.8.gitcd80260.el8 -> 1.0.0-97.rhaos4.8.gitcd80260.el8
Run "systemctl reboot" to start a reboot

To reset all overrides, execute rpm-ostree reset:

sh-4.4# rpm-ostree reset
Staging deployment... done
Freed: 54.8 MB (pkgcache branches: 0)
Downgraded: runc 1.0.0-98.rhaos4.8.gitcd80260.el8 -> 1.0.0-97.rhaos4.8.gitcd80260.el8
Run "systemctl reboot" to start a reboot

Discussion

I achieved my goal of installing a modified runc executable on an OpenShift node.
There were two approaches:

rpm-ostree usroverlay creates a writable overlay on /usr. The overlay disappears at reboot, which is fine for my testing needs. This technique doesn’t work from a node debug container; you have to log in over SSH, which requires additional steps to add SSH keys.

rpm-ostree override replace overrides a particular package RPM. The change takes effect after reboot and is persistent. It is easy to roll back or reset the override. This technique does not require SSH login; it works fine in a node debug container.

Because I needed to build my package in a RHEL 8.4 / RHCOS 4.8 buildroot, I used Brew. The build artifacts are RPMs, so rpm-ostree override replace is the most convenient option for me.

Both options apply changes per node. After confirming with CoreOS developers: there is currently no way to roll out a package override cluster-wide or to a defined group of nodes (e.g. to MachineConfigPool/worker via a MachineConfig). So for now, you either have to apply changes/overrides on specific nodes, or build the whole machine-os-content image and upgrade the cluster. As a container runtime developer, my sweet spot is in a gulf between the existing options. I can tolerate this mild annoyance on the assumption that it discourages messing around in production environments.

In the meantime, now that I have worked out how to install my modified runc onto worker nodes, I will get on with testing it!
https://frasertweedale.github.io/blog-redhat/posts/2021-06-29-openshift-live-changes.html
SciChart® the market leader in Fast WPF Charts, WPF 3D Charts, and iOS Chart & Android Chart Components

Hi,

I would like to draw a polygon over a heatmap using an inherited class of UniformHeatMapDataSeries. An example using FastHeatMapRenderableSeries was described here: , but it is now deprecated since SciChart v5. Instead, I tried to declare an inherited class of CustomRenderableSeries as shown here:

public class PolygonRenderableSeries : CustomRenderableSeries
{
    protected override void Draw(IRenderContext2D renderContext, IRenderPassData renderPassData)
    {
        base.Draw(renderContext, renderPassData);
    }
}

What I need now is to pass a list of polygons to this class in order to draw them directly. I know how to draw a polygon, but I don’t know how to override the renderPassData to add polygons to it. Could you help me with that?

Best regards,

Hi Valentin,

Thanks for your inquiry, and sorry for the late reply. We have implemented a set of DrawingTools which include BrushAnnotation. Please take a look at the corresponding documentation for more info:

The approach from the link you mentioned () should work fine too. You just have to change the base class from FastHeatMapRenderableSeries to FastUniformHeatmapRenderableSeries. You can find more info regarding Heatmap Chart Types here:

Hope this helps.

With best regards,
Oleksandr
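For what it's worth, a common pattern for this kind of custom series (sketched below — this is not official SciChart guidance) is to not touch renderPassData at all: instead, expose the polygon list as a property on the series, and inside Draw() use the coordinate calculators carried by the pass data to convert data coordinates to pixels. The method names CreatePen and DrawLines, and the coordinate-calculator calls, are assumptions here and should be checked against the SciChart v5 API reference:

```
// Sketch only -- verify CreatePen/DrawLines and the coordinate
// calculator API against the SciChart v5 documentation.
public class PolygonRenderableSeries : CustomRenderableSeries
{
    // Polygons in data coordinates, supplied by the caller.
    public List<Point[]> Polygons { get; set; } = new List<Point[]>();

    protected override void Draw(IRenderContext2D renderContext, IRenderPassData renderPassData)
    {
        base.Draw(renderContext, renderPassData);

        var xCalc = renderPassData.XCoordinateCalculator;
        var yCalc = renderPassData.YCoordinateCalculator;

        using (var pen = renderContext.CreatePen(Colors.Red, true, 2f))
        {
            foreach (var polygon in Polygons)
            {
                // Convert each vertex from data space to pixel space.
                var pixelPoints = polygon
                    .Select(p => new Point(xCalc.GetCoordinate(p.X), yCalc.GetCoordinate(p.Y)))
                    .ToList();

                // Close the ring and draw its outline.
                pixelPoints.Add(pixelPoints[0]);
                renderContext.DrawLines(pen, pixelPoints);
            }
        }
    }
}
```

With this shape, the caller just assigns `series.Polygons = myPolygonList;` and the series redraws them on every render pass, positioned correctly over the heatmap.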
https://www.scichart.com/questions/wpf/drawing-a-polygon-over-a-heatmap?tab=answers&sort=active
CRYPTTAB(5) crypttab CRYPTTAB(5)

crypttab - Configuration for encrypted block devices

/etc/crypttab

The /etc/crypttab file describes encrypted block devices that are set up during system boot.

Empty lines and lines starting with the "#" character are ignored. Each of the remaining lines describes one encrypted block device. Fields are delimited by white space.

Each line is in the form

    volume-name encrypted-device key-file options

The first two fields are mandatory, the remaining two are optional.

Setting up encrypted block devices using this file supports four encryption modes: LUKS, TrueCrypt, BitLocker and plain. The four fields of /etc/crypttab are defined as follows:

1. The first field contains the name of the resulting volume with decrypted data; its block device is set up below /dev/mapper/.

2. The second field contains a path to the underlying block device or file, or a specification of a block device via "UUID=" followed by the UUID.

3. The third field specifies an absolute path to a file with the encryption key. Optionally, the path may be followed by ":" and an /etc/fstab style device specification (e.g. starting with "LABEL=" or similar); in which case the path is taken relative to the specified device's file system root. If the field is not present or is "none" or "-", a key file named after the volume to unlock (i.e. the first column of the line), suffixed with .key, is automatically loaded from the /etc/cryptsetup-keys.d/ and /run/cryptsetup-keys.d/ directories, if present. Otherwise, the password has to be manually entered during system boot. For swap encryption, /dev/urandom may be used as key file, resulting in a randomized key.

If the specified key file path refers to an AF_UNIX stream socket in the file system, the key is acquired by connecting to the socket and reading it from the connection. This allows the implementation of a service to provide key information dynamically, at the moment when it is needed. For details see below.

4.
The fourth field, if present, is a comma-delimited list of options. The supported options are listed below.

Six different mechanisms for acquiring the decryption key or passphrase unlocking the encrypted volume are supported. Specifically:

1. Most prominently, the user may be queried interactively during volume activation (i.e. typically at boot), asking them to type in the necessary passphrase(s).

2. The (unencrypted) key may be read from a file on disk, possibly on removable media. The third field of each line encodes the location, for details see above.

3. The (unencrypted) key may be requested from another service, by specifying an AF_UNIX file system socket in place of a key file in the third field. For details see above and below.

4. The key may be acquired via a PKCS#11 compatible hardware security token or smartcard. In this case an encrypted key is stored on disk/removable media, acquired via AF_UNIX, or stored in the LUKS2 JSON token metadata header. The encrypted key is then decrypted by the PKCS#11 token with an RSA key stored on it, and then used to unlock the encrypted volume. Use the pkcs11-uri= option described below to use this mechanism.

5. Similarly, the key may be acquired via a FIDO2 compatible hardware security token (which must implement the "hmac-secret" extension). In this case a (during enrollment) randomly generated key is stored on disk/removable media, acquired via AF_UNIX, or stored in the LUKS2 JSON token metadata header. The random key is hashed via a keyed hash function (HMAC) on the FIDO2 token, using a secret key stored on the token that never leaves it. The resulting hash value is then used as key to unlock the encrypted volume. Use the fido2-device= option described below to use this mechanism.

6. Similarly, the key may be acquired via a TPM2 security chip.
In this case a (during enrollment) randomly generated key — encrypted by an asymmetric key derived from the TPM2 chip's seed key — is stored on disk/removable media, acquired via AF_UNIX, or stored in the LUKS2 JSON token metadata header. Use the tpm2-device= option described below to use this mechanism.

For the latter five mechanisms the source for the key material used for unlocking the volume is primarily configured in the third field of each /etc/crypttab line, but may also be configured in /etc/cryptsetup-keys.d/ and /run/cryptsetup-keys.d/ (see above) or in the LUKS2 JSON token header (in the case of the latter three). Use the systemd-cryptenroll(1) tool to enroll PKCS#11, FIDO2 and TPM2 devices in LUKS2 volumes.

The following options may be used in the fourth field of each line:

cipher= Specifies the cipher to use. See cryptsetup(8) for possible values and the default value of this option. A cipher with unpredictable IV values, such as "aes-cbc-essiv:sha256", is recommended. Embedded commas in the cipher specification need to be escaped by preceding them with a backslash, see example below.

header= Use a detached (separated) metadata device or file where the LUKS header is stored. Optionally, the path may be followed by ":" and an /etc/fstab device specification (e.g. starting with "UUID=" or similar); in which case, the path is relative to the device file system root. The device gets mounted automatically for LUKS device activation duration only.

keyfile-erase If enabled, the specified key file is erased after the volume is activated or when activation fails. This is in particular useful when the key file is only acquired transiently before activation (e.g. via a file in /run/, generated by a service running before activation), and shall be removed after use. Defaults to off.

keyfile-timeout= Specifies the timeout for the device on which the key file resides and falls back to a password if it could not be mounted. See systemd-cryptsetup-generator(8) for key files on external devices.

luks Force LUKS mode.
When this mode is used, the following options are ignored since they are provided by the LUKS header on the device: cipher=, hash=, size=.

bitlk Decrypt BitLocker drive. Encryption parameters are deduced by cryptsetup from the BitLocker header.

noauto This device will not be added to cryptsetup.target. This means that it will not be automatically unlocked on boot, unless something else pulls it in. In particular, if the device is used for a mount point, it'll be unlocked automatically during boot, unless the mount point itself is also disabled with noauto.

nofail This device will not be a hard dependency of cryptsetup.target. It'll still be pulled in and started, but the system will not wait for the device to show up and be unlocked, and boot will not fail if this is unsuccessful. Note that other units that depend on the unlocked device may still fail. In particular, if the device is used for a mount point, the mount point itself also needs to have the nofail option, or the boot will fail if the device is not unlocked successfully.

offset= Start offset in the backend device, in 512-byte sectors. This option is only relevant for plain devices.

plain Force plain encryption mode.

read-only, readonly Set up the encrypted block device in read-only mode.

same-cpu-crypt Perform encryption using the same CPU that IO was submitted on. The default is to use an unbound workqueue so that encryption work is automatically balanced between available CPUs. This requires kernel 4.0 or newer.

submit-from-crypt-cpus Disable offloading writes to a separate thread after encryption. There are some situations where offloading write requests from the encryption threads to a dedicated thread degrades performance significantly. The default is to offload write requests to a dedicated thread because it benefits the CFQ scheduler to have writes submitted using the same context. This requires kernel 4.0 or newer.

no-read-workqueue Bypass dm-crypt internal workqueue and process read requests synchronously.
The default is to queue these requests and process them asynchronously. This requires kernel 5.9 or newer.

no-write-workqueue Bypass dm-crypt internal workqueue and process write requests synchronously. The default is to queue these requests and process them asynchronously. This requires kernel 5.9 or newer.

sector-size= Specifies the sector size in bytes.

tmp= The encrypted block device will be prepared for use as a temporary file system and formatted using mkfs(8). Takes a file system type as argument, such as "ext4", "xfs" or "btrfs". If no argument is specified, defaults to "ext4".

headless= Takes a boolean argument, defaults to false. If true, never query interactively for the password/PIN. Useful for headless systems.

verify If the encryption password is read from console, it has to be entered twice to prevent typos.

password-echo=yes|no|masked Controls whether to echo passwords or security token PINs that are read from console. Takes a boolean or the special string "masked". The default is password-echo=masked. If enabled, the typed characters are echoed literally. If disabled, the typed characters are not echoed in any form, the user will not get feedback on their input. If set to "masked", an asterisk ("*") is echoed for each character typed. Regardless of which mode is chosen, if the user hits the tabulator key ("↹") at any time, or the backspace key ("⌫") before any other data has been entered, then echo is turned off.

pkcs11-uri= Takes either the special value "auto" or an RFC7512 PKCS#11 URI[1] pointing to a private RSA key which is used to decrypt the encrypted key specified in the third column of the line. This is useful for unlocking encrypted volumes through PKCS#11 compatible security tokens or smartcards. See below for an example how to set up this mechanism for unlocking a LUKS2 volume with a YubiKey security token. If specified as "auto" the volume must be of type LUKS2 and must carry PKCS#11 security token metadata in its LUKS2 JSON token section.
In this mode the URI and the encrypted key are automatically read from the LUKS2 JSON token header. Use systemd-cryptenroll(1) as simple tool for enrolling PKCS#11 security tokens or smartcards in a way compatible with "auto". In this mode the third column of the line should remain empty (that is, specified as "-"). The specified URI can refer directly to a private RSA key stored on a token or alternatively just to a slot or token, in which case a search for a suitable private RSA key will be performed. In this case if multiple suitable objects are found the token is refused. The encrypted key configured in the third column of the line is passed as is (i.e. in binary form, unprocessed) to RSA decryption. The resulting decrypted key is then Base64 encoded before it is used to unlock the LUKS volume. Use systemd-cryptenroll --pkcs11-token-uri=list to list all suitable PKCS#11 security tokens currently plugged in, along with their URIs. Note that many newer security tokens that may be used as PKCS#11 security token typically also implement the newer and simpler FIDO2 standard. Consider using fido2-device= (described below) to enroll it via FIDO2 instead. Note that a security token enrolled via PKCS#11 cannot be used to unlock the volume via FIDO2, unless also enrolled via FIDO2, and vice versa. fido2-device= Takes either the special value "auto" or the path to a "hidraw" device node (e.g. /dev/hidraw1) referring to a FIDO2 security token that implements the "hmac-secret" extension (most current hardware security tokens do). See below for an example how to set up this mechanism for unlocking an encrypted volume with a FIDO2 security token. If specified as "auto" the FIDO2 token device is automatically discovered, as it is plugged in. FIDO2 volume unlocking requires a client ID hash (CID) to be configured via fido2-cid= (see below) and a key to pass to the security token's HMAC functionality (configured in the line's third column) to operate. 
If not configured and the volume is of type LUKS2, the CID and the key are read from LUKS2 JSON token metadata instead. Use systemd-cryptenroll(1) as a simple tool for enrolling FIDO2 security tokens, compatible with this automatic mode, which is only available for LUKS2 volumes. Use systemd-cryptenroll --fido2-device=list to list all suitable FIDO2 security tokens currently plugged in, along with their device nodes.

This option implements the following mechanism: the configured key is hashed via the HMAC keyed hash function the FIDO2 device implements, keyed by a secret key embedded on the device. The resulting hash value is Base64 encoded and used to unlock the LUKS2 volume. As it should not be possible to extract the secret from the hardware token, it should not be possible to retrieve the hashed key given the configured key — without possessing the hardware token.

Note that many security tokens that implement FIDO2 also implement PKCS#11, suitable for unlocking volumes via the pkcs11-uri= option described above. Typically the newer, simpler FIDO2 standard is preferable.

fido2-cid= Takes a Base64 encoded FIDO2 client ID to use for the FIDO2 unlock operation. If specified, but fido2-device= is not, fido2-device=auto is implied. If fido2-device= is used but fido2-cid= is not, the volume must be of LUKS2 type, and the CID is read from the LUKS2 JSON token header. Use systemd-cryptenroll(1) for enrolling a FIDO2 token in the LUKS2 header compatible with this automatic mode.

fido2-rp= Takes a string, configuring the FIDO2 Relying Party (rp) for the FIDO2 unlock operation. If not specified "io.systemd.cryptsetup" is used, except if the LUKS2 JSON token header contains a different value. It should normally not be necessary to override this.

tpm2-device= Takes either the special value "auto" or the path to a device node (e.g. /dev/tpmrm0) referring to a TPM2 security chip.
See below for an example how to set up this mechanism for unlocking an encrypted volume with a TPM2 chip. Use tpm2-pcrs= (see below) to configure the set of TPM2 PCRs to bind the volume unlocking to. Use systemd-cryptenroll(1) as simple tool for enrolling TPM2 security chips in LUKS2 volumes. If specified as "auto" the TPM2 device is automatically discovered. Use systemd-cryptenroll --tpm2-device=list to list all suitable TPM2 devices currently available, along with their device nodes. This option implements the following mechanism: when enrolling a TPM2 device via systemd-cryptenroll on a LUKS2 volume, a randomized key unlocking the volume is generated on the host and loaded into the TPM2 chip where it is encrypted with an asymmetric "primary" key pair derived from the TPM2's internal "seed" key. Neither the seed key nor the primary key are permitted to ever leave the TPM2 chip — however, the now encrypted randomized key may. It is saved in the LUKS2 volume JSON token header. When unlocking the encrypted volume, the primary key pair is generated on the TPM2 chip again (which works as long as the chip's seed key is correctly maintained by the TPM2 chip), which is then used to decrypt (on the TPM2 chip) the encrypted key from the LUKS2 volume JSON token header saved there during enrollment. The resulting decrypted key is then used to unlock the volume. When the randomized key is encrypted the current values of the selected PCRs (see below) are included in the operation, so that different PCR state results in different encrypted keys and the decrypted key can only be recovered if the same PCR state is reproduced. tpm2-pcrs= Takes a "+" separated list of numeric TPM2 PCR (i.e. "Platform Configuration Register") indexes to bind the TPM2 volume unlocking to. This option is only useful when TPM2 enrollment metadata is not available in the LUKS2 JSON token header already, the way systemd-cryptenroll writes it there. 
If not used (and no metadata in the LUKS2 JSON token header defines it), defaults to a list of a single entry: PCR 7. Assign an empty string to encode a policy that binds the key to no PCRs, making the key accessible to local programs regardless of the current PCR state. try-empty-password= Takes a boolean argument. If enabled, right before asking the user for a password it is first attempted to unlock the volume with an empty password. This is useful for systems that are initialized with an encrypted volume with only an empty password set, which shall be replaced with a suitable password during first boot, but after activation. x-systemd.device-timeout= Specifies how long systemd should wait for a device to show up before giving up on the entry. The argument is a time in seconds or explicitly specified units of "s", "min", "h", "ms". x-initrd.attach Setup this encrypted block device in the initramfs, similarly to systemd.mount(5) units marked with x-initrd.mount. Although it's not necessary to mark the mount entry for the root file system with x-initrd.mount, x-initrd.attach is still recommended with the encrypted block device containing the root file system as otherwise systemd will attempt to detach the device during the regular system shutdown while it's still in use. With this option the device will still be detached but later after the root file system is unmounted. All other encrypted block devices that contain file systems mounted in the initramfs should use this option. At early boot and when the system manager configuration is reloaded, this file is translated into native systemd units by systemd-cryptsetup-generator(8). If the key file path (as specified in the third column of /etc/crypttab entries, see above) refers to an AF_UNIX stream socket in the file system, the key is acquired by connecting to the socket and reading the key from the connection. The connection is made from an AF_UNIX socket name in the abstract namespace, see unix(7) for details. 
The source socket name is chosen according to the following format:

    NUL RANDOM "/cryptsetup/" VOLUME

In other words: a NUL byte (as required for abstract namespace sockets), followed by a random string (consisting of alphanumeric characters only), followed by the literal string "/cryptsetup/", followed by the name of the volume to acquire the key for. Example (for a volume "myvol"):

Example 1.

    \0d7067f78d9827418/cryptsetup/myvol

Services listening on the AF_UNIX stream socket may query the source socket name with getpeername(2), and use it to determine which key to send, allowing a single listening socket to serve keys for a multitude of volumes.

If the PKCS#11 logic is used (see above) the socket source name is picked in identical fashion, except that the literal string "/cryptsetup-pkcs11/" is used (similarly for FIDO2: "/cryptsetup-fido2/" and TPM2: "/cryptsetup-tpm2/"). This is done so that services providing key material know that not a secret key is requested but an encrypted key that will be decrypted via the PKCS#11/FIDO2/TPM2 logic to acquire the final secret key.

Example 2. /etc/crypttab example

Set up four encrypted block devices: one using LUKS for normal storage, another one for usage as a swap device, and two TrueCrypt volumes. For the fourth device, the option string is interpreted as two options "cipher=xchacha12,aes-adiantum-plain64", "keyfile-timeout=10s".

    ,cipher=xchacha12\,aes-adiantum-plain64

Example 3. Yubikey-based PKCS#11 Volume Unlocking Example

The PKCS#11 logic allows hooking up any compatible security token that is capable of storing RSA decryption keys for unlocking an encrypted volume. Here's an example how to set up a Yubikey security token for this purpose on a LUKS2 volume, using ykman(1) from the yubikey-manager project to initialize the token and systemd-cryptenroll(1) to add it to the LUKS2 volume:

    # Destroy any old key on the Yubikey (careful!)
    ykman piv reset
    # Generate a new private/public key pair on the device.
    ykman piv generate-key -a RSA2048 9d pubkey.pem
    # Create a self-signed certificate from the public key and store it on the
    # device. The "subject" should be an arbitrary user-chosen string to identify
    # the token with.
    ykman piv generate-certificate --subject "Knobelei" 9d pubkey.pem
    # We don't need the public key anymore, let's remove it. Since it is not
    # security sensitive we just do a regular "rm" here.
    rm pubkey.pem
    # Enroll the freshly initialized security token in the LUKS2 volume. Replace
    # /dev/sdXn by the partition to use (e.g. /dev/sda1).
    sudo systemd-cryptenroll --pkcs11-token-uri=auto /dev/sdXn
    # Test: Let's run systemd-cryptsetup to test if this all worked.
    sudo /usr/lib/systemd/systemd-cryptsetup attach mytest /dev/sdXn - pkcs11-uri=auto
    # If that worked, let's now add the same line persistently to /etc/crypttab,
    # for the future.
    sudo bash -c 'echo "mytest /dev/sdXn - pkcs11-uri=auto" >> /etc/crypttab'

A few notes on the above:

• We use RSA2048, which is the longest key size current Yubikeys support.

• We use Yubikey key slot 9d, since that's apparently the keyslot to use for decryption purposes, see documentation[2].

Example 4. FIDO2 Volume Unlocking Example

The FIDO2 logic allows using any compatible FIDO2 security token that implements the "hmac-secret" extension for unlocking an encrypted volume. Here's an example how to set up a FIDO2 security token for this purpose for a LUKS2 volume, using systemd-cryptenroll(1):

    # Enroll the security token in the LUKS2 volume. Replace /dev/sdXn by the
    # partition to use (e.g. /dev/sda1).
    sudo systemd-cryptenroll --fido2-device=auto /dev/sdXn
    # Test: Let's run systemd-cryptsetup to test if this worked.
    sudo /usr/lib/systemd/systemd-cryptsetup attach mytest /dev/sdXn - fido2-device=auto
    # If that worked, let's now add the same line persistently to /etc/crypttab,
    # for the future.
    sudo bash -c 'echo "mytest /dev/sdXn - fido2-device=auto" >> /etc/crypttab'

Example 5. TPM2 Volume Unlocking Example

The TPM2 logic allows using any TPM2 chip supported by the Linux kernel for unlocking an encrypted volume.
Here's an example how to set up a TPM2 chip for this purpose for a LUKS2 volume, using systemd-cryptenroll(1):

    # Enroll the TPM2 security chip in the LUKS2 volume, and bind it to PCR 7
    # only. Replace /dev/sdXn by the partition to use (e.g. /dev/sda1).
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/sdXn
    # Test: Let's run systemd-cryptsetup to test if this worked.
    sudo /usr/lib/systemd/systemd-cryptsetup attach mytest /dev/sdXn - tpm2-device=auto
    # If that worked, let's now add the same line persistently to /etc/crypttab,
    # for the future.
    sudo bash -c 'echo "mytest /dev/sdXn - tpm2-device=auto" >> /etc/crypttab'

SEE ALSO: systemd(1), systemd-cryptsetup@.service(8), systemd-cryptsetup-generator(8), systemd-cryptenroll(1), fstab(5), cryptsetup(8), mkswap(8), mke2fs(8)

1. RFC7512 PKCS#11 URI
2. see

CRYPTTAB(5)

Pages that refer to this page: systemd-cryptenroll(1), systemd-cryptsetup-generator(8), systemd-cryptsetup@.service(8)
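The abstract-socket naming scheme described above means a key-supplying service can tell which volume (and which unlock logic) a connection is for purely from the peer name returned by getpeername(2). Here is a minimal sketch of that parsing step in Python — illustrative only, not part of systemd:

```python
def parse_peer_name(name: bytes):
    """Split an abstract-namespace peer name into (logic, volume).

    Expects the crypttab format: NUL + RANDOM + "/cryptsetup/" + VOLUME,
    with "/cryptsetup-pkcs11/", "/cryptsetup-fido2/" or "/cryptsetup-tpm2/"
    substituted when an encrypted (rather than plain) key is requested.
    """
    if not name.startswith(b"\0"):
        raise ValueError("not an abstract-namespace socket name")
    body = name[1:].decode("ascii")
    # Check the longer, more specific tags before the plain one.
    for tag in ("/cryptsetup-pkcs11/", "/cryptsetup-fido2/",
                "/cryptsetup-tpm2/", "/cryptsetup/"):
        head, sep, volume = body.partition(tag)
        if sep:
            return tag.strip("/"), volume
    raise ValueError("unrecognized cryptsetup peer name")

# The man page's Example 1, for volume "myvol":
print(parse_peer_name(b"\0d7067f78d9827418/cryptsetup/myvol"))
# → ('cryptsetup', 'myvol')
```

A real service would call `conn.getpeername()` on each accepted AF_UNIX connection and feed the result through a function like this to select which key to write back.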
https://man7.org/linux/man-pages/man5/crypttab.5.html
Library for reading OSM XML/GZ/BZ2/PBF files

Project description

osmiter

A simple library for parsing OSM data. Supports simple OSM XML files as well as OSM GZ, OSM BZ2 and OSM PBF. Please be aware that osmiter uses Google's protobuf library, written in pure Python, which isn't particularly fast.

Example Usage

import osmiter

shop_count = 0

for feature in osmiter.iter_from_osm("some_osm_file.osm"):
    if feature["type"] == "node" and "shop" in feature["tag"]:
        shop_count += 1

print(f"this osm file contains {shop_count} shop nodes")

What is osmiter generating?

For each feature (node/way/relation) it yields a dict containing element attributes (like id, lat or timestamp) and 2 additional items: key "type" holding "node"/"way"/"relation" and key "tag" holding a dict with feature tags (this dict may be empty). Additionally, nodes will contain keys "lat" and "lon" with node coordinates, ways will contain key "nd" with a list of all node_ids referenced by this way, and relations contain a key "member" with a list of dicts of each member's attributes.

Almost all attributes are returned as strings, with the exception of:

- id, ref, version, changeset, uid and changeset_count → int
- lat, lon → float
- open and visible → bool
- timestamp → an aware datetime.datetime item

Data validation

osmiter performs almost no data validation, so it is possible to receive ways with no nodes, relations with no members, empty tag values, invalid coordinates, references to non-existing items, or duplicate ids※. However, several data assumptions are made:

- Each feature has an id attribute. (※) For OSM PBF files, if an object is missing an id, -1 will be assigned, per the osmformat.proto definition. This can result in multiple objects with an id equal to -1.
- Each node has to have both lat and lon defined.
- Every attribute defined in the table on attribute type conversion has to be convertible to its type.
So, id == 0x1453, changeset_count == AAAAAA, ref == 12.433 or lat == 1.23E+10 will cause an exception; the timestamp value has to be either ISO8601-compliant or epoch time represented by an integer.

- Boolean attributes are only considered truthy if they're set to true (case-sensitive). Values 1, on, yes, TRUE will all evaluate to False.

Minimum requirements for each element

Bare-minimum node:

{
    "id": int,
    "type": "node",
    "lat": float,
    "lon": float,
    "tag": Dict[str, str],  # May be empty
}

Bare-minimum way:

{
    "id": int,
    "type": "way",
    "tag": Dict[str, str],  # May be empty
    "nd": List[int],
}

Bare-minimum relation:

{
    "id": int,
    "type": "relation",
    "tag": Dict[str, str],  # May be empty
    "member": List[dict],
}

Example elements

See the corresponding OSM XML examples.

{
    "type": "node",
    "tag": {},
    "id": 298887269,
    "lat": 54.0901746,
    "lon": 12.2482632,
    "user": "SvenHRO",
    "uid": 46882,
    "visible": True,
    "version": 1,
    "changeset": 676636,
    "timestamp": datetime.datetime(2008, 9, 21, 21, 37, 45, tzinfo=datetime.timezone.utc)
}

{
    "type": "node",
    "tag": {"name": "Neu Broderstorf", "traffic_sign": "city_limit"},
    "id": 1831881213,
    "version": 1,
    "changeset": 12370172,
    "lat": 54.0900666,
    "lon": 12.2539381,
    "user": "lafkor",
    "uid": 75625,
    "visible": True,
    "timestamp": datetime.datetime(2012, 7, 20, 9, 43, 19, tzinfo=datetime.timezone.utc),
}

{
    "type": "way",
    "tag": {"highway": "unclassified", "name": "Pastower Straße"},
    "id": 26659127,
    "user": "Masch",
    "uid": 55988,
    "visible": True,
    "version": 5,
    "changeset": 4142606,
    "timestamp": datetime.datetime(2010, 3, 16, 11, 47, 8, tzinfo=datetime.timezone.utc),
    "nd": [292403538, 298884289, 261728686]
}

{
    "type": "relation",
    "tag": {
        "name": "Küstenbus Linie 123",
        "network": "VVW",
        "operator": "Regionalverkehr Küste",
        "ref": "123",
        "route": "bus",
        "type": "route"
    },
    "id": 56688,
    "user": "kmvar",
    "uid": 56190,
    "visible": True,
    "version": 28,
    "changeset": 6947637,
    "timestamp": datetime.datetime(2011, 1, 12, 14, 23, 49, tzinfo=datetime.timezone.utc),
"member": [ {"type": "node", "ref": 294942404, "role": ""}, {"type": "node", "ref": 364933006, "role": ""}, {"type": "way", "ref": 4579143, "role": ""}, {"type": "node", "ref": 249673494, "role": ""}, ] } License osmiter is provided under the MIT license, included in the license.md file. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/osmiter/
In this post, I’ll walk you through web scraping the North Dakota Oil & Gas Division website, pulling tons of great oil and gas data related to North Dakota’s Bakken shale play.

North Dakota’s Oil and Gas Division’s website is located here. Its primary mission is to regulate the drilling and production of oil and gas wells in the state of North Dakota. On its site, it offers a true treasure trove of data sets, available via its general statistics page. Its monthly production report index, in particular, offers some amazing information on a very granular level–including monthly oil, gas, and water production data on a well-by-well basis, as well as days online for each well, and operator of each well. I’ve included a snapshot of one of the monthly Excel files that I pulled from the page, shown below:

There are individual monthly .xlsx and .pdf files available for this data from 2003 to current, but managing the data pull from the internet and assembling all of the data into a single database can be quite unruly. So, to significantly speed up the process, I wrote a quick Python script that allows you to select the months you’d like to pull, and then web scrape the site to create a master Pandas dataframe that you can use to perform data analysis.

The Python script is below (also available via my GitHub account):

import requests
import xlrd
import csv
import pandas as pd
from datetime import datetime
import os


def pull_desired_files_and_create_master_df(dates):
    """
    This function web scrapes the target website, pulls all of the xlsx files
    based on the selected dates using the xlrd function, converts them to csv
    files, and reads all csv's into a master pandas dataframe that will be
    used for all subsequent calculations.
    Arguments:
        dates: list. List of all of the dates (month/year, formatted based on
            website xlsx formatting), which is looped through
    Outputs:
        master_df: Pandas dataframe. csv's of all of the files that were
            scraped, saved in the Documents folder.
""" #Define the base url we will be pulling off of base_url = "" #Create a master pandas dataframe that will hold all of the data that we want master_df=pd.DataFrame() #loop through all of the dates in the list for date in dates: desired_url=base_url+date+'.xlsx' r = requests.get(desired_url) # make an HTTP request #Open the contents of the xlsx file as an xlrd workbook workbook = xlrd.open_workbook(file_contents=r.content) #Take first worksheet in the workbook, as it's the one we'll be using worksheet = workbook.sheet_by_index(0) #Obtain the year/month date data from the worksheet, and convert the ReportDate column from float to datetime, using xlrd datemode functionality for i in range(1, worksheet.nrows): wrongValue = worksheet.cell_value(i,0) workbook_datemode = workbook.datemode year, month, day, hour, minute, second = xlrd.xldate_as_tuple(wrongValue, workbook_datemode) worksheet._cell_values[i][0]=datetime(year, month, 1).strftime("%m/%Y") #Generate a csv name to save under file_name='C:/Bakken/'+date+'.csv' #Save as a csv csv_file = open(file_name, 'w',newline='') #Create writer to csv file wr = csv.writer(csv_file) #Loop through all the rows and write to csv file for rownum in range(worksheet.nrows): wr.writerow(worksheet.row_values(rownum)) #Close the csv file csv_file.close() #Read in csv as pandas dataframe dataframe=pd.read_csv(file_name) #Append to the master dataframe master_df=master_df.append(dataframe) #Return the final master dataframes return master_df def main(): #Create a new folder called 'Bakken' to drop all of the files in newpath = r'C:\Bakken' if not os.path.exists(newpath): os.makedirs(newpath) #Pull the months that we want to process--December 2016 to January 2019 dates=['2016_12', '2017_01', '2017_02', '2017_03', '2017_04', '2017_05', '2017_06', '2017_07', '2017_08', '2017_09', '2017_10', '2017_11', '2017_12', '2018_01', '2018_02', '2018_03', '2018_04', '2018_05', '2018_06', '2018_07', '2018_08', '2018_09', '2018_10', '2018_11', 
             '2018_12', '2019_01']
    #Run through the web scraper and save the desired csv files. Create a
    #master dataframe with all of the months' data
    master_dataframe_production = pull_desired_files_and_create_master_df(dates)
    #Declare the ReportDate column as a pandas datetime object
    master_dataframe_production['ReportDate'] = pd.to_datetime(
        master_dataframe_production['ReportDate'])
    #Write the master dataframe to a master csv
    master_dataframe_production.to_csv(newpath + '\master_dataframe_production.csv')

if __name__ == "__main__":
    main()

Let’s break down what the above Python code means. From the main() block, I first create a new folder in the C:/ drive, called ‘Bakken’, to store all of the data. From there, I select the months that I wish to pull–between December 2016 and January 2019–and add them to a list. After that, I run the pull_desired_files_and_create_master_df() function. In this function, I ping the URL, retrieve the desired .xlsx file for the designated month, write it as a .csv file in the C:/Bakken folder, and then read the file’s first tab into an ongoing master pandas dataframe, called master_df. After all of the dates have been read in a loop, the master_df dataframe is returned. This dataframe contains all of our concatenated data from the spreadsheets, ready to be used for data analysis in Python.
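One detail worth flagging in the script above: calling master_df.append() inside the loop copies the full dataframe on every pass, and DataFrame.append was removed entirely in pandas 2.0. A safer pattern is to collect the monthly frames in a list and concatenate once at the end. A minimal sketch (the column names here are illustrative, not the exact columns of the production report):

```python
import pandas as pd

def combine_months(frames):
    # One concat at the end instead of repeated append calls inside the
    # loop; ignore_index renumbers the rows continuously.
    return pd.concat(frames, ignore_index=True)

# Two stand-in "monthly" frames shaped loosely like the production data.
jan = pd.DataFrame({"ReportDate": ["01/2019", "01/2019"], "Oil": [100, 250]})
feb = pd.DataFrame({"ReportDate": ["02/2019"], "Oil": [120]})
master = combine_months([jan, feb])
```

Inside pull_desired_files_and_create_master_df() this would mean appending each monthly dataframe to a list in the loop and returning combine_months() over that list at the end.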
https://techrando.com/2019/06/26/how-to-web-scrape-oil-and-gas-data-from-the-north-dakota-oil-and-gas-division-website/
The Q3Signal class can be used to send signals for classes that don't inherit QObject. More...

#include <Q3Signal>

This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.

Inherits QObject.

The Q3Signal class can be used to send signals for classes that don't inherit QObject. If you want to send signals from a class that does not inherit QObject, you can create an internal Q3Signal object to emit the signal. You must also provide a function that connects the signal to an outside object slot. This is how we used to implement signals in Qt 3's QMenuData class, which was not a QObject. In Qt 4, menus contain actions, which are QObjects. In general, we recommend inheriting QObject instead. QObject provides much more functionality.

You can set a single QVariant parameter for the signal with setValue(). Note that QObject is a private base class of Q3Signal, i.e. you cannot call any QObject member functions from a Q3Signal object.

Example:

#include <q3signal.h>

class MyClass
{
public:
    MyClass();
    ~MyClass();

    void doSomething();

    void connect(QObject *receiver, const char *member);

private:
    Q3Signal *sig;
};

MyClass::MyClass()
{
    sig = new Q3Signal;
}

MyClass::~MyClass()
{
    delete sig;
}

void MyClass::doSomething()
{
    // ... does something
    sig->activate();  // emits the signal
}

void MyClass::connect(QObject *receiver, const char *member)
{
    sig->connect(receiver, member);
}

Constructs a signal object called name, with the parent object parent. These arguments are passed directly to QObject.

Destroys the signal. All connections are removed, as is the case with all QObjects.

Emits the signal. If the platform supports QVariant and a parameter has been set with setValue(), this value is passed in the signal.

Connects the signal to member in object receiver. Returns true if the connection is successful.

See also disconnect() and QObject::connect().
Disconnects the signal from member in object receiver. Returns true if the connection existed and the disconnect was successful.

See also connect() and QObject::disconnect().

Sets the signal's parameter to value.

See also value().

Returns the signal's parameter.

See also setValue().
http://doc.qt.nokia.com/4.6-snapshot/q3signal.html#activate
Working with Ionic Native - Using Secure Storage

This post is more than 2 years old.

Today I'm reviewing another Ionic Native feature, the Secure Storage wrapper. As the plugin docs explain, this is a plugin that allows for encrypted storage of sensitive data. It follows an API similar to that of WebStorage, with a few differences. First, the plugin lets you define a 'bucket' for your data. So your app could have multiple different sets of data that are separated from each other. (The plugin refers to it as 'namespaced storage', but buckets just made more sense to me.) Second, you can't get all the keys like you can with WebStorage. That's probably related to the whole 'secure' thing, but in general, I can't imagine needing that functionality in a real application. You could also use a key that represents a list of keys. Secure Storage is a key/value storage system, and like WebStorage, you can only store strings, but you can use JSON to get around that.

With that out of the way - let's build a simple demo. I created a simple two page app to represent a login screen and main page. Let's start by looking at the first page, our login screen.

<ion-header>
  <ion-navbar>
    <ion-title>
      Secure Storage Example
    </ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <ion-list>
    <ion-item>
      <ion-label fixed>Username</ion-label>
      <ion-input></ion-input>
    </ion-item>
    <ion-item>
      <ion-label fixed>Password</ion-label>
      <ion-input></ion-input>
    </ion-item>
  </ion-list>
  <button primary block (click)="login()">Login</button>
</ion-content>

Now we'll look at the code behind this.
import {Component} from '@angular/core';
import {NavController} from 'ionic-angular';
import {LoginProvider} from '../../providers/login-provider/login-provider';
import { Dialogs } from 'ionic-native';
import {MainPage} from '../main-page/main-page';

@Component({
  templateUrl: 'build/pages/home/home.html',
  providers: [LoginProvider]
})
export class HomePage {

  public username: string;
  public password: string;
  private loginService: LoginProvider;

  constructor(public navCtrl: NavController) {
    this.loginService = new LoginProvider();
  }

  login() {
    console.log('login', this.username, this.password);
    this.loginService.login(this.username, this.password).subscribe((res) => {
      console.log(res);
      if (res.success) {
        //thx mike for hack to remove back btn
        this.navCtrl.setRoot(MainPage, null, { animate: true });
      } else {
        Dialogs.alert("Bad login. Use 'password' for password.", "Bad Login");
      }
    });
  }

}

All we've got here is a login handler that calls a provider to verify the credentials. There's one interesting part - the setRoot call you see there is used instead of navCtrl.push as it lets you avoid having a back button on the next view. Finally, let's look at the provider, even though it's just a static system.

import { Injectable } from '@angular/core';
import 'rxjs/add/operator/map';
import {Observable} from 'rxjs';
//import 'rxjs/Observable/from';

@Injectable()
export class LoginProvider {

  constructor() {}

  public login(username: string, password: string) {
    let data = {success: 1};
    if (password !== 'password') data.success = 0;
    return Observable.from([data]);
  }

}

Basically - any login with "password" as the password will be a successful login. That's some high quality security there! You can view this version of the code here:

Ok, so let's kick it up a notch. My plan with Secure Storage is to modify the code as such:

- When you login, JSON encode the username and password and store it as one value.
- When the app launches, first create the 'bucket' for the system, which will only actually create it one time.
- See if pre-existing data exists, and if so, get it, decode it, put the values in the form, and automatically submit the form.

Since I'm using a plugin, I know now that my app has to wait for Cordova's deviceReady to fire. I've got a login button in my view that I can disable until that happens. So one small change to the view is to show/hide it based on a value I'll use based on the ready status. Here is the new login button:

<button primary block (click)="login()" *ngIf="readyToLogin">Login</button>

Now let's look at the updated script. I'll share the entire update and then I'll point out the updates.

import {Component} from '@angular/core';
import {NavController, Platform} from 'ionic-angular';
import {LoginProvider} from '../../providers/login-provider/login-provider';
import { Dialogs } from 'ionic-native';
import {MainPage} from '../main-page/main-page';
import {SecureStorage} from 'ionic-native';

@Component({
  templateUrl: 'build/pages/home/home.html',
  providers: [LoginProvider]
})
export class HomePage {

  public username: string;
  public password: string;
  private loginService: LoginProvider;
  public readyToLogin: boolean;
  private secureStorage: SecureStorage;

  constructor(public navCtrl: NavController, platform: Platform) {
    this.loginService = new LoginProvider();
    this.readyToLogin = false;

    platform.ready().then(() => {
      this.secureStorage = new SecureStorage();
      this.secureStorage.create('demoapp').then(
        () => {
          console.log('Storage is ready!');

          this.secureStorage.get('loginInfo')
          .then(
            data => {
              console.log('data was ' + data);
              let {u, p} = JSON.parse(data);
              this.username = u;
              this.password = p;
              this.login();
            },
            error => {
              // do nothing - it just means it doesn't exist
            }
          );

          this.readyToLogin = true;
        },
        error => console.log(error)
      );
    });
  }

  login() {
    this.loginService.login(this.username, this.password).subscribe((res) => {
      console.log(res);
      if (res.success) {
        //securely store
        this.secureStorage.set('loginInfo',
          JSON.stringify({u: this.username, p: this.password}))
        .then(
          data => { console.log('stored info'); },
          error => console.log(error)
        );
        //thx mike for hack to remove back btn
        this.navCtrl.setRoot(MainPage, null, { animate: true });
      } else {
        Dialogs.alert('Bad login. Use \'password\' for password.', 'Bad Login', 'Ok');
        this.secureStorage.remove('loginInfo');
      }
    });
  }

}

So let's start at the top. Don't forget that your Ionic views can fire before the Cordova deviceReady event has fired. I still wish there was a simple little flag I could give to my Ionic code to say "Don't do anything until then", but until then, you can use the Platform class and the ready event. I create my Secure Storage bucket "demoapp", and in the success handler, I immediately look for the key loginInfo. Obviously on the first run it won't exist, but the bucket will be created. On the second (and onward) run, the bucket will already exist, and the data may or may not exist. If it does - I decode it, set the values, and login. That last operation was optional of course. Maybe your app will just default the values. There are a few different ways of handling this.

Finally, in the login handler I both set the value (after encoding it) and clear it based on the result of the login attempt. Notice that both calls are asynchronous, but I really don't need to wait for them, right? Therefore I treat them both as 'fire and forget' calls. They could, of course, error. And there is a very good reason why it could. In the docs, they mention that this plugin works just fine on iOS, but on Android it will only work if the user has a secure pin setup. That's unfortunate, but the plugin actually provides an additional API to bring up that setting for Android users, which is pretty cool I think. You can find the code for this version here:

How about a few final thoughts?

- While you can store a username and password, and the docs even say this, I still feel a bit wonky about doing so.
I'd maybe consider storing a token instead that could be used to automatically login just that user. And it could have an automatic timeout of some sort.
- If you read the blog post, Ionic Native: Enabling Login with Touch ID for iOS, then this plugin would be a great addition to that example.
- A bit off topic, but I would normally have added a "loading" indicator on login to let the user know what's going on. And of course, Ionic has one. I was lazy though and since my login provider was instantaneous, I didn't feel like it was crucial.

As always - let me know what you think in the comments below.

p.s. I'm loving Ionic 2, and Angular 2, and TypeScript, but wow, it is still a struggle. For this demo, I'd say 80% of my time was spent just building the first version. I'm still struggling with Observables, still struggling with Angular 2 syntax. Heck, it took me a few minutes to even just bind the form fields to values. That being said, and I know I've said this before, I still like the code in v2 more than my Angular 1 code.

Archived Comments

have you had any issues running this with IOS? I'm having issues when I try and set, get the created storage it errors. The only error I get back is the following:

{"line":71,"column":45,"sourceURL":""}

You don't see anything else in the console? You said it errors, so therefore the error must show up.
my code is:

this.secureStorage.create('storeroom')
.then(
  () => {
    this.secureStorage.set('userId', this.username)
    .then(
      data => { },
      error => { console.log('Your device issue'); }
    )
  },
  error => { console.log(error.error); }
);

Report it as a bug to the plugin author.

Question what is the main difference between ionic native storage and ionic secure storage?

There is no Ionic Native Storage. Ionic Native is a group name for plugins that have been 'wrapped' and made more 'Angular friendly'.

Can we see encrypted data in console?

Did you try?

No, I can't see because when I try to get data using this.secureStorage.get(.......) then it returns me decrypted data. So how can we see encrypted data?

I'd ask the plugin author - I believe it is stored in a platform specific manner. But yea - check on the plugin site. I just looked and the plugin docs talk about this. Thanks.

Thanks for the nice blog ive got the problem, when try to login, then ive got a runtime error with Cannot read proberty 'set' of undefindet the same happens if i typ the wrong pass, but then i got insteadt of 'set' 'remove'

TypeError: Cannot read property 'set' of undefined
at SafeSubscriber._next ()
at SafeSubscriber.__tryOrSetError ()
at SafeSubscriber.next ()
at Subscriber._next ()
at Subscriber.next ()
at ArrayObservable._subscribe ()
at ArrayObservable.Observable._trySubscribe ()
at ArrayObservable.Observable.subscribe ()
at HomePage.webpackJsonp.227.HomePage.login ()
at Object.eval [as handleEvent] (ng:///AppModule/HomePage.ngfactory.js:517:24)

Ionic Framework: 3.6.0
Ionic App Scripts: 2.1.3
Angular Core: 4.1.3
Angular Compiler CLI: 4.1.3
Node: 6.11.2
OS Platform: Windows 10
Navigator Platform: Win32
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36

Not sure I understand. You type the wrong password for my login service?
i mean, in your example, you coded the password to 'password' so if i write anything other then 'passwort' i get the error with 'remove' What remove call though? If you put in a bad password it should keep you on the login page, right? Oh wait - I think I see the issue. My logic assumes that you had already logged in once. So this is a bug in my code. It should say (in pseudo-code) if login failed, AND if I had stored info before, then remove it. how to create storage instance just once? I believe (stress, BELIEVE) the create API only makes it once and after that it's just opened. Of course, but the documentation is too light and doesn't specify it Then I'd file a bug report so they know the docs are lacking in this regard. Hi, thanks for the good summary! Do you have any idea how to properly unit test this. Is there a way I can run automated tests to make sure my credentials are saved securely? Thanks in advance :) Sorry - I haven't used this in a while. what solve this issue, same issue I'd offer the same advice, to report it to the plugin.
https://www.raymondcamden.com/2016/08/16/working-with-ionic-native-using-secure-storage
On Sat, Aug 17, 2002 at 11:16:06AM -0400, Jeff Maxson wrote:
| | Main...

Yes, that is actually very easy to do if the sender properly tags every message. For example using an exim filter file:

if $h_Subject: matches "^<kid's name>:"
then
    save /var/mail/<kids_acct>
endif

(assuming you have write permission on his mailspool; if not, then forward the message to his account instead and let exim deliver it locally, the way you would do it if you weren't the sys admin)

-D
--
In the way of righteousness there is life; along that path is immortality.
Proverbs 12:28

Attachment: pgpstLEogzai4.pgp
Description: PGP signature
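For readers who don't run exim, the routing rule above can be mimicked in a few lines of Python; the tag and mailbox paths below are placeholders, just like the <kid's name> placeholders in the filter:

```python
import re

def route_by_subject(subject, child_tag, child_mbox, default_mbox):
    # Same idea as the exim filter: if the Subject starts with
    # "<tag>:", deliver to the child's mailbox; otherwise fall through
    # to the normal mailbox.
    if re.match(r'^' + re.escape(child_tag) + r':', subject):
        return child_mbox
    return default_mbox

dest = route_by_subject("timmy: school stuff", "timmy",
                        "/var/mail/timmy", "/var/mail/parent")
```

In a real delivery setup the returned path would be handed to the MTA or a local delivery agent, which is exactly what the exim `save` command does in one step.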
https://lists.debian.org/debian-user/2002/08/msg03011.html
23 May 2011

Prior experience with Flex or ActionScript development will be helpful in understanding how SourceMate features improve productivity.

SourceMate is an add-on to Flash Builder that brings a wealth of advanced coding tools and puts extraordinary power in the hands of Flex and ActionScript developers. SourceMate has many features, but often just one feature can eliminate a day of tedious and boring coding by automating and streamlining one of the many mind-numbing tasks that we, as developers, never enjoy. This article introduces SourceMate and describes just a handful of the popular code generation, refactoring, globalization, and localization features SourceMate adds to Flash Builder 4.5. For a listing of all the notable features SourceMate provides, visit the SourceMate product page on the ElementRiver website.

The three features described in this section—generating an event class, extracting an interface, and extracting a method—provide just a taste of the full refactoring and code generation capabilities that SourceMate offers. There are many additional refactorings available, including Change Method Signature, Extract Variable, Inline Method or Variable, and Disable/Remove trace() statements among others.

Do you ever need to dispatch your own custom event? Creating an event class seems easy at first but there are lots of details and pitfalls that are easy to forget. Did you declare your event type constant? Did you make sure your read-only event properties aren't mutable? Did you create a toString() method? And most importantly, did you remember to override clone()? These are all things you're expected to do when you create your own custom event classes, and these details can make such classes a nuisance to code. The Event Class wizard in SourceMate (see Figure 1) will take care of these details for you.
The wizard is similar to other Flash Builder class wizards but it offers two additional options. First, there's a table to declare your custom event type constants. Second, there's another table to declare your event properties. When you click Finish, SourceMate will generate the necessary event class, including all the important little details. A constructor will be created that enables you to pass in the event properties. Each property will receive a getter, but not a setter, ensuring that the property is not mutable. Additionally an appropriate toString() and clone() method will be generated for you. Now creating an event class really is a piece of cake. Below is an event class generated by SourceMate:

package
{
    import flash.events.Event;

    public class CustomerEvent extends Event
    {
        public static const CUSTOMER_ADDED:String = "customerAdded";
        public static const CUSTOMER_REMOVED:String = "customerRemoved";
        public static const CUSTOMER_MODIFIED:String = "customerModified";

        private var _customer:Customer;
        private var _modifiedFields:Array;

        public function CustomerEvent(type:String, customer:Customer, modifiedFields:Array=null)
        {
            super(type, false, false);
            this._customer = customer;
            this._modifiedFields = modifiedFields;
        }

        public function get customer():Customer
        {
            return _customer;
        }

        public function get modifiedFields():Array
        {
            return _modifiedFields;
        }

        public override function clone():Event
        {
            return new CustomerEvent(type, customer, modifiedFields);
        }

        public override function toString():String
        {
            return formatToString("CustomerEvent", "customer", "modifiedFields");
        }
    }
}

Two important cornerstones of good architecture are encapsulation and polymorphism. Interfaces play a key role in enabling these two concepts. However, developers don't often have the chance to put good architecture in place until after much of the code is written.
With interfaces, it's common to have written a concrete class only to realize later that the details should have been hidden behind an interface. In other cases a developer may notice other classes that fill a similar role, and need to create a common interface. When you encounter these situations you need to do more than just create the interface class itself. You also need to change the concrete class so it implements this interface. More significantly, you need to go through the calling code and change any references to the concrete class over to the new interface class you created. This, in particular, can be a hassle. When using SourceMate's Extract Interface feature (see Figure 2), all the interface set up tasks are done for you. SourceMate will generate the interface based on the selected methods of the concrete class. Then it will modify the concrete class to implement the interface. Finally, it will go through your entire project and replace any references to the concrete class with the interface name. Instead of tediously wading through your project code, finding references to your class, and making manual updates, you can use SourceMate to do it automatically. Extract Method refactoring (see Figure 3) is one of the most useful SourceMate features for everyday coding. Extract Method takes a chunk of code from the current method and moves it into its own method. You can use this feature when you identify a piece of common code that should have its own utility method. Of course, you can do this manually, but you'll need to figure out what arguments the method is going to need, determine the method's return type, and then replace the code in the original method with a call to the new method. SourceMate performs these steps for you. 
Just select the lines of code you want to extract into the new method and SourceMate will parse the code, determine the necessary arguments for the new method, figure out the required return type, create the utility method, and finally replace all instances of the selected code with a call to the new method. It's easy and quick. To see how this capability works, consider the function loadCustomers() as it was written and before Extract Method was invoked:

public function loadCustomers(customers:Array):void
{
    for (var i:int = 0; i < customers.length; i++)
    {
        var customer:Customer = customers[i];

        var isValid:Boolean = true;
        if (customer.name == null)
            isValid = false;
        if (customer.email == null)
            isValid = false;
        if (customer.location == null)
            isValid = false;

        if (!isValid)
            continue;

        //load customer
    }
}

The code below would be selected by the developer as the target for the refactoring:

var isValid:Boolean = true;
if (customer.name == null)
    isValid = false;
if (customer.email == null)
    isValid = false;
if (customer.location == null)
    isValid = false;

After Extract Method, the selected code is moved to a separate function named isCustomerValid():

public function loadCustomers(customers:Array):void
{
    for (var i:int = 0; i < customers.length; i++)
    {
        var customer:Customer = customers[i];

        var isValid:Boolean = isCustomerValid(customer);

        if (!isValid)
            continue;

        //load customer
    }
}

private function isCustomerValid(customer:Customer):Boolean
{
    var isValid:Boolean = true;
    if (customer.name == null)
        isValid = false;
    if (customer.email == null)
        isValid = false;
    if (customer.location == null)
        isValid = false;
    return isValid;
}

SourceMate 3.0 Enterprise Edition includes a set of localization features that make it particularly valuable for any enterprise creating large multilanguage Flex applications. Fully localizing a Flex application can be a daunting task.
There are a series of steps you must perform to get your application ready to accept new translations and they're not all obvious or easy to remember. Perhaps the most difficult task is replacing all your hardcoded strings with lookups to strings in resource properties files. This can get even more painful when the strings are part of concatenating expressions (for example: "Showing " + count + " items" ). This task usually accounts for the largest amount of development time when localizing applications. Fortunately SourceMate can automate this for you. You can use SourceMate's Externalize Strings feature (see Figure 4) to pull hardcoded strings out from a class and put them into a Flex resource properties file. The hardcoded strings in code are then replaced with calls to the Flex ResourceManager class to retrieve them from the correct properties file. On large projects (which are typically the projects that get translated), performing this process manually can take days or even weeks of mind-numbing manual changes. This is not the type of code anyone looks forward to writing. Using SourceMate you can cut this time to a bare minimum and get back to writing solid and interesting code. Of course, externalizing your hardcoded strings isn't the only localization task you need to complete. You still need to create different properties files for each different locale. Furthermore, if you're dealing with existing properties files, perhaps because you're working on an update of an already localized application, you need to manage the existing data and ensure it's all still correct and valid. When working with many different properties files over time, it's very easy to run into missing translations, missing keys, unreferenced or orphaned strings, and other problems. SourceMate's Resource Bundle Editor provides features to help with these problems so you don't have to hunt through a myriad of properties files to find and fix them. 
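The consistency checks described here are easy to picture in code. The sketch below implements the "missing keys and empty values" idea over plain dictionaries; SourceMate itself is a Flash Builder plugin working on .properties files, so this Python version is only an illustration of the check, not its implementation:

```python
def missing_keys(bundles):
    # bundles maps a locale name to its {key: value} dict. Return,
    # sorted, every key that is absent or blank in at least one locale,
    # i.e. a translation someone forgot to supply.
    all_keys = set()
    for bundle in bundles.values():
        all_keys.update(bundle)
    problems = []
    for key in sorted(all_keys):
        if any(key not in b or not b[key].strip() for b in bundles.values()):
            problems.append(key)
    return problems

report = missing_keys({
    "en_US": {"greeting": "Hello", "farewell": "Bye"},
    "fr_FR": {"greeting": "Bonjour", "farewell": ""},
})
```

Here `farewell` is flagged because the French bundle left it empty, which is exactly the sort of gap that otherwise only surfaces at runtime in one locale.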
With SourceMate installed, the Resource Bundle Editor comes up automatically when you open a .properties file. The most basic functionality of the Resource Bundle Editor is to gather all the related properties files from all your locales and treat them as one logical entity. After all, for each key you really have just one logical string in the application with different translations kept in different locales. In the Resource Bundle Editor, you can see the keys for your strings, and in the values grid you can see the different translations for each string in the different locale files (see Figure 5). This grouping makes it much easier to deal with multiple properties files, but it's the advanced functions where the editor really shines. There are four advanced functions in the Resource Bundle Editor:

- Find Unused Keys compares the keys in your properties files against the ResourceManager calls in your code. It will then mark any keys that exist in your properties files but aren't retrieved in your project code.
- Show Missing Keys and Keys with Empty Values provides similar functionality. This feature will mark keys that exist in some of your properties files but not in others, or similar keys that exist but don't have any values. This helps you locate missing translations.
- Find Duplicate Keys generates a report of all duplicate keys in each locale file.
- Create New Locale File, as the name suggests, creates a new properties file (and a locale folder if necessary). The new locale file will be populated with the existing keys and optionally have automatically translated values using Google Translate.

Machine translators like Google Translate don't produce translations that you would want in your final application but they do create usable pseudo-translations. Pseudo-translations are temporary translations that are used to test the ability of the application to accept and properly show translated data.
Pseudo-translations can help expose localization problems early in the process. The most common of these problems is text truncation, as translated strings can be many times longer than their original text. The real benefit of pseudo-translations is that you can start your localization testing without having to wait for the final results from the translators. In addition to the features mentioned above, SourceMate will also help configure your Flex projects as necessary. While externalizing strings, SourceMate will scan your project's properties and offer to make any configuration changes necessary to support localization. For example, by convention localized projects should include a separate source path for locale-specific resources. This source path is typically named locale/{locale}, with the {locale} key being substituted for the current locale. If your project doesn't have this additional source path, SourceMate will offer to add it for you. Similarly, while creating a new locale file, if the Flex SDK does not contain framework resources for that new locale, SourceMate can create these for you via the copylocale utility. Without SourceMate, you have to run this command line utility manually and it's not always obvious when you need to. With SourceMate, you'll get a reminder with an option to run it right inside Flash Builder. Localizing a Flex application without SourceMate is tedious and often error prone. With SourceMate, the same task is straightforward and substantially faster. Eliminating even one monotonous coding task with SourceMate can save days of boring work and troubleshooting. When you start using the rest of the great features SourceMate brings to Flash Builder, you'll feel like you're coding on steroids. SourceMate is available for a free, 30-day trial. You can download the free trial from here. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
http://www.adobe.com/devnet/flex/articles/introducing-sourcemate3.html
solved. I'm trying to learn how to sort a linked list alphabetically using strcmp. Does anyone have any resources / a skeleton of how to do it? Thanks.

The if(pos==1) *do this* portion works. I just can't get the second half to work. I cannot figure out this function for the life of me, can anyone help, please? my struct:

typedef struct items item_t;
struct items {
    char name[32];
    float cost, weight;

TBH a switch / case in this instance is so simple and rudimentary it's hardly more work to copy / paste someone's answer from here than to just look at a tutorial and do it. But if you want a shot at...

Nevermind, I'm using fgets. Still having some minor problems with the print out loop and the read through file loop, both of which pertain to the full file not being printed out line by line....

How do you use fscanf when there's a space in the string?

I'm sorry, I'm at school right now and the connection went out. Ok so I understand how to assign things to the nodes, but how do I read properly from the file and store it? For starters, I use...

Ok here's a good question. So this is the tutorial on this site for traversing a linked list:

#include <stdio.h>
#include <stdlib.h>

struct node {
    int x;
    struct node *next;
};

int main()
{
    /*...

Well you answered the question by posting some resources, so thank you. And I don't understand storing things into linked lists as a whole. While I got it to work, I hardly understand. I guess to be...

I literally can't find any resources. The only thing I can find is people posting their code only, with help that is inconclusive regarding the functionality of the code. I'm looking for an efficient,...

Just a quick question. I recently finished an assignment regarding reading from a file and storing it into a linked list. I got it to work, but don't really understand the theory behind it, so the...
One thing that I think you're not understanding - could be wrong, is this: If you say define a variable as char, and then assign it a number, like this: char num = 26; then if I'm not mistaken... There's no need for the second void instruction() function. No worries, yeah when I was going to post to offer "help" I realized the only thing to do was just point it out. Haha! printf("The square of the distance between (%lf,%lf) and (%lf,%lf) is %lf", x1, y1, x2, y2,z); look at the end of this statement, bold text is what you're missing! Aka is this school work!^^^ also I reccommend mallock(sizeof()) Oh is this that new game you were talking about adak? haha... Besides that, your if statement isn't doing anything. There's no way weight == lb and height == ft if there's no sort of pre-existing conditions over the ft and lb variables. To start, they aren't... Holy balls. After you type a sentence and put a period, you make a space. Take my syntax as an example. Seriously, that was painful as hell to read. No I meant tomorrow, I already turned in the assignment a while ago out of frustration. Couldn't get out of an infinite loop, something was wrong with the while loop in ThrowDarts. So I'm not asking... Just PM it to me if you don't mind. :D tl;dr make a thread for help with class. Another student copies and pastes my code, makes it work for them, and tells me they linked prof to thread? and then says to contact them for tutoring -_- Unfortunately I can't use this because you're in my class. :( To actually copy the file, go to the path (as in go to my computer) and navigate to the file.
https://cboard.cprogramming.com/search.php?s=2813e785ed0e2067ea7a1f8841ccb4e8&searchid=2965418
13 June 2011 08:18 [Source: ICIS news]

SINGAPORE (ICIS)--The company shut its No 2 and No 3 ACN plants at

"We have debottlenecked the No 3 plant and added another 15,000 tonnes/year to [the capacity to raise it to] 245,000 tonnes/year during the turnaround," the source said. The company is now operating its 70,000 tonne/year No 2 ACN plant at full rate, according to the company source. Tongsuh Petrochemical can now produce 315,000 tonnes/year of ACN as compared with its previous capacity of 300,000 tonnes/year, prior to the turnaround. Its 50,000 tonne/year No 1 ACN plant was mothballed in 2002. Tongsuh Petrochemical is a wholly owned subsidiary of

For more on acrylonitri
http://www.icis.com/Articles/2011/06/13/9468700/south-koreas-tongsuh-petchem-ramps-up-production-at-no-3-acn.html
It's been a while since I blogged, but it has been a rather busy time. Lots of big projects at work that need my complete attention, and lots of personal stuff going on. Besides that I've finished the first couple of chapters of my book on Three.js for Packt, so time has been in short supply :) So, finally an update. In this update I want to show and explain a visualization experiment I've been working on when I get a couple of moments of time. The result of this can be found at the following two sites: And it looks like this. I started this because a couple of months ago I saw someone visualizing movies in the form of a barcode. Every couple of seconds a frame's 'average' color was determined, and these were put together to form a colorful barcode that nicely shows the color usage in that specific movie. I liked the result, so I wanted to reproduce this for a couple of movies I had lying around. I started out by looking at ways to play back movies in Java/Scala and analyze the video frames myself. I quickly gave up on this approach, though. I did get Java and FFmpeg working, but analyzing each frame programmatically proved a bit more work than I initially thought. So, after some looking around, I ran into boblight. Boblight is a small library that can be used to (from the site):

Its main purpose is to create light effects from an external input, such as a video stream (desktop capture, video player, tv card), an audio stream (jack, alsa), or user input (lirc, http). Currently it only handles video input by desktop capture with xlib, video capture from v4l/v4l2 devices and user input from the commandline with boblight-constant. Boblight uses a client/server model, where clients are responsible for translating an external input to light data, and boblightd is responsible for translating the light data into commands for external light controllers.

And as an added bonus it comes with an XBMC plugin!
So now I could just use my XBMC installation to play back my movies and use boblight to convert the screen to light data! Basically I needed to take the following steps to capture the light data:

- Compile, install, configure and run the boblight daemon
- Install the XBMC boblight plugin
- Play a movie

You can find information on how to compile and install boblight on their site. I had a few issues with missing libraries and headers, but some quick googling and actually reading the error messages quickly fixed this. I used the following configuration:

jos@XBMC:~$ cat /etc/boblight.conf
[global]
interface 127.0.0.1

[device]
name device1
output dd bs=1 > /home/jos/movie.out 2>&1
channels 3
type popen
interval 41700
debug off

[color]
name red
rgb FF0000

[color]
name green
rgb 00FF00

[color]
name blue
rgb 0000FF

[light]
name main
color red device1 1
color green device1 2
color blue device1 3
hscan 0 100
vscan 0 100

With this configuration the boblight daemon outputs the light information to a file with the name /home/jos/movie.out in the following (r,g,b) format:

0.282200 0.240661 0.206841
0.280939 0.239639 0.205967
0.279679 0.238619 0.205094
0.278934 0.238013 0.204573
0.276436 0.235238 0.201714
0.273940 0.232466 0.198857

With the default settings I ran a couple of experiments (see here for the first set), but the colors were a bit oversaturated and the rgb values often clipped to the maximum values. The output, though, looked nice (see further down for the explanation of how to get this):

But I wasn't completely happy with this. It looks nice, but way too bright. So after some experimenting with the saturation and some other boblight values I got better results. For instance "The Dark Knight" looked like this:

This nicely reflects the dark mood this movie sets.
The XBMC boblight configuration used for this was the following:

<settings>
    <setting id="bobdisable" value="false" />
    <setting id="hostip" value="127.0.0.1" />
    <setting id="hostport" value="19333" />
    <setting id="movie_autospeed" value="0.000000" />
    <setting id="movie_interpolation" value="true" />
    <setting id="movie_preset" value="0" />
    <setting id="movie_saturation" value="0.700000" />
    <setting id="movie_speed" value="100.000000" />
    <setting id="movie_threshold" value="20.000000" />
    <setting id="movie_value" value="2.000000" />
    <setting id="musicvideo_autospeed" value="0.000000" />
    <setting id="musicvideo_interpolation" value="false" />
    <setting id="musicvideo_preset" value="1" />
    <setting id="musicvideo_saturation" value="1.000000" />
    <setting id="musicvideo_speed" value="100.000000" />
    <setting id="musicvideo_threshold" value="0.000000" />
    <setting id="musicvideo_value" value="1.000000" />
    <setting id="networkaccess" value="false" />
    <setting id="other_misc_initialflash" value="true" />
    <setting id="other_misc_notifications" value="true" />
    <setting id="other_static_bg" value="false" />
    <setting id="other_static_blue" value="128.000000" />
    <setting id="other_static_green" value="128.000000" />
    <setting id="other_static_onscreensaver" value="false" />
    <setting id="other_static_red" value="128.000000" />
    <setting id="overwrite_cat" value="false" />
    <setting id="overwrite_cat_val" value="0" />
    <setting id="sep1" value="" />
    <setting id="sep2" value="" />
    <setting id="sep3" value="" />
</settings>

At this point I can play back a complete movie, be patient, and at the end of the movie I've got a complete set of colors at a 24.9 FPS interval for the complete movie in the format I showed previously. With this format we can now easily create the visualizations I showed earlier.
The following is my simple (very ugly) experimental Java code I used for this (I also tried doing it in realtime with JavaScript and canvas, but "The Dark Knight Rises" for instance contains over 260000 measurements and it took a while):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.imageio.ImageIO;

import org.apache.commons.io.FileUtils;

public class BobLightConverter {

    // step through the measure points
    private final static int STEP = 12;
    // how wide is the picture
    private final static int WIDTH = 1024;
    // how many rows do we print (-1 for automatic)
    private final static int HEIGHT = -1;
    // point height and width
    private final static int P_WIDTH = 1;
    private final static int P_HEIGHT = 50;

    private final static String SRC = "dark.knight.out";

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        String data = FileUtils.readFileToString(new File("/Users/jos/Desktop/" + SRC));
        String[] rows = data.split("\n");
        List<String> filteredRows = new ArrayList<String>();
        System.out.println("Total number of points: " + rows.length);

        // first filter the rows based on the steps
        int stepCount = 0;
        for (String row : rows) {
            stepCount++;
            if (stepCount % STEP == 0) {
                filteredRows.add(row);
            }
        }
        System.out.println("Filtered number of points: " + filteredRows.size());

        // next calculate how many elements we can store into the width
        int nWidth = (int) Math.ceil(WIDTH / P_WIDTH);
        System.out.println(nWidth);

        int nHeight = HEIGHT;
        if (nHeight == -1) {
            // calculate the height based on the image width, the P_WIDTH and P_HEIGHT
            nHeight = (int) Math.ceil(((filteredRows.size()) / nWidth) + 1) * P_HEIGHT;
        }

        BufferedImage image = new BufferedImage(WIDTH, nHeight, BufferedImage.TYPE_INT_RGB);

        int x = 0;
        int y = -P_HEIGHT;
        for (String row : filteredRows) {
            String[] rgb = row.split(" ");
            int r = Math.round(Float.valueOf(rgb[0]) * 255);
            int g = Math.round(Float.valueOf(rgb[1]) * 255);
            int b = Math.round(Float.valueOf(rgb[2]) * 255);

            // the size of each line
            if (x % WIDTH == 0) {
                x = 0;
                y += P_HEIGHT;
            }

            for (int i = 0; i < P_WIDTH; i++) {
                for (int j = 0; j < P_HEIGHT; j++) {
                    image.setRGB(x + i, y + j, 65536 * r + 256 * g + b);
                }
            }
            x += P_WIDTH;
        }

        File f = new File("/Users/jos/" + SRC + ".png");
        ImageIO.write(image, "PNG", f);
    }
}

Not the most complex code, but with this code I can simply state the dimensions I want and produce, at least in my eyes, great looking visualizations. That's pretty much all I wanted to write. As a final note, in the Batman Trilogy example I show three donuts created using D3.js. To create these I first used a simple Java program to create a histogram of all the colors used. These colors are first grouped together based on rgb values (to restrict the number of values) and then sorted based on their HSV value, to sort from dark to light. The output from that looks like this:

r,g,b,count
0,0,0,225
7,0,0,8
0,7,7,1
7,7,7,148
7,7,0,15
14,14,14,353

For each color the count represents how often this specific color is used in the image. This is used in D3.js to create a donut.
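The grouping step mentioned above isn't shown in the post, but it can be sketched in a few lines. This is my own rough reconstruction, not the original program: the bucket size of 7 is a guess based on the sample output (0, 7, 14, ...), and the class and method names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ColorHistogram {

    // snap each 0-255 channel value down to a bucket boundary, so
    // near-identical colors collapse into one histogram entry
    static int quantize(int v, int bucket) {
        return (v / bucket) * bucket;
    }

    // count occurrences of each quantized r,g,b triple;
    // LinkedHashMap keeps first-seen order for stable output
    static Map<String, Integer> histogram(int[][] pixels, int bucket) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (int[] p : pixels) {
            String key = quantize(p[0], bucket) + ","
                       + quantize(p[1], bucket) + ","
                       + quantize(p[2], bucket);
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        int[][] pixels = { {0, 0, 0}, {3, 2, 1}, {8, 9, 10}, {8, 8, 8} };
        // prints "0,0,0,2" then "7,7,7,2" in the r,g,b,count format above
        histogram(pixels, 7).forEach((k, v) -> System.out.println(k + "," + v));
    }
}
```

A sort of the resulting keys by HSV value (dark to light) would then give the ordering used for the donut segments.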
var pie = d3.layout.pie()
    .sort(null)
    .value(function(d) { return d.count; });

var svg = d3.select("#donut-3").append("svg")
    .attr("width", width)
    .attr("height", height)
    .append("g")
    .attr("transform", "translate(" + width / 2 + "," + height / 2 + ")");

d3.csv("data/histogram-darkknight.csv", function(error, data) {

    data.forEach(function(d) {
        d.count = parseInt(d.count);
    });

    var g = svg.selectAll(".arc")
        .data(pie(data))
        .enter().append("g")
        .attr("class", "arc");

    g.append("path")
        .attr("d", arc)
        //.style("fill", function(d) { return color(d.data.r); });
        .style("stroke-width", "0.1")
        .style("stroke", function(d) {
            var color = (d3.rgb(parseInt(d.data.r),
                                parseInt(d.data.g),
                                parseInt(d.data.b)).toString());
            return color;
        })
        .style("fill", function(d) {
            var color = (d3.rgb(parseInt(d.data.r),
                                parseInt(d.data.g),
                                parseInt(d.data.b)).toString());
            return color;
        });
});

Won't dive into the details of the code here. If you're interested though, let me know. That's it for this article. The examples can be found here: And if you want the raw data, let me know and I'll put it online somewhere.
https://dzone.com/articles/movie-color-analysis-xbmc
How to define functions/functors and split arguments?

I have a struct of fixed parameters, P, a mutable struct of variables, V, and three constructors for methods F, G, and H, where H is the composition of F and G. The constructor for F has three parts: (1) the struct that holds the parameters of F; (2) a function f(q,x) that unpacks its arguments and returns some calculation; (3) a functor-type method F(q)(x) defined on struct F. The argument tuple (q,x) is split between a typed q and some vector x. The construction of G and H is similar. The main purpose of this design is to be able to change the functional forms and the parameters while keeping the overall architecture. I've managed to make this work in simple cases, but am having difficulty when P and V hold more than one parameter. So let me review what appears to perform as intended and what doesn't.

Simple case: F is typed with a single fixed parameter a and an instance can be created with F(a0)(x0), for some concrete a0 and x0. Works.

Mixed types: G is typed with both a fixed parameter and a variable, b and β, and an instance can be created with G(b0, β0)(x0), for some concrete b0, β0 and x0.

Composition: H is typed with both P and V and the method must properly unpack and access the fields. That is, the argument tuple q holds an instance p of P and v of V, like so q = p, v. I can get a composition function h(q,x) to work, but I can't get a method defined directly on the type H to work (i.e. H(q)(x) does not work). I have tried several variations of H(q,x) without success. Trying to pick up a hint from the error message, I tried to type with ::Tuple{P,V}. I also tried to define H(q)(x) as an inner method, but that didn't work at all. Thanks for your suggestions!

"""
`P` struct to hold all fixed parameters
"""
struct P
    a :: Float64   # not used in this example
    b :: Vector{Float64}
end

function P(; a = 1.0, b = [2.0, 3.0])
    P(a, b)
end

"""
`V` mutable struct to hold all variable parameters
"""
mutable struct V
    α :: Float64
    β :: Vector{Float64}
end

function V(; α = 0.0, β = [0.0, 0.0])
    V(α, β)
end

"""
`F(x)` functor method defined on struct holding fixed parameter a
"""
struct F
    a :: Float64
end

(dummy::F)(x) = f(dummy.a, x)   # one-line functor definition

function f(a, x)
    return x.^(a)   # note the broadcasting dot
end

"""
`G(b, β, x)`
b :: Vector{Float64}
β :: Vector{Float64}
"""
struct G
    b :: Vector{Float64}
    β :: Vector{Float64}
end

function (q::G)(x)
    return g(q, x)
end

function g(q, x)
    b = q.b; β = q.β
    return sqrt.(b .* β) * x
end

"""
`H(q, x)` functor method defined as the composition of F and G,
defined on a struct typed with `p::P` (fixed parameters) and `v::V` (variables)
"""
struct H{FG}
    h :: FG
end

function (q::H)(x)
    return h(q, x)
end

function h(q, x)
    p, v = q                  # split pars/vars
    a, b, β = p.a, p.b, v.β   # unpack
    return (F(a) ∘ G(b,β))(x)
end

A few checks to see that F and G work as expected, while H does not (but h does):

# Check F:
((a,x) -> x.^(a))(2.0, 2.0)
## 4.0
F(2.0)(2.0)
## 4.0

# Check G:
((b,β,x) -> sqrt.(b.*β)*x)([2.0, 3.0], [4.0, 5.0], 2.0)
## 2-element Vector{Float64}:
##  5.656854249492381
##  7.745966692414834
G([2.0, 3.0], [4.0, 5.0])(2.0)
## 2-element Vector{Float64}:
##  5.656854249492381
##  7.745966692414834

# Create instances of P() and V()
p0 = P(a = 2.0)
## P(1.0, [2.0, 3.0])
v0 = V(β = [4.0, 5.0])
## V(0.0, [4.0, 5.0])

# Check H:
(F(2.0) ∘ G([2.0, 3.0], [4.0, 5.0]))(2.0)
## 2-element Vector{Float64}:
##  32.00000000000001
##  60.00000000000001
(F(p0.a) ∘ G(p0.b, v0.β))(2.0)
## 2-element Vector{Float64}:
##  32.00000000000001
##  60.00000000000001
h((p0, v0), 2.0)
## 2-element Vector{Float64}:
##  32.00000000000001
##  60.00000000000001
H((p0,v0))(2.0)
## ERROR: LoadError: MethodError: no method matching iterate(::H{Tuple{P, V}})
H(p0,v0)(2.0)
## ERROR: LoadError: MethodError: no method matching H(::P, ::V)
https://discourse.julialang.org/t/julia-function-composition-how-to-correctly-split-arguments/61433
im_subtract - subtracts two images

#include <vips/vips.h>

int im_subtract(in1, in2, out)
IMAGE *in1, *in2, *out;

This function calculates in1 - in2 and writes the result to the image descriptor out. The input images in1 and in2 should have the same number of channels and the same sizes; however, they can be of different types. Only the history of the image descriptor pointed to by in1 is copied to out. The type of the output is given by the table:

in1 -    uchar  char   ushort short  uint  int
-------|--------------------------------------
in2    |
uchar  | short  short  short  short  int   int
char   | short  short  short  short  int   int
ushort | short  short  short  short  int   int
short  | short  short  short  short  int   int
uint   | int    int    int    int    int   int
int    | int    int    int    int    int   int

The result of this operation cannot be unsigned. For float types, the result is float unless one of the inputs is double, in which case the result is double. For complex types the result is FMTCOMPLEX, unless one of the inputs is FMTDPCOMPLEX, in which case the output is FMTDPCOMPLEX. None of the functions checks the result for over/underflow. All functions return 0 on success and -1 on error.

im_add(3), im_lintra(3), im_multiply(3).

National Gallery, 1995

24 April 1991 SUBTRACTION(3)
http://huge-man-linux.net/man3/im_subtract.html
> On 2015-Dec-7, at 12:48 PM, Simon J. Gerraty <s...@juniper.net> wrote:
>
> Mark Millard <mar...@dsl-only.net> wrote:
>> My guess is that it is picking up the
>>
>> MAKEOBJDIRPREFIX=/usr/obj/xtoolchain
>
> You should use ?= if you want this to work.
> There are many places in Makefile.inc1 where MAKEOBJDIRPREFIX is tweaked
> in the environment of a sub-make.
>
> By using = above, you break that.

That presumes that MAKEOBJDIRPREFIX has not been assigned a default value before the SRC_ENV_CONF file has been included the first time. If MAKEOBJDIRPREFIX had been defined already then the ?= would do nothing and the wrong value would be used. I believe that the following trace shows that such an assignment of a default value does happen first, making ?= not work either.

/usr/src/Makefile (head/Makefile 29160) has

> MAKEOBJDIRPREFIX?= /usr/obj

at line 145 (used when it is not using targets/Makefile from the relevant .if/.else/.endif). Line 105 has .include <bsd.compiler.mk> and there are no others before the above assignment. bsd.compiler.mk in turn includes bsd.opt.mk (only), which in turn includes bsd.mkopt.mk (only). That in turn includes nothing else. So these files and only these files are the involved files before that assignment as far as I can tell. None of these get to src.sys.env.mk, and so SRC_ENV_CONF use has not happened yet when

> MAKEOBJDIRPREFIX?= /usr/obj

is executed. So, if I understand right, MAKEOBJDIRPREFIX is already defined before the code

> SRC_ENV_CONF?= /etc/src-env.conf
> .if !empty(SRC_ENV_CONF) && !target(_src_env_conf_included_)
> .-include "${SRC_ENV_CONF}"
> _src_env_conf_included_: .NOTMAIN
> .endif

is executed, and so using ?= would not be effective in the included file. Did I miss something?

===
Mark Millard
markmi at dsl-only.net

_______________________________________________
freebsd-current@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
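The `?=` behaviour at the heart of this thread is easy to demonstrate in isolation: once a variable already has a value, a later `?=` is a no-op. A throwaway two-file sketch (GNU make syntax here; bmake's `.include` differs, but `?=` behaves the same way):

```shell
# main.mk assigns FOO, then includes conf.mk, which tries FOO ?= second
printf 'FOO = first\ninclude conf.mk\nall:\n\t@echo $(FOO)\n' > main.mk
printf 'FOO ?= second\n' > conf.mk

make -f main.mk    # prints "first" -- the ?= in conf.mk changed nothing

# with no prior assignment, the ?= default does take effect
printf 'include conf.mk\nall:\n\t@echo $(FOO)\n' > main2.mk
make -f main2.mk   # prints "second"
```

This is exactly the situation described above: because `MAKEOBJDIRPREFIX?= /usr/obj` runs before the `SRC_ENV_CONF` file is included, a `?=` inside that file can no longer take effect.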
https://www.mail-archive.com/freebsd-current@freebsd.org/msg163497.html
Insane TypeError on project

Hi everyone, today I'm posting this because I've hit the strangest bug of my Pythonista experience. I was experimenting with neural networks made from scratch (in fact, behaviour of raptor VS sheep) and I get an amazingly simple TypeError in my main.py file (l.131):

Traceback (most recent call last):
  File "/var/containers/Bundle/Application/2BCE66D1-3EB5-46DC-AFEE-F33C73B6BB37/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/site-packages/scene.py", line 199, in _draw
    self.update()
  File "/private/var/mobile/Containers/Shared/AppGroup/D48F6FDD-3F34-40EB-9497-17B2560BB120/Pythonista3/Documents/Neural Networks/RAPTOR-MOUTON/main.py", line 232, in update
    input = self.get_input(agent)
  File "/private/var/mobile/Containers/Shared/AppGroup/D48F6FDD-3F34-40EB-9497-17B2560BB120/Pythonista3/Documents/Neural Networks/RAPTOR-MOUTON/main.py", line 131, in get_input
    obj.position[0], obj.position[1])
TypeError: 'NoneType' object is not iterable

I get a NoneType object in a list, but I've tried to track it down in every way without seeing a single "None"... I'm in despair... I've tried a hundred times to resolve this, but even when I check before execution that BOTH positions (the args of f.vision_cut(), l.130) aren't NoneType objects, I get the same error. Below is the link to my project, with the images and other scripts included. Please just try it and see how weird it is. Thanks in advance for every answer! Cordially, SmartGoat

P.S: I haven't included comments; tell me if you absolutely need them to solve the bug, and I'll make commented versions of my code. Have a nice day/night!

I know what is causing the error but don't know exactly why. In your funcs.py module, you have a vision_cut function. It doesn't have a default return. It has elifs that are not always going to be entered. So if no condition is met, I presume Python returns None for the tuple, which then cannot be unpacked into the two values.
This I am not 100% sure of, but it seems logical. John.

Have you tried using the debugger on that line? Or, try:

import pdb
pdb.pm()

Then print out the various attributes to figure out which one is None. If I had to guess, it appears that the error must be two lines before, iterating over self.agents.

How about checking self.agents everywhere that agents can be modified:
* after the generate_agents call in setup
* at the start of update

I.e. print('value of agents ={}'.format(self.agents))

If you ever find that self.agents is None instead of [], then something wonky happened!

@jgoalby I love you, you indirectly found the error ^^ sometimes the difference in the Y axis divided by the difference in the X axis between two objects was exactly 1, and I didn't take this case into account in my vision_cut() function, so now I've made a little change and it's fixed! Thank you so much for your help, and have a nice day / night!

@JonB thank you for helping me, but the bug is already fixed, I hope it doesn't annoy you ^^ good night!
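The failure mode diagnosed in this thread reproduces in a few lines. The vision_cut below is a stripped-down stand-in for the project's real function, not its actual code: every branch is conditional, so one input slips through and Python returns an implicit None, which then blows up at the unpacking site rather than where the bug actually is:

```python
def vision_cut(slope):
    # every branch is conditional, so slope == 1 matches nothing
    # and the function falls through to an implicit "return None"
    if slope > 1:
        return (1.0, slope)
    elif slope < 1:
        return (slope, 1.0)

# the unlucky case: None cannot be unpacked into two values
print(vision_cut(1))  # None
try:
    x, y = vision_cut(1)
except TypeError as e:
    print(e)  # e.g. "cannot unpack non-iterable NoneType object"

# the fix: make sure every path returns a tuple
def vision_cut_fixed(slope):
    if slope > 1:
        return (1.0, slope)
    elif slope < 1:
        return (slope, 1.0)
    else:
        return (1.0, 1.0)

x, y = vision_cut_fixed(1)  # now unpacking works
```

A default return (or a final else) at the end of any branch-heavy function is cheap insurance against exactly this class of bug.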
https://forum.omz-software.com/topic/5513/insane-typeerror-on-project
What is Polymorphism? Polymorphism is the ability of an object to have a different form. The word polymorphism literally means "having many forms", and whenever a class object and its properties can take many forms, that is called polymorphism. So how can we make our classes polymorphic, or how can you define an object which follows polymorphism? Here I am going to give an example. I have four classes defined. One is called the Bank class, which will be my base class, and this Bank class contains one method.

package lesson1;

public class Bank {
    int getInterestRate() {
        return 0;
    }
}

This base class method just gives the rate of interest, so we define this method as getInterestRate. And because it's a base class, it returns 0 as the interest. We have also defined a Bank_ABC class, a Bank_DEF class and a Bank_XYZ class. If you don't know how to make a class, just right click, go to New, then Class, and name the class there. So this is how I have created these classes. One is called the Bank class, which will be my main class. The second class is called Bank_ABC, which extends from the Bank class, because Bank is our base class and Bank_ABC is our derived class. This also contains the same method, but this one returns the rate of interest of this bank, which is 5%.

package lesson1;

public class Bank_ABC extends Bank {
    int getInterestRate() {
        return 5;
    }
}

Bank_DEF, which also extends from the Bank class, has a rate of interest of 6%. So I have defined a method here which returns 6%. Simple!

package lesson1;

public class Bank_DEF extends Bank {
    int getInterestRate() {
        return 6;
    }
}

And the Bank_XYZ class, which also extends from Bank (so this is the derived class and Bank is the base class), has the same getInterestRate method, and the interest rate in this bank is 10%.
package lesson1;

public class Bank_XYZ extends Bank {
    int getInterestRate() {
        return 10;
    }
}

So I have the same method, getInterestRate, in all four classes, and the only difference in this function is the rate of interest. The base Bank itself has 0, Bank_ABC has a 5% interest rate, Bank_DEF has a 6% interest rate and Bank_XYZ has a 10% interest rate. Now there is a property in Java called polymorphism by which you can point a reference of a base class to any object of a derived class. What I mean by that is, we will define a reference of our Bank class, where we just write Bank b1 = ..., or Bank abc = .... So this is a reference to the Bank class which will point to an object of Bank_ABC, which is a child class.

package lesson1;

public class MyClass {
    public static void main(String[] args) {
        Bank abc = new Bank_ABC();
    }
}

And this is what we call polymorphism: an object having a different form. The object of the Bank class is taking the form of Bank_ABC; that is, the reference of the Bank class points to the object of the Bank_ABC class. In the same way, we can define the other objects here. You can define Bank def pointing to a Bank_DEF and Bank xyz pointing to a Bank_XYZ.

package lesson1;

public class MyClass {
    public static void main(String[] args) {
        Bank abc = new Bank_ABC();
        Bank def = new Bank_DEF();
        Bank xyz = new Bank_XYZ();
    }
}

So we have defined three references of the Bank class itself which point to objects of the subclasses. Reference abc points to an object of the Bank_ABC class, reference def points to an object of the Bank_DEF class and reference xyz points to an object of the Bank_XYZ class. And I can call the getInterestRate method through all of these references.
So I can just write syso (the Ctrl + Shift shortcut) and do the same three times. The first time I take abc as my object and call getInterestRate; in the second statement I use def.getInterestRate(); and the third time I take xyz as my instance and call getInterestRate(). And when I run the program I get 5, 6 and 10. This way of defining references of the base class that point to objects of subclasses is called polymorphism, and all the member functions are available through such a reference. So this getInterestRate is available in all the classes. Bank_ABC gives a 5% interest rate, so it returns 5; Bank_DEF gives a 6% interest rate, so it returns 6; and Bank_XYZ gives a 10% interest rate, so it returns 10. The interesting thing here is that all the references point to different objects, but all the references themselves are of the Bank class. So this is how polymorphism works in Java.
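Put together as a single runnable file (dropping the lesson1 package so it compiles standalone), the whole example from this lesson looks like this, and prints 5, 6 and 10 as described:

```java
class Bank {
    int getInterestRate() { return 0; }
}

class Bank_ABC extends Bank {
    int getInterestRate() { return 5; }
}

class Bank_DEF extends Bank {
    int getInterestRate() { return 6; }
}

class Bank_XYZ extends Bank {
    int getInterestRate() { return 10; }
}

public class MyClass {
    public static void main(String[] args) {
        // parent-class references pointing to child-class objects (upcasting);
        // each call dispatches to the subclass override
        Bank abc = new Bank_ABC();
        Bank def = new Bank_DEF();
        Bank xyz = new Bank_XYZ();

        System.out.println(abc.getInterestRate()); // 5
        System.out.println(def.getInterestRate()); // 6
        System.out.println(xyz.getInterestRate()); // 10
    }
}
```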
https://www.stechies.com/polymorphism-java/
Walkthrough: Writing Queries in Visual Basic

Create a Project

To create a console application project

Start Visual Studio. On the File menu, point to New, and then click Project. In the Installed Templates list, click Visual Basic. In the list of project types, click Console Application. In the Name box, type a name for the project, and then click OK. A project is created. By default, it contains a reference to System.Core.dll. Also, the Imported namespaces list on the References Page, Project Designer (Visual Basic) includes the System.Linq namespace. On the Compile Page, Project Designer (Visual Basic), ensure that Option infer is set to On.

Add an In-Memory Data Source

The data source for the queries in this walkthrough is a list of Student objects. Each Student object contains a first name, a last name, a class year, and an academic rank in the student body.

To add the data source

Define a Student class, and create a list of instances of the class.

Important: The code needed to define the Student class and create the list used in the walkthrough examples is provided in How to: Create a List of Items. You can copy it from there and paste it into your project. The new code replaces the code that appeared when you created the project.

To add a new student to the students list

- Follow the pattern in the getStudents method to add another instance of the Student class to the list. Adding the student will introduce you to object initializers. For more information, see Object Initializers: Named and Anonymous Types.

Create a Query

When executed, the query added in this section produces a list of the students whose academic rank puts them in the top ten. Because the query selects the complete Student object each time, the type of the query result is IEnumerable(Of Student). However, the type of the query typically is not specified in query definitions. Instead, the compiler uses local type inference to determine the type. For more information, see Local Type Inference.
The query's range variable, currentStudent, serves as a reference to each Student instance in the source, students, providing access to the properties of each object in students.

To create a simple query

Find the place in the Main method of the project that is marked as follows:

' ****Paste query and query execution code from the walkthrough,
' ****or any code of your own, here in Main.

Copy the following code and paste it in.

Dim studentQuery = From currentStudent In students
                   Where currentStudent.Rank <= 10
                   Select currentStudent

Rest the mouse pointer over studentQuery in your code to verify that the compiler-assigned type is IEnumerable(Of Student).

Run the Query

The variable studentQuery contains the definition of the query, not the results of running the query. A typical mechanism for running a query is a For Each loop. Each element in the returned sequence is accessed through the loop iteration variable. For more information about query execution, see Writing Your First LINQ Query.

To run the query

Add the following For Each loop below the query in your project.

For Each studentRecord In studentQuery
    Console.WriteLine(studentRecord.Last & ", " & studentRecord.First)
Next

Rest the mouse pointer over the loop control variable studentRecord to see its data type. The type of studentRecord is inferred to be Student, because studentQuery returns a collection of Student instances. Build and run the application by pressing CTRL+F5. Note the results in the console window.

Modify the Query

It is easier to scan query results if they are in a specified order. You can sort the returned sequence based on any available field.

To order the results

Add the following Order By clause between the Where statement and the Select statement of the query. The Order By clause will order the results alphabetically from A to Z, according to the last name of each student.
Order By currentStudent.Last Ascending

To order by last name and then first name, add both fields to the query:

Order By currentStudent.Last Ascending, currentStudent.First Ascending

You can also specify Descending to order from Z to A. Build and run the application by pressing CTRL+F5. Note the results in the console window.

To introduce a local identifier

Add the code in this section to introduce a local identifier in the query expression. The local identifier will hold an intermediate result. In the following example, name is an identifier that holds a concatenation of the student's first and last names. A local identifier can be used for convenience, or it can enhance performance by storing the results of an expression that would otherwise be calculated multiple times.

Dim studentQuery2 = From currentStudent In students
                    Let name = currentStudent.Last & ", " & currentStudent.First
                    Where currentStudent.Year = "Senior" And currentStudent.Rank <= 10
                    Order By name Ascending
                    Select currentStudent

' If you see too many results, comment out the previous
' For Each loop.
For Each studentRecord In studentQuery2
    Console.WriteLine(studentRecord.Last & ", " & studentRecord.First)
Next

Build and run the application by pressing CTRL+F5. Note the results in the console window.

To project one field in the Select clause

Add the query and For Each loop from this section to create a query that produces a sequence whose elements differ from the elements in the source. In the following example, the source is a collection of Student objects, but only one member of each object is returned: the first name of students whose last name is Garcia. Because currentStudent.First is a string, the data type of the sequence returned by studentQuery3 is IEnumerable(Of String), a sequence of strings. As in earlier examples, the assignment of a data type for studentQuery3 is left for the compiler to determine by using local type inference.
    Dim studentQuery3 = From currentStudent In students
                        Where currentStudent.Last = "Garcia"
                        Select currentStudent.First

    ' If you see too many results, comment out the previous
    ' For Each loops.
    For Each studentRecord In studentQuery3
        Console.WriteLine(studentRecord)
    Next

Rest the mouse pointer over studentQuery3 in your code to verify that the assigned type is IEnumerable(Of String).

Build and run the application by pressing CTRL+F5. Note the results in the console window.

To create an anonymous type in the Select clause

Add the code from this section to see how anonymous types are used in queries. You use them in queries when you want to return several fields from the data source rather than complete records (currentStudent records in previous examples) or single fields (First in the preceding section). Instead of defining a new named type that contains the fields you want to include in the result, you specify the fields in the Select clause and the compiler creates an anonymous type with those fields as its properties. For more information, see Anonymous Types.

The following example creates a query that returns the name and rank of seniors whose academic rank is between 1 and 10, in order of academic rank. In this example, the type of studentQuery4 must be inferred because the Select clause returns an instance of an anonymous type, and an anonymous type has no usable name.

    Dim studentQuery4 = From currentStudent In students
                        Where currentStudent.Year = "Senior" And currentStudent.Rank <= 10
                        Order By currentStudent.Rank Ascending
                        Select currentStudent.First, currentStudent.Last, currentStudent.Rank

    ' If you see too many results, comment out the previous
    ' For Each loops.
    For Each studentRecord In studentQuery4
        Console.WriteLine(studentRecord.Last & ", " & studentRecord.First &
                          ": " & studentRecord.Rank)
    Next

Build and run the application by pressing CTRL+F5. Note the results in the console window.
Additional Examples

Now that you understand the basics, the following is a list of additional examples to illustrate the flexibility and power of LINQ queries. Each example is preceded by a brief description of what it does. Rest the mouse pointer over the query result variable for each query to see the inferred type. Use a For Each loop to produce the results.

    ' Find all students who are seniors.
    Dim q1 = From currentStudent In students
             Where currentStudent.Year = "Senior"
             Select currentStudent

    ' Write a For Each loop to execute the query.
    For Each q In q1
        Console.WriteLine(q.First & " " & q.Last)
    Next

    ' Find all students with a first name beginning with "C".
    Dim q2 = From currentStudent In students
             Where currentStudent.First.StartsWith("C")
             Select currentStudent

    ' Find all top ranked seniors (rank < 40).
    Dim q3 = From currentStudent In students
             Where currentStudent.Rank < 40 And currentStudent.Year = "Senior"
             Select currentStudent

    ' Find all seniors with a lower rank than a student who
    ' is not a senior.
    Dim q4 = From student1 In students, student2 In students
             Where student1.Year = "Senior" And student2.Year <> "Senior" And
                   student1.Rank > student2.Rank
             Select student1
             Distinct

    ' Retrieve the full names of all students, sorted by last name.
    Dim q5 = From currentStudent In students
             Order By currentStudent.Last
             Select Name = currentStudent.First & " " & currentStudent.Last

    ' Determine how many students are ranked in the top 20.
    Dim q6 = Aggregate currentStudent In students
             Where currentStudent.Rank <= 20
             Into Count()

    ' Count the number of different last names in the group of students.
    Dim q7 = Aggregate currentStudent In students
             Select currentStudent.Last
             Distinct
             Into Count()

    ' Create a list box to show the last names of students.
    Dim lb As New System.Windows.Forms.ListBox
    Dim q8 = From currentStudent In students
             Order By currentStudent.Last
             Select currentStudent.Last
             Distinct

    For Each nextName As String In q8
        lb.Items.Add(nextName)
    Next

    ' Find every process that has a lowercase "h", "l", or "d" in its name.
    Dim letters() As String = {"h", "l", "d"}
    Dim q9 = From proc In System.Diagnostics.Process.GetProcesses,
             letter In letters
             Where proc.ProcessName.Contains(letter)
             Select proc

    For Each proc In q9
        Console.WriteLine(proc.ProcessName & ", " & proc.WorkingSet64)
    Next

Additional Information

After you are familiar with the basic concepts of working with queries, you are ready to read the documentation and samples for the specific type of LINQ provider that you are interested in.

See Also

- Language-Integrated Query (LINQ) (Visual Basic)
- Getting Started with LINQ in Visual Basic
- Local Type Inference
- Object Initializers: Named and Anonymous Types
- Anonymous Types
- Introduction to LINQ in Visual Basic
- LINQ Queries
https://docs.microsoft.com/en-us/dotnet/visual-basic/programming-guide/concepts/linq/walkthrough-writing-queries
I sat in on a workshop on WF yesterday that David was running. One of the interesting discussions that came up was around the "CallExternalMethod"/"HandleExternalEvent" Activities.

The idea of these Activities is that you can build a Workflow which makes a call out to the "outside world" using the "CallExternalMethod" Activity; you can also have a Workflow wait for something to happen in the "outside world" by using the "HandleExternalEvent" Activity.

At build time with these Activities, the only thing that the Workflow needs knowledge of is the definition of a .NET interface that provides methods that it (the Workflow Instance) can call and events that it (the Workflow Instance) can handle to receive data. This means that the Workflow definition is very much abstracted from the means by which it sends information "out" and the means by which it receives notifications "in".

I made a few videos on this stuff, which you can find under the screencasts section. Specifically:

- MSDN UK WF Nuggets - 10 - Communications from the Workflow to the Host
- MSDN UK WF Nuggets - 11 - Communications from the Host to the Workflow
- MSDN UK WF Nuggets - 12 - Bidirectional Communication (Host Workflow)

However, if you were, for instance, trying to build a Workflow that waited for something and that something happens to be a call into an ASMX web service, then there's already an Activity that does this; and there's also an Activity that will call a web service for you. So, clearly, there's an idea here that sometimes you would use the general communication mechanism to talk between the Workflow Instance and "the outside world", but there are other times when a specific Activity is preferable, as in the ASMX web services case.
In David's workshop, the question that we probably spent the most time on was "How do I decide when to use this Host/Workflow generic mechanism and when to use or build a specific Activity to achieve what I want?". It's an interesting question :-)

Firstly, I think when it comes right down to it, whichever path you take, the same core mechanism will underpin it (using queues that communicate with a Workflow instance) but you're still left with a choice of whether to go with a very generic mechanism or whether to build a specific Activity.

Imagine you need to do something along these lines. In each case, what do you do? Do you build a specific Activity that does what you want, or do you use the general purpose "CallExternalMethod" Activity? Here's what I think are some of the trade-offs between the two. I'm sure there's a lot more.

Build a Specific Activity

- Your activity is easily re-usable across different Workflows. An MSMQActivity is a useful thing to have.
- Your activity is easier for the Workflow builder to use. It has a clear purpose and inputs/outputs that line up with that purpose.
- If you need to be event-driven then you need to build a service to plug in to the runtime that will deliver the right events to your Activity. This means some code to support your Activity.
- Most of the code you write then goes into writing your Activity and its service.

Use the General "CallExternalMethod" Mechanism

- You can re-host the Workflow in different hosting environments and implement its communication in different ways each time. That is, you're not tied to a specific implementation of "communication".
- You can make dynamic choices in the host as to how the communication is implemented (i.e. have some fancy switch statement that chooses depending on the day of the week :-))
- You do not need to write a service to plug in to the runtime - the ExternalDataService bits already do that for you.
- Most of the code you write goes into the host.
So it seems that we have the Activity-centric way of doing things and the Host-centric way of doing things, and I don't think it's simple enough that there's a "right" answer; each time you make this decision you'll have to evaluate against the pros/cons of each approach and decide what to do.

I thought I'd try and illustrate this with a simple scenario. Say I want to build a Windows Forms application and I want to have a Workflow that traps key presses and draws shapes on the screen. Implementing that with "CallExternalMethod" looks something like this; and it was pretty easy to implement once the definition of the interface that sits between the Workflow and the Host was established as:

    public enum Shape
    {
        Circle,
        Square,
        Triangle
    }

    [Serializable]
    public class KeyPressEventArgs : ExternalDataEventArgs
    {
        public KeyPressEventArgs(Guid g)
            : base(g)
        {
            this.WaitForIdle = true;
        }
        public KeyPressEventArgs(Guid g, char k)
            : base(g)
        {
            this.key = k;
            this.WaitForIdle = true;
        }
        public char Key
        {
            get { return key; }
            set { key = value; }
        }
        private char key;
    }

    [ExternalDataExchange]
    public interface IUICommunication
    {
        void DrawShape(Shape s);
        event EventHandler<KeyPressEventArgs> KeyPress;
    }

Here's the project file that does it - when the app's running you can press 's', 'c' or 't' and you'll get squares, circles and triangles on the screen. Note that I don't bother to keep a list of what's on the screen, so if you invalidate the window your shapes will disappear.

What about doing this the alternative way with custom Activities? It's not too hard to build a DrawShapeActivity, although there is the question of how it gets hold of a Graphics object to draw with. In terms of building a WaitForKeyPressActivity, it's a bit trickier because it needs to be asynchronous, because it could run for an indefinite period of time.
So, the WaitForKeyPressActivity needs a runtime service to support it, and it's probably best if the DrawShapeActivity also uses some kind of service to get hold of a Graphics object to draw with. Here's the project file that works this way - it should run the same way as the other one and I didn't spend too long on it, but I'd say it was slightly harder than the other way of working. The Workflow ends up looking very similar, but those two white boxes are custom Activities now rather than something that ships in the box. I also have a UIService class which helps those 2 Activities do what they need to do (i.e. draw to screen and draw shapes).

Going back to the list of 5 examples earlier - here's my thoughts on what I'd probably do, unless I needed to be able to specifically achieve the following without needing to re-host the Workflow with different mechanisms; but, as I say, to me it's not a question of right or wrong.
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2006/11/23/8984.aspx
In January 1995, Scott Meyers, in his C++ Report article entitled "min, max, and more" [1], makes a challenge to the C++ community. After a careful analysis of the macro-based implementation of min and max and a comparison with the (back then) state-of-the-art template-based implementations, he concludes:

    What, then, is the best way, the correct way, to implement max? In the immortal words of Tevye, "I'll tell you: I don't know." Faced with the above analysis, I increasingly find myself telling people that the macro approach may well be best, and I hate macros. If you know of a superior way to implement max, please let me know.

To the best of my knowledge, the challenge is as valid today as it was six years ago, and this article takes the challenge. But before I start, let's wrap up the previous installment of Generic<Programming> [2].

Volatile Substance Abuse?

I have received much input following my February column "Generic<Programming>: volatile Multithreaded Programmer's Best Friend" [2]. As fate sometimes has it, I received most of the kudos in the form of private email, while the gripes went to the Usenet newsgroups comp.lang.c++.moderated and comp.programming.threads. The ensuing debates have been fiery and lengthy, and if you have an interest in the subject, you may want to check them out. The thread is entitled "volatile, was: memory visibility between threads." I know I learned a lot from that thread.

For one thing, the Widget example in the opening of the article is irrelevant. To make a long story short, there are systems (such as the POSIX-compliant ones) on which the volatile modifier is not needed, and there are other systems on which adding volatile will not help, leaving the program incorrect. The most important problem with volatile correctness is that it relies on POSIX-like mutexes, and there are multiprocessor systems on which mutexes are not enough; you have to use memory barriers.
Another, more philosophical problem is that, strictly speaking, casting volatile away off a variable is illegal, even if it's you who added the volatile qualifier yourself to apply volatile correctness! As Anthony Williams notes, a system could conceivably store volatile data in a different storage than non-volatile data, so that casting addresses would behave erratically.

Yet another critique was that volatile correctness, while it can solve race conditions at a lower level, cannot properly detect higher-level, logical race conditions. For example, say you have an mt_vector class template that emulates an std::vector, but has properly synchronized member functions. Consider:

    volatile mt_vector<int> vec;
    ...
    if (!vec.empty()) {
        vec.pop_back();
    }

The intent is to remove the last element in a vector, if any. The code above acts perfectly kosher in a single-threaded environment. If, however, you use mt_vector in a multithreaded program, the code might throw an exception even though empty and pop_back are properly synchronized. So the coherence of the low-level data (vec) is properly preserved, yet the higher-level operation is erratic.

At any rate, after all the discussion, I maintain my recommendation of volatile correctness as a valuable tool for detecting race conditions on systems supporting POSIX-like mutexes. But if you work on a multiprocessor system sporting memory-access reordering, you may want to peruse your compiler's documentation first. You know who you are.

And finally, Kenneth Chiu mentions a very interesting paper entitled, guess what, "Type-Based Race Detection for Java." The paper describes how a small number of additions to Java's type system make it possible for the compiler, in conjunction with the programmer, to detect race conditions at compile time. Eppur si muove.

Min and Max

So, let's review Scott's challenge. The macro-based min looks like this:

    #define min(a, b) ((a) < (b) ? (a) : (b))

The macro works very nicely in that it's completely generic (it supports any expressions for which operator< and operator?: make sense). Unfortunately, min always evaluates one of its arguments twice, and this can lead to a lot of confusion. It just looks too much like a function in usage, and it doesn't behave like one. (For an extended discussion on the problems of min and of macros in general, refer to Herb's discussion [3].)

A simple and effective template-based solution, present in the C++ Standard library, looks like this:

    template <class T>
    const T& min(const T& lhs, const T& rhs)
    {
        return lhs < rhs ? lhs : rhs;
    }

As you see, this solution puts const everywhere (arguments and result), which is one of its problems. Imagine you want to do this: increase the minimum of floating-point values a and b by two. Then you would want to write:

    double a, b;
    ...
    min(a, b) += 2;

This works nicely with the macro-based min, but not with the templated one, because you cannot modify a const object. As Scott notes, adding a second version:

    template <class T>
    T& min(T& lhs, T& rhs)
    {
        return lhs < rhs ? lhs : rhs;
    }

still won't work satisfactorily, because the compiler won't be able to figure out mixed cases: one const and one non-const argument. Furthermore, templates don't play nicely with automatic conversions and promotions, which means that the following code won't compile:

    int a;
    short int b;
    ...
    int smallest = min(a, b); // error: can't figure out T
                              // in template instantiation

Needless to say, the macro-based min works nicely again with such conversions. With the template-based solution, you must write:

    int smallest = min(a, int(b)); // aha, T is int

The gist of providing a good min/max implementation is to obtain something that behaves much like the macros, but that doesn't share their trouble of being macros.
An (Almost) Good Start

The following clever solution is a nice example of out-of-the-box thinking:

    template <class L, class R>
    class MinResult
    {
        L& lhs_;
        R& rhs_;
    public:
        operator L&() { return lhs_ < rhs_ ? lhs_ : rhs_; }
        operator R&() { return lhs_ < rhs_ ? lhs_ : rhs_; }
        MinResult(L& lhs, R& rhs) : lhs_(lhs), rhs_(rhs) {}
    };

    template <class LR>
    class MinResult<LR, LR>
    {
        LR& lhs_;
        LR& rhs_;
    public:
        operator LR&() { return lhs_ < rhs_ ? lhs_ : rhs_; }
        MinResult(LR& lhs, LR& rhs) : lhs_(lhs), rhs_(rhs) {}
    };

    template <class L, class R>
    MinResult<L, R> min(L& lhs, R& rhs)
    {
        return MinResult<L, R>(lhs, rhs);
    }

The partial specialization MinResult<LR, LR> is needed to consolidate the two conversion operators into one; otherwise, operator L& and operator R& would form a duplicate definition. The MinResult-based solution delays the computation until it's needed and performs it "lazily," right before the result is fetched. For example, the code:

    int a, b;
    ...
    min(a, b);

doesn't really do anything, and if you're the pensive type, such code might make you cogitate on the concept of a tree falling in the forest. On the other hand, if you type:

    int c = min(a, b);

the compiler invokes operator int& for the temporary MinResult<int, int> object returned by min, the operator which performs the calculation and returns the correct result.

In spite of the fact that you can fix the const-related issues (ignored above) quite nicely, the MinResult-based solution is not satisfactory, due to ambiguity problems. Consider:

    int a;
    short int b;
    extern void Fun(int);
    extern void Fun(short int);
    ...
    Fun(min(a, b)); // error! Don't know which overload to invoke!

MinResult<int, short int> supports two conversions: to int& and to short int&. Consequently, the compiler is equally motivated to invoke either of the two overloads of Fun and, like Buridan's donkey, dies between two equally attractive options. Again, the macro-based solution passes this test, too: the code invokes Fun(int) as you would expect.
Quest for a Type

What would genuinely solve the problem would be a device that, given two types L and R, computes the proper type of min(L, R). For example, if L is char and R is int, the result should be int, and so on. Assuming we have such a device (let's call it MINTYPE), we can write the definitive solution to min as four functions:

    template <class L, class R>
    MINTYPE(L, R) Min(L& lhs, R& rhs)
    { return lhs < rhs ? lhs : rhs; }

    template <class L, class R>
    MINTYPE(const L, R) Min(const L& lhs, R& rhs)
    { return lhs < rhs ? lhs : rhs; }

    template <class L, class R>
    MINTYPE(L, const R) Min(L& lhs, const R& rhs)
    { return lhs < rhs ? lhs : rhs; }

    template <class L, class R>
    MINTYPE(const L, const R) Min(const L& lhs, const R& rhs)
    { return lhs < rhs ? lhs : rhs; }

The four overloads of Min correspond to the four possible combinations of const and non-const arguments. So far, so good, but how to define MINTYPE? Well, a consecrated technique for type computation is traits [4]. Indeed, we can use traits to figure out Min's type like so:

    #define MINTYPE(L, R) typename MinTraits<L, R>::Result

    template <class L, class R>
    struct MinTraits;

    // Specialization for the L == R case
    template <class LR>
    struct MinTraits<LR, LR>
    {
        typedef LR& Result;
    };

    // Specialization for bool and char
    template <>
    struct MinTraits<bool, char>
    {
        typedef char Result;
    };
    ...

That works, provided you write an awful lot of code. There are 14 arithmetic types, and you have to write specializations of MinTraits for all combinations thereof. Then you have to add the const variants in. There are tricks you can do to simplify this task, like using, well, um, macros, but it's still not a very elegant solution.

Even then, the solution is incomplete. You have to take pointers and user-defined classes into account. Plus, what about calling Min for base and derived classes? Consider you define a class Shape and define operator< to order Shape objects by their area:

    class Shape
    {
        ...
        virtual unsigned int Area() const = 0;
    };

    bool operator<(const Shape& lhs, const Shape& rhs)
    {
        return lhs.Area() < rhs.Area();
    }

    class Rectangle : public Shape
    {
        ...
    };

    void Hatch(Shape& shape)
    {
        Rectangle frame;
        ...
        Shape& smallest = Min(shape, frame);
        ... use smallest ...
    }

Now really, wouldn't it be nice if the Min invoked above would magically figure out that Rectangle derives from Shape and return a reference to Shape? That would make a lot of sense, because a reference to Rectangle is automatically convertible to a reference to Shape. But... by the time you start to wish this, you dream of more than what the macro could give. The macro doesn't work correctly in the example above, because the expression:

    shape < frame ? shape : frame

converts both parts to the same type, so it is equivalent to:

    shape < frame ? shape : Shape(frame)

which doesn't do what we want. (Instead, it does a very bad thing called slicing.)

This article implements Min so that you get every nice thing you could have possibly gotten from the macro-based version, plus more. Better yet, the implementation has a reasonable size: about 80 lines of code in all (Max included, too). Interested? Reheat that coffee in the microwave and let's talk.

Loki

Okay, I lied. There are 80 lines of code only if you don't count the library. More specifically, the code uses Loki, a generic library that's featured in my upcoming book [5]. Among other things, Loki provides advanced type manipulation means. The tools in Loki used by Min are:

- Typelists. Typelists [6] offer you what you'd expect from regular lists, except that they don't hold values; typelists hold types. For example, the construct:

      typedef TYPELIST_3(float, double, long double) FloatingPointTypes;

  builds a typelist containing three types and stores it in FloatingPointTypes. Given a typelist such as FloatingPointTypes and an arbitrary type T, you can find out on what position, if any, T is in that typelist by using the compile-time algorithm Loki::TL::IndexOf.
  For example:

      Loki::TL::IndexOf<FloatingPointTypes, double>::value

  evaluates to 1. If the type is not found in the typelist, the result is -1.

- The second tool we'll use is the Select class template, which is thoroughly described in [7]. In short, Select allows you to select one of two types, based upon a compile-time Boolean constant. For example:

      typedef Loki::Select<(sizeof(wchar_t) < sizeof(short int)),
          wchar_t, short int>::Result SmallInt;

  defines SmallInt to be the smallest integral type among wchar_t and short int.

- TypeTraits is a class template that makes all kinds of deductions about a type, such as "Is this type a pointer? To what does it point?" etc. We'll use only the NonConstType type definition inside TypeTraits. TypeTraits<T>::NonConstType is a typedef that removes the const qualifier, if any, from T.

- Last, but not least, we'll use the Conversion class described in [7], a class that detects whether an arbitrary type can be implicitly converted to another. Conversion is the cornerstone of implementing the magic mentioned above regarding Shape and Rectangle.

The MinMaxTraits Class Template

To simplify type computation, I established a simple linear hierarchy on arithmetic types. Basically, I put all arithmetic types in a specific order, and I postulated that the type of Min's result is the type that's toward the bottom of the list. Here's the list, by the way (ignore the const for now):

    namespace Private
    {
        typedef TYPELIST_14(
            const bool,
            const char,
            const signed char,
            const unsigned char,
            const wchar_t,
            const short int,
            const unsigned short int,
            const int,
            const unsigned int,
            const long int,
            const unsigned long int,
            const float,
            const double,
            const long double)
        ArithTypes;
    }

In essence, unsigned types come after signed types, larger types come after smaller types, and floating-point types come after integral types.
For example, if you pass Min a long int and a double, the result will have type double, because double is after long int in the ArithTypes list. Now the general algorithm for figuring out Min's result type, if you pass it two non-reference types L and R, is as follows:

1. Assume the Result is R.
2. If an R can be implicitly converted to an L, then change Result to L.
3. If L and R are arithmetic types and R comes after L in Private::ArithTypes above, change Result to R. This step takes care of all the math conversions.
4. If an L& can be automatically converted to an R& without the conversion involving a temporary, then change Result to R&. This step ensures that calls such as Min(frame, shape) return a Shape&.
5. If an R& can be automatically converted to an L& without the conversion involving a temporary, then change Result to L&. This step ensures that calls such as Min(shape, frame) return a Shape&.

You can see MinMaxTraits' implementation in the downloadable code. The hardest part is to figure out the "without the conversion involving a temporary" part in the algorithm above. In essence, T is convertible to U without a temporary if a reference to non-const T is convertible to a reference to non-const U.

The Min and Max Overloads

There are four Min and four Max overloads, corresponding to the four combinations of const and non-const argument types. To avoid the slicing problem discussed in the Shape/Rectangle example above, Min has a body that's slightly different from the classic a < b ? a : b. Here it is:

    template <class L, class R>
    typename MinMaxTraits<L, R>::Result
    Min(L& lhs, R& rhs)
    {
        if (lhs < rhs) return lhs;
        return rhs;
    }

    template <class L, class R>
    typename MinMaxTraits<const L, R>::Result
    Min(const L& lhs, R& rhs)
    {
        if (lhs < rhs) return lhs;
        return rhs;
    }

    ... two more overloads ...
    ... similar Max implementation ...

The two return statements ensure proper conversions without slicing.
The four overloads cover mixed cases such as Min(a + b, c + d) or Min(a + b, 5).

Analysis

Let's see how the newly developed Min satisfies Scott Meyers' requirements. He asks that a good Min/Max implementation do the following four things:

1. Offers function call semantics (including type checking), not macro semantics. Min obviously does that.
2. Supports both const and non-const arguments (including mixing the two in a single call). Thanks to the four overloads, Min supports any combination of const and non-const arguments.
3. Supports arguments of different types (where that makes sense). Min does support arguments of different types, and actually has a fair amount of intelligence that's inaccessible to both the macro and the simple templated solution: Min disambiguates between various arithmetic types like a champ and takes initiative in performing conversions that make sense. The conversion selection process (based upon Private::ArithTypes) is under the control of the library writer.
4. Requires no explicit instantiation. Min doesn't need explicit instantiation.

Min works properly with pointers (even pointers pointing to different, but related, types, such as Shape* and Rectangle*). This is due to the first step in the algorithm.

A remarkable feature of Min is that it deduces its result type by using an algorithm that you can configure, as opposed to staying inside a predefined type system. If you find the algorithm unsatisfactory, you can tweak it to do pretty much what you want, including semantics-directed typing. For example, the minimum between an unsigned char and an unsigned int will always have the type unsigned char, because unsigned char's range is included in unsigned int's range. You can achieve such "smart" typing by changing the type deduction algorithm.

It would all be so nice, but there's a little detail worth mentioning. Sadly, Min doesn't work with any compiler I have access to. In fairness, each compiler chokes on a different piece of code.
I know the code is correct because a loosely-defined reunion of compilers would compile it, but then I haven't seen a working example yet. So if you have access to a modern compiler and could give the code a try, please let me know.

Look Ahead in Anger

These days I'm reading The Age of Spiritual Machines by Ray Kurzweil [8]. Kurzweil argues, and rather convincingly, that in the 2020s you'll be able to buy a machine with the power of a human brain for $1,000. Well, I can't repress a smile when thinking of how people, or maybe myself, hopefully only a bit older and a lot wiser, will look at this article in 20 years. "Amazing, in 2001 these guys were having trouble implementing generically the min and max no-brainers in the most popular programming language of the time. Hah, it took this guy an entire article and some esoteric techniques to get min and max right."

Maybe min and max aren't important? I'd argue to the contrary. Minimum and maximum are two simple concepts, present both in math and in real life. If a programming language is unable to express simple concepts of math and life, then there is something deeply wrong with that language. "Mom, I don't care about vocabulary and grammar. I just want to write poetry!" If you have to throw in a couple of temporaries and write "a < b ? a : b" when you actually think "min(expr1, expr2)", it means that you have a serious problem: you work with a machine that's able to compute the minimum of any two expressions, but is unable to express the concept of minimum.

There is something wrong here, isn't there? And C++ is not the only one to blame. Java and C#, two newer and supposedly superior languages, are utterly impotent at expressing min and max because, you know, min and max are not objects. Maybe in the future this period will be called the "object frenzy." Who knows... but I can't help asking with chagrin: "Quo Vadis, Programmatorae?"
Acknowledgements

Many thanks are due to all participants in the volatile-related thread on the Usenet, especially Dave Butenhof, Kenneth Chiu, James Kanze, Kaz Kylheku, Tom Payne, and David Schwartz.

Bibliography

[1] S. Meyers. "min, max, and more," C++ Report, January 1995.
[2] A. Alexandrescu. "Generic<Programming>: volatile Multithreaded Programmer's Best Friend," C/C++ Users Journal Experts Forum, February 2001.
[3]
[4] A. Alexandrescu. "Traits: the else-if-then of Types," C++ Report, April 2000.
[5] A. Alexandrescu. Modern C++ Design (Addison-Wesley Longman, 2001).
[6] J. Vlissides and A. Alexandrescu. "To Code or Not to Code," C++ Report, March 2000.
[7] A. Alexandrescu. "Generic<Programming>: Mappings between Types and Values," C/C++ Users Journal Experts Forum, September 2000.
[8] R. Kurzweil. The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Penguin USA).
http://www.drdobbs.com/generic-min-and-max-redivivus/184403774
Javascript - Generator-Yield/Next & Async-Await

Functions that can return multiple values at different time intervals, as per the user's demands, and can manage their internal state are generator functions. A function becomes a GeneratorFunction if it uses the function* syntax.

They are different from normal functions in the sense that a normal function runs to completion in a single execution, whereas a generator function can be paused and resumed; they do run to completion, but the trigger remains in our hands. They allow better execution control for asynchronous functionality, but that does not mean they cannot be used for synchronous functionality.

Note: When a generator function is executed, it returns a new Generator object.

The pause and resume are done using yield and next. So let's look at what they are and what they do.

The yield keyword pauses generator function execution, and the value of the expression following the yield keyword is returned to the generator's caller. It can be thought of as a generator-based version of the return keyword. The yield keyword actually returns an IteratorResult object with two properties, value and done. (Don't know what iterators and iterables are? Then read here.)

Once paused on a yield expression, the generator's code execution remains paused until the generator's next() method is called. Each time the generator's next() method is called, the generator resumes execution and returns the iterator result.

Phew... enough of theory, let's see an example:

    function* UUIDGenerator() {
        let d, r;
        while (true) {
            yield 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
                r = (new Date().getTime() + Math.random() * 16) % 16 | 0;
                d = Math.floor(d / 16);
                return (c == 'x' ? r : (r & 0x3 | 0x8)).toString(16);
            });
        }
    }

Here, UUIDGenerator is a generator function which calculates a UUID using the current time and a random number, and returns a new UUID every time it's executed.
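Before running UUIDGenerator, here is an even smaller sketch (mine, not from the original post) of the pause/resume mechanics, including sending a value back into the generator through next():

```javascript
// A running-total generator: each yield pauses the function, hands
// `total` out to the caller, and resumes with whatever value the
// caller passes to the next next() call.
function* adder() {
  let total = 0;
  while (true) {
    const increment = yield total; // pause here; resume with next(x)
    total += increment;
  }
}

const gen = adder();
console.log(gen.next().value);  // 0 — runs up to the first yield
console.log(gen.next(5).value); // 5 — 5 becomes `increment`
console.log(gen.next(3).value); // 8 — state survives between calls
```

The first next() only advances to the first yield; arguments to subsequent next() calls become the value of the paused yield expression.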
To run the above function we need to create a generator object, on which we can call next():

const UUID = UUIDGenerator(); // UUID is our generator object
UUID.next() // returns {value: 'e35834ae-8694-4e16-8352-6d2368b3ccbf', done: false}
UUID.next()

This will return a new UUID on each UUID.next() call under the value key, and done will always be false as we are in an infinite loop.

Note: We pause inside the infinite loop above, which is kind of cool. At any "stopping point" in a generator function, not only can it yield values to an external function, it can also receive values from outside.

There are many practical implementations of generators, like the one above, and many libraries that use them heavily; co, koa and redux-saga are some examples.

Traditionally, callbacks were passed and invoked when an asynchronous operation returned with data, and the results were handled using Promises. Async/await is special syntax for working with promises in a more comfortable fashion, which is surprisingly easy to understand and use.

The async keyword is used to define an asynchronous function, which is an AsyncFunction object. The await keyword is used to pause async function execution until a Promise is fulfilled, that is resolved or rejected, and to resume execution of the async function after fulfillment. When resumed, the value of the await expression is that of the fulfilled Promise.

Key points:
1. Await can only be used inside an async function.
2. Functions with the async keyword will always return a promise.
3. Multiple awaits will always run in sequential order within the same function.
4. If a promise resolves normally, then await promise returns the result. But in case of a rejection it throws the error, just as if there were a throw statement at that line.
5. An async function cannot wait for multiple promises at the same time.
6. Performance issues can occur when await is chained after await even though one statement doesn't depend on the previous one.
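Key point 6 can be made concrete: when two operations don't depend on each other, starting them both before awaiting (for example with Promise.all) avoids paying the sequential cost. The example below is an illustration of mine, not from the original post; the delay helper is a stand-in for any async operation.

```javascript
// Stand-in for any async operation that takes time
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function sequential() {
  // Each await pauses until the previous one settles: roughly 200ms total
  const a = await delay(100, 1);
  const b = await delay(100, 2);
  return a + b;
}

async function parallel() {
  // Start both timers first, then await the pair: roughly 100ms total
  const [a, b] = await Promise.all([delay(100, 1), delay(100, 2)]);
  return a + b;
}

parallel().then(sum => console.log(sum)); // 3
```

Both functions resolve to the same value; only the total waiting time differs.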
So far so good, now let's see a simple example:

async function asyncFunction() {
  const promise = new Promise((resolve, reject) => {
    setTimeout(() => resolve("i am resolved!"), 1000)
  });

  const result = await promise; // wait till the promise resolves
  console.log(result); // "i am resolved!"
}

asyncFunction();

The asyncFunction execution "pauses" at the line await promise and resumes when the promise settles, with result becoming its result. So the code above shows "i am resolved!" after one second.

- Generator functions/yield and async functions/await can both be used to write asynchronous code that "waits", which means code that looks as if it were synchronous, even though it really is asynchronous.
- A generator function is executed yield by yield, i.e. one yield expression at a time, by its iterator (the next() method), whereas async-await functions are executed sequentially, await by await.
- Async/await makes it easier to implement a particular use case of generators.
- The return value of a generator is always {value: X, done: Boolean}, whereas an async function always returns a promise that will either resolve to the value X or throw an error.
- An async function can be decomposed into a generator and promise implementation, which is good-to-know stuff.
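That last bullet, decomposing an async function into a generator plus a promise runner, can be sketched as follows. The run helper here is a minimal illustration of mine of what libraries like co do under the hood; it is not a production implementation.

```javascript
// A minimal runner that drives a generator of promises,
// approximating what async/await does behind the scenes.
function run(generatorFn) {
  const gen = generatorFn();
  return new Promise((resolve, reject) => {
    function step(nextFn) {
      let result;
      try {
        result = nextFn(); // advance the generator one yield
      } catch (e) {
        return reject(e);  // generator threw: reject the outer promise
      }
      if (result.done) return resolve(result.value);
      // Wait for the yielded promise, then feed its value back in
      Promise.resolve(result.value).then(
        value => step(() => gen.next(value)),
        err => step(() => gen.throw(err))
      );
    }
    step(() => gen.next());
  });
}

// Usage: yield where you would otherwise await
run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(2);
  return a + b;
}).then(sum => console.log(sum)); // 3
```

Rewriting the generator above with async/await gives exactly the same behavior, which is the sense in which async functions are "generators plus promises".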
http://brianyang.com/javascript-generator-yield-next-async-await/
Touch propagation

When a user touches a UI control in your app, the Cascades framework delivers a touch event to that control, and you can handle the event in any way you want. These types of interactions are straightforward: users touch the control that they want to interact with, and you specify your app's response based on this intention. What might not be obvious is that other controls in your app can also receive the touch event and have the opportunity to respond to it. This behavior is called touch propagation and determines how and when the controls in your app receive events.

Objects in Cascades are arranged in a hierarchical structure, with parent/child relationships. Consider a Container that includes two Button controls:

Container {
    Button {
        text: "Button 1"
    }
    Button {
        text: "Button 2"
    }
}

The Container is the parent of the Button controls, and the Button controls are children of the Container. Cascades represents this relationship by using a structure called a scene graph. This graph is a tree of controls (also called nodes) that shows the parent/child relationships between all elements in your app. You can't view the scene graph for your app in the Momentics IDE or anywhere else; it's just a concept that makes it easier to visualize how Cascades keeps track of object relationships internally. Here's what the scene graph would look like for the code sample above:

When a user touches a control and a touch event is generated, the control that was touched isn't the only control that receives the event. The parent control (and its parent, and so on up to the root of the scene graph) also receives the event and can act on it. In the example above, if a user touches one of the Button controls, both the Button and its parent Container receive the touch event.

Propagation phases

When a touch event occurs and is delivered to various parent and child nodes in the scene graph, the nodes don't receive the event at the same time.
There are three phases of touch propagation, and each phase gives different nodes the opportunity to handle events at different times:

- Capture phase
- At-target phase
- Bubble phase

Capture phase

This phase allows parent nodes to listen for and respond to events that are directed at one of their children. Parent nodes are the first nodes to receive the event, even before the child node that was targeted by the event. You can handle an event in this phase if you want to intercept the event before it reaches the node that was targeted. Consider the following scene graph:

If a user touches the Label, its parent Container receives the event first in the capture phase. When a control receives an event in this phase, it emits the touchCapture() signal to give you the opportunity to handle the event. Only nodes in the scene graph that have children receive events in the capture phase. Leaf nodes, which are nodes that don't have children, don't receive events in this phase.

Here's how to handle an event in the capture phase in QML. The text of the Label changes to indicate when an event is received in the capture phase. It's important to note that even if the touchCapture() signal is emitted, the parent Container will also emit the touch() signal in its bubble phase. A custom captured property is used to indicate that an event occurred in the capture phase and that the Label text shouldn't be updated when the touch() signal is emitted in the bubble phase.
import bb.cascades 1.0

Page {
    content: Container {
        id: captureContainer

        // This property indicates whether this container has received
        // a touch event in the capture phase
        property bool captured: false

        Label {
            id: captureLabel
            text: "Not captured"
        }

        onTouch: {
            // If the touch event wasn't already received in the
            // capture phase, update the text of the label
            if (!captureContainer.captured)
                captureLabel.text = "Not captured";

            captureContainer.captured = false;
        }

        onTouchCapture: {
            // If the touch event is received in the capture phase,
            // update the label text and set the captured property
            // to true (to prevent the text from being updated in
            // the onTouch signal handler)
            captureLabel.text = "Captured!"
            captureContainer.captured = true;
        }
    }
}

Here's how to accomplish the same thing in C++ by connecting the touch() and touchCapture() signals to slots.

// In your application's source file

// Create the root page and top-level container
Page* root = new Page;
Container* topContainer = new Container;

// Create the label and add it to the top-level container
captureLabel = Label::create("Not captured");
topContainer->add(captureLabel);

// Connect the signals of the top-level container to slots.
// If any Q_ASSERT statement(s) indicate that the slot failed
// to connect to the signal, make sure you know exactly why
// this has happened. This is not normal, and will cause your
// app to stop working
bool res = QObject::connect(topContainer,
                            SIGNAL(touch(bb::cascades::TouchEvent*)),
                            this,
                            SLOT(onTouch(bb::cascades::TouchEvent*)));

// This is only available in Debug builds
Q_ASSERT(res);

res = QObject::connect(topContainer,
                       SIGNAL(touchCapture(bb::cascades::TouchEvent*)),
                       this,
                       SLOT(onTouchCapture(bb::cascades::TouchEvent*)));

// This is only available in Debug builds
Q_ASSERT(res);

// Since the variable is not used in the app, this is added to avoid
// a compiler warning
Q_UNUSED(res);

// Initialize the variable that indicates whether the container has
// received an event in the capture phase
captured = false;

// Set the content of the page and display it
root->setContent(topContainer);
app->setScene(root);

...

// Define the slot function for the touch() signal
void App::onTouch(bb::cascades::TouchEvent* event)
{
    Q_UNUSED(event);

    // If the touch event wasn't already received in the capture phase,
    // update the text of the label
    if (!captured)
        captureLabel->setText("Not captured");

    captured = false;
}

// Define the slot function for the touchCapture() signal
void App::onTouchCapture(bb::cascades::TouchEvent* event)
{
    Q_UNUSED(event);

    // If the touch event is received in the capture phase, update
    // the label text and set the captured variable to true
    captureLabel->setText("Captured!");
    captured = true;
}

// In your application's header file

public slots:
    void onTouch(bb::cascades::TouchEvent* event);
    void onTouchCapture(bb::cascades::TouchEvent* event);

private:
    Label* captureLabel;
    bool captured;

At-target phase

This phase occurs when a node receives a touch event and that node is the intended target of the event, as opposed to an ancestor of the intended target. An ancestor of the intended target node is any node that is a parent of the target node, or a parent of the parent of the target node, and so on, all the way up to the root of the scene graph.
This phase is the most common one in which to handle touch events. When an event is received in this phase, you can safely assume that a user is interacting directly with a control on the screen, such as a Button or a Label. The touch() signal is emitted in this phase, and you can handle this signal to respond to a touch that's targeted at a specific control. Consider the following image of a Button inside a Container:

If a user touches the button, the Button receives the touch event in the at-target phase because the Button is interpreted as the intended target of the touch event. The Container doesn't receive the event in this phase; instead, it receives the event in the capture phase (as discussed above) and again in the bubble phase. If the user touches anywhere else in the Container (but not the Button), the Container control receives the touch event in the at-target phase.

Bubble phase

This phase occurs after the at-target phase. The event travels sequentially from leaf nodes toward the root of the scene graph, which gives each node along the way the opportunity to handle the event. The touch() signal is emitted in this phase (similar to the at-target phase). Consider the following image of a Button in two Container controls:

The corresponding scene graph for this arrangement looks like this:

If a user touches the button, the Button receives the touch event in the at-target phase. If the event isn't consumed in this phase, the event travels up the nodes in the scene graph, and the dark gray Container receives the event in the bubble phase. If the event still isn't consumed, the event reaches the light gray Container, also in the bubble phase.

Determining the propagation phase of an event

You might notice that the touch() signal is emitted in both the at-target phase and the bubble phase.
To determine the phase in which a control receives a touch event, you can use the propagationPhase property, which is part of the TouchEvent parameter that's included in the touch() signal. By comparing the value of this property to enumeration values in the PropagationPhase class (None, Capturing, AtTarget, and Bubbling), you can respond to touch events in different phases using the same signal handler.

Here's how you might implement the onTouch() signal handler in QML to determine the phase in which a control received a touch event:

onTouch: {
    if (event.propagationPhase == PropagationPhase.AtTarget) {
        // Handle the event in the at-target phase
    }
    else if (event.propagationPhase == PropagationPhase.Bubbling) {
        // Handle the event in the bubble phase
    }
}

Here's how you might implement an onTouch() slot in C++. Remember that you still need to connect the touch() signal to this slot elsewhere in your app (for example, in the constructor of your app class).

void App::onTouch(bb::cascades::TouchEvent* event)
{
    if (event->propagationPhase() == PropagationPhase::AtTarget) {
        // Handle the event in the at-target phase
    }
    else if (event->propagationPhase() == PropagationPhase::Bubbling) {
        // Handle the event in the bubble phase
    }
}

Propagation modes

By default, Cascades delivers touch events to all eligible nodes in the scene graph. If you want to consume an event and prevent other nodes from acting on the event, you need to create this functionality yourself. However, Cascades does give you a bit of control over how touch events are propagated to various nodes. You can control whether touch events are delivered to a visual node and its children by using the touchPropagationMode property, which is part of VisualNode. This property accepts enumeration values from the TouchPropagationMode class: Full, None, or PassThrough.
When the propagation mode for a node is set to Full (the default value), touch events are delivered to that node and all other eligible nodes that are connected to it in the scene graph. This includes all children of a particular node, if the children were the intended targets of the touch. When the propagation mode is set to None, events are not delivered to the node or any children of the node. When the propagation mode is set to PassThrough, events are not delivered to the node itself, but any eligible children of the node will receive events.

For example, consider the same image of a Button inside two Container controls that was presented in the previous section:

If the touch propagation mode of the dark gray Container is set to None, that Container won't receive touch events. In addition, the Button won't receive touch events either. If the touch propagation mode of the dark gray Container is set to PassThrough, that Container won't receive touch events but the Button will receive them.

Overlap touch policies

Like propagation modes, overlap touch policies let you control which nodes receive touch events in your app. Unlike propagation modes, these policies determine propagation based on how UI controls are displayed on the screen, instead of how they're arranged in the scene graph. You can control whether touch events are allowed to pass through a control and be received by controls behind it by using the overlapTouchPolicy property, which is part of VisualNode. This property accepts enumeration values from the OverlapTouchPolicy class (Deny or Allow). When the overlap touch policy for a control is set to Deny (the default value), touch events aren't delivered to any controls that are placed behind the control on the screen. When the overlap touch policy is set to Allow, touch events pass through controls and are received by the controls behind them. It's important to note that overlap touch policies apply only to nodes that aren't ancestors of each other.
For example, if you add a Button to a Container, the Container is the parent (and, thus, an ancestor) of the Button. Any overlap touch policy that you specify for the Button won't prevent touch events from being received by the Container, even though the Container appears beneath the Button visually. In contrast, if you add two Button controls to the same Container, the buttons aren't ancestors of each other. If the buttons overlap visually on the screen, an overlap touch policy that you apply to the top Button would affect whether touch events are received by the bottom Button.

An example of touch propagation

The three phases of touch propagation don't necessarily occur sequentially, and it can be difficult to visualize the order in which nodes receive a touch event. The following is an example of touch propagation in action. Consider the following tree of nodes (on the left) and their visual appearance on the screen (on the right):

A user touches exactly where the gray circle is placed (on the image on the right). A touch event is generated and sent to several of the nodes in the tree. From top to bottom, nodes E, D, B, and A all receive the event (as long as none of these nodes consume the event, and the overlap touch policy on node E is set to allow events to pass through to nodes beneath). Because node E is displayed on top of all of the other nodes, you might expect that this node receives the event first. However, if you recall the different propagation phases, you'll see that things aren't quite that straightforward. To make it easier to visualize the propagation of the touch event, the tree can be broken into two propagation paths: E - A and D - B - A.

Propagation of the touch event starts on the path that contains the top node that was touched (path E - A). If none of the nodes in the graph consume the event and node E has an overlap touch policy of Allow, the event propagates as follows:

- Node A receives the event in the capture phase. This node is the parent of node E, so it's the first node that's eligible to receive and respond to the event.
- Node E receives the event in the at-target phase. This node is considered the intended target of the touch event. This node doesn't receive the event in the capture phase because it's a leaf node.
- Node B receives the event in the capture phase. This node is the parent of node D, which is immediately beneath node E visually, so node B is the next eligible node to receive the event. Node A doesn't receive the event again because it already received the event during the capture phase.
- Node D receives the event in the at-target phase. Similar to node E, this leaf node doesn't receive the event in the capture phase.
- Node B receives the event in the bubble phase. Now that all nodes that are eligible to receive the event in the at-target phase (namely, nodes E and D) have processed the event, the event starts to propagate up the tree toward the root node.
- Node A receives the event in the bubble phase. Propagation continues up the tree until the root node is reached.

At each step in the propagation, the nodes emit signals (touchCapture() in the capture phase, touch() in the at-target and bubble phases) so that an app can respond appropriately.

Last modified: 2013-12-21
https://developer.blackberry.com/native/documentation/cascades/dev/touch/touch_propagation.html
As you’ve probably heard, Swift 4 now has multiline strings. Rejoice! And thank John Holdsworth. For now you can do stuff like this:

let xml = """
    <catalog>
        <book>
            <author>\\(author)</author>
            <title>XML Developer's Guide</title>
            <genre>Computer</genre>
            <price>44.95</price>
            <publish_date>2000-10-01</publish_date>
            <description>An in-depth look at creating applications with XML.</description>
        </book>
    </catalog>
    """

It’s super handy, allowing you to incorporate newline and individual " characters without having to escape them. (You do have to escape the backslash, as in the preceding example.)

One of the things you might want to do with a big hefty string is to count the number of words, and maybe find out which word occurs the most. So here’s another multi-line string, one pulled from a lorem ipsum generator:

let lipsum = """
    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur vitae hendrerit orci.
    Suspendisse porta ante sed commodo tincidunt. Etiam vitae nunc est.
    Vestibulum et molestie tortor. Ut nec cursus ipsum, id euismod diam.
    Sed quis imperdiet neque. Mauris sit amet sem mattis, egestas ligula ac, fringilla ligula.
    Nam nec eros posuere, rhoncus neque ut, varius massa.
    """

This particular example occupies 5 lines and includes a lot of text and punctuation. Because you can now treat Strings as collections, you can do stuff like this:

let w = "Hello".filter({ $0 != "l" }) // "Heo"

Similarly, you can use character set membership to select only letters and spaces:

let desiredCharacters = CharacterSet.letters
    .union(CharacterSet(charactersIn: " "))

let workString = lipsum.filter({ character in
    let uniScalars = character.unicodeScalars
    return desiredCharacters
        .contains(uniScalars[uniScalars.startIndex])
})

Unfortunately, Character and CharacterSet are still struggling a bit to get along with each other, which is why I’m doing that nonsense with the unicodeScalars.
Anyway, this gives you a single line string with just letters and spaces, so you can then break the string into words:

// Split along spaces
let words = workString.split(separator: " ")

Now count the words using the new uniquing Dictionary initializer:

// Add to dictionary, with "uniquing"
let baseCounts = zip(words.map(String.init),
                     repeatElement(1, count: .max))
let wordCounts = Dictionary(baseCounts, uniquingKeysWith: +)

This code creates an infinite sequence of the number 1, and applies addition each time a duplicate key is found. You get exactly the same results by applying a closure that adds 1, although this is uglier and a little wasteful:

let wordCounts = Dictionary(baseCounts,
    uniquingKeysWith: { (old, _) in old + 1 })

You can then find the word that appears the most:

// Find the word that appears most often
var (maxword, maxcount) = ("UNDEFINED", Int.min)
for (word, count) in wordCounts {
    if count > maxcount {
        (maxword, maxcount) = (word, count)
    }
}
print("\(maxword) appears \(maxcount) times")
// et appears 8 times (at least it did
// in my much longer text)

You can use uniqueKeysWithValues to fill up a dictionary by zipping two sequences:

let letterOrders = Dictionary(uniqueKeysWithValues:
    zip("ABCDEFGHIJKLMNOPQRSTUVWXYZ", 1...))
print(letterOrders)
// ["H": 8, "X": 24, "D": 4, "J": 10, "I": 9, "M": 13, "Z": 26,
//  "S": 19, "A": 1, "C": 3, "N": 14, "Y": 25, "R": 18, "G": 7,
//  "E": 5, "V": 22, "U": 21, "L": 12, "B": 2, "K": 11, "F": 6,
//  "O": 15, "W": 23, "T": 20, "P": 16, "Q": 17]

Another thing you might do with updated dictionaries is to build a set or array out of sequence values.
This next example collects values for each key:

let invitedFriends: [(String, String)] = [
    ("Rizwan", "John"), ("Rizwan", "Abe"),
    ("Soroush", "Dave"), ("Joe", "Dave"),
    ("Soroush", "Zev"), ("Soroush", "Erica")]

let invitationLists = Dictionary(
    invitedFriends.map({ ($0.0, [$0.1]) }),
    uniquingKeysWith: { (old: [String], new: [String]) in
        return old + new
    }
)
print(invitationLists)
// ["Rizwan": ["John", "Abe"], "Soroush": ["Dave", "Zev", "Erica"], "Joe": ["Dave"]]

You can store a tuple of the maximum and minimum values found for each unique key. The value structure has to be established in the initial streams, which can be ugly:

// Create 100 random numbers
let hundredRandom: [(Int, Int)] = (1...100).map({ _ in
    let value = Int(arc4random_uniform(10000))
    return (value, value)
})

// Create ten sequences of 1 through 10
let tens = sequence(state: 1, next: { (value: inout Int) -> Int in
    value += 1
    return (value % 10) + 1
})

// Build the two together
let values = zip(tens, hundredRandom)
let extremes = Dictionary(values, uniquingKeysWith: {
    (old: (Int, Int), new: (Int, Int)) in
    return (min(old.0, new.0), max(old.1, new.1))
})
print(extremes)
// [10: (504, 8342), 2: (770, 8874), 4: (164, 7871), 9: (177, 8903),
//  5: (1707, 9627), 6: (577, 8318), 7: (174, 8818), 3: (2837, 9198),
//  8: (3573, 9432), 1: (474, 8652)]

I probably could have made this a little more elegant but I was running out of time because I had to pick up my kids. If you have improvements for the last few examples, let me know. Sorry about the rush.

p.s. Thanks for the tip about using unicodeScalars on char.
https://ericasadun.com/2017/06/19/the-startling-uniquing-of-swift-4-dictionaries/
On Tue, Apr 16, 2013 at 10:31 AM, Christopher Schultz <chris@christopherschultz.net> wrote:

> Howard,
>
> On 4/15/13 4:02 PM, Howard W. Smith, Jr. wrote:
> > On Mon, Apr 15, 2013 at 1:08 PM, Christopher Schultz <
> > chris@christopherschultz.net> wrote:
>
> Just remember that caching isn't always a worthwhile endeavor, and
> that the cache itself has an associated cost (e.g. memory use,
> management of the cache, etc.).

Noted, and per my experience (already), I have definitely recognized that. Thanks.

> If your users don't use cached data very much

Smiling... um, well, the endusers don't 'know' that they 'are' using the cache, but I did enlighten the one enduser, yesterday, that reported that eclipselink issue (that was most likely caused by my use of the 'readonly' query hint). And for the record, they 'are' using the cache, since there are common pages/data that they access and/or request, multiple times, daily (and throughout the day), and even multiple times, most likely, throughout each session.

> or, worse, make so many varied requests that the cache is thrashing the
> whole time, then you are actually hurting performance:
> you may as well go directly to the database each time.

They definitely make varied requests, 'too', throughout the day and during each session, and since I like to monitor performance via jvisualvm, I am recognizing a lot of 'eclipselink' code that is executed, since i commonly use readonly and query-results-cache query hints, but performance seems worse when readonly and/or query-results-cache are not used (when I look at the times in jvisualvm).
just today, i recognized a query, such as the following, which was performing very poorly, even though the JOIN was on a primary/foreign key, and ORDER BY on primary key (which 'should' be fast):

@NamedQuery(name = "OrderCostDetails.findByOrderId",
            query = "SELECT ocd FROM OrderCostDetails ocd JOIN ocd.orders o WHERE o.orderId = :orderId ORDER BY ocd.costDetailsId"),

so, I commented out that named query, and replaced it with the following,

@NamedQuery(name = "OrderCostDetails.findByOrderId",
            query = "SELECT o.orderCostDetails FROM Orders o WHERE o.orderId = :orderId")

also, parameterized the use of query hints (see code below) in the @Stateless EJB that uses the named query to select data from the database,

q = em.createNamedQuery("OrderCostDetails.findByOrderId")
     .setParameter("orderId", id)
     .setHint("eclipselink.query-results-cache", "true");
if (readOnly) {
    q.setHint("eclipselink.read-only", "true");
}
list = q.getResultList();
if (list == null || list.isEmpty()) {
    return null;
}

and added the following in the @Stateless EJB after query results are retrieved from the database,

// ORDER BY ocd.serviceAbbr, ocd.nbrOfPassengers
Collections.sort(list, new Comparator<OrderCostDetails>() {
    @Override
    public int compare(OrderCostDetails ocd1, OrderCostDetails ocd2) {
        String ocd1SortKey = ocd1.getServiceAbbr() + ocd1.getNbrOfPassengers();
        String ocd2SortKey = ocd2.getServiceAbbr() + ocd2.getNbrOfPassengers();
        return ((Comparable) ocd1SortKey).compareTo(ocd2SortKey);
    }
});

and now, this query, is 'no longer' a hotspot in jvisualvm; it doesn't even show up in the 'calls' list/view of jvisualvm. Why did I target this query? because this query seemed as though it should be fast, but the eclipselink code was executing some 'twisted' method and a 'normalized' method, etc..., so I said to myself, I need to refactor this query/code, so all that eclipselink code will not hinder performance.
I think the performance improved because of the following: Orders has OrderCostDetails (1 to many), search Orders via primary key (OrderId) is much easier than searching OrderCostDetails JOIN(ed) to Orders WHERE Orders.OrderId = :orderId. So, I am 'sure' that eclipselink is NOT calling some 'twist' (or normalize) method anymore, and I'm sure use of Collections.sort(list, ...) is sorting the list in memory...much faster than the database can... but feel free to correct me on this assumption of mine. :)

> (This is why many people disable the MySQL query cache which is
> enabled by default:

your use of the phrase, 'query cache' = query statements cache OR query 'results' cache? I'm using Apache Derby, and default = no query results cache or statements cache, but I have configured query statements cache in persistence.xml, and using query results cache 'query hint' at various times throughout the app...

> if you aren't issuing the same query over and over again, you are just
> wasting time and memory with the query cache).

it is very 'possible' that queries will be the same over and over again (at least 2+ times) per user, but of course, queries will vary per user session as well.

> You should probably monitor your cache: what's the hit rate versus
> miss rate, and the cache turnover. You might be surprised to find that
> your read-through cache is actually just a churning bile of bytes that
> nobody really uses.
>
> It also sounds like you need a smoke test that you can run against
> your webapp between hourly-deployments to production ;) I highly
> recommend downloading JMeter and creating a single workflow that
> exercises your most-often-accessed pages. You can a) use it for
> smoke-testing and b) use it for load-testing (just run that workflow
> 50 times in a row in each of 50 threads and you've got quite a load,
> there).

Interesting. i will have to do that, thanks.
> > If you are searching your List<>, perhaps you don't have the right > data structure. Hmmm, good point, but the data structure (or 'object') is a very small POJO, just containing 2 members, Integer orderId, and Integer orderNumber. > What is the algorithmic complexity of your searches? > If it's not better than O(n), then you should reconsider your data > structure and/or searching algorithm. > public Integer getOrderNumberForOrder(Orders order) { Integer orderNumber = 0; if (orderNumberList != null && order != null && order.getOrderId() != null) { for (OrderNumber oNumber : orderNumberList) { if (oNumber == null || oNumber.getOrderId() == null || oNumber.getOrderNumber() == null) { continue; } if (oNumber.getOrderId().equals(order.getOrderId())) { orderNumber = oNumber.getOrderNumber(); break; } } } return orderNumber; } > > Does the list need re-population? How often? > the list is re-populated when ORDERS are marked as 'confirmed', 'canceled', and/or when ORDERS are 'deleted'. This is customized transportation/reservation software for tour bus company (my family business). So, usually, endusers are modifying ORDERS/data on single-date 'and' single-year basis, so the List<> contains 'all' ORDER IDs for the 'selected' year of the 'selected' date that they are viewing and/or working on. Each ORDER is assigned an ORDER #, which is basically assigned, sequentially, to all confirmed-and-not-cancelled ORDERS from beginning of year. Do you see why now, this is a List<> that I can move at application scope instead of maintaining it in session scope? I just haven't taken the time to move it to application scope yet. Some may say/ask, why don't you use OrderID as the Order#. Nope, I am using that as a 'Quote #' (as requested by one of the endusers), and the President/CEO wanted/demanded the ORDER # (ever since the birth of the legacy version of the app...developed back in 1994/1995). 
:) Moving this list to application scope and controlling access via @Singleton bean 'will' improve performance... I'm quite confident of that. Most of my performance enhancements made within the last 6 months have proven to be effective and noticeable. :) > > Since we discuss GC a lot on this list, i wonder if you all > > recommend to set the 'list' to null, first, and then List<> ... = > > new ArrayList<>(newList), or new ArrayList<>(newList) is sufficient > > for good GC. > > Setting the reference to null is a waste of time as far as the GC is > concerned. When I write code that re-uses the same identifier a lot in > a particular method (e.g. a single PreparedStatement identifier in a > long JDBC transaction executing many statements), I will close the > statement and then set explicitly it to null before continuing. I do > that to ensure that the statement is no longer re-usable due to bad > coding in my part. But it does not really help anything at runtime. > Interesting, just last night (or early this morning), I went through members of the most-frequently-used @SessionScoped bean (OrdersController), and added a @PreDestroy method that will set member (or instance variable) = null, hoping that this will help GC a bit/lot. I really have not checked jvisualvm to see if that code change improved GC or not. I'm still novice at monitoring GC performance and/or writing code that promotes/helps GC. > > On the other hand, if you have a large data structure that you use > during a part of a method but not all of it, then explicitly setting > the reference to null can help collect the garbage sooner. 
> A completely contrived example:
>
> List<Id> itemIds = ...;
> Map<Id,Item> map = createEnormousMap();
> List<Item> items = new ArrayList<>();
> for(Id id : itemIds)
>     items.add(map.get(id));
>
> // Marker for discussion
>
> for(some long time)
> {
>     // do some long operation
>     // that does not require the "enormous map"
> }
>
> return items;
>
> In the above method, if the second loop runs for a long time (relative
> to the rest of the method), then explicitly setting "map" to null
> at the marker will cause the object pointed to by "map"
> to become unreachable and therefore collectable by the GC (assuming
> that the map isn't referenced anywhere else, of course) before the
> method returns. So, in this case, setting a variable to null *can*
> help the GC.

Hmmm, interesting. Thanks for sharing. I did read somewhere (either on this list or some question/answer on stackoverflow) about this. Noted, and will have to try to remember to do/use this approach. Honestly, I don't think I 'ever' do that, but it is in the back of my mind (or 'up my sleeve') to do it. :)

> >> [...]
>
> It's not a ThreadLocal, it just wraps the request so that accesses of
> the HttpSession can have WeakReferences transparently wrapped around
> the actual objects. I did it to a) avoid having to re-write a bunch of
> code when I wanted to use WeakReferences and b) avoid having to have
> that ugly code visible in my actual application code. Since it's an
> orthogonal feature (e.g. the application code doesn't have to know
> there is an expirable-cache in play... it just knows that it's
> recoverable for an object *not* to be available), it's a perfect job
> for a Filter and associated wrapper objects.

Very interesting! I will have to consider doing that at some point. I'm taking in a lot of what you and others recommend and try to apply it to my app, at my earliest convenience, or slowly-but-surely.
:)

> -chris
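The contrived example quoted above can also be handled without any explicit null-ing, by narrowing the scope of the large structure so it is simply no longer live during the long second phase. A sketch of that alternative (createEnormousMap() and the item types are stand-ins invented for illustration; whether the VM treats the reference as immediately dead is an implementation detail, but the intent becomes explicit):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the scoping alternative to setting a reference to null:
// if the large map is only needed in the first phase, a nested block
// (or a helper method) lets it go out of scope before the long second
// phase runs, without any explicit "map = null".
public class ScopeForGc {
    static Map<Integer, String> createEnormousMap() {
        Map<Integer, String> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) map.put(i, "item-" + i);
        return map;
    }

    static List<String> lookupItems(List<Integer> itemIds) {
        List<String> items = new ArrayList<>();
        {   // "map" exists only inside this block...
            Map<Integer, String> map = createEnormousMap();
            for (Integer id : itemIds) items.add(map.get(id));
        }   // ...so by here it is out of scope and collectable
        // a long second phase would run here, with the map already gone
        return items;
    }

    public static void main(String[] args) {
        System.out.println(lookupItems(List.of(1, 2))); // [item-1, item-2]
    }
}
```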
http://mail-archives.apache.org/mod_mbox/tomcat-users/201304.mbox/%3CCAGV1rDK4TQc-VZ3ZM=0OZ7PXC3e_ysArPxR78U_kLLssoqZX+g@mail.gmail.com%3E
Write a tool that converts the example KMZ datasets into a normalized format.

Roy Hyunjin Han

We have some example datasets and we would like to normalize them so that downstream modules know how to load them.

20170912-1715 - 20170920-1700: 1 week actual
_ Load example kmz datasets
_ Save kmz datasets
_ Build a tool that extracts specific information from a kml file

20170912-1715 - 20170912-1815: 60 minutes
fastkml
_ kmltogeojson
_ pykml
_ simplekml
_ pygis
+ Find packages for handling kmz files

20170912-2130 - 20170912-2200: 30 minutes
After evaluating a bunch of packages, I think we should use fastkml.
+ Decide which package we might use

20170920-1630 - 20170920-1700: 30 minutes
Maybe we can stick to GDAL for both loading and saving the KML. I don't feel like going into the specifics of KML format. I think the reason fastkml exists is if you do not want any dependencies other than Python.

_ Use fastkml to load the example datasets

I probably could use fiona, but I would rather upgrade geometryIO as it has been on my list of things to do for a long time. But I would like to call it something different.

from os.path import expanduser
dataset_path = expanduser('~/Experiments/Datasets/evi-20170819.kmz')

# Try fiona
import pip
pip.main(['install', 'fiona'])
import fiona
# fiona.open(dataset_path)  # Raises FionaValueError

It looks like fiona will not support kml:

# Try geometryIO
import geometryIO
geometryIO.load(dataset_path)

The geometryIO package makes more progress, but it looks like it is not loading any geometries. We might have to put KML to CSV/SHP.ZIP as a separate tool.

import pip
pip.main(['install', 'fastkml'])

%%bash
cd ~/Downloads
cp ~/Experiments/Datasets/evi-20170819.kmz .
unzip evi-20170819.kmz
ls -l

from fastkml.kml import KML
from os.path import expanduser
kml = KML()
kml_text = open(expanduser('~/Downloads/doc.kml'), 'rb').read()
kml.from_string(kml_text)
len(list(kml.features()))

features = list(kml.features())
f = list(features[0].features())[0]
f.__dict__
x = list(list(list(f.features())[0].features())[0].features())[0]
x
x.__dict__
x._geometry.__dict__

It looks like kml is one of those strange nested file formats.

x = list(list(list(f.features())[0].features())[0].features())[1]
x.__dict__
list(list(list(f.features())[0].features())[2].features())[1].__dict__

_ Examine KML of each example dataset
_ Try gdal
_ Load the different datasets
_ Save the different datasets

We decided not to accept kml files for now.
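For reference, the unzip-then-parse workflow in the log above can be done with only the standard library, since a KMZ is just a zip archive containing a KML document. This is a sketch under that assumption, using a made-up in-memory sample rather than the evi-20170819.kmz dataset; real KMZ files may name their root document something other than doc.kml.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

KML_NS = {'kml': 'http://www.opengis.net/kml/2.2'}

def read_kmz_placemarks(kmz_bytes):
    """Extract (name, coordinates) pairs from the doc.kml inside a KMZ archive."""
    with zipfile.ZipFile(io.BytesIO(kmz_bytes)) as archive:
        kml_text = archive.read('doc.kml')  # a KMZ is a zip with a KML inside
    root = ET.fromstring(kml_text)
    placemarks = []
    for placemark in root.iter('{http://www.opengis.net/kml/2.2}Placemark'):
        name = placemark.findtext('kml:name', default='', namespaces=KML_NS)
        coords = placemark.findtext('.//kml:coordinates', default='', namespaces=KML_NS)
        placemarks.append((name, coords.strip()))
    return placemarks

# Build a tiny KMZ in memory so the sketch is self-contained.
sample_kml = b"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Site A</name>
      <Point><coordinates>-122.0,37.4,0</coordinates></Point>
    </Placemark>
  </Document>
</kml>"""

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w') as archive:
    archive.writestr('doc.kml', sample_kml)

print(read_kmz_placemarks(buffer.getvalue()))
```

This avoids the fiona/geometryIO dependency problems noted in the log at the cost of doing the XML navigation by hand.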
https://crosscompute.com/n/JrbhgevrX9nf2wr2tF7iZZdQfJr7I8A0/-/logs/normalize-kmz-files-20170920
This most recent version of the RabbitCounter service does provide user authentication, but not in a way that scales. A better approach would be to use a web service container that provides not only user authentication but also wire-level security. Tomcat, the reference implementation for a Java web container, can provide both. Chapter 4 showed how Tomcat can be used to publish RESTful web services as servlets. Tomcat also can publish SOAP-based services. Tomcat can publish either a @WebService or a @WebServiceProvider. The example to illustrate how Tomcat provides container-managed security is built in two steps. The first step publishes a SOAP-based service with Tomcat, and the second step adds security. A later example is a secured @WebServiceProvider under Tomcat. The SOAP-based service is organized in the usual way. Here is the code for the TempConvert SEI: package ch05.tc; import javax.jws.WebService; import javax.jws.WebMethod; @WebService public interface TempConvert { @WebMethod float c2f(float c); @WebMethod float f2c(float f); } And here is the code for the corresponding SIB: package ch05.tc; import javax.jws.WebService; @WebService(endpointInterface = "ch05.tc.TempConvert") public class TempConvertImpl implements TempConvert { public float c2f(float t) { return 32.0f + (t * 9.0f / 5.0f); } public float f2c(float t) { return (5.0f / 9.0f) * (t - 32.0f); } } For deployment under Tomcat, the service ...
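Before worrying about deployment, the conversion logic itself is easy to sanity-check. The sketch below reuses the chapter's formulas but drops the @WebService annotations (which require the javax.jws API) so it runs on a bare JDK; it is a standalone check, not part of the book's example.

```java
// Standalone check of the TempConvert conversion logic; the JAX-WS
// annotations from the chapter are omitted so this compiles without
// the javax.jws classes on the classpath.
public class TempConvertCheck {
    static float c2f(float t) { return 32.0f + (t * 9.0f / 5.0f); }
    static float f2c(float t) { return (5.0f / 9.0f) * (t - 32.0f); }

    public static void main(String[] args) {
        // Known fixed points of the Celsius/Fahrenheit scales
        System.out.println(c2f(0.0f));    // 32.0
        System.out.println(c2f(100.0f));  // 212.0
        System.out.println(f2c(32.0f));   // 0.0
        // The two methods should round-trip (within float precision)
        System.out.println(Math.abs(f2c(c2f(37.0f)) - 37.0f) < 1e-4f);  // true
    }
}
```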
https://www.safaribooksonline.com/library/view/java-web-services/9780596157708/ch05s04.html
When you are working with a large amount of data, every data element is attached to other data elements in one way or another. Sequential data structures fail to reflect these relationships, and that is where non-sequential data structures such as trees and graphs come into the picture. In this article, we will learn what the diameter of a binary tree is and how to calculate it using two different approaches, along with Python code. But before moving forward, let us understand the binary tree in detail below.

What is a Binary Tree?

Trees are widely used as an abstract data structure that stores data hierarchically. A tree consists of nodes that represent the data, as shown in the image below. The topmost node is known as the root node and the bottommost nodes are known as leaf nodes. A general tree places no limit on the number of children a node may have, but a binary tree does: it is the type of tree data structure in which each parent node has at most two children, generally referred to as the left child and the right child. The image below represents the binary tree data structure.

Two common kinds of binary tree are:
- Rooted Binary Tree: It consists of a root node, and every node has a maximum of two child nodes.
- Full Binary Tree: It consists of a root node, and every other node in the tree has exactly 0 or 2 children.

As the binary tree is a widely used data structure, you are likely to face competitive questions about it while interviewing at your dream company. One such question is calculating the diameter of a binary tree. So, let's dive into the solution to this problem after defining the diameter of the binary tree below.
The diameter of a binary tree is defined as the total number of nodes on the longest path between two end nodes. Remember that this path does not have to pass through the root node of the tree. (Some conventions count the number of edges on this path instead of the number of nodes; the two counts differ by one, and the code below reports edges.) For example, the image below shows the diameter of the tree from the left side to the right side passing through the root node. In that image, the diameter of the binary tree is calculated via the root node, i.e., D-B-A-C-E-G. The next image shows the diameter of the binary tree from the left side to the right side without passing through the root node, i.e., G-E-C-F.

How to Calculate Diameter of Binary Tree?

Below are the two approaches by which you can find the diameter of the binary tree:

1) Calculating the Diameter of Binary Tree Using Recursive Approach

If you look at the above images, the diameter of the binary tree is the maximum, over all nodes, of the sum of the heights of the left subtree and right subtree. The diameter of every subtree may or may not include the sub-root in the calculation. If it does, the equation for the diameter through that node (counting nodes) is:

Diameter = Left subtree height + Right subtree height + 1

Using the recursive approach, you calculate the heights of the left and right subtrees recursively, updating the diameter for each node along the way, as shown in the visual below. The maximum over all nodes is the final output. As seen in the illustration, there are multiple candidate paths for the diameter of a binary tree; we choose the longest one, shown in green: 73-74-75-50-100-150-125-130-131.
Algorithm for Recursive Approach

1) If the node passed to the recursive function is null, return zero
2) Find the height of the left subtree
3) Find the height of the right subtree
4) Using a recursive call, calculate the diameter of the left subtree until the node becomes NULL
5) Using a recursive call, calculate the diameter of the right subtree until the node becomes NULL
6) If the root node is included: Diameter = left subtree height + right subtree height + 1
7) If the root node is not included: Diameter = max(diameter of left subtree, diameter of right subtree)
8) The final output is the max of step 6 and step 7

Repeat the above process recursively until NULL nodes are encountered.

Python Code for Recursive Approach

class TreeNode:
    '''Tree Node'''
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

class Solution:
    def __init__(self):
        self.max = 0

    def Diameter(self, root: TreeNode) -> int:
        if root is None:
            return 0

        def traverse(root):
            if root is None:
                return 0
            left = traverse(root.left)
            right = traverse(root.right)
            if left + right > self.max:
                self.max = left + right
            return max(left, right) + 1

        traverse(root)
        return self.max

if __name__ == '__main__':
    root = TreeNode(10)
    root.left = TreeNode(11)
    root.left.left = TreeNode(2)
    root.left.right = TreeNode(31)
    root.right = TreeNode(12)
    print(Solution().Diameter(root))

Output

3

Time Complexity

A naive recursive solution that recomputes the height of each subtree at every node takes O(n^2) time, where n is the number of nodes in the tree. The code above avoids that by computing the height and updating the diameter in the same traversal, so it runs in O(n). The space complexity is O(n) in the worst case (a skewed tree) because of the recursion stack.

2) Calculating the Diameter of Binary Tree Using Iterative Approach

You can also find the diameter of the binary tree iteratively, using a Depth First Search traversal.
As the diameter is always a path between two leaf nodes of the tree, another way to find it is the classic "farthest node" trick: the node farthest from the root is one endpoint of a diameter, and the node farthest from that endpoint gives the other. The algorithm below describes this two-pass method; the code that follows instead performs a single iterative post-order traversal with an explicit stack, which computes the same edge count in one pass.

Algorithm for Iterative Approach

1) Starting from the root node, find the farthest node in the binary tree using the DFS algorithm
2) Consider this farthest node as the initial node of the diameter of the binary tree
3) Again using the DFS algorithm, find the farthest node from the initial node found in step 2
4) The newly found farthest node is the end node of the diameter of the binary tree, and its distance from the initial node is the diameter

Python Code for Iterative Approach

from collections import deque

class TreeNode:
    '''Tree Node'''
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

class Solution:
    def Diameter(self, root: TreeNode):
        '''
        Function to find the diameter of the Binary Tree (in edges)
        :param root: Root Node
        :return: Diameter of the tree
        '''
        stack = deque([root])
        depth = {None: 0}
        ans = 0
        while stack:
            node = stack.pop()
            if not node:
                continue
            LDepth = depth.get(node.left, None)
            RDepth = depth.get(node.right, None)
            if LDepth is None or RDepth is None:
                stack.append(node)
                stack.append(node.left)
                stack.append(node.right)
                continue
            depth[node] = max(LDepth, RDepth) + 1
            ans = max(LDepth + RDepth, ans)
        return ans

if __name__ == '__main__':
    root = TreeNode(10)
    root.left = TreeNode(11)
    root.left.left = TreeNode(2)
    root.left.right = TreeNode(31)
    root.right = TreeNode(12)
    print(f'Diameter is : {Solution().Diameter(root)}')

Output

Diameter is : 3

Time Complexity

The time complexity of this approach is O(n), and it is therefore considered an optimized solution for calculating the diameter of a binary tree: each node is pushed onto the stack and processed only a constant number of times.
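The two-pass "farthest node" algorithm described in the steps above can also be written out directly. The sketch below is an illustration written for this article's TreeNode shape, not code from the article: it first flattens the tree into an undirected adjacency list so a search can start from any node, then runs the two passes.

```python
from collections import defaultdict, deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def tree_diameter_two_pass(root):
    """Diameter in edges via the classic two-pass farthest-node search."""
    if root is None:
        return 0
    # Flatten the tree into an undirected adjacency list so we can
    # start a search from any node, not just the root.
    adj = defaultdict(list)
    stack = [root]
    while stack:
        node = stack.pop()
        for child in (node.left, node.right):
            if child is not None:
                adj[node].append(child)
                adj[child].append(node)
                stack.append(child)

    def farthest_from(start):
        seen = {start}
        queue = deque([(start, 0)])
        far_node, far_dist = start, 0
        while queue:
            node, dist = queue.popleft()
            if dist > far_dist:
                far_node, far_dist = node, dist
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return far_node, far_dist

    # Pass 1: the node farthest from the root is one end of a diameter.
    end_a, _ = farthest_from(root)
    # Pass 2: the farthest distance from that end is the diameter.
    _, diameter = farthest_from(end_a)
    return diameter

root = TreeNode(10)
root.left = TreeNode(11)
root.left.left = TreeNode(2)
root.left.right = TreeNode(31)
root.right = TreeNode(12)
print(tree_diameter_two_pass(root))  # 3
```

Both passes are plain breadth-first searches, so the whole procedure stays O(n) in time and space.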
At the same time, the space complexity of this approach is also O(n), where n is the number of nodes present in the tree.

Applications for Diameter of Binary Tree

1) The diameter of a binary tree is used to estimate routes for inter-processor communication in tree-structured interconnection networks.
2) The DADO, a special-purpose parallel computer, makes use of a binary tree interconnection network for faster execution of AI-oriented, rule-based software.

Conclusion

A binary tree is an ideal way to store your data hierarchically and later access it efficiently whenever necessary. It is quite flexible in comparison to many other data structures. Finding the diameter of a binary tree is one of the fundamental problems you will come across while learning binary trees. In this article, we have provided two basic approaches to calculating the diameter, along with their algorithms and Python code.
https://favtutor.com/blogs/binary-tree-diameter
On Mon, Nov 10, 2003 at 08:50:44AM +0000, John Bradford wrote:
> On the other hand, many users out there are _obviously_ under the
> illusion that 2.6.0-test has no known security issues, and that is
> false. If their machine is internet-connected and compromised, it can
> cause annoyance to third parties. Given that, I think a file in the
> root of the kernel tree, saying something like, "Don't use me on an
> internet connected machine unless you know what you're doing" would be
> worth considering.

Something vaguely like this might help the issue (with the obvious file
created having the appropriate note in it)

Untested, and I'm sure there are style problems, but the idea should be
obvious:

--- Makefile	2003-10-26 19:04:21.000000000 -0500
+++ Makefile.ryan	2003-11-11 02:45:01.000000000 -0500
@@ -81,7 +81,7 @@
 # That's our default target when none is given on the command line
 .PHONY: all
-all:
+all: check-beta
 
 ifneq ($(KBUILD_OUTPUT),)
 # Invoke a second make in the output directory, passing relevant variables
@@ -1027,3 +1027,12 @@
 endif # skip-makefile
 
 FORCE:
+
+.PHONY: check-beta
+
+check-beta:
+	if [ -f README.Security && ! -f README.Security.IknowwhatImdoing ] ; then \
+		cat README.Security ; \
+		read ; \
+	fi
+

-- 
Ryan Anderson
sometimes Pug Majere

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at read the FAQ at
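As the author admits, the patch is untested, and its recipe would in fact fail: `[ -f A && ! -f B ]` is not valid test(1) syntax, since `&&` must join two separate test invocations (or become `-a` inside one). A corrected sketch of the same check (file names taken from the patch, demo scaffolding invented):

```shell
#!/bin/sh
# Corrected form of the patch's check: two test invocations joined by &&,
# rather than `&&` inside a single [ ... ] (which is a syntax error).
check_beta() {
    if [ -f README.Security ] && [ ! -f README.Security.IknowwhatImdoing ]; then
        cat README.Security
        # the original recipe also ran `read` here to wait for the user
    fi
}

# Demonstration in a scratch directory
tmpdir=$(mktemp -d)
cd "$tmpdir" || exit 1
echo "Don't use me on an internet connected machine." > README.Security
check_beta                               # prints the warning
touch README.Security.IknowwhatImdoing
check_beta                               # now silent
```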
http://lkml.org/lkml/2003/11/11/19
Calling ( Invoking ) Methods through Reflection
Calling ( Invoking ) Methods through Reflection... call methods of a class with the help of "Reflection". Since methods of a class either consist of arguments or do not have any argument. So

Reflection API Tutorials and Examples
that we have taken in our program.

Calling ( Invoking ) Methods through...
In this section you will learn about Reflection API and all of its methods.

Reflection... through Reflection API
A more generic way, how to retrieve the name of the class

What is reflection? - Java Beginners
What is reflection? Hi, I want to know more about reflection in Java. Thanks. Hello, through reflection you can do... all declared fields through the Field class of the reflection package. Class c

Reflection API : A Brief Introduction
of the package java.lang.reflect and the methods of Reflection API are the parts.... Reflection API also affects the application if the private fields and methods... the class name through Reflection API A more generic way, how to retrieve the name

Reflection
by these new methods. Uses of Reflection: Reflection is generally used by programs... Basically the reflection API includes two components: objects representing various

Reflection api : Invocation target exception - JDBC
Am using a function to insert a record into the database. Using reflection api am invoking it, but it throws a class not found: oracle.jdbc.driver.oracledriver exception

looping through a list of methods
I have a number of methods that do almost the same thing: public void setColorRed (int colorvalue); public...); Is there a way to place these methods in a list and then call each one from

Reflection API Tutorials and Examples

invoking exe files on sound
how to invoke .exe files on input as sound in java

Java Reflection
What is reflection? Reflection... behavior of applications running in the Java virtual machine. A reflection... that caveat in mind, reflection is a powerful technique and can enable applications

Invoking exe - Java Beginners
Invoking exe Hi, How can I invoke an exe with command line arguments in java. Hi Friend, Try the following code: import java.lang.*; import java.io.*; public class Invoke{ public static void main

Retrieving the class name through Reflection API
A more generic way, how to retrieve the name of the class (that is used in the program) that reflects the package name

by using java reflection - Development process
java reflection how to save and run reflection class in java? please immediately reply sir

methods
methods PrintStream class has two formatting methods, what

RIAs Methods And Techniques
JavaScript It is the first major client side... language), through using Adobe Flex tool. Adobe is currently working on providing... the browser or as free standing applications through Java Web Start. Java RIAs can

Abstract methods and classes
While going through the java language programming you have learned... is used with methods and classes. Abstract Method An abstract method one

Traversing through filtering
Using jQuery, you can traverse through different elements of a web page - randomly or sequentially

Abstract class, Abstract methods and classes
While going through the java language programming you have learned... is used with methods and classes. Abstract Method An abstract method one

Access Static Member Of The Class Through Object
Static methods are special types of methods that work without any object of the class. Static methods are limited to calling

Object Class Methods in Java
We are going to discuss Object Class Methods in Java... or indirectly. There are many methods defined in java.lang.Object class... (), wait(), etc. Java object class methods are:- finalize() clone() equals

Setting Fields Information of a class using Reflection
As in our previous example... set() methods. xfield.setInt(rect

setStyle() and getStyle() methods Example
methods example in Flex4:- If user wants to set style properties at run time, the action script provides two methods to access the object of the related... by setStyle() method through the component's instance of objects. After that used

Velocity Web Edit Assist
Velocity Variables Names Velocity Variable Methods and Properties (using Reflection) Javascript Function Methods Velocity Macro

JSON-RPC-Java
to call remote methods in a J2EE Application Server and transparently maps Java objects to and from JavaScript objects using Java reflection. Read full

Java Methods
going through the example, you will be able to declare and use generic methods... Java Methods... is required to import in the program to invoke the methods. As you can see here we

Can we replace main() with static block or with other static methods?
what is the use of public static void main()? can't we achieve the same thing through static block or through other static methods
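Most of the threads listed above revolve around invoking methods via the java.lang.reflect API. A minimal self-contained sketch of the technique, covering both a no-argument method and a method that takes arguments (the Greeter class is invented for illustration):

```java
import java.lang.reflect.Method;

// Minimal sketch of calling ("invoking") methods through reflection,
// the technique the threads above discuss.
public class ReflectionInvokeDemo {
    public static class Greeter {
        public String hello() { return "hello"; }
        public String greet(String name) { return "hello, " + name; }
    }

    public static void main(String[] args) throws Exception {
        Greeter target = new Greeter();

        // No-argument method: look it up by name, then invoke on the instance.
        Method hello = Greeter.class.getMethod("hello");
        System.out.println(hello.invoke(target));            // hello

        // Method with arguments: the parameter types select the overload,
        // and the argument values are passed to invoke().
        Method greet = Greeter.class.getMethod("greet", String.class);
        System.out.println(greet.invoke(target, "world"));   // hello, world
    }
}
```

Note that invoke() wraps any exception thrown by the target method in an InvocationTargetException, which is what the "Invocation target exception - JDBC" thread above ran into.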
http://www.roseindia.net/discussion/21992-Calling-(-Invoking-)-Methods-through-Reflection.html
- NAME
- DESCRIPTION
- GETTING STARTED
- NAMESPACE LAYOUT
- SEE ALSO
- CONTACTING US
- SOURCE
- MAINTAINERS
- AUTHORS

NAME

Test2 - Framework for writing test tools that all work together.

DESCRIPTION

Test2 is a new testing framework produced by forking Test::Builder, completely refactoring it, and adding many new features and capabilities.

WHAT IS NEW?

- Easier to test new testing tools.

From the beginning Test2 was built with introspection capabilities. With Test::Builder it was difficult at best to capture test tool output for verification. Test2 makes it easy with Test2::API::intercept().

- Better diagnostics capabilities.

Test2 uses a Test2::API::Context object to track filename, line number, and tool details. This object greatly simplifies tracking for where errors should be reported.

- Event driven.

Test2 based tools produce events which get passed through a processing system before being output by a formatter. This event system allows for rich plugin and extension support.

- More complete API.

Test::Builder only provided a handful of methods for generating lines of TAP. Test2 took inventory of everything people were doing with Test::Builder that required hacking it up. Test2 made public API functions for nearly all the desired functionality people didn't previously have.

- Support for output other than TAP.

Test::Builder assumed everything would end up as TAP. Test2 makes no such assumption. Test2 provides ways for you to specify alternative and custom formatters.

- Subtest implementation is more sane.

The Test::Builder implementation of subtests was certifiably insane. Test2 uses a stacked event hub system that greatly improves how subtests are implemented.

- Support for threading/forking.

Test2 support for forking and threading can be turned on using Test2::IPC. Once turned on, threading and forking operate sanely and work as one would expect.

GETTING STARTED

If you are interested in writing tests using new tools then you should look at Test2::Suite.
Test2::Suite is a separate cpan distribution that contains many tools implemented on Test2. If you are interested in writing new tools you should take a look at Test2::API first.

NAMESPACE LAYOUT

This describes the namespace layout for the Test2 ecosystem. Not all the namespaces listed here are part of the Test2 distribution; some are implemented in Test2::Suite.

Test2::Tools::

This namespace is for sets of tools. Modules in this namespace should export tools like ok() and is(). Most things written for Test2 should go here. Modules in this namespace MUST NOT export subs from other tools. See the "Test2::Bundle::" namespace if you want to do that.

Test2::Plugin::

This namespace is for plugins. Plugins are modules that change or enhance the behavior of Test2. An example of a plugin is a module that sets the encoding to utf8 globally. Another example is a module that causes a bail-out event after the first test failure.

Test2::Bundle::

This namespace is for bundles of tools and plugins. Loading one of these may load multiple tools and plugins. Modules in this namespace should not implement tools directly. In general modules in this namespace should load tools and plugins, then re-export things into the consumer's namespace.

Test2::Require::

This namespace is for modules that cause a test to be skipped when conditions do not allow it to run. Examples would be modules that skip the test on older perls, or when non-essential modules have not been installed.

Test2::Formatter::

Formatters live under this namespace. Test2::Formatter::TAP is the only formatter currently. It is acceptable for third party distributions to create new formatters under this namespace.

Test2::Event::

Events live under this namespace. It is considered acceptable for third party distributions to add new event types in this namespace.

Test2::Hub::

Hub subclasses (and some hub utility objects) live under this namespace.
It is perfectly reasonable for third party distributions to add new hub subclasses in this namespace.

Test2::IPC::

The IPC subsystem lives in this namespace. There are not many good reasons to add anything to this namespace, with the exception of IPC drivers.

Test2::IPC::Driver::

IPC drivers live in this namespace. It is fine to create new IPC drivers and to put them in this namespace.

Test2::Util::

This namespace is for general utilities used by testing tools. Please be considerate when adding new modules to this namespace.

Test2::API::

This is for the Test2 API and related packages.

Test2::

The Test2:: namespace is intended for extensions and frameworks. Tools, plugins, etc. should not go directly into this namespace. However, extensions that are used to build tools and plugins may go here.

In short: if the module exports anything that should be run directly by a test script it should probably NOT go directly into Test2::XXX.

SEE ALSO

Test2::API - Primary API functions.

CONTACTING US

Many Test2 developers and users lurk on irc://irc.perl.org/#perl-qa and irc://irc.perl.org/#toolchain. can be found at.

MAINTAINERS

AUTHORS

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See
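The DESCRIPTION above highlights Test2::API::intercept() for capturing test-tool output instead of printing it as TAP. A small sketch of that (Test2::API and Test::More ship with modern perls; the event inspection below relies on the Test2::Event::Ok accessors):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test2::API qw(intercept);
use Test::More;   # built on Test2 in modern perls

# intercept() swaps in a temporary hub, so these assertions are
# captured as event objects instead of being printed as TAP.
my $events = intercept {
    ok(1, "passing assertion");
    ok(0, "failing assertion");
};

# $events is an arrayref of Test2::Event objects we can inspect.
my @oks = grep { $_->isa('Test2::Event::Ok') } @$events;
printf "captured %d Ok events\n", scalar @oks;
printf "%s => %s\n", $_->name, ($_->pass ? 'pass' : 'fail') for @oks;

is(scalar @oks, 2, "intercept captured both assertions");
done_testing();
```

This is the introspection capability that makes new test tools themselves testable: the failing ok() above never affects the real test run.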
https://metacpan.org/pod/Test2
Opened 4 months ago Closed 4 months ago #29488 closed Bug (wontfix) FilteredSelectMultiple Controls Don't Work if The Name of the Widget Has a Space Description (last modified by ) If the label for the FilteredSelectMultiple has a space in it, the controls for moving data between the boxes are not activated. The widget still works if one double clicks on the elements in the left box, they move to the right box. But the arrows don't work. Tested on Chrome 67.0.3396.62 (Official Build) (64-bit) and Firefox 60.0.1 (64-bit) using runserver to run my django code. Take a look at the attached image first.png. There are two FilteredSelectMultiple widgets, one called Pets and one called Pet Names. Note that in the widget Pet Names, the controls are not highlighted, but the controls in the Pets widget are highlighted. Take a look at the attached image second.png. The only change to the code was to change the Pet Names label to Pet_Names (ie replaced the space between Pet and Names with an underscore Pet Names -> Pet_Names). Now the controls are working. Mark Attachments (6) Change History (20) Changed 4 months ago by Changed 4 months ago by Screen shot showing a FilteredSelectMultiple widget without a space in the label and the controls are working. comment:1 Changed 4 months ago by comment:2 follow-up: 3 Changed 4 months ago by I'm not sure that a space in the label is the issue. For example, I couldn't reproduce the problem with "Available user permissions" on the change user page. Could you debug a bit further and confirm the source of the problem? comment:3 Changed 4 months ago by I am happy to help debug this issue. I am not sure where to start or how to proceed. Any suggestions as to a plan of attack would be very helpful! Replying to Tim Graham: I'm not sure that a space in the label is the issue. For example, I couldn't reproduce the problem with "Available user permissions" on the change user page. Could you debug a bit further and confirm the source of the problem? 
comment:4 Changed 4 months ago by I would try some JavaScript debugging in django/contrib/admin/static/admin/js/SelectFilter2.js. comment:5 Changed 4 months ago by comment:6 Changed 4 months ago by Tim, I am not a Javascript guy, but I did manage to get this far - perhaps it will help. In SelectFilter2.js, I tracked what was going on to the line where it is going to toggle the add link on (see picture A). When I step through the code using the widget named Pets and selecting an element, I get to picture B in the bowels of the jQuery code. You will note that "match = 3" for this control. When I step through the same code, but select an element in Pet Names, I get to the same place, but now match = null (picture C). From here, the program goes in different directions, and not being a js guy, I am a little lost. I hope this helps! Mark comment:7 Changed 4 months ago by Tim, I can't upload the three images because the tracker thinks they are spam. Mark Changed 4 months ago by Javascript debugging A Changed 4 months ago by Javascript debugging B Changed 4 months ago by Javascript debugging C comment:8 Changed 4 months ago by Finally found a file name that was not considered spam..... Mark comment:9 Changed 4 months ago by Can you provide a sample project with the minimal code needed to reproduce the issue? comment:10 Changed 4 months ago by Tim, The project is rather large. I can try to trim it down, but I am in the middle of fixing other bugs right now. In what form and how should I send you the trimmed down project? Mark PS Sorry for my newbieness...;) comment:11 Changed 4 months ago by You can attach a file on the ticket or put the minimal project on GitHub. 
I tried this but could not reproduce the problem:

from django.db import models

class Pet(models.Model):
    name = models.CharField(max_length=20)

    def __str__(self):
        return self.name

class PetName(models.Model):
    name = models.CharField(max_length=20)

    def __str__(self):
        return self.name

class Whatever(models.Model):
    pets = models.ManyToManyField(Pet)
    pet_names = models.ManyToManyField(PetName)

from django.contrib import admin
from .models import Pet, PetName, Whatever

admin.site.register((Pet, PetName))
admin.site.register(Whatever, filter_horizontal=['pets', 'pet_names'])

The widget says, "Available pet names".

comment:12 Changed 4 months ago by

Tim,

After looking at your example, I see I did a bad job of explaining the bug. In my program, I create the fields for the adminForm on the fly because the fields are not defined until run time. I have these lines of code:

class MetaDataNames(models.Model):
    meta_name_id = models.AutoField(primary_key = True)
    name = models.CharField(max_length=200, unique=True)

class MetaDataValues(models.Model):
    meta_value_id = models.AutoField(primary_key = True)
    meta_name_id = models.ForeignKey(MetaDataNames, on_delete=models.CASCADE,)
    value = models.CharField(max_length=200, unique=True)

class Document(models.Model):
    test_id = models.AutoField(primary_key = True)
    doc_name = models.CharField(max_length=200, unique=True)
    metadata = jsonfield.JSONField(blank=True)

These lines of code in the DocumentAdminForm(forms.ModelForm) contain:

def __init__(self, *args, **kwargs):
    metadata_names = MetaDataNames.objects.all()
    # create the fields
    for metadata in metadata_names:
        self.fields[metadata.name] = forms.ModelMultipleChoiceField(
            widget=FilteredSelectMultiple(metadata.name, is_stacked=False),
            required=False,
            queryset=MetaDataValues.objects.filter(meta_name_id=metadata.meta_name_id).order_by('value'),)
.....

The issue with the FilteredSelectMultiple widget is that when the field name has a space, the controls don't work.
In the code above, where it says self.fields[metadata.name], the metadata.name part is the string 'Pet Names', with a space. If I use 'Pet_Names', then the controls in the widget work.

I may be offending the Django Gods by having a field name with a space in it. One cannot do that in a model, as far as I can figure out. However, it is an interesting edge case for the widget, since it is perfectly reasonable for one to create fields as I am doing.

I can continue to create the test case based on the above code if you want. However, if the answer is that I cannot have a field name with a space, then I won't bother. Please let me know if I should continue with a full test case, or that nothing will be done because I have a field name with a space in it.

Thanks,

Mark

comment:13 Changed 4 months ago by

I have attached a test django application (testFSM) that shows the behavior I am seeing. It was tested using Django 2.0.5 and Python 3.4.3 in a virtualenv on Ubuntu 14.04.

To recap: If a field has a space in the field name for a ModelMultipleChoiceField (i.e. fields are generated in the __init__ function of a ModelForm), then the controls for the associated FilteredSelectMultiple widget are not enabled. One can still double click on an element in the widget, but the controls don't work. Also, one cannot clear the selected items or deselect the items in the list. Please see the README in the attached django project.

Mark

Changed 4 months ago by

Django project that illustrates the bug in a FilteredSelectMultiple widget with a space in the field name

Screen shot showing a FilteredSelectMultiple widget with a space in the label and the controls not working.
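Not part of the ticket itself: one conceivable workaround, given the reporter's observation that 'Pet_Names' works where 'Pet Names' does not, is to keep the internal field name free of spaces and carry the human-readable text in the field's label. The helper name and dictionary structure below are hypothetical, not from Django or the ticket; plain dicts stand in for the real form fields so the sketch stays self-contained.

```python
def safe_field_name(display_name):
    """Hypothetical helper: turn a display name like 'Pet Names'
    into an identifier-safe form field name like 'Pet_Names'."""
    return "_".join(display_name.split())

def build_fields(metadata_names):
    """Sketch of the ticket's dynamic-field loop, with sanitized names.
    A real form would create forms.ModelMultipleChoiceField instances
    with a FilteredSelectMultiple widget here."""
    fields = {}
    for name in metadata_names:
        fields[safe_field_name(name)] = {
            "label": name,  # keep the original text for display
            "widget": "FilteredSelectMultiple",
        }
    return fields
```

With internal names like 'Pet_Names' the SelectFilter2.js controls are reported to work, while the label can still display 'Pet Names'.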
https://code.djangoproject.com/ticket/29488
Sam Saffron - Registered on: 07/22/2012 - Last connection: 12/29/2014 Activity Reported issues: 19 12/29/2014 - 11:16 PM Ruby trunk Bug #10669: Incorrect url parsing in 2.2.0 - I get that, but a trend is developing here that is concerning.... - 08:56 PM Ruby trunk Bug #10673: Ruby 2.2.0 bug in UTF-8 encoding with Postgres - pg gem have this fixed and plan to release a new stable version in a few more days. - 01:03 AM Ruby trunk Bug #10669 (Rejected): Incorrect url parsing in 2.2.0 - Ruby 2.2 is incorrectly treating invalid URLs as correct due to parser change. Ruby 2.2.0 ``` irb(main):001:0>... 12/04/2014 - 10:49 PM Ruby trunk Bug #10561: Improve function of Thread::Backtrace::Location #path and #absolute_path - I think the name #path should always refer to a #path of sorts using it as #filename is kind of odd. Which is a big r... 12/02/2014 - 06:55 AM Ruby trunk Bug #10561 (Open): Improve function of Thread::Backtrace::Location #path and #absolute_path - I was working on this issue in Rails and hit an area where Backtrace Location can be improved... 05/03/2014 - 05:56 AM Ruby trunk Bug #9800 (Feedback): Ship 2.1.2 with GC_HEAP_OLDOBJECT_LIMIT_FACTOR of 1.3 - Many users are complaining about memory bloat in 2.1 series of Ruby As denoted in... 04/17/2014 - 04:11 AM Ruby trunk Bug #9751 (Closed): Process.wait does not work correctly in a thread - The following code fails under Ruby 1.9+, used to work on 1.8 with green threads ``` def test if pid = fork ... 02/22/2014 - 09:07 PM Ruby trunk Feature #9113: Ship Ruby for Linux with jemalloc out-of-the-box - @nobusan I think that would be a reasonable approach @eric / @ko1 / everyone here are the results of runnin... 02/19/2014 - 12:05 AM Ruby trunk Feature #9113: Ship Ruby for Linux with jemalloc out-of-the-box - @Eric sure bench needs a bit more love to be totally representative of a rails request. Also this test will do ko... 
02/18/2014 - 11:53 PM Ruby trunk Feature #9113: Ship Ruby for Linux with jemalloc out-of-the-box - Note, this pattern of 1. Retaining large number of objects 2. Allocating a big chunk of objects (and releasing) ...
https://bugs.ruby-lang.org/users/5660
Shallow routing allows you to change the URL without running data fetching methods again, including getServerSideProps, getStaticProps, and getInitialProps. You'll receive the updated pathname and the query via the router object (added by useRouter or withRouter), without losing state.

To enable shallow routing, set the shallow option to true. Consider the following example:

import { useEffect } from 'react'
import { useRouter } from 'next/router'

// Current URL is '/'
function Page() {
  const router = useRouter()

  useEffect(() => {
    // Always do navigations after the first render
    router.push('/?counter=10', undefined, { shallow: true })
  }, [])

  useEffect(() => {
    // The counter changed!
  }, [router.query.counter])
}

export default Page

The URL will get updated to /?counter=10, and the page won't get replaced; only the state of the route is changed.

You can also watch for URL changes via componentDidUpdate as shown below:

componentDidUpdate(prevProps) {
  const { pathname, query } = this.props.router
  // verify props have changed to avoid an infinite loop
  if (query.counter !== prevProps.router.query.counter) {
    // fetch data based on the new query
  }
}

Shallow routing only works for URL changes in the current page. For example, let's assume we have another page called pages/about.js, and you run this:

router.push('/?counter=10', '/about?counter=10', { shallow: true })

Since that's a new page, it'll unload the current page, load the new one and wait for data fetching even though we asked to do shallow routing.

When shallow routing is used with middleware, it will not ensure that the new page matches the current page, as was previously done without middleware. This is because middleware can rewrite dynamically, and that cannot be verified client-side without a data fetch, which is skipped with shallow routing. Therefore, a shallow route change must always be treated as shallow.
https://nextjs.org/docs/routing/shallow-routing
Hi guys

Im trying to make a maths game for a friend of mines little brother jase, but I need it to generate 2 random numbers between 1 and the maximum number that jase said. I cant figure out how to get the rand1 = (rand()%10)+1; so that the 10 = whatever number he entered.

here is the code i have so far before i became stuck

#include <iostream>
#include <ctime>
#include <cstdlib>
using namespace std;

int main()
{
    int choice;
    cout << "Welcome to jase's maths game, what operations would you like to practice\n";
    cout << "1) Addition +\n";
    cout << "2) Subtraction -\n";
    cout << "Please enter your choice 1 or 2 then press enter: \n";
    cin >> choice; // Find out what jase wants to do
    if (choice == 1) // If jase wants to do addition
    {
        int lowest = 1;
        int highest;
        int rand1;
        int rand2;
        cout << "What is the biggest number you want to use: ";
        cin >> highest;
        rand1 = (rand() % 10) + 1; // I need 10 to be the maximum number jase entered

Can somebody please show me how to do this i would be very grateful?

Many thanks
HLA91
https://www.daniweb.com/programming/software-development/threads/101723/random-numbers-based-on-user-input
Use C-style braces instead of indentation.

This is an encoding; you can also import this module in sites.py, and it will register the encoding on import. To use it, add the magic encoding comment to your source file:

# coding: cstyle

Then you can import the file to run it. Or, if you added the encoding to your sites.py, you can use IDLE to view the decoded file. Use Notepad++ or the editor of your choice.

Do not use this. DO NOT USE THIS. And do not use this. This works, but it is not a good idea.

Currently it just works for "if|elif|else|for|while|def|with" statements. Do not mix indentation and braces. You can do that, but it is not recommended. Treat this module just as a toy, or use it if you have some special purpose for it. This program is open source (LGPL); you can edit or use it for free. One use for this module could be reducing source code size (compressing).

You can also decode cstyle literals:

import cstyle

a = b'cstyle code'
a.decode('cstyle')

To learn how to code with cstyle, examine the examples provided here. This is not an alpha release, this is not a beta release, this is not a release at all. This is not a real program, at least at this point. Use it for educational purposes or whatever. There's no warranty. There might be parsing errors; it is tested on the code provided in the examples here.

This will convert:

if(1 in {1,2,3}){
    print(5)
    for(x in c){
        print(c)
    }
}

To this:

if(1 in {1,2,3}):
    print(5)
    for x in c:
        print(c)

It works for messy code too, even long one-line code.

Github project page:

Mail me at: persian.writer [at].
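cstyle's own source isn't shown here, but the codec-registration mechanism it relies on can be sketched with Python's standard codecs module. The toy codec below (named 'upper', purely illustrative) merely upper-cases text on encode and decode; a real source encoding like cstyle would instead translate braces into indentation before handing the text to the Python parser.

```python
import codecs

def _upper_encode(text, errors='strict'):
    # A stateless encoder returns (bytes, number of characters consumed)
    return text.upper().encode('utf-8'), len(text)

def _upper_decode(data, errors='strict'):
    # A stateless decoder returns (str, number of bytes consumed)
    return bytes(data).decode('utf-8').upper(), len(data)

def _search(name):
    # Search function consulted by the codecs machinery for unknown names
    if name == 'upper':
        return codecs.CodecInfo(encode=_upper_encode,
                                decode=_upper_decode,
                                name='upper')
    return None

codecs.register(_search)
```

Once registered, the codec works like any built-in encoding, e.g. b'cstyle code'.decode('upper'); a module imported at interpreter startup (the "register on import" trick the README mentions) makes it available everywhere, including in source-file coding declarations.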
https://pypi.org/project/cstyle/
The video 'Scripting in QF-Test (Basics)' explains the basic concepts of scripting. If you want to know more about scripting, have a look at the video 'Scripting in QF-Test (Advanced)'.

One of QF-Test's benefits is that complex tests can be created without writing a single line of code. However, there are limits to what can be achieved with a GUI alone. When testing a program which writes to a database, for example, one might want to verify that the actual values written to the database are correct; or one might want to read values from a database or a file and use these to drive a test. All this and more is possible with the help of powerful scripting languages like Jython, Groovy or JavaScript.

4.2+ While Jython has been supported since the beginning of QF-Test, Groovy found its way into QF-Test a bit later (QF-Test version 3). This language might be more convenient than Jython for those who are familiar with Java. Version 4.2 enabled JavaScript, which might be more suitable for web developers. It's mainly a matter of individual preference whether to use Jython, Groovy or JavaScript scripting inside QF-Test.

In this chapter the basics of the scripting features available in all supported languages are explained. Most of the examples can be applied exactly or with few changes in the other script languages. Method calls which vary in syntax are exemplified in the affected languages. Particularities of the script languages are described in the sections Fundamentals of the Jython integration, Scripting with Groovy and Scripting with JavaScript.

3.0+ The scripting language to use for a given 'Server script' or 'SUT script' node is determined by its 'Script language' attribute, so you can mix all three languages within a test-suite. The default language to use for newly created script nodes can be set via the option Default script language for script nodes.
The approach to scripting in QF-Test is inverse to that of other GUI test tools. Instead of driving the whole test from a script, QF-Test embeds scripts into the test-suite. This is achieved with the two nodes 'Server script' and 'SUT script'. Both nodes have a 'Script' attribute for the actual code.

3.0+ The internal script editor has some useful features to ease the typing of code. Reserved keywords, built-in functions, standard types, literals and comments are highlighted. Indentation is handled automatically inside code blocks. With [TAB] and [Shift-TAB] respectively, several selected lines can be indented or unindented manually. However, probably the most useful feature - at least for the QF-Test newbie - is the input assistance for many built-in methods. Type, for example, rc. and maybe some initial letters of a method name, then press [Ctrl-Space] to open a pop-up window displaying the appropriate methods and descriptions of QF-Test's run-context (cf. chapter 45). Select one of the methods and confirm with [Return] to insert it into the script code. To get a list of all objects equipped with help, just press [Ctrl-Space] with the cursor positioned after white space.

'Server scripts' are useful for tasks like calculating the values of variables or reading and parsing data from a file and using it to drive a test. 'SUT scripts' on the other hand give full access to the components of the SUT and to every Java API that the SUT exposes. An 'SUT script' might be used to retrieve or check values in the SUT to which QF-Test doesn't otherwise have access. The 'SUT script' node has a 'Client' attribute which requires the name of the SUT client to run in. 'Server scripts' are run in script interpreters for the different script languages embedded in QF-Test itself, while 'SUT scripts' are run in a script interpreter embedded in the SUT. These interpreters are independent of each other and do not share any state.
However, QF-Test uses the RMI connection between itself and the SUT for seamless integration of 'SUT scripts' into the execution of a test. Through the menu items »Extras«-»Jython terminal...« or »Extras«-»Groovy terminal...« etc. you can open a window with an interactive command prompt for the language interpreter embedded in QF-Test. You can use this terminal to experiment with Jython scripts, get a feeling for the language, but also to try out some sophisticated stuff like setting up database connections. The keystrokes [Ctrl-Up] and [Ctrl-Down] let you cycle through previous input and you can also edit any other line or mark a region in the terminal and simply press [Return] to send it to the interpreter. In that case QF-Test will filter the '>>>' and '...' prompts from previous interpreter output. Similar terminals are available for each SUT client. The respective menu items are located below the »Clients« menu. Note When working in a SUT script terminal, there's one thing you need to be aware of: The commands issued to the interpreter are not executed on the event dispatch thread, contrary to commands executed via 'SUT script' nodes. This may not mean anything to you and most of the time it doesn't cause any problems, but it may deadlock your application if you access any Swing or SWT components or invoke their methods. To avoid that, QF-Test provides the global method runAWT (and runSWT respectively) that executes arbitrary code on the event dispatch thread. For example, to get the number of visible nodes in a JTree component named tree, use runAWT("tree.getRowCount()") (or runAWT { tree.getRowCount() } in Groovy) to be on the safe side. When executing 'Server scripts' and 'SUT scripts', QF-Test provides a special environment in which a variable named rc is bound. This variable represents the run-context which encapsulates the current state of the execution of the test. 
It provides an interface (fully documented in section 45.5) for accessing QF-Test variables and for calling QF-Test procedures, and it can be used to add messages to the run-log. To 'SUT scripts' it also provides access to the actual Java components of the SUT's GUI. For those cases where no run-context is available, i.e. Resolvers, TestRunListeners, code executing in a background thread etc., QF-Test also provides a module called qf with useful generic methods for logging and other things. Please see section 45.6 for details.

One thing the run-context can be used for is to add arbitrary messages to the run-log that QF-Test generates for each test-run. These messages may also be flagged as warnings or errors. When working with compact run-logs (see the option Create compact run-log), nodes which most likely will not be needed for error analysis may be deleted from the run-log to preserve memory. This does not apply to error messages (rc.logError). They are kept, along with about 100 nodes preceding the error. Warnings (rc.logWarning) are also kept, however without preceding nodes. Normal messages (rc.logMessage) may be subject to deletion. If you really need to make sure that a message will definitely be kept in the run-log, you can enforce this by specifying the optional second parameter dontcompactify.

Most of the time, logging messages is tied to evaluating some condition. In that case it is often desirable to get a result in the HTML or XML report equivalent to that of a 'Check' node. The methods rc.check and rc.checkEqual will do just that. The optional last argument changes the error level in case of failure. Possible values are rc.EXCEPTION, rc.ERROR, rc.OK or rc.WARNING.

QF-Test has different kinds of variables. On the one hand there are variables belonging to the QF-Test environment, on the other hand variables of the script languages. The script language variables are further separated into server-side and SUT-side variables of the respective script interpreter.
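The code examples for the logging and check calls described above are missing from this copy of the text; the sketch below shows plausible usage. The exact parameter order of rc.check is an assumption that should be verified against the API reference in section 45.5, and the stand-in class exists only to make the sketch runnable outside QF-Test, where rc is bound automatically.

```python
class FakeRunContext:
    """Minimal stand-in for QF-Test's run-context (illustration only)."""

    def __init__(self):
        self.log = []

    def logMessage(self, message, dontcompactify=False):
        # The real call adds a message node to the run-log
        self.log.append(("message", message, dontcompactify))

    def check(self, condition, name):
        # The real call also produces a 'Check'-like result in the report
        self.log.append(("check", name, bool(condition)))
        return bool(condition)

rc = FakeRunContext()

rows = 10
rc.logMessage("table filled")
rc.logMessage("keep this even in a compact run-log", True)  # dontcompactify
rc.check(rows == 10, "row count plausible")
```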
The following graphic clarifies these differences: To share the different kinds of variables between QF-Test and the script interpreters, the rc object provides several methods for this purpose. These methods are explained in the next section.

Using QF-Test variables in scripts is not difficult. You can use the run-context's lookup method (see section 45.5 for the API reference) whenever you want to access a QF-Test value as a string. To make the results of a script available during further test execution, values can be stored in global or local variables. The effect is identical to that of a 'Set variable' node. The corresponding methods in the run-context are rc.setGlobal and rc.setLocal. After executing a 'Server script' that sets the global variable fileExists based on a file check, $(fileExists) will expand to true if the file /tmp/somefile exists and to false if it doesn't. To clear a variable, set it to None; to clear all global variables use rc.clearGlobals() from a 'Server script'.

Sometimes it is helpful to have a variable available in several scripting nodes of the same language. If the value of the variable is not a simple string or integer, it is normally not sufficient to use rc.setGlobal(...) to store it in a global QF-Test variable, because the value will be converted to a string in the process. Instead, such a variable should be declared global within the script, e.g. global globalVar in Jython. The globalVar is then accessible within all further scripting nodes of the same type ('Server scripts' or 'SUT scripts' of the same client). For changing the value of globalVar within another script, the global declaration is necessary again. Otherwise, a new local variable is created instead of accessing the existing global. Use the del statement, e.g. del globalVar, to remove a global Jython variable. In Groovy and JavaScript the declaration of global variables is even easier than in Jython: all variables that are not declared locally are assumed to be global.
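The 'Server script' example for rc.setGlobal referred to above ('$(fileExists) will expand to true…') is not included in this copy; it presumably looked roughly like the following. The stand-in class only exists to make the sketch runnable here; inside QF-Test the real rc is provided automatically.

```python
import os.path

class FakeRunContext:
    """Stand-in for QF-Test's rc (illustration only)."""

    def __init__(self):
        self.globals = {}

    def setGlobal(self, name, value):
        # QF-Test stores the value; $(name) later expands to its
        # string representation during test execution.
        self.globals[name] = value

rc = FakeRunContext()

# The reconstructed example: store whether /tmp/somefile exists
rc.setGlobal("fileExists", os.path.exists("/tmp/somefile"))
```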
Sometimes one would like to use variable values that have been defined in one interpreter in a different interpreter. For example, an 'SUT script' might have been used to create a list of items displayed in a table. Later we want to iterate over that list in a 'Server script'. To simplify such tasks, the run-context provides a symmetrical set of methods to access or set global variables in a different interpreter. For 'SUT scripts' these methods are named toServer and fromServer. The corresponding 'Server script' methods are toSUT and fromSUT. An 'SUT script' can, for instance, collect the cell values of a table and transfer them with rc.toServer. After such a script is run, the global variable named "tableCells" in the QF-Test Jython interpreter will hold the array of cell values.

Note The cell values in such an example are not necessarily strings. They could be numbers, date values, anything. Unfortunately Jython's pickle mechanism isn't smart enough to transport instances of Java classes (not even serializable ones), so the whole exchange mechanism is limited to primitive types like strings and numbers, along with Jython objects and structures like arrays and dictionaries.

For 'SUT scripts' the run-context provides an additional method that is extremely useful. Calling rc.getComponent("componentId") will retrieve the information of the 'Component' node in the test-suite with the 'QF-Test ID' "componentId" and pass it to QF-Test's component recognition mechanism. The whole process is basically the same as when simulating an event, including the possible exceptions if the component cannot be found. If the component is located, it will be passed to Jython, not as some abstract data but as the actual Java object. All methods exposed by the Java API for the component's class can now be invoked to retrieve information or achieve effects which are not possible through the GUI alone. To get a list of a component's methods see section 5.5.
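The elided rc.toServer example can be sketched as follows. The table access is simplified to a plain list of lists; in a real 'SUT script' the cell values would be read from the live Java table component, and rc.toServer would pickle them across the RMI connection into the server-side Jython interpreter.

```python
class FakeRunContext:
    """Stand-in for rc; the real toServer transfers the value into
    the server-side interpreter (illustration only)."""

    def __init__(self):
        self.server_globals = {}

    def toServer(self, name, value):
        self.server_globals[name] = value

rc = FakeRunContext()

# Simplified: cell values as they might be collected from the SUT's table
cells = [["John Smith", 45], ["Julia Black", 35]]
rc.toServer("tableCells", cells)
```

Afterwards a 'Server script' can iterate over the global variable tableCells, keeping in mind the pickle limitations mentioned in the note above.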
You can also access sub-items this way. If the componentId parameter references an item, the result of the getComponent call is a pair: the component and the item's index. The index can then be used to retrieve the actual value, for instance the value of a table cell. Note that Jython conveniently supports sequence unpacking during assignment, so both parts of the pair can be assigned in a single statement.

The run-context can also be used to call back into QF-Test and execute a 'Procedure' node. With rc.callProcedure, for example, the 'Procedure' named "clearField" in the 'Package' named "text" can be called with the parameter named "component" set to the value "nameField" and the parameter named "message" set to the value "nameField cleared". The same call can be written with Groovy or JavaScript syntax as well. The value returned by the 'Procedure' through a 'Return' node is returned as the result of the rc.callProcedure call.

Note Great care must be taken when using rc.callProcedure(...) in 'SUT script' nodes. Only short-running 'Procedures' should be called that won't trigger overly complex actions in the SUT. Otherwise, a DeadlockTimeoutException might be caused. For data-driven tests where for some reason the data must be determined in the SUT, use rc.toServer(...) to transfer the values to the QF-Test interpreter, then drive the test from a 'Server script' node where these restrictions do not apply.

Many of the options described in chapter 37 can also be set at runtime via rc.setOption. Constants for option names are predefined in the class Options, which is automatically available in all script languages.
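The rc.callProcedure example, reconstructed from the surrounding description (procedure "clearField" in package "text" with the two named parameters), might look like the sketch below. Passing the parameters as a dictionary matches common QF-Test Jython usage but should be checked against section 45.5; the stand-in class again only serves to make the sketch runnable outside QF-Test.

```python
class FakeRunContext:
    """Stand-in for rc (illustration only)."""

    def __init__(self):
        self.calls = []

    def callProcedure(self, name, args=None):
        # The real call executes the 'Procedure' node and returns the
        # value passed back through its 'Return' node.
        self.calls.append((name, dict(args or {})))
        return None

rc = FakeRunContext()

rc.callProcedure("text.clearField",
                 {"component": "nameField",
                  "message": "nameField cleared"})
```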
A real-life example where this might be useful is if you want to replay an event on a disabled component: you need to temporarily disable QF-Test's check for the enabled/disabled state via rc.setOption. After replaying this special event, the original value read from the configuration file or set in the option dialog can be restored by unsetting the option again.

Note Be sure to set QF-Test options in a 'Server script' node and SUT options in an 'SUT script' node, otherwise the setting will have no effect. The option documentation in chapter 37 shows which one to use.

You might face a situation where you have to search for a component before you can work with it. Sometimes recording all required components can be tedious or too complicated. For such cases you can use the method rc.overrideElement to assign the found component (determined either via generic components or via scripting) to a QF-Test component. Afterwards you can work with the assigned component and use all available QF-Test nodes. Let's imagine that we have a panel and we want to work with its first textfield, but because of changing textfields we cannot rely on the standard way of recognition. We can then implement a script which looks for the first textfield and assigns it to the PriorityAwtSwingComponent from the standard library qfs.qft. Once we have executed that script, we can use any QF-Test nodes with the PriorityAwtSwingComponent, which then actually performs all actions on the found textfield. This concept is very useful if you know an algorithm to determine the target component of your test-steps. You can find such priority components for all engines in the standard library qfs.qft. You can also find an illustrative example in the provided demo test-suite carconfig_en.qft, located in the directory demo/carconfig of your QF-Test installation.
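The two elided option examples might have looked like the sketch below. The constant name OPT_CHECK_ENABLED is a made-up placeholder, NOT a real QF-Test constant; the actual name must be taken from the Options class documented in chapter 37. The stand-in class only makes the set/unset pattern runnable here.

```python
class FakeRunContext:
    """Stand-in for rc; the real set/unsetOption change QF-Test
    options at runtime (illustration only)."""

    def __init__(self):
        self.overrides = {}

    def setOption(self, name, value):
        self.overrides[name] = value

    def unsetOption(self, name):
        # Restores the value from the configuration file / option dialog
        self.overrides.pop(name, None)

rc = FakeRunContext()

# Placeholder name; use the real constant from the Options class
OPT_CHECK_ENABLED = "OPT_CHECK_ENABLED"

rc.setOption(OPT_CHECK_ENABLED, False)  # allow event on disabled component
# ... replay the special event here ...
rc.unsetOption(OPT_CHECK_ENABLED)       # restore the configured value
```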
Note Jython is based on Python 2, not Python 3, so whenever just "Python" is mentioned in relation to Jython throughout this manual it refers to Python 2.

Python is an object oriented scripting language written in C by Guido van Rossum. A wealth of information, including an excellent Python tutorial, is available on the Python website. Python is a standard language that has been around for years with extensive freely accessible documentation. Therefore, this manual only explains how Jython is integrated into QF-Test, not the language itself. Python is a very natural language. Its greatest strength is the readability of Python scripts, so you should have no problems following the examples.

Jython (formerly called JPython) is a Java implementation of version 2 of the language Python. It has the same syntax as Python and almost the same set of features. The object systems of Java and Jython are very similar and Jython can be integrated seamlessly into applications like QF-Test. This makes it an invaluable tool for Java scripting. Jython has its own web page, where an extensive tutorial is also available which may help you get started with this scripting language. QF-Test uses Jython version 2.7, which supports a large majority of the standard Python 2 library.

The Jython language is not only used in 'Server script' and 'SUT script' nodes but also in $[...] expressions and to evaluate conditions like the 'Condition' attribute of an 'If' node.

Note In Jython scripts QF-Test variables with the syntax $(var) or ${group:name} are expanded before the execution of the script. This can lead to unexpected behavior, especially if the values of those variables contain multi-line strings or backslash characters ('\'). rc.lookup(...), which is evaluated during execution of the script, is the preferred method in this case (see subsection 12.2.3.1 for details).

Modules for Jython in QF-Test are just like standard Python modules.
You can import the modules into QF-Test scripts and call their methods, which simplifies the development of complex scripts and increases maintainability, since modules are available across test-suites. Modules intended to be shared between test-suites should be placed in the directory jython under QF-Test's root directory. Modules written specifically for one test-suite can also be placed in the test-suite's directory. The version-specific directory qftest-5.3.2/jython/Lib is reserved for modules provided by Quality First Software GmbH. Jython modules must have the file extension .py. A Jython module might, for example, define a procedure that sorts an array of numbers, which is then imported and called from a Jython script node.

Python comes with a simple line-oriented debugger called pdb. Among its useful features is the ability for post-mortem debugging, i.e. analyzing why a script failed with an exception. In Python you can simply import the pdb package and run pdb.pm() after an exception. This will put you in a debugger environment where you can examine the variable bindings in effect at the time of failure and also navigate up the call stack to examine the variables there. It is somewhat similar to analyzing a core dump of a C application. Though Jython comes with pdb, the debugger doesn't work very well inside QF-Test for various reasons. But at least post-mortem debugging of Jython scripts is supported from the Jython terminals (see section 12.3). After a 'Server script' node fails, open QF-Test's Jython terminal; for a failed 'SUT script' node open the respective SUT Jython terminal; then just execute debug(). This should have a similar effect as pdb.pm() described above. For further information please see the documentation of the Python debugger pdb.

Jython now has a real boolean type with values True and False, whereas in older versions the integer values 0 and 1 served as boolean values.
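The sorting module and its calling script mentioned above are elided in this copy; they might have looked like the following sketch. The module and function names are guesses, not taken from the manual.

```python
# --- contents of a hypothetical module file sortutils.py,
# --- placed in QF-Test's jython directory ---
def sortNumbers(values):
    """Return a sorted copy of a list of numbers."""
    return sorted(values)

# --- in a QF-Test 'Server script' node one would then write: ---
# import sortutils
# result = sortutils.sortNumbers([3, 1, 2])
result = sortNumbers([3, 1, 2])
```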
This can cause problems if boolean results from calls like file.exists() are assigned to a QF-Test variable, e.g. "fileExists", and later checked in a 'Condition' attribute in the form $(fileExists) == 1. Such conditions should generally be written as just $(fileExists) or rc.getBool("fileExists"), which work well with all Jython versions.

Summary and advice

5.3+ Characters in Jython literal strings like "abc" used to be limited to 8 bit, causing problems when trying to work with international characters. QF-Test version 5.3 introduces a solution for international characters in Jython scripts and 'Condition' attributes, based on the option Literal Jython strings are unicode (16-bit as in Java). If you start using QF-Test with version 5.3 or higher, that option is turned on by default. A small percentage of existing scripts will need to be updated when switching to unicode literals, so if QF-Test encounters an existing older system configuration the option remains off until explicitly turned on. Turning the option on is strongly recommended. The "Trouble shooting" section below explains what to do in case you encounter problems. If Jython unicode literals are activated, the option Default character encoding for Jython should be set to "utf-8" for maximum flexibility. The main thing to avoid, regardless of the option setting, is expansion of QF-Test variables in literal Jython strings like "$(somevar)". It can cause syntax errors or have unexpected results if the expanded variable contains newlines or backslash characters. Use rc.lookup("somevar") instead.

Background and history of Jython in QF-Test

In Java all strings are sequences of 16-bit characters, whereas Jython has two kinds of strings: 8-bit "byte strings" (type <str>) and 16-bit "unicode strings" (type <unicode>). The majority of strings used in QF-Test Jython scripts are either string constants like "abc", called literal strings, or Java string values converted to Jython, e.g.
the result of rc.lookup("varname"). Conversion from a Java string always results in a 16-bit unicode Jython string. For literal strings the result depends on the setting of the option Literal Jython strings are unicode (16-bit as in Java). When unicode and byte strings are compared or concatenated, Jython needs to convert one into the other. Conversion from unicode to byte strings is called encoding, the other way decoding. There are many different ways to encode 16-bit strings to 8-bit sequences and the rules to do so are called encodings. Common examples include "utf-8" or "latin-1". The option Default character encoding for Jython specifies the default encoding to use. For backwards compatibility the default used to be "latin-1" before QF-Test 5.3 and is now "utf-8", which is preferable because it is the most flexible and supports all international character sets. Jython in QF-Test is based on Python version 2. In early Python versions strings were made of 8-bit characters. Later, unicode strings with 16-bit characters were added. In Python 2 literal strings like "abc" are 8-bit byte strings, prepending 'u', i.e. u"abc" turns them into unicode strings. In Python 3 literal strings are unicode and one needs to prepend 'b', i.e. b"abc" to get 8-bit strings. In Jython 2.2, Java strings were converted to 8-bit Python strings based on the default encoding of the Java VM, typically ISO-8859-1 (also known as latin-1) in western countries. Since Jython 2.5, every Java string gets interpreted as a unicode Jython string. With 8-bit literal string this results in a lot of implicit conversion between 8-bit and unicode strings, for example when concatenating a Java string - now unicode - and a literal string like rc.lookup("path") + "/file". 5.3+ Before QF-Test version 5.3 the Jython script nodes had further problems with characters outside the 8-bit range, because of the way scripts were passed from QF-Test to the Jython compiler. 
In the process of fixing these issues it was decided that the best way to reduce problems with Jython literal strings was to adapt a feature already available in Python 2, namely from __future__ import unicode_literals, and make it possible to treat Jython literal strings in QF-Test as unicode. This results in literal strings being the same in all three scripting languages of QF-Test and fully compatible with Java strings, so the interaction of Jython scripts with everything else in QF-Test gets far more natural. The new option Literal Jython strings are unicode (16-bit as in Java) determines whether or not literal strings in QF-Test Jython scripts are treated as unicode. For backwards compatibility reasons the default remains 8-bit if QF-Test encounters an existing system configuration; otherwise unicode literals are now the default. The recommended Jython option settings are on for Literal Jython strings are unicode (16-bit as in Java) and "utf-8" for Default character encoding for Jython.

Trouble shooting Jython encoding issues

As explained in the previous sections, Jython has two string types, <type 'str'> for 8-bit "byte" strings and <type 'unicode'> for 16-bit "unicode" strings. Literal strings can be prefixed with 'b' (b"abc") to get byte strings or with 'u' (u"abc") for unicode strings. Plain literal strings ("abc") are unicode if the option Literal Jython strings are unicode (16-bit as in Java) is turned on and byte strings otherwise. Java strings resulting from Java function calls like rc.lookup("somevar") are unicode strings. The following advice should help minimize Jython string encoding issues:

- Variable expansions in literal strings like "$(varname)" have always been problematic and should be replaced with rc.lookup("varname").
- For strings containing backslashes, e.g. Windows paths, use rc.lookup("filename") (see above) or prepend 'r' (for "raw string") to literal strings, e.g. qftestDir = r"C:\Program Files\QFS\QF-Test".
- Use qf.println(...) instead of print ... because the latter gets passed through an 8-bit stream with the default Java encoding (and in the case of an 'SUT script' node also of the operating system) and thus may lose international characters on the way.
- Avoid str(some_object). As str is the byte string type this always creates a byte string and triggers encoding. Unless you specifically need a byte string it is much better to use unicode(some_object).
- The types Jython module provides the constants types.StringType and types.UnicodeType as well as the list types.StringTypes containing both. The latter is very useful when checking if an object is any type of string, regardless of 8-bit or 16-bit: instead of if type(some_object) == types.StringType write if type(some_object) in types.StringTypes.
- array.array(b'i', [1, 2, 3])

And of course our support is always there to help.

Getting the name of a Java object's class

This simple operation is surprisingly difficult in Jython. Given a Java object you would expect to simply write obj.getClass().getName(). For some objects this works fine, for others it fails with a cryptic message. This can be very frustrating. Things go wrong whenever there is another getName method defined by the class, which is the case for AWT Component, so getting the class name this way fails for all AWT/Swing component classes.

In Jython 2.2.1 the accepted workaround was to use the Python idiom obj.__class__.__name__. This no longer works in Jython 2.5 because it no longer returns the fully qualified class name, only the last part. Instead of java.lang.String you now get just String. The only solution that reliably works for version 2.5 is:

from java.lang import Class
Class.getName(obj.getClass())

This also works for 2.2, but it is not nice, so we initiated a new convenience module with utility methods called qf that gets imported automatically. As a result you can now simply write qf.getClassName(obj).
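The difference between the short and the qualified class name can be mimicked in plain CPython (a standalone sketch, not Jython; the Dummy class is made up for illustration):

```python
class Dummy:
    pass

obj = Dummy()

# As in Jython 2.5, __name__ only yields the last part of the class name:
short_name = obj.__class__.__name__

# The module-qualified name is the closest CPython analog to Java's Class.getName():
qualified_name = obj.__class__.__module__ + "." + obj.__class__.__name__

print(short_name)  # Dummy
```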
We are going to close this section with a complex example, combining features from Jython and QF-Test to execute a data-driven test. For the example we assume that a simple table with the three columns "Name", "Age" and "Address" should be filled with values read from a file. The file is assumed to be in "comma-separated-values" format with "|" as the separator character, one line per table-row, e.g.: John Smith|45|Some street, some town Julia Black|35|Another street, same town The example verifies the SUT's functionality in creating new table rows. It calls a QF-Test procedure that takes the three parameters, "name", "age", and "address", creates a new table-row and fills it with these values. Then the Jython script is used to read and parse the data from the file, iterate over the data-sets and call back to QF-Test for each table-row to be created. The name of the file to read is passed in a QF-Test variable named "filename". After filling the table, the script compares the state of the actual table component with the data read from the file to make sure everything is OK. Of course, the example above serves only as an illustration. It is too complex to be edited comfortably in QF-Test and too much is hard-coded, so it is not easily reusable. For real use, the code to read and parse the file should be parameterized and moved to a module, as should the code that verifies the table. This is done in the following Jython script with the methods loadTable to read the data from the file and verifyTable to verify the results. It is saved in a module named csvtable.py. An example module is provided in qftest-5.3.2/doc/tutorial/csvtable.py. Following is a simplified version: The code above should look familiar. It is an improved version of parts of example 12.21. With that module in place, the code that has to be written in QF-Test is reduced to: Groovy is another established scripting language for the Java Platform. 
It was invented by James Strachan and Bob McWhirter in 2003. All you need for doing Groovy is a Java Runtime Environment (JRE) and the groovy-all.jar file. This library contains a compiler to create Java class files and provides the runtime when using those classes in the Java Virtual Machine (JVM). You may think of Groovy as being Java with an additional .jar file. In contrast to Java, Groovy is a dynamic language, meaning that the behavior of an object is determined at runtime. Groovy also allows loading classes from source without creating class files. Finally, it is easy to embed Groovy scripts into Java applications like QF-Test.

The Groovy syntax is similar to Java, maybe more expressive and easier to read. When coming from Java you can embrace the Groovy style step by step. Of course we cannot explain all aspects of the Groovy language here. For in-depth information, please take a look at the Groovy home page or read the excellent book "Groovy in Action" by Dierk Koenig and others. Perhaps the following tips may help a Java programmer getting started with Groovy.

- println 'hello qfs' means the same as println('hello qfs').
- You can write for (i in 0..<len) { ... } instead of for (int i = 0; i < len; i++) { ... }.
- Several packages and classes are imported by default: java.lang.*, java.util.*, java.io.*, java.net.*, groovy.lang.*, groovy.util.*, java.math.BigInteger, java.math.BigDecimal.
- Instead of obj.getXxx(), you can simply write obj.xxx to access a property.
- == checks for equality, not identity, so you can write if (somevar == "somestring") instead of if (somevar.equals("somestring")). The method is() checks for identity.
- Variables can be declared with the def keyword. Using def x = 1 allows, for example, assigning a String value to the variable x later in the script.
- Arrays are declared as int[] a = [1, 2, 3] or def a = [1, 2, 3] as int[]. With def a = [1, 2, 3] you define a List in Groovy.
- Groovy adds convenience methods to existing classes, e.g. an isInteger() method to any String object in a Groovy script. That's what is called the GDK (analogous to the JDK in Java).
To get a list of those methods for an arbitrary object obj, you can simply invoke obj.class.metaClass.metaMethods.name or use the following example:

- A Closure is an object which represents a piece of code. It can take parameters and return a value. Like a block, a Closure is defined with curly braces { ... }. Blocks only exist in the context of a class, an interface, static or object initializers, method bodies, if, else, synchronized, for, while, switch, try, catch, and finally. Every other occurrence of {...} is a Closure. As an example let's take a look at the eachFileMatch GDK method of the File class. It takes two parameters, a filter (e.g. a Pattern) and a Closure. That Closure itself takes a parameter, a File object for the current file.
- Working with Lists and Maps is simpler than in Java.

Just like Java classes, Groovy source files (.groovy) can be organized in packages. Those intended to be shared between test-suites should be placed in the directory groovy under QF-Test's root directory. Others that are written specifically for one test-suite can also be placed in the directory of the test-suite. The version-specific directory qftest-5.3.2/groovy is reserved for Groovy files provided by Quality First Software GmbH.

The file MyModule.groovy could be saved in a subdirectory my below the suite directory. Then you can use the add method from MyModule as follows:

This code also shows another Groovy feature: type aliasing. By using import and as in combination you can reference a class by a name of your choice.

JavaScript has become the most widely used programming language in the web area and is one of the most popular script languages. QF-Test supports scripting with ECMAScript, which provides a common standard for the variety of different implementations of JavaScript. QF-Test must run with at least Java 8 to use JavaScript. It is possible to write code for the ECMAScript 6 standard. QF-Test automatically transpiles the code to the ECMAScript 5 standard before execution.
Special features of JavaScript as compared to other programming languages:

- JavaScript distinguishes undefined and null. A variable is undefined when it has no value. null is an intended null value that has to be assigned.
- The == operator checks for equality instead of identity. So you can use if (somevar == "somestring") to check for equality. To check for identity use the === operator.
- Variables declared with the let keyword are dynamically typed. E.g. let x = 1 makes it possible to assign a String to x later. Constants can be declared with const.

The following example shows how functionality can be transferred into a module. The module must be placed in the javascript directory inside the QF-Test root directory. The module can look like this:

The moremath.js module defines the two functions fibonacci and sumDigits. Each function has to be exported. This can be achieved via Node.js-like function exports. The following code can now be used inside the script node to take advantage of the moremath.js module's functions:

There are multiple ways to import modules. Modules provided by QF-Test can be imported using the import function. Java classes can also be imported using the import function. It is also possible to use the "require" function for importing npm modules, which are explained in the following section.

npm is a package manager for JavaScript with over 350,000 packages. The available packages are listed here. The packages can be used in QF-Test scripts. They need to be installed in the javascript folder of the QF-Test root directory.

npm install underscore

This line installs the npm underscore package from the OS command line. There are a few npm modules that are incompatible with the ECMAScript standard as they were written for Node.js.

Besides console.log() there is another method implemented in QF-Test to show output on the terminal. Note that these JavaScript scripts are not executed inside the browser but in a specific engine on the server or SUT side. This engine is called the Oracle Nashorn Engine and comes with JDK 8.
It allows the execution of ECMAScript directly in the JVM.
https://www.qfs.de/en/qf-test-manual/lc/manual-en-user_scripting.html
When writing webapps that make heavy use of javascript it gets hard to keep track of the javascript used, especially when the webapp loads content dynamically like in AJAX applications. The problem is that every container page has to be aware of the javascript needed by all the dynamically loaded component markup. The problems are:

- Which javascript must be loaded
- How to prevent duplicate variable assignments
- Dependencies between javascript libraries

In such cases a framework for dynamic javascript loading comes in handy, because it relieves the container from the need to know what script is needed in the component. The component declares the needed script and is called after the script is available. This article describes how to write dynamic javascript enabled webapps with the ztemplates java webframework.

Beginning with version 2.3.0 ztemplates includes a small javascript library named zscript that makes dynamic JavaScript loading easy. zscript is fully integrated into ztemplates. See Dynamic Javascript Loading with zscript and jquery published on dzone for a short introduction to zscript. The library uses jquery, so make sure to include your preferred version. To use it, add the following lines to your html head section:

<head>
<script type="text/javascript" src="jquery-x.x.x.js"></script>
<script type="text/javascript" src="${contextPath}/ztemplates/zscript.js"></script>
<script type="text/javascript" src="${contextPath}/ztemplates/zscript-definitions.js"></script>
</head>

Define a java class and annotate it with @ZScriptDefinition. Provide a name to the annotation. The name must match the name of the variable defined in the javascript snippet. The class name does not matter, but you may find it useful to name your script classes ZScript_name so you can easily find them. Now for each reusable piece of javascript functionality you want to make available to your webapp add an annotated class and a template containing the javascript.
The annotation ensures that ztemplates can find the javascript. ztemplates makes the script available to the zscript library and adds it to zscript-definitions.js. For example, to define a javascript variable (keeping a service object) named 'user', create a java class:

@ZRenderer(ZVelocityRenderer.class)
//this defines a javascript object called 'user'
@ZScriptDefinition("user")
public class ZScript_user {
}

and a javascript template in ZScript_user.vm at the same location as the class:

if(typeof user=='undefined') {
  var user = function() {
    //private area
    var loggedIn;

    function isLoggedIn() {
      return loggedIn;
    }

    function setLoggedIn(loggedInParam) {
      loggedIn = loggedInParam;
    }

    //public area contains methods that can be used from outside
    return {
      isLoggedIn: function () {
        return isLoggedIn();
      },
      setLoggedIn: function (loggedIn) {
        setLoggedIn(loggedIn);
      }
    };
  }();
}

This code calls a method and assigns the return value to a variable called user. This happens only if the variable has not already been created. The return value is a collection of functions that are available for calling. The other functions are private.

To use the javascript from your html page write this in your javascript tag:

<script>
zscript.requires(['user'], function(){
  user.setLoggedIn(true);
  if(user.isLoggedIn()){
    alert('logged in!');
  }
});

zscript.requires(['mylib'], function(){
  mylib.doSomething();
});
</script>

This states that the javascript library 'user' is required in the callback body, so pass the name as the first parameter to the requires method. The second parameter is a callback that will be called as soon as the 'user' library is available (which could also be immediately if the library has already been loaded). If the code contains more than one call to zscript.requires() the order in which the callbacks are called is preserved. In the example the callback for 'user' is always called before the callback for 'mylib'.
You may define other javascript libraries like this:

zscript.define('mylib', '/js/mylib.js'); //map the name 'mylib' to the url

ztemplates will ensure the javascript is loaded whenever you use it in a zscript.requires(['mylib'], function(){}) call. Be aware of the cross-domain restrictions placed upon the locations of your scripts, so best load them from the same server as your html.

Because there is no defined order in which the scripts are loaded at runtime, the libraries should not contain logic that references other dynamically loaded libraries:

var user = function() {
  var loggedIn;

  //Declare all the libraries used by this library here, will be executed before requires callbacks
  zscript.requiresInScript(['dialog', 'mylib']);

  //wrong, mylib may not be loaded
  mylib.doSomething();

  function isLoggedIn() {
    //OK, because dependency has been declared above and call is in function
    mylib.doSomething();
    return loggedIn;
  }

  function setLoggedIn(loggedInParam) {
    //OK, as dependency has been declared above.
    dialog.show('Something');
    loggedIn = loggedInParam;
  }

  return {
    isLoggedIn: function () {
      return isLoggedIn();
    },
    setLoggedIn: function (loggedIn) {
      return setLoggedIn(loggedIn);
    }
  };
}();

Note the if(typeof user=='undefined') condition that ensures that the variable is created only once. Using this pattern you get a clean separation between java, javascript and html and don't have to worry about script tags in your html header markup or duplicate instantiations of variables. All needed javascript is loaded on demand, or if you don't want that you can instruct ztemplates to collect all javascript in zscript-definitions.js.

Weblinks:
- ztemplates, the annotation based java webframework.
- Dynamic Javascript Loading with zscript and jquery published on dzone for a short introduction to zscript.
https://dzone.com/articles/clean-your-javascript
You are receiving this email because you have contacted me about the Cyrus 2 packages. I apologise if I included any of you in the recipient list by mistake, I had about 100 mails on Cyrus2 in my inbox... Due to the unusually high number of people interested in this upload, Debian Weekly News and Debian User are being bcc'ed as well.

Cyrus 2.1 has been installed in Debian. I will block the packages from entering Woody because I feel they are not mature enough for Debian stable yet. They are working, though (as long as you get SASL2 to do what you want, that is).

Changes: cyrus21-imapd (2.1.3CVS20020403-1) unstable; urgency=low

  * New CVS source, fixes a few bad bugs in sieve and lmtp
  * Improve ext3 handling in cyrus-makedirs, now request journaled data since that improves performance a lot. Your ext3 spool partitions should have a large journal, btw.
  * Make sure to tag the vendor string with "Debian (unstable)". It will change to something else when I start thinking of this package as a more stable 'for production use' (i.e. not tracking CVS blindly)
  * Move imtest and pop3test to a new cyrus21-clients package, as requested by GNUS users. Also move them from /usr/sbin to /usr/bin since they are apparently to be regarded as regular user tools...

Changes: cyrus21-imapd (2.1.3CVS20020331-1) unstable; urgency=low

  * Initial Debian Release, codenamed "The Higgs Bogons Are Out There", based on the cyrus-imapd packaging by Michael-John Turner <mj@debian.org>, and work done by David Parker <david@neongoat.com>. Thanks to David D. Kilzer for help with libcom-err, finding missing build dependencies, and lots of testing and suggestions, especially the upgrade from old cyrus packages guide.
  * THERE IS NO AUTOMATED UPGRADE PATH FROM THE OLD CYRUS 1.5 PACKAGE. Read the upstream docs for the manual upgrade procedure, and the Debian upgrade docs in /usr/share/doc/cyrus21-common/.
  * Initial set of changes to upstream:
    - Use non-ancient config.sub, config.guess and others
    - Log to MAIL facility, instead of LOCAL6, LOCAL7
    - Rename reconstruct, master, deliver, quota to cyrreconstruct, cyrmaster, cyrdeliver, cyrquota (to avoid namespace polution)
    - Fix annoying minor bugs (sent upstream)
    - Move all files that must not be run directly to /usr/lib/cyrus (such as imapd, pop3d, lmtpd, timsieved)
    - daemonized cyrmaster, added proper pidfile cron-style locking
  * Removed semi-broken INN support. If someone is willing to install it, test and help packaging it properly, it will be readded to the package.
  * Tracking CVS until we reach 2.1.4. Breakage may occur.

-- "One disk to rule them all, One disk to find them. One disk to bring them all and in the darkness grind them. In the Land of Redmond where the shadows lie." -- The Silicon Valley Tarot

Henrique Holschuh

Attachment: pgp8NR6jh7_LV.pgp
Description: PGP signature
https://lists.debian.org/debian-user/2002/04/msg00746.html
Management Views

A management view is a view configuration that applies only when the URL is prepended with the manage prefix. The manage prefix is usually /manage, unless you've changed it from its default by setting a custom substanced.manage_prefix in your application's .ini file. This means that views declared as management views will never show up in your application's "retail" interface (the interface that normal unprivileged users see). They'll only show up when a user is using the SDI to manage content.

There are two ways to define management views:

- Using the substanced.sdi.mgmt_view decorator on a function, method, or class.
- Using the substanced.sdi.add_mgmt_view() Configurator (a.k.a. config.add_mgmt_view) API.

The former is most convenient, but they are functionally equivalent. mgmt_view just calls into add_mgmt_view when found via a scan.

Declaring a management view is much the same as declaring a "normal" Pyramid view using pyramid.view.view_config with a route_name of substanced_manage. For example, each of the following view declarations will register a view that will show up when the /manage/foobar URL is visited:

The above is largely functionally the same as this:

Management views, in other words, are really just plain-old Pyramid views with a slightly shorter syntax for definition. Declaring a view a management view, however, does do some extra things that make it advisable to use it rather than a plain Pyramid view registration:

- It registers introspectable objects that the SDI interface uses to try to find management interface tabs (the row of actions at the top of every management view rendering).
- It allows you to associate a tab title, a tab condition, and cross-site request forgery attributes with the view.
- It uses the default permission sdi.view.

So if you want things to work right when developing management views, you'll use @mgmt_view instead of @view_config, and config.add_mgmt_view instead of config.add_view.
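How a registering decorator can supply such a default permission is easy to sketch in plain Python. This is a generic illustration only, not Substance D's actual implementation; the REGISTRY list and the foobar names are made up:

```python
REGISTRY = []

def mgmt_view(**settings):
    """Decorator factory that records a view registration with defaults."""
    settings.setdefault("permission", "sdi.view")  # the mgmt_view default permission
    def register(view_callable):
        REGISTRY.append(dict(settings, view=view_callable))
        return view_callable  # the decorated function stays directly callable
    return register

@mgmt_view(name="foobar", tab_title="Foobar")
def foobar_view(context, request):
    return {}

print(REGISTRY[0]["permission"])  # sdi.view
print(REGISTRY[0]["name"])        # foobar
```

The real decorator registers with Pyramid's Configurator instead of a plain list, but the defaulting behavior is the point here.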
As you use management views in the SDI, you might notice that the URL includes @@ as "goggles". For example, is the URL for seeing the folder contents. The @@ is a way to ensure that you point at the URL for a view and not get some resource with the __name__ of contents. You can still get to the folder contents management view even if that folder contains something named contents.

mgmt_view View Predicates

Since mgmt_view is an extension of Pyramid's view_config, it re-uses the same concept of view predicates as well as some of the same actual predicates:

- request_type, request_method, request_param, containment, attr, renderer, wrapper, xhr, accept, header, path_info, context, name, custom_predicates, decorator, mapper, and http_cache are supported and behave the same.
- permission is the same but defaults to sdi.view.

The following are new view predicates introduced for mgmt_view:

- tab_title takes a string for the label placed on the tab.
- tab_condition takes either a callable that returns True or False, or a plain True or False. If you supply a callable, it is passed context and request. The boolean determines whether the tab is listed in a certain situation.
- tab_before takes the view name of a mgmt_view that this mgmt_view should appear before (covered in detail in the next section).
- tab_after takes the view name of a mgmt_view that this mgmt_view should appear after. Also covered below.
- tab_near takes a "sentinel" from substanced.sdi (or None) that makes a best effort at placement independent of another particular mgmt_view. Also covered below. The possible sentinel values are: substanced.sdi.LEFT, substanced.sdi.MIDDLE, substanced.sdi.RIGHT.

Tab Ordering

If you register a management view, a tab will be added to the list of tabs. If no mgmt view specifies otherwise via its tab data, the tab order will use a default sorting: alphabetical order by the tab_title parameter of each tab (or the view name if no tab_title is provided).
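That default sort can be captured in a few lines of plain Python (a standalone illustration, not Substance D's actual code; treating the comparison as case-insensitive is an assumption):

```python
def default_tab_order(tabs):
    """Sort tabs alphabetically by tab_title, falling back to the view name."""
    return sorted(tabs, key=lambda tab: (tab.get("tab_title") or tab["view_name"]).lower())

tabs = [
    {"view_name": "acl_edit", "tab_title": "Security"},
    {"view_name": "contents"},  # no tab_title: the view name is used instead
    {"view_name": "props", "tab_title": "Properties"},
]
ordered = default_tab_order(tabs)
print([tab["view_name"] for tab in ordered])  # ['contents', 'props', 'acl_edit']
```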
The first tab in this tab listing acts as the "default" that is open when you visit a resource. Substance D does, though, give you some options to control tab ordering in larger systems with different software registering management views. Perhaps a developer wants to ensure that one of her tabs appears first in the list and another appears last, no matter what other management views have been registered by Substance D or any add-on packages. @mgmt_view (or the imperative call) allows the keywords tab_before and tab_after. Each takes the string tab name of the management view to place before or after. If you don't care (or don't know) which view name to use as a tab_before or tab_after value, use tab_near, which can be any of the sentinel values MIDDLE, LEFT, or RIGHT, each of which specifies a target "zone" in the tab order. Substance D will make a best effort to do something sane with tab_near. As in many cases, an illustration is helpful:

from substanced.sdi import LEFT, RIGHT

@mgmt_view(
    name='tab_1',
    tab_title='Tab 1',
    renderer='templates/tab.pt'
)
def tab_1(context, request):
    return {}

@mgmt_view(
    name='tab_2',
    tab_title='Tab 2',
    renderer='templates/tab.pt',
    tab_before='tab_1'
)
def tab_2(context, request):
    return {}

@mgmt_view(
    name='tab_3',
    tab_title='Tab 3',
    renderer='templates/tab.pt',
    tab_near=RIGHT
)
def tab_3(context, request):
    return {}

@mgmt_view(
    name='tab_4',
    tab_title='Tab 4',
    renderer='templates/tab.pt',
    tab_near=LEFT
)
def tab_4(context, request):
    return {}

@mgmt_view(
    name='tab_5',
    tab_title='Tab 5',
    renderer='templates/tab.pt',
    tab_near=LEFT
)
def tab_5(context, request):
    return {}

This set of management views (combined with the built-in Substance D management views for Contents and Security) results in:

Tab 4 | Tab 5 | Contents | Security | Tab 2 | Tab 1 | Tab 3

These management view arguments apply to any content type that the view is registered for. What if you want to allow a content type to influence the tab ordering?
As mentioned in the content type docs, the tab_order parameter overrides the mgmt_view tab settings, for a content type, with a sequence of view names that should be ordered (and everything not in the sequence, after).

Filling Slots

Each management view that you write plugs into various parts of the SDI UI. This is done using normal ZPT fill-slot semantics:

- page-title is the <title> in the <head>
- head-more is a place to inject CSS and JS in the <head> after all the SDI elements
- tail-more does the same, just before the </body>
- main is the main content area

SDI API

All templates in the SDI share a common "layout". This layout needs information from the environment to render markup that is common to every screen, as well as the template used as the "main template." This "template API" is known as the SDI API. It is an instance of the sdiapi class in substanced.sdi.__init__.py and is made available as request.sdiapi. The template for your management view should start with a call to request.sdiapi:

<div metal:

The request.sdiapi object has other convenience features as well. See the Substance D interfaces documentation for more information.

Flash Messages

Often you perform an action on one view that needs a message displayed by another view on the next request. For example, if you delete a resource, the next request might confirm to the user "Deleted 1 resource." Pyramid supports this with "flash messages." In Substance D, your applications can make a call to the sdiapi such as:

request.sdiapi.flash('ACE moved up')

...and the next request will process this flash message:

- The message will be removed from the stack of messages
- It will then be displayed in the appropriate styling based on the "queue"

The sdiapi provides another helper:

request.sdiapi.flash_with_undo('ACE moved up')

This displays a flash message as before, but also provides an Undo button to remove the previous transaction.
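The queue behavior (messages accumulate during one request and are drained by the next) can be sketched in plain Python. This illustrates only the semantics, not Pyramid's or Substance D's implementation; the FlashQueue class is made up:

```python
class FlashQueue:
    """Minimal flash-message store: push now, pop on the next request."""

    def __init__(self):
        self._queues = {}

    def flash(self, message, queue=""):
        # Remember the message for the next request, under the given queue name.
        self._queues.setdefault(queue, []).append(message)

    def pop_flash(self, queue=""):
        # The next request removes and returns all pending messages at once.
        return self._queues.pop(queue, [])

session = FlashQueue()
session.flash("ACE moved up")
session.flash("Deleted 1 resource")

first = session.pop_flash()
second = session.pop_flash()
print(first)   # ['ACE moved up', 'Deleted 1 resource']
print(second)  # [] (already drained)
```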
http://docs.pylonsproject.org/projects/substanced/en/latest/mgmtview.html
Building a production ready website with GatsbyJS & Prismic.io

GatsbyJS is one of my favourite open source frameworks for building static websites. Being able to take advantage of their plugin ecosystem to build rapid websites whilst using some of the latest tools such as GraphQL makes the experience both fun and simple. Prismic is a headless CMS with a powerful API which takes away the pain of managing rich content such as images, headings, lists etc. which feature regularly in web content. This website you're currently on is built on this exact technology stack, so we'll be using it as reference throughout.

Why Prismic?

The first iteration of our website was built without a CMS system. All content was managed from within the code, importing various assets / markdown files. This quickly became painful as the website started to grow, and required contributions to be made directly from within the repository. The hunt for a headless CMS started with Contentful, however something just didn't click. Although the developer experience seemed good, the actual UI and handling of content blocks just seemed over-complicated. Nothing else really tickled my toes until I found Prismic. The simplistic UI and the ability to 'slice data' made complete sense to me.

Structuring GatsbyJS

This website is built with two page types: static and dynamic.

- Static: Those pages which you're expecting and form the foundation of your website (homepage, about, contact).
- Dynamic: Generated content, such as blog posts (like this one!).

Tackling static pages with Gatsby & Prismic is the easiest place to get started. We know our website structure, and it's a simple matter of adding "types" which define key areas of each page. Let's get started.

Static Pages

With a Prismic repository setup, we can jump across to the "Custom Types" section of the dashboard: A type is "something" that we'll be able to query against to pull our data into the website.
Let's go ahead and create a new "Homepage" type, as a "Single Type" (static page). A "Single Type" means we'll only be able to create content against its schema once, which is exactly what we need. Prismic gives us a variety of blocks to work with, from an image, rich text, date pickers and more. By adding these to our new Homepage type, we can build up the schema of exactly what's needed to populate our homepage.

The homepage of this website has a number of areas to edit, from the hero to the about Invertase section. I simply created a number of "Title" and "Rich Text" named blocks which will correspond to that area on our page:

Once saved, head back over to the "Content" section of the Prismic dashboard. When creating a new piece of content, you'll be able to work on your Homepage type. Now it's a simple matter of filling out your content as you see fit.

You may have noticed that on our homepage we have a few areas which aren't as they appear below, for example the technologies is a comma separated list, whereas the website shows it automatically cycling through each technology. This is where our code comes into play; we'll pull that text out of Prismic as-is, then split it by commas to generate our list to cycle through. Prismic provides the content, rather than the functionality.

Dynamic Pages

A dynamic post allows the user to be more creative with the content. A blog post for example requires the end user to submit a slug, the rich content and maybe a few other nice-to-haves such as a hero image, author name etc. Within Prismic, create a new Custom Type as a "Repeatable Type" called blog_post. You'll be able to create multiple content articles using the schema of this type. For example, our blog post Custom Type is setup as follows:

Each section is clearly labelled for its purpose, however the content section is simply a rich text block waiting for the user to start writing their content. We also leave the slug field up to the user.
Back in our Content section of the dashboard, we can now create our own blog post... here's the one I'm currently working on (blogception right here):

Sourcing Prismic content with GatsbyJS

GatsbyJS allows us to source data, which is a fancy way of saying it creates a GraphQL schema from our data. Prismic provides an API to pull our content from a client. Now, we could go about this ourselves using various API requests to grab our content and add it to the GraphQL database, however we've luckily got access to a gatsby-source-prismic plugin which takes care of that work for us.

Go ahead and install the plugin, adding it to your gatsby-config.js file, grabbing the API key from your Prismic dashboard. Start the Gatsby development server (gatsby develop) and the plugin will download and source the created content into GraphQL for us - pretty neat!

The plugin populates the database with all of our content with the prefix prismic. Open the GraphQL explorer Gatsby hosts at

Let's access the homepage data, querying the prismicHomepage node the plugin has created for us:

We can now extend this query to grab any additional fields, e.g. about. Whether you use text or html is your call, however where the HTML semantics are important, it's better to use the text field and control the HTML yourself.

Creating static pages: useStaticQuery

Using React 16.8+, we're able to take advantage of React Hooks. GatsbyJS exposes a function called useStaticQuery which allows our pages to statically query data which is injected into our component for consumption. If you're using a lower version of React, the StaticQuery component has you covered.

Create a new file in pages/index.js within your Gatsby project. Gatsby will automatically generate a new page at the root of your website (the homepage).
In that file, we can now query our data using useStaticQuery, and inject it into our website:

import React from 'react';
import { graphql, useStaticQuery } from 'gatsby';

export default function Homepage() {
  const { homepage } = useStaticQuery(graphql`
    query {
      homepage: prismicHomepage {
        data {
          heading {
            text
          }
        }
      }
    }
  `);

  return <div>{homepage.data.heading.text}</div>;
}

You will now find the content pulling through to your page. Now it's just a matter of building out your website in the usual React fashion and querying the data as and when it's needed. This works great for static content that we are expecting, and you're even able to inject rich content into the website by passing the html output from Prismic into React's dangerouslySetInnerHTML.

Creating dynamic pages: createPage

A website containing dynamic content such as blog posts means you're not able to statically query the page data, since we're not exactly sure what pages or content are going to be created. Dynamic pages commonly use a repeatable page template too. Rather than manually adding a new pages/blog/my-new-post.js file and querying the same data schema for every post, we can instead create these pages during Gatsby's build process using the createPage action. Go ahead and open (or create) the gatsby-node.js file in the root of your project. We're going to take advantage of the gatsby-source-prismic plugin and the createPages API from Gatsby. As we have seen, the plugin creates "nodes" of data and Gatsby transforms them into a GraphQL schema which we can query. The createPages API runs after the GraphQL schema has been created and allows us to create our own pages, as the name suggests. We previously made a blog_post type, which our plugin also knows about. It's taken the data for this type and created BlogPost nodes, which we can query with allPrismicBlogPost: Using this data, we can now programmatically create pages!
Add the following to the gatsby-node.js file:

const path = require('path');

exports.createPages = async ({ reporter, actions, graphql }) => {
  const { createPage } = actions;
  const blogTemplate = path.resolve('src/templates/blog-post/index.js');

  // Query our blog posts
  const result = await graphql(`
    {
      posts: allPrismicBlogPost {
        edges {
          node {
            id
            uid
          }
        }
      }
    }
  `);

  if (result.errors) {
    reporter.panic(result.errors);
  }

  result.data.posts.edges.forEach(({ node }) => {
    // Create a page for each blog post
    createPage({
      path: `/blog/${node.uid}`,
      component: blogTemplate,
      context: {
        id: node.id,
      },
    });
  });
};

The uid is the unique identifier we defined within Prismic, which we use to generate the page slug, and the id is the internal GatsbyJS-generated ID for our data node. The id is passed into the context, which we can use to query the node data within our page template. Above we defined a page template at src/templates/blog-post/index.js; this will be used for all of our blog posts. We now need to create that file and return a React component:

import React from 'react';

function BlogPost() {
  return (
    <>
      <h1>Blog post title</h1>
      <article>Blog post content</article>
    </>
  );
}

export default BlogPost;

Rerunning gatsby develop will now create new blog posts with a unique slug for each, however they'll all return the same component we just defined. In the previous step we passed the node id into the page context. We are able to perform a page query, which exposes any context values we passed through to query with. Using the generated node id, let's query the blog post content:

import React from 'react';
import { graphql } from 'gatsby';

function BlogPost() {
  // ..
}

export const pageQuery = graphql`
  query BlogPost($id: String!)
  {
    prismicBlogPost(id: { eq: $id }) {
      data {
        title {
          text
        }
        content {
          html
        }
      }
    }
  }
`;

export default BlogPost;

This page query runs for every page created, with the context values available, and the result of the entire query is passed through as the data prop to our component. Now it's a simple matter of taking our Prismic content and building a page with React:

function BlogPost({ data }) {
  const { title, content } = data.prismicBlogPost.data;

  return (
    <>
      <h1>{title.text}</h1>
      <article dangerouslySetInnerHTML={{ __html: content.html }} />
    </>
  );
}

Summary

In summary, both Gatsby and Prismic provide a flexible ecosystem for building websites without having to worry about dealing with rich content and user access/knowledge. Prismic doesn't just have to be used for mapping content directly to a website; the flexible content and type system means you can store any data and use it in whatever way you see fit - for example, storing a list of product IDs and querying those IDs during the Gatsby build process to create an orderable product listing.

Share this blog post:
https://invertase.io/blog/gatbsyjs-prismicio/
First published by IBM at..

Like DOM, SAX parsers control the complete parsing process. By default, a SAX parser starts parsing at the beginning of a document and continues until the end. Client event handlers are informed through callbacks about the events during this parsing process. To avoid unnecessary overhead during document screening, such an event handler may want to stop the parsing process once it has gathered the required information. A common technique for achieving this in SAX is throwing an exception, which is discussed in the developerWorks tip "Stop a SAX parser when you have enough data" by Nicholas Chase. This will cause SAX to stop the parsing process. The information gathered by the event handler must be encoded in an error message that's wrapped in an exception object and posted to the parser's client. A special error handler in the client receives this exception and must parse the parser's error message to retrieve the required information! This may be a solution to the screening problem, but it's a complicated one.

Enter StAX

StAX offers a pull parser that gives client applications full control over the parsing process. A client application may decide at any time to discontinue the parsing process, and no tricks are required to stop the parser. This is ideal for screening purposes. Listing 1 shows what a simple document classifier might look like. I use the cursor-based StAX API for this example. At the very first start tag of the document (the root element tag), I retrieve the kind attribute from this element. The value of this attribute is then passed back to the client and the parsing process is discontinued. The client may now act upon this returned value.
import java.io.*;
import javax.xml.stream.*;

public class Classifier {
  // Holds factory instance
  private XMLInputFactory xmlif;

  public static void main(String[] args)
      throws FileNotFoundException, XMLStreamException {
    Classifier router = new Classifier();
    String kind1 = router.getKind("somefile.xml");
    String kind2 = router.getKind("otherfile.xml");
  }

  /**
   * Return the document kind
   * @param filename the document to inspect
   * @return the value of the "kind" attribute of the root element
   */
  private String getKind(String filename)
      throws FileNotFoundException, XMLStreamException {
    // Create input factory lazily
    if (xmlif == null) {
      // Use reference implementation
      System.setProperty(
        "javax.xml.stream.XMLInputFactory",
        "com.bea.xml.stream.MXParserFactory");
      xmlif = XMLInputFactory.newInstance();
    }

    // Create stream reader
    XMLStreamReader xmlr =
      xmlif.createXMLStreamReader(new FileReader(filename));

    // Main event loop
    while (xmlr.hasNext()) {
      // Process single event
      switch (xmlr.getEventType()) {
        // Process start tags
        case XMLStreamReader.START_ELEMENT:
          // Check attributes for first start tag
          for (int i = 0; i < xmlr.getAttributeCount(); i++) {
            // Get attribute name (getAttributeLocalName returns a String;
            // getAttributeName would return a QName)
            String localName = xmlr.getAttributeLocalName(i);
            if (localName.equals("kind")) {
              // Return value
              return xmlr.getAttributeValue(i);
            }
          }
          return null;
      }
      // Move to next event
      xmlr.next();
    }
    return null;
  }
}

Note that I use an instance field to hold the XMLInputFactory instance. This is done to improve efficiency. Compared to the actual parsing process (which is blazingly fast), the execution of XMLInputFactory.newInstance() and xmlif.createXMLStreamReader() causes considerable overhead. While createXMLStreamReader() must be executed once for each new document, you may reuse the XMLInputFactory instance and thus avoid the repeated execution of XMLInputFactory.newInstance().
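The same pull-style early exit is not specific to Java; here is a rough analogue using Python's standard library iterparse, which also delivers parse events on demand. The function name get_kind mirrors the listing above, but this sketch is mine, not part of the original tip:

```python
import io
import xml.etree.ElementTree as ET

def get_kind(stream):
    """Pull parse events and stop at the very first start tag.

    Like the StAX classifier above, we simply stop iterating once the
    root element's "kind" attribute is in hand; no exception tricks.
    """
    for event, elem in ET.iterparse(stream, events=("start",)):
        # The first "start" event is always the root element.
        return elem.get("kind")
    return None

# The rest of the (possibly huge) document is never parsed.
doc = io.StringIO('<report kind="quarterly"><row/><row/></report>')
print(get_kind(doc))  # quarterly
```

Because iterparse is a generator, returning from the loop simply abandons the parse; nothing after the root start tag is ever read.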
Next steps

This tip demonstrated the use of StAX parsers for screening and classification of XML documents. In the next tip, I will show how XML documents can be created through the StAX API.
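For comparison, the SAX early-exit trick described at the top of this tip — aborting by throwing an exception from the handler — can be sketched with Python's standard xml.sax module. The class and function names here are my own, purely illustrative:

```python
import xml.sax

class StopParsing(Exception):
    """Carries the gathered value out of the handler."""
    def __init__(self, value):
        super().__init__(value)
        self.value = value

class KindHandler(xml.sax.ContentHandler):
    """Aborts the parse at the first start tag (the root element)."""
    def startElement(self, name, attrs):
        raise StopParsing(attrs.get("kind"))

def classify(xml_bytes):
    try:
        xml.sax.parseString(xml_bytes, KindHandler())
    except StopParsing as stop:
        # The "error" is really our answer -- exactly the awkwardness
        # the tip complains about.
        return stop.value
    return None
```

It works, but the control flow is inverted: the useful result travels through the error path, which is the complication the pull-parser approach avoids.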
http://www.devx.com/ibm/Article/20271
Archives

XML Namespace URIs

I came across an article discussing the need (or lack thereof) for XML namespace URIs that was interesting, especially if you've worked with a lot of different namespace URIs over the years. While I don't necessarily agree with several of the statements made (read it yourself to form your own opinion though), there were some points that made sense, especially when it comes to determining if you really need namespaces in a particular XML document.

Indigo Video on MSDN

MSDN recently released a great introductory video on Indigo. Definitely worth the watch if you're interested in seeing the future of messaging.

Another Indigo Video

Robert Scoble added a comment to my initial Indigo video post and let me know of another one located on the Channel 9 site. Steve Millet gives a nice overview of Indigo (with demo) which I'd recommend even if you're not interested in learning more about Indigo and just want to know more about security as well as the contract-first methodology for SOAs. You may get a little sea-sick with the camera moving around some during the code demo, but it helped keep me more focused. :-)
http://weblogs.asp.net/dwahlin/archive/2005/04
In this Google flutter code example we are going to learn how to use the Stack widget in Flutter.

stack.dart

import 'package:flutter/material.dart';

class BasicStack extends StatelessWidget {
  // A widget that positions its children relative to the edges of its box.
  // You can place widgets on top of each other with this widget, for example
  // placing a Text widget on top of an Image.
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("Stack Widget")),
      body: Stack(
        // Here, we'll place a text on top of an image and position the text
        // at the bottom left corner.
        children: <Widget>[
          new Image.asset(
            'assets/images/umbrella.png',
            height: double.infinity,
            width: double.infinity,
            fit: BoxFit.fill,
          ),
          // Position the text at the bottom left corner.
          Positioned(
            bottom: 15.0,
            left: 20.0,
            child: Text("Inducesmile.com",
                style: TextStyle(
                    fontWeight: FontWeight.bold,
                    fontSize: 28.0,
                    color: Colors.white)),
          ),
        ],
      ),
    );
  }
}

If you have any questions or suggestions kindly use the comment box or you can contact us directly through our contact page below.
https://inducesmile.com/google-flutter/how-to-use-stack-widget-in-flutter/
Blurring and anonymizing faces using OpenCV and Python

2022-01-31 02:51:53 【Haiyong】

In this article, we will learn how to use OpenCV and Python to blur and anonymize faces. To do so, we will use a cascade classifier to detect faces. Make sure to download the cascade XML file from this link: drive.google.com/file/d/1PPO…

Method

- First, we use the built-in face detection algorithm to detect faces from real-time video or an image. Here, we will use a cascade classifier method to detect faces from real-time video (using a webcam).
- Then, read frames from the live video. Store the latest frame and convert it to grayscale, to better understand the features.
- Now, to make the output look nice, we will draw a colored bounding rectangle around the detected face. We want the detected face to be blurred, so we use the median blur function to do exactly that, applied to the region where the face was found.
- Finally, we want to show the blurred face. Using the imshow function, the frame is displayed until we press a key.

Step by step implementation:

Step 1: Import the face detection algorithm, called a cascade classifier.
import cv2

# Face detection
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

Step 2: Capture frames from the video, to detect faces in each frame.

video_capture = cv2.VideoCapture(0)

while True:
    # Capture the latest frame from the video
    check, frame = video_capture.read()

Step 3: Convert the captured frame to grayscale.

    # Convert the frame to grayscale (black and white shades)
    gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cascade.detectMultiScale(gray_image, scaleFactor=2.0, minNeighbors=4)

Step 4: Draw a colored rectangle around each detected face.

    for x, y, w, h in face:
        # Draw a border around the detected face
        # (the border color here is green, the thickness is 3)
        image = cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)

Step 5: Blur the region inside the rectangle (containing the detected face).

        # Blur the face inside the rectangle
        image[y:y+h, x:x+w] = cv2.medianBlur(image[y:y+h, x:x+w], 35)

Step 6: Show the final output, i.e. the detected face (inside the rectangle) blurred.

    # Show the blurred face in the video
    cv2.imshow('face blurred', frame)
    key = cv2.waitKey(1)

Here is the complete implementation:

import cv2

# Face detection
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# VideoCapture is used to capture video from a camera connected to the
# system. You can pass 0 or 1:
# 0 for a laptop webcam
# 1 for an external webcam
video_capture = cv2.VideoCapture(0)

# The while loop runs indefinitely, capturing frame after frame,
# because a video is just a sequence of frames
while True:
    # Capture the latest frame from the video
    check, frame = video_capture.read()

    # Convert the frame to grayscale (black and white shades)
    gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the captured frame
    # scaleFactor: specifies how much the image size is reduced at each image scale.
    # minNeighbors: specifies how many neighbors each rectangle should
    # have to be kept. The rectangle contains the detected object -
    # here, the object is the face.
    face = cascade.detectMultiScale(
        gray_image, scaleFactor=2.0, minNeighbors=4)

    for x, y, w, h in face:
        # Draw a border around the detected face
        # (the border color here is green, the thickness is 3)
        image = cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)

        # Blur the face inside the rectangle
        image[y:y+h, x:x+w] = cv2.medianBlur(image[y:y+h, x:x+w], 35)

    # Show the blurred face in the video
    cv2.imshow('face blurred', frame)
    key = cv2.waitKey(1)

    # This statement runs once per frame.
    # If we get a key press, and that key is a 'q', we break out of the
    # while loop:
    if key == ord('q'):
        break

# Then release the capture and close the windows:
video_capture.release()
cv2.destroyAllWindows()

Output:

In closing: I have been writing a technology blog for a long time, mainly on Juejin (the Nuggets), and this is one of my articles, on blurring and anonymizing faces with OpenCV and Python. I like to share technology and happiness through articles. You can visit my blog at juejin.cn/user/204034… to learn more. I hope you will like it! You are welcome to leave your opinions and suggestions in the comments!

author[Haiyong]
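As an aside on the medianBlur call used above: a median filter replaces each pixel with the median of the values in a window around it, which is what smears the face beyond recognition while resisting outliers. A tiny pure-Python illustration on a 1-D row of pixel values — not OpenCV's implementation, just the idea:

```python
from statistics import median

def median_blur_1d(pixels, ksize=3):
    """Replace each value with the median of its ksize-wide window.

    Edges are handled by clamping the window to the row, which is one
    of several common border strategies (an assumption of this sketch).
    """
    half = ksize // 2
    out = []
    for i in range(len(pixels)):
        lo = max(0, i - half)
        hi = min(len(pixels), i + half + 1)
        out.append(median(pixels[lo:hi]))
    return out

row = [10, 10, 200, 10, 10]     # one bright outlier pixel
print(median_blur_1d(row))      # the 200 spike is flattened away
```

OpenCV does the same thing in 2-D with a ksize x ksize window (35 x 35 in the article), which is why fine facial detail cannot survive.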
https://en.pythonmana.com/2022/01/202201310251516010.html
PicoCTF 2018 Writeup: Forensics

Oct 13, 2018 08:56 · 1346 words · 7 minute read

Forensics Warmup 1

Problem

Can you unzip this file for me and retreive the flag?

Solution

Just unzip the file.

flag: picoCTF{welcome_to_forensics}

Forensics Warmup 2

Problem

Hmm for some reason I can’t open this PNG? Any ideas?

Solution

Using the file command, you can see that the image is, in fact, in jpeg format not png:

❯ file flag.png
flag.png: JPEG image data, JFIF standard 1.01

Open the image as a jpeg file to get the flag.

flag: picoCTF{extensions_are_a_lie}

Desrouleaux

Problem

Our network administrator is having some trouble handling the tickets for all of of our incidents. Can you help him out by answering all the questions? Connect with nc 2018shell2.picoctf.com 10493. incidents.json

Solution

Here is the solution script:

from sets import Set
from pwn import *
import json

sh = remote('2018shell2.picoctf.com', 10493)

with open('./incidents.json') as f:
    data = json.loads(f.read())

# question 1
src = {}
for each in data[u'tickets']:
    src_ip = each[u'src_ip']
    if src_ip in src:
        src[src_ip] += 1
    else:
        src[src_ip] = 1

print sh.recvuntil('ones.\n')
sh.sendline(max(src, key=src.get))

# question 2
target = sh.recvuntil('?\n').split(' ')[-1][:-2]
target_ls = {}
count = 0
for each in data[u'tickets']:
    if each[u'src_ip'] == target and each[u'dst_ip'] not in target_ls:
        target_ls[each[u'dst_ip']] = True
        count += 1
sh.sendline(str(count))

# question 3
hashes = {}
for each in data[u'tickets']:
    hash = each[u'file_hash']
    if hash not in hashes:
        hashes[hash] = Set()
    hashes[hash].add(each[u'dst_ip'])

avg = 0
for each in hashes:
    e = hashes[each]
    avg += len(e)
avg = (avg * 1.0) / len(hashes)

print sh.recvuntil('.\n')
sh.sendline(str(avg))

sh.interactive()

flag: picoCTF{J4y_s0n_d3rUUUULo_a062e5f8}

Reading Between the Eyes

Problem

Stego-Saurus hid a message for you in this image, can you retreive it?

Solution

This problem is about using the Least Significant Bit algorithm for image steganography.
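To see how little is actually going on: LSB steganography stores one message bit in the least significant bit of each pixel/channel byte, so the image looks unchanged while the low bits spell out the payload. A toy round-trip over a plain list of byte values (my own illustration, not the actual challenge pipeline):

```python
def lsb_encode(message, carrier):
    """Overwrite the LSB of each carrier byte with one message bit."""
    bits = [(byte >> shift) & 1
            for byte in message
            for shift in range(7, -1, -1)]
    return [(c & ~1) | bit for c, bit in zip(carrier, bits)]

def lsb_decode(carrier):
    """Collect LSBs and pack every 8 of them back into a byte."""
    bits = [c & 1 for c in carrier]
    out = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

pixels = list(range(60, 80))      # pretend channel values
stego = lsb_encode(b"hi", pixels)
print(lsb_decode(stego)[:2])      # b'hi'
```

Extracting the hidden message is therefore just reading pixel values and masking with & 1 — which is all an online decoder does for you.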
It can be solved using an online decoder.

flag: picoCTF{r34d1ng_b37w33n_7h3_by73s}

Recovering From the Snap

Problem

There used to be a bunch of animals here, what did Dr. Xernon do to them?

Solution

This problem is about recovering files from a FAT filesystem. It can be done using TestDisk, a powerful free data recovery tool. You can follow this guide to recover the theflag.jpg file.

flag: picoCTF{th3_5n4p_happ3n3d}

admin panel

Problem

We captured some traffic logging into the admin panel, can you find the password?

Solution

If you look for http requests, you will see two login attempts, and the second request contains the flag:

POST /login HTTP/1.1
Host: 192.168.3.128:53
Connection: keep-alive
Upgrade-Insecure-Requests: 1

user=admin&password=picoCTF{n0ts3cur3_9feedfbc}

flag: picoCTF{n0ts3cur3_9feedfbc}

hex editor

Problem

This cat has a secret to teach you. You can also find the file in /problems/hex-editor_4_0a7282b29fa47d68c3e2917a5a0d726b on the shell server.

Solution

You can get the flag by looking at the hex dump of the image, or just print out all the readable parts of the file:

❯ strings hex_editor.jpg | grep pico
Your flag is: "picoCTF{and_thats_how_u_edit_hex_kittos_dF817ec5}"

flag: picoCTF{and_thats_how_u_edit_hex_kittos_dF817ec5}

Truly an Artist

Problem

Can you help us find the flag in this Meta-Material? You can also find the file in /problems/truly-an-artist_3_066d6319e350c1d579e5cf32e326ba02.

Solution

The flag is in the EXIF metadata of the image:

❯ exiftool 2018.png
ExifTool Version Number         : 11.01
File Name                       : 2018.png
Directory                       : .
File Size                       : 13 kB
File Modification Date/Time     : 2018:10:09 23:34:05+08:00
File Access Date/Time           : 2018:10:10 09:15:07+08:00
File Inode Change Date/Time     : 2018:10:09 23:34:06+08:00
File Permissions                : rw-r--r--
File Type                       : PNG
File Type Extension             : png
MIME Type                       : image/png
Image Width                     : 1200
Image Height                    : 630
Bit Depth                       : 8
Color Type                      : RGB
Compression                     : Deflate/Inflate
Filter                          : Adaptive
Interlace                       : Noninterlaced
Artist                          : picoCTF{look_in_image_eeea129e}
Image Size                      : 1200x630
Megapixels                      : 0.756

flag: picoCTF{look_in_image_eeea129e}

now you don’t

Problem

We heard that there is something hidden in this picture. Can you find it?

Solution

You can create another image with only one shade of red and diff that image with the one provided to get the flag:

❯ convert -size 857x703 canvas:"#912020" pure.png
❯ compare nowYouDont.png pure.png diff.png

flag: picoCTF{n0w_y0u_533_m3}

Ext Super Magic

Problem.

Solution

You are given an ext3 file image that is broken. To fix the image, you have to correct the magic number of the file. You can read more about the ext3 file format over here. Here is the script that writes the magic number 0xEF53 into the file:
The answer to this question is that a domain is resolved through DNS packets. If we only look for DNS packets in wireshark, we will find the flag. flag: picoCTF{w4lt3r_wh1t3_33ddc9bcc77f22a319515c59736f64a2} core Problem This program was about to print the flag when it died. Maybe the flag is still in this core file that it dumped? Also available at /problems/core_1_722685357ac5a814524ee76a3dcd1521 on the shell server. Solution Let’s first take a look at the program using radare2: [0x080484c0]> s sym.print_flag [0x080487c1]> pdf ┌ (fcn) sym.print_flag 43 │ sym.print_flag (); │ ; var int local_ch @ ebp-0xc │ ; CALL XREF from sym.main (0x8048802) │ 0x080487c1 55 push ebp ; ./print_flag.c:90 │ 0x080487c2 89e5 ebp = esp │ 0x080487c4 83ec18 esp -= 0x18 │ 0x080487c7 c745f4390500. dword [local_ch] = 0x539 ; ./print_flag.c:91 ; 1337 │ 0x080487ce 8b45f4 eax = dword [local_ch] ; ./print_flag.c:92 │ 0x080487d1 8b048580a004. eax = dword [eax*4 + obj.strs] ; [0x804a080:4]=0 │ 0x080487d8 83ec08 esp -= 8 │ 0x080487db 50 push eax │ 0x080487dc 684c890408 push str.your_flag_is:_picoCTF__s ; 0x804894c ; "your flag is: picoCTF{%s}\n" ; const char *format │ 0x080487e1 e82afcffff sym.imp.printf () ; int printf(const char *format) │ 0x080487e6 83c410 esp += 0x10 │ 0x080487e9 90 ; ./print_flag.c:93 │ 0x080487ea c9 leave └ 0x080487eb c3 return As you can see, the flag pointer is located at eax*4 + obj.strs or 0x804a080+0x539*4 in memory: ❯ python >>> hex(0x804a080+0x539*4) '0x804b564' Now, we can use gdb and the core file to restore the application state and extract the flag from that address: $ gdb ./print_flag ./core ... gef➤ x 0x804b564 0x804b564 <strs+5348>: 0x080610f0 gef➤ x 0x080610f0 0x80610f0: "e52f4714963eb207ae54fd424ce3c7d4" flag: picoCTF{e52f4714963eb207ae54fd424ce3c7d4} Malware Shops Problem There has been some malware detected, can you help with the analysis? More info here. Connect with nc 2018shell2.picoctf.com 46168. 
Solution Just read the graph and do this problem by hand. ❯ nc 2018shell2.picoctf.com 46168 You'll need to consult the file `clusters.png` to answer the following questions. How many attackers created the malware in this dataset? 5 Correct! In the following sample of files from the larger dataset, which file was made by the same attacker who made the file 3ce8eb6f? Indicate your answer by entering that file's hash. hash jmp_count add_count 0 3ce8eb6f 33.0 28.0 1 55489271 40.0 2.0 2 33d91680 39.0 29.0 3 ebaf5ccd 9.0 17.0 4 e9c0ac07 17.0 61.0 5 628e79cf 9.0 18.0 6 b3ae7861 41.0 10.0 7 cc251d4b 16.0 41.0 8 0c91a83b 17.0 65.0 9 97a0fc46 10.0 38.0 33d91680 Correct! Great job. You've earned the flag: picoCTF{w4y_0ut_dea1794b} flag: picoCTF{w4y_0ut_dea1794b} LoadSomeBits Problem Can you find the flag encoded inside this image? You can also find the file in /problems/loadsomebits_2_c5bba4da53a839fcdda89e5203ac44d0 on the shell server. Solution Ryan Jung on our team solved this challenge. It is about looking at the least significant bit of each pixel value. flag: picoCTF{st0r3d_iN_th3_l345t_s1gn1f1c4nT_b1t5_2705826400} Feel free to leave a comment if any of the challenges is not well explained.
https://tcode2k16.github.io/blog/posts/picoctf-2018-writeup/forensics/
Set::Toolkit - searchable, orderable, flexible sets of (almost) anything.

Version 0.11

The Set Toolkit intends to provide a broad, robust interface to sets of data. Largely inspired by Set::Object, a default set from the Set Toolkit should behave similarly enough to those created by Set::Object that interchanging between the two is fairly easy and intuitive. In addition to the set functionality already available around the CPAN, the Set Toolkit provides the ability to perform fairly complex, chained searches against the set, ordered and unordered considerations, as well as the ability to enforce or relax a uniqueness constraint (enforced by default).

use Set::Toolkit;

$set = Set::Toolkit->new();
$set->insert(
  'a', 4,
  {a=>'abc', b=>123},
  {a=>'abc', b=>456, c=>'foo'},
  {a=>'abc', b=>456, c=>'bar'},
  '',
  {a=>'ghi', b=>789, c=>'bar'},
  { x => {
      y => "hello",
      z => "world",
    },
  },
);

die "we didn't add enough items!" if ($set->size < 4);

### Find single elements.
$el1 = $set->find(a => 'ghi');
$el2 = $set->find(x => { y=>'hello' });

### Print "Hello, world!"
print "Hello, ", $el2->{x}->{z}, "!\n";

### Search for result sets.
### $resultset will contain:
###   {a=>'abc', b=>456, c=>'foo'},
###   {a=>'abc', b=>456, c=>'bar'},
$resultset = $set->search(a => 'abc')
                 ->search(b => 456);

### $bar will be: {a=>'ghi', b=>789, c=>'bar'},
$bar = $set->search(a => 'abc')
           ->search(b => 456)
           ->find(c => 'bar');

### Get the elements in the order they were inserted. These are equivalent:
@ordered = $set->ordered_elements;
$set->is_ordered(1);
@ordered = $set->elements;

### Get the elements in hash-random order. These two are equivalent:
@unordered = $set->unordered_elements;
$set->is_ordered(0);
@unordered = $set->elements;

This module implements a set object that can contain members of (almost) any type, and provides a number of attached helpers to allow set and element manipulation at a variety of levels.
By "almost", I mean that it won't let you store undef as a value, but not for a good reason: that's just how Set::Object did it, and I haven't had a chance to think about the pros and cons yet. Probably in the future it'll be a settable flag.

The set toolkit is largely inspired by the work done in Set::Object, but with some notable differences: this package ...

In general, take a look at Set::Object first to see if it will suit your needs. If not, give Set::Toolkit a spin. By default, this package's sets are intended to be functionally identical to those created by Set::Object (or close to it). That is, without specifying differently, sets created from the Set::Toolkit will be an unordered collection of things without duplication.

None at this time.

Creates a new set toolkit object. Right now it doesn't take parameters, because I have not codified how it should work.

Insert new elements into the set.

### Create a set object.
$set = Set::Toolkit->new();

### Insert two scalars, an array ref, and a hash ref.
$set->insert('a', 'b', [2,4], {some=>'object'});

Duplicate entries will be silently ignored when the set's is_unique constraint is set. (This behavior is likely to change in the future. What will probably happen later is the element will be added and masked. That will probably be a setting =)

Removes elements from the set.

### Create a set object.
$set = Set::Toolkit->new();

### Insert two scalars, an array ref, and a hash ref; the set size will
### be 4.
$set->insert('a', 'b', [2,4], {some=>'object'});

### Remove the scalar 'b' from the set. The set size will be 3.
$set->remove('b');

Note that removing things removes all instances of it (this only really matters in non-unique sets). Removing references might catch you off guard: though you can insert object literals, you can't remove them. That's because each time you create a new literal, you get a new reference. Consider:

### Create a set object.
$set = Set::Toolkit->new();

### Insert two literal hashrefs.
$set->insert({a => 1}, {a => 2}); ### Remove a literal hashref. This will have no effect, because the two ### objects (inserted and removed) are *different references*. $set->remove({a => 1}); However, the following should work instead ### Create a set object. $set = Set::Toolkit->new(); ### Create our two hashes. ($hash_a, $hash_b) = ({a=>1}, {a=>2}); ### Insert the two references. $set->insert($hash_a, $hash_b); ### Remove a hash reference. This will work; it's the same reference as ### what was inserted. $set->remove($hash_a); Obviously the same applies for all references. Returns a list of the elements in the set. The content of the list is sensitive to the set context, defined by is_ordered, is_unique, and possibly other settings later. Returns a list of the elements in insertion order, regardless of whether the set thinks its ordered or unordered. This can be thought of as a temporary coercion of the set to ordered for the duration of the fetch, only. Returns a list of the elements in a random order, regardless of whether the set thinks its ordered or unordered. This can be thought of as a temporary coercion of the set to unordered for the duration of the fetch, only. The random order of the set relies on perl's treatment of hash keys and values. We're using a hash under the hood. This method will simply tell you if your set is empty. Returns 0 or 1. The twin methods first and last do not take any arguments, they simply report the first or last element of the set. Be aware that these methods imply order! Consider: my $set = Set::Toolkit->new(); $set->insert(qw(a b c d e f)); $set->is_ordered(0); ### prints something like "c a d e b f" print join(' ', @$set); ### prints "a .. f" print $set->first, ' .. ', $set->last; The first element in an unordered set would be an ephemeral, ever-changing value and, therefore, useless (I think =) So first and last are always performed with the temporary constraint that $set->is_ordered(1). 
Searching allows you to find subsets of your current set that match certain criteria. Some effort has been made to make the syntax as simple as possible, though some complexity is present in order to provide some power. Searches take one argument, a constraint, that can be specified in two primary ways: Specifying a constraint as a scalar value makes a very simple check against any scalar values contained in your set (and only such values). Thus, if you search for "b", you will get a subset of the parent set that contains one string "b" for each such occurrence in the super set. Consider the following:

### Create a new set.
$set = Set::Toolkit->new();

### Insert some values.
$set->insert(qw(a b c d e));

### Do a search, and then a find.
### $resultset is now a set object with one entry: 'b'
$resultset = $set->search('b');

### $resultset is now an empty set object (because we didn't insert any
### strings "x").
$resultset = $set->search('x');

For scalars, it probably won't generally be useful to use search. You'll probably want to use find() instead, which simply returns the value sought, rather than a set of matches:

### Using the set above, $match now contains 'b'.
my $match = $set->find('b');

However, there is a case in which you might want to use scalar searches: in sets that are not enforcing uniqueness.

### Turn off the uniqueness constraint.
$set->is_unique(0);

### Add some more letters.
$set->insert(qw(a c e g i j));

### Now do some searches:
### $resultset will contain <'c','c'>
$resultset = $set->search('c');

This may be useful for counting occurrences, such as:

print "There are ", $set->search('a')->size, " occurrences of 'a'.\n";

On the other hand, searching by property values will probably be useful more often. Consider the following set:

### Create our set.
    $works = Set::Toolkit->new();

    ### Insert some complex values:
    $works->insert(
      { name  => {first=>'Franz', last=>'Kafka'},
        title => 'Metamorphosis',
        date  => '1915'},
      { name  => {first=>'Ovid', last=>'unknown'},
        title => 'Metamorphosis',
        date  => 'AD 8'},
      { name  => {first=>'Homer', last=>undef},
        title => 'The Iliad',
        date  => 'unknown'},
      { name  => {first=>'Homer', last=>undef},
        title => 'The Odyssey',
        date  => 'unknown'},
      { name  => {first=>'Ted', last=>'Chiang'},
        title => 'Understand',
        date  => '1991'},
      { name  => {first=>'John', last=>'Calvin'},
        title => 'Institutes of the Christian Religion',
        date  => '1541'},
    );

We can perform an arbitrarily complex subsearch of these fields, as follows:

    ### $homeric_works is now a set object containing the same hash references
    ### as the superset, "works", but only those that matched the first name
    ### "Homer" and the last name *undef*.
    my $homeric_works = $works->search({
      name => {
        first => 'Homer',
        last  => undef,
      },
    });

    ### We can get a specific work, "The Odyssey," for example, by a second
    ### "search" (or "find"):

    ### $odyssey_works is now a set of one.
    my $odyssey_works = $homeric_works->search(title=>'The Odyssey');

    ### We can get the instance (instead of a set) with a "find":
    my $odyssey_work = $homeric_works->find(title=>'The Odyssey');

    ### Which we could have gotten more easily by issuing a "find" on the
    ### original set:
    my $odyssey_work = $works->find(title=>'The Odyssey');

Searches can also be chained, if that's desirable for any reason, and find can be included in the chain, as long as it is the last link. Note that this is not a speed-optimized scan at this point (but it shouldn't be brutally slow in most cases).

    ### Get a resultset of one.
    my $resultset = $works->search(name=>{first=>'Homer'})
                          ->search(title=>'The Iliad');

And you can search against multiple values:

    ### Search against title and date to get Ovid's "Metamorphosis" (yeah, I
    ### realize his was plural, but give me a break here =)

    ### Get the set.
    my $resultset = $works->search(
      title => 'Metamorphosis',
      date  => 'AD 8'
    );

    ### Get the item.
    my $result = $works->find(
      title => 'Metamorphosis',
      date  => 'AD 8'
    );

Returns the size of the set. This is context sensitive:

    $set = Set::Toolkit->new();
    $set->is_unique(0);
    $set->insert(qw(d e a d b e e f));

    ### Prints:
    ###   The set size is 8!
    ###   The set size is 5!
    print 'The set size is ', $set->size, '!';
    $set->is_unique(1);
    print 'The set size is ', $set->size, '!';

Returns a boolean value depending on whether the set is currently considering itself as ordered or unordered. Also a setter to change the set's context.

Returns a boolean value depending on whether the set is currently considering itself as unique or duplicable (with respect to its elements). Also a setter to change the set's context.

Sets can be taken in a boolean context (v0.10). This happens implicitly when the set is used where a boolean is expected. Empty sets are considered false, while sets with elements are considered true. Thus, in boolean contexts, the set answers the question, "Does this set have members?"

    my $set = Set::Toolkit->new();

    if ($set) { print "The set has members!"; }
    else      { print "The set is empty!";    }

Under the hood, this just returns

    return ($self->size) ? 1 : 0;

Sets can be manipulated in an array context as well. An array context enforces set order, since an array without order is just ... well, a set =)  That means that for all array considerations, the set is treated as though is_ordered(1) were in effect; normal context returns when you go back to treating it as a set toolkit.

The examples below use sets with simple alphanumeric scalars. You can, of course, feel free to use objects or refs of any kind. Let's look at some code.

Create our set

    my $set = Set::Toolkit->new();
    $set->insert(qw(a b c d e f));

scan our set as an array

    ### Prints: a, b, c, d, e, f
    print join(', ', @$set);

shift and unshift the set

    ### $first is now 'a'.  This is the same as $set->first, except that
    ### shifting is destructive.
    my $first = shift @$set;

    ### $first will now be 'x'
    unshift @$set, 'x';
    $first = $set->first;

push and pop the set

    ### $last is now 'f'.  This is the same as $set->last, except that
    ### popping is destructive.
    my $last = pop @$set;

    ### $last will now be 'z'
    push @$set, 'z';
    $last = $set->last;

get and set elements directly

    my $before = $set->[3];   ### $set->[3] is 'd'.
    $set->[3] = 8;            ### Set it to '8'.
    my $after  = $set->[3];   ### Now it's '8'.

getting the size of the set

(Note that setting the size is not yet supported. You'll get a warning if you try to do it.)

    ### These are equivalent.
    my $size   = $set->size;
    my $scalar = scalar(@$set);

splicing a set

    ### Remove the letter 'c' (position 2)
    splice(@$set, 2, 1);

    ### Replace the letter 'e' (now position 3) with 'm', 'n', 'o'
    splice(@$set, 3, 1, qw(m n o));

In string context, the set is printed in a manner reminiscent of how refs are printed. For example, a hash $hash = {a=>1} may print as HASH(0x9301880). Similarly, a toolkit will print Set::Toolkit(...), where the ellipsis stands for a space-delimited list of the set's contents. For example,

    my $set = Set::Toolkit->new();
    $set->insert(qw(a b c));

    ### Prints, for example: "Set::Toolkit(a c b)"
    print "$set";

The above example uses an unordered set, so the print order is unordered. References will be treated by Perl's native ref stringification:

    my $set = Set::Toolkit->new();
    $set->insert('a', {b=>2}, 4);

    ### Prints something like: "Set::Toolkit(HASH(0x9301880) 4 a)"
    print "$set";

You might want to use this module if the following are generally true:

This module probably isn't right for you if you:

If these are true, I would take a look at Set::Object instead.

Set::Toolkit sets contain "things" or "members" or "elements". I've avoided saying "objects" because you can really store anything in these sets, from scalars, to objects, to references.

Set::Toolkit does not currently support "weak" sets as defined by Set::Object.
Because uniqueness is not enforced by keying into a hash, scalars are not flattened into strings and will not lose their magicks.

This is the first module I've released. I'm open to constructive critiques, bug reports, patches, doc patches, requests for documentation clarification, and so forth. Be gentle =)

Sir Robert Burbridge, <sirrobert at gmail.com>

Please report any bugs or feature requests to bug-set

Set::Toolkit

Thanks to Jean-Louis Leroy and Sam Vilain, the developers/maintainers of Set::Object, for lots of concepts, etc. I'm not actually using any borrowed code under the hood, but I plan to in the future.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~sirrobert/Set-Toolkit-0.11/lib/Set/Toolkit.pm
<h1>Danbooru Client 0.6.1 released</h1>
<p><em>2017-03-01</em></p>
<h2 id="highlights-of-the-new-version">Highlights of the new version</h2>
<ul>
<li>Unbreak konachan.com support (change in URL format)</li>
</ul>
<h2 id="coming-up-next">Coming up next</h2>
<p>…</p>

<h1>Killing the redundancy with automation</h1>
<p><em>2016-12-18</em></p>
<p>In the past three weeks, the openSUSE community KDE team has been pretty busy packaging the <a href="">latest release of Applications from KDE</a>, 16.12. It was a pretty large task, due to the number of programs involved, and the fact that several monolithic projects were split (in particular KDE PIM). This post goes through what we did, and how we improved our packaging workflow.</p> <h2 id="some-prerequisites">Some prerequisites</h2> <p>In openSUSE speak, packages are developed in “projects”, which are separate repositories maintained on the OBS. Projects whose packages will end up in the distro, that is, where they are being <em>developed</em> to land in the distribution, are called <strong>devel projects</strong>.
The KDE team uses a number of these to package and test:</p> <ul> <li>KDE:Qt for currently-released Qt versions</li> <li>KDE:Frameworks5 for Frameworks and Plasma packages</li> <li>KDE:Applications for the KDE Applications packages</li> <li>KDE:Extra for additional software not part of the above categories</li> </ul> <p>The last three also have an <em>Unstable</em> equivalent (KDE:Unstable:XXX) where packages built straight off git master are made, and used in the <a href="">Argon and Krypton live images</a>.</p> <h2 id="a-new-development-approach">A new development approach</h2> <p>With the release of Leap 42.2, we also needed a way to keep <em>only</em> Long Term Support packages in a place where we could test and adjust fixes for Leap (which, having a frozen base, will not easily accept version upgrades), so we created an additional repository with the <code>LTS</code> suffix to track Plasma 5.8 versions (and KF 5.26, which is what Leap ships).</p> <p>As you can see, the number of repositories was starting to get large, and we’re still a very small team, with everyone contributing their spare time to this task. Therefore, a new approach was proposed, prototyped by Hrvoje “shumski” Senjan and spearheaded by Fabian Vogt during the Leap development cycle.</p> <p>The idea was to use only <em>one</em> repository as an authoritative source of spec files (for those not in the know, <strong>spec files</strong> are files needed to build RPM packages, and describe the structure of the package, its sources, whether it should be split into sub-packages, and so on), <em>only</em> do changes there, and then sync the changes back to the other repositories.</p> <p>In this case, KDE:Frameworks5 was used as the source. All changes were then synced to both the LTS repository and to the Unstable variant with some simple scripting and the use of the <code>osc</code> command, which allows interacting with the OBS from the CLI.
This significantly reduced divergences between packages and eased maintenance.</p> <h2 id="enter-applications-1612">Enter Applications 16.12</h2> <p>When we started packaging Applications, we faced a number of problems that involved the existence of kdelibs 4.x applications that were now obsoleted by KF5 based versions (see Okular, but not only that). Additionally, there was a major number of splits, meaning that we had to track and adjust packaging to keep in mind that what used to be there wasn’t around anymore.</p> <p>We already had a source that kept track of these changes: the <em>KDE:Unstable:Applications</em> repository, which followed git master. A <strong>major</strong> problem was that its development had gone in a different direction than the original KDE:Applications, meaning that there was a significant divergence between the two.</p> <p>Initially we set up a test project and tried to figure out how to lay out a migration path for the existing packages. It didn’t work too well by hand: too many changes, too many packages to keep track of. That is when Raymond Wooninck had the idea to automate the whole process and change the development workflow of Applications packaging.</p> <p>The new workflow worked as such:</p> <ol> <li>The authoritative source of changes is the <em>Unstable</em> repository, because that’s where the changes end up first, before release</li> <li>At beta release time, packages would be copied from Unstable to the stable repository, dropping any patches that had been upstreamed</li> <li>openSUSE specific patches (integration, build system, etc.) would stay in both repositories</li> <li>upstream patches (patches already committed but not part of a release) would only stay in the stable repository</li> </ol> <p>In order to ensure that this would be done automatically, Raymond <a href="">created a repository</a> and wrote scripts to do both the Unstable->stable transition and to automate packaging of new minor releases.
Once we switched to this workflow, adjustments were much easier, and we were able to finish the job at last: yesterday (as of writing) the new Applications release was checked in to Tumbleweed.</p> <p>This new workflow requires some discipline (to avoid “ad hoc” solutions) but dramatically reduces the maintenance required, and allows us to track changes in the packages “incrementally” as they happen during a release cycle. At the same time, this guarantees that all the openSUSE packaging policies are followed also in the Unstable project (which was more lax, due to the fact that it would never end up in the distro).</p> <h2 id="final-icing-on-the-cake">Final icing on the cake</h2> <p>The last bit was to ensure timely updates of the whole Unstable project hierarchy while avoiding having to watch git commits like hawks. I took up the challenge and <a href="">wrote a script</a> which, coupled with a <a href="">repository mapping file</a>, caches the latest “seen” (past 24h) git revision of the KDE repositories and triggers updates only if something changed (using <code>git ls-remote</code> to avoid hitting the KDE servers too hard).</p> <p>I put this on a cron job which runs every day at 20 UTC+1, meaning that even the updates are now fully automated. Of course I have to check every now and then for added dependencies and build failures, but the workload is definitely less than before.</p> <h2 id="final-wrap-up">Final wrap up</h2> <p>A handful of tools and some quick thinking can make a massive collection of software manageable by a small bunch of people. At the same time, there’s always need for more helping hands!
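</p>

<p>The change-detection idea described in “Final icing on the cake” can be sketched in a few lines of POSIX shell. This is a hypothetical reconstruction, not the actual script linked above: the cache location, the repository name, the URL scheme, and the <code>osc service remoterun</code> trigger shown in the comments are all illustrative assumptions.</p>

```shell
#!/bin/sh
# Hypothetical sketch of the update-check logic: remember the last-seen git
# revision of each KDE repository and report whether it changed since the
# previous run. Paths and names are illustrative only.
CACHE_DIR="${TMPDIR:-/tmp}/kde-obs-watch-demo"
rm -rf "$CACHE_DIR"          # start clean so the demo below is deterministic
mkdir -p "$CACHE_DIR"

repo_changed() {
    # $1 = repository name, $2 = current revision of its master branch.
    # In the real script $2 would come from something like:
    #   git ls-remote <repo URL> refs/heads/master | cut -f1
    repo=$1 new_rev=$2
    cache_file="$CACHE_DIR/$repo.rev"
    old_rev=$(cat "$cache_file" 2>/dev/null)
    printf '%s\n' "$new_rev" > "$cache_file"
    [ "$new_rev" != "$old_rev" ]   # exit status 0 means "trigger an update"
}

# First sighting of a revision: this is where an OBS service run would be
# triggered, e.g. (hypothetically): osc service remoterun <project> "$repo"
repo_changed kcoreaddons 1111aaaa && echo "kcoreaddons needs an update"

# Same revision again: nothing to do, and the KDE servers are left alone.
repo_changed kcoreaddons 1111aaaa || echo "kcoreaddons is up to date"
```

<p>The real workflow additionally consults a mapping file from git repositories to OBS packages, and batches all the checks into a single daily cron run.</p>

<p>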
Should you want to help in openSUSE packaging, drop by in the #opensuse-kde IRC channel on Freenode.</p>

<h1>Testing the untestable</h1>
<p><em>2016-11-27</em></p>
<p>KDE software already benefits from continuous integration on <a href="">build.kde.org</a> and a growing number of unit tests.</p> <h2 id="is-it-enough">Is it enough?</h2> <p>This, however, does not cover <em>functional testing</em>.</p> <p>Why does this matter to KDE? Nowadays, the dream of <a href="">always summer in trunk</a> as proposed 8 years ago is getting closer, and there are <a href="">several ways</a> to run KDE software directly from git. However, except for the above strategy, there is no additional testing done.</p> <p>Or, should I rather say, there <em>wasn’t</em>.</p> <h2 id="our-savior-openqa">Our savior, openQA</h2> <p>Those who use openSUSE Tumbleweed know that even if it is technically a “rolling release” distribution, it is extensively tested. That is made possible by <a href="">openQA</a>, which runs a full series of automated functional tests, from installation to actual use of the desktops shipped by the distribution. The recently released openSUSE Leap has also benefited from this testing during the development phase.</p> <p>“But, Luca,” you would say, “we already know about all this stuff.”</p> <p>Indeed, this is not news. But the big news is that, thanks mainly to the efforts of Fabian Vogt and Oliver Kurz, <em>openQA is now testing KDE software from git as well</em>! This works by feeding the Argon (Leap based) and Krypton (Tumbleweed based) live media, which are roughly built daily, to openQA, and running a series of specific tests.</p> <p>You can see here <a href="">an example for Argon</a> and <a href="">an example for Krypton</a> (note: some links may become dead as tests are cleaned up, and will be adjusted accordingly). openQA tests both the distro-level stuff (the <code>console</code> test) and KDE specific operations (the <code>X11</code> test).
In the latter case, it tests the ability to launch a terminal, running a number of programs (Kate, Kontact, and a few others), and does some very basic tests with Plasma as well.</p> <h2 id="is-this-enough-for-everything">Is this enough for everything?</h2> <p>Of course not. Automated testing only gets so far, so this is not an excuse for being lazy and <a href="">not filing those reports</a>.</p> <p>What needs to be done? More tests, of course. In particular, Plasma regression tests (handling applets, etc.) would likely be needed. But as they say, <em>every journey starts with the first step</em>.</p>

<h1>The heroes we deserve</h1>
<p><em>2016-10-29</em></p>
<p>You may be aware that openSUSE Leap 42.2 is now in <a href="">the release candidate stage</a>.</p> <h2 id="the-report">The report</h2> <p>October 6th, 2016 - A <a href="">bug is reported against Plasma</a> describing a hard freeze of Plasma when using the Nouveau driver, but not with the closed NVIDIA blob. Although the effect is deleterious to Plasma and not other desktop environments, there is evidence that the issue is in the driver itself, but there are only partial indications, and no conclusive proof.</p> <p>The problem is that none of the current KDE team members has access to an NVIDIA card, so it’s hard to determine what is actually going on. After thinking it over for a while, I decided it was time to call in the <em>pros</em>. And in KDE, Martin Graesslin of KWin fame is the best bet when graphics and KWin interactions are involved. He suggested getting a backtrace of the freezes and crashes to establish what exactly is happening. At the same time, Antonio Larrosa from the KDE team tried to get hold of a test system to investigate the cause.</p> <p>Antonio eventually managed to reproduce the problem with a specific NVIDIA card and Nouveau, and <a href="">his initial results</a> pointed at issues in interactions between the Nouveau driver and KWin itself.
Martin, being a nice person, also subscribed to the report, and once the backtraces came in he was able to find a solution to the riddle: when using OpenGL, <a href="">Mesa waited for a buffer</a> and in turn blocked KWin. The net result was an apparent freeze of the workspace when logging in.</p> <p><a href="">Patches had been proposed</a> to fix <a href="">the issue</a>, but according to upstream Nouveau developers, they just made matters worse (instability).</p> <p>As an aside, Nouveau, despite the heroic efforts of its developers, still has several issues with apparently “normal” workflows: for example, any application using QWebEngine will crash on Nouveau because, while the driver does not work well with multi-threaded rendering, the Blink engine uses different threads even when Qt is using single-threaded rendering.</p> <p>Once the problem was found, the 5 eurocent question was: <a href="">what can we do to fix the situation?</a></p> <h2 id="the-hunt-for-a-solution">The hunt for a solution</h2> <p>One major problem with this issue was that not all NVIDIA cards were affected. Only specific models exhibited the problem, which meant that blanket-disabling OpenGL for KWin when using Nouveau was too restrictive. But at the same time, the only environment affected was Plasma. The situation was extremely dire for the default desktop in openSUSE.</p> <p>But two of today’s heroes did not give up. Martin and Antonio sat around a virtual table and tried to work out a solution. Martin suggested using the same mechanism that KWin normally uses to determine whether the use of OpenGL is “unsafe” when starting up, disabling it if any problems arise. It didn’t work in this specific case only because the freeze occurred when rendering started, that is, past this checkpoint.</p> <p>The discussion was fruitful.
Among the several hypotheses mentioned, Fabian Vogt, also from the openSUSE KDE team, thought of a “dead man’s switch”: KWin would get killed and restarted if a freeze occurred, but with OpenGL disabled after the restart. That was enough for Antonio and Martin to come up with a strategy: checking with a timer whether KWin was frozen during rendering. If the timer went off, KWin would get killed and restarted automatically, but with OpenGL disabled (more technically, with the “OpenGL unsafe protection” activated), and would now be able to continue without freezes. Antonio <a href="">posted his patch for review</a>, and that is where we meet another hero of the day, David “d_ed” Edmundson. During the patch review, he asked Antonio what kind of card exhibited the issue, and promptly acted to get one to run tests himself.</p> <p>Patches went back and forth for a number of days, scrapping one solution after the other, until Martin was finally able to accept the final revision, which was merged by Antonio in the Plasma/5.8 branch of kwin (meaning, everyone will benefit from it). Fabian then proceeded <a href="">to submit these patches to openSUSE Leap</a> and to <a href="">openSUSE Tumbleweed</a>.</p> <p>As the final icing on the cake, Antonio was able to come up with <a href="">a patch to QWebEngine</a> to disable the GPU if Nouveau is detected, preventing crashes at the price of reduced performance (and adding two environment variables to force or disable the behavior, respectively).</p> <h2 id="the-bottom-line">The bottom line</h2> <p>So if you ever meet Antonio, David, Fabian, and Martin, please offer them a beverage of their choice.
They’re the heroes Free Software deserves.</p> <h2 id="bottom-note">Bottom note</h2> <p>Other noteworthy people need to be mentioned here due to their involvement:</p> <ul> <li>Dominique Leuenberger and Ludwig Nussel, namely the Tumbleweed and Leap release managers, for keeping up their jobs (that is, ensuring that awesome software is released timely and properly);</li> <li>The SUSE X11 developers, for their assistance on the Mesa side of things;</li> <li>The openSUSE community, for bug reporting and testing, or this bug would’ve never been discovered.</li> </ul>

<h1>Two in one</h1>
<p><em>2016-06-25</em></p>
<p>As you may know (unless you’ve been living in Alpha Centauri for the past century), the openSUSE community KDE team publishes <a href="">LiveCD images</a> for those willing to test the latest state of KDE software from the git master branches without having to break machines, cause a zombie apocalypse, and so on. This post highlights the most recent developments in the area.</p> <p>Up to now, we had 3 different media, depending on the base distribution (stable Leap, ever-rolling Tumbleweed) and whether you wanted to go with the safe road (X11 session) or the dangerous path (Wayland):</p> <ul> <li>Argon (Leap based)</li> <li>Krypton (Tumbleweed based, X11)</li> <li>Krypton Wayland (Tumbleweed based, Wayland)</li> </ul> <p>So far we’ve been trying to build new images in sync with the updates to the <a href="">Unstable KDE software repositories</a>. With the recent <a href="">switch to being Qt 5.7 based</a>, they broke. That’s when Fabian Vogt stepped up and fixed a number of outstanding issues with the images as well.</p> <p>But that wasn’t enough. It was clear that perhaps a separate image for Wayland wasn’t required (after all, you could always start a session from SDDM).
So, perhaps it was the time to merge the two…</p> <p>Therefore, from today, the Krypton image will contain <strong>both</strong> the X11 and the Wayland sessions; see <a href="">this bug</a>.</p> <p>Download links:</p> <ul> <li><a href="*.iso">Argon (x86_64 only)</a></li> <li> <s>Krypton (i686)</s> <p>(Currently not available)</p> </li> <li><a href="*.iso">Krypton (x86_64)</a></li> </ul> <p>Should you want to use these live images, remember <a href="">where to report distro bugs</a> and where to <a href="">report issues in the software</a>. Have a lot of fun!</p>

<h1>I have a problem...</h1>
<p><em>2016-05-29</em></p>
<p>Every day, a sizable number of people post problems on the <a href="">KDE Community Forums</a> and the ever-helpful staff does their best to solve whatever issues they’re facing. But what <em>exactly</em> does one do when this happens? This post provides more insights into the process.</p> <p><strong>NOTE</strong>: The following applies to my workflow for the <a href="">Kontact & PIM</a> subforum.</p> <h2 id="step-1-someone-posts-a-problem">Step 1: Someone posts a problem</h2> <p>The questions posted are various, and range from simple tasks (“how do I do XXX”) to very specific workflows.
It covers a large spectrum.<br /> The first thing I do when reading a post is to go through a “mental checklist”:</p> <ol> <li>Is this known already?</li> <li>Is there enough information?</li> <li>What distro is this user on?</li> </ol> <p>Answering point 1 means I have to keep up with the development of KDE software or, if I don’t know, check the mailing lists and blog posts to see if other people have raised the issue (checking Bugzilla is a last resort, due to the very large number of bugs posted there). It also helps running the latest KDE software.</p> <p>If point 2 isn’t satisfied, I ask a few more questions following the <a href="">General Troubleshooting</a> guidelines. These include conditions for reproduction of the issue, whether it still occurs with a new user account, and so on.</p> <p>Point 3 is related to point 2: not all distros are equal, so knowing which distro the user is on may reveal distribution-specific issues that need to be addressed directly downstream.</p> <h2 id="step-2-going-deeper">Step 2: Going deeper</h2> <p>If the issue isn’t solved even like this, “we need to go deeper”. Usually, time permitting, I try to reproduce the issue myself if it is within my reach (for example, if it doesn’t involve company secrets on an internal server ;).</p> <p>If I can reproduce it, I tell the user to file a bug, or suggest workarounds, if I found any. If I can’t, I ask for a few more details. Usually this can lead to the issue being solved, or to a bug report being filed.</p> <h2 id="step-3-communicating">Step 3: Communicating</h2> <p>Sometimes the issue is unclear, or it is behavior where the line between feature and bug is very blurred. In this case, I need to get information straight from the horse’s mouth. I hop on IRC, and I address the developers directly, usually pointing at the forum thread and asking for details.</p> <p>Sometimes they follow up directly, sometimes they report useful information to me, and sometimes they tell me it’s a feature or a bug.
In either case, I report the information to the thread starter. In rare cases, the issue is simple enough that it gets fixed shortly afterwards.</p> <h2 id="stem-4-following-up">Step 4: Following up</h2> <p>Unfortunately not all bugs can be addressed straight away, so sometimes issues keep lingering for a long period of time. However, sometimes a commit or two may fix them, with or without a bug being filed. If I notice this (I do read kde-commits from time to time ;) I follow up on the thread, writing about the issue being fixed, worked around, or whatever.</p> <h2 id="whats-the-point-of-this-post-anyway">What’s the point of this post, anyway?</h2> <p>Good question. ;) I mean this to show how much work can go into a “simple” user support request post on the KDE Community Forums. This is even more important to point out since, apparently, frustration can make people say that others’ work is worthless.</p> <p>So, if you ever bump into any of the forum staff, be sure to offer them a beverage of their choice. ;)</p>

<h1>Danbooru Client 0.6.0 released</h1>
<p><em>2016-05-01</em></p>
<h2 id="highlights-of-the-new-version">Highlights of the new version</h2>
<ul>
<li>Support for width / height based filtering: now you can exclude posts that are below a specific width or height (or both)</li>
<li>New dependency: KTextWidgets</li>
</ul>
<h2 id="coming-up-next">Coming up next</h2>
<p>Sooner or later I’ll get to finish the multiple API support, but given that there’s close to no interest in these programs (people are happy to use a browser) and that I work on this <em>very</em> irregularly (every 6-7 months at best), there’s no ETA at all. It might be done this year, perhaps the next.</p>

<h1>Of gases, Qt, and Wayland</h1>
<p><em>2016-02-27</em></p>
<p>Ever since the <a href="">launch of Argon and Krypton</a>, the openSUSE community KDE team didn’t really stand still: a number of changes (and potentially nice additions) have been brewing this week.
This post recapitulates the most important one.</p> <h1 id="id-like-the-most-recent-qt-please">I’d like the most recent Qt, please</h1> <p>As <a href="">pre-announced by a G+ post</a>, the openSUSE repositories bringing KDE software directly from KDE git (KDE:Unstable:Frameworks and KDE:Unstable:Applications) have switched their Qt libraries from Qt 5.5 to the <a href="">recently released Qt 5.6</a>. This move was made possible by the heroic work of Christophe “krop” Giboudeaux, who was able to beat QWebEngine into submission and make it build in the OBS.</p> <p>Keep in mind that this is unstable software, so expect <em>plenty of bugs</em>. Also, the interaction of KDE software with Qt 5.6 is not completely tested: be sure to try it, and <a href="">report those bugs!</a></p> <p>If you have the previous unstable Qt repository enabled, you <em>must</em> remove it and replace it with KDE:Qt56.</p> <p>You can add the repository easily like this:</p> <div class="highlight"><pre><code class="language-bash" data-lang="bash">zypper ar -f obs://KDE:Qt56 KDE_Qt56 <span class="c"># Tumbleweed</span>
zypper ar -f obs://KDE:Qt56/openSUSE_Leap_42.1 KDE_Qt56 <span class="c"># for Leap</span></code></pre></div> <p>Then force an update to this repository:</p> <div class="highlight"><pre><code class="language-bash" data-lang="bash">zypper ref
zypper dup --from KDE_Qt56</code></pre></div> <p>Then update from KDE:Unstable:Frameworks first, then KDE:Unstable:Applications.</p> <h1 id="a-bit-of-wayland-in-my-gases-too">A bit of Wayland in my gases, too!</h1> <p>Of course, this change has also trickled down to the Argon and Krypton media, which have been updated accordingly. But that’s not all. The KDE team is now proud to offer a <em>Wayland-based</em> Krypton image, which accompanies the standard one. Thanks to KIWI, making this was faster than shouting <a href=""><em>Ohakonyapachininko</em>!</a> Well, perhaps not, but still <em>quite</em> easy.</p> <p>If you want to try out the Wayland-based image, be aware that it is nowhere near <em>alpha level</em>.
There is a lot going on and development is heavy, so it may be broken in many interesting ways before you even notice it. You have been warned!</p> <p>Where do you find all this goodness? <a href="">The KDE:Medias directory on download.opensuse.org</a> has all the interesting bits. The three kinds of images residing there are:</p> <ul> <li>openSUSE_Argon (x86_64 only): Leap based, X11 based KDE git live image;</li> <li>openSUSE_Krypton (i586 and x86_64): Tumbleweed based, X11 based KDE git live image;</li> <li>openSUSE_Krypton_Wayland (i586 and x86_64): Tumbleweed based, Wayland based KDE git live image.</li> </ul> <h1 id="lets-not-forget-about-bugs">Let’s not forget about bugs!</h1> <p>Borrowing from my previous entry:</p> <p>As always, “have a lot of fun!”</p>

<h1>Where are my noble gases? I need MORE noble gases!</h1>
<p><em>2016-02-19</em></p>
<p>As KDE software (be it the Frameworks libraries, the Plasma 5 workspace, or the Applications) develops during a normal release cycle, a lot of things happen. New and exciting features emerge, bugs get fixed, and the software becomes better and more useful than it was before. Thanks to <a href="">code review</a> and <a href="">continuous integration</a>, the code quality of KDE software has also tremendously improved. Given how things are improving, it is tempting to follow development <em>as it happens</em>.
Sounds exciting?</p> <p>Except that there are some roadblocks that can be problematic:</p> <ul> <li>If you want to build the full stack from source, there are often many problems if you’re not properly prepared;</li> <li>Builds take time (yes, there are tricks to reduce that, but not all people know about them);</li> <li>If things break… well, you get to keep the pieces.</li> </ul> <p>But personal enjoyment aside, KDE would <strong>really</strong> benefit from more people tracking development. It would mean faster bug reporting, uncovering bugs in setups that the developers don’t have, and so on.</p> <h1 id="what-about-noble-gases">What about noble gases?</h1> <p>Recently, an announcement about <a href="">a gas used in fluorescent lamps</a> generated quite a buzz in the FOSS community. Indeed, such an effort would solve many of the problems highlighted above, because part of the issues would be on the backs of integrators and packagers, who are much better suited for this task.</p> <p>But what, am I telling a story you already know? Not quite.</p> <h1 id="a-little-flashback">A little flashback</h1> <p>For those who don’t know, openSUSE has a <a href="">certain number of additional repositories with KDE software</a>. Some of these have, for many years, been providing the current state of KDE software as it is in git for those who wanted to experiment. This hasn’t been done just for the sake of being on the bleeding edge: it has also been used by the openSUSE KDE team itself to identify and fix in advance issues related to packaging and dependencies, and occasionally to help test patches (or submit their own to KDE).</p> <p>So, in a way, there were already means to test KDE software during development. However, there was a major drawback to adoption, which involves the fact that these packages <em>replace</em> the current ones on the system.
For technical reasons, it is not possible to do co-installation (for example, in a separate prefix) in a way that is maintainable long term.</p> <h1 id="so-what-now">So, what now?</h1> <p>After hearing about the announcement, we (the openSUSE KDE team) realized that we already had the foundation to provide this software to our users. Of course, if you got too much neon, you’d asphyxiate ;), so we had to look at alternative solutions. And the solutions were, like the repositories, already there, provided by openSUSE: the <a href="">Open Build Service</a> and the <a href="">KIWI image system</a>, which can create images from distribution packages.</p> <p>But wait, there’s more (TM)!</p> <p>openSUSE ships two main flavors: the ever-changing (but <a href="">battle-tested</a>) Tumbleweed, and the rock-solid Leap. A user might want to experience the latest state of many applications, or focus just on KDE software while running on a stable base. So, if we could create images using KIWI, why not create <strong>two</strong>, one for Leap and one for Tumbleweed? And you know what…</p> <p><img src="" alt="" /></p> <p>Lo and behold, in particular thanks to the heroic efforts of Raymond Wooninck, we had working images! We also like noble gases, so <a href="*.iso">Argon</a> and <a href="*.iso">Krypton</a> were born!</p> <h1 id="the-nitty-gritty-details">The nitty-gritty details</h1> <p>These images work in two ways:</p> <ul> <li>They work as <em>live images</em>, meaning you can test the latest KDE software <strong>without</strong> touching your existing system, and thus not worry about something breaking;</li> <li>You can also <em>install</em> them, and have a fully updated Leap or Tumbleweed system with the KDE:Unstable repositories active.
Use this if you know what you’re doing, and want to test and report issues.</li> </ul> <h1 id="bugs-bugs-everywhere">Bugs, bugs everywhere!</h1> <blockquote><p>And of course, like the openSUSE login text says, “have a lot of fun!”</p></blockquote> 2016-02-19T20:03:25+00:00/2016/01/kdepim-changes-in-opensuse-tumbleweedKDE PIM changes in openSUSE Tumbleweed<p>Short version: the KDE PIM in openSUSE Tumbleweed is moving from 4.14.x to the KF5 based version. More details below.</p> <h2 id="some-history">Some history</h2> <p>[…]</p> <p>Fast forward to today: <a href="">KDE Applications 15.12.1 have been released</a>, and we’ve been testing and using the KF5 based PIM suite for all these months with no issues. In addition, PIM developers have been adding interesting new features and impressive speed optimizations (<a href="">see the blog post by PIM developer Dan Vratil</a>), and at the same time dropped support for the 4.14 PIM suite.</p> <h2 id="what-does-it-mean-for-opensuse">What does it mean for openSUSE?</h2> <p>[…]</p> <p>You have <strike>two</strike> three options for upgrading. But bear in mind that Akonadi <strong>should not be running</strong> during the upgrade! To stop Akonadi, close all PIM applications, open a terminal and issue <code>akonadictl stop</code>. Alternatively, open the <code>akonadiconsole</code> program, and select <em>Stop Server</em> from the <em>Server</em> menu. Lastly, you can do the upgrade from a different user account.</p> <h3 id="fully-automatic">Fully automatic</h3> <p>Just start up the new KF5 PIM after the upgrade.</p> <h3 id="start-from-scratch">Start from scratch</h3> <p>Simply recreate all your accounts in the new interface and redownload all your mail.
This works best with IMAP setups and if you have enough bandwidth.</p> <h3 id="use-the-pim-settings-exporter">Use the PIM settings exporter</h3> <p>You can export all your mail from KMail: under the <em>Tools</em> menu, there is the option <em>Import/Export KMail Data</em>.</p> <h2 id="caveats">Caveats</h2> <p>The 4.14.x and KF5 versions of the PIM stack are <strong>incompatible</strong> with each other. Therefore, if you do not fully upgrade the system you will get strange issues. Ensure you update with <code>zypper dup</code> or with the equivalent option in YaST to ensure that all packages will be picked up.</p> <h2 id="what-do-we-get-with-the-new-version">What do we get with the new version?</h2> <p>There were many changes. The most user-visible ones, in my opinion, are:</p> <ul> <li>Much faster data exchange between the PIM stack components: this leads to faster startup times and faster email sync;</li> <li>Faster display of large threads: PIM developer Dan Vratil put quite a bit of work into finding slow code paths and massively improved thread and email display;</li> <li>Some smaller new features: Gravatar support for email display (disabled by default), for example;</li> <li>All the goodies that KDE Frameworks 5 brought.</li> </ul> <p>Plus many more that I may have forgotten.</p> <h2 id="bugs">Bugs</h2> <p>Bugs? What bugs? Software is <strong>perfect</strong>! ;)</p> <p>Jokes aside, if you find a bug in the PIM stack, please <a href="">report it to the upstream bug tracker</a>.
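<p>The caveats above boil down to a short command sequence. A minimal sketch, assuming a Tumbleweed system and the package names used in this post:</p>

```shell
# Stop Akonadi before upgrading (close all PIM applications first).
akonadictl stop

# Full distribution upgrade so the whole PIM stack moves together:
# mixing 4.14.x and KF5 PIM packages causes strange issues.
sudo zypper dup
```

<p>The YaST route mentioned above is equivalent; the important part is that the upgrade is a full <code>dup</code>, not a partial update.</p>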
If instead it’s a bug in the openSUSE packages (missing dependencies, etc.), report it to <a href="">openSUSE’s Bugzilla</a>.</p> <h2 id="thanks">Thanks</h2> <p>[…]</p> 2016-01-17T08:45:00+00:00/2015/12/danbooru-client-0-5-0-releasedDanbooru Client 0<ul> <li>The image window is shown again with recent Qt and KF5 versions;</li> <li>Remember the last directory saved when saving images;</li> <li>Remove (hopefully) a hang when saving images.</li> </ul> <h2 id="coming-up-next">Coming up next</h2> <p>I’ve been told there are <a href="">issues with HiDPI screens</a>, so I’ll try to fix them up next (no guarantees on ETA… also I don’t own any HiDPI screens). Adding support for copying links to images would also be nice.</p> 2015-12-07T23:20:05+00:00/2015/09/the-big-forum-cleaningThe Big Forum Cleaning<p>[…] for all kinds of people: those that require support, those that offer support, and those who want to contribute. All of this within a (hopefully) logical structure.</p> <p>After some discussion on the actual layout organization, today I and fellow administrator Hans (with help from other people from the KDE community such as PovAddict and Mamarok) put the plan into action. It went surprisingly quickly, so the new layout is already live.<br /> There are a few major changes you will notice:</p> <ul> <li>The KDE Community section has been revamped, removing some now-outdated sections;</li> <li>There is a new top-level category, <em>Contributions & Development</em>, for everything regarding contributions. It gathers new forums for KDE contributors (Contributors’ Corner) and for people starting development (KDE Frameworks & Development). The old “i18n” forums were also moved there to be in a more prominent position. Lastly, the VDG, KDE websites, and Missions forums were all moved under this umbrella.</li> <li>The KDE Software Forum has been largely untouched, with the exclusion of moving the Plasma Mobile forums to a more prominent position.
A few old forums, like KOffice’s, were archived (see below).</li> <li>Localized Forums were also cleaned up.</li> <li>Links to outdated websites have been removed.</li> </ul> <p>A number of forums with no activity for years, or now outdated, were moved away. However, we thought that we’d do a disservice by removing information that might still be useful, so we created an Archive section to host all these “legacy” forums. Like this, there won’t be information loss.</p> <p>Everything should be in order. If you find something amiss, let us know by posting <a href="">in the Forum Feedback</a> forum.</p> 2015-09-26T18:32:06+00:00/2015/08/kde-applications-1508-rc-for-opensuseKDE Applications 15.08 RC for openSUSE<p>KDE has <a href="">recently released</a> the KDE Applications 15.08 release candidate (notable ones being Dolphin and Ark). After some consideration and thinking on how to allow users to test this release without affecting their setups too much, the openSUSE community KDE team is happy to bring this latest RC to openSUSE Tumbleweed and openSUSE 13.2<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.</p> <p>To install this new release, add the <a href=""><strong>KDE:Applications</strong></a> repository either using YaST or zypper. One special mention is the PIM suite: as upstream KDE labels it a technology preview, we decided to allow installation only with an explicit choice by the user. To do so, one should install the <em>kmail5</em> package and the <em>akonadi-server</em> package (other components of the PIM suite are also there, with the <em>5</em> suffix): this operation will uninstall the 4.14 packages (but not remove any local data) and install the new version of mail and PIM.
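<p>Putting the repository and package steps above together, a sketch of the opt-in installation from the command line (the repository URL is an assumption based on the usual download.opensuse.org OBS layout — verify the distribution path for your release):</p>

```shell
# Add the KDE:Applications repository (URL follows the standard OBS
# pattern; adjust "openSUSE_Tumbleweed" to your release).
sudo zypper ar -f \
  http://download.opensuse.org/repositories/KDE:/Applications/openSUSE_Tumbleweed/ \
  KDE-Applications

# Explicitly opt in to the KF5 PIM technology preview; this pulls in
# the "5"-suffixed packages and removes their 4.14 counterparts.
sudo zypper in kmail5 akonadi-server
```

<p>The same can be done through YaST’s repository management, as mentioned above.</p>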
To go back, install <em>akonadi-runtime</em> and the respective packages without the <em>5</em> suffix (e.g., <em>kmail</em>, <em>korganizer</em>).</p> <p>It is essential for upstream KDE to have proper bug reports, in particular for PIM, so <a href="">please report</a> any issues you find. If instead you find a bug in the packaging, turn over to <a href="">openSUSE’s Bugzilla</a>.</p> <div class="footnotes"> <ol> <li id="fn:1"> <p>Not all packages are available on openSUSE 13.2 due to the version of KF5 and extra-cmake-modules that is shipped there. <a href="#fnref:1" class="reversefootnote">↩</a></p> </li> </ol> </div> 2015-08-08T15:57:15+00:00/2015/05/dynamically-staticDynamically static<p>Since <a href="">26th December 2005</a>, I’ve been running this blog with Wordpress. At the time there were few alternatives, and I had finally got hold of a host (Dreamhost, at the time) that supported PHP and MySQL without being overly restrictive. 10 years later, things have somehow changed.</p> <h2 id="the-issue">The issue</h2> <p>The main reason lies in how Wordpress has evolved over time: no, I’m not speaking about the subjective “bloat”, but the fact that it’s been moving towards a full-blown CMS, which is not what I have in mind to run my blog. Also, performance with many plugins had somehow worsened, in particular when accessing things like the administration interface. Not to mention that plugins themselves are still somewhat fragile, and an upgrade could still cause harm to your whole site. Lastly, the mere fact that I had to use plugins to <em>lessen the performance impact</em> was off-putting.</p> <p>It wasn’t just Wordpress, of course.
In the past years I’ve found myself unable to write long texts in a browser (and hoping that they won’t get lost in case I accidentally close a tab), and also <strong>the way I write posts</strong> changed. I much prefer a specialized editor (like <a href="">this one I’m using</a>) to compose the posts I write.</p> <h2 id="an-alternative-is-found">An alternative is found</h2> <p>So I went and looked around for alternatives. I’ve looked at <a href="">Ghost</a>, and while it was reasonably appealing, there weren’t many themes that made this page look like I wanted it to (and I wasn’t convinced about handing out money for a premium theme before I was sure it did what I wanted). So I turned to static engines, and for now at least I went for <a href="">Jekyll</a>, which is what made the page as you see it today.</p> <p>The learning curve wasn’t particularly steep, and with the help of some tools I was able to convert all the posts from dnenogumi.org with little effort. I also took the time to update some very outdated sections. What took most of the time in the migration was keeping links “WP-compatible”, that is, preserving the structure of the page (more or less) as it was before to prevent many 404s, in particular for feeds which are aggregated on <a href="">Planet KDE</a>. With the aid of (many) plugins and a few tweaks to the nginx configuration, I can say that most of the structure should be in place.</p> <p>As for the theme, I went for <a href="">Feeling Responsive</a> by Phlow, but not verbatim. I had to make changes (in short, a fork) because it was meant originally for portfolios and certain features I needed <a href="">were not present by design</a>.
What I did was to clone the repo and hack in whatever I needed.</p> <h2 id="deployment">Deployment</h2> <p>I use my own GitLab instance <a href="">to host the repository</a> (now private, I’ll make it public the moment everything is up and running), coupled with a micro <a href="">Flask application</a> that fires off the rebuild via a script running on my server. I also wrote a couple of programs to make new posts and commit the data (or to make new drafts).</p> <h2 id="all-that-glitters-is-not-gold">All that glitters is not gold</h2> <p>A bad note is comments: I had to go for Disqus unfortunately, as even when I managed to set up <a href="">Discourse</a>, the complexity of the platform was overwhelming for someone who required only comments for a blog, and nothing else. That, and the reliance on Docker, which meant <strong>another</strong> PostgreSQL server running (I already have one up, which powers <a href="">my GitLab instance</a>). I’m really not happy about it. Should you know a better solution, let me know!</p> <p>Should you find issues with the page, also let me know. I’ve been testing this for a while but of course I didn’t manage to find everything. In particular the “Gallery” is gone for now, and I’ll still need to experiment with plugins to auto-create thumbnails and so on.</p> <h2 id="credits">Credits</h2> <p>Of course this leverages the work of other people, who I feel should be credited:</p> <ul> <li>The aforementioned Phlow for the theme;</li> <li><a href="">Melissa Adkins</a> for her work on banner images and typography.</li> </ul> 2015-05-30T07:52:40+00:00/2015/04/plasma-5-live-images-for-opensuse-and-on-the-default-opensuse-desktopPlasma 5 live images for openSUSE and on the default openSUSE desktop
<p>A lot has been happening on the KDE side of openSUSE… this post summarizes what’s been going on so far.</p> <h2 id="live-media-for-plasma-5">Live media for Plasma 5</h2> <p>One of the most-often requested ways to test Plasma 5, given it can’t be coinstalled with the 4.x Workspace, is the availability of live images to test either in a VM or on bare metal without touching existing systems.</p> <p>Given that other distributions started doing so a while ago, naturally openSUSE couldn’t stay still. ;) Thanks to the efforts of Hrvoje “shumski” Senjan, we now have live media available for testing out Plasma 5!</p> <ul> <li><a href="">Download location</a>: the ISO file you’re looking for is called <em>openSUSE-Plasma5</em> (currently x86_64 only)</li> </ul> <p>The image is based on the current Tumbleweed and takes the latest code from git. If you test this in a virtual machine, bear in mind that there are some issues with VirtualBox and Plasma 5, and that QtQuick’s reliance on OpenGL can cause problems in general with virtual machines.</p> <p>And if you find a bug… if it’s in the core distribution, or in the KDE packaging, head over to <a href="">openSUSE’s Bugzilla</a>. If it’s instead in the software, the <a href="">KDE bug tracker</a> is your friend.</p> <p>Questions? Head over to the <a href="">opensuse-kde ML</a>, or the #opensuse-kde channel on Freenode.</p> <h2 id="plasma-5-as-default-in-opensuse">Plasma 5 as default in openSUSE</h2> <p>You may have read in a recent Softpedia article that Plasma 5 is going to become the default in openSUSE. That’s correct (what did you expect, a retraction?
;): I and the other members of the team (Raymond and Hrvoje) have been using Plasma 5 for a long time, not only because we like to stay on the bleeding edge ;) but also to see how it would fare for openSUSE. In the meantime, we reported bugs, sometimes fixed them, and occasionally landed one or two features in.</p> <p>With the upcoming Plasma 5.3, we feel that it is of the level of quality expected from the default openSUSE desktop, and therefore we have set up preparations for the switch. As Rome wasn’t built in a day, it won’t happen straight away ;) but it will involve changes in the repositories and in the packaging, which are summarized below:</p> <ul> <li>We will start the migration around the end of April, as long as the basic openQA tests are ready;</li> <li>We will release KDE Applications 15.04 in Tumbleweed at the same time;</li> <li>The KF5 ports of the applications present in KDE Applications 15.04 will <strong>obsolete</strong> their existing counterparts (hence the KF5 version will replace the 4.x version). The same will happen for the <em>kdebase4-workspace</em> and Workspace 4.x packages;</li> <li>Afterwards, the 4.x Workspace will not be supported or maintained for Tumbleweed. Help from the community is welcome in case anyone wants to step up and maintain the packages.</li> <li>The default menu applet will be Kicker (as opposed to Kickoff, used in 4.x times);</li> <li>The default theme for Plasma 5 will be Breeze (the upstream default), and we will use the menu structure provided by upstream KDE (as opposed to the custom structure we use today);</li> <li> <p>The repository layout will change.
We will have three repositories holding KDE software:</p> <ul> <li>KDE:Frameworks - the KF5 libraries and Plasma 5;</li> <li>KDE:Applications - KDE Applications releases;</li> <li>KDE:Extra - other KDE/Qt related community packages.</li> </ul> </li> <li>For each of these repositories, there will also be an “unstable” variant, tracking the current git master state.</li> </ul> <p>(<a href="">The full IRC log of the last meeting</a> outlines these points in detail.)</p> <p>There are still some points open for discussion, in particular for the update applet: should we keep Apper? Would Muon be a drop-in replacement? Or are we better off without an applet at all?</p> <p>Of course, input and help from the community are welcome. Hop on IRC or on the ML (see above for where to look) if you want to help and participate in this large transition.</p> 2015-04-06T08:35:43+00:00/2015/02/danbooru-client-0-3-0Danbooru Client 0.3.0<p>Following up on yesterday’s release, I’ve released Danbooru Client 0.3.0.</p> <p>This early new release is mostly due to the fact that the QML view file wasn’t installed (sorry!)
so part of the UI would not even load (or would even crash).</p> <p>That said, I’ve managed to get some extra features in:</p> <ul> <li>Fade-in/out animation when posts are being downloaded;</li> <li>Support for tagging is back (EXPERIMENTAL): it is optional, and requires KFileMetaData (not yet a framework, but should be distributed with Plasma 5).</li> </ul> <h3 id="release-details">Release details</h3> <p>Complete build instructions are in the <a href="">README</a>.</p> 2015-02-23T23:27:32+00:00/2015/02/danbooru-client-0-2-0-releasedDanbooru Client 0.2.0 released<p>After my previous post, development went quicker than expected, so I’ve actually managed to get a real version out. ;) So without much ado… here’s Danbooru Client 0.2.0!</p> <p>This redesigned C++ version brings a few more features compared to the PyKDE4 version, notably:</p> <ul> <li>Infinite scrolling (experimental) - just scroll down to load the next set of posts;</li> <li>QML-based thumbnail view.</li> </ul> <p>Click on the image below for a demonstration of what’s in this release (<strong>warning</strong>: 2M GIF file):<br /> <a href=""><img src="" alt="tmp" /></a></p> <h2 id="downloading">Downloading</h2> <h2 id="building-and-installing">Building and installing</h2> <p>Complete instructions are in the <a href="">README</a>. You will need KDE Frameworks 5.7 and Qt 5.4.</p> <h2 id="known-issues">Known issues</h2> <ul> <li>The loading animation needs to be made prettier;</li> <li>Only the first set of pools is downloaded;</li> <li>Double-clicking on an image does nothing (what should it do?
let me know);</li> <li>No batch downloading (pretty hard to do with QML; I’ll try to figure out if it can be done);</li> <li>No tooltips for buttons (an oversight - will be fixed in the next release).</li> </ul> <p>Give it a spin, and let me know what you think!</p> 2015-02-22T20:38:02+00:00/2015/02/sometimes-they-come-back-danbooru-client-ported-to-kf5-and-c <p>Some of you may remember a semi-complex application I wrote back in the day, namely <a href="">Danbooru Client</a>. Written in PyKDE4, it provided a semi-decent interface to <a href="">Danbooru-style boards</a>. It mostly worked and received little maintenance (also because I <em>didn’t</em> have that much time for maintenance).</p> <p>In the meantime, I started learning some C++. No, it’s not that I don’t like Python (in fact I <strong>do</strong>, I use it a lot in my day job), but I wanted to gain at least some basic skills to be able to contribute directly to KDE software.</p> <p>[…]</p> <p>Then the <a href="">KDE Frameworks 5</a> and Qt 5 arrived on the scene…</p> <h3 id="enter-kde-frameworks-5">Enter KDE Frameworks 5</h3> <p>[…]</p> <p>[…]</p> <p>Fast forward to today. The application has almost reached an alpha state and it more or less works. Without further ado, a screenshot!</p> <p><a href=""><img src="" alt="Image of the new version of Danbooru Client" /></a></p> <h3 id="what-works-and-what-doesnt">What works, and what doesn’t?</h3> <p>The basic functionality is there:</p> <ul> <li> <p>Listing, viewing, and saving posts;</p> </li> <li> <p>Listing and downloading pools;</p> </li> <li> <p>Tag blacklists;</p> </li> <li> <p>Password saving with KWallet;</p> </li> <li> <p>Searching by tags.</p> </li> </ul> <p>In addition, Danbooru Client has gained “infinite scrolling”: no need to open multiple tabs!
Scroll to the bottom and it’ll automatically load the next batch of posts.</p> <p>There is also stuff that is missing and / or broken:</p> <ul> <li> <p>Batch downloads (not sure how to manage this one with QML…)</p> </li> <li> <p>Tag display (only a matter of time)</p> </li> <li> <p>Loading consecutive batches doesn’t work if they don’t fill up the window</p> </li> <li> <p>Tagging (I need to see if the Baloo API is suited for this)</p> </li> </ul> <h3 id="how-do-i-try-this-out">How do I try this out?</h3> <p>In my opinion it’s not even alpha code, but if you insist… ;)</p> <p>You will need KDE Frameworks 5.7 minimum (because I needed to get a couple of features in for KDeclarative to make this work) and Qt 5.4 (I didn’t test with earlier versions), plus the latest Extra-CMake-Modules.</p> <p>You can then clone the code from:</p> <p></p> <p>(sorry, I don’t feel comfortable in hosting my code on GitHub - although if you want to mirror it there, feel free)</p> <p>After cloning, do the CMake dance:</p> <div class="highlight"><pre><code class="language-bash">mkdir build <span class="nb">cd </span>build <span class="c"># Replace this with a custom prefix if you don't want to install system-wide</span> cmake -DCMAKE_INSTALL_PREFIX<span class="o">=</span><span class="k">$(</span>qtpaths --install-prefix<span class="k">)</span> make make install <span class="c"># as root or through "sudo"</span></code></pre></div> <p>For those who want to keep dependencies at minimum, here are the frameworks Danbooru Client depends on:</p> <ul> <li> <p>KIconThemes</p> </li> <li> <p>KCoreAddons</p> </li> <li> <p>KGuiAddons</p> </li> <li> <p>KTextWidgets</p> </li> <li> <p>KXmlGui</p> </li> <li> <p>KConfigWidgets</p> </li> <li> <p>KIO (I can’t live without this!)</p> </li> <li> <p>KWallet</p> </li> <li> <p>KDeclarative (needed for the QML view)</p> </li> <li> <p>KDELibs4Support (I plan to remove this soon)</p> </li> </ul> <h3 id="ive-found-a-bug-what-do-i-do">I’ve found a bug!
What do I do?</h3> <p>Nothing. ;) I told you it was pre-alpha code, didn’t I? More seriously, it’s too early to even report bugs. Check back in a few weeks, and see if it’s fixed.</p> 2015-02-15T21:59:37+00:00/2015/01/plasma-5-2-for-opensuse-you-betPlasma 5.2 for openSUSE? You bet!<p>The ever-amazing Plasma team from KDE just put out a new release of Plasma. I won’t spend much time describing how big of an improvement it is - the release announcement at KDE has all the details needed to whet your appetite.</p> <p>And of course, now it’s the turn of distributions to get out packages for the users at large.</p> <p>This is also the case for openSUSE. <a href="">The KDE:Frameworks5 repository</a> hosts the new 5.2 goodness for released distributions (13.1 and 13.2) and Tumbleweed. Packages have also been submitted to Tumbleweed proper (pending legal review, so it will take some time).</p> <p>Don’t forget the rule of thumb, in case you find problems: bugs in the packages should be directed <a href="">towards the openSUSE Bugzilla</a>, while issues in the actual software should be <a href="">reported to KDE</a>. You can also discuss your experience on the <a href="">KDE Community Forums</a>.</p> 2015-01-27T15:27:08+00:00/2014/10/kde-applications-and-platform-4-14-2-available-for-opensuseKDE Applications and Platform 4.14.2 available for openSUSE<p>Following up on <a href="">KDE’s announcement of the latest stable release</a>, we now have packages available for 12.3 and 13.1 (a 13.2 repository will be made available after it is out). You will find them in the <a href="">KDE:Current</a> repository.
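<p>For those who prefer the command line over YaST, adding KDE:Current and switching the installed packages to it can be sketched like this (the repository URL is an assumption following the usual download.opensuse.org OBS layout — adjust the path for your release):</p>

```shell
# Add the KDE:Current repository for openSUSE 13.1 (change the path
# to match your distribution version) with refresh enabled.
sudo zypper ar -f \
  http://download.opensuse.org/repositories/KDE:/Current/openSUSE_13.1/ \
  KDE-Current

# Switch the installed KDE packages over to the new repository.
sudo zypper dup --from KDE-Current
```

<p>After that, regular updates pick up new point releases automatically, as noted above.</p>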
Current users of this repository will get the new release automatically once they update.</p> <p>Why should you upgrade? You can take a look at the <a href="">list of changes</a> to get an idea. These fixes touch many important KDE applications, including KMail, Okular and Dolphin.</p> <p>Packages are also on their way to openSUSE Factory.</p> <p>As usual, bugs with the packaging should be <a href="">reported to openSUSE</a> and upstream bugs should be <a href="">reported to KDE</a>.</p> <p>Also, if you like what KDE is doing and you feel you can not contribute directly, you may want to support this <a href="">end of year fundraiser</a>.</p> 2014-10-15T16:30:53+00:00/2014/07/latest-4-13-newest-4-14-beta-and-plasma-5-in-opensuseLatest 4.13, newest 4.14 beta and Plasma 5 in openSUSE<p>Congratulations to KDE (of which I’m proud to be a part) for the <a href="">newest release of the Plasma workspace</a>! At the same time, the 4.x series <a href="">has seen a new beta release</a>, and the <a href="">stable branch got updated, too</a>.</p> <p>I’m betting a few people will ask “<em>Are these available for openSUSE?</em>” and of course the answer is yes, thanks to the efforts of the openSUSE community KDE team and the Open Build Service.</p> <h3 id="plasma-5">Plasma 5</h3> <p>Plasma 5 can be installed from two different repositories:</p> <ul> <li> <p><a href="">KDE:Frameworks5</a>, which has the latest (and only one, for now ;) official stable release (you will also need KDE:Qt5 if you use openSUSE 13.1; see instructions at the link);</p> </li> <li> <p><a href="">KDE:Unstable:Frameworks</a>, which hosts git snapshots.</p> </li> </ul> <p>As the latter is under <strong>heavy</strong> development, it is not recommended unless <em>you truly know what you are doing</em>.</p> <p>Also, packages are <strong>mutually exclusive</strong> with the 4.x workspace. Installing Plasma 5 will uninstall the 4.x workspace.
To revert, simply reinstall the 4.x workspace packages (kdebase4-session should suffice).</p> <p>Don’t forget, if you use Plasma 5 packages, to <a href="">report bugs in the software to KDE directly</a> (or you can use <a href="">the official discussion forum</a>) and packaging ones to <a href="">Novell’s Bugzilla</a>.</p> <h3 id="betas">4.14 betas</h3> <p>You will find the newest beta from the 4.x series in the <a href="">KDE:Distro:Factory repository</a>. Building for openSUSE 13.1 (along with Factory) has been enabled in order to get widespread testing.</p> <p>As this is a beta, although very stable, do not use it on production systems unless you know what you are doing. The same recommendations as Plasma 5 apply for reporting bugs.</p> <p>You can also join the discussion on the <a href="">KDE Community Forum’s beta releases area</a>.</p> <h3 id="latest-413-release"><strong>Latest 4.13 release</strong></h3> <p><a href="">KDE:Current</a> hosts the latest release (4.13.3) from the stable 4.13 branch. If you have the repository enabled you will automatically get it once you update packages.</p> <p>Go ahead and enjoy the latest and greatest from KDE!</p> 2014-07-16T05:43:49+00:00/2014/06/changes-in-kde-frameworks-5-and-plasma-5-packaging-in-opensuseChanges in KDE Frameworks 5 and Plasma 5 packaging in openSUSE<p>Since […] co-installable, but the workspace components (Plasma 5) <strong>will</strong> conflict with the existing Plasma 4.11.x installation.</p> <p>What does this mean in practice? If you want to use Plasma 5 you will not be able to use a 4.11.x Plasma Workspace. The move was made to ease maintenance and packaging, as it meant dropping a number of hacks, and also to make KF5 + Plasma 5 packages suitable for inclusion in openSUSE Factory.
At the same time, the 4.11.x workspace packages were adjusted to reduce the number of conflicting components, so that applications depending on workspace libs (such as KDevelop) would remain on the system even with Plasma 5 installed.</p> <p>We’ve also said this many times but it’s worth repeating: Plasma 5 will <strong>not</strong> be the default in openSUSE 13.2; the stable, LTS release 4.11.x will (as a note: 4.11.x because the workspace did not increase its version number since becoming LTS, while the Development Platform and the Applications are at 4.13 at the moment).</p> <p>That said, if you are feeling brave, feel free to try out Plasma 5… and <a href="">don’t forget to report bugs!</a></p> <p>P.S.: Most thanks go to Hrvoje “shumski” Senjan, who did most (if not all) of the packaging work.</p> <p>P.P.S.: If you like what KDE is doing, please consider <a href="">supporting the Randa Meetings 2014.</a></p> 2014-06-28T11:59:23+00:00/2014/04/unlocking-kwallet-with-pamUnlocking KWallet with PAM<p>Requests to unlock KWallet automatically on login (assuming the wallet password and user password are the same), like gnome-keyring can do, have been going on for years: in fact, <a href="">bug reports requesting this feature are quite old</a>.</p> <p>While the module itself has not yet been released officially, it’s already been used by some distributions (Kubuntu). However, documentation is lacking, so it could be hard to set it up for anyone else. This post provides some indications on how to set KWallet up with PAM.</p> <p>Before we begin, a disclaimer: <strong>as we’re dealing with pre-release software, do everything at your own risk! Errors with PAM can lock you out of your system!</strong></p> <p>Also, there’s no guarantee that these instructions, although they worked for me, will work for you.
YMMV.</p> <h3 id="prerequisites">Prerequisites</h3> <p><strong>EDIT: You will need a more recent startkde script than the one shipped in Workspace 4.11.8: ask your distro for a back-port of the latest commits to it (the last 3 since 4.11.8).</strong> Thanks to Rex Dieter (Fedora) for letting me know.</p> <p>You need to have libgcrypt and its development headers installed, at least version 1.5.0 (earlier versions won’t work), along with the PAM development headers. Before beginning, change your wallet password to be the same as your login password (you chose a strong password, didn’t you? ;).</p> <p>EDIT: You will also need <em>socat</em>, because it’s used to inject the right environment when the KDE workspace session is starting.</p> <h3 id="building-pam-kwallet">Building pam-kwallet</h3> <p>Clone the git repository holding pam-kwallet (<strong>NOTE for posterity:</strong> the URL may change in the future once the code moves properly inside KDE’s official modules):</p> <div class="highlight"><pre><code class="language-bash">git clone git://anongit.kde.org/scratch/afiestas/pam-kwallet.git</code></pre></div> <p>Then compile:</p> <div class="highlight"><pre><code class="language-bash">mkdir build<span class="p">;</span> <span class="nb">cd </span>build
cmake -DCMAKE_INSTALL_PREFIX<span class="o">=</span><span class="k">$(</span>kde4-config --prefix<span class="k">)</span> ../
make</code></pre></div> <p>You may have to add -DLIB_SUFFIX=64 if you are using a 64 bit system, or the library may get installed in the wrong path.</p> <p>Install either as root or using <em>sudo</em>:</p> <div class="highlight"><pre><code class="language-bash">make install</code></pre></div> <h3 id="hooking-pam-kwallet-to-pam">Hooking pam-kwallet to PAM</h3> <p>Once this is done, we need to hook pam-kwallet to PAM proper.
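</p>

<p>A note on the <code>-DLIB_SUFFIX=64</code> remark above: whether the suffix is needed depends on the machine’s library layout. A small helper sketch (this is mine, not part of pam-kwallet; the architecture list is an assumption covering the common lib64 platforms):</p>

```shell
# Map a machine architecture (as printed by `uname -m`) to the cmake
# LIB_SUFFIX argument. Illustrative helper only, not pam-kwallet code.
pick_lib_suffix() {
    case "$1" in
        x86_64|aarch64|ppc64) echo "-DLIB_SUFFIX=64" ;;
        *) echo "" ;;
    esac
}

pick_lib_suffix "x86_64"   # prints -DLIB_SUFFIX=64
pick_lib_suffix "i586"     # prints an empty line
```

<p>On a real build you would then call <code>cmake $(pick_lib_suffix "$(uname -m)") -DCMAKE_INSTALL_PREFIX=$(kde4-config --prefix) ../</code>.</p>

<p>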
These instructions have been made with <a href="">inspiration from the Arch Linux Wiki entry on GNOME keyring</a>, which should be your reference in case of issues.</p> <p>We’ll have to tell PAM that it can use our freshly built module as an authentication mechanism. We will be doing so by editing specific files under /etc/pam.d. All operations should be done as root or using sudo.</p> <p>First, we edit /etc/pam.d/login (the added line is marked with a comment). This is how it looks on my system (<em>note</em>: depending on your distro, it may and will look different):</p> <div class="highlight"><pre><code class="language-bash"><span class="c">#%PAM-1.0</span>
auth requisite pam_nologin.so
auth <span class="o">[</span><span class="nv">user_unknown</span><span class="o">=</span>ignore <span class="nv">success</span><span class="o">=</span>ok <span class="nv">ignore</span><span class="o">=</span>ignore <span class="nv">auth_err</span><span class="o">=</span>die <span class="nv">default</span><span class="o">=</span>bad<span class="o">]</span> pam_securetty.so
auth include common-auth
account include common-account
password include common-password
session required pam_loginuid.so
session include common-session
<span class="c">#session optional pam_lastlog.so nowtmp showfailed</span>
session optional pam_mail.so standard
-session optional pam_kwallet.so auto_start <span class="c">#### Add this line</span></code></pre></div> <p>Then we change /etc/pam.d/passwd.
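</p>

<p>Before logging out, it can be worth sanity-checking that the pam_kwallet line actually landed in the file. A sketch (demonstrated on an inline copy of a config; on a real system you would point <code>pam_file</code> at /etc/pam.d/login instead):</p>

```shell
# Check that pam_kwallet.so is hooked into the "session" stack of a
# PAM file. Demonstrated on a temporary copy of a configuration.
pam_file=$(mktemp)
cat > "$pam_file" <<'EOF'
session required pam_loginuid.so
session include common-session
-session optional pam_kwallet.so auto_start
EOF

if grep -q 'session.*pam_kwallet\.so' "$pam_file"; then
    result="pam_kwallet is in the session stack"
else
    result="pam_kwallet line missing"
fi
echo "$result"
rm -f "$pam_file"
```

<p>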
Notice that there is a caveat described below:</p> <div class="highlight"><pre><code class="language-bash"><span class="c">#%PAM-1.0</span>
auth include common-auth
-auth optional pam_kwallet.so <span class="c">### add this line</span>
account include common-account
password include common-password
session include common-session</code></pre></div> <p>It is <strong>essential</strong> now that you notice whether you are using a default <em>.kde</em> for your KDE applications settings, or another name (for example <em>.kde4</em> in openSUSE). If it is different from .kde, you <strong>must</strong> add an option which tells the PAM module where it is (it only involves modifications in /etc/pam.d/passwd):</p> <div class="highlight"><pre><code class="language-bash">-auth optional pam_kwallet.so <span class="nv">kdehome</span><span class="o">=</span>.kde4 <span class="c"># for .kde4</span></code></pre></div> <h3 id="alternative-setup">Alternative setup</h3> <p>While the setup above <em>should work</em>, it may not. In this case, you will need to edit the PAM files used by your display manager. In the case of KDM, they may be <em>/etc/pam.d/kdm</em> or <em>/etc/pam.d/xdm</em>.
For LightDM, you should edit <strong>both</strong> /etc/pam.d/lightdm and /etc/pam.d/lightdm-greeter.</p> <p>In either case, put both the auth and the session lines in the files, as such (example from my setup):</p> <div class="highlight"><pre><code class="language-bash"><span class="c">#%PAM-1.0</span>
<span class="c"># LightDM PAM configuration used only for the greeter session</span>
auth required pam_permit.so
-auth optional pam_kwallet.so <span class="nv">kdehome</span><span class="o">=</span>.kde4 <span class="c">### added</span>
account required pam_permit.so
password include common-password
session required pam_loginuid.so
session include common-session
-session optional pam_kwallet.so auto_start <span class="c">### added</span></code></pre></div> <h3 id="wrapping-it-up">Wrapping it up</h3> <p>After these changes, log out and back in. If everything is correct, you will not see password requests from KWallet, but you will see your wallet properly unlocked!</p> <h3 id="and-if-it-doesnt-work">And if it doesn’t work?</h3> <p>I warned you. ;) More seriously, look in the authentication logs for clues as to whether there were PAM errors. My suggestion would be to wait for distros to figure this out, or hope that a real PAM expert steps in, as debugging at such a low level is very difficult (at least for me).</p> <p>EDIT: As pointed out by Rex Dieter (Fedora), putting a - in front of your PAM entries will make PAM ignore them if unavailable, reducing the amount of logging sent to your syslog.</p> <p><em>Published: 2014-04-25T09:02:06+00:00</em></p> <h2 id="kdecurrent-and-4-13-packages-for-opensuse">KDE:Current and 4.13 packages for openSUSE</h2>
<p><em>This is a guest post by Raymond “tittiatcoke” Wooninck, with contributions from myself and Hrvoje “shumski” Senjan</em></p> <p>In the next hours the <a href="">KDE:Current</a> repository will publish <a href="">the latest release from KDE (4.13)</a>. As this release comes with a big change (<a href="">the new semantic search</a>), we would like to suggest some simple steps in order to perform the upgrade correctly.</p> <h3 id="before-the-upgrade">Before the upgrade</h3> <p>In order to migrate data automatically from the Nepomuk store to the new format, you will need Nepomuk up and running, if only for the time needed for the migration. Ensure that Nepomuk is running before the update (in System Settings > Desktop Search). This is only necessary in case Nepomuk is in use on the system.</p> <h3 id="the-upgrade-itself">The upgrade itself</h3> <ul> <li> <p> If you are already using KDE:Current then the upgrade should be a simple <em>zypper up</em>, or upgrading packages through YaST Software Management.</p> </li> <li> <p> If you are not yet using KDE:Current, then please follow the instructions <a href="">on the wiki</a> on how to add the necessary repositories. After adding them, a <em>zypper dup</em> is required to ensure that all the KDE packages are coming from KDE:Current.</p> </li> </ul> <p>Please do not remove Nepomuk, as otherwise the migration to Baloo will fail. Also, after the upgrade please make sure that the <strong>baloo-file</strong> package is installed (it is required for indexing). After this check, log off and back on. The Nepomuk migrator will then run and move all the data that can be migrated to the new system.
It will also turn off Nepomuk at the end of the migration.</p> <p>Note that some applications still require the Nepomuk framework (like bangarang, kweshtunotes, etc.).</p> <h3 id="usingthe-new-search-system">Using the new search system</h3> <p>Unlike the ‘include folders to be indexed’ approach used with Nepomuk, the new search backend prefers to index everything and exclude unwanted folders explicitly. With the standard setup, all files and directories below the home directory will be indexed. All other filesystems are marked as excluded.</p> <p>This can be changed by deleting the respective entries in System Settings. To turn indexing off completely, add your home directory to the excluded folder list (bear in mind that this will prevent search from working). To remove the components completely, remove the <strong>baloo-file</strong> package. The package <strong>baloo-pim</strong> (only present when kdepim is installed) can be removed if no search capabilities are required for KMail.</p> <p>Aside from Dolphin, the only search UI available is the package called <strong>milou</strong>. Milou can be placed in the panel for easy access and its usage is quite simple. You enter the search term and results are shown for files, emails, and so on. You can pick which categories to use in the settings.
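</p>

<p>The “index everything, except excluded prefixes” model described above can be sketched as follows (an illustration of the decision logic only, not Baloo’s actual code):</p>

```shell
# Decide whether a path would be indexed, given a list of excluded
# directory prefixes. Everything is indexed unless excluded.
is_indexed() {
    path=$1
    shift
    for excluded in "$@"; do
        case "$path" in
            "$excluded"*) return 1 ;;   # under an excluded prefix
        esac
    done
    return 0                            # indexed by default
}

is_indexed "/home/me/docs/notes.txt" "/home/me/big-logs" && echo "indexed"
is_indexed "/home/me/big-logs/x.log" "/home/me/big-logs" || echo "excluded"
```

<p>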
At the moment you should <em>not</em> put Milou in the system tray, because it will cause Plasma to crash at login.</p> <p>Tags in the files are now stored using extended attributes (xattrs) instead of in the database.</p> <h3 id="known-issues">Known issues</h3> <ul> <li> <p>The initial indexing can be heavy on I/O, especially if there are large text files: either wait till the indexing is complete (this step is done only once), or exclude the folders containing such files.</p> </li> <li> <p>Some data will be lost during the migration: in particular, emails will have to be re-indexed, and file-to-activity associations, if used, will not be preserved.</p> </li> </ul> <h3 id="reporting-problems">Reporting problems</h3> <p>As usual, use <a href="">Novell’s Bugzilla</a> if you find issues pertaining to the specific packaging used in openSUSE; otherwise, report bugs <a href="">directly to KDE</a>.</p> <p><em>Published: 2014-04-24T20:35:47+00:00</em></p> <h2 id="being-current">Being Current</h2> <p>It is not news that openSUSE, thanks to the effort of the openSUSE community KDE team, offers <a href="">several third-party repositories</a>.</p> <p>Then there are repositories which offer <a href="">additional applications outside the main openSUSE KDE desktop packages</a>, KDE:Extra (for stable releases) and KDE:Unstable:Extra (for development snapshots).
In this case, they complement the already existing KDE repositories.</p> <p>Maintaining all of these repositories put a strain on the <a href="">Open Build Service</a>, and maintenance was problematic, due to the number of repositories involved and the limited manpower at the disposal of the openSUSE community KDE team.</p> <p>This is why we’re announcing a change today: effective with the 4.12.4 release from KDE, the KDE:Release:4x repositories will be retired, replaced by a single resource which tracks the latest stable release from KDE, <a href="">KDE:Current</a>.</p> <p>Of course, KDE:Current will include 4.13 packages once the official release is out.</p> <p>Therefore, if you are a user of KDE:Release:4x, be aware that from tomorrow (at the same time as the 4.12.4 release) the repository will <em>cease to exist</em> and you should move to KDE:Current.</p> <p>For questions and suggestions, feel free to drop by the #opensuse-kde channel on Freenode, or use the <a href="">openSUSE KDE mailing list</a>.</p> <p>This post was brought to you by the bright (?) minds of the openSUSE community KDE team, with particular thanks to Raymond “tittiatcoke” Wooninck, who did most of the work.</p> <p>P.S.: No, this is not an April Fools’ joke. ;)</p> <p><em>Published: 2014-03-31T19:00:14+00:00</em></p> <h2 id="4-13-beta-1-workspaces-platform-and-applications-for-opensuse-start-the-testing-engines">4.13 beta 1 Workspaces, Platform and Applications for openSUSE: start the testing engines</h2> <p>Yesterday <a href="">KDE released their first beta of the upcoming 4.13 version of Workspaces, Applications and Development Platform</a>. As usual with the major releases from KDE, it’s packed with a lot of “good stuff”. Giving a list of all the improvements is daunting; however, there are some key points that stand out:</p> <ul> <li> <p>Searching: <a href="">KDE’s next generation semantic search</a> is a prominent feature of this release.
It’s several orders of magnitude faster, much leaner on memory and generally is a great improvement over the previous situation (this writer has been testing it for the past months and he’s absolutely delighted about it).</p> </li> <li> <p>PIM: Along with tight integration with the new search feature, KMail gained a new quick filter bar and search, many fixes in IMAP support (also thanks to <a href="">the recent PIM sprint</a>) and a <a href="">brand new sieve editor</a>.</p> </li> <li> <p>Okular has a lot of new features (tabs, media handling and a magnifier)</p> </li> <li> <p>A lot more ;)</p> </li> </ul> <p>Given all of this, could the openSUSE KDE team stay still? Of course not! Packages are available in the <a href="">KDE:Distro:Factory repository</a> (for openSUSE 13.1 and openSUSE Factory), as there are a lot of changes and they need more testing. The final release will also be provided in the KDE:Release:413 repository (which will be created then).</p> <p>As usual, this is an <strong>unstable release</strong> and it is only meant for <em>testing</em>. Don’t use this in production environments! If you encounter a bug, if it is packaging related use <a href="">Novell’s Bugzilla</a>, otherwise head to <a href="">bugs.kde.org</a>. Also, before reporting anything, please check out the <a href="">Beta section of the KDE Community Forums first</a>.</p> <p>That’s all, enjoy this new release!</p> <p><em>Published: 2014-03-07T19:45:27+00:00</em></p> <h2 id="an-expedition-in-the-qml-realm">An expedition in the QML realm</h2> <p>Up to now I was using <a href="">this plasmoid</a> written in Python, but the code had several issues and used its own way of getting the public IP. However, I knew Plasma <strong>already</strong> has a way to give you your IP, that is the <em>geolocation</em> DataEngine.
I thought of adjusting the current widget to use this engine, but then I thought: “<em>What if I make one in QML?</em>”.</p> <p>It turned out to be a rather easy task, which I accomplished in less than one hour, by reading up some documentation and examples, and of course pestering people on IRC. ;)</p> <p>All I needed to have the IP ready was:</p> <div class="highlight"><pre><code class="language-javascript"><span class="nx">PlasmaCore</span><span class="p">.</span><span class="nx">DataSource</span> <span class="p">{</span>
    <span class="nx">id</span><span class="o">:</span> <span class="nx">dataSource</span>
    <span class="nx">dataEngine</span><span class="o">:</span> <span class="s2">"geolocation"</span>
    <span class="nx">connectedSources</span><span class="o">:</span> <span class="p">[</span><span class="s1">'location'</span><span class="p">]</span>
    <span class="nx">interval</span><span class="o">:</span> <span class="mi">500</span>
    <span class="nx">onNewData</span><span class="o">:</span> <span class="p">{</span>
        <span class="k">if</span> <span class="p">(</span><span class="nx">sourceName</span> <span class="o">==</span> <span class="s1">'location'</span><span class="p">)</span> <span class="p">{</span>
            <span class="nx">ipAddr</span><span class="p">.</span><span class="nx">text</span> <span class="o">=</span> <span class="nx">data</span><span class="p">.</span><span class="nx">ip</span>
        <span class="p">}</span>
    <span class="p">}</span>
<span class="p">}</span></code></pre></div> <p>where <code>ipAddr</code> was a Text element.</p> <p>And this is the final result (mimicking the other widget I took it from):</p> <p><img src="" alt="Plasmoid in action" /></p> <p>There are still a number of issues, for example getting the right size when started, and ensuring it cannot be resized too small.
But I was surprised that it was so easy.</p> <p>Interested parties can grab it by cloning and installing:</p> <div class="highlight"><pre><code class="language-bash">git clone
plasmapkg -i ip-address-viewer/</code></pre></div> <p>Suggestions on code quality are welcome.</p> <p><em>Published: 2014-01-04T08:03:18+00:00</em></p> <h2 id="8-months-with-kde-and-opensuse-looking-back-after-the-13-1-release">8 months with KDE and openSUSE - looking back after the 13.1 release</h2> <p>And so, finally <a href="">openSUSE 13.1 is out of the door</a> (I couldn’t celebrate like I wanted, as I’ve been very busy). This release has lots of improvements, and of course, <a href="">the latest stable software from KDE</a>. It is time (perhaps?) to look back and see what the team has done during this development cycle.</p> <p>There were also <a href="">some much-needed organizational changes in projects</a>, to keep things manageable. And thanks to the effort of shumski, openSUSE offers, like other distributions, <a href="">regularly updated KF5 packages to help with development and testing</a>. Aside from the very-bleeding-edge-it-will-kill-you software, the team eats a lot of its own dogfood, testing things as much as possible (and suffering from fallouts, sometimes ;) before pushing them to stable packages.</p> <p>The goals for the future? Making <a href="">the KLyDE splitting</a> a reality, among other things.</p> <p>It may not seem like a large list, but it is a lot of work. ;) So if you feel like helping, don’t be shy and drop us a note either on IRC (#opensuse-kde) or on the <a href="">opensuse-kde mailing list</a>.</p> <p><em>Published: 2013-11-25T18:48:09+00:00</em></p> <h2 id="qt5-on-opensuse-including-experimental-kf5-packages">Qt5 on openSUSE (including experimental KF5 packages)</h2>
<p>In the past few days, the openSUSE KDE team has been working hard, following in the footsteps of the nice work done by the <a href="">Kubuntu</a> and <a href="">Arch Linux</a> communities, to provide Qt5 packages for the distribution. In fact, work had already been done in the past, but the packages were not coinstallable with the existing Qt4 installation.</p> <p>Thanks to a renewed effort, the OBS now holds Qt5 packages that won’t overwrite the existing Qt4 install: they currently live in the <a href="">KDE:Qt51 repository</a> (<a href="">Factory</a> and <a href="">openSUSE 12.3</a>) and they have been submitted to Factory itself, with the plan of having a full set of Qt5 packages for the next version of the distribution. <a href="">PyQt5</a> was also packaged, for those who are interested in using Python with Qt.</p> <p>These packages are deemed stable and usable without issues (although, not being part of the distribution, not supported): if you spot a problem in packaging, file a bug in <a href="">Novell’s Bugzilla</a>.</p> <p>Up to this point we have talked about stable releases. But as KF5 depends on the yet-unreleased Qt 5.2, new repositories were created:</p> <ul> <li> <p><a href="">KDE:Qt5</a>, which hosts snapshots off the current Qt tree (5.2);</p> </li> <li> <p><a href="">KDE:Frameworks</a>, which contains snapshots of the current state of KF5.</p> </li> </ul> <p>In particular, KF5 is installed to /opt/kf5, ensuring that it won’t overwrite your current install. Bear in mind that these packages are absolutely not meant for end users (we’re talking <em>pre-alpha</em> here!), but only for people who want to help develop KF5.
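</p>

<p>For development against such a parallel install, the usual approach is to point the standard environment variables at the prefix. A sketch (the variable names are the generic CMake/loader ones, not anything KF5-specific; whether <code>lib</code> or <code>lib64</code> applies depends on the package layout):</p>

```shell
# Make a shell session see a KF5 tree installed under /opt/kf5.
KF5_PREFIX=/opt/kf5
export PATH="$KF5_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$KF5_PREFIX/lib64:${LD_LIBRARY_PATH:-}"
export CMAKE_PREFIX_PATH="$KF5_PREFIX:${CMAKE_PREFIX_PATH:-}"

echo "first PATH entry: $(printf '%s' "$PATH" | cut -d: -f1)"
```

<p>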
For those daring enough, there is even a kf5-session package to start a whole KF5 + frameworks workspace session.</p> <p>Credit where it’s due: the packaging work is mostly the effort of Hrvoje “shumski” Senjan and Raymond “tittiatcoke” Wooninck, the two major KDE packaging powerhouses in the team. ;)</p> <p>Happy hacking!</p> <p><em>Published: 2013-08-06T12:40:19+00:00</em></p> <h2 id="last-round-of-testing-4-11-rc2-packages-for-opensuse">Last round of testing: 4.11 RC2 packages for openSUSE</h2> <p>The latest release of the KDE Platform, Workspaces, and Applications (4.11) is around the corner: in fact, <a href="">the last RC was recently made available</a>. We’re almost there, but it doesn’t mean that testing and reporting should stop: on the contrary, it is needed even more, to ensure that no bad bugs creep into the final release.</p> <p>As part of this effort, openSUSE packages for RC2 have been released through the <a href="">OBS</a>, and are available in the <a href="">KDE:Distro:Factory repository</a>. As always, please report upstream bugs to <a href="">KDE directly</a>, and <a href="">use Novell’s Bugzilla</a> for packaging or openSUSE specific issues.</p> <p>While 4.11 will be part of openSUSE 13.1, users of older versions will be able to install packages through the KDE:Release:411 repository, which will be created after the official release. And now, back to testing!</p> <p><em>Published: 2013-07-30T05:32:23+00:00</em></p> <h2 id="kde-releases-4-11-rc1-and-opensuse-packages-follow">KDE releases 4.11 RC1, and openSUSE packages follow!</h2> <p>The <a href="">latest release from KDE</a> moved from beta to RC stage, thus finding and reporting bugs is more important than ever.
At the same time, the distribution packaging teams are also working on polishing their packages.</p> <p>As far as openSUSE is concerned (not dissing other distros, just mentioning the one I’m involved in ;), you can kill two birds with one stone by installing the packages provided in the <a href="">KDE:Distro:Factory repository</a>. There are two kinds of issues you need to report:</p> <ul> <li>Issues in the software (bugs, crashes, unexpected behaviors, regressions…): KDE would very much like your feedback, so please submit detailed bug reports to <a href="">bugs.kde.org</a>;</li> <li>Issues in the packaging (conflicts, missing files, improper installation…): in this case you may want to notify the openSUSE KDE team by filing a ticket in <a href="">Novell’s Bugzilla</a>.</li> </ul> <p>You can also discuss the upcoming release on the <a href="">KDE Community Forums</a>.</p> <p>As this is not yet a stable release, this usually goes without saying, but I’m repeating it anyway: do <strong>not</strong> use these packages on production systems; install them only if you want to help testing. For everyone else, it’s much better to wait for the official release, which will also be part of the upcoming openSUSE 13.1.</p> <p>That’s all. So, what are you waiting for? Let’s get testing done!</p> <p><em>Published: 2013-07-16T21:30:11+00:00</em></p> <h2 id="4-11-beta-2-opensuse-packages-available">4.11 beta 2 openSUSE packages available</h2> <p>KDE released <a href="">the second beta of the KDE Platform, Workspaces and Applications 4.11</a>, and after the necessary time for the OBS to build packages from the released tarballs, packages are available for openSUSE 12.3 and openSUSE Factory.
Like the previous beta, they are available through the <a href="">KDE:Distro:Factory</a> repository. 4.11.x is targeted for inclusion in the upcoming openSUSE 13.1.</p> <p>So far 4.11 has been pretty stable for me, but you should never forget these packages are for <strong>testing and bug reporting purposes</strong>: don’t use them on production systems. In case you find a bug:</p> <ul> <li>if it is a <em>packaging bug</em>, report it to <a href="">Novell’s bugzilla</a>;</li> <li>if it is a bug <em>in the software</em>, report it <a href="">directly upstream to KDE</a>.<br /> Don’t forget there is <a href="">a specific area on the KDE forums</a> to discuss this beta release.</li> </ul> <p>With that said, let’s find and kill these bugs out there, to make 4.11 rock!</p> <p><em>Published: 2013-07-01T18:18:12+00:00</em></p> <h2 id="4-11-beta-1-packages-available-for-opensuse-12-3">4.11 beta 1 packages available for openSUSE 12.3</h2> <p>As a consequence of <a href="">the recent changes in the repositories</a>, the openSUSE KDE team is happy to announce the availability of packages containing the first beta of the KDE Platform, Workspaces and Applications 4.11.</p> <p>Packages are available in the <a href="">KDE:Distro:Factory</a> repository. As it is beta software, it may have not-yet-discovered bugs, and its use is recommended only if you are willing to test packaging (reporting bugs to <a href="">Novell’s bugzilla</a>) or the software (reporting bugs <a href="">directly to KDE</a>).
For specific queries on the 4.11 beta not related to specific openSUSE packaging, use the <a href="">KDE Community Forums 4.11 Beta/RC area</a>.</p> <p>Have a good test!</p> <p><em>Published: 2013-06-17T05:50:55+00:00</em></p> <h2 id="upcoming-changes-to-opensuse-kde-repositories">Upcoming changes to openSUSE KDE repositories</h2> <p>Since KDE <a href="">has released the first beta of Platform, Workspaces, and Applications 4.11</a>, there will be some changes in the packages offered in the openSUSE repositories.</p> <p>In short:</p> <ul> <li><a href="">KDE:Distro:Factory</a> will now start tracking 4.11 betas and RCs: packages are being worked on. Use this version to test packages and to report bugs upstream.</li> <li><a href="">KDE:Release:410</a> has been decoupled from KDE:Distro:Factory. If you were using 4.10 packages from KDF, you’re highly encouraged to move to this repository.</li> <li>KDE:Unstable:SC will keep on carrying snapshots from KDE git repositories.</li> </ul> <p>If you test the 4.11 packages, report bugs in the <strong>packaging</strong> (or openSUSE-specific functionality) to Novell’s bugzilla, and bugs <strong>in the software</strong> to bugs.kde.org. Also, please use the <a href="">dedicated area on the KDE Community Forums</a> to discuss issues.</p> <p>Let the testing commence!</p> <p><em>Published: 2013-06-14T06:11:07+00:00</em></p> <h2 id="kde-platform-workspace-and-applications-4-10-4-for-opensuse">KDE Platform, Workspaces, and Applications 4.10.4 for openSUSE</h2> <p>These posts kind of sound like a broken record, right?
;) Anyway, since <a href="">KDE has released new versions of Platform, Workspaces and Applications</a> as part of the stable release cycle, thanks to the OBS we have packages available for openSUSE 12.2 and 12.3. The 4.10.4 update will also be released as an official update for 12.3 in due time.</p> <p>Where can you get the packages? Two places, as usual: the KDE:Release:410 and KDE:Distro:Factory repositories.</p> <p>What to look forward to in this release? More than 50 bugs have been fixed, including:</p> <ul> <li>CSS compliance fixes in KHTML</li> <li>Bug fixes in Gwenview (display after image rotation, duplicate entries in recent folders)</li> <li>Assorted fixes in KMail: polishing of external editor support, CalDAV fixes, UI adjustments…</li> </ul> <p>For more you can always turn to the <a href=" Number&list_id=675254">full list of fixed bugs</a>.</p> <p>As with any good broken record, some more repetition: report bugs in packaging to Novell’s Bugzilla, and bugs in the software directly to KDE.</p> <p>Have fun with 4.10.4!</p> <p><em>Published: 2013-06-06T05:47:41+00:00</em></p> <h2 id="kde-platform-workspaces-and-applications-4-10-3-for-opensuse">KDE Platform, Workspaces and Applications 4.10.3 for openSUSE</h2> <p><a href="">KDE released 4.10.3 versions of the Platform, Workspaces and Applications</a> yesterday, with more than 70 bugs being fixed.
Notably:</p> <ul> <li>Several fixes in handling encrypted mails in KMail</li> <li>Fixes for KDEPIM syncing and ownCloud</li> <li>A number of improvements in Dolphin, including crash fixes</li> <li>Optimizations in the Plasma Workspaces</li> </ul> <p><a href=" Number&list_id=638034">The full list</a> has other important changes.</p> <p>As usual, there are two different repositories from which you can get them: KDE:Release:410 and KDE:Distro:Factory.</p> <p>In case you upgrade now, you should be aware of an issue with KDM that makes it not start: thankfully <a href="">there’s a workaround available</a>, and updated packages are already being built by the OBS, so it will be solved soon.</p> <p>Report bugs in packaging to Novell’s Bugzilla, and bugs in the software directly to KDE.</p> <p>Have fun with 4.10.3!</p> <p><em>Published: 2013-05-07T18:46:56+00:00</em></p> <h2 id="kde-platform-workspaces-and-applications-4-10-2-packages-available-for-opensuse">KDE Platform, Workspaces and Applications 4.10.2 packages available for openSUSE</h2> <p><a href="">KDE has released its monthly update for the 4.10 release</a>, and after a brief wait while the <a href="">Open Build Service</a> worked over the released tarballs, the openSUSE KDE team is pleased to announce the availability of the 4.10.2 release packages for openSUSE 12.2 and 12.3.</p> <p><a href=""><img src="" alt="KDE Plasma Workspaces 4.10.2 and Dolphin" /></a></p> <p>Despite being a minor release, more than 100 bugs were fixed; in particular, there were many KDEPIM fixes touching both the low-level stack and KMail/KAddressbook/Kontact.
Some highlights on the fixed issues:</p> <ul> <li> <p>Issues creating IMAP folders with KDEPIM (KDE bugs <a href="">291143</a>, <a href="">292418</a>, <a href="">305987</a>)</p> </li> <li> <p>Issues with encrypted mails in KDEPIM (KDE bugs <a href="">301088</a>, <a href="">313478</a>)</p> </li> <li> <p>KMail not creating required folders at startup (KDE bug <a href="">303117</a>)</p> </li> <li> <p>Crashes when using the semantic desktop framework (KDE bug <a href="">313478</a>)</p> </li> <li> <p>Improvements to CalDAV support in KDEPIM</p> </li> </ul> <p>And this is just a small part of <a href="">the complete list</a>.</p> <p>As usual, packages live in the KDE:Release:410 (<a href="">openSUSE 12.3</a>, <a href="">openSUSE 12.2</a>) repository. You can add the repositories <a href="">through zypper or YaST</a>.</p> <p>The <a href="">KDE:Distro:Factory</a> repository has also been updated. If you want to contribute and help KDE packaging in openSUSE, use the KDE:Distro:Factory version, otherwise stick to the KDE:Release:410.</p> <p>The package manager may complain about needing a downgrade of the <em>branding</em> packages: it is harmless, as some packages were split and as such they report a lower version number.
Just accept the downgrade in the branding packages and all will be well.</p> <p>Report bugs in packaging to <a href="">Novell’s Bugzilla</a>, and bugs in the software directly <a href="">to KDE</a>.</p> <p>Have fun with 4.10.2!</p> <p><em>Published: 2013-04-06T16:04:13+00:00</em></p> <h2 id="kde-platform-workspaces-and-applications-4-10-available-for-opensuse">KDE Platform, Workspaces and Applications 4.10 available for openSUSE</h2> <p>Hot on the heels of <a href="">the announcement from KDE</a>, the openSUSE KDE team is happy to announce the availability of packages for the latest stable release of the KDE Platform, Workspaces, and Applications.</p> <p>Packages are available in the <a href="">KDE:Distro:Factory repository</a> (which is where the packages to land in 12.3 are tested) for openSUSE Factory (soon to be 12.3) and openSUSE 12.2, and soon (when the Open Build System finishes rebuilding a number of packages) in the <a href="">KDE:Release:410</a> repository for openSUSE 12.2 users.</p> <p>If you want to contribute and help KDE packaging in openSUSE, use the KDE:Distro:Factory version, otherwise stick to the KDE:Release:410 repository.</p> <p>Enjoy!</p> <p><em>Published: 2013-02-06T06:47:14+00:00</em></p> <h2 id="kde-platform-workspaces-applications-4-10-rc3-opensuse-packages-available">KDE Platform, Workspaces, Applications 4.10 RC3: openSUSE packages available</h2> <p>Following up on <a href="">the announcement from KDE</a>, the openSUSE KDE team is happy to announce the availability of 4.10 RC3 packages. Remember that they are packages meant for <strong>testing and reporting bugs</strong>, so that the next release will be as polished as possible.</p> <p>You will find the packages in the <a href="">KDE:Distro:Factory repository</a>. An updated live medium based on the upcoming openSUSE 12.3 (<a href="">see previous post</a>) <a href="">is also available</a> (files named KDE4-4.10.RC3).
The openSUSE 12.2 based version is also available (files named KDE Reloaded) at the same address.</p> <p>Enjoy!</p> 2013-01-19T21:03:52+00:00/2013/01/test-the-upcoming-opensuse-12-3-and-kde-workspace-applications-and-platform-4-10-rc2Test the upcoming openSUSE 12.3 and KDE Workspace, Applications and Platform 4.10 RC2Following up on my previous post, a different type of image has been made by the openSUSE KDE community members. In particular, alin has created images sporting the same software from KDE (4.10 RC2) but using the upcoming openSUSE 12.3 as base. Download links: 32 bit version 64 bit version R<p>Following up on my previous post, a different type of image has been made by the openSUSE KDE community members. In particular, alin has created images sporting the same software from KDE (4.10 RC2) but using the upcoming openSUSE 12.3 as base.</p> <p>Download links:</p> <ul> <li><a href="">32 bit version</a></li> <li><a href="">64 bit version</a></li> <li><a href="">Release directory</a> (in case the above links go 404; the files are named KDE4-.4.10.RC2-Live)</li> </ul> <p>These images are provided not only to test 4.10 in openSUSE, but also to test part of the distribution itself, without touching existing systems. Should you encounter a bug, please report it as follows:</p> <ul> <li>Bugs in KDE software will need to be reported to <a href="">bugs.kde.org</a></li> <li>Bugs in openSUSE will need to be reported to <a href="">bugzilla.novell.com</a></li> </ul> <p>Don’t forget that those images are not persistent, i.e. 
the settings will not be saved between sessions.</p> <p>With that said, let the testing commence (or continue)!</p> 2013-01-13T13:38:45+00:00/2013/01/kde-workspaces-and-applications-4-10-on-live-images-courtesy-of-opensuseKDE Workspaces and Applications 4.10 on live images courtesy of openSUSEThe 4.10 release for the KDE Development Platform, Workspaces and Applications is drawing nigh… as you may have read, there is now an additional release candidate in order to test some last-minute changes. Of course, the KDE developers can only do so much: it’s impossible to test all possible combin<p>The 4.10 release for the KDE Development Platform, Workspaces and Applications is drawing nigh… as you may have read, there is now an additional release candidate in order to test some last-minute changes.</p> <p>…</p> <p>…</p> <p>Some screenshots for the impatient:</p> <p><a href=""><img src="" alt="Desktop" /></a> <a href=""><img src="" alt="Dolphin" /></a> <a href=""><img src="" alt="Gwenview" /></a> <a href=""><img src="" alt="Amarok" /></a></p> <p>And of course, the download links (EDIT: now fixed for good!):</p> <ul> <li> <p><a href="">x86 (32-bit) image</a></p> </li> <li> <p><a href="">x86_64 (64-bit) image</a></p> </li> </ul> <p>It’s more than 650 MB, so it won’t fit on a CD, but it will on a USB stick. <a href="">Follow these instructions</a> to install them to USB media. You can also burn these images to DVD.</p> <p>If you decide to use it, don’t forget to test (<a href="">see here what’s needed</a>) and submit detailed bug reports to the developers.</p> <p>Let’s make 4.10 rock solid!</p> 2013-01-10T23:24:10+00:00/2012/12/testing-kscreen-packages-available-for-opensuseTesting KScreen packages available for openSUSEYesterday <p>Yesterday <a href="">Alex Fiestas showed on his blog a video of a recent development version of the KScreen library</a>, created to handle multi-monitor setups in KDE easily, almost in an “automagic” way. 
As this is a project where configurations and setups are <em>highly</em> heterogeneous, a lot of testing is required to ensure things work reliably.</p> <p>Of course, you cannot ask a developer to have all sorts of screen combinations, but remember one of the strengths of FOSS: “many eyes make bugs shallow”. And that’s why the KDE team prepared <strong>testing</strong> packages for KScreen for openSUSE users.</p> <p>Before you jump to the repository, bear in mind that these packages are for <strong>testing</strong> and <strong>bug-reporting purposes</strong>. They can potentially cause unwanted effects, connect your displays to some random alien homeland, make your house blow up, and so on.</p> <p>If you are still daring, you can find them in the <a href="">KDE:Unstable:Playground repository</a>. Install both the <code>libkscreen</code> and <code>kscreen</code> packages, and you’ll see a new entry in System Settings when you go to the monitor configuration control panel.</p> <p>Make sure you report all bugs (along with detailed information on monitor setups, etc.) to <a href="">bugs.kde.org</a>.</p> <p>Let the testing commence!</p> 2012-12-27T09:29:17+00:00/2012/12/systemd-and-kde-workspaces-in-opensuse-12-3Systemd and KDE Workspaces in openSUSE 12.3openS <p>openSUSE is migrating to the use of <a href="">systemd</a>.</p> <p><a href="">As ConsoleKit is deprecated</a>, ….</p> <p>… <a href="">patches to support systemd in Fedora</a> ….</p> <p><a href="">Other</a> <a href="">patches</a> were directly pushed upstream by Red Hat engineers, and include a better interaction between the workspaces’ power management infrastructure and systemd itself.</p> <p>In short, the next version of openSUSE (12.3) should be fully capable of handling systemd. 
Of course, to ensure it’s as bug-free as possible it requires testing, so <a href="">why don’t you jump into the fray</a> and share your experience with us?</p> 2012-12-15T14:01:31+00:00/2012/12/4-10-beta-2-packages-available-for-opensuse4.10 Beta 2 packages available for openSUSEThe KDE community has just released Beta 2 of the upcoming 4.10 release of the Development Platform, Workspaces, and Applications. Of course, distributions are providing binary packages for the adventurous… and how could the green distro be left out? In fact, it is not. Beta 2 packages were uploaded<p>The KDE community has just released <a href="">Beta 2 of the upcoming 4.10 release</a> of the Development Platform, Workspaces, and Applications. Of course, distributions are providing binary packages for the adventurous… and how could the green distro be left out?</p> <p>In fact, it is not. Beta 2 packages were uploaded and built in the <a href="">KDE:Distro:Factory repository</a>. Updated packages have also been submitted to the development version of openSUSE (Factory), as the ultimate goal is having 4.10 in openSUSE 12.3.</p> <p>Before attempting to install them, be aware that:</p> <ul> <li> <p>This release is mostly aimed at contributors and testers to ensure that the final version is as polished as possible</p> </li> <li> <p>You should expect <strong>bugs</strong> of all forms and kinds</p> </li> <li> <p>It is not officially supported by openSUSE</p> </li> </ul> <p>If you understand all of the above, add KDE:Distro:Factory from YaST or zypper (see the link above; in the case of zypper, use <code>zypper ar -f &lt;linktorepo&gt; &lt;reponame&gt;</code>), then trigger an upgrade using your method of choice (in the case of zypper, <code>zypper dup --from &lt;reponame&gt;</code>; don’t forget the <code>--from</code>!).</p> <p>In case you find something that is not working and you are not sure, try posting your impressions in <a href="">this special area at the KDE Community Forums</a>, and afterwards <a href=""> 
file a bug on bugs.kde.org</a> (as detailed as possible). Feel free to report packaging errors (not bugs in the software) on the <a href="">opensuse-kde mailing list</a> or on IRC (#opensuse-kde on Freenode).</p> <p>Happy testing!</p> 2012-12-04T19:17:48+00:00/2012/12/making-kde-applications-python-3-friendlyMaking KDE applications Python 3 friendlyWhen Pyt<p>When I’m not on <a href="">forum duty</a> or handling <a href="">openSUSE</a>, ….</p> <p>As you may know, Python 3 isn’t the standard in many distributions (Arch Linux excluded), but despite the slow start, it is slowly gaining steam. PyKDE4 was built with Python 3 support in mind, so theoretically we supported it right from the start.</p> <p>… (<em>mu</em>). Since 4.10 was bumping the minimum required CMake version to 2.8.8, I moved in and <a href="">rewrote the macros making use of the new functionality offered by upstream CMake</a> (with a big help from Rolf Eike Beer).</p> <p>As I worked more on PyKDE4, I hit a second snag: i18n() calls were choking on Unicode. As QString is Unicode-aware, that was really difficult to debug. The root cause was actually <a href="">a SNAFU of my own caused by swapping lines</a>, and will be fixed in 4.9.4 and 4.10.</p> <p>Lastly, I got word that <a href="">Kate’s Pate</a> was <a href="">not building on Python 3</a>. This caused a huge back-and-forth of emails and IRC conversations between me and Pate’s developer, Shaheed Haque. I initially fixed building with <a href="">a very crude patch</a>, …!</p> <p><a href=""><img src="" alt="" /></a></p> <p>In short, 4.10 should be a good release for Python 3 users. I have some ideas for 4.11 as well: time will tell if I get around to doing them.</p> 2012-12-02T18:00:57+00:00/2012/09/another-story-of-a-patch-or-of-bugs-investigation-and-fixingAnother story of a patch, or of bugs, investigation, and fixing<p>As other, bigger members of the KDE community say, <a href="">“nobody will do it for you, and therefore they will”</a>. 
The patch from the title comes from such a story.</p> <p>Let’s give some background first: I’m really a heavy activity user, especially when working. My home PC has about five activities, my work one three, and I have managed to compartmentalize the various “topics” that each activity handles pretty well.</p> <p>…</p> <p>Like many KDE users before me, I found myself in front of a <strong>bug</strong>.</p> <h2 id="we-got-a-problem-what-to-do">We got a problem… what to do?</h2> <p>… <a href="">to use the bug tracker</a>. However, my initial report was very sloppy, as I didn’t know what the cause was. There was no way it could be useful for Ivan (the kactivities library’s developer).</p> <h2 id="hunting-the-beast">Hunting the beast</h2> <p>…</p> <p>I chose the latter.</p> <p>…</p> <p>But it was working before, wasn’t it? Exactly <em>when</em>, though? At that moment, I remembered that git offered a tool that would help me: <a href="">git bisect</a>. About thirty minutes later I found the commit causing the regression I was seeing. I then added all the information to the bug report. Not content, I also asked a few other people to reproduce it.</p> <h2 id="the-solution">The solution</h2> <p>One of these was Aaron himself, who confirmed the issue and then brought it up to Ivan’s attention. With the information he had, he was able to find the cause and <a href="">commit a possible fix</a> to KDE’s git repository.</p> <p>…</p> <p>…</p> 2012-09-29T21:14:07+00:00/2012/09/story-of-a-patch-or-united-we-standStory of a patch, or: united we standRecently Fedora’s Lukas Tinkl pushed to kdelibs (for the 4.10 release) a patch that enabled Solid to talk to udisks2, which is a replacement for udisks. 
Fedora already moved to udisks2 (and killed HAL) and future GNOME releases will only use udisks2, so the need for a working backend was a necessit<p>Recently Fedora’s Lukas Tinkl pushed to kdelibs (for the 4.10 release) a patch that enabled Solid to talk to <a href="">udisks2</a>, which is a replacement for udisks. Fedora already moved to udisks2 (and killed HAL) and future GNOME releases will only use udisks2, so a working backend was a necessity. At the same time they acted like good open-source citizens and pushed the code both to 4.10 and to the KDE Frameworks branch of kdelibs.</p> <p>Unfortunately, there was a snag: the code was there, but not getting compiled. But no one noticed, as Fedora was patching the CMakeLists.txt during their build process (it wasn’t a mistake: they were removing things they didn’t need from upstream KDE, such as Solid’s HAL backend). At the same time, Alin on the openSUSE Factory mailing list noticed that <a href="">with udisks2 he didn’t get devices offered by the Device Notifier widget</a>.</p> <p>So that’s where Raymond, an openSUSE member and contributor, and I went to investigate the issue. Indeed, there wasn’t any reference to the UDisks2 backend in the build system for Solid. That’s where Raymond took off and started to adapt it. At some point, we were stuck, so I pinged the helpful Rex Dieter from Fedora and he directed me and Raymond to #fedora-kde. There a quick discussion with Lukas Tinkl himself and Kevin Kofler helped iron out the final things. In the end, Raymond produced and pushed to <a href="">KDE’s repository a patch to enable the building of the backend</a>.</p> <p>What’s the lesson learned from this? That despite distro wars, differences, and even heated debates, collaboration is still a key aspect of FOSS. 
And in the end the talks between upstream (myself, although I’m not a professional coder by any means), openSUSE (Alin, Raymond) and Fedora (Lukas, Rex, and Kevin) ended up with an improvement that benefited KDE and, by association, everyone using it.</p> <p>In short: FOSS rocks for this, too.</p> <p><em>Note for Planet openSUSE readers:</em> This is my first post for Planet openSUSE, so hello to everyone. Perhaps a proper introduction will come later…</p> 2012-09-22T20:26:44+00:00/2012/05/of-brainstorm-ideas-and-seeking-helpOf Brainstorm, ideas and seeking helpMany <p>Many of you know that <a href="">KDE Brainstorm</a> ….</p> <p>Recently, a few of them got too busy and thus we’re experiencing a backlog of ideas staying in the Vault (the staging area for evaluation) for longer times than usual. The existing staff is already quite busy, so despite the efforts some deficiencies still remain.</p> <p>…</p> <p>…</p> <p>Speaking of Brainstorm, Hans and I have decided to categorize the current ideas, to see which ones are most sought after. Counting votes and whatnot has issues due to size bias, so in the end <a href="">we settled for reddit’s algorithm</a>. Ben Cooksley was kind enough to point me to the queries I needed to run on the database to grab the information. 
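</p> <p>For reference, reddit’s ranking function is public, so a pure-Python sketch of the “hot” score we modelled the categorization on looks roughly like this (the constants come from reddit’s published source; the actual queries run against the forum database differed):</p>

```python
# Sketch of reddit's "hot" ranking: net votes are log-damped, and newer
# submissions get a steadily growing age bonus (one vote-decade per 45000 s).
from datetime import datetime, timezone
from math import log10

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
REDDIT_EPOCH = 1134028003  # 2005-12-08 07:46:43 UTC, from reddit's code


def hot(ups, downs, date):
    """Score an idea from its up/down votes and its submission time."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (date - EPOCH).total_seconds() - REDDIT_EPOCH
    return round(sign * order + seconds / 45000, 7)
```

<p>Ranking is then just a matter of sorting by this score in descending order, which sidesteps the size bias of raw vote counts.</p> <p>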
Without much ado, here’s the top ten list (after weeding out invalid ideas, duplicates, etc.):</p> <ol> <li><a href="">Plasmoid Calendar: Calendar events view and edit</a></li> <li><a href="">Improve consistency in System Settings</a></li> <li><a href="">KDE Theme Editor</a></li> <li><a href="">KMail notification popup should be more informational</a></li> <li><a href="">Improve KDE Help Center</a></li> <li><a href="">Jabber video support</a></li> <li><a href="">External subtitles support in KMPlayer or Dragon Player</a></li> <li><a href="">File Emblems</a></li> <li><a href="">Synchronizations of settings between two KDE4 installation</a></li> <li><a href="">Save/Restore desktop settings</a></li> </ol> 2012-05-27T16:01:28+00:00/2012/03/want-to-make-kde-brainstorm-more-usefulWant to make KDE Brainstorm more useful?The <p><a href="">The recent post by Dario on the KDE Workspace Vision</a> raised some concerns on why Brainstorm was not used. <a href="">One commenter even said</a>: <em>“Right now it feels like ‘Throw an idea over a wall for no-one but end users to discuss until it bitrots’.”</em></p> <p>The Brainstorm section is indeed in need of help. To make it more useful, a couple of things are needed:</p> <ul> <li>Statistics to evaluate which ideas are most representative: it can’t be just the number of votes per se, as there are things like confirmation bias or controversies that may inflate the numbers</li> <li>Integration with <a href="">Bugzilla</a>: a way to automatically (using XML-RPC) send the ideas flagged as representative to a bug report filed under “wishlist”.</li> </ul> 2012-03-31T07:11:34+00:00/2012/02/some-more-nepomuk-pleaseSome more Nepomuk, pleaseRecently we’ve seen several blog posts on Planet KDE related to Nepomuk. Reading those I thought that I could add some (little) semantic features to Danbooru Client. Danbooru Client already makes use of Nepomuk: if enabled, tags extracted from Danbooru items are added as Nepomuk tags. 
But since at l<p>Recently we’ve seen <a href="">several</a> <a href="">blog posts</a> on <a href="">Planet KDE</a> related to Nepomuk. Reading those I thought that I could add some (little) semantic features to <a href="">Danbooru Client</a>.</p> <p>Danbooru Client already makes use of Nepomuk: if enabled, tags extracted from Danbooru items are added as Nepomuk tags. But since at least some Danbooru boards are specialized in certain types of images (e.g., wallpapers only), I found it would be nice to have Nepomuk show me only the images that come from a specific Danbooru board.</p> <p>After a quick talk on IRC, the task proved to be easier than expected. What I did was to create a new Nepomuk.Resource using the image board URL. Then I set its type to <code>NFO::Website</code> and added it as a related property to the resource pointing to the file. In code, this translates to (excerpt from the main file):</p> <div class="highlight"><pre><code class="language-python">resource = Nepomuk.File(KUrl(absolute_path))

for tag in tags:
    if blacklist is not None and tag in blacklist:
        continue
    nepomuk_tag = Nepomuk.Tag(tag)
    nepomuk_tag.setLabel(tag)
    resource.addTag(nepomuk_tag)

if board_url is not None:
    website_resource = Nepomuk.Resource(board_url)
    website_resource.addType(Nepomuk.Vocabulary.NFO.Website())
    website_resource.setLabel(board_url.prettyUrl())
    resource.setDescription(
        i18n("Retrieved from %1").arg(board_url.prettyUrl()))
    resource.addIsRelated(website_resource)</code></pre></div> <p>Lo and behold, this is what happens after downloading one such image from Danbooru Client, in Dolphin (notice the “is related to”):</p> 
<p><img src="" alt="" /></p> <p>Clicking on the link will open in Dolphin all items related to the board in question. Neat, isn’t it? Of course, I’m very willing to add other features like that if there’s interest. Also, critiques of the approach are most welcome!</p> 2012-02-12T19:57:06+00:00/2011/10/screensavers-and-the-kde-workspaces-your-opinion-is-neededScreensavers and the KDE Workspaces - your opinion is neededRecently lock<p>…</p> <p>There is, however, a trade-off: such an implementation would mean that screensavers that rely on X (also called <em>X screensavers</em>) ….</p> <p>That said, this looks like a large change because existing functionality will change or be removed. Hence, to quote KWin’s maintainer Martin Graesslin,</p> <blockquote>…</blockquote> <p>And how to gather your opinion? Through a poll: <a href="">one has been opened in the KDE Community Forums</a> by Martin himself to gather opinions on this upcoming change. Please jump in and let the developers know what you think!</p> 2011-10-02T11:26:37+00:00/2011/06/pykde4-queries-with-nepomukPyKDE4: Queries with Nepomuk<p>In one of my previous blog posts I dealt with <a href="">tagging files and resources with Nepomuk</a>. But Nepomuk is not only about storing metadata, it is also about <em>retrieving</em> and <em>interrogating</em> data. Normally, this would mean querying the metadata database directly, using queries written in SPARQL. But this is not intuitive, can be inefficient (if you do things the wrong way) and error-prone (oops, I messed up a parameter!).</p> <p>Fortunately, the Nepomuk developers have come up with a high-level API to query already stored metadata, and today’s post will deal with querying tags in Nepomuk. 
As per the past tutorials, the full source code is available <a href="">in the kdeexamples module</a>.</p> <p>Let’s start off with the basic imports (the later snippets also use QtCore, kdecore, and KIO, so they are included here):</p> <div class="highlight"><pre><code class="language-python">from PyQt4 import QtCore

from PyKDE4 import kdecore
from PyKDE4.kio import KIO
from PyKDE4.nepomuk import Nepomuk
from PyKDE4.soprano import Soprano</code></pre></div> <p>Then let’s create a simple class that will be used for the rest of this exercise:</p> <div class="highlight"><pre><code class="language-python">class NepomukTagQueryExample(QtCore.QObject):

    def __init__(self, parent=None):
        super(NepomukTagQueryExample, self).__init__(parent)</code></pre></div> <p><code>__init__</code> is just used to construct the instance, nothing more. The bulk of the work is in the query_tag() function, which we’ll take a look at in parts.</p> <div class="highlight"><pre><code class="language-python">def query_tag(self, tag):
    """Query for a specific tag."""
    tag = Nepomuk.Tag(tag)</code></pre></div> <p>…?</p> <p>For our job, we need to use <em>properties</em>, which define the terms of our query. 
As we’re looking for tags, we’ll use Soprano.Vocabulary.NAO.hasTag():</p> <div class="highlight"><pre><code class="language-python">soprano_term_uri = Soprano.Vocabulary.NAO.hasTag()
nepomuk_property = Nepomuk.Types.Property(soprano_term_uri)</code></pre></div> <p>… <a href="">listed in the Soprano API docs</a>.</p> <p>…:</p> <div class="highlight"><pre><code class="language-python">comparison_term = Nepomuk.Query.ComparisonTerm(
    nepomuk_property, Nepomuk.Query.ResourceTerm(tag))</code></pre></div> <p>…():</p> <div class="highlight"><pre><code class="language-python">query = Nepomuk.Query.FileQuery(comparison_term)</code></pre></div> <p>Lastly, we want to get some <em>results</em> out of this query. 
There are different methods, but for this tutorial we’ll use the tried-and-tested KIO technology:</p> <div class="highlight"><pre><code class="language-python">search_url = query.toSearchUrl()
search_job = KIO.listDir(kdecore.KUrl(search_url))
search_job.entries.connect(self.search_slot)
search_job.result.connect(search_job.entries.disconnect)</code></pre></div> <p>First we convert the query to a nepomuksearch:// URL, which we then pass to KIO.listDir to list the entries. Unlike <a href="">my previous post on KIO</a>, this job emits entries() every time one is found, so we connect the signal to our search_slot method. 
We also connect the job’s result() signal so that it will disconnect the job once it’s over.</p> <p>Finally, let’s take a look at the search_slot function:</p> <div class="highlight"><pre><code class="language-python">def search_slot(self, job, data):
    # We may get invalid entries, so skip those
    if not data:
        return
    for item in data:
        print item.stringValue(KIO.UDSEntry.UDS_DISPLAY_NAME)</code></pre></div> <p>Entries are emitted as <a href="">UDSEntries</a>: to get something at least understandable, we turn them into the file name, which is obtained by the stringValue() call using KIO.UDSEntry.UDS_DISPLAY_NAME.</p> <p>That’s it. As you can see, it was pretty easy. Of course there’s more than that. For further reading, take a look at <a href="">Nepomuk’s Query API docs</a> and <a href="">Query Examples</a>. Bear in mind, however, that to the best of my knowledge the “fancy operators” mentioned there will not work with Python.</p> <p>Happy Nepomuk querying!</p> 2011-06-29T19:27:42+00:00/2011/06/access-multiple-google-calendars-from-korganizerAccess multiple Google Calendars from KOrganizerRecently, a question came up on the KDE Community Forums regarding the use of multiple Google Calendars with KOrganizer. 
The preferred access up to now has been with googledata Akonadi resource, however that doesn’t support more than one calendar, and (at least from my unscientific observation) seems<p>Recently, a question came up on the KDE Community Forums <a href="">regarding the use of multiple Google Calendars with KOrganizer</a>. The preferred access up to now has been with the googledata Akonadi resource; however, that doesn’t support more than one calendar, and (at least from my unscientific observation) seems to be rather unmaintained these days.</p> <p>Luckily, not all’s lost. Akonadi recently gained the ability to access CalDAV resources, and Google Calendar also offers a CalDAV interface, hence this is possible.</p> <p>This post will briefly describe how (thanks go to PIMster krop, who casually mentioned the possibility on IRC and prompted me to investigate).</p> <!-- more --> <p><strong>Notice</strong>: I am running trunk (4.7) so I have no idea if the steps posted below are possible in 4.6. Also, this worked for <em>me</em> with my particular setup. YMMV.</p> <p>First of all, you need to obtain the <em>calendar IDs</em> you want to use. This is done in the web version of Google Calendar, in the settings page of your specific calendar, near the private links: it’s a string of alphanumeric characters followed by <em>@gmail.com</em>. Copy it in full (even the address part) as you will need it later, and do it for every calendar you want to use.</p> <p>Next, open KOrganizer, locate the list of the calendars, right-click on an empty spot and select <em>Add Calendar</em>:</p> <p><img src="" alt="" /></p> <p>In the next screen, select “DAV Groupware resource”, then a wizard will come up. Fill in username and password (apologies for the language! 
I haven’t found a quick way to switch these dialogs to English) and click on Next:</p> <p><img src="" alt="" /></p> <p>In the following screen, choose <em>Configure the resource manually</em>:</p> <p><img src="" alt="" /></p> <p>Click on <em>Finish</em>, but you’re not finished yet. In fact, we will have to add more stuff here. In the new window, select the display name (here shown as <em>Nome visualizzato</em>) of the calendar, then click on Add (which is translated as <em>Aggiungi</em> in this screen):</p> <p><img src="" alt="" /></p> <p>In the next screen we’ll have to add what’s needed for our calendar to work. In <em>Remote URL</em> put <em></em> (https, <strong>not</strong> http), then put (again) your Google account credentials in the relevant places. Then click on “Download” (<em>Scarica</em> here) and you will see (after a while) your calendar being loaded in the “Found collections” pane, with the name you set in Google Calendar. Click OK to save the configuration.</p> <p><img src="" alt="" /></p> <p>This will bring you back to the previous window. For more calendars, repeat the steps (click on Add, insert URL, Download, OK) for all the calendars you want to display.</p> <p>That’s it. If you encounter trouble, have a look at ~/.xsession-errors to see whether Akonadi managed to connect and download your existing items correctly. And don’t forget to <a href="">file bugs!</a></p> 2011-06-11T10:06:29+00:00/2011/04/taking-video-snapshots-quickly-kde-vlc-snapperTaking video snapshots quickly: KDE VLC SnapperSome<p>Some of the oldest readers of this blog are well aware of <a href="">a certain hobby of mine</a>. 
Over the years I’ve always wanted to write more about that, including the stuff I’m viewing nowadays, but I found it a hassle to collect snapshots from videos / DVDs, selecting them, and so on.</p> <p>Recently I learnt that VLC has <a href="">some rather complete Python bindings</a>, and I thought, <em>why not make the process automated?</em> Yesterday I had some free time on my hands and a quick session of hacking brought some results already.</p> <p>As the stuff is somewhat past the prototype stage, I thought I would push it somewhere for others to use. Lo and behold, here I present you <em>KDE VLC Snapper</em>.</p> <p><img src="" alt="" /></p> <p>As you can see, it’s a minimal dialog: just select your source video file (any file supported by VLC will do), the number of screencaps, the destination directory, and the program will do the rest. Currently it works <em>somewhat</em> OK (see caveats below) and is good enough for my use cases.</p> <h2 id="how-do-i-get-it">How do I get it?</h2> <p>Just clone this repository:</p> <div class="highlight"><pre><code class="language-bash">git clone</code></pre></div> <p>followed by</p> <div class="highlight"><pre><code class="language-bash">sudo python setup.py install</code></pre></div> <p>You can then invoke the program with</p> <div class="highlight"><pre><code class="language-bash">kdevlcsnapper</code></pre></div> <p><strong>Requirements</strong> include PyKDE4 (tested on KDE Dev Platform 4.6), numpy (just for its “linspace” function, alternatives are welcome) and VLC installed (you don’t need the bindings, however: I provide a local copy).</p> <p>What about <strong>bugs</strong>?
Well, currently there are two issues that I’m unsure how to fix: the first is a crash on exit, the second is that certain media files make VLC crash in the background when called from the bindings.</p> <p>In any case, if you try it out, let me know what you think in the comments!</p> 2011-04-10T12:40:26+00:00/2011/01/improvements-to-the-git-hooksImprovements to the Git hooks<p>As you may already know, recently the KDE sysadmins completely overhauled the commit hooks used with the Git infrastructure. Written in Python, they have already brought significant improvements to the current workflows. These hooks include keywords that, when specified, trigger particular actions: the most used are to CC specific email addresses (CCMAIL), to CC bug reports (CCBUG) or to close bug reports (BUG).</p> <p>With the adoption of <a href="">Review Board</a> to facilitate code reviews, there were also requests for a REVIEW keyword that could close the review requests without asking the submitters to do so manually (which is slow and not always effective). Since the hooks for Git were written in Python, I thought I could give a hand there.</p> <p>I looked into the Review Board API, which is a simple REST API: tasks are performed with HTTP GET, POST, or PUT. As I didn’t want to dive too much into the technicalities, I decided to use a wrapper that would make things easier: <a href="">python-rest-client</a>. Once that was in place, it was just a matter of adding some sugar to handle replies, errors and logging. All in 78 lines of code.</p> <p>Now that the “field tests” passed with flying colors, I’m happy to announce that such a hook exists and is operational for KDE’s Git infrastructure.
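</p> <p>The mechanics are easy to sketch. The snippet below is purely illustrative (the helper names, the placeholder server URL and the exact payload are mine, not the real hook’s): it extracts the review request IDs from a commit message and prepares the PUT request that, through Review Board’s REST API, marks a request as submitted.</p>

```python
import re

# The keyword convention: "REVIEW: <request id>" at the start of a line,
# mirroring the existing BUG/CCBUG/CCMAIL keywords.
REVIEW_RE = re.compile(r"^REVIEW:\s*(\d+)", re.MULTILINE)


def extract_review_ids(commit_message):
    """Return all review request IDs mentioned in a commit message."""
    return [int(number) for number in REVIEW_RE.findall(commit_message)]


def build_close_request(review_id, server="https://reviewboard.example.org"):
    """Build the URL and payload that close a review request.

    Review Board's REST API closes a request with a PUT on the
    review-request resource, setting its status to "submitted".
    The server URL here is a placeholder.
    """
    url = "%s/api/review-requests/%d/" % (server, review_id)
    payload = {"status": "submitted"}
    return url, payload


message = "Fix the reply handling\n\nREVIEW: 100123\nBUG: 256000\n"
requests_to_send = [build_close_request(review_id)
                    for review_id in extract_review_ids(message)]
```

<p>The real hook naturally does more than this: it authenticates, publishes the comment with the commit’s SHA1 and the committer’s name, and logs failures.</p> <p>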
By using the REVIEW keyword at the start of a line, followed by a number, the hook will notify the Review Board instance and close the request. It will also publish a comment stating the commit’s SHA1 and the person who did it.</p> <p>You can take a look at the finished results <a href="">in this review request.</a></p> <p>Credits for this also go to Ben “bcooksley” Cooksley for helping with testing and fixes, and Eike “Sho” Hein for helpful suggestions.</p> 2011-01-26T11:59:04+00:00/2011/01/pykde4-retrieve-data-using-kioPyKDE4: Retrieve data using KIO<p>One of the nice properties of KIO jobs is that they don’t freeze your GUI when you are in the middle of a process.</p> <p>In this post I’ll show how to use KIO to retrieve files from network resources using PyKDE4. The whole example is also available <a href="">in the kdeexamples module</a>.</p> <p>First of all, we create a simple form with a text area and two buttons, one to download and one to clear (shown below):</p> <p><img src="" alt="Image of the example form" /></p> <p>Once this is done, we turn our attention to code. We start with the customary imports:</p> <div class="highlight"><pre><code class="language-python">#!/usr/bin/env python

from PyQt4 import QtCore
import PyQt4.QtGui as QtGui

from PyKDE4 import kdecore
from PyKDE4 import kdeui
from PyKDE4.kio import KIO</code></pre></div> <p>These will provide for everything we need.
Then we set up our widget:</p> <div class="highlight"><pre><code class="language-python">from ui_textbrowser import Ui_Form

class TextArea(QtGui.QWidget, Ui_Form):

    """Example class used to show how KIO works."""

    def __init__(self, parent=None):
        super(TextArea, self).__init__(parent)
        self.setupUi(self)
        self.downloadButton.clicked.connect(self.start_download)
        self.clearButton.clicked.connect(self.textWidget.clear)</code></pre></div> <p>Nothing strange in the initializer here. We simply make two connections, one to the clear() slot of the clear button, and the other to start the KIO process, that is the retrieval of the index from.
Let’s take a look at the start_download slot:</p> <div class="highlight"><pre><code class="language-python">def start_download(self):
    kdeui.KMessageBox.information(self.parent(),
                                  "Now data will be retrieved from "
                                  " using KIO")
    # KIO wants KUrls
    data_url = kdecore.KUrl("")
    retrieve_job = KIO.storedGet(data_url, KIO.NoReload,
                                 KIO.HideProgressInfo)
    retrieve_job.result.connect(self.handle_download)</code></pre></div> <p>For more information on the available jobs, see the <a href="">KIO namespace page (C++ version).</a></p> <p>As a last step, we connect the result signal (emitted when the job is complete) to a slot to handle the download.
This is what makes KIO useful, because it’s asynchronous, so you can perform long downloads without blocking the user interface of your program.</p> <p>Lastly, we see the “handle_download” slot:</p> <div class="highlight"><pre><code class="language-python">def handle_download(self, job):
    # Bail out in case of errors
    if job.error():
        return
    print "This slot has been called. The job has finished its operation."
    data = job.data()
    self.textWidget.setPlainText(QtCore.QString(data))</code></pre></div> <p>What if something goes wrong? We can check whether job.error() returns True: in that case we can perform recovery, or simply tell our user that something went wrong. Especially with networked resources, this should always be present in your code.</p> <p>So that’s all for now.
As you can see, it was pretty simple, and also very effective.</p> 2011-01-01T21:09:26+00:00/2010/10/pykde4-tag-and-annotate-files-using-nepomukPyKDE4: Tag and annotate files using Nepomuk<p>In this post I’ll show how to tag and annotate arbitrary files using Nepomuk’s API in PyKDE4.</p> <p>Before starting, let me say that creating this tutorial was only possible thanks to the help of Sebastian Trueg, who helped me by pointing out some mistakes I was making.</p> <p>The example here does not show the extra methods to set up a KApplication, etc.: the full code for this tutorial <a href="">is available in the kdeexamples module</a>.</p> <p>Let’s start with the basics.</p> <div class="highlight"><pre><code class="language-python">import sys

from PyQt4 import QtCore
from PyKDE4 import kdecore
from PyKDE4 import kdeui
from PyKDE4.nepomuk import Nepomuk</code></pre></div> <p>This will import all the bits needed to test our experiment.
As a second step, we’ll create a dummy empty file.</p> <div class="highlight"><pre><code class="language-python">dummy_file = open("dummy.txt", "w")
dummy_file.write("Some text\n")
dummy_file.close()</code></pre></div> <p>Or, if we have Python 2.6+ (as pointed out in the comments):</p> <div class="highlight"><pre><code class="language-python">with open("dummy.txt", "w") as handle:
    handle.write("Some text\n")</code></pre></div> <p>Now that we have our file, it’s time to do something productive with it. But first and foremost, we have to ensure that Nepomuk is running.
To do so, we make a simple check (EDIT: fixed the syntax):</p> <div class="highlight"><pre><code class="language-python">result = Nepomuk.ResourceManager.instance().init()
if result != 0:
    return</code></pre></div> <p>Nepomuk.ResourceManager.instance().init() must return 0 if Nepomuk is properly set up. Once this is taken care of, we can manipulate the semantic information of our file. Thus, Nepomuk needs to be made aware of it: this is done by creating a <em>resource</em> that points to the actual file:</p> <div class="highlight"><pre><code class="language-python">file_info = QtCore.QFileInfo("dummy.txt")
absolute_path = file_info.absoluteFilePath()
resource = Nepomuk.Resource(kdecore.KUrl(absolute_path))</code></pre></div> <p>Notice that we <strong>must</strong> use an absolute file path, or the resource will not be created properly and although no errors will happen when tagging, changes will not be made.
Let’s now create a tag, which is done by simply constructing a Nepomuk.Tag instance:</p> <div class="highlight"><pre><code class="language-python">tag = Nepomuk.Tag("test_example")
tag.setLabel("test_example")</code></pre></div> <p>In the first line we create the tag, then we associate it with a label, so that it will be displayed in applications such as Dolphin. The nice thing is that if the Tag already exists, it will be recycled: no duplicates will occur. A simple call to addTag on the resource we created earlier will now tag it:</p> <div class="highlight"><pre><code class="language-python">resource.addTag(tag)</code></pre></div> <p>We can also add comments that can show up in Dolphin as well by using the setDescription method:</p> <div class="highlight"><pre><code class="language-python">resource.setDescription("This is an example comment.")</code></pre></div> <p>What if we want to remove tags and descriptions?
To wipe them all, we can use the remove() method of the Resource, otherwise we can strip elements by using removeProperty along with the tagUri() or descriptionUri() methods of the resource:</p> <div class="highlight"><pre><code class="language-python">resource.remove()  # strip everything
resource.removeProperty(resource.descriptionUri())  # remove comment
resource.removeProperty(resource.tagUri())  # remove tags</code></pre></div> <p>That’s it. As you can see, adding semantic information from PyKDE4 isn’t that hard. Sooner or later I’ll try my hand at queries and report back my findings.</p> 2010-10-26T19:37:16+00:00/2010/07/what-this-might-ever-beWhat this might ever be?<p><img src="" alt="" /></p> <p>The rest is up to you to figure out.</p> 2010-07-27T22:41:14+00:00/2010/07/ocs-and-kde-forums-work-continuesOCS and KDE Forums - work continues<p>With my last entry, I announced the start of the work for an OCS library for the KDE Community Forums.
Today I’d like to blog again about the recent developments.</p> <p>First of all, now there isn’t one, but <em>two</em> Python modules:</p> <ul> <li><em>ocslib</em>, a pure Python module that can be used to interface with OCS-based forum systems;</li> <li><em>ocslibkde</em>, a PyKDE4 based module that can be used to interface with OCS-based forum systems in KDE applications.</li> </ul> <p>Currently ocslib supports reading and posting, while ocslibkde supports only reading (as of now). Both can be retrieved from the <a href="">kde-forum-mods repository</a> under the <em>ocs-client</em> subdirectory. The Python lib needs unit-testing, then I’ll be able to push a tarball soon for people to test (but you can always check out the Git repository). With regards to the PyKDE4 library, I plan on making a proof-of-concept plasmoid soon that shows how to use the API.</p> <p>Speaking of the API, here are some examples using ocslib:</p> <div class="highlight"><pre><code class="language-python">>>> from ocslib import service
# Connect to OCS
>>> ocs_service = service.OCService("")
# Retrieve all forums
>>> forums = ocs_service.list_forums()
# Elements have attributes for name, posts, etc.
>>> print forums[0].name
"Test forum"
# Retrieve threads for forum 15
>>> threads = ocs_service.list_forum_threads(forum_id=15)
# Retrieve thread 8945 from forum 15
>>> messages = ocs_service.show_thread(forum_id=15, topic_id=8945)
>>> print messages[0].text
"Hello world!"
# Post to a forum - requires authentication
>>> ocs_service = service.OCService("", username="foo", password="bar")
>>> message = "Hello, KDE people!"
>>> subject = "Test message"
>>> ocs_service.post(forum_id=15, subject=subject, message=message)
True  # Return code of operation</code></pre></div> <p>Feedback (especially on the API) welcome!</p> 2010-07-25T18:49:22+00:00/2010/07/open-collaboration-services-and-kde-forumsOpen Collaboration Services and KDE Forums<p>For many developers, the browser-based workflows of reading, replying and interacting with posters are dramatically different from their usual tools. And that is why some developers find themselves uncomfortable with the <a href="">KDE Community Forums</a>.</p> <p>A dedicated application would usually be much better than a browser, because you can work around the intrinsic limitations of the browser itself. The problem is that you can’t really access a forum with anything else than a browser. That is, it <em>used to be</em> like this, but now things are changing.</p> <p>In the past months fellow administrator bcooksley has been working quite hard implementing the <a href="">Open Collaboration Services (OCS) specification</a> in the KDE Community Forums. For the uninformed, it’s the same API that powers <a href="">OpenDesktop.org</a> and related web pages. This means that you could access the forum contents programmatically using a REST API and parsing the XML that is returned by the service.</p> <p>Unfortunately, bcooksley had no time to implement a client that would make use of this newly-made service.</p> <p>That’s where I stepped in. This morning <a href="">I committed in the kde-forum-mods repository</a> the first implementation of a backend to access the forums’ OCS service.
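</p> <p>To give a feel for what wrapping the XML responses amounts to, here is a rough sketch. It is only an illustration: the meta section mimics the OCS reply layout, but the element names under the data node and the class fields are made up, and I use the standard library’s ElementTree where the real backend uses lxml.</p>

```python
from xml.etree import ElementTree

# A trimmed, OCS-style reply; the forum fields are illustrative.
SAMPLE_REPLY = """<?xml version="1.0"?>
<ocs>
  <meta>
    <status>ok</status>
    <statuscode>100</statuscode>
  </meta>
  <data>
    <forum><id>15</id><name>Test forum</name><topics>42</topics></forum>
  </data>
</ocs>"""


class Forum(object):

    """Decent data representation of a single forum element."""

    def __init__(self, element):
        self.id = int(element.findtext("id"))
        self.name = element.findtext("name")
        self.topics = int(element.findtext("topics"))


def parse_forums(xml_text):
    """Turn an OCS forum-list reply into a list of Forum objects."""
    root = ElementTree.fromstring(xml_text)
    # OCS replies carry their own status code in the meta section
    if root.findtext("meta/statuscode") != "100":
        raise RuntimeError("OCS request failed")
    return [Forum(element) for element in root.findall("data/forum")]


forums = parse_forums(SAMPLE_REPLY)
```

<p>The real module then layers the HTTP requests and authentication on top of wrappers like these.</p> <p>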
Currently it’s extremely basic - just a few classes to wrap the XML responses into decent data representation, and a basic class to perform reading requests: that means that technically it is possible to request forum listings, thread listings, and posts. I’m still working on the ability to reply and post messages.</p> <p>Being a Pythonista, the backend is written entirely in Python: currently it uses the standard library plus <a href=""><em>dateutil</em></a> and <a href=""><em>lxml</em></a> to do its bidding, but the next steps would be to turn it into a PyKDE4 library to access all the KDE related goodness (hello, KIO!). Bear in mind that currently there is no application using this: I merely completed (part of) the backend.</p> <p>If you’re interested, the code can be found on gitorious.org, in the <em>ocs-client</em> directory, branch <em>experimental</em>, inside the <a href="">kde-forum-mods repository</a>.</p> 2010-07-18T10:00:39+00:00/2010/06/whats-cooking-at-the-kde-community-forumsWhat's cooking at the KDE Community Forums<h2 id="help-me-post-a-topic">“Help me post a topic”</h2> <p>Upon logging in, you will be greeted by a new “New Post” button:</p> <p><img src="" alt="" /></p> <p>You can either click on the arrow to quickly post an idea for Brainstorm, a new discussion, access the “getting started” forum or contact the staff:</p> <p><img src="" alt="" /></p> <p>Or if you just click on the button itself, you access the guided post section:</p> <p><img src="" alt="" /></p> <p>The “Share an idea” and “Chat and discuss” buttons will bring you to the relevant forums (Brainstorm and Discussions and Opinions), while “Ask a question” will bring about an additional screen:</p> <p><img src="" alt="" /></p> <p>You’ll be able to select your favorite application and post directly in the relevant forum.</p> <h2 id="open-collaboration-services">Open Collaboration Services</h2> <p>But that’s not all.
Thanks to the hard work of Ben Cooksley (fellow admin and System Settings maintainer) there is also an implementation of the <a href="">Open Collaboration Services (OCS)</a>, the same system that powers the well-known Get Hot New Stuff connected to <a href="">OpenDesktop.org</a>.</p> 2010-06-05T11:56:51+00:00/2010/05/akademy-my-own-bof<p><a href=""><img src="" alt="I'm going to Akademy 2010 image" /></a></p> <p><strong>KDE and bioinformatics: the missing link</strong></p> <p>Although in the KDE community we have our fair share of scientists (hey there, Stuart!), my BoF will focus on the adoption of KDE in the field of <a href="">bioinformatics</a>, in particular <em>high-throughput technologies</em>: technologies which produce huge amounts of data from a very small number of experiments (“<a href="">ultramassive sequencing</a>” and <a href="">DNA microarrays</a> are examples of such a technology).</p> <p>Which brings us to the heart of the matter: how does KDE stand in all of this? Sadly, not too well. I’ve done some research in the published literature, but there’s just <strong>one</strong> hit returned that’s proper: <a href="">a KDE application for neuroscience</a>. Is that because KDE is unsuitable for the job? Definitely <strong>not</strong>.</p> <p>Promo efforts and better bindings are the keys to spreading KDE in the field of bioinformatics. This is what my BoF is about, plus an informal discussion on the use of FOSS in academia and related matters.</p> <p>Interested?
If you are, you can come to the BoF which will be on <strong>Tuesday, 6th July</strong> at <strong>15.00</strong> in Area 2 of the main room at Demola.</p> <p>I’ll also be around later till the following morning (sadly, two days is the best I can do to attend) in case you’re interested in a chat.</p> 2010-05-29T19:55:37+00:00/2010/03/pykde4-new-style-signals-and-slotsPyKDE4: new style signals and slots<p>Those who use PyQt and PyKDE4 are certainly familiar with the syntax used to connect signals and slots:</p> <div class="highlight"><pre><code class="language-python"># Old style
QObject.connect(self.pushbutton, QtCore.SIGNAL("clicked()"),
                self.button_pushed)

def button_pushed(self):
    print "Button clicked"</code></pre></div> <p>The main advantage of this syntax is that it’s very close to the C++ equivalent, and so you can translate easily from C++ to Python. Unfortunately the advantages of this syntax end here. The disadvantages, at least from a Python coding perspective, outweigh the advantages:</p> <ul> <li>It’s <em>extremely</em> error-prone: make a typo, and not only will your signal not be connected, but you won’t even get a warning; your program will simply do nothing;</li> <li>In case you have overloaded signals, you have to type the exact signature, going back to the first problem;</li> <li>It’s not Pythonic at all.</li> </ul> <p>So, in recent PyQt versions (and thus also in PyKDE4) a <em>new style</em> has been introduced:</p> <div class="highlight"><pre><code class="language-python"># New style
self.pushbutton.clicked.connect(self.button_pushed)

def button_pushed(self):
    print "Button clicked"</code></pre></div> <p>As you can see it’s much clearer, and much more Pythonic. Also, typos <strong>will</strong> trigger an AttributeError, which means you’ll be able to track where the problem is.</p> <p>What about overloaded signals?
Normally the first defined is the default, but you can use a dictionary-like syntax to access other overloads (signal names are completely made up here):</p> <div class="highlight"><pre><code class="language-python"># One signal is without arguments, the other has a bool

# Signal without arguments
self.my_widget.connected.connect(self.handle_errors)
# Signal with a bool
self.my_widget.connected[bool].connect(self.handle_errors)</code></pre></div> <p>Signals are emitted with the emit() function and disconnected with the disconnect() function:</p> <div class="highlight"><pre><code class="language-python"># Emit a signal
self.pushbutton.clicked.emit()
# Emit a signal with a value (an int)
self.my_widget.valueChanged.emit(42)
# Disconnect a signal
self.my_tabwidget.currentIndexChanged.disconnect()</code></pre></div> <p>To define new signals, you can use the <em>pyqtSignal</em> function, specifying which values the signal takes (if any): just define them as class constants (like in the example) and then you can access them like the wrapped ones:</p> <div class="highlight"><pre><code class="language-python">class MyWidget(QWidget):

    # Signal with no arguments
    operationPerformed = QtCore.pyqtSignal()
    # Signal that takes arguments
    valueChanged = QtCore.pyqtSignal(int)</code></pre></div> <p>I merely scratched the surface with this. For more information, check out <a href="">PyQt’s reference manual</a>, which also covers other cases.</p> 2010-03-06T08:04:48+00:00/2010/02/the-world-of-kio-metadata-checking-the-http-response-from-a-serverThe world of KIO metadata - checking the HTTP response from a server<p>Recently, I investigated how to perform some checks on web addresses using KIO for <a href="">Danbooru Client</a>. My old code was synchronous, so it blocked the application while checking, thus causing all sorts of trouble (UI freezing, etc.). Therefore, making the switch to KIO was the best solution.
However, I had one problem: <em>how could I check the HTTP response?</em></p> <p>I knew already that the various ioslaves can store metadata, consisting of key-value pairs which are specific to the slave used. Normally you can get the whole map by accessing the <a href=""><em>metaData</em></a> function of the job you have used, in the slot connected from the <em>result</em> signal. For some reason, however, in PyKDE4 calling metaData() triggers an assert in SIP, which ends in a crash (at least in my application; I still need to debug further). KIO jobs also have the <a href=""><em>queryMetaData</em></a> function, which returns the value of the key you have queried. Unfortunately, there was no way I could find the name of the key I needed.</p> <p>Then I found, in the kdelibs sources, the file <a href="">DESIGN.metadata</a> (link is for the branch version). After checking with webSVN, that was exactly the thing I was looking for! It lists all the keys for the metadata, indicating also to which ioslave they belong. After that, the solution was easy.</p> <p>Of course I’m not leaving you hanging there and now I’ll show you how, in PyKDE4, you can quickly check for the server response (shown here in abridged form):</p> <div class="highlight"><pre><code class="language-python">from PyKDE4.kio import KIO
from PyKDE4.kdecore import KUrl
from PyQt4.QtCore import SIGNAL
[…]

class my_widget(QWidget):
    […]

    def check_url(self, url):
        job = KIO.get(KUrl(url))
        self.connect(job, SIGNAL("result (KJob *)"), self.slot_result)

    def slot_result(self, job):
        if job.error():
            return
        response_code = job.queryMetaData("responsecode")</code></pre></div> <p>This snippet does a few things. Firstly, it gets the specified URL, using KIO.get (KIO.stat doesn’t set the required metadata). Notice that the call is not wrapped in the new-style PyQt API because <em>result (KJob *)</em> isn’t wrapped like that (<a href="">there’s a bug open for that</a>). In any case, the signal passes to the connecting slot (slot_result) where we first check if there’s an error (perhaps the address didn’t exist?)
and then we use <em>queryMetaData(“responsecode”)</em> to get the actual response code.</p> <p>If you want to do error checking based on the result, bear in mind that KIO operates asynchronously, so you should use a signal to tell your application whether the result is what it expected.</p> <p>I wonder if this should be documented in Techbase…</p> 2010-02-18T21:42:46+00:00/2010/01/learning-by-exampleLearning by example<p>With my brand-new SVN account, <a href="">I just committed some code</a> to kdeexamples, KDE’s example code module. In particular, I committed a simple example which shows how to use KConfigXT via PyKDE4, a simplified version of <a href="">what I wrote about here</a>.</p> <p>As most of KDE is C++, and the Python API docs are translated directly from the C++ API docs, it is essential to have good examples to help newcomers learn faster. There are some PyKDE4 examples in the kdebindings module already, but I put mine in kdeexamples for a number of reasons:</p> <ul> <li><em>Clear purpose</em>: kdeexamples is meant exactly for this - example code;</li> <li><em>Visibility:</em> A central place to find KDE examples, even for bindings, is optimal, and makes it easier to find what one is looking for.</li> </ul> <p>Visibility is also important as currently the examples are rather buried inside kdebindings, and as far as I know they aren’t included in the packages of some distributions (at least not openSUSE; YMMV).</p> <p>I decided to take this route because PyKDE4 is basically the job of one person (Simon Edwards): he already does a great job, but the work is too much for a single person to handle.
And due to a shortage of human resources, PyKDE4 lacks examples and documentation, and thus it’s not always easy to understand how to use the C++ API in Python. Writing snippets of working code, with extensive comments, is a step in the right direction. And also an opportunity to contribute back to KDE after all these years!</p> <p>For now there’s just KConfigXT, but I plan on tackling KIO next, as soon as I have time. Of course, help is welcome!</p> 2010-01-13T21:37:38+00:00/2009/12/danbooru-client-0-5-is-outDanbooru Client 0.5 is out<p>Sometimes answering apparently harmless questions on instant messaging can have unexpected results. In particular, I was telling <a href="">someone</a> about Danbooru Client and a question popped up: “Why don’t you support pages?”. It seemed a nice idea, so I branched off the code (yay for git!) and started working on it.</p> <p>Well, it took me more than a <em>month</em> to get this thing done… I didn’t spend every day coding, but it was a challenge. Glad it’s over now, which means that Danbooru Client 0.5 is finally available.
Grab it <a href="">at the usual place on kde-apps.org</a>.</p> <p>Changes in this version:</p> <ul> <li>Massive code refactoring and documentation;</li> <li>Support for multiple pages: the same query can be repeated over multiple pages (shown in a tabbed interface), kind of like browsing the actual Danbooru board;</li> <li>Rating information added to the API;</li> <li>[…]</li> </ul> <p>Improvements that I have in the queue:</p> <ul> <li>Support for pools (every board out there changes the API, so it will require some work);</li> <li>Support for storing password/username using KWallet (through <a href="">python-keyring</a>, so it works even without KWallet installed);</li> <li>Review usability of the dialogs (I have a separate branch for that);</li> <li>Improve the image download dialog.</li> </ul> <p>[…]</p> <p>Here’s a screenshot of the new interface (click to enlarge):</p> <p><a href=""><img src="" alt="Screenshot of the new interface" /></a></p> <p>Comments and suggestions are always welcome, so don’t hesitate to drop me a line.</p> <p>[…]</p> 2009-12-27T08:54:54+00:00/2009/12/living-on-the-edgeLiving on the edge<p><a href="">KDE SC 4.4 Beta 1</a> has been released, and of course I couldn’t stay still. Thanks to the friendly <a href="">openSUSE Build Service</a>, there were packages available, so I just pointed my zypper sources to the <a href="">KDE:KDE4:UNSTABLE:Desktop</a> repository, adjusted a few other things (mainly other third-party repositories) and updated.</p> <!-- more --> <p>It was <em>a mostly painless</em> process.</p> <p>So, how’s the SC faring, so far? Pretty nice, overall. There are of course quirks (for example KWin not responding to global shortcuts unless it’s restarted), but generally the experience has been quite positive.
The user-facing components, such as KWin and Plasma, have improved quite a bit (new effects, the new widget explorer, new applets…), but so have less visible parts such as Nepomuk (with the Virtuoso backend it works OK, although for some reason I can’t access the metadata panel in Dolphin or use the search, also in Dolphin). I also took the time to explore other applications, for example Cantor, as I’m a (reluctant) user of the R programming language. The version built by openSUSE doesn’t ship with an R backend, so I had to compile it on my own. It is more minimalistic than, say, <a href="">rkward</a>, but I can already foresee some uses for it (especially running already-made scripts).</p> <p>I also tried out the netbook shell on my Eee. Again, it’s pretty nice, although sometimes the performance is still lacking (but my Eee uses an i915 chipset, so it’s really the end of the line, and that may have played a role in this).</p> <p>Overall I’m very impressed, so congratulations to everyone who’s making this possible!</p> <p>Lastly, following up on what <a href="">Stuart</a> and others did, here are two screenshots of two of my (many) activities (click for full picture):</p> <p><a href=""><img src="" alt="" /></a></p> <p><a href=""><img src="" alt="" /></a></p> 2009-12-07T20:29:35+00:00/2009/11/after-a-hiatus-klassrooms-continueAfter a hiatus, Klassrooms continue!<p>Do you like KDE? Did you ever find yourself in a position of wanting to help, but you didn’t know what to do, or who to talk to?
Do you feel you could use help to get started?</p> <p>Today, the KDE Community Forums would like to provide the opportunity to answer those questions by announcing the continuation of the tutorial courses known as <em>Klassrooms</em>.</p> <h2 id="what-are-klassrooms">What are Klassrooms?</h2> <p>Klassrooms are tutorial “lessons” held in a specific area of the forum. Held by one or more “mentors”, they are focused on guiding people through helping KDE by tackling a particular problem. Examples of such problems include:</p> <ul> <li>Fixing simple bugs in an application</li> <li>Taking junior jobs in a specific project</li> <li>Helping with documentation</li> <li>Promotion work (for example, screencasts)</li> <li>Helping with translations</li> </ul> <p>As you can see, Klassrooms are not limited to coding at all.</p> <p>Usually the sessions last from one to two weeks, with a maximum of 5 “students” participating. The work is coordinated in a specific area of the forum.</p> <h2 id="public-call-for-mentors">Public call for mentors</h2> <p>The key to holding Klassrooms is having mentors. Their role is to present the problem and guide students through the course. Compared to a live session, using the forum requires less time, and both the students and the mentor can set their most convenient schedule.</p> <p>That is why <em>we need you!</em> You don’t need to be a developer: non-coding courses are as welcome as coding ones. How do you become a mentor? <a href="">These guidelines</a> explain everything that is needed to apply.</p> <p>If you feel like helping, this is the perfect opportunity. Let us know!</p> 2009-11-29T22:25:34+00:00/2009/11/kde-marketing-sprint-day-2KDE Marketing Sprint - Day 2<p>Until now, they had been just names or nicks on IRC. That means I was a bit nervous.</p> <p>The first encounter worked out pretty well, actually. I went with the others to have dinner out, and I got to talk to “famous” KDE people such as Troy, Lydia, or Jos.
I also had the nice opportunity to meet up with my fellow forum administrator neverendingo, and we discussed a bit about how to improve the forums. In short, the evening was really nice. I even got to see an N900! I thought I’d never see that. A very nice piece of hardware, I’d say.</p> <p>The following morning, aside from a little incident (the other people forgot about me!), I walked up to KONSEC where the meeting was held. While Cornelius led the discussion (the Dot will have more details in due time), I worked on helping out with a promo booklet the team is making. I’m used to writing, but writing in an appealing way for a less specialized audience is much harder. Thanks go to Jos who got me on the right track.</p> <p>Then part of the people moved to another room to discuss getting new contributors to KDE while I stayed with Jos and Stuart to work on other material. It was a little draining, but very productive overall. I am actually happy to be part of this, for a change, rather than passively reading about it on the web. It’s nice to give something back to your favorite project, as little as it may be.</p> <p>Lastly, we went out for dinner at an Indian restaurant (nice food, not too much though), and we went back (with Eckhart showing us innovative ways to get back by changing multiple subway trains). And here I am, writing a small report of this day.</p> <p>It’s been a very positive experience so far. I finally saw more people who use KDE, and they’re also both fun and nice. Now it’s time for bed, I still have a good half of a day for work before I get back.</p> 2009-11-14T23:26:28+00:00/2009/10/danbooru-client-a-client-for-danbooru-based-sitesDanbooru Client - a client for Danbooru based sites<p>A while ago I presented <a href="">“danbooru2nepomuk”</a>, a small program to tag images coming from <a href="">Danbooru-based image boards</a>.
Today I want to present the evolution of that program, that is, a PyKDE4 client for those boards.</p> <!-- more --> <h1 id="danbooru-is-it-something-you-eat">Danbooru? Is it something you eat?</h1> <p>[…]</p> <p>The API could technically also be used by client applications, in order to free the user from using a browser. That is what Danbooru Client is aiming to do.</p> <h1 id="introducing-danbooru-client">Introducing Danbooru Client</h1> <p>Danbooru Client fits exactly these needs by providing a GUI to (part of) the Danbooru API.</p> <p><strong>Features:</strong></p> <ul> <li>Connect to any Danbooru board (three predefined);</li> <li>Download up to 100 images with selectable tags;</li> <li>Download or view images with the KDE preferred image viewer;</li> <li>Semantically tag the images using Nepomuk.</li> </ul> <p><strong>Requirements:</strong></p> <ul> <li>PyQt (at least version 4.5)</li> <li>PyKDE4 (tested with PyKDE 4.3 only)</li> <li>(optional) Nepomuk</li> <li>Python (at least version 2.5)</li> </ul> <p><strong>Screenshots</strong></p> <p><a href=""><img src="" alt="" /></a> <a href=""><img src="" alt="" /></a></p> <p>(click to enlarge)</p> <h2 id="download--installation">Download & Installation</h2> <p>You can obtain Danbooru Client <a href="">from kde-apps.org</a>. For the bleeding-edge people (but are there such users for such an application?) there is a <a href="">git repository set up at Gitorious</a>. Once downloaded, you need to use CMake to install the files.
Unfortunately, due to the way CMake is set up, you’ll need the KDE development headers and a working C++ compiler, even though nothing is actually compiled during installation.</p> <p>The installation process is very straightforward:</p> <p>[bash]cd /path/to/source<br /> mkdir build; cd build<br /> cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ../<br /> make # This just byte-compiles Python files<br /> sudo make install[/bash]</p> <p>Then, just launch “danbooru_client”.</p> <h2 id="known-limitations">Known limitations</h2> <p>There are plenty for now; it’s just version 0.1:</p> <ul> <li>Zero documentation (although it’s kind of straightforward to use)</li> <li>Empty cells are created when a row is not filled with images</li> <li>No support for multi-download</li> <li>Untested login/password access</li> <li>The interface may be horrid</li> <li>Danbooru does not support rating filtering via the API, so it’s not currently possible to do so</li> </ul> <p>The client is licensed under the GPL v2 or later. The artwork for the splash screen is also under the GPL and was made by <a href="">Melissa Adkins</a>.</p> 2009-10-25T20:16:50+00:00/2009/10/howto-kconfigxt-with-pykde4HOWTO: KConfigXT with PyKDE4<p>If you read around the <a href="">KDE Techbase</a>, or if you develop KDE applications, you may have heard about KDE’s <a href="">KConfigXT</a>. Using it from Python bindings, however, poses two problems:</p> <ul> <li>KConfigXT requires an XML file and an INI-like file to be compiled by kconfig_compiler in order to produce C++ files</li> <li>There is no such tool (at least to my knowledge) that does the same job for bindings</li> </ul> <p>So what to do? Either give up on the niceness of KConfigXT, or work around the issue.
I chose the latter.</p> <!-- more --> <h2 id="bypassing-the-kconfigcompiler-limitation">Bypassing the kconfig_compiler limitation</h2> <p>[…]</p> <p>[python]<br /> from PyQt4.QtCore import *<br /> from PyQt4.QtGui import *<br /> from PyKDE4.kdeui import *</p> <p># UI stuff - see later<br /> from ui_generalpage import Ui_GeneralPage</p> <p>class Preferences(KConfigSkeleton):<br /> […]<br /> [/python]</p> <p>[…] (<a href="">see the API docs</a>).</p> <p>I set all the attributes as “hidden” to prevent direct manipulation. To access them, I set up properties (following the example of minirok, another PyKDE4 application).</p> <p>[python]<br /> @property<br /> def boards_list(self):<br /> return self._danbooru_boards.value()<br /> [/python]</p> <p>As you can see, the value of the configuration items is accessed via the value() function. Once this is done, we’re finished with the KConfigSkeleton part.</p> <h2 id="kconfigdialog">KConfigDialog</h2> <p>It is <strong>essential</strong> that the widget that will store our configuration details has the name <em>kcfg_CONFIGOPTION</em>, where CONFIGOPTION matches the name of the corresponding configuration item.</p> <p>The following is an example of a general configuration page widget (the Ui_* is the pykdeuic4-generated file):</p> <p>[python]<br /> class GeneralPage(QWidget, Ui_GeneralPage):</p> <pre><code>def __init__(self, parent=None, preferences=None): super(GeneralPage, self).__init__(parent) self.setupUi(self) self.kcfg_danbooruUrls.insertStringList(preferences.boards_list)</code></pre> <p>[/python]</p> <p>And once we have this set up, we can finally create the KConfigDialog!</p> <p>[python]<br /> class PreferencesDialog(KConfigDialog):<br /> […]<br /> [/python]</p> <h2 id="wrapping-it-up-calling-kconfigdialog">Wrapping it up: calling KConfigDialog</h2> <p>Finally, how to call your dialog?
First of all, you need to instantiate your preferences object, for example in your main window application code:</p> <p>[python]self.preferences = preferences.Preferences()[/python]</p> <p>Then, in your code, you do something like this:</p> <p>[python]<br /> if KConfigDialog.showDialog("Preferences dialog"):<br /> return<br /> else:<br /> dialog = preferences.PreferencesDialog(self, "Preferences dialog",<br /> self.preferences)<br /> dialog.show()<br /> [/python]</p> <p>The first if ensures that if there is already one dialog open, it won’t open another. If that’s not the case, we instantiate the dialog, passing it the parent, the name (which must be the same as in the check above), and the preferences instance.</p> <p>Voilà. In the end, you’ll get something like this (slightly different):</p> <p><img src="" alt="Image of the example KConfigDialog" /></p> <p>I know my own UI sucks here, but it’s something I’m still experimenting with…</p> <p>For this tutorial, thanks go to Pino “pinotree” Toscano, who pointed me to the “minirok” project, which makes use of KConfigXT, and Adeodato Simò, the author of minirok.</p> 2009-10-19T21:43:10+00:00/2009/09/introducing-kdialogueIntroducing KDialogue<p>In line with the project’s commitment to openness, the KDE developers and contributors are not a secretive bunch. In fact, the <a href="">“People Behind KDE”</a> initiative has provided the community with interviews of quite a number of the developers. And by reading those interviews, haven’t you ever felt the need to ask a specific question, apart from those prepared by the interviewer?
For example, more details about what the specific developer is doing, or what his/her plans are for the next version of KDE.</p> <p>So far, all that was just a passing thought in someone’s mind. Not anymore… because today I’d like to introduce the newest initiative by the KDE Community Forums, in cooperation with other members of the KDE community: <strong>KDialogue!</strong></p> <p><img src="" alt="" /></p> <p>(image courtesy of forum staff member Hans)</p> <p><em>How does it work?</em></p> <p>At fixed intervals, a KDE contributor will be asked to participate in a dialogue. The community will be able to propose questions in a special forum set up at the KDE Community Forums, and people can vote on questions they would like to see answered (in a similar vein, although simplified, to the KDE Brainstorm). It means that <em>you</em>, the community members, choose the questions. The voting will stay open until seven days before the dialogue, at which point the top-voted questions will be emailed to the contributor. The answers will then be published on the <a href="">behindkde.org</a> web page.</p> <p>The first KDialogue will be announced soon. Take the chance to <strong>be part</strong> of the community!</p> 2009-09-20T18:46:22+00:00/2009/09/interesting-plasmoid-drop2tagInteresting plasmoid: Drop2Tag<p>While browsing around kde-look.org, I’ve stumbled upon <a href="">a nice little Plasma scripted widget</a>, and I’m publishing this to have it get more exposure.</p> <p>[…]</p> <p>This is where <a href="">Drop2Tag</a> comes in.</p> <p><img src="" alt="Drop2Tag in plasmoidviewer" /></p> <p>[…]</p> <p>In any case, I’d like to congratulate its author (nik3nt3) for a job well done.</p> 2009-09-15T20:06:59+00:00/2009/09/kde-brainstorm-monthly-digest-issue-3KDE Brainstorm Monthly (?) Digest - issue 3
<p>Hello people! Yes, it’s been a while but I haven’t forgotten about it… with the Brainstorm’s new look and most technical issues resolved, I’m able to make Brainstorm Digests more often (hopefully!)</p> <h1 id="issue-3---special-summary-issue">Issue 3 - Special summary issue</h1> <p>Since we have skipped a few months, what I’d like to present today (aside from the usual data about the past month) is a general overview of the state of Brainstorm since the start of the initiative a while ago. Now we finally have data to draw some conclusions. But I’ll get to that at the end of the digest.</p> <h2 id="statistics">Statistics</h2> <p>As of the past month, 96 valid ideas have been posted, at an average of 3.2 ideas per day. Not a lot, but we need to consider that, at least in Europe, August is generally a period when people go on holiday. Two ideas have been forwarded to bugs.kde.org, and two have been marked as implemented. 10 ideas already existed in KDE, while 7 duplicate and 7 invalid ideas were posted this month. Last but not least, two ideas were rejected.</p> <h2 id="earning-recognition">Earning recognition</h2> <p>This month’s top voted idea is “<a href="">[Plasma][KWin]Applications minimize to plasmoids</a>” by forum user dflemstr. The idea proposes to have plasmoids associated with a specific application, which could be shown in the application’s tooltip when in the taskbar. Also, by dragging the item from the taskbar to the desktop, the application’s associated plasmoid will be shown. dflemstr provided a mockup to illustrate his idea:</p> <p><img src="" alt="Mockup image" /></p> <p>Is it too hard to implement? Is it useful?
The community clearly showed interest, but let’s wait for the response of the Plasma team (after they’ve recovered from <a href="">Tokamak 3</a>).</p> <p>For the most discussed idea of this month, we have drIDK’s “<a href="">a simple way to install .bin and .run file!</a>”, which is also this month’s most controversial idea. drIDK proposes to have .bin files for installers (like Google Earth) be opened in a terminal and have their executable bit set automatically. He suggests the use of a “runner” of some sort, and provides a mockup:</p> <p><img src="" alt="Mockup image #2" /></p> <p>There have been a lot of posts to this idea, many raising concerns about security and discussing how this approach could be improved. There is no clear solution yet.</p> <p>What about this month’s top poster? You may not believe it, but it’s not TheBlackCat (who is on holiday at the moment)! Instead, the top poster award goes to Madman, who is being very active in the various discussions. Congratulations, and keep being productive.</p> <h2 id="status-of-the-project">Status of the project</h2> <p>No monthly stats this time. Instead, I’ll show you the progress of the ideas since the start of the project until today:</p> <p><img src="" alt="All days' statistics" /></p> <p>This was calculated with a sliding window average of 4 days. The dotted line indicates the median number of ideas submitted per day. As you can see, the number is around 5 ideas per day, save the first big burst. I think it’s OK for now, because the number of ideas posted does not tell us anything about the discussion going on in the already submitted ones, nor whether ideas have been implemented, and so on.</p> <p>Speaking of ideas, our physicist-to-be on the team, Hans, has plotted a distribution of the frequency of the number of ideas per day:</p> <p><img src="" alt="Distribution image" /></p> <p>That’s all for now - I hope to be more regular in the future, now that we have reliable stats.
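</p> <p>For the curious, a sliding-window average like the one used for the graph above takes only a few lines of Python. This is a generic sketch, not the script actually used for the digest, and the sample counts are invented:</p>

```python
# Generic sketch of a sliding-window average (window of 4 days, as in
# the graph above). The daily idea counts are made up for illustration.
def sliding_average(values, window=4):
    # Average each run of `window` consecutive values.
    if len(values) < window:
        return []
    return [sum(values[i:i + window]) / float(window)
            for i in range(len(values) - window + 1)]

ideas_per_day = [12, 7, 5, 4, 6, 3, 5]  # invented daily counts
print(sliding_average(ideas_per_day))   # [7.0, 5.5, 4.5, 4.5]
```

<p>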
By the way, if KDE users (but developers too!) want more statistics, let us know!</p> <h2 id="words-from-the-team">Words from the team</h2> <p>The forum team would like to remind everyone that the old way to tag ideas in Brainstorm (by adding keywords within brackets [ ]) is now obsolete. Brainstorm now uses the same tag system as the rest of the forums, and all members can easily tag ideas with the new tag editor. See <a href="">this announcement</a> for more information.</p> <h2 id="credits">Credits</h2> <ul> <li>sayakb - Data gathering and infrastructure</li> <li>Hans (Mogger) - Stats and graphs</li> </ul> <h2 id="feedback">Feedback</h2> <p>Did you like this digest? You didn’t? Let us know so we can improve it!</p> 2009-09-06T19:46:55+00:00/2009/08/kde-4-3-released-thanks-to-all-developersKDE 4.3 released - thanks to all developers!<p>As you all know, KDE 4.3.0 has been released today! Now it’s time to tell the developers “thanks” for their hard work. <a href="">Join us in the KDE Community Forums</a> and spread the word!</p> 2009-08-04T18:33:40+00:00/2009/08/scripting-languages-and-kdeScripting languages and KDE<p>[…], who <a href="">is doing a nice job in explaining his learning experience</a>.</p> <p>[…]</p> <p>Before someone flames, let me state that <strong>I’m not advocating a reduction of C++ based programs in KDE</strong>. Just more choice. FOSS is all about it, right?</p> <p>[…]</p> 2009-08-03T18:23:26+00:00/2009/08/kde-community-forums-present-the-new-kde-brainstormKDE Community Forums present the new KDE Brainstorm
<p>Today, the KDE Community Forums present <a href="">a brand new version of the KDE Brainstorm</a>.</p> <p><img src="" alt="kb_overview_small.png" /></p> <p>The new interface resembles the IdeaTorrent sites, but it is still powered by the forum software (phpBB), a clear example of the flexibility of the platform. Posting new ideas, voting, commenting and filtering the lists is now extremely easy and requires just a few mouse clicks.</p> <p>[…]</p> <p>Aside from the new looks, the KDE Brainstorm keeps its strong foundations: the Idea Moderators, whose hard work is essential to pre-screen and approve the ideas, the KDE Community Forums staff to keep the system up and running, and of course the KDE community.</p> <p>Check out the new Brainstorm, and <a href="">let us know what you think!</a></p> 2009-08-02T21:40:17+00:00/2009/07/the-next-iteration-of-the-plasma-faq-call-for-helpThe next iteration of the Plasma FAQ - call for help<p>A few of you may know that I more or less maintain the <a href="">Plasma FAQ</a> page on <a href="">KDE’s UserBase</a>.</p> <p>[…]</p> <p>The big advantage is that you don’t need any special coding skills, just a knowledge of Plasma and being able to run 4.3 (RC2 at this stage). I have already made a <a href="">skeleton page</a>: people can edit bits of information, removing outdated items and adding new ones. Also, Aaron’s excellent screencast can be used to see which new features are in Plasma.</p> <p>If you have wondered how to help KDE without being able to code, this is a good opportunity to do so!</p> 2009-07-11T19:18:35+00:00/2009/06/a-brand-new-look-for-kde-comunity-forumsA brand new look for KDE Community forums<p>Today, a major upgrade of the <a href="">KDE Community Forums</a> took place.
The upgrade brings quite a number of changes to the forums themselves, and it’s a further step towards providing a better experience for KDE users (and developers too!).</p> <!-- more --> <p>The new forums, powered by <a href="">phpBB</a>, have a whole new theme, heavily inspired by KDE 4.3’s “Air” Plasma theme:</p> <p><a href=""><img src="" alt="Image of the new forum theme" /></a></p> <p>It is not only the looks that have improved, however. A number of features have been added:</p> <ul> <li>The ability to tag specific threads;</li> <li>A new, improved reputation system;</li> <li>A <em>friends connection</em> system where you can mark other users as friends and interact more easily with them;</li> </ul> <p>The popular section “KDE Brainstorm” has also received improvements, such as the ability to cast a neutral vote, an improved voting pad, and a brand new look to match that of the forums.</p> <p><a href=""><img src="" alt="Image: new Brainstorm look" /></a></p> <p>As a final note, we would like to thank phpBB developers cs278, naderman and NeoThermic for their kind assistance and great support during the migration process. Thanks a lot!</p> <p>Take a tour around the new KDE Community Forums, and <a href="">let us know what you think!</a></p> 2009-06-27T23:18:47+00:00/2009/05/kde-brainstorm-monthly-digest-issue-2KDE Brainstorm Monthly Digest - issue 2<p>Hello, and welcome to the second issue of the KDE Brainstorm Digest!
This issue comes slightly late, due to some real-life commitments, but I couldn’t leave you without it, could I?</p> <!-- more --> <h1 id="issue-2---april-26th---may-29th-2009">Issue 2 - April 26th - May 29th, 2009</h1> <h2 id="statistics">Statistics</h2> <p>[…]</p> <h2 id="earning-recognition">Earning recognition</h2> <p>This month’s top voted idea (not counting negative votes) is <a href="">“Make videodvd support like audiocd”</a>.</p> <p>The most discussed idea for this month was <a href="">“Let’s use Gtk2 theme in KDE (Additional Option)”</a>.</p> <p>The idea with most votes (including negative ones) is <a href="">“[Dolphin] Preload an instance after KDE startup”</a>.</p> <p>This month’s top idea poster is still <strong>TheBlackCat</strong>, who again posted an incredible number of proposals (even well-detailed ones). Will he keep his role as the idea champion? We’ll find out next month!</p> <h2 id="status-of-the-project">Status of the project</h2> <p>It’s been two months since the KDE Brainstorm launched. How is it going now? This month we have not one, but <em>two</em> graphs to describe the posting of ideas in relation to time.</p> <p><img src="" alt="Time series image for the ideas" /></p> <p>[…]</p> <p>Still, it’s not like we can gather meaningful insights from a plot like this. So, thanks to the aid of fellow forum staff Hans (Mogger on IRC), we have the second graph for this Digest:</p> <p><img src="" alt="Graph of a sliding average of ideas over 3 days" /></p> <p><strong>Credits</strong></p> <ul> <li>sayakb - Data gathering and infrastructure</li> <li>Mogger - Sliding average calculation</li> </ul> <p><strong>Feedback</strong></p> <p>Did you like this digest? You didn’t?
Let us know so we can improve it!</p> 2009-05-30T20:18:18+00:00/2009/05/new-refactored-system-settingsNew, refactored System Settings<p>A rather big change has gone into KDE’s SVN recently: Ben Cooksley (bcooksley) and Mathias Soeken (msoeken) have committed a complete rework of System Settings.</p> <p>[…]</p> <!-- more --> <p>Screenshots are better than words, so here goes (click for larger pictures):</p> <p><a href=""><img src="" alt="" /></a><a href=""><img src="" alt="" /></a></p> <p>What’s more, by searching you get a nice “highlighted effect” to indicate the match (clearly visible in this icon view screenshot):</p> <p><a href=""><img src="" alt="" /></a></p> <p>Lastly, the about screen:</p> <p><a href=""><img src="" alt="" /></a></p> <p>The best thing about this new System Settings implementation is that it was born <strong>thanks to the KDE Community Forums</strong>. <a href="">A thread posted on January 29th</a> was the starting point.</p> 2009-05-01T07:53:14+00:00/2009/04/amarok-21-beta-in-opensuseAmarok 2.1 beta in openSUSE<p>Currently, the openSUSE Build Service contains Amarok 2.1 beta packages only for the KDE:KDE4:UNSTABLE repository (i.e. current trunk, soon-to-be 4.3). However, a <a href="">quick search</a> identified a repository that contains an RPM of the current 2.0.90 <a href="">compiled for the KDE:KDE4:Factory Desktop</a> (which will change to 4.3 with Beta 1) and <a href="">another one for the KDE:42 repository</a>. So you can add them using YaST or zypper and download the relevant package. I had to force a repository using zypper, otherwise zypper would complain and try to install Amarok 1.4 from Packman…</p> <p><em>Disclaimer: I’m <strong>not</strong> affiliated in any way with openSUSE, nor did I make these packages. Use them at your own risk.</em>
Also, if you find bugs, report them on <a href="">bugs.kde.org</a> so that the developers can fix them.</p> 2009-04-28T14:34:16+00:00/2009/04/kde-brainstorm-monthly-digest-issue-1KDE Brainstorm Monthly Digest - issue 1<p>Hello, and welcome to the first “issue” of the KDE Brainstorm monthly digest.</p> <p>[…]</p> <h1 id="issue-1---march-23th---april-25th-2009">Issue 1 - March 23rd - April 25th, 2009</h1> <h2 id="statistics">Statistics</h2> <p>[…]:</p> <ul> <li><a href="">Per-mimetype thumbnails in Dolphin</a></li> <li><a href="">Movable tabs in Dolphin like in Konqueror</a></li> <li><a href="">Extract menu in Ark when using drag and drop</a></li> </ul> <p>In particular, the last one was the first Brainstorm idea to be implemented by a KDE developer. Many thanks to Harald Hvaal for this!</p> <h2 id="getting-noticed">Getting noticed</h2> <p>This month, the top voted idea was <a href=""><strong>Easy, Beautiful Progress Notification in the Task Bar</strong></a>.</p> <p>The community is heterogeneous, and some ideas are bound to be controversial. This month, the top controversial idea is <a href=""><strong>Payment/donation to get bugs fixed</strong></a>.</p> <p>Speaking of getting noticed, forum user <strong>TheBlackCat</strong> is our idea submitter champion for this month. When Brainstorm was created, TheBlackCat ported over many ideas discussed earlier in the Discussion forum, and also proposed quite a number of ideas in diverse fields (mostly Dolphin related).</p> <h2 id="status-of-the-project">Status of the project</h2> <p>How is the Brainstorm faring this month? Has the initial enthusiasm faded out? The simplest way to look at it is by viewing the number of votes over the days, as shown by the following graph:</p> <p><img src="" alt="" /></p> <p>[…]</p> <h2 id="credits">Credits</h2> <p>Credit where it’s due, of course.
The following people have contributed a lot to make this possible:</p> <ul> <li>sayakb - Data gathering and various bits of PHP magic</li> <li>Mogger - Development of the controversial idea score</li> </ul> <h2 id="feedback">Feedback</h2> <p>Did you like this digest? You didn’t? Let us know so we can improve it!</p> 2009-04-25T07:02:01+00:00/2009/04/first-kde-brainstorm-idea-implementedFirst KDE Brainstorm idea implemented!<p>Via <a href="">Harald Hvaal’s blog</a> I learnt that <a href="">the first non-forum suggestion has been implemented</a>! I think this shows without doubt that initiatives like the <a href="">KDE Brainstorm</a> are undoubtedly useful to the community at large, both users and developers. Keep on rocking!</p> 2009-04-04T10:06:45+00:00/2009/03/gene-search-applet-suggestions-and-code-review-neededGene search applet: suggestions and code review needed <!-- more --> <p>I found a way thanks to the <a href="">Biopython project</a>, which offers a Python module to access the resources of the <a href="">National Center for Biotechnology Information (NCBI)</a> by providing an interface to their <a href="">EUtils</a>.</p> <p>The code lives in <a href="">a git repository at github</a>. <strong>WARNING:</strong> The code may be a complete mess (I’m not too well versed in GUI stuff, I mostly do text file manipulation). If you are so daring, you can obtain and install it in a very simple manner:</p> <p><code>git clone git://github.com/cswegger/plasma-genesearch.git<br /> cd plasma-genesearch<br /> zip -r ../plasma-genesearch.plasmoid *<br /> plasmapkg -i ../plasma-genesearch.plasmoid</code></p> <p>After that you will see an “Entrez Gene Searcher” in your add applets dialog.
Once added, it’ll look like this:</p> <p><img src="" alt="Gene searcher image" /></p> <p <em>AKT3</em>:</p> <p><img src="" alt="Gene search results image" /></p> <p>“Search again” will bring you back to the search form.</p> <p <em>does….</em></p> <p>Other things that need to be improved are:</p> <ul> <li>The Plasma.TextEdit is not cleared upon clicking. Is there a signal I can catch for that, so I can connect it to clear()?</li> <li>Proper searching. Bio.Entrez already does this: what I need is a way to display the records properly.</li> <li>A way to link the names to URLs, and have them open in Konqueror.</li> </ul> <p>That should be it. I hope to work on it some more next weekend….</p> 2009-03-31T17:33:09+00:00/2009/03/kde-brainstorm-after-the-launchKDE Brainstorm: after the launchNow appro<p>Now appropriate ideas (not bug reports, not duplicates…) are on the forum. People have also begun voting, although slowly: it’s understandable, given the fact that there are so many threads in so little time.</p> <p.</p> <p>I’m satisfied, so far. It clearly shows that there was a need to request features (also shown <a href="">by Aaron’s post on openFATE a while ago</a>) without clogging up Bugzilla. In a while, we’ll make sure that the most voted features will get forwarded to the relevant developers. As I said on the Dot story, I’m hoping this can bring users and developers more close together, and build a better community.</p> <p!</p> 2009-03-22T09:55:48+00:00/2009/03/kde-brainstorm-is-live unlike<p>It’s finally there: <a href="">KDE Brainstorm has been launched today!</a>).</p> <p>Got a creative idea? Hop over <a href="">to the forums</a> and tell the world about it!</p> 2009-03-20T20:02:59+00:00/2009/03/i-love-poisonI love poisonNo, it’s not_ <p>No, it’s <strong>not</strong>_ that is unconstructive and rather trollish. 
This wouldn’t be such a big problem, if not for the fact that posts from said individual have had a rather negative effect: for example, Aaron was forced to turn on comment moderation on his blog, <a href="">Jos’ entry on the new Plasma in the upcoming KDE 4.3</a> was the theatre of a flame-fest in the comments, and now I’ve seen poisonous comments also on <a href="">Nookie’s</a> and <a href="">Socceroos’</a> blogs.</p> <p>Clearly this person hasn’t read the <a href="">KDE Code of Conduct</a>…</p> <p>To I Love: try to express your opinion in a form that is compatible with civil discussion, instead of going on a rampage. That is, assuming that you <em>are</em> actually interested in a civil discussion…</p> 2009-03-18T16:56:33+00:00/2009/03/bilbo-bloggerBilbo Blogger <p>Mtux, of <a href="">choqok</a> fame, along with another person, has written <a href="">Bilbo Blogger</a>.</p> <p>It’s not released yet, but for the daring, you can actually try and compile it. You need to check out and install the blogging library (BilboKBlog) first:</p> <p><code>git clone git://gitorious.org/bilbokblog/mainline.git bilbokblog<br /> cd bilbokblog</code></p> <p>Build it as usual, then install it with <code>make install</code> as root or using <code>sudo</code>.</p> <p>Then, you need to check out the actual application:</p> <p><code>git clone git://gitorious.org/bilbo/mainline.git bilbo<br /> cd bilbo</code></p> <p>Again, a <code>make install</code> will do the trick.</p> <p>After that, you can start the application and create a new blog (see the <a href="">screenshot section on Bilbo’s web page</a>), and the program will try to figure out what is needed automatically. Neat.
After that, you can just start writing entries.</p> <p!</p> 2009-03-17T15:58:07+00:00/2009/02/science-and-kde-kileScience and KDE: kileDuring-scienti<p>During the course of my research work, I may obtain results that are worthy of publication in scientific journals. Since my master’s thesis I’ve been using <a href="">LaTeX</a> as my writing platform, mainly because I can concentrate on content rather than presentation (I find it useful also for writing non-scientific stuff as well). Also, I can handle bibliography (essential for a scientific publication) very well without using expensive proprietary applications (such as Endnote).</p> <p>In my early days I used kLyX first, then <a href="">LyX</a>, but I found the platform to be too limited for my tastes, and also LaTeX errors were difficult to diagnose. I needed a proper editor, and that’s when I heard of <a href="">kile, a KDE front-end for LaTeX</a>. Kile is currently at version 2.0.2 and is a KDE 3 application. However, in KDE SVN work is ongoing to produce a KDE4 version (2.1) and that’s what I’ll look at in this entry.</p> <!-- more --> <p><strong>Obtaining kile 2.1</strong></p> <p>First and foremost, a disclaimer. kile 2.1 has not been released yet in any form, and so should be considered unstable and crash-prone. That said, it runs more or less well on my platform.</p> <p>The first thing to do is to grab the sources from SVN:</p> <p><code>svn checkout svn://anonsvn.kde.org/home/kde/trunk/extragear/office/kile</code></p> <p>That will put kile’s sources in a directory called “kile”. 
The next step is to compile it (as usual, you need KDE4 development packages/files installed):</p> <p><code>cd kile<br /> mkdir build; cd build<br /> cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ../<br /> make</code></p> <p>Followed by the usual <code>make install</code> as root or using <code>sudo</code>.</p> <p><strong>kile 2.1 at a glance</strong></p> <p>This is how kile looks when loaded on my system:</p> <p><a href=""><img src="" alt="kile1.png" /></a></p> <p>(For the inquisitive people, it’s not a scientific work, rather a sci-fi-like book I’m writing.)</p> <p><a href=""><img src="" alt="kile4.png" /></a></p> <p><a href=""><img src="" alt="kile2.png" /></a></p> <p><a href=""><img src="" alt="kile3.png" /></a></p> <p>Lastly, kile has a plethora of other options, including customizing what you can use to build LaTeX files and view them (DVI, PS, PDF…), as shown in this screenshot.</p> <p><a href=""><img src="" alt="kile5.png" /></a></p> <p><strong>Conclusions</strong></p> 2009-02-22T20:49:20+00:00/2009/02/science-and-kde-rkwardScience and KDE: rkward <p>I try to use FOSS extensively for my scientific work. In fact, when possible, I use <em>only</em> FOSS tools. Among these there is the R programming language. It’s a Free implementation of the S-plus language, and it’s mainly aimed at statistics and mathematics. As the people who read my scientific posts know, I don’t like R much. But sometimes it’s the only alternative.</p> <p>Well, what does R have to do with KDE? With this post I’d like to start a series (hopefully) of articles that deal with KDE programs used for scientific purposes.
In this particular entry, I’ll focus on rkward, a GUI front-end for R.<br /> <!-- more --><br /> <strong>Introduction</strong></p> <p>Although R is a programming language, it’s mainly used in an interactive session, started from the terminal. The standard installation can be improved by the use of add-on packages, <em>libraries</em> in R-speak, which can be installed from the Internet (the Comprehensive R Archive Network, or CRAN) or from local files. One of the most famous third-party repositories is the Bioconductor project, which hosts a lot of packages used by life scientists who do bioinformatics.</p> <p>The Windows version of R has a GUI (Rgui) which provides extra functionality, such as package management and loading, and other goodies. Although there were plans for a GTK+ frontend for Linux, the project is (as far as I know) stuck in limbo.</p> <p>That’s where rkward comes to the rescue. It’s a GUI front-end for R for KDE4, which aims to provide a graphical shell for many R commands and environments (especially its publication-quality plotting).</p> <p><strong>Getting rkward</strong></p> <p>rkward is available from <a href="">Sourceforge.net</a>. Unfortunately, if you use a recent (>=2.8) version of R it won’t compile, due to changes in R itself.
For that, you need to directly download the sources off SVN with a command like this:</p> <p><code>svn co https://rkward.svn.sourceforge.net/viewvc/rkward/trunk/rkward/</code></p> <p>Either way, the sources are compiled the usual way, that is:</p> <p><code>cd rkward-xxx # your rkward source dir<br /> mkdir build; cd build<br /> cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --prefix` ../<br /> make</code></p> <p>Followed by <code>make install</code> as root or using <code>sudo</code>, depending on your distribution.</p> <p><strong>rkward at a glance</strong></p> <p><strong>[singlepic id=263 w=320 h=240 float=center]</strong></p> <p>This is how rkward looks when loading it up (yes, it’s in Italian, because that is my own locale). You have the R console (which I brought up) and then an output window which is used to display results. There is also another tab called “mio.dataset” (my.dataset) which holds data in a spreadsheet-like form. This is useful when you want to create your own datasets from scratch, or if you want to inspect one you have loaded.</p> <p>So how do you start coding? You can create a new script using the “Script File” button. That way, you can input R commands and then execute them all at once, or the current line.
If you prefer interactive work, you can use the R command line (shown in the screenshot).</p> <p>[singlepic id=264 w=320 h=240 float=center]</p> <p>You can also use rkward to import data: R provides a series of functions (like <code>read.table</code>) to load data sets (usually comma- or tab-delimited text files). rkward provides a complete GUI for those functions, which is shown in the screenshot above. Note that it requires PHP (the command-line version) to work.</p> <p>[singlepic id=266 w=320 h=240 float=center]</p> <p>OK, we have data loaded. Now we may want to do some operations: rkward provides front-ends to many of R’s statistical functions. In the screenshot, we can see the GUI for a two-variable t-test. Notice how it also shows the code, so the most experienced R people can see exactly what it does.</p> <p>As with statistics, R has powerful support for graphics, and here too rkward offers some frontends, for example histograms, boxplots, and scatter plots. You can also plot all kinds of distributions.</p> <p>[singlepic id=265 w=320 h=240 float=center]</p> <p>Lastly, rkward can manage your R packages (R package management is akin to that of a Linux distribution), and also your package sources. You can install or upgrade packages, and select where they’ll get installed to.</p> <p><strong>Conclusions</strong></p> <p>rkward is a nice frontend for the R programming language, which adds a GUI with the power of KDE to R. Unfortunately the program is still somewhat unstable (also shown by a warning when you run it) and its main developer currently has very little time to work on it. In case you want to help, you can hop over to the <a href="">rkward-devel mailing list</a>.</p> 2009-02-07T18:55:53+00:00/2009/01/fishing-for-ideasFishing for ideasI have been thinking of doing another Kourse at the KDE Forums, similar to the one that has produced three nice screencasts. My idea would be to show very brief and focused screencasts, a sort of “how do I…”.
I have a few ideas, but I’d like to ask the KDE community at large. I’m mostly interested i<p>I have been thinking of doing another Kourse at the <a href="">KDE Forums</a>, similar <a href="">to the one that has produced three nice screencasts</a>.</p> <p>My idea would be to show very brief and focused screencasts, a sort of “how do I…”. I have a few ideas, but I’d like to ask the KDE community at large. I’m mostly interested in showing single features (short videos), preferably of the “eye-opener” kind.</p> <p>If you have any suggestions, leave a comment.</p> 2009-01-31T00:16:27+00:00/2009/01/the-answer"The<p><img src="" alt="The Answer" /></p> <p>Yes, I know I’m a bit late to the party (unfortunately today was one of the busiest days ever where I work), but I thought I’d join the other members of the KDE community, because <a href="">KDE 4.2 has been released today</a>.</p> <p>Take a look at <a href="">the visual guide</a>, or see if your distribution has already <a href="">packages for you</a>. <br /> For your enquiries, <a href="">the KDE Forums</a> are at your disposal.</p> 2009-01-27T21:08:49+00:00/2009/01/hello-planet-kdeHello Planet KDEIf. I’ve been using<p>If.</p> <p <a href="">Plasma FAQ</a>, and I’ve been working together with Simon Edwards for some <a href="">Python Plasma tutorials</a>.</p> <p>In more recent times, I entered the <a href="">KDE Forums</a> staff as a mentor (and all-around writer) and helped students with the second Kourse of the forum, namely <a href="">Plasma screencasts</a>. In the past I have done <a href="">a few Plasma screencasts myself</a> in the past.</p> <p>That’s all for now. I’ll try to blog mostly on KDE Forum related matters, with “opinion” pieces every now and then.</p> 2009-01-25T16:22:38+00:00/2009/01/new-plasma-with-python-tutorialsNew Plasma with Python tutorialsI just finished writing a new Python tutorial on KDE’s Techbase. 
This one deals with writing DataEngines in Python (a complement to Simon Edwards’s own Using DataEngines). Let me know what you think. As it’s a wiki, comments and suggestions are welcome. <p>I just finished writing a new Python tutorial on KDE’s Techbase. <a href="">This one</a> deals with writing DataEngines in Python (a complement to Simon Edwards’s own <a href="">Using DataEngines</a>).</p> <p>Let me know what you think. As it’s a wiki, comments and suggestions are welcome.</p> 2009-01-24T11:09:36+00:00/2009/01/last-plasma-screencastLast Plasma screencastIt turned out I forgot to add the last screencast produced by the students of Kourse 2, so I’ll fix my mistake right now. Here’s Panel settings, by Kourse student TeaAge: [blip.tv ?posts_id=1653914&dest=-1] <p>It turned out I forgot to add the last screencast produced by the students of Kourse 2, so I’ll fix my mistake right now. Here’s <strong>Panel settings</strong>, by Kourse student TeaAge:</p> <p>[blip.tv ?posts_id=1653914&dest=-1]</p> 2009-01-10T21:09:17+00:00/2009/01/more-plasma-screencastsMore Plasma screencastsStudents from Kourse 2 fengshaun and Primoz have prepared two nice screencasts, dealing with the Zooming User Interface (ZUI) and desktop settings respectively. Without further ado, here they are: Zooming User Interface by fengshaun [blip.tv ?posts_id=1648944&dest=-1] Desktop settings by Primoz [<p>Students from Kourse 2 fengshaun and Primoz have prepared two nice screencasts, dealing with the Zooming User Interface (ZUI) and desktop settings respectively. Without further ado, here they are:</p> <p><strong>Zooming User Interface by fengshaun</strong></p> <p>[blip.tv ?posts_id=1648944&dest=-1]</p> <p><strong>Desktop settings by Primoz</strong></p> <p>[blip.tv ?posts_id=1646835&dest=-1]</p> <p>As usual, both Free and non-Free versions are available. The students are also at work on subtitled versions, without the Notes plasmoid. I’ll be sure to post them once they’re done. 
Don’t forget to share these!</p> 2009-01-08T21:03:19+00:00/2009/01/kourse-2-first-finished-screencastKourse 2 - First finished screencast u<p>As some people already know,<a href=""> I’m mentoring a group of students on KDE Forum to create Plasma screencasts</a> , <a href="">you can download the ogg video version.</a></p> <p>[blip.tv ?posts_id=1646835&dest=-1]</p> 2009-01-05T16:33:53+00:00/2008/12/why-plasma-is-the-best-thing-since-sliced-breadWhy Plasma is the best thing since sliced breadToday resoluti<p>Today I was adjusting a bit the layouts of the Activities (<a href="">as defined in Plasma</a>) <em>is</em> an issue on my recently-resuscitated EeePC (1024x600).</p> <p:</p> <p><a href=""><img src="" alt="Auto-hide plus activity bar" /></a></p> <p…</p> 2008-12-29T14:22:29+00:00/2008/07/plasma-resizing-and-moving-panelsPlasma - resizing and moving panelsToday I’m in a posting spree…This clip shows how to resize and move Plasma panels around. [youtube][/youtube] <p>Today I’m in a posting spree…This clip shows how to resize and move Plasma panels around.</p> <p>[youtube][/youtube]</p> 2008-07-13T20:34:01+00:00/2008/07/plamsa-creating-a-sidebar-panelPlasma - creating a sidebar panelFollowing up on my previous post, here is another screencast showing off how to create a sidebar panel and add a few plasmoids to it. As usual, the version on Youtube has annotations. [youtube][/youtube] <p>Following up on my previous post, here is another screencast showing off how to create a sidebar panel and add a few plasmoids to it. As usual, the version on Youtube has annotations.</p> <p>[youtube][/youtube]</p> 2008-07-13T12:15:59+00:00/2008/07/plasma-zui-videoPlasma ZUI videoI’ve put together a small video that shows what you can do with zooming in and out with Plasma’s Zooming User Interface (ZUI). Enjoy. 
(note: the version on Youtube has also annotations that explain better what is going on) [youtube][/youtube] If you can, pl<p>I’ve put together a small video that shows what you can do with zooming in and out with Plasma’s Zooming User Interface (ZUI). Enjoy. (note: the version on Youtube has also annotations that explain better what is going on)</p> <p>[youtube][/youtube]</p> <p>If you can, please spread the link to the video. We need more correct information out there.</p> 2008-07-08T22:37:23+00:00/2008/07/spread-the-wordSpread the word). So, if you want t<p).</p> <p>So, if you want to help KDE, please spread the word on the Plasma FAQ! If people have questions on Plasma, just direct them to</p> <p><a href=""></a></p> <p>so that the developers can at least save their (precious) time by avoiding to answer to the same questions over and over. This will also hopefully limit the spread of disinformation, or even FUD.</p> <p>If you want to help KDE, at least a little, this is a good place to start!</p> 2008-07-07T19:47:08+00:00/2008/06/annoying-fork-talksAnnoying fork talksNow even the (once respected) Steven J. Vaughan-Nichols is jumping on the bandwagon of Plasma haters. With a rather uninformed and rant-ish entry that just advocates a fork, like some other people said on the kde-devel mailinglist a week or two ago. The entry is rather dismissive of everything in KD<p>Now even the (once respected) Steven J. Vaughan-Nichols is jumping on the bandwagon of Plasma haters. <a href="">With a rather uninformed and rant-ish entry that just advocates a fork</a>, like some other people said on the kde-devel mailinglist a week or two ago.</p> <p>The entry is rather dismissive of everything in KDE 4 save Plasma (and a mention on Dolphin’s single click icon), and also gets some facts and links wrong (yes, the Plasma web site is outdated, but since the developers are busy coding, someone else should step up and help). 
I wanted to point out a link to the <a href="">Plasma FAQ</a>, but apparently I can’t (I assume it’s because of my ISP, as usual).</p> <p>In any case, I didn’t find mention of specific problems, save the fact that apparently SJVN doesn’t like Plasma. But why? Why didn’t he point out the specific problems? My question is, was he <em>really</em> interested in having problems fixed, or did he just want to advocate a fork for the sake of some unspecified reason? I think the latter.</p> <p>I’m no developer, but I’m liking where Plasma is going. I already set up three different activities on my computer here at home, which uses <a href="">nightly Kubuntu packages</a>. I switch between general, writing and coding activities, which have different plasmoids loaded. The differences in layout and the like are the biggest advantage over traditional virtual desktops.</p> <p>Fork talks like the aforementioned blog entry are no good; they serve only to kindle more flames, just as the behavior of trolls on the Dot reached the point of no return. Not a good sign.</p> 2008-06-29T20:51:50+00:00
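The gene-search applet described earlier in this archive talks to NCBI through Biopython's Bio.Entrez wrapper around the EUtils web service. As a rough illustration of the kind of request that wrapper issues, here is a hand-rolled sketch that only builds the esearch query URL without sending it — the endpoint is NCBI's public EUtils address, but the helper name and parameter values are illustrative, not the applet's actual code:

```python
from urllib.parse import urlencode

# Public NCBI EUtils base URL; esearch.fcgi performs the actual search.
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def build_esearch_url(term, db="gene", retmax=20):
    """Build (but do not send) an EUtils esearch query URL.

    Bio.Entrez.esearch() constructs and fetches a request much like this
    one; this sketch stops at the URL so it stays offline and testable.
    """
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS_BASE}/esearch.fcgi?{params}"

if __name__ == "__main__":
    # The gene-search post's screenshots query the human AKT3 gene.
    print(build_esearch_url("AKT3[sym] AND human[orgn]"))
```

Fetching such a URL returns XML with a list of matching gene IDs, which a follow-up esummary or efetch call would then resolve to full records — roughly the flow the applet performs through Bio.Entrez.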
https://www.dennogumi.org/category/kde/feed/atom.xml
Logging on Windows – OutputDebugString In the post about rsyslog three days ago, I explained how to log from Linux programs using the rsyslog daemon. It's slightly different on Windows. There's a built-in function called OutputDebugString that you can call from anywhere in your program. (It's actually a macro that resolves to OutputDebugStringA or OutputDebugStringW — taking an LPCSTR or LPCWSTR — depending on whether UNICODE is defined.) It dumps the string str into the Output window if you are debugging the program in Visual Studio. If you are running it outside of a debugger, the output is lost unless you capture it with a suitable utility. DebugView from Sysinternals.com (it redirects to Microsoft) is one such utility. That's a screenshot of it below. Just run DebugView and leave it there. It might catch other output from Windows, but when you run your program from the command line or double-click on it, it will execute quickly and you'll see any captured strings, like this one. This is the program that I ran. In Release it compiles to a 9 KB exe! Because OutputDebugStringW needs an LPCWSTR (long pointer to a constant wide string), I declared the text as const wchar_t *. #include <Windows.h> int main() { const wchar_t *text = L"Hello World!\n"; OutputDebugStringW(text); } At work I developed a very large program that only worked running on another computer. I used OutputDebugString extensively and without it, debugging would have been much harder.
https://learncgames.com/tag/debugview/
Is everything stored locally? (i.e. not network data files, etc.) I don't seem to be encountering any issues with a local install using local data, single user. That is what I’ve noticed. ArcGIS Pro just doesn’t have the network capabilities that ArcMap does. When I move everything local it seems to work fine, but when working on the network, it is pathetically slow. Do you think this is going to be the new ‘recommended’ process going forward? Only using local data for ArcGIS Pro? Eric Meyers GIS Programmer, Business Analyst - City of Beaverton 12725 SW Millikan Way, Beaverton, OR 97076 p: 503.526.2404 | f: 503.526.3720 emeyers@beavertonoregon.gov I am not convinced that Pro is the problem, since it is 64-bit, whereas ArcMap is based on 32-bit... It could be that individual hardware and software specs are understated or that the network/server/database requirements are the real problem. I haven't seen anyone really weigh in on how Pro (aka new-ish technology) plays nice with a 5-10 year old server/database configuration. I am fortunate to be able to work 'local' and not have to rely on any externalities... You have clearly identified that it is an externality to your installation that is the problem. I had the same issue with Pro being slow on my old computer and determined it was the outdated video card. Pro wants something with at least 4 GB of VRAM, which is separate from CPU RAM. I purchased a new Dell XPS with an NVIDIA GTX 1050 video card (as well as 16 GB of CPU RAM) and now Pro runs MUCH faster. One particular problem I see is the continuous re-drawing taking place in the Map and Layout views. While Pro is multi-threaded, and tasks like drawing and interaction with other user interface components are supposedly largely independent because of this, in reality the drawing taking place in the Map view heavily affects many parts of the user interface, like the use of geoprocessing tools.
I have seen already running tools more or less grind to a halt while the Map was still being refreshed, which with a complex map may actually take considerable time. I haven’t used Pro much, outside of 3 separate projects. I do like some of the new features but have found some limitations. I’ve also submitted a few bugs already to the ESRI team when using DBMS connections to SQL. I haven’t begun using it for expansive python testing/model building but will see what other issues I run into then. I will also keep in mind what you are saying about extensive TOC items within the map panel. Eric Meyers GIS Programmer, Business Analyst - City of Beaverton 12725 SW Millikan Way, Beaverton, OR 97076 p: 503.526.2404 | f: 503.526.3720 emeyers@beavertonoregon.gov<> Hi Marco, I ran across this thread while searching for something else and wanted to point you toward this thread:. If you look down in the comments, there is a way to somewhat pause the drawing. Dev is considering a full pause option, but the majority of requests we've received around this are for 3D scenes rather than 2D maps, so I would weigh in on the other thread. For 3D, there are other things you can do as well such as leveraging the new OGC i3s standard .slpk files, and setting reasonable visibility limits when working with heavyweight data so it's not trying to draw all of the time. GitHub - Esri/i3s-spec: This repository hosts the specification for Scene Layers which are containers for arbitrarily la… Eric, You can check your machine against what is recommended for Pro here:. From my user's reported challenges, I know that if you don't have access to a GPU/Video card, your computer is going to be using CPU instead for everything at a poor rate of efficiency, which could be contributing to your experience. If you're running in a Virtual Desktop Instance (VDI), shared GPU aka Virtual GPU (vGPU)=good. XenApp, a VDI without vGPU, or a remote desktop to a VDI=bad. 
This actually makes a big difference in performance. I also wrote another GeoNet post here: Best Practices for Running ArcMap/ArcGIS Desktop/ArcGIS Pro on a Mac in Parallels on how to get the best performance when running in Parallels if you're on a Mac. As Dan mentioned above, network issues could be affecting things, so i3s might help you too. Sorry I don't have any technical specifics or solutions for you. I'm just an Account Manager who stumbled upon this thread and wanted to provide some info based on what some of my users have encountered. Hope it helps you both, -Tripp I think there is an issue with Pro and multi-threading - as my assumption is that a 64 bit application (that claims to be able to take advantage of multi-threading) should run faster on my machine than a 32-bit ArcMap application. Like the OP, I am also seeing much slower performance with Pro vs ArcMap on my machine. Doing very simple things like calculate field, selecting features within an attribute table, doing edits, changing symbology all take substantially longer in Pro... As a side note, I am getting crashes when I try to Insert a new map into the project that appears to be in relation to Multi-Threading (but maybe I am wrong on that diagnosis). Event Viewer shows the following: Application: ArcGISPro.exe Framework Version: v4.0.30319 Description: The process was terminated due to an unhandled exception. 
Exception Info: System.ArgumentException at ArcGIS.Desktop.Internal.DesktopService._IMapAuthoringService.CreateNewMap(Int32, System.String, ArcGIS.Core.CIM.MapViewingMode, ArcGIS.Core.CIM.MapType, ArcGIS.Desktop.Mapping.Basemap, Boolean) at ArcGIS.Desktop.Mapping.MappingModule+<>c__DisplayClass443_0.<CreateNewMapAsync>b__0() at System.Threading.Tasks.Task`1[[System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InnerInvoke() at System.Threading.Tasks.Task.Execute() at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task) at ArcGIS.Desktop.Mapping.MappingModule+<InternalCreateNewMapAsync>d__1191.MoveNext() at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task) at ArcGIS.Desktop.Mapping.MappingModule+<InternalOpenCreateNewMapAsync>d__1334.MoveNext() Exception Info: System.AggregateException at System.Threading.Tasks.TaskExceptionHolder.Finalize() My machine specs are as follows: All my data is local on a very fast 250GB Samsung NVME SSD (so read/write is not the issue here I am afraid). I am also running a NVIDIA Quadro M4000 GPU and ArcGIS Pro 2.0.1, but this issue first started occuring in 2.0.0 - I have submitted countless crash reports. Our organization has a maintenance subscription so if anyone from support can reach out on this issue, it would be greatly appreciated. Agreed, it's really the common tasks that were consistently instantaneous for years in ArcMap that all of the sudden are slow in Pro. This is really a spit in the face to end-users. If these issues aren't present in ArcMap, but are present in Pro, how did they suddenly become "network" issues? I think Pro communicates with the network more. 
It would be interesting to compare performance with working on versus offline which is possible in some configurations. "I think Pro communicates with the network more." is an understatement. Think I found out why every single change to a symbology property takes forever and locks up Pro: Simply changing a symbol from "ArcGIS 2D: Circle 1" to "ArcGIS 2D Circle 3" resulted in 10,475 separate TCP calls evenly split between send, receive, and copy events which took 00:00:02.10 seconds to execute; the same exact operation in ArcMap? 0 TCP calls, 00:00:00.23 seconds to execute. Sometimes the delay is several minutes. It appears that with every symbology change, Pro is re-downloading the entire layer from the source. Why? Arcmap doesn't do this (and isn't as slow at common operations such as symbol styling). Making pure SQL queries to the same data in a variety of clients other than Pro is near-instantaneous. This is regardless of the layer cache settings applied in Pro (I tried all of them, same barely-usable response). Thomas, It would definitely be nice to see a real response of ESRI to your detective work regarding the TCP calls or ArcGIS Pro. Whether you are on to something, I can't say, but it does raise questions and points ESRI should look into. Marco Hi Thomas, It would be great if you could describe your scenario in more detail. What steps are you taking before you update the symbology? How is Pro installed on your network? Feel free to email me at CGaddam@esri.com. Thanks, Chait Just imagine what happens when your network admin put real time virus scanning on your network and local machine and every one of those calls is intercepted by the virus scan before it goes through... The software is rendered useless. My experience mirrors yours in that Pro does generate even more network traffic than ArcMap. Anything that interferes with this affects the user experience and it can definitely be network side monitoring and port limiting. 
I have also had this issue with virus scanning. ArcMap runs just fine using data saved in exactly the same location and using the same computer (above recommended specs). This is making it impossible to migrate to ArcGIS Pro as I can't even change layer symbology without the application freezing!

Hi Anneka, A few questions this brings up: Thanks for any info you can provide! -Scott

Hi Scott, I don't know for sure that the speed/freezing issues are entirely caused by my AV software. But having had our IT support investigate the issue, they said that it was isolating and scanning each file every time ArcGIS Pro refreshed (or in this case anytime I tried to edit the data, change symbology, etc.). The problem appears to be worse when using layers from ArcGIS Online or Dropbox-synced files. Most of our data is synced with Dropbox (we're a small organisation and all work remotely) so this can't be avoided. I experience none of these issues when carrying out exactly the same operations in ArcMap... The AV software is Sophos Endpoint. I believe it's possible to set up exceptions, but our organisation has been victim to several attempted cyber attacks recently and our IT contractors are hesitant to do this. I've submitted a support case so hoping to find a solution. Thanks!

Thanks Anneka. What catches my attention here is Dropbox. What kind of files are you syncing with Dropbox?

Hi Scott, we back up and share lots of our project files via Dropbox, including GIS data. The files are saved to the disk drive and only 'synced' to Dropbox for backup/sharing. I 'pause' syncing (disable Dropbox) when I'm working on GIS to stop it running in the background and only turn on sync at the end of the day to back up any changes. This system has worked fine with ArcMap, but is this an issue for ArcGIS Pro?

Hi Anneka, so the actual Pro project files (aprx) are in the synced Dropbox?
And as for the GIS data, are you using shapefiles/file geodatabases, or both, and if both, have you noticed that one is more/less performant than the other?

Hi Kory, the project files are usually saved in a Dropbox-synced folder, but sometimes (esp. if I'm in a rush) they'll be saved in the default folder in another location. ArcGIS Online layers are definitely the slowest, but I haven't been comparing shapefiles vs. file geodatabases. I will start paying more attention and let you know!

My organization also uses Sophos, which impacts ArcPro. It is a major concern when editing and you can't do anything but wait. I've complained to the powers that be that Sophos is a network nightmare when used with ESRI products.

The high amount of internet/network use does seem to be part of the problem. When I am on an internet connection that is unstable or laggy, ArcGIS Pro becomes almost unusable. This happens even if there are no online layers loaded. It happens even if I'm just running a local model and there is nothing at all in the map.

Agreed. This is actually the reason I haven't adopted ArcGIS Pro yet. It's freaking slow and it grays out and takes forever to execute actions when compared to ArcMap. ESRI definitely needs to do something about it. My i7/24 GB RAM PC config is well in excess of that recommended by ESRI and still it's pretty slow w/ Pro. Also, I don't like having to save a project before I can start playing w/ it. Sometimes I just don't wanna save it ....so?

I agree 100% regarding the hassle of having to create a project before being able to use ArcGIS Pro. All the time I will go into Catalog or ArcMap to just add a few layers or look at the attribute data. My role is a developer, so rarely am I actually making final maps. Unfortunately, it looks like this is the direction ESRI wants us to go because some settings/functionality only works if a default database and home directory is set up.
Maybe ESRI wants us to utilize software like ArcReader to do this type of work... not sure.

You'll see that Open ArcGIS Pro without having to Save is marked as In Product Plan and we should see this with the next release, ArcGIS Pro 2.3.

I have noticed the slow responsiveness in Pro while using basic tools and functionality. Expecting people in an organization to work locally is unreasonable. We place our data in network folders so that it is accessible to everyone and so that it is backed up regularly. I'm trying to do more in Pro to acclimatize myself to it, but the lag is too much for some of the simple tasks. I end up going back to Desktop for some things. My PC is well in excess of the recommended specs and I have an NVIDIA Quadro card.

I agree wholeheartedly with this as well. I thought the whole point of moving to an enterprise system was to move away from solo databases, but Pro in a way can be a step backwards. The map documents are now stored locally, causing a runaway solo MXD repository. I'm modeling a QC workflow using Pro tasks, and the whole first few steps of filtering your data set (which can't be done by task command and has to be done with a Python model), selecting all features, and zooming to them takes 5 or more minutes. That's just to get to the data you want to look at. This does not include making edits to the data. Manually, in ArcMap this takes me 1 minute max. I work for a state agency that is transferring all of its data to central network locations, so Pro may be a no-go for us now.

"...expecting people in an organization to work locally is unreasonable..." I agree with that. I would also characterize it as unrealistic. I have data all over the place... I would like to have one enterprise database where all my data resides, but that is never going to happen. And if it did happen it would be on a networked server, not on my local machine.
I keep checking in with Pro and these threads, but I do the majority of my work in Arc Desktop because I need to get things done. I was all for pushing through the learning curve for Pro, but I feel that the performance issues are the real reason I end up going back to Desktop.

Yup! Can't stand Pro, and it is one reason I didn't go to the ESRI User Conference last year. I can't stand that they use ArcPro and push it as if it were the greatest thing ever. IT'S NOT! It is slow! Riddle me this: when I use the Spatial Analyst > Sample tool in ArcMap I can process a 52 x 32 mile raster grid with a coarse 200 ft x 200 ft cell size in less than 5 minutes. I'm doing the same in Pro at the moment as of this post and it's taking over 25 minutes now and counting! It is too bad they don't have a competitor out in the GIS world that would give them pause before releasing a crappy product. Shame on you ESRI.

Rick: Do you have the ability to create a case with tech support so maybe they can find out what the issue is with the slowness of ArcPro in your case? Also maybe you can add your vote to this thread: Four Year Check-In: What is your competency level with ArcGIS Pro? as well as add your 3 or 4 biggest gripes with the Pro software to give the dev team exposure to issues that existing ArcMap customers are seeing with Pro.

Hi Rick, I think all would agree that something that takes 5 minutes in ArcMap should probably take 5 minutes or less in ArcGIS Pro. So there is something going on that needs to be investigated. I think that Michael's idea of working on this with technical support is a good one, as there could be many factors at play. I was curious to see if this is a "generic problem", meaning that it could be reproduced with any data. I created a random raster of 52 miles x 32 miles and cell size 61 meters (about 200 feet). I also created 1000 random points and ran Sample in ArcMap and ArcGIS Pro.
Results: ArcMap = 1.36 seconds; ArcGIS Pro = 1.22 seconds. I don't think there is anything inherently slower about this operation in ArcGIS Pro. But it would be valuable to investigate the specific performance issue you're experiencing with technical support so that we can understand what's happening.

I wonder if the storage location can play a big part in this problem. I don't work with data on the local C: drive, but I have found file gdb data on a network drive to perform much better in Pro than Oracle-based SDE data (enterprise gdb). Kory - were you using local C: drive data in your testing?

Yes. Local data on C.

Yes, "many factors at play", which is why we need to have more details :) Rick Gonzalez we want to help, we just need some more info, and maybe data. Probably best to move it off this thread, so email me directly and we can circle back to close the loop here.

Kory, do you have setups where you could perform a 'task' on these various storage types in ArcMap vs ArcGIS Pro? I keep seeing non-locally stored data cropping up time and again. Is it the nature of the beast? Network issues? A communications issue between software and the database? I have trawled far and wide and never seen anything that says: when using X (storage type and location), expect a Y% increase in processing time compared to using Z (storage type and location). I know that the operation may have an influence, but you would expect to see some comparison. The one Dan posted from ESRI at least shows that at least 2 of the 4 have 'issues', but Dan's times are even worse than ESRI's.

Kory, Good morning! Thanks for reaching out! I am going to have my IT guys do some research and testing on this soon. If we can't come up with a solution, we will circle back to you. I will let you know. Thanks! Rick Gonzalez

Kory, Trying to finally circle back to you on this. Sorry it's taken so long. I want to ask, did you sample the arcgrid to a points fc? That's not really what I was doing.
I was just simply outputting a dbf table with xyz values from the arcgrid using the Sample tool under Spatial Analyst > Extraction. Essentially this outputs the xyz values for each of the cells in an arcgrid.

What are you using as the input location raster or point features?

I am using the same raster as under Input rasters. In other words, the same raster I am sampling is also the input location raster. I use bilinear resampling. Plus, on the Environments menu I set the extent to a rectangle shapefile that is about one fourth the area of the raster extent.

What raster format? Is it in a file geodatabase, .tif, etc.?

Kory, the data type is a File Geodatabase Raster stored on one of our network drives/servers. Below is the raster information.

Rick - also, if you're willing/able to send data and steps directly to me at kkramer@esri.com we can probably take a quicker look. Email me and if we need to set up an ftp file transfer site we can.

Thanks Kory, I will e-mail you, as an ftp site would be best. Thanks.

Thanks, Rick. I'll look for that email. Cheers

Rick - the Spatial Analyst team has been looking at this, and in the daily builds of Pro 2.4 I'm running the tool on your data in less than 3 seconds. So it looks like this will be working well with the release of 2.4. To anticipate your next question (I don't have access to 2.4, what about now?), what you could do would be to run something like Raster to Point, which will give you the elevation values for each cell, and then run Add Geometry Attributes on that to end up with x,y,z. Hopefully that would keep you going until 2.4 hits the streets.
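Any DIY alternative to the Sample tool (including the script posted further down this thread) ultimately rests on the same arithmetic: converting a map coordinate into a row/column index into the raster's array. As a hedged plain-Python aside (the function name and signature are mine, not from any Esri API), note that the row index is measured down from the raster's top edge (YMax), because that is how arcpy's RasterToNumPyArray lays out its result - measuring from the bottom edge instead is a classic source of off-by-one bugs in such scripts:

```python
def xy_to_cell(x, y, x_min, y_max, cell_w, cell_h):
    """Map a map coordinate to a (row, col) index of a top-origin raster array.

    Row 0 is the TOP row (y == y_max). Assumes the point lies inside the
    raster extent; callers should bounds-check in production code.
    """
    col = int((x - x_min) // cell_w)
    row = int((y_max - y) // cell_h)
    return row, col

# A raster with 10-unit cells and its upper-left corner at (100, 240):
# a point at (125, 205) lies 2 whole cells right of and 3 rows below the origin.
print(xy_to_cell(125, 205, x_min=100, y_max=240, cell_w=10, cell_h=10))  # → (3, 2)
```

The integer floor division mirrors the `int(delta / cellsize)` step in the posted script; only the choice of reference corner differs between a correct and a subtly wrong lookup.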
Speaking of raster to point, this will do several thousand in about 10 seconds:

import arcpy

update_feature_class = arcpy.GetParameterAsText(0)
source_dem = arcpy.GetParameterAsText(1)
elev_attrib = arcpy.GetParameterAsText(2)

if str(source_dem) == 'Twin Creeks: Elevation in Feet':
    dem = '\\\\inpgrsms06vm\\GISDATA\\GIS_Final\\data\\basedata\\elevation\\GRSM_10MDEM_2180604\\grsm10dem'
if str(source_dem) == 'Twin Creeks: Elevation in Meters':
    dem = '\\\\inpgrsms06vm\\GISDATA\\GIS_Final\\data\\basedata\\elevation\\GRSM_10DEM_Meters_2180603\\mgrsm10dem'
if str(source_dem) == 'HQ: Elevation in Feet':
    dem = '\\\\inpgrsms05vm\\GISDATA\\GIS_Final\\data\\basedata\\elevation\\GRSM_10MDEM_2180604\\grsm10dem'
if str(source_dem) == 'HQ: Elevation in Meters':
    dem = '\\\\inpgrsms05vm\\GISDATA\\GIS_Final\\data\\basedata\\elevation\\GRSM_10DEM_Meters_2180603\\mgrsm10dem'

rast = arcpy.Raster(dem)
desc = arcpy.Describe(rast)
ulx = desc.Extent.XMin
uly = desc.Extent.YMax

# Spatial reference
sr = desc.spatialReference

rstArray = arcpy.RasterToNumPyArray(rast)

with arcpy.da.UpdateCursor(update_feature_class, ["SHAPE@", elev_attrib]) as uc:
    for row in uc:
        pnt = row[0].projectAs(sr)
        # Row 0 of the array is the TOP of the raster, so measure offsets
        # right of and down from the upper-left corner (assumes every point
        # falls inside the raster extent)
        deltaX = pnt.centroid.X - ulx
        deltaY = uly - pnt.centroid.Y
        arow = int(deltaY / rast.meanCellHeight)
        acol = int(deltaX / rast.meanCellWidth)
        row[1] = rstArray[arow, acol]
        uc.updateRow(row)

I have found "Calculate Value," "Select by Attributes," and "Excel to Table" to all be painfully slow in ArcPro.

Hi everyone, Getting updates on this thread from my earlier comment. Unfortunately, again, only being an Account Manager I'm not technical enough to have specific recommendations on everyone's scenario for resolution, but wanted to pass along some resources again.
If your organization has Technical Support with Esri, I would encourage you to contact them so that the staff can help you look into fixing your individual situation. This is a great blog post on troubleshooting performance in ArcGIS Pro: Troubleshooting Performance Issues in ArcGIS Pro | ArcGIS Blog that may yield some answers. This similar thread: Why is ArcGIS Pro so slow to do anything? also has some commentary from our dev team with some suggestions and tools. Finally, if you're working with a central data source, consider putting Pro on a virtual desktop machine (appropriately specc'd) so that it sits closer to the data and has a stronger network connection (in instances where you're seeing the same slowness in ArcMap and Pro), as ArcGIS Pro will work better in a virtual environment than ArcMap will.

Tripp, I think the universal issue in this and other "performance" threads is: we're not seeing these issues doing the same thing with the same data sources using ArcMap. And how realistic is "putting Pro on a Virtual Desktop Machine"?

I hear you Thomas, and that is frustrating. Unfortunately I don't have any perspective to add as to why it might be happening :/ There's no point in putting Pro on a VDI if you're not pushing/pulling a large amount of data over your network. If this is the case, however, Pro has been improved for virtualization compared with ArcMap. You'll need a GPU-backed VDI, which can be supported with hardware appliances like this one from Dell+NVIDIA. Below is a blog that has a lot of useful links on virtualization at the bottom, and a presentation from UC last year if you're considering virtualization: Virtualizing ArcGIS Pro: NVIDIA GRID Tesla M60 | ArcGIS Blog

I've been using ArcMap since around 2001. It was good from the beginning and, with every update, it was becoming better and better. Pro, in my view, is just one giant failure on ESRI's part.
The lack of familiar basic functionality (even the zoom tools are all messed up), the ever-changing ribbon bar (added just because it's a trend?) - and it is excruciatingly SLOW. In its fourth year, it's still a poorly designed program. My company runs ArcMap in a virtual environment, with all the data residing on a network - and it's fast! A similar Pro setup is pretty much unusable, because no one has time to wait a week for what used to take a day of work. Just recently I tested out a local setup (app+data) and the performance was noticeably better (not that it made me love it more). But for my multi-user organization this approach is impractical. Sorry for the radical thought, but it's time for ESRI to admit that the whole design is flawed, and start from scratch.

I was just talking to one of my coworkers about this; you took the words right out of his mouth. I understand they wanted to integrate 64-bit functionality, which is a step in the right direction, but why did they have to implement the Microsoft model (ribbons)??? Maybe there was no other way to solve this issue... not sure. I would be surprised if ESRI started ArcPro again from scratch. They have too much invested in it at this point. Maybe in a future major release they will redesign the interface, which could improve the user experience. We can only hope.

I could see all of us, long-time ESRI users, complaining about some third-party program and how it goes against all our habits and expectations as users. But Pro is from ESRI! It feels like the developers just completely ignored and betrayed their users' experiences! I almost want to say "ArcMap was not broken, so..."

Just adding my 2 cents here. If I add sub-types using ModelBuilder, it takes much less time than if done via the UI. Same action, just done an alternative way, and the time is far less. So it's still the application and not the network, really?
Just wanted to post this here since it looks like many users prefer the ArcMap layout... you can make an ArcMap-style toolbar for ArcGIS Pro! This blog will show you how, along with other tips and tricks: ArcGIS Pro: Ribbons, Toolbars, and UI Hacks Also, here is a link on how to translate common workflows from ArcMap to their equivalents in ArcGIS Pro: For ArcMap users—ArcGIS Pro | ArcGIS Desktop Hope this helps ease the frustration! Ashley

Well said. I do greatly appreciate some of the increased functionality of, and access to, cartography tools, and like that masking isn't buried four or five menus deep. However, I've just begun using ArcPro outside of webinars or workshops, and my first instinct now is that the ribbon design is going to take so much more clicking in repeating workflows that require more than one ribbon. I did find where you can make your own ribbons and sections within that ribbon (customizations, like in ArcMap), but I haven't yet determined if building my own ribbon with common tools will overcome the inefficiency of the ribbon structure. I'm beginning to wonder if it was really necessary to take away toolbars. I also greatly agree with another comment about projects. I often open ArcMap and view data without ever saving the mxd. Requiring a saved project will create so many junk projects.

Agreed! I too have found ArcPro to be effectively unusable; it's too slow, it crashes, and I'm having issues with not being able to update feature classes. The ribbon user interface looks good but is painful to use. The arrogance of presuming to know what you are doing dictates what you can access directly; I'm forever hunting and switching through the various ribbons to access the functionality I want. It also takes up a lot of screen real estate...
Time for reflection and words of wisdom from Dilbert on software alternatives.

I was having a terrible time with Pro hanging up and crashing and just being excruciatingly slow, even on fairly basic 2D maps, just panning around and zooming. I used the frame rate display (Shift + E) and the Diagnostic Monitor (Ctrl + Alt + M) and found out that I was having major "hangs" and it was using Direct3D. So I switched to using OpenGL in preferences, and it has made a huge difference. It was basically unusable before and now it is not too bad. There are still spots where it is slower than I'd like, but overall it is at least as usable as ArcMap for most day-to-day things. I don't know why it made a difference, as I am only working on 2D maps and you wouldn't think the GPU would matter, but in my case it definitely made a big difference. I'm on a laptop which has a basic Intel graphics card and also an NVIDIA card, and depending on what you are doing, it uses whichever card is best for power vs performance. Maybe when I was on Direct3D it wasn't even using the NVIDIA card, and switching to OpenGL forced it to use that graphics card (?). I don't understand it, but it worked for me. (For now, anyway...)

It's a pity. I simply cannot find enough time to explain and describe everything that makes me cry using ArcGIS Pro. It would be a long, long story of crying. Nobody will listen to a long, long story of crying, I understand that. I thought with the 2.1.1 update I would be able to see at least some improvements. Nada. Niente. Nix. Null. Zero. It is very disappointing for me. And very frustrating at the same time, because I am the guy who should bring ArcGIS Pro workflows to our company. And I simply don't understand ESRI.

Ever since downloading ArcGIS Pro I have been having issues with both ArcMap and Pro. I re-downloaded and installed both programs again today; the last time was this past August. This process consumes an inordinate amount of time.
Today I attempted multiple times to export a joined table to a shapefile and geodatabase. The Copy Features tool stalls at 6% and hangs there for hours until I close down the program and reopen it. ArcMap will no longer do an optimized hot spot analysis on point data, with no explanation at all. I then take the same data with the same process, and it works fine in ArcGIS Pro. It seems to me ArcGIS Pro intends to be a money machine for ESRI. Charges for every analysis, supposedly in ArcGIS Online only. Does anyone else have negative credits on their account? Working with ESRI products has become increasingly frustrating over the years. I believe they are focused on the development of their offerings and silo their subject matter experts, inhibiting a manageable integration of the parts into a workable whole. The bottleneck created by the software hurts my brain, decreases efficiency, and inhibits creativity. Note the effort to export the joined file referred to above was completed: "Start Time: Tuesday, March 27, 2018 4:52:35 PM Succeeded at Tuesday, March 27, 2018 6:09:20 PM (Elapsed Time: 1 hours 16 minutes 43 seconds)"!

To me the frustration stems from the perception that ESRI expects its customers to QA/QC their products for them by submitting arduous and time-consuming bug reports (and replications) to support. That might be fine for public sector workers, but I work for a for-profit company and time is money. We pay a lot of money for ESRI products, and the level of bugs and issues that go out the door and then remain unaddressed (especially in Pro) is unparalleled in my experiences with other software companies (like Adobe). One example is that when opening a project, snapping remains on even when the snapping button (in Layout) is turned off. You need to cycle the button twice before it actually turns off. When the project is saved after snapping is turned off, it gets turned on again when you reopen the project (with the button showing it's off).
Every time you open that same project you need to cycle the button a couple of times. This very obvious bug has never been noticed by the dev team, despite being there for 2 or 3 major patches now? The snapping example may not seem like a big deal (and it's not), but it's indicative of what I can only assume is an extremely poor QA/QC process that I can only imagine is there to save money. There is also an issue with symbology updating causing Pro to crash when you change the data source of a layer, which has remained in Pro, gosh, since 1.0.x, despite bug reports that I took valuable time to submit.

Tyler: Can you provide a link to the symbology update bug in Pro thread, as I would like to try to reproduce it on my system in 2.2.0?

Take a raster, apply a color ramp symbology to it (in my case I use "classify" symbology). Change the source to another, different raster (right click the layer, Properties, change source) and see if that causes a crash. It's as if the classes that are set with the old raster don't match the new raster and Pro crashes because of it. The workaround I have found is to remove the symbology (back to "stretch") on the raster and also hide the symbology pane first before changing the source, and then reapply the symbology after you have changed the source. The reason I am changing sources instead of adding new data and applying symbology that way is because I have many layers set up as templated symbology styles where it's very quick and easy to simply change the source of the layer (preserving the symbology) instead of restyling all the symbology every time. I also employ saved symbology layers (.lyrx) and apply symbology to layers that way as well. I have had similar issues with polygon feature classes (like block groups) using the same process, but it's less frequent and inconsistent. I am not sure where to find the thread you mentioned.

Tyler, does this crash for you on 2.2? I followed the steps you provided but Pro doesn't crash.
Could you send me example data and steps to reproduce so that we can take a look? Thank you!

Kory Kramer I was able to repro the crash. I'm CC'ing you on the dump.

Tom - you said you repro'd, so I tried again with the second raster as a layer in the map and got the crash. Will get this to the framework team right now to analyze. Thank you!

Kory, I just tested as well and I am still crashing, and it's reproducible. Where can I find the crash dump report and how do I get it into your hands? I outlined the process in my post above. The raster in question is a raster derived from Kernel Density using various UTM projections (depending on what part of the country our study is in). Perhaps I need to capture a video to better show you what's happening? Let me know on both of those points. Out of all the crashes I get, this one is the most common and most frustrating, so I would love it if we can get this resolved. Pro v2.2.0 Thank you!

Thanks, Tyler. I really apologize that you're experiencing this crash. I was able to reproduce it, so we have the crash dump file - not necessary to get anything else from you at this point. Of course we never want to see crashes, but when they do occur, please fill in the error report along with your email and a description of what happened. This can help if we end up needing to reach out for more info. More here: Report software errors—ArcGIS Pro | ArcGIS Desktop "Error report files have the extension .dmp and are saved to the application data location on your local hard drive, typically C:\Users\<User Name>\AppData\Local\ESRI\ErrorReports. The 10 most recent reports are saved." Thank you.

Yep, filled out the error report probably at least 100 times for this exact crash, and I've watched multiple versions come and go with no acknowledgement or fix.

Tyler, you talk out of my heart. Every single word you say is true. Now, they want me to be more constructive in my complaints. I understand that.
But the fact of the matter is, even if I want to (which I do), I don't have that time. Simple as that. In my work, I am heavily involved in software development and I can say that it takes a lot of time and many emails to get things right and done. I cannot see how I could do that for ESRI. And for what? Yes, to participate in their work free of charge, while still paying a lot of money for what should be working software. All in the good spirit of the ESRI community. No problem, I can do that - one day when I get some time, when I am not at work and when I am in the mood (not resting, exhausted from my own work). Except that, when I see how much they care year after year about what all of us users write, I get progressively out of that mood. And that could be a problem. Cheers!

I just did a Pro training course. For simplicity, I took our (admittedly elderly) dual-core 2.2GHz, 4GB Windows 10 laptop. This has run Desktop (many versions) perfectly fine for years along with a suite of other GIS/DB tools, and seemed to work okay with some basic testing of Pro after installation. Once I loaded up some tutorial data for exercises, it was agonising. 99+% CPU usage, most of the GPU, and perilously close to thrashing the page file (I shut down everything possible to shut down in Windows), running consistent 1.5-2GB RAM usage. It was basically unusable, with 1-2 minute waits to do anything in Windows, and I could see Pro literally rendering every object second by second. Sometimes it just gave up and said "meh, close enough for government work". This was with a few tiny tutorial datasets and the web basemap/hillshade. I didn't even dare try 3D, seeing as Pro was unable to even handle a file explorer dialog. After falling further and further behind the class simply because Pro does not run on a Desktop-capable computer, I gave up and ditched it for my workstation (2014 quad-core 3.4GHz, 16GB), for which it was fine, although the NVIDIA Quadro struggled a bit with frame rates in dynamic 3D.
Looking at what it was doing, I think Pro is spinning up a large number of background processes/services which eat up resources (geoprocessing handler, job handler, etc.). I could also see it dynamically building displays, panes and elements, sometimes rejigging them in real time as things opened and changed to get the desired layout, so there is a lot of work going on under the hood which is much more demanding and sophisticated than Desktop. Every time you click on something it is running through a whole set of rendering/workflow/data management dependencies. Short story: unless you have a fairly new computer, expect that you will have to upgrade it to run Pro.

This is precisely my experience as well. But it's not completely because you have a slow computer. Even with a very fast computer, it runs slow. As you mentioned, it seems Pro needs to re-render and reprocess everything anytime you click. I am running an Intel i7 2.8GHz with 16GB RAM and an NVMe SSD hard drive, and I get the spinning wheel anytime I click something in Pro (and this is the second computer I've run Pro on). This is not acceptable.

Are you able to submit that very obvious performance issue to tech support? I see pretty much the same thing.

Will do.

I'll just add that both computers used above have standard (not premium) SATA SSDs, and all the data, OS and temp data was on them, not a spinning platter. We're looking to go to M.2 NVMe for our next hardware upgrade, based on where high-end gaming is going. How do you find this works in a GIS environment?

It doesn't. Having to buy a gaming computer in order to run GIS software that previously didn't need a gaming computer is "unsustainable", quoting the folks that I had to ask for $ with which to buy the computers.... To your specific question, I think you'll get more ROI from a faster processor/more cores than from a high-end NVMe drive.

Oh of course, it's CPU-bound, I could see that easily.
I was interested in how you found the technology compared to contemporary storage. We've had the same issues with cost. Senior managers have difficulty understanding that a bog-standard micro desktop suitable for government form-fillers with some light Excel, Word and Outlook use is completely useless for GIS/geoprocessing. We've actually had replacement requests knocked back because we "don't need a gaming/multimedia laptop" or even "you don't need a laptop, you can use a desktop" (for field work?) and had to escalate back up the chain to get the hardware needed for the job. Even ICT doesn't get GIS and big data/processing requirements. In the last PC rollout they dodged the question of backups, and when I complained (while they were decommissioning machines) that we had sixty 1TB drives to copy, they said "there isn't enough network capacity for that, just back up your data to usb thumb drives" *snort* Sure....

Replying to myself, how narcissistic! Ego aside, here's an update for those interested in the hardware side of Pro requirements. We're about to replace our Dell Precision quad-3.4GHz 16GB with Lenovo ThinkCentre hex-2.4GHz 16GB (don't ask why we are 'sidegrading', public service). Of particular interest is the comparison between the 2014-era consumer Liteon 256GB SSD on SATA and the 2019 prosumer Samsung 500GB SSD on NVMe. I won't go into the other benchmarks but they are roughly comparable as shown.
I still haven't rebuilt my gaming rig (shame, all I can run is Warframe. Badly) but I'll update you with the results, although that example won't be so clear cut as there are more variables beyond the primary drive being changed in support of *ahem* real-time dynamic rendered entertainment suites. Finally, if you don't have the opportunity to move to NVMe and must stay with SATA (for example, you do not have M.2 support on your computer) be aware that not all 'standard' SSDs are the same. Samsung's (and others') drives are built with different tech; it seems the big difference is that the QLC NAND in the Samsung QVO is cheaper but less durable than the TLC NAND in the Samsung EVO, and maximum speed also drops. Ref. "MLC vs TLC vs QLC: Why QLC Matters" tldr: Replace your old SSDs with TLC SSDs for maximum performance and durability. For Samsung this is the EVO gamerspec model, not the QVO desktop model. This article was not sponsored or endorsed by any of the manufacturers mentioned and remains my subjective and probably worthless opinion only. Edit: Added world benchmark link for the Liteon in case you thought I'd accidentally benched my optical drive (yes the Dell Precision has one). Who knew Liteon SSDs were even a thing? Surprise!

ESRI, it's a shame that we have to have discussions like this four years after the product was introduced! Not all of us users have enough time, chance, clout or privilege to test and choose hardware. Most of us work with what's available. With ArcMap, all we had to ask our IT was "Give me enough RAM" (if that). I have stronger words to express my feelings, but...

This forum thread is like an echo chamber. But it would be really interesting to know how/if ESRI is going to address this problem. I'm wondering if someone is addressing it during the User Conference?

To be blunt and to the point, the only way it's going to get addressed is via tech support. I have at least 2 cases, with more in the pipeline, just on performance. SDE performance.
Blowing up my CPU cooling fans. Some GP processes take twice as long as in ArcMap. The so-called "recommended" hardware is in fact "barely starts Pro" specs. Here's how this works: I have an SDE case, and they find that the Transmorgifier, only present in a few customer environments, is generating excessive SQL queries to the bunny rabbit. You open your SDE case, and they find that the Supercalifragilisticexpialidocious Equalizer is spinning in the wrong direction if you installed Pro on the 3rd Wednesday of an odd month while it's raining and you forgot your lunch. You share this on GN, and I discover that my Supercalifragilisticexpialidocious Equalizer doesn't spin at all. These are things that no QA Test Plan will ever discover, and no intern in the test lab is ever going to think of. So, instead of one customer ******* and moaning to TS about SDE (me), there's now 2, or 4, and they have enough SDE Intercepts and Traces to actually change some code. But don't be discouraged about sharing your experience on GN. I probably solve half my problems on GN based on y'all's input, which keeps the TS lines open.....

I posted it already somewhere before - a major redesign of Pro is needed. But it's $$$$$$$$$$. Loved your explanation.

I'm of the same mind as Tyler. I'm frustrated because even though we've all paid for completed software I think we are experiencing incomplete and extremely buggy software. UI testing and QA/QC needs to be on the developers' time and money, not the users'. It says something when I'm taking time to learn an entirely new (to me) open-source GIS system because time-wise I'm getting a similar ROI to what is supposed to be the new flagship product for the GIS system I've used for 17 years. (FWIW - I've already spent a LOT of time on tech support calls too. I'm hoping that all of that time will help them make improvements. However, it's disheartening when they still haven't fixed basic performance issues like table editing in 2.2).
Thomas you bring up a good point that there are some things, so specialized, that no QA/QC team will discover, and I accept that. I think that ESRI should incorporate SDE data from multiple types of databases (SQL Server and Oracle) into the Pro training classes (currently they use file gdbs), so the trainers can see first hand some of the slowness issues that Pro clients are experiencing with these more complex datasources. I think then internal ESRI people might help to persuade ESRI management to focus more effort on improving the base product instead of focusing on bells and whistles for tools in Pro that only very specialized GIS people use. My company has the same issues. EVERYTHING runs slower when on a network. I do not understand how, in this day and age, this is a problem and why it is not being addressed immediately. We have been working on a network slowdown issue for a year with Esri to no avail. 1.4 works fine but when we upgraded to 2.X everything dragged. Couldn't agree w/ briertel more. Why have read/write speeds slowed down with the 2.x version of Pro?? It has made the product completely unusable since our team continuously shares projects w/ each other over our server and are accessing 5+ gb of data. Changing symbology of a SDE point feature class in Pro 2.2: Note how long it takes. Looking at wireshark logs while this is occurring, an incredible amount of traffic is going back and forth to SDE. Why? Because in Arc Map, I can change the symbology 5 times in the same amount of time it takes to change symbology in Pro. And guess how much back and forth is occurring with SDE when I do this in Arc....not much... Performance gets even worse when you turn off caching for a map layer, which was the solution to BUG-000113527 ArcGIS Pro Attribute pane fails to dynamically update data for values calculated through SQL for published feature services. 
The result of the bug that I logged on this in 2.1 is "Implemented in 2.2", so I'm hopeful that others in the community are experiencing this same specific issue: "Changing symbology takes 2-10 times longer in Pro than the same exact SDE data source does in Arc" and can get some attention on this issue that, judging from the other posts in this thread, probably affects a very large number of enterprise customers. I plan to reopen the case, but expect a "cannot reproduce" outcome...

Thomas... so the issue seems to be with how SDE and Pro interact... ArcMap, you suggest, behaves nicely with SDE? All the time? Do you have issues with locally stored data? I am trying to get my head around who is the 'villain' in all this.

In terms of data access, locally stored data is always faster, in Arc and in Pro. However, even with local data, there are other Pro performance issues. I just used what I think is one of your Py decorator examples to time stamp some common GP tasks. Every one of them is slower in Pro/Pro Py. Neither behaves nicely with SDE all of the time, IMHO; it's just that Arc Map "chatters" less with SDE than Pro does. That much is evident from numerous traces, profilers, and perfmon counters. Back in the good old "T1 Line" days, you couldn't even connect to a remote SDE with any ESRI product. So a lot of this is a function of your WAN speed, but, like I exemplified above, even on the same LAN connection (a 7 MB VPN today), Arc outperforms Pro.

Thomas: Besides Wireshark, do you have the ability to monitor database activity from Pro with TOAD software, as I don't believe Wireshark provides details about database connections? I say this because in 2 different Oracle databases (one SDE, one spatially enabled with SDE ST_Geometry libraries), I am seeing many more database connections in TOAD from Pro than from ArcMap. For 1 user connection in ArcMap, there were 3 or 4 user connections from Pro. Have you seen phenomena such as this in your extensive testing of Pro?
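A minimal version of the timing-decorator idea mentioned above can be sketched in plain Python. Note this is a generic sketch, not the poster's actual code: `buffer_features` and its body are made-up stand-ins for a real arcpy geoprocessing call, and no arcpy is required to run it.

```python
import functools
import time

def timed(func):
    """Decorator that reports how long each call to func takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__}: {elapsed:.3f} s")
        return result
    return wrapper

@timed
def buffer_features(n):
    # Stand-in for a GP call such as arcpy.analysis.Buffer(...);
    # here we just burn some CPU so there is something to time.
    return sum(i * i for i in range(n))

buffer_features(100_000)
```

Running the same decorated wrappers in an ArcMap (Python 2) session and a Pro (Python 3) session is one low-effort way to get comparable per-task timings to attach to a support case.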
No, the only other thing I've used for SQL benchmarking is GitHub - clinthuffman/PAL: Performance Analysis of Logs (PAL) tool, but not for this issue. TOAD looks like it has some cost to it, which would take about 97 years for me to get through the approval pipes and levers.

Tom, I realize that you are working on this as a case with technical support. I think you've referenced the "Can You Run It" results earlier in this thread. But can you share the specifics about the video card from the machine where you captured those .gifs? And yes, the video driver is the most recent. Thanks. Same behavior whether I assign Pro to a profile or not.

How can this topic be marked "ASSUMED ANSWERED"? This thread has been going on since August 2017, and multiple people are experiencing slowdown issues. Can someone from Esri PLEASE tell everyone what they are doing to solve this crucial issue?

I'm the owner of the thread and I don't even consider it answered... seems like ESRI just set it to that for me. I was looking for a way to mark it back to OPEN but couldn't find anything.

Do you want it marked as a Discussion? (that can be done) It is getting stale-dated, given the version changes since the initial post. Let me know.

Well, Esri staff (and others) have been replying recently so I am hoping it is still on their radar. It should be marked in whatever way makes them address the issue asap.

Has anyone reported this as a bug?

I believe several cases are open on this issue; I have a few myself. Unfortunately we're in "Cannot Reproduce" territory, but discussing at least weekly as we churn through different test and logging methods. Which leads me to: it would really help me, and others having the problem, if you'd log a case with good steps to reproduce in Arc and Pro that show the same task not exhibiting the performance issue in Arc but exhibiting it in Pro.
I will say, applying a blanket "Everything is slow" won't get a lot of attention; my cases focus on two very specific tasks, with the idea being that there is one specific cause that will resolve all of them. ESRI has acknowledged "There is no question you're having a problem" based on hours of screen sharing (with one analyst saying "I don't see how you get anything done"). So far we haven't gotten close to the "glaring and obvious problem", so again, if you are having a problem, I'd love to see your details on GN, but I'm hoping one day ESRI will call me and say "User so and so submitted this SDE case and we found the problem! Here's the fix...". 1 888 377 4575, Option 2, Option 1, customer #......

Thomas: Do you have any idea if the ESRI analysts you have been working with think the problem is database agnostic (does not matter whether it's SQL Server, Oracle, Postgres, etc.)? I only ask this because I have logged bugs with ESRI in regards to SDE and Pro that are specific to Oracle. This has also been the case in the past with ArcMap. Bugs that apply across multiple DBMSs would probably get more attention from ESRI than bugs that are tied to a specific DBMS.

So the user has to sacrifice display quality in order for Pro to perform normally? Doesn't seem like a realistic solution at all. Pro runs slower when working on a project on a network. Period. Even if you lowered the display settings it would still run slower than if you ran the same project locally.

Just in support of your comment, if you hadn't seen it already: further up the thread Thomas Colson reported seeing some pretty intensive network usage for simple operations.

As Thomas just discussed, general whinging of the "it doesn't work good" variety will be passed over by Esri. What we need to do is show provable and replicable examples of how it does not perform properly in specific cases, and how there is a demonstrable difference between Desktop and Pro in the same operation.
I'm sure Esri is already doing/has done this, but once isolation tests are done, you can get down to specific cases and the parts of the application causing issues can be optimised or fixed. Probably what is happening (or what we often see as GIS developers) is that one minor issue conflates with some other issues and maybe a bug, or sporadic problems with the OS/network/database to create a noticeable performance problem. What the user sees is "doesn't work/grinds slowly" but actually solving that can be more complicated than just fixing one thing. Not passed over, just not actionable. General whinging is really frustrating for development teams. Absolutely! This is productive, but admittedly, is the hard part that may (will likely) require time troubleshooting on the user side if we cannot readily reproduce a performance issue from a given description. We are committed to continually improving performance and we'll be much more successful accomplishing that by working together to get to the cause. Thanks for the balanced assessment Andrew Quee Nothing to do with all the network issues, but I've had a pretty good performance boost by turning off all the antialiasing options, and setting rendering quality Low in the Options--Display menu. Might be worth a shot for everyone following this thread. Thanks for that. I thought those options would depend only on the GPU - I was running 40% load most of the time so I didn't try them. I was consistently CPU bound, pegging it to 99%+ for the whole time. Worth a shot to test though. If Pro is using the CPU to render graphics and ignoring your video card's capabilities, well that explains a lot, in particular why this thread exists. I am working to update python scripts run with ArcMap 10.5.1 to python scripts run with Pro 2.2.1. 
I am noticing output differences from geocoding which will necessitate updating the python scripts and while researching this topic by performing the geocoding process manually in either ArcMap 10.5.1 or Pro 2.2.1, I have noticed that ArcMap 10.5.1 (the old 32-bit software) is significantly slower than the latest and greatest Pro 2.2.1 (new 64-bit software) with this geocoding process. Starting from a table with approximately 222,000 records I get a geocode throughput of 28 million records per hour for ArcMap 10.5.1, but only 15 million records per hour for Pro 2.2.1. The computer running the Pro software exceeds specs in all areas, so I'm wondering if there is a configuration issue with my geocode setup to cause it to be twice as slow as ArcMap (I would think it should at least be equivalent if not faster)? I have a few examples of GP-Tool -> Python stuff working slower in Pro than in Arc that I'm not quite ready to share yet, however, might I suggest moving your PY question as new one to the Py section, where Dan Patterson will likely have an answer before you post it! Are your GP Tool performance issues complex compared to just running the Geocode Addresses tool on a standalone table in my test case? I ask because I would be interested in testing your GP tool scenarios in my environment to see if I encounter the same issues. Your idea to publish AGS services directly from Pro should be available with AGS 10.7 as per ESRI product manager. Would your org be able to upgrade to 10.7 quickly after it is released (a few months) or does this type of process take a considerable amount of time (6 months or greater)? I'll share with you my toolbox. It's the same one that I've used in a couple of cases. Incredibly complex! 
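The geocoding comparison above is stated as a records-per-hour rate. As a sanity check on figures like those, the conversion from a record count and an elapsed time to a throughput rate is straightforward; the function below is a hypothetical helper (not an Esri tool), and the 30-second figure in the example is an assumed illustration, not a number from the post:

```python
def throughput_per_hour(records, elapsed_seconds):
    """Convert a record count and elapsed wall-clock time into a
    records-per-hour throughput figure."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed_seconds must be positive")
    return records * 3600 / elapsed_seconds

# Assumed example: a ~222,000-record table geocoded in about 30 seconds
# works out to roughly 26.6 million records/hour, in the same ballpark
# as the ArcMap figure quoted above.
rate = throughput_per_hour(222_000, 30)
print(f"{rate / 1e6:.1f} M records/hour")
```

Timing the Geocode Addresses run in both clients and feeding the elapsed seconds through a helper like this gives directly comparable numbers for a support case.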
On the 10.7.x thing, we have a loosely enforced "check it before you wreck it" policy in the datacenters, which I myself follow strictly (as I'm the one usually doing the testing and certification), and which will soon be a non-negotiable rule, with perhaps as long as a year or more before a new enterprise version gets rolled out past testing. For example, there was a M$ issue with 10.6.1 that caused us to put the brakes on testing, and now I'm certifying and finding a few more minor issues (that I'm not quite sure of yet, and will sit on till I can repro them).

Just to chime in - in my case I was having similar issues. I access all data over the network. I finally discovered that my layout must have had a corrupt element. This seemed to affect many aspects of the software even when not working with the layout. I simply exported the layout from Catalog, deleted the layout from Catalog, then reimported, and now the software is working really well. I hope this might help someone.

Thanks for sharing, Jeff. I don't suppose you'd have a project with a "bad" layout anymore?!?!

Sure, you can grab a copy of the empty project here: Dropbox - Demo_Corrupt_Layout.aprx It doesn't really exhibit noticeable issues until you add a few dozen layers. It doesn't seem to matter if it's local or network data. It gets progressively worse with more layers. The most common issue I would face with this project is UI hangs. For example, I would get 30-45 seconds of UI greyout after doing something like "make this the only selectable layer". Drawing speed has always been fine. Once the layout is completely removed from the project the issues are gone. I hope this helps a bit with your research. I would appreciate a follow-up if possible.

Kory Kramer I haven't tried to repro (it looks like the issue might be happening when there are 100+ layers?), but: Note the commas and spaces in the path and that the path is populating: Thanks Jeff Meyer and Thomas Colson.
I'll try to find some time to look into this more today and will report back. Jeff - from what I've looked at, it isn't clear that this is dependent on the project you sent. I open a fresh project and when I have a number of layers in a map (like you said, maybe several dozen) I see severe hangs when making a layer the only selectable layer, or collapsing all of the layers to not show the symbol... Looking at this with the development team. Thank you for providing some tips on how to reproduce the hanging.

Kory, while you're on it, could you please keep in mind, and possibly take up with the same dev team members, the performance issue and suspicion I raised concerning "Tasks" and tool validation in Pro: ArcGIS Pro: Stop queuing senseless "UI Tasks" (performance issue) This may be related, and I have never heard a definitive answer from you on whether this was actually being looked at and worked on seriously by the dev team, while I think it should be.

Thanks Kory - let me know if you hear anything. Our jurisdiction is watershed based with over 2000 km2 of large-scale data. Rendering speed is always lightning fast, which is why I rolled it out for my users. I didn't anticipate all the UI hangs, or whatever is happening.

Would you know if the corrupt layout was imported from ArcMap or created brand new in the Pro project(s)?

Wanted to add my two cents since this thread still seems somewhat active and I have not seen a solution anywhere that solved my “Why is everything so slow?” performance issues. In my case, it turned out to be OS settings. I added the ArcGISPro.exe process and common ArcGIS file types to Exclusions in Windows Defender. This solved the biggest issue of AntiMalware Service Executable sniffing and dramatically slowing down all incoming and outgoing connections. This helped performance, especially for network-hosted datasets.
[Windows Defender Security Center > Virus & threat protection settings > Add or remove exclusions] You might have a look at other antivirus/malware software that could be scanning files actively used by ArcGIS Pro and add exclusions there as well. Next up was graphics performance. Downloaded and installed the latest drivers for the graphics card directly from the manufacturer; Windows Update was not automatically updating for me. Set graphics settings to “High Performance” for the ArcGIS Pro application. This will make sure graphics processing is done by the graphics card if available, rather than the CPU. For other laptop users, you should see a slight improvement when setting power mode to “Best Performance”. Not sure if this is an option with the desktop version of Windows 10. This won't help every case, but might be a good place to start if these settings are at defaults.

For most organizations, excluding anything from security controls is not an option, and not something that had to be done with ArcMap. We have the same AV problem with ArcGIS Server: AV scans of the install directory slow it to a crawl.

Thanks Matt. Never thought of this. I'm going to give it a try.

I've been looking at Pro for the last few years. Always the same conclusion: it is too slow (mind-blowingly slow). I remember going from ArcView 3 to ArcGIS, and this is not the same user-resistance issue. As an ArcMap user I had the same wish list as everyone else:
- multithreading - to speed things up with my multi-core CPU. Result: Pro is unbearably slow
- antialiasing - to see things as they print, or make better presentations. Result: it does anti-aliasing, but is so slow I can't use it
- 64-bit - I want to use more memory to be faster!
Result: Pro is unbearably slow
- several layouts in a project - I want to be more productive, have fewer projects for a single task. Result: Pro is unbearably slow
- cartography/symbology - I want better/more rules so I am more productive and can avoid complex layer/scale range/symbology setups (same layer loaded multiple times, different query defs, scale ranges, symbols). Result: Pro is unbearably slow

So the result is - I can't get the good stuff that would make me abandon ArcMap. As time passes and ArcMap gets further behind I'll be looking at the competition for a new best-in-class GIS desktop product. I already found one that offers all of the above items (QGIS). If Esri doesn't/can't make a good product I will move on. I'm a client, I am not a fan. I owe you money, not loyalty. You owe me quality service.

Thanks @Duarte Carreira for this post. I was already thinking I was alone. I would like to write a post of my own, but there are so many things I would like to address about this sloppy thing called ArcGIS Pro that I simply cannot find the time for that. Unfortunately, ESRI is completely wrong in so many ways related to this issue and they either don't get it or don't want to get it. It is a real pity.

Wow! And to think, they will keep peddling Pro at the ESRI User Conference as if it were the best thing ever. I laughed at them the last time I attended the UC when they smiled and smirked about its 64-bit capability. No, Mr. Dangermond! It's a useless product! Maybe we need a quiet revolt at the UC and say enough is enough; challenge the presenters and developers on its lack of performance! ESRI, please take back Pro and keep ArcMap!

Hello, I want to reiterate what Michael Volz just replied. I had opened a ticket last month with ESRI Premium tech support, and used the ‘Select By Location’ tool as an example in that case.
The simple selection I was doing was selecting the parcels on a remote SQL Server with one district polygon – ultimately selecting a subset of about 200 parcels. In ArcMap this operation takes about 1 – 2 seconds, but in Pro it took over 8 minutes on my end. Thankfully ESRI could reproduce the error (faster than 8 minutes, but still unacceptable) and the results are below. They have logged it as a bug and will send it on to development. The issue seems to be working with any remote database… Pro is very chatty and slow in that regard. Overall, I like using Pro and use it every day, but I agree that the performance needs work still, and this issue is preventing us (LA County) from fully switching over. Like has been said earlier in this thread, if we all work with Tech Support on different geoprocessing tasks then they will get a loud and clear message that the software needs further work. This Select By Location task was just one example; I've had performance issues with joins, exporting to Excel, spatial joins and other fairly straightforward geoprocessing tasks.

From ESRI…

File Geodatabase:
– ArcMap 10.6.1: Less than a second.
– ArcGIS Pro 2.2.4: Less than a second.
– ArcGIS Pro 2.3 Beta: Less than a second.

SQL Server:
– ArcMap 10.6.1: Less than a second.
– ArcGIS Pro 2.2.4: 3 minutes, 45 seconds.
– ArcGIS Pro 2.3 Beta: 3 minutes, 9 seconds.

Oracle:
– ArcMap 10.6.1: Less than two seconds.
– ArcGIS Pro 2.2.4: 3 minutes, 57 seconds.
– ArcGIS Pro 2.3 Beta: 3 minutes, 25 seconds.

PostgreSQL:
– ArcMap 10.6.1: About 3 seconds.
– ArcGIS Pro 2.2.4: 1 minute, 43 seconds.
– ArcGIS Pro 2.3 Beta: 1 minute, 3 seconds.

Thanks, Dan

Dan: I'm not sure you have data that can be tested like mine, but I have a bug created for SDE performance in Pro for a specific scenario (no editing involved, which is even slower) that you can try replicating with your data.
I have a fairly large feature class with about 28,000 records that has a relationship class to a table with about 5,000 records. One of my auditing steps to find orphan related records is to select all the records in the feature class and then open the related table, where all the related records would be selected. Any unselected records are orphans. This operation can take anywhere from 20 - 50 seconds in ArcMap 10.5.1 but consistently takes 3 - 5 minutes to execute in Pro 2.2.4. The bug for this performance issue is BUG-000119165, which if you can replicate then you can ask to be added to this bug. My hope would be that the more people that can report the same issue to ESRI, the more likely ESRI will attempt to address the issue. I also hope that whatever ESRI finds to be the root cause of this problem will dramatically increase the speed of editing SDE data in Pro as well, which would be a major step to the adoption of Pro in production for my organization (instead of just being mostly sandbox software).

Hi Michael, I will try a similar workflow to the one you described when I find time to do so. It's good to hear that you've logged this as a bug, and it sounds like a similar situation to mine. I share your hope that more people will report these issues to ESRI and send a message about these performance problems that a lot of us are having. Thanks, Dan

I tried your workflow on my system, but I did not see the drastic performance issues that your data show, so it would be interesting to see how slow my workflow would be on your system. When you say your server is remote, what do you mean exactly? At my org the servers are located at a remote location, but are connected by fiber so the throughput is very good (I'm not sure of the metric on this though). How is your remote server connected to your network, as this could be part of the issue?

ArcGIS Pro is SLOW, really SLOW. That's a fact and it means it's not a case-by-case issue.
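The orphan-record audit described above (select everything in the feature class, open the related table, and treat unselected rows as orphans) is, at its heart, a set difference over key values. A plain-Python sketch of that logic, with made-up key and row data for illustration and no arcpy dependency:

```python
def find_orphans(feature_keys, related_rows, fk_index=0):
    """Return rows from the related table whose foreign key has no
    matching key in the feature class (i.e. orphaned related records)."""
    keys = set(feature_keys)  # set lookup makes this O(n + m)
    return [row for row in related_rows if row[fk_index] not in keys]

# Hypothetical sample data: feature-class primary keys vs. related rows
# keyed by the same ID field.
features = ["F1", "F2", "F3"]
related = [("F1", "inspection A"), ("F9", "inspection B"), ("F3", "inspection C")]
print(find_orphans(features, related))  # the "F9" row is an orphan
```

In an actual geodatabase workflow the two key lists would come from cursors over the feature class and the related table; running the comparison client-side like this avoids the selection round-trips that make the in-app version slow.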
It's a general issue; every user has noticed and experienced how slow Pro is. At this point, ArcMap remains Esri's number one product. I believed that Pro was supposed to run next to ArcMap and ArcCatalog on the same machine! At least, that was what we were told. If so, why would 8 GB work fine for ArcMap and not for Pro? Far from being technical, let me flatly say that ArcGIS Pro thinks too much before executing any type of geoprocessing. It is frustrating. If you want to turn work over to your boss ASAP, ArcMap is the way to go, sadly. However, don't get me wrong; I like the ArcGIS Pro concept. I use ArcGIS Pro frequently, and I like using it. It would have been a game changer had Esri not rushed its release. It is unfinished software. The base architecture needs real work, not patches. Honestly, ArcGIS Pro is user-friendly software once you familiarize yourself with it, and it has great potential as well. There are things you can easily do in Pro that are quite daunting in ArcMap: editing, 2D to 3D and vice versa, data sharing, ArcGIS Online connection, Tasks, Link Views, and much more. I would like to finish by saying that what Esri needs to do at this juncture is take a step back, rework the whole ArcGIS Pro project and come up with a more robust and efficient replacement for ArcMap.

Hi Ezi, Can you describe this further: "let me flatly say that ArcGIS Pro thinks too much before executing any type of geoprocessing" I am guessing that you mean validation in the tool is taking too long for a tool you are running? After you input some values into a geoprocessing tool, and before the tool is run, the framework evaluates that the values you entered are valid given the requirements of each parameter. If you can provide an example of a tool or tools that take a long time to validate in your workflow, we will look into improving them. Thanks, Drew

Ezi Molley, I completely agree with you. And I'm going to vent a bit more now...
This is not a very specific case scenario where we have to help debug the situation and provide as much reproducible data as we can to tech support. ArcGIS Pro does not fit that situation. We are not talking about exotic setups or workflows. Everyone knows Pro always runs badly. This is basic QC responsibility and someone knowingly let the product go out to the public like this. Any guy/girl at Esri with a common PC will easily encounter the same issues we are all facing. Please don't tell me you don't know about network performance or you didn't detect the glaring rendering and geoprocessing issues. Don't tell me you don't do performance baseline testing. I know you do. Just look at the excellent System Design Strategies site. It's all there. So to me this is a question of honesty and transparency.

So my message to Esri is this... own it. In truth, ArcGIS Pro is free of charge. So there's a tolerance for a slow, informed, programmed switch from ArcMap. Key is informed... Stop avoiding the real situation and start showing good leadership. Tell us what you intend to do about it, even if it's nothing at all. You could tell us that it's an intrinsic characteristic of the new software, and that it will be slower than ArcMap for some hardware generations to come, which will eventually catch up. Just like going from AV3 to ArcGIS 8 back in 1999 (it was staggeringly slower). Tell us to buy real top-end machines, with RAID NVMe disks, pro video cards, max-frequency dual-socket CPUs with 20 cores, and a minimum of 128 GB RAM. At least we wouldn't be surprised that Pro doesn't budge on our measly i7 SATA SSD 32 GB rigs. We wouldn't waste our money like this anymore. And/or you can tell us that you have an optimization plan going on and will include performance improvements in each of the following releases. And, please, in either case, create a knowledge base document on how to mitigate the existing performance issues. That would be helpful.
You can start by acknowledging the problem and pointing to the road ahead to solve it. Be a leader. That would win over *all of us* instead of alienating most of us.

I agree. Except the transition from 3.x to 8.x was a breath of fresh air! It was an exciting change! As I already said in one of these discussions, ESRI would be better off scrapping the whole Pro thing and starting something new from the beginning (and ditching the ribbon in the process). After 4 years of struggles, it should be pretty clear that Pro is a flop, sorry.

Duarte Carreira, thank you for this post! I agree 100% with you. ESRI, I am so sorry I cannot get some quality time to address all the issues I have with this piece of ... software you call ArcGIS Pro. Unbelievable how ignorant you are, year after year. I am truly sorry for that.

Hi Vladimir, I just messaged you. Please reply to me so that we can collect your thoughtful and constructive feedback. Thank you, Kory

Hi, this thread is enormous! I do encounter performance issues which are unacceptable. In the meantime, I found an interesting (short) thread which greatly improved speed: Improve Field Calculator Performance. I tested it this morning and it is much, much better. To me, slowness issues are most of the time linked to GUIs. My issue this morning was about Calculate Field performance. However, I had similar problems editing annotation with the Attributes pane open, which considerably slowed things. And I suspect updating symbology is a similar problem, but I don't know how to update symbology with no GUI open... Hope this helps some of you.

Get some RAM Bro! 16GB is nothing!

Hi everyone, I have commented on the performance of Pro in the past - both on per-operation performance as well as the frequent increase in steps compared to Desktop required to perform a task. I see Esri is still avoiding the issue and diving into very deep specifics any time an issue is raised.
My still-standing summary to Esri is: use Desktop & Pro in the real world to perform a range of steps to create maps, data products and related tasks, and measure the outcome in metrics such as number of clicks, amount of mouse movement required, GUI draw wait time & process run time, as well as overall project run time.

In some aspects Pro is miles ahead of Desktop - I exclusively use it for any workload relating to publishing to Online. For bureau-style cartography and almost all data management I use Catalog & Desktop - often with multiple instances of each open. The Pro approach to editing in the attribute table is deplorable. There is a common misconception that you need a serious machine to run spatial workloads in Pro. You need a serious machine to run Pro. For 90% of my workloads Pro, its overhead and the associated serious machine are not required. As an abstract comparison: it's as if we were being told we need the latest & greatest machine just to run the Steam client. Many games themselves can run on average hardware, but without going overboard on RAM, CPU & graphics you cannot even get to the point of actually playing the game..... It may be pretty, but if it gets in the way it has to go sit in a corner a little while longer.

Kory Kramer For some people the latest Windows 10 update 1809 may help. On two of my machines the update has made general network browsing quite a bit faster - diving into a mapped network drive folder in Catalog in both Pro & Desktop is much better after the update. Take note that 1809 will break quite a few things in AutoDesk's 2019 products.

This sums it up nicely. The development team needs to really run both pieces of software side by side, and if Pro isn't at least as efficient as ArcMap, then they need to fix it. Running a GP tool to select by attribute/location, or applying symbology from a layer file, among many other tasks, just didn't take that long in ArcMap, nor did it need to be run as a GP tool... It was near instantaneous and very efficient.
Editing an attribute table is so incredibly clunky and inefficient compared to ArcMap. For example: it resets the position of the attribute table every time a record is edited, so you need to manually scroll back to the next row/column/record to be edited (whereas in ArcMap hitting Return takes you one record down - no need to re-scroll). This is basic software development 101 and should have been caught by now... It makes me wonder what the development team is thinking...

It's a bummer, because Pro has a lot going for it, especially the plethora of new symbology options available with the additional transparency functionality, and the overall easier symbology editing experience (no more windows within windows within windows). I also like the new layout and legend UI and find it much better for cartography. Which is why I still do all my analysis in ArcMap and do map exports in Pro, because ArcMap is faster, but Pro has better cartography options.

Is the development team aware of these issues and just doesn't know how to fix them? Or are they completely unaware of the issues... I'm not sure which is worse. All I know is that Pro should not be less efficient than ArcMap at the basic tasks I have mentioned - running a GP tool that takes 30 seconds to do something that once took 1 second is just unacceptable. People are not going to adopt the software if that's the case... At best, they will only use Pro for certain tasks but still primarily use ArcMap (like me).

Kory Kramer I have just updated to ArcGIS Pro 2.3 and came across an excellent example of changing something without thinking about performance and efficiency. When you share a layer as a web service and select the "Configuration" tab, you now need an extra click to open the most often used settings for a feature service. I can understand why adding a WFS section is useful and why that may be a click away, but why reduce the efficiency of a menu item that is used so much?
The funny thing is that if you want to go to the WFS section from the Feature section you have to click back (on the left side of the window) and then click to open the WFS section (on the right side of the window). Maybe they should divert the UI & UX work-experience students into another part of the business.

Thank you for the specific feedback, Chris. I'll share that with our UX team.

Adding a field using the geoprocessing tool took me 10 minutes. That's quite bad performance, especially when in ArcMap it takes no longer than a minute. Better performance would be appreciated, especially when you are forced to use ArcGIS Pro, as with the UPDM data model. Working with it is a nightmare.

What kind of data source are you seeing this slowness with (e.g. SDE data, file gdb, shapefile, other)? Is this add field operation slow for all types of data sources?

I have been using Pro for some time, and I like a lot of the editing features. However, changing symbology is so slow. I am using SDE (Server 2008), with a versioned feature class. I switched from single symbol to unique values, added two fields, and have 10 symbology types. Just to get to this point took 10 minutes. I have the map paused, so it does not need to refresh the view. Now I am going through and changing the symbology to match what was used in the past. I am now 40 minutes into the change. I change the line type. It takes about a minute to a minute and a half to "update". Then I click Properties to change the color, dash, and offset, and it takes about another 2 minutes after the Apply button is pushed. WHY IS THIS OCCURRING?!? It shouldn't take 1 hour to replace 10 symbology types. When the program is running idle it is using 0.5% of my CPU and 0 Mbps of the network. When I click Apply it uses 8.8% of the CPU and close to 45 Mbps of the network. It also pulsates the usage: 10 to 15 seconds of use, then 10 to 15 seconds of idle, and so on until it refreshes.
I am using ArcGIS Pro 2.3.0. Processor: Intel Xeon CPU E5-2620 v3 @ 2.4GHz. RAM: 16.0 GB. My throughput on my network to the server is 1000 Mbps. Come on, ESRI, please fix this very basic function.

SQL Server 2008? Microsoft SQL Server database requirements for ArcGIS 10.7 and ArcGIS Pro 2.3—Help | ArcGIS Desktop Is it possible to upgrade to a supported database version, run through the same steps and circle back here with results?

I made an error on the server; we are currently using 2012. We have the most current version of Server/Portal on our system (just updated to 10.6.1). This has always been an issue with Pro. There was a great video embedded in this thread that shows the significant decrease in performance when changing symbology. We are in the process of updating the server to Windows 2016.

We are experiencing the same thing and we are running SQL 2016. My PC and video card more than meet the requirements. If everyone who has been posting here is complaining about the same slowness and bugginess, perhaps Esri should take it more seriously and stop treating user complaints with such a laissez-faire attitude. Something that costs as much and promises as much as Pro should just work out of the box. The user environment shouldn't be such a huge issue in most instances. If you want users to convert from Desktop you have to prove that Pro won't be more of a hindrance than a help.

Tracey, there is no such thing as "...if you want users to...". They don't tick like that. We will all have to abandon ArcMap sooner or later (rather sooner). No choice left whatsoever. That is the fact. And yes, ESRI doesn't have to do anything; history teaches us that. They let us vote, play around - that is, bread and games. Again, you hardly have a choice without major and painful changes in your organisation if you come to the cheerful idea of switching GIS platform to anything other than ESRI. Not that I would recommend that. Cheers!
I have been keeping an eye on this thread for a while now. I thought at first that something was wrong with my set-up or network configuration etc., but after reading all these comments I see that the problem is definitely with Pro. At this point, I am using Desktop for everything up to layout, then I switch to Pro as my layout tool and import my mxd from Desktop. Any kind of analysis, adding or calculating fields, selections, editing or creating annotation I do in Desktop and then bring over to Pro. I find that I can go back and forth pretty easily when I find something I need to change while I'm in Pro. I just leave Pro open, change what I need to change in Desktop, and then refresh in Pro. Not the best solution, but at least it keeps my workflow flowing.

Zach, this is similar to how I work, but how do you handle legends in Pro? The lack of default legend properties and the way properties are changed (lots of menus and clicking to get to the property I need to change) is really time-consuming compared to ArcMap. Also, do you have layout templates that you import your maps from Desktop into? Right now, since you can't lock layouts in Pro, if I want a new map and layout in Pro I literally have to copy a similar map, paste, and rename it, and then import that into a layout template (also copied and pasted). It's clunky compared to ArcMap, but it could just be my workflow. Just wondering how you are handling a new layout using an existing map. (I'm a one-man shop and what I'm doing "feels" correct, but seems really awkward coming from ArcMap. At first I figured it was growing pains learning the new system, but I've been using Pro for 80%+ of my work for 9 months now and it still feels awkward.) I really appreciate seeing descriptions of how people handle their workflows - even the simple ones. Thanks, Sean

I'm still feeling my way with this workflow--it sounds like you have more experience with Pro than I do. I actually like the legends in Pro.
After I figured out that I can make different layers visible or not in the legend by clicking in the TOC, I am pretty happy with the legend. What do you mean by 'locking' a layout? When I get to the layout stage and get the map zoomed and centered where I want it, I always make a bookmark, usually named 'Print View' + scale. I do the same thing in Desktop. Then if I activate the layout for some reason and zoom/pan around, I can always get back to my layout view. I'm still figuring out how I want to deal with maps vs layouts in Pro, so I don't have a real good best practice for myself yet.

What I mean by locking a layout is making changes to a map and layout combination for a specific task without needing to create both a new map AND layout, when in ArcMap an .mxd drove both. For example, I've got a map template built for a Size A PowerPoint map (which is really common around here). I've also got a Size A layout template that goes along with the Size A map template. I use the map template as a sort of "starting point" where I tailor the map how management wants and then marry it to the layout template. (This way all font and symbology sizing works for the size requested.) After making the requested changes I export a .pdf or .emf or .jpg depending on the requirements of the end product. This workflow requires that each time I receive a new request, I make a copy of the map template AND a copy of the layout template. I must rename both to the name I want for the new map, and thus my project grows by a new map and a new layout for every request I get. (Often I'll need to revisit old maps to make revisions up to years later.) In ArcMap, I also had a template - an .mxd. I'd take that template, make one copy, adjust the map view and layout to specs, export to .pdf, .emf, or .jpg, and all I'd have is another .mxd. Basically half the "bloat". Both workflows accomplish the task - but ArcMap felt more streamlined since it generated half the files.
Regarding legends: I like the functionality that you describe (re-ordering and whatnot), but that's where it ends for me. In ArcMap, because I just changed the elements I needed in the map and the layout was locked to that map, I'd never have to re-create a legend. But in Pro, because I want to maintain my map template I need to copy the map, and because I want to maintain my layout template I have to copy that too. This requires that I build a new legend each time, versus having my legend dynamically update. When building a new legend there are no options for a "legend template", so I have to change all legend settings every time I make a new legend. In ArcMap, since all of my legend settings were tied to the .mxd, I set the legend settings once and that was it. Building a new legend in Pro without any template means that every time I do this I need to dig into the settings and set my border properties, background properties, border and background offsets, all fonts, my legend title, font settings, and alignment. Further, there is no way (that I've discovered) to universally set the font. I have to go into Legend > Title > Font Settings, Legend > Headings > Font Settings, Legend > Labels > Font Settings, etc. - for every legend setting. Most of the time the font is all Tahoma in the legend, but other times it's all in another font and I can't figure out why. I wouldn't necessarily mind the UI for the legend settings if I could set all of them once - for all further layouts - but there is no way to do this (that I have found). Sorry for the long reply - hopefully that makes sense.

So much of what people do in Arc products depends on the industry, and on the organization itself within an industry. I make all paper maps, but many industries are all digital. In some ways Arc is the Swiss Army knife of data/map representation. It's understandable that a new flagship programmed from the ground up may not accomplish every task more efficiently in every possible way.
That said, it feels like ESRI didn't research real-life workflows as thoroughly as I'd have liked, and threads like this one seem to be proof of that, especially as more and more people work on switching over to Pro. I do appreciate Kory Kramer's and other folks' efforts sorting through all of these complaints. I just wish Pro felt like the big step forward from ArcMap that we were led to believe. There are certain UI improvements that feel like they are really heading in the right direction (especially if the program were as snappy and quick as we all hoped). Unfortunately, as this thread points out, improvements to speed are lacking. Not just in the specifics mentioned here, but in a LOT of places. I've noticed Pro acting slow when I copy elements in Catalog and paste them, as in my example above; when I change labeling settings; even when I close a project (like, why don't we have an option for "Close and Save" the project - one button and done). I can open multiple layouts at the same time, but each needs to open - which takes time - and then when I click on a tab it has to draw, which takes more time. I can't export multiple layouts at the same time, so when I make a series of maps that all need to be .pdfs, I have to run through the export procedure for each map, wait for it to generate the .pdf, and then do the next one. I should be able to export any number of layouts at the same time to the same format, and close my project while those are exporting. The project may run in the background until those exports are complete - but it should "release me back to Windows" without a wait. This has turned into a rant, but I know others in this thread will empathize. That said - I reached out to you about specific workflows because SOME headaches might be alleviated by sharing with one another how we accomplish even basic requests/tasks using Pro - and anything we do for each other is something ESRI is at least partly off the hook for. Thanks for sharing your workflow!
Sean, the font issue is indeed frustrating.

I made a positive update to my thread "ArcGIS Pro 2.2.1 painfully slow using Basic license" after ArcGIS Pro 2.3 was released. The issue with the Basic license underperforming vs. the Standard license might be of interest to some of you as well. All in all, ArcGIS Pro is evolving into a stable and productive application for us.

@Thomas L, in my experience, slow opening and saving in Pro is directly, and linearly, related to the memory consumption of Pro. Pro in its current state needs copious amounts of RAM / virtual memory compared to ArcMap. I have some very complex topographic-map-style map documents with hundreds of query layers accessing a PostGIS database. ArcMap manages to handle these documents within its 2GB memory limit. Not Pro... The bigger the underlying database in terms of GB storage, the bigger Pro's memory consumption. I have seen Pro consume up to 70GB(!) of memory (my laptop only has 32GB RAM, so the rest is consumed as Windows "virtual memory" on disk). Such documents take up to 10 minutes to close, simply to release all the used (virtual) memory. I have actually monitored this process by opening the Resource Monitor from the Performance tab of the Windows Task Manager. Pro will not close until it has released all this virtual memory, and during this 10-minute waiting time I slowly see the memory consumption of Pro decline in the Resource Monitor until the app finally closes, when almost all memory has been released. I therefore think this is not really a memory leak, but "by design"... probably combined with too-conservative releasing of memory during application use, meaning Pro does not release its memory even when it could or should. I do think ESRI should have a closer look at this memory consumption issue in Pro. Having Pro consume anywhere from 5 to 30x more memory for what is essentially the same map document (I literally import ArcMap documents into Pro) is worrying, to say the least.
Marco, for this complex mapping project, have you experimented with putting some of your layers in a basemap layer? This seems like an ideal use case for that functionality. I wonder if it would help with your huge memory consumption.

No doubt - especially if a raster basemap is created. Reducing 400+ vector layers connected to a PostGIS database to a single raster layer will surely take the memory usage down to next to nothing, but at the cost of a huge amount of processing time to create a raster tile cache. And this is no solution for high-quality vector output to e.g. PDF, which is my main interest at this point. And there is the added problem that there seems to be no current option to retain the full complex symbology in a basemap. All options seem to drop at least part of the complex symbology and labeling options (e.g. vector tiles), or don't allow high-dpi raster tile output, so you sacrifice quality. Neither is particularly appealing to me at this point, especially since my main interest is outputting PDFs, as I already wrote.

Can this be moved to a bug, an improvement, something..... This still has not gotten any better when utilizing an SDE. Especially when the basic functions are slow - selecting by attributes/location. Editing is fine and I love it, but get the other functions corrected. I recently attempted to version a dataset, add global IDs, and enable editor tracking. It took me almost 10 minutes to get this accomplished using my desktop, connecting to our SDE on a 2016 server. This used to take 1, maybe 2 minutes using ArcMap.

Can't even try to traverse a line. Enter the direction/length and it takes a solid 20 seconds to update and display the change. I only have 127 more lines, or ~42 minutes remaining, for an operation I could do in 5 minutes in ArcMap. And that is just to enter the data. Don't forget trying to split polygons using those new lines, and then editing the attributes for each polygon. Two hours later for a single map correction.
Forget about making a mistake and having to undo - 3 minutes later the menus finally stop being grayed out... Try to delete a line: menu grayed out. Hit Enter to input another line: menu grayed out for 20 seconds. Everything is locked up right now and I haven't even done anything substantial.

Well, came back today and tried again. Pro actually responded pretty darn well. Nearly what I would expect from the next-gen platform. Not quite there - still some lag with the menus - but I was able to traverse the entire parcel pretty efficiently. Did y'all come in and mess with my machine overnight?

It occurred to me that one of the things that could bog down Pro (especially on a machine with a limited CPU) would be the auto-indexing capability. It is somewhat unclear to me from the help article exactly what gets indexed, but if folder connections are auto-indexed by the scheduler, that could slow things down. One thing to try is to turn that off and index manually (or set it to index at a time when you know you do not use Pro [but are logged in] during your daily routine). Update the search index for project items—ArcGIS Pro | ArcGIS Desktop

Configuration: 64-bit Windows 7, 8GB RAM, Intel i5 @ 3.1-3.3 GHz
a) No other program running than ArcGIS Pro 2.3. One shapefile (GADM) imported into the project's file geodatabase = 256 features. Project stored on a local USB3 drive. New column "area_sqkm"; task: calculate geometry. Takes exactly 38 MINUTES. Finished without any error messages, values visible in the attribute table, project saved. Re-opening the project not possible: damaged database.
b) Compared to QGIS: less than 1 minute, no problems.

"Project stored on local USB3 drive" is your problem there. ArcView 3.2....Pro 1789.345 (coming out in 500 years) will never work when accessing data or project files off a USB drive, not even USB 3.0.

Hmmm... I run a +500(!)
GB PostGIS database based on OpenStreetMap data for the whole of Europe off an external 2TB Samsung EVO SATA drive connected over USB 3.1 (sometimes 3.0), in an Oracle VirtualBox instance running Ubuntu as the guest system for PostgreSQL (the USB drive is connected to a Core i7 HQ quad-core laptop running Windows 10). I think you are underestimating what can be done with a reasonable drive, though I do agree that a newer, faster NVMe drive instead of SATA would probably boost performance.

USB3 is capable of 5 Gb/s, which is much faster than most people's network connections, and as fast as many SSDs. USB is not likely to be the bottleneck in this situation.

This thread is getting hopelessly long and hard to follow, but I wanted to add my two cents. I too have really struggled with the "slowness" of ArcGIS Pro. It has many great features, but even the simplest tasks seem to take much longer in Pro than in ArcMap. The typical response from Esri seems to be "contact Support". In my experience this has just been me going in circles trying to figure out what is wrong, with Esri Support just guessing. I did not have this experience with Support for ArcMap. I know very little about system architecture, so I don't really understand what it means to be multi-threaded, but if this is what multi-threaded means, then it is not a good thing. I also saw that network connections are to blame. I would venture a guess that the majority of Esri users have their data "on the network" because they run in an enterprise environment - something Esri has pushed for years, by the way. I've specifically noticed that even simple geoprocessing tasks that would have taken seconds in ArcMap take 30-plus seconds in Pro. For example, joining a table to your data requires a geoprocessing window to open, which usually has a delay, and then after making the appropriate changes, pushing Run still takes several more seconds.
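Whether the USB drive (or the network share) really is the bottleneck is easy to check empirically. A minimal sequential-I/O benchmark like the stdlib-only Python sketch below gives a rough MB/s figure you can compare between a local SSD, an external drive, and a network path (the file size and temp path are arbitrary choices for illustration):

```python
import os
import tempfile
import time

def sequential_write_read(path, size_mb=64, chunk_mb=4):
    """Write then read a temp file under `path`; return (write_MBps, read_MBps)."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    fname = os.path.join(path, "io_bench.tmp")
    try:
        start = time.perf_counter()
        with open(fname, "wb") as f:
            for _ in range(size_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())        # make sure data actually hits the disk
        write_s = time.perf_counter() - start

        start = time.perf_counter()
        with open(fname, "rb") as f:
            while f.read(chunk_mb * 1024 * 1024):
                pass                    # sequential read to EOF
        read_s = time.perf_counter() - start
    finally:
        os.remove(fname)
    return size_mb / write_s, size_mb / read_s

# Point this at the drive under suspicion (e.g. r"E:\" for a USB drive).
w, r = sequential_write_read(tempfile.gettempdir())
print(f"write: {w:.0f} MB/s  read: {r:.0f} MB/s")
```

If the external drive benchmarks within a factor of two of the local SSD, raw throughput is unlikely to explain a 38-minute geoprocessing run, which supports the USB3-is-not-the-bottleneck argument above.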
Individual tasks taking seconds longer is no big deal, but a series of them really adds up over the course of a day. My more recent experience has been with working with Excel tables in Pro. That is a totally different topic but still relates to poor performance in Pro. This process has been quite painful and has resulted in hours of frustration, even though I have been following the advice in the Help section.

Josh, the join performance example that you bring up is something I think the development team is already looking into, depending on the specifics of your case. When you run the Join, what does the actual processing time show? I've seen cases where a project may have multiple maps, layouts, tables open, etc. and the perceived lag may be due to the application syncing back to all threads. So maybe that is what you're seeing, maybe it isn't. Could you let us know? Also, depending on the details behind your comment about working with Excel in Pro, ArcGIS Pro 2.4 will see a revamp in Excel support which should make it very similar to how Excel is read in ArcMap.

Kory, here is the latest example. I wish mine was only 0.78 seconds. Now, to be fair, this was the one with the Excel table. I believe I did have a number of maps, layouts, and tables open. I have since closed most and only have the required map and layout open, but the overall performance has not improved. The multiple tabs open is about convenience; if this feature were working well it would be a major enhancement over ArcMap due to the ability to see multiple maps and layouts at the same time, but it does make sense that it could be a contributing factor to lowered performance. Also, thanks for your comment about Excel at 2.4. I am looking forward to it. And note again that this 7.84 seconds does not count the time it took to open the tool in the geoprocessing window, which was probably at least several seconds more. I can only guess at how long that part was.
I also see the following on the roadmap: Geoprocessing leveraging spatial databases - new options to run certain geoprocessing tools like Buffer, Spatial Join, Select, or Intersect as queries in databases that support these operations, which will result in improved performance. I'm hoping this will also help me out a lot.

1. How long does that exact same join take in ArcMap using the Add Join tool? What about using right-click > Joins and Relates > Join (you may need to use a stopwatch to get an estimate, as you don't get elapsed time this way)?

Using the right-click option is seemingly instantaneous; I tried to time it but it is probably a second or less. Unfortunately, when I tried the Add Join tool in ArcMap it gives me an error. The error doesn't make any sense (it's an Excel table, it has no OID) and also means I cannot fully replicate the process used in Pro. I know this ventures away from Pro, but any thoughts as to why the error is coming up? Perhaps this is similar to the issue Pro is having? On the awesome side, ArcMap seems to have no issues working with an Excel table. I may just have to create this particular map in ArcMap for now. Don't you just love troubleshooting????

I just did a calculate geometry (X and Y) field calculation on a 900-row attribute table and it took 15 minutes. WTF.

Even if you ignore the whole local-vs.-network and processing question, it's all the mini loading times that destroy productivity. Professional users don't need a fancy ribbon and big 'graphicy' contextual menus; they need a sharp, clean, reactive GUI where things are locked into place so you can use muscle memory and keystrokes to fly through work. The software should instantaneously load all of the frequently used windows like others have mentioned (i.e. symbology, select by location, etc.). No load time - just load. Look at Autodesk software for inspiration. Such a clean and productive interface.
I do like the query builder for building definition queries (the 'includes the values...' drop-down, etc.). ESRI should send a survey to Pro users getting them to rate all the aspects of the interface from 1-5. I'm pretty sure they would soon see what rocks and what sucks.

Peter - We are teaching with 2.4 this semester and finding serious performance issues over our network. So much so that we have students copy their data down to the local machine, do the work, and then copy it back to the network for storage. Even the littlest things, like creating a new feature class, fail across the network. Connecting to a folder over the network is painfully slow. OMG. It almost seems like indexing is carried out every time I open the folder location. The weird thing is, imagery from our server draws very fast and performs well. We're finding that operations off the local drive are reasonable. Haven't seen the long local processing times you mention, although we are just getting into other functions and operations like joins. We have brand new fast Dell workstations with Nvidia graphics processors and SSD drives, so maybe that makes a difference. Still, our network uses a 27 TB SSD cache drive, so it should be just fine. This is almost a show-stopper right now. If we didn't have good local machines with large hard drives we would be sunk. Network performance appears to be a serious problem that ESRI is hopefully working on, because it renders their software useless and erodes users' trust. Don't really know how this network performance issue escaped the developers and quality control folks at ESRI. Baffling. I've been working with ESRI software for over 30 years and this problem is in line with some of the worst performance problems I've seen. Bill

You may want to call tech support and see if you're experiencing BUG-000118068: ArcGIS Pro projects stored in the My Documents folder hosted on a network file share with an offline feature service take up to 20 minutes to open.
Calculate Field > Enable Undo resulted in an order-of-magnitude increase in processing time on the same file, with an almost identical operation. With "Enable Undo" turned on it took 24 minutes to run... and 32 seconds with "Enable Undo" turned off. What can be done so that "Enable Undo" becomes a usable option that isn't so expensive for efficiency?

Hmm, I just had the opposite experience. Calculating a field with Enable Undo on completed in 6 seconds. With Enable Undo off it took 15 seconds. I can play with this some more, but more details may be needed... Bob, are you working in a project with a number of maps/layouts? If you were to open an untitled Pro project and work with the same dataset, what do your performance numbers look like? While it is expected for processing to take longer with Enable Undo on (Undo geoprocessing tools—ArcGIS Pro | ArcGIS Desktop: "Performance and scalability - When geoprocessing tools are run in an edit session, performance will decrease compared to when the same tool is run outside an edit session. Similarly, scalability will decrease, as fewer features can be processed in an edit session compared to outside an edit session."), the example you're giving of 24 minutes doesn't seem right.

I just did another test. ArcGIS Pro 2.4.2, 49,542 points in a shapefile, calculating a Long field:
Enable Undo on = 1 min 14 seconds
Enable Undo off = 47.45 seconds
Are you able to provide a project package that we could use to investigate the performance you're seeing?

Yes, the issue seems to occur with too many maps/layouts open in the same project. Thanks, Bob Sas

At this point, we can assume and hope that the work already done in ArcGIS Pro 2.5 will resolve the issue you included in the comment above. If you are able to share the project package, I can get it to the appropriate team as a real-world test case which can either validate the development work done, and/or may reveal areas for further optimization. Thank you!
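The Enable Undo cost debated above is essentially the price of keeping a rollback log while editing. The toy Python sketch below (no ArcGIS code; a plain dict-based table and list-based undo log, both invented for illustration) shows the pattern: every undoable edit must first snapshot the old value, adding work and memory proportional to the number of rows touched:

```python
rows = [{"id": i, "value": i} for i in range(50_000)]

def calculate_field(rows, func, enable_undo=False):
    """Apply func to every row's 'value'; optionally keep an undo log."""
    undo_log = [] if enable_undo else None
    for row in rows:
        if enable_undo:
            # Snapshot the previous state before touching it -- this is the
            # extra per-row cost (and memory) that an edit session pays.
            undo_log.append((row["id"], row["value"]))
        row["value"] = func(row["value"])
    return undo_log

def undo(rows, undo_log):
    """Roll every edit back using the recorded snapshots."""
    by_id = {row["id"]: row for row in rows}
    for row_id, old_value in reversed(undo_log):
        by_id[row_id]["value"] = old_value

log = calculate_field(rows, lambda v: v * 2, enable_undo=True)
undo(rows, log)
print(rows[10]["value"])  # restored to the original value
```

In a simple in-memory model the overhead is modest; the 45x gap reported above suggests Pro's edit-session bookkeeping does far more per row than a value snapshot, which is exactly what the project-package investigation should reveal.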
Hi Bob Sas, would you be able to share the project package where you're seeing the long lag shown in the screenshot? We believe that what you're showing will be improved in ArcGIS Pro 2.5 with better handling of projects that contain multiple maps/layouts, but we can't be sure unless we test the case you're showing. If possible, and if the package is small enough to email, could you send it to kkramer@esri.com? If it is too large to email, send me a message and I can set up a file transfer site. Thank you!

Kory, can you expand on how 2.5 will improve project performance where we have many maps and layouts? I am experiencing significant slowdowns with my projects (which have grown over the past year). I have been working around the problem by using one project as a "working" project where I do all my editing, and then housing all of the layouts that I export as PDFs or .jpgs in another separate (large) project. This way I can at least edit things efficiently, although my "export" projects can take 3-5 minutes to load versus about 20-30 seconds for my editing projects (which contain one map with all the layers I need to edit). I've also noticed that closing all layouts and maps in my "export" project is critical. If I leave anything open when I save and close, the project can take 10+ minutes to load. It's good to hear that ESRI is aware of the project size issue and is working on it. Sean Hlousek, GIS Manager, 1001 17th Street Suite 1250, Denver CO 80203, 39°44'56.52" 104°59'36.43", 303-226-9516 (O), 303-565-0224 (M)

Hey Sean! The improvements specifically cited here are in relation to geoprocessing performance in a project with many maps and layouts. Bob's screenshot shows exactly the kind of thing that can occur when there are a number of objects to update after geoprocessing has completed. So you might see that the tool reports it completed in 27 seconds - which it did - but then it could take a long time for everything else in the project to sync up.
I can't speak specifically to whether the export situation would be alleviated with the Pro 2.5 improvements, but if it isn't, you can let us know. Cheers

Specifically, as best I can tell, the slowness occurred because of an active layout consisting of a map series of 60 sheets with 3 dynamic maps and 15 fields of dynamic text per sheet. The issue was not actually caused by undo edits in geoprocessing, but by updating those edits into too many dynamic layout elements.

Thanks for the info, Bob. We'll take a look.
https://community.esri.com/thread/199197-why-does-arcgis-pro-have-to-be-so-slow
We are about to switch to a new forum software. Until then we have removed the registration on this forum.

Dear all interested readers, the next sketch makes this: a 3D grid with spheres, to be rotated with the use of PeasyCam. I would, however, like to replace the spheres by ellipses, to make a better PDF output. But, of course, when I replace a sphere with an ellipse, the ellipse rotates and loses the shape of a perfect circle. What would it take to maintain the orientation of the ellipses while rotating the cube? Thanks in advance for helping me out :-)

code:

import processing.pdf.*;
import peasy.*;
import peasy.org.apache.commons.math.*;
import peasy.org.apache.commons.math.geometry.*;

PeasyCam cam;
int afstand = 70;
boolean record;

void setup() {
  size(600, 600, P3D);
  cam = new PeasyCam(this, 10000);
  noStroke();
  fill(0);
  ortho(-width, width, -height, height);
}

void draw() {
  background(#ffffff);
  if (record) {
    beginRaw(PDF, "cube.pdf");
  }
  for (int i=0; i<8; i++) {
    for (int j=0; j<8; j++) {
      for (int k=0; k<8; k++) {
        pushMatrix();
        translate((i-3)*afstand, (j-3)*afstand, (k-3)*afstand);
        sphere(10);
        popMatrix();
      }
    }
  }
  if (record) {
    endRaw();
    record = false;
  }
}

void keyPressed() {
  record = true;
}

Answers

Look at the reference of PeasyCam; is there a way to receive the current angles (I guess there are 2)?

@joshuakoomen -- Hmm. Just brainstorming -- I wonder if you could use screenX to retrieve 3D coordinates and then draw ellipses on a separate buffer image at the corresponding screen location.

Instructions: *** Remember that in PeasyCam you can double-click in your sketch to reset the current cam view of your sketch. While keeping this in mind, then:

1- Use key r to toggle the option to show rotated ellipses or no rotation at all
2- Use key s to save into PDF
3- Assuming you start from the initial cam state, press either x, y or z to rotate 90 deg on that axis. You can use double-click to see the action of recovering your initial cam orientation, aka
what axis the restoration is happening on
4- Use p to print the current rotation matrix on the console as reported by PeasyCam

Kf

But the OP wants the ellipses to always face the front, like billboard sprites. Whenever I've done this I use a 2D renderer and do the 3D rotation calculations myself and just use the x and y coords of the result. You're using ortho so there's no perspective to worry about. You won't be able to use PeasyCam though.

I wonder if you could use PeasyCam as the interface, translate to get the xyz coordinates at 0,0,0 -- and then retrieve a 2D x and y for each using, e.g., screenX. The catch is: in order to get flat/ortho results you would then need to draw the ellipses onto a 2D PGraphics -- not into the P3D space. So the base sketch is P3D with the PeasyCam interface, and the output buffer is default Java2D. However, I am not sure how/if rendering a PGraphics interacts with the PDF beginRaw().

Thank you all so much for the tips. Kfrajer's post was really helpful. @jeremydouglass I have been thinking about that kind of solution, but I wouldn't know how to plot it correctly. Also, Kfrajer's solution was very effective for what I needed to achieve.

There's no reason the PDF writing part has to use the interactive part of the code - just put it inside a big if (record) block
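One of the answers above suggests doing the 3D rotation calculations yourself and keeping only the x and y coordinates of each grid point, which works here because the sketch uses an orthographic projection. The following is a rough Python sketch of that math only — it is not Processing code, and the function names are my own:

```python
import math

def rotation_y(angle):
    """3x3 rotation matrix about the y axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0, s],
            [0, 1, 0],
            [-s, 0, c]]

def apply(m, p):
    """Multiply a 3x3 matrix m by a 3D point p."""
    return tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))

def billboard_positions(points, m):
    """Rotate each 3D point, then keep only x and y.

    Under an orthographic projection this is exactly where a
    screen-aligned ellipse should be drawn; the ellipse itself is
    never rotated, so it stays a perfect circle.
    """
    return [apply(m, p)[:2] for p in points]

# A 2x2x2 corner of the grid from the sketch (spacing 70).
grid = [(i * 70, j * 70, k * 70)
        for i in range(2) for j in range(2) for k in range(2)]
ninety = rotation_y(math.pi / 2)
print(billboard_positions(grid, ninety))
```

In the Processing sketch, each resulting (x, y) pair would be handed to ellipse() on a 2D renderer, so the circles keep their orientation no matter how the cube is rotated.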
https://forum.processing.org/two/discussion/26739/replace-sphere-with-ellipses-in-3d-mode
kalashnikov 0.5.1+1

Kalashnikov #

You start the game by dealing 4 cards to each player; the rest you leave in the center. This is the garbage pile, but imagine it contains parts of the gun you need to build, which is the objective of the game: "Build gun from scrap". The parts you need, of course, are A, K, 4, and 7. The two players rush to build the gun; whoever builds the gun first shoots and the other takes damage. The first one to deal 20 damage to the opponent wins (though this doesn't always have to be the case).

The card game was introduced on the YouTube channel Life of Boris.

Changelog #

0.5.1+1 #
- Updated documentation
- Fixed a potential bug in Card.hashCode

0.5.1 #
- Made compatible with dart 2.4.0
- Formatted lib/players.dart

0.5.0 #
- Start a changelog
- Create an example
- Add analysis options
- Miscellaneous things to make package healthier

example/main.dart

import 'package:kalashnikov/gamestate.dart';

main() {
  var state = GameState(
    numberOfPlayers: 2,
  );
  state.dealFromScrap();
  print(state.currentCards);
  state.playerDiscard(0, state.scrap.removeLast());
  state.nextTurn();
  print(state.currentCards);
  state.playerDiscard(0, state.scrap.removeLast());
  state.nextTurn();
  print(state.shelf);
}

Use this package as a library

1. Depend on it. Add this to your package's pubspec.yaml file:

dependencies:
  kalashnikov: ^0.5.1+1

2. Import it. Now in your Dart code, you can use:

import 'package:kalashnikov/cards.dart';
import 'package:kalashnikov/gamestate.dart';
import 'package:kalashnikov/players.dart';

We analyzed this package on Oct 9, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.5.1
- pana: 0.12.21

Platforms
Detected platforms: Flutter, web, other
No platform restriction found in libraries.

Health suggestions
Format lib/cards.dart. Run dartfmt to format lib/cards.dart.
https://pub.dev/packages/kalashnikov
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "avcodec.h"

Go to the source code of this file.

Tim Ferguson. For more information about the id CIN format, visit: This video decoder outputs PAL8 colorspace data.

Interacting with this decoder is a little involved. During initialization, the demuxer must transmit the 65536-byte Huffman table(s) to the decoder via extradata. Then, whenever a palette change is encountered while demuxing the file, the demuxer must use the same extradata space to transmit an AVPaletteControl structure.

id CIN video is a purely Huffman-coded, intraframe-only codec. It achieves a little more compression by exploiting the fact that adjacent pixels tend to be similar.

Note that this decoder could use libavcodec's optimized VLC facilities rather than naive, tree-based Huffman decoding. However, there are 256 Huffman tables. Plus, the VLC bit coding order is right -> left instead of left -> right, so all of the bits would have to be reversed. Further, the original Quake II implementation likely used a similar naive decoding algorithm and it worked fine on much lower spec machines.

Definition in file idcinvideo.c.

Referenced by vmd_decode(), vmdvideo_decode_frame(), vmdvideo_decode_init(), vqa_decode_frame(), and xan_decode_frame().

Definition at line 116 of file idcinvideo.c. Referenced by idcin_decode_init().
Definition at line 84 of file idcinvideo.c. Referenced by huff_build_tree().
Definition at line 247 of file idcinvideo.c.
Definition at line 211 of file idcinvideo.c.
Definition at line 146 of file idcinvideo.c.
Definition at line 175 of file idcinvideo.c. Referenced by idcin_decode_frame().

Initial value:

{
    .name           = "idcinvideo",
    .type           = AVMEDIA_TYPE_VIDEO,
    .id             = CODEC_ID_IDCIN,
    .priv_data_size = sizeof(IdcinContext),
    .init           = idcin_decode_init,
    .close          = idcin_decode_end,
    .decode         = idcin_decode_frame,
    .capabilities   = CODEC_CAP_DR1,
    .long_name      = NULL_IF_CONFIG_SMALL("id Quake II CIN video"),
}

Definition at line 257 of file idcinvideo.c.
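The "naive, tree-based Huffman decoding" the note above refers to can be sketched in general terms. This is a hedged illustration of the technique only, not the actual idcinvideo.c code: the 256 per-context tables and the right-to-left bit order of the real decoder are deliberately not reproduced here.

```python
class Node:
    """A node in a binary Huffman tree; leaves carry a decoded symbol."""
    def __init__(self, symbol=None, zero=None, one=None):
        self.symbol = symbol   # non-None only at leaves
        self.zero = zero       # child followed on bit 0
        self.one = one         # child followed on bit 1

def decode(root, bits):
    """Walk the tree bit by bit; emit a symbol at each leaf and restart."""
    out = []
    node = root
    for bit in bits:
        node = node.one if bit else node.zero
        if node.symbol is not None:
            out.append(node.symbol)
            node = root
    return out

# Tiny example tree: 'a' = 0, 'b' = 10, 'c' = 11
tree = Node(zero=Node(symbol="a"),
            one=Node(zero=Node(symbol="b"), one=Node(symbol="c")))
print(decode(tree, [0, 1, 0, 1, 1]))
```

A table-driven VLC decoder (like libavcodec's) replaces this per-bit tree walk with multi-bit table lookups, which is why the note contrasts the two approaches.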
http://www.ffmpeg.org//doxygen/trunk/idcinvideo_8c.html
In the Preferences panel, there is now a tab called Theme with a drop-down menu containing 'Default Theme'. No matter what one does there, one cannot change the 'Default Theme' to any other theme.

BTW, is it now possible to write, debug and edit the libraries as easily as the main code? At the moment, in my experience, one has to do a work-around using a text editor to save the libraries as *.h and *.cpp files. A bit awkward for us beginners on that subject.

...The way it works is you create a folder named theme in your sketchbook folder (the location of which you can find at File > Preferences > Sketchbook location). You put all the themes you want to choose from in .zip files and put them under that folder. The tricky part I ran into is that the files need to be directly under the .zip file, not in a folder...

At the moment, I don't have a way of rezipping up the unzipped files. Does the IDE 1.8.7 unzip the ***theme.zip files itself? It also appears that the installation instructions on those GitHub links for the theme files need updating - the info is for older versions of the IDE, e.g. up to 1.8.5, not for 1.8.6 and 1.8.7.

pert, do you mean '...<sketchbook folder>/theme/***theme.zip'? [ *** being the name of a particular theme, e.g. DarkArduinoTheme.zip ] What is the exact directory outline? Do you mean by '...under the .zip file, not in a folder...' that all the collected themes need to be zipped up into one big .zip file, and not stored individually inside the theme folder?

"i don't have a way of rezipping up the unzipped files" - Yes, you do. In Finder, right-click (or control+click) -> Compress.

True. Likely the authors don't even know about this new feature. I haven't updated the installation instructions on my themes either, though it's on my "to-do" list. The old installation instructions still work fine so it's not especially urgent. For example, to download you would likely click on Clone or download > Download ZIP.
The downloaded file structure looks like this:

DarkArduinoTheme-master.zip
|_ DarkArduinoTheme-master
   |_ DarkThemeTestCode
   |  |_ AnotherTab.ino
   |  |_ DarkThemeTestCode.ino
   |_ theme
      |_ theme.txt
      |_ etc.

In this case the theme is in the DarkArduinoTheme-master/theme subfolder of the .zip file, so that .zip file won't work with the Arduino IDE's user-selectable theme feature. You'll need to take the files from the DarkArduinoTheme-master/theme folder and zip them up.

pert, :-)
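The re-zipping step described above can also be scripted. This is a minimal, hedged sketch (the folder names come from the thread; adjust the paths to your own setup): it writes each file from the theme folder into the root of a new .zip, which is the layout the IDE's theme picker expects according to the posts above.

```python
import os
import zipfile

def zip_theme(theme_dir, out_zip):
    """Zip every file in theme_dir so it sits at the ROOT of out_zip.

    Using the bare file name as the archive name (arcname) is what
    keeps the files out of a subfolder inside the .zip.
    """
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(theme_dir)):
            path = os.path.join(theme_dir, name)
            if os.path.isfile(path):
                zf.write(path, arcname=name)

# Example (paths assumed, per the thread):
# zip_theme("DarkArduinoTheme-master/theme",
#           "<sketchbook folder>/theme/DarkArduinoTheme.zip")
```

The key detail is the arcname argument: without it, the files would be stored under their full directory path inside the archive, reproducing the subfolder problem described above.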
http://forum.arduino.cc/index.php?topic=443558.msg3882812
20 Nov 2009: Updated per author request: Under heading Open content at the schema document level in paragraph 4, sentence 2, changed string "A value of mode indicates..." to "A value of none indicates..."

Introduction

During the W3C Workshop on XML Schema 1.0 User Experiences (see Resources), schema versioning was one of the major concerns from schema users. When the XML data changes, the corresponding schemas also need to change. How do you ensure a level of compatibility to reduce disruptions to the applications? People often talk about two kinds of compatibility. In the schema versioning context, backward compatibility requires that valid instances of schema version n remain valid under schema version n+1. This is what people often have in mind when they talk about compatibility, and it's the easier one to support, because the authors of schema version n+1 have access to both the schema and instances of version n. The other kind is forward compatibility, where valid instances of schema version n+1 are also valid under schema version n. This is normally harder to achieve, because the author does not know what kind of changes might be introduced in the next version. All you can do is leave extension points in the schema to allow future extensions. Because of the importance and difficulty of achieving forward compatibility, one of the major goals in XML Schema 1.1 is to make it easy to write forward compatible schemas. Wildcards play a key role in defining extension points in schemas, and are the focus of this article. The next article in the series will discuss other features related to schema versioning.

The W3C XML Schema working group published a Versioning Guide for XML Schema 1.1 (see Resources). Those who seek help with versioning their schemas might also find its content interesting.
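As a concrete illustration of such an extension point (the element names here are assumed for illustration, not taken from a specific schema): a version-1 complex type can end with a lax wildcard, so that version-n+1 instances carrying extra elements from other namespaces still validate against the version-1 schema.

```xml
<xs:complexType name="productType">
  <xs:sequence>
    <xs:element name="name" type="xs:string"/>
    <xs:element name="price" type="xs:decimal"/>
    <!-- extension point: future versions may add elements here -->
    <xs:any namespace="##other" processContents="lax"
            minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```

The limitations of exactly this kind of ##other wildcard are what the sections below set out to address.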
Weakened wildcards

Schema authors who create a complex type definition where they mix a sequence of elements and wildcards that allow the same namespace(s) as the other elements might discover that the schema they have written is invalid. The most likely reason for this error is a violation of the Unique Particle Attribution (UPA) rule defined in XML Schema 1.0, which basically states that the matching particle (for example, <xs:element> or <xs:any> in the complex type definition) can be unambiguously determined for each of the elements in the instance document. This determinism simplifies the implementation of the validator and can be useful for applications which require a mapping between elements in the instance document and particles in the schema. But it also challenges schema authors to naturally express the content they wish to allow. The schema snippet in Listing 1 illustrates the issue that schema authors commonly face when they attempt to create extensibility points using wildcards.

Consider a complex type which models the win-loss record for a sports team. In some sports like American football, ties are allowed. In others, such as basketball, a game continues until a winner is declared. A schema author might choose to make ties an optional element (with minOccurs="0"). There are potentially other statistics which can be included in a team's record aside from wins, losses, and ties, and so you might want to allow additional content with a wildcard which can be defined in a future version of the schema.

Listing 1. Schema snippet - An optional element followed by a wildcard

<xs:complexType name="recordType">
  <xs:sequence>
    <xs:element name="wins" type="xs:int"/>
    <xs:element name="losses" type="xs:int"/>
    <xs:element name="ties" type="xs:int" minOccurs="0"/>
    <xs:any minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

The issue with the above complex type definition can be illustrated with the instance document in Listing 2. The wins and losses elements in this instance match up with their element declarations in the schema (see Listing 1). When you attempt to map the ties element back to the complex type, you find that two choices for the particle could have matched.
It could either be the ties element declaration (which is optional) or the wildcard, which also allows ties to appear in the instance. Because this schema had more than one potential mapping, it violates the Unique Particle Attribution (UPA) rule in XML Schema 1.0 and thus is invalid.

Listing 2. XML snippet - An invalid win-loss record element

<record>
  <wins>20</wins>
  <losses>15</losses>
  <ties>8</ties>
  <points>48</points>
</record>

As a workaround, a schema author might place a required element in between the optional one and the wildcard, as in Listing 3. Because the separator element must appear in the instance, there is no ambiguity between content which matches the separator element declaration and the wildcard which follows it.

Listing 3. Schema snippet - Defining a required element between the optional element and the wildcard

<xs:complexType name="recordType">
  <xs:sequence>
    <xs:element name="wins" type="xs:int"/>
    <xs:element name="losses" type="xs:int"/>
    <xs:element name="ties" type="xs:int" minOccurs="0"/>
    <xs:element name="separator"/>
    <xs:any minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

While you can often add a required element to avoid the UPA error, the content introduced into instances is often meaningless or forces an unnatural ordering of the data. Take a look at Listing 4. The separator element introduced contributes no information to the document yet must be there for the document to be valid. Ideally you do not want such an element to be part of the document.

Listing 4. XML snippet - A valid win-loss record element

<record>
  <wins>20</wins>
  <losses>15</losses>
  <ties>8</ties>
  <separator/>
  <points>48</points>
</record>

To make it easier for schema authors to create more natural content models, XML Schema 1.1 has introduced the concept of a weakened wildcard. The weakened wildcard is a relaxation of the UPA rule which resolves the contention between an element declaration and a wildcard by stating that the element declaration always takes precedence over the wildcard. As a consequence, the complex type definition in Listing 1 becomes valid in XML Schema 1.1 because the ambiguity between the element declaration and the wildcard no longer exists.
The reason the wildcard was added in the first place was to allow for schema evolution. Imagine that at some point in the future we updated the definition of the record type to include a points element as in Listing 5. Now the points element in the instance in Listing 2 is defined, and because of the weakened wildcard rule it unambiguously matches its element declaration.

Listing 5. Schema snippet - An updated record type that declares the points element

<xs:complexType name="recordType">
  <xs:sequence>
    <xs:element name="wins" type="xs:int"/>
    <xs:element name="losses" type="xs:int"/>
    <xs:element name="ties" type="xs:int" minOccurs="0"/>
    <xs:element name="points" type="xs:int" minOccurs="0"/>
    <xs:any minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

Negative wildcards

Sometimes it is desirable for a wildcard to not match certain names. For example, in Schema 1.0, ##other can be specified as the value of the namespace attribute on a wildcard (<any> or <anyAttribute>), indicating that this wildcard matches names in namespaces other than the target namespace of the current schema document. This feature has proven very useful in leaving extension points in schemas. But some scenarios cannot be met by ##other. XML Schema 1.1 introduced a few mechanisms to specify exceptions for wildcards. They can collectively be called negative wildcards.

Namespace exclusion

##other can only be used to exclude a single namespace: the target namespace. What if you want to exclude more than one namespace? For example, if version 1 of a schema uses target namespace ".../V1", and version 2 of the schema uses ".../V2", the author might wish to leave extension points that allow names in any namespace except those of either version 1 or version 2. Listing 6 shows how you can now express this in XML Schema 1.1.

Listing 6. Schema snippet - Namespace exclusion in XML Schema 1.1

<xs:complexType>
  <xs:sequence>
    ...
    <xs:any notNamespace=".../V1 .../V2" minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

With this new notNamespace attribute, you can specify namespaces that the wildcard should not match, which has the opposite meaning of the namespace attribute. Obviously, only one of these two attributes is needed on a wildcard. The notNamespace attribute expects a space-separated list of anyURI values.
Similar to the namespace attribute, notNamespace also allows the special symbols ##targetNamespace and ##local in the list, to indicate the target namespace and the empty namespace respectively.

QName exclusion

Wildcards are often used to match names other than those explicitly specified. Listing 7 shows an example of such a case.

Listing 7. Schema snippet - Wildcards matching names other than those explicitly specified

<xs:complexType name="referenceType">
  <xs:sequence>
    <xs:element name="uri" type="xs:anyURI"/>
    <xs:element name="description" type="xs:string" minOccurs="0"/>
    <xs:any minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

Each reference type requires a uri child element and an optional description child element, followed by any number of child elements for extensions. This seems to work fine; unfortunately, it also allows the following instance (Listing 8):

Listing 8. XML snippet - A reference element with multiple uri children

<reference>
  <uri>...</uri>
  <uri>...</uri>
</reference>

Now the application processing the reference element will have trouble deciding which uri child element to use. This is caused by the wildcard matching more names than intended. To fix this, you can use the new disallowed names concept introduced in XML Schema 1.1, as in Listing 9.

Listing 9. Schema snippet - Using disallowed names

<xs:complexType name="referenceType">
  <xs:sequence>
    <xs:element name="uri" type="xs:anyURI"/>
    <xs:element name="description" type="xs:string" minOccurs="0"/>
    <xs:any notQName="uri" minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

With the notQName attribute, the schema author can provide a list of QNames that the wildcard should not match. This updated type definition forbids the above instance with two uri child elements.

Exclusion of known siblings

Sometimes the schema author might wish to exclude a long list of names, which makes it difficult to use the notQName attribute to specify all those names. XML Schema 1.1 identified two cases that happen very often, and provided mechanisms to simplify them. If you define a complex type describing a person, there will be many elements in the type, for the name, date of birth, address, occupation, and so on.
If you also want to use a wildcard (or an open content) to allow additional information to be added, then you want to limit the wildcard to not match elements already declared in the type. To do this, you could use the notQName attribute and list all the known element names. Not only would the exclusion list be very long, it would also be difficult to maintain: if a new element is added to the type, you have to remember to add its name to notQName. In XML Schema 1.1, such an exclusion can be easily described using ##definedSibling (Listing 10):

Listing 10. Schema snippet - QName exclusion using ##definedSibling

<xs:complexType name="personType">
  <xs:sequence>
    <xs:element name="name" type="xs:string"/>
    <xs:element name="dateOfBirth" type="xs:date"/>
    <xs:element name="address" type="xs:string"/>
    <xs:element name="occupation" type="xs:string"/>
    <xs:any notQName="##definedSibling" minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

You can use the keyword ##definedSibling as a value in the notQName attribute to indicate that the wildcard does not match any element name that is already explicitly declared in the containing complex type. This includes those elements inherited (through extension) from the base type. Note that ##definedSibling does not apply to attribute wildcards (<anyAttribute>), because XML does not allow same-named attributes to appear on one element.

Exclusion of known globals

If future versions of a schema are expected to introduce new concepts (hence new elements or attributes) in the current target namespace, then it is important to have wildcards or open contents in complex types that allow the new names. At the same time, the wildcards should not allow concepts that are already known to the current version of the schema; otherwise, they are already included in the complex type definitions. Take the personType in Listing 10 above. If there is a global element declaration for person, because of the wildcard, the following XML snippet (Listing 11) is valid with respect to personType:

Listing 11.
XML snippet - A person element

<person>
  <name>...</name>
  <dateOfBirth>...</dateOfBirth>
  <address>...</address>
  <occupation>...</occupation>
  <person>...</person>
</person>

To avoid this, XML Schema 1.1 provides another special keyword for use in the notQName attribute: ##defined indicates that this wildcard does not match any name for which there is a global declaration. You can update the wildcard in the personType complex type as follows (Listing 12):

Listing 12. Schema snippet - personType definition

<xs:complexType name="personType">
  <xs:sequence>
    ...
    <xs:any notQName="##defined ##definedSibling" minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
</xs:complexType>

Now it will match neither the explicitly declared elements in personType nor any globally declared elements. As a result, the instance where a person element appears in another person element is disallowed. Provided that a global element is not declared for telephone, the updated personType allows a person element as in Listing 13.

Listing 13. XML snippet - A person element using known globals exclusion

<person>
  <name>...</name>
  <dateOfBirth>...</dateOfBirth>
  <address>...</address>
  <occupation>...</occupation>
  <telephone>...</telephone>
</person>

In the next version of the schema, if a telephone element is added, then this instance becomes invalid. This is working by design, to signal that personType in the new schema really should have been updated to include telephone, if it is expected to appear in person.

Open contents

In XML Schema 1.0, the sequence of sub-elements allowed by a complex type is completely determined by its content model—element declarations and wildcards organized in <sequence>, <choice>, and <all> model groups. XML Schema 1.1 extended this further by providing a mechanism to accept sub-elements other than those explicitly defined in the content model. This mechanism is commonly referred to as an open content.
To understand open contents, let us consider the XML snippet from Listing 14, which is an illustration of a sample single CD entry from a CD catalog.

Listing 14. XML snippet - CD entry from a CD catalog

<cd id="0001">
  <artist>Foo Faa</artist>
  <album>Blah Blah</album>
  <genre>Alternative</genre>
  <price>11.99</price>
  <currency>USD</currency>
  <release_date>01-01-2009</release_date>
  <song>
    <track>XML XML</track>
    <duration>1.45</duration>
  </song>
</cd>

Now look at a schema snippet (Listing 15) that describes the cd element in a flexible manner, and allows a schema author to augment the content of the cd element without the need to change the schema.

Listing 15. Schema snippet - CD entry definition

<xs:complexType name="CatalogEntry">
  <xs:sequence>
    <xs:any minOccurs="0" processContents="lax"/>
    <xs:element name="artist" type="xs:string"/>
    <xs:element name="album" type="xs:string"/>
    <xs:any minOccurs="0" processContents="lax"/>
    <xs:element name="price" type="xs:decimal"/>
    <xs:any minOccurs="0" processContents="lax"/>
    <xs:element name="release_date" type="xs:string"/>
    <xs:any minOccurs="0" maxOccurs="unbounded" processContents="lax"/>
  </xs:sequence>
  <xs:attribute name="id" type="xs:string"/>
</xs:complexType>

<xs:element name="cd" type="CatalogEntry"/>

As you can see from the schema in Listing 15, the optional elements appearing in the XML snippet (Listing 14) — namely genre, currency, and song — are specified in the schema through the many element wildcard definitions, <xs:any>, scattered through the complex type definition, CatalogEntry. This can make the schema hard to read and results in extra work, sometimes duplication, by requiring the schema author to insert wildcard declarations throughout the schema. Open content addresses this issue by providing default wildcards, which extend the content model to accept elements anywhere or only at the end of the content model. Open contents can be specified at the level of the schema or the complex type. Note that the open content wildcard is even weaker than the explicitly specified wildcards. That is, if an element in the sub-element sequence can match either an explicit wildcard or the open content wildcard, the explicit wildcard takes precedence.
Open content in complex type definitions

To specify open content on a complex type, include an <xs:openContent> child element in the complex type definition or in the <xs:restriction> and <xs:extension> children of the complex type definition. The <xs:openContent> element can contain optional id and mode attributes. The value of the mode attribute determines how the content model is extended. The value interleave indicates that elements matching the open content wildcard can be accepted anywhere in the sub-element sequence, whereas the value suffix indicates that elements can be accepted only at the end of the sequence. The mode attribute can also take a value none, which we will discuss in more detail in the next subsection. The child of the <xs:openContent> element is an element wildcard. In Listing 16, we illustrate how to define the cd element using the new open content feature in XML Schema 1.1. It shows how you can replace the element wildcards from the schema snippet in Listing 15 with an open content.

Listing 16. Schema snippet - CD entry using open content

<xs:complexType name="CatalogEntry">
  <xs:openContent mode="interleave">
    <xs:any namespace="##any" processContents="lax"/>
  </xs:openContent>
  <xs:sequence>
    <xs:element name="artist" type="xs:string"/>
    <xs:element name="album" type="xs:string"/>
    <xs:element name="price" type="xs:decimal"/>
    <xs:element name="release_date" type="xs:string"/>
  </xs:sequence>
  <xs:attribute name="id" type="xs:string"/>
</xs:complexType>

In Listing 16, the complex type definition contains a sequence of four child elements that are explicitly defined. In addition, the <xs:openContent> element allows elements from any namespace to appear anywhere within these child elements.

Open content at the schema document level

Schema authors often need to add the same kind of wildcard to a large number of complex types to allow future extension. This begs for the ability to specify a default open content that is applied to all the complex types. This reduces the effort to write and maintain the schema, as well as ensures that no complex type is accidentally left inextensible. To specify default open content, include an <xs:defaultOpenContent> child element under the <xs:schema> element.
Like the <xs:openContent> element, the <xs:defaultOpenContent> element contains an element wildcard and similar optional id and mode attributes, where mode takes either interleave or suffix as its value. In addition, the default open content element can contain an optional appliesToEmpty attribute. When the value of the appliesToEmpty attribute is true, the default open content is applied to all complex types in the current schema document. The value false indicates that the default open content does not apply if a complex type would otherwise have an empty content model. Another way to override the default behavior is to specify none as the value of mode on a complex type's <xs:openContent> element. A value of none indicates that this complex type does not make use of the default open content. In Listing 17, we modify the schema snippet from Listing 16 to use a default open content instead of an open content at the complex type level.

Listing 17. Schema snippet - CD entry using default open content

<xs:schema ...>
  ...
  <xs:defaultOpenContent mode="interleave">
    <xs:any namespace="##any" processContents="lax"/>
  </xs:defaultOpenContent>
  ...
  <xs:complexType name="CatalogEntry">
    <xs:sequence>
      <xs:element name="artist" type="xs:string"/>
      <xs:element name="album" type="xs:string"/>
      <xs:element name="price" type="xs:decimal"/>
      <xs:element name="release_date" type="xs:string"/>
    </xs:sequence>
    <xs:attribute name="id" type="xs:string"/>
  </xs:complexType>
  ...
</xs:schema>

The content model of the complex type definition, CatalogEntry, contains a sequence of four explicitly defined child elements as well as an open content courtesy of the <xs:defaultOpenContent> element defined at the schema level.

Default schema-document wide attributes

In XML Schema 1.0, schema authors have the ability to define a common set of attributes for a given complex type by using <xs:attributeGroup>. Listing 18 shows an example of an attribute group that defines two commonly used attributes: width and height.

Listing 18. Schema snippet - Common attributes defined using an attribute group

<xs:attributeGroup name="dimensionGroup">
  <xs:attribute name="width" type="xs:int"/>
  <xs:attribute name="height" type="xs:int"/>
</xs:attributeGroup>

If the set of attributes happened to be common to many complex type definitions, there was no easy way to indicate that fact in XML Schema 1.0, other than to include the attribute group reference in all complex type definitions.
Listing 19 illustrates how, in XML Schema 1.0, many complex type definitions can define the same set of attributes by referring to the same attribute group.

Listing 19. Schema snippet - Common attributes defined in multiple complex type definitions

<xs:attributeGroup name="dimensionGroup">
  <xs:attribute name="width" type="xs:int"/>
  <xs:attribute name="height" type="xs:int"/>
</xs:attributeGroup>

<xs:complexType name="dimensionType">
  ...
  <xs:attributeGroup ref="dimensionGroup"/>
</xs:complexType>

XML Schema 1.1 has introduced the notion of default attribute groups. On the <xs:schema> element, you can designate an attribute group definition as the default (using the defaultAttributes attribute). This attribute group definition will automatically be included in each complex type defined in the schema document. In Listing 20 below, both dimensionType and sofa will include the attributes defined in the attribute group dimensionGroup. There is no need to explicitly reference the attribute group in either complex type definition.

Listing 20. Schema snippet - Common attributes defined with a default attribute group

<xs:schema ... defaultAttributes="dimensionGroup">
  ...
  <xs:complexType name="dimensionType">
    ...
  </xs:complexType>
  <xs:complexType name="sofa">
    ...
  </xs:complexType>
  ...
</xs:schema>

If a complex type definition wants to override the default behavior (that is, you do not want to include the attribute group), you can set the defaultAttributesApply attribute on the <xs:complexType> element to false. In Listing 21, the <xs:complexType> named person overrides the default behavior (by indicating that it does not want to include the list of default attributes).

Listing 21. Schema snippet - Overriding the behavior

<xs:schema ... defaultAttributes="dimensionGroup">
  ...
  <xs:complexType name="person" defaultAttributesApply="false">
    ...
  </xs:complexType>
  ...
</xs:schema>

Default attribute groups make it easier to specify attributes which every complex type in a schema should accept (for example, xml:id and xml:lang, or an attribute wildcard).

Conclusion

In this article, we discussed some of the versioning features in XML Schema 1.1, highlighting the changes to wildcard support and the addition of open content, which allow XML Schema authors to write schemas that can be compatible with future versions.
In Part 4 of the series, we will explore more versioning features such as conditional inclusion and component override.

Resources

- XML Schema 1.1, Part 1 (developerWorks; December 2008): Start your exploration with an overview of the key improvements over XML Schema 1.0 and an in-depth look at datatypes.
- XML Schema 1.1, Part 2: An introduction to XML Schema 1.1: Co-occurrence constraints using XPath 2.0 (Neil Delima, Sandy Gao, Michael Glavassevich, Khaled Noaman; developerWorks; January 2009): Discover the co-constraint mechanisms introduced by XML Schema 1.1.
- XML 1.0 specification (Fourth Edition).
- Guide to Versioning XML Languages using XML Schema 1.1: Explore a guide for schema authors seeking help with versioning their schemas.
- W3C Workshop on XML Schema 1.0 User Experiences: Dig into a workshop where participants shared their user experience with XML Schema 1.0.
- New to XML: Visit this great starting point for resources available to XML developers on IBM developerWorks.
- XML Parser for Java (Xerces2-J): Try this parser distributed by Apache.
http://www.ibm.com/developerworks/library/x-xml11pt3/
If you have ever learned Java, or tried to, you will know that the static method is a concept that creates confusion to some extent. In this article I will try to demystify it. The following pointers will be covered:

- Java Static Method vs Instance Method
- Java Static Method
- Restrictions on Static Methods
- Why is the Java main method static?

So let us get started.

Java Static Method vs Instance Method

Instance Methods

Methods that require an object of their class to be created before they can be called are known as instance methods. To invoke an instance method, we have to create an object of the class in which it is defined. Sample:

public void sample(String name) {
    // Execution code....
}
// The return type can be int, float, String,
// or even a user-defined data type.

Static Methods

Static methods do not depend on an object of the class being created. You can refer to them through the class name itself, rather than through an object of the class. Sample:

public static void example(String name) {
    // code to be executed....
}
// Note the static modifier in the declaration.
// As before, the return type can be int, float, String, or a user-defined data type.

Let us move on to the next topic of this article, the Java static method. Sample:

// Java program to demonstrate the use of a static method.
class Student {
    int rollno;
    String name;
    static String college = "ITS";

    // static method to change the value of the static variable
    static void change() {
        college = "BBDIT";
    }

    // constructor to initialize the instance variables
    Student(int r, String n) {
        rollno = r;
        name = n;
    }

    // method to display values
    void display() {
        System.out.println(rollno + " " + name + " " + college);
    }
}

// Test class to create objects and display their values
public class TestStaticMethod {
    public static void main(String args[]) {
        Student.change(); // calling the static method through the class name
        // creating objects
        Student s1 = new Student(111, "Karan");
        Student s2 = new Student(222, "Aryan");
        Student s3 = new Student(333, "Sonoo");
        // calling the display method
        s1.display();
        s2.display();
        s3.display();
    }
}

Output

111 Karan BBDIT
222 Aryan BBDIT
333 Sonoo BBDIT

Let us continue with the next part of this article.

Restrictions on Java Static Methods

There are two main restrictions:

- A static method cannot use non-static data members or call non-static methods directly.
- this and super cannot be used in a static context.

Let us move on to the final bit of this article.

Why is the Java main method static?

Because no object is required to call a static method. If main() were a non-static method, the JVM would first have to create an object and then call the main() method on it, which would require extra memory allocation.
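To make both the invocation styles and the first restriction concrete, here is a small runnable sketch. The Account class and its members are my own, for illustration; they are not from the article.

```java
// Illustrative class (not from the article): a static method has no 'this',
// so it cannot reach instance state unless an instance is passed to it.
class Account {
    private int balance;          // non-static (instance) data member
    static int accountCount = 0;  // static data member, shared by all instances

    Account(int opening) {
        balance = opening;
        accountCount++;
    }

    // Legal: a static method may freely use static members.
    static int getAccountCount() {
        return accountCount;
    }

    // A static method cannot simply write "return balance;" -- there is no
    // 'this'. It must receive an instance explicitly:
    static int balanceOf(Account a) {
        return a.balance;
    }

    public static void main(String[] args) {
        Account a = new Account(100);                  // object created first
        System.out.println(Account.getAccountCount()); // static call, no object needed
        System.out.println(Account.balanceOf(a));      // instance handed in explicitly
    }
}
```

Calling `Account.getAccountCount()` works without any object, which is exactly why `main()` itself can be invoked by the JVM before any object of your class exists.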
https://www.edureka.co/blog/java-static-method/
Introduction and Background

Yesterday, I was thinking about writing another post on C# programming; I admit I am addicted to programming now. I wanted to write a blog post, and I found that I already had a topic on my "write list": an email client written in C#. Rather than just writing about the topic, I added a bit more programming to it to support another feature. In this blog post, I will share the basics of developing an email client in C# using the Windows Presentation Foundation framework. The source code is short and compact, so building your own basic client takes almost no time, and adding a few more features turns it into the SMTP + IMAP application we will use here to develop a simple email client.

Understanding the protocols

Before I dig deeper, start writing the source code, and explain the methods used to build the application, I will explain the protocols involved and how they are supported in the .NET framework. For more packages and namespaces that manage the networking capabilities of an application, read about the System.Net namespaces on MSDN. Rather than implementing the protocol myself, I will use a library to get started in no time. A very long time ago, I came upon a library, "ImapX," which is a wonderful tool for implementing the IMAP protocol in C# applications. You can get ImapX from the NuGet galleries by executing the following NuGet package manager command:

Install-Package Imapx

This will add the package. Remember, ImapX works only in selected environments, not all of them; you should read more about it on the CodePlex website. The IMAP protocol is used to fetch emails and read them in place, rather than downloading the entire mailbox and storing it on your own device. Now it is time to continue with the programming part and develop the application itself. First of all, create a new WPF application. It is always a good approach to separate the concerns and different sections of your application.
Our application will have the following modules:

1. Authentication
2. Folder view
3. Message view
4. Create a new message

Separating these concerns will help us build the application in a much more agile way: when we have to update a feature or create a new one, doing so won't take long. If you hard-code everything in the same page, however, things get very difficult to change. In this post, I will also give you a few tips to ensure that things are not made more difficult than they have to be.

Managing the "MainWindow"

Every WPF application contains a MainWindow window, which is the default window rendered on the screen. My recommendation is that you create only a Frame object in that window, and nothing else. That frame object will be used to navigate to the multiple pages and different views of the application, depending on the user and application interactions. An event is used to notify the application when a new message arrives.
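As a rough, hypothetical sketch of such a notification object (the type and member names here are mine, not ImapX's API), a standard .NET event can model the "new message" signal:

```csharp
using System;

namespace EmailClientSketch
{
    // Hypothetical event-args type carrying the new message's subject.
    public class NewMessageEventArgs : EventArgs
    {
        public string Subject { get; private set; }
        public NewMessageEventArgs(string subject) { Subject = subject; }
    }

    // Hypothetical watcher that raises NewMessage when a message arrives.
    public class MailboxWatcher
    {
        public event EventHandler<NewMessageEventArgs> NewMessage;

        // In a real client this would be triggered by the IMAP library's
        // notification mechanism (for example, an IDLE callback),
        // not called by hand as it is here.
        public void OnMessageArrived(string subject)
        {
            var handler = NewMessage;
            if (handler != null)
            {
                handler(this, new NewMessageEventArgs(subject));
            }
        }
    }
}
```

A view can then subscribe, for example `watcher.NewMessage += (s, e) => lblStatus.Text = "New mail: " + e.Subject;`, and update the folder or message view accordingly.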
http://www.c-sharpcorner.com/article/building-custom-email-client-in-wpf-using-C-Sharp/
To see how Windows Forms can be used to create a more realistic Windows application, in this section you'll build a utility named FileCopier that copies all files from a group of directories selected by the user to a single target directory or device, such as a floppy or backup hard drive on the company network. Although you won't implement every possible feature, you can imagine programming this application so that you can mark dozens of files and have them copied to multiple disks, packing them as tightly as possible. You might even extend the application to compress the files. The true goal of this example is for you to exercise many of the C# skills learned in earlier chapters and to explore the Windows.Forms namespace. For the purposes of this example and to keep the code simple, focus on the user interface and the steps needed to wire up its various controls. The final application UI is shown in Figure 13-7.

The user interface for FileCopier consists of the following controls:

- Labels: Source Files and Target Directory
- Buttons: Clear, Copy, Delete, and Cancel
- An "Overwrite if exists" checkbox
- A text box displaying the path of the selected target directory
- Two large tree-view controls, one for available source directories and one for available target devices and directories

The goal is to allow the user to check files (or entire directories) in the left tree view (source). If the user presses the Copy button, the files checked on the left side will be copied to the Target Directory specified in the right-hand control. If the user presses Delete, the checked files will be deleted. The rest of this chapter implements a number of FileCopier features in order to demonstrate the fundamental features of Windows Forms. The first task is to open a new project named FileCopier. The IDE puts you into the Designer, in which you can drag widgets onto the form. You can expand the form to the size you want.
Drag, drop, and set the Name properties of labels (lblSource, lblTarget, lblStatus), buttons (btnClear, btnCopy, btnDelete, btnCancel), a checkbox (chkOverwrite), a textbox (txtTargetDir), and tree-view controls (tvwSource, tvwTargetDir) from the Toolbox onto your form until it looks more or less like the one shown in Figure 13-8. You want checkboxes next to the directories and files in the source selection window but not in the target (where only one directory will be chosen). Set the CheckBoxes property on the left TreeView control, tvwSource, to true, and set the property on the right TreeView control, tvwTargetDir, to false. To do so, click each control in turn and adjust the values in the Properties window. Once this is done, double-click the Cancel button to create its event handler (when you double-click a control, Visual Studio .NET creates an event handler for that object; one particular event is the default, and Visual Studio .NET opens that event's event handler): protected void btnCancel_Click (object sender, System.EventArgs e) { Application.Exit( ); } You can set many different events for the TreeView control. Do so programmatically by clicking the Events button in the Properties window. From there you can create new handlers, just by filling in a new event-handler method name. Visual Studio .NET will register the event handler and open the editor for the code, where it will create the header and put the cursor in an empty method body. So much for the easy part. Visual Studio .NET will generate code to set up the form and initialize all the controls, but it won't fill the TreeView controls. That you must do by hand. The two TreeView controls work identically, except that the left control, tvwSource, lists the directories and files, whereas the right control, tvwTargetDir, lists only directories. The CheckBoxes property on tvwSource is set to true, and on tvwTargetDir it is set to false.
Also, although tvwSource will allow multiselect, which is the default for TreeView controls, you will enforce single selection for tvwTargetDir. You'll factor the common code for both TreeView controls into a shared method FillDirectoryTree and pass in the control with a flag indicating whether to get the files. You'll call this method from the Form's constructor, once for each of the two controls: FillDirectoryTree(tvwSource, true); FillDirectoryTree(tvwTargetDir, false); The FillDirectoryTree implementation names the TreeView parameter tvw. This will represent the source TreeView and the destination TreeView in turn. You'll need some classes from System.IO, so add a using System.IO; statement at the top of Form1.cs. Next, add the method declaration to Form1.cs: private void FillDirectoryTree(TreeView tvw, bool isSource) The TreeView control has a property, Nodes, which gets a TreeNodeCollection object. The TreeNodeCollection is a collection of TreeNode objects, each of which represents a node in the tree. Start by emptying that collection: tvw.Nodes.Clear( ); You are ready to fill the TreeView's Nodes collection by recursing through the directories of all the drives. First, get all the logical drives on the system. To do so, call a static method of the Environment object, GetLogicalDrives( ). The Environment class provides information about and access to the current platform environment. You can use the Environment object to get the machine name, OS version, system directory, and so forth, from the computer on which you are running your program. string[] strDrives = Environment.GetLogicalDrives( ); GetLogicalDrives( ) returns an array of strings, each of which represents the root directory of one of the logical drives. You will iterate over that collection, adding nodes to the TreeView control as you go. foreach (string rootDirectoryName in strDrives) { You should process each drive within the foreach loop. 
You can add these two lines to limit the search to a particular drive (this is good if you have several large drives or some network drives): if (rootDirectoryName != @"C:\") continue; The very first thing you need to determine is whether the drive is ready. My hack for that is to get the list of top-level directories from the drive by calling GetDirectories( ) on a DirectoryInfo object I created for the root directory: DirectoryInfo dir = new DirectoryInfo(rootDirectoryName); dir.GetDirectories( ); The DirectoryInfo class exposes instance methods for creating, moving, and enumerating through directories, their files, and their subdirectories. The DirectoryInfo class is covered in detail in Chapter 21. The GetDirectories( ) method returns a list of directories, but throw this list away. You are calling it here only to generate an exception if the drive is not ready. Wrap the call in a try block and take no action in the catch block. The effect is that if an exception is thrown, the drive is skipped. Once you know that the drive is ready, create a TreeNode to hold the root directory of the drive and add that node to the TreeView control: TreeNode ndRoot = new TreeNode(rootDirectoryName); tvw.Nodes.Add(ndRoot); You now want to recurse through the directories, so you call into a new routine, GetSubDirectoryNodes( ), passing in the root node, the name of the root directory, and the flag indicating whether you want files: if (isSource) { GetSubDirectoryNodes(ndRoot, ndRoot.Text, true); } else { GetSubDirectoryNodes(ndRoot, ndRoot.Text, false); } You are probably wondering why you need to pass in ndRoot.Text if you're already passing in ndRoot. Patience; you will see why this is needed when you recurse back into GetSubDirectoryNodes. You are now finished with FillDirectoryTree( ). See Example 13-3 for a complete listing of this method.
GetSubDirectoryNodes( ) begins by once again calling GetDirectories( ), this time stashing away the resulting array of DirectoryInfo objects: private void GetSubDirectoryNodes( TreeNode parentNode, string fullName, bool getFileNames) { DirectoryInfo dir = new DirectoryInfo(fullName); DirectoryInfo[] dirSubs = dir.GetDirectories( ); Notice that the node passed in is named parentNode. The current level of nodes will be considered children to the node passed in. This is how you map the directory structure to the hierarchy of the tree view. Iterate over each subdirectory, skipping any that are marked Hidden: foreach (DirectoryInfo dirSub in dirSubs) { if ( (dirSub.Attributes & FileAttributes.Hidden) != 0 ) { continue; } FileAttributes is an enum; other possible values include Archive, Compressed, Directory, Encrypted, Hidden, Normal, ReadOnly, etc. Create a TreeNode with the directory name and add it to the Nodes collection of the node passed in to the method (parentNode): TreeNode subNode = new TreeNode(dirSub.Name); parentNode.Nodes.Add(subNode); Now recurse back into the GetSubDirectoryNodes( ) method, passing in the node you just created as the new parent, the full path as the full name of the parent, and the flag: GetSubDirectoryNodes(subNode,dirSub.FullName,getFileNames); Once you've recursed through the subdirectories, it is time to get the files for the directory if the getFileNames flag is true. To do so, call the GetFiles( ) method on the DirectoryInfo object. An array of FileInfo objects is returned: if (getFileNames) { // Get any files for this node. FileInfo[] files = dir.GetFiles( ); The FileInfo class (covered in Chapter 21) provides instance methods for manipulating files. You can now iterate over this collection, accessing the Name property of the FileInfo object and passing that name to the constructor of a TreeNode, which you then add to the parent node's Nodes collection (thus creating a child node). 
There is no recursion this time because files do not have subdirectories: foreach (FileInfo file in files) { TreeNode fileNode = new TreeNode(file.Name); parentNode.Nodes.Add(fileNode); } That's all it takes to fill the two tree views. See Example 13-3 for a complete listing of this method. You must handle a number of events in this example. First, the user might click Cancel, Copy, Clear, or Delete. Second, the user might click one of the checkboxes in the left TreeView or one of the nodes in the right TreeView. Let's consider the clicks on the TreeViews first, as they are the more interesting, and potentially the more challenging. There are two TreeView objects, each with its own event handler. Consider the source TreeView object first. The user checks the files and directories he wants to copy from. Each time the user clicks a file or directory, a number of events are raised. The event you must handle is AfterCheck. To do so, implement a custom event-handler method you will create and name tvwSource_AfterCheck( ). Visual Studio .NET will wire this to the event handler, or if you are not using the integrated development environment, you must do so yourself. tvwSource.AfterCheck += new System.Windows.Forms.TreeViewEventHandler (this.tvwSource_AfterCheck); The implementation of AfterCheck( ) delegates the work to a recursable method named SetCheck( ) that you'll also write. To add the AfterCheck event, select the tvwSource control, click the Events icon in the Properties window, then double-click on AfterCheck. This will add the event, wire it up, and place you in the code editor where you can add the body of the method: private void tvwSource_AfterCheck ( object sender, System.Windows.Forms.TreeViewEventArgs e) { SetCheck(e.Node,e.Node.Checked); } The event handler passes in the sender object and an object of type TreeViewEventArgs. It turns out that you can get the node from this TreeViewEventArgs object (e). 
Call SetCheck( ), passing in the node and the state of whether the node has been checked. Each node has a Nodes property, which gets a TreeNodeCollection containing all the subnodes. SetCheck( ) recurses through the current node's Nodes collection, setting each subnode's check mark to match that of the node that was checked. In other words, when you check a directory, all its files and subdirectories are checked, recursively, all the way down. For each TreeNode in the Nodes collection, check to see if it is a leaf. A node is a leaf if its own Nodes collection has a count of zero. If it is a leaf, set its check property to whatever was passed in as a parameter. If it is not a leaf, recurse: private void SetCheck(TreeNode node, bool check) { node.Checked = check; foreach (TreeNode n in node.Nodes) { if (n.Nodes.Count == 0) { n.Checked = check; } else { SetCheck(n, check); } } } This propagates the checkmark (or clears the checkmark) down through the entire structure. In this way, the user can indicate that he wants to select all the files in all the subdirectories by clicking a single directory. The event handler for the target TreeView is somewhat trickier. The event itself is AfterSelect. (Remember that the target TreeView does not have checkboxes.) This time, you want to take the one directory chosen and put its full path into the text box at the upper-left corner of the form. To do so, you must work your way up through the nodes, finding the name of each parent directory and building the full path: private void tvwTargetDir_AfterSelect ( object sender, System.Windows.Forms.TreeViewEventArgs e) { string theFullPath = GetParentString(e.Node); We'll look at GetParentString( ) in just a moment. Once you have the full path, you must lop off the backslash (if any) on the end, and then you can fill the text box: if (theFullPath.EndsWith("\\")) { theFullPath = theFullPath.Substring(0,theFullPath.Length-1); } txtTargetDir.Text = theFullPath;
To do so, it recurses upward through the path, adding the backslash after any node that is not a leaf: private string GetParentString(TreeNode node) { if(node.Parent == null) { return node.Text; } else { return GetParentString(node.Parent) + node.Text + (node.Nodes.Count == 0 ? "" : "\\"); } } The recursion stops when there is no parent; that is, when you hit the root directory. Given the SetCheck( ) method developed earlier, handling the Clear button's click event is trivial: protected void btnClear_Click (object sender, System.EventArgs e) { foreach (TreeNode node in tvwSource.Nodes) { SetCheck(node, false); } } Just call the SetCheck( ) method on the root nodes and tell them to recursively uncheck all their contained nodes. Now that you can check the files and pick the target directory, you're ready to handle the Copy button-click event. The very first thing you need to do is to get a list of which files were selected. What you want is an array of FileInfo objects, but you have no idea how many objects will be in the list. This is a perfect job for ArrayList. Delegate responsibility for filling the list to a method called GetFileList( ): private void btnCopy_Click ( object sender, System.EventArgs e) { ArrayList fileList = GetFileList( ); Let's pick that method apart before returning to the event handler. Start by instantiating a new ArrayList object to hold the strings representing the names of all the files selected: private ArrayList GetFileList( ) { ArrayList fileNames = new ArrayList( ); To get the selected filenames, you can walk through the source TreeView control: foreach (TreeNode theNode in tvwSource.Nodes) { GetCheckedFiles(theNode, fileNames); } To see how this works, step into the GetCheckedFiles( ) method. This method is pretty simple: it examines the node it was handed. If that node has no children (node.Nodes.Count == 0), it is a leaf. 
If that leaf is checked, get the full path (by calling GetParentString( ) on the node) and add it to the ArrayList passed in as a parameter: private void GetCheckedFiles(TreeNode node, ArrayList fileNames) { if (node.Nodes.Count == 0) { if (node.Checked) { string fullPath = GetParentString(node); fileNames.Add(fullPath); } } If the node is not a leaf, recurse down the tree, finding the child nodes: else { foreach (TreeNode n in node.Nodes) { GetCheckedFiles(n,fileNames); } } } This will return the ArrayList filled with all the filenames. Back in GetFileList( ), use this ArrayList of filenames to create a second ArrayList, this time to hold the actual FileInfo objects: ArrayList fileList = new ArrayList( ); Notice that once again you do not tell the ArrayList constructor what kind of object it will hold. This is one of the advantages of a rooted type-system: the collection only needs to know that it has some kind of Object; because all types are derived from Object, the list can hold FileInfo objects as easily as it can hold string objects. You can now iterate through the filenames in ArrayList, picking out each name and instantiating a FileInfo object with it. You can detect if it is a file or a directory by calling the Exists property, which will return false if the File object you created is actually a directory. If it is a File, you can add it to the new ArrayList: foreach (string fileName in fileNames) { FileInfo file = new FileInfo(fileName); if (file.Exists) { fileList.Add(file); } } You want to work your way through the list of selected files in large to small order so that you can pack the target disk as tightly as possible. You must therefore sort the ArrayList. You can call its Sort( ) method, but how will it know how to sort File objects? Remember, the ArrayList has no special knowledge about its contents. To solve this, you must pass in an IComparer interface. 
We'll create a class called FileComparer that will implement this interface and that will know how to sort FileInfo objects: public class FileComparer : IComparer { This class has only one method, Compare( ), which takes two objects as arguments: public int Compare (object f1, object f2) { The normal approach is to return 1 if the first object (f1) is larger than the second (f2), to return -1 if the opposite is true, and to return 0 if they are equal. In this case, however, you want the list sorted from big to small, so you should reverse the return values. To test the length of the FileInfo object, you must cast the Object parameters to FileInfo objects (which is safe, as you know this method will never receive anything else): FileInfo file1 = (FileInfo) f1; FileInfo file2 = (FileInfo) f2; if (file1.Length > file2.Length) { return -1; } if (file1.Length < file2.Length) { return 1; } return 0; } } Returning to GetFileList( ), you were about to instantiate the IComparer reference and pass it to the Sort( ) method of fileList: IComparer comparer = (IComparer) new FileComparer( ); fileList.Sort(comparer); That done, you can return fileList to the calling method: return fileList; The calling method was btnCopy_Click. Remember, you went off to GetFileList( ) in the first line of the event handler! protected void btnCopy_Click (object sender, System.EventArgs e) { ArrayList fileList = GetFileList( ); At this point, you've returned with a sorted list of File objects, each representing a file selected in the source TreeView. 
You can now iterate through the list, copying the files and updating the UI: foreach (FileInfo file in fileList) { try { lblStatus.Text = "Copying " + txtTargetDir.Text + "\\" + file.Name + "..."; Application.DoEvents( ); file.CopyTo(txtTargetDir.Text + "\\" + file.Name,chkOverwrite.Checked); } catch (Exception ex) { MessageBox.Show(ex.Message); } } lblStatus.Text = "Done."; Application.DoEvents( ); As you go, write the progress to the lblStatus label and call Application.DoEvents( ) to give the UI an opportunity to redraw. Then call CopyTo( ) on the file, passing in the target directory obtained from the text field, and a Boolean flag indicating whether the file should be overwritten if it already exists. You'll notice that the flag you pass in is the value of the chkOverwrite checkbox. The Checked property evaluates true if the checkbox is checked and false if not. The copy is wrapped in a try block because you can anticipate any number of things going wrong when copying files. For now, handle all exceptions by popping up a dialog box with the error; you might want to take corrective action in a commercial application. That's it; you've implemented file copying! The code to handle the delete event is even simpler. The very first thing you do is ask the user if she is sure she wants to delete the files: protected void btnDelete_Click (object sender, System.EventArgs e) { System.Windows.Forms.DialogResult result = MessageBox.Show( "Are you quite sure?", // msg "Delete Files", // caption MessageBoxButtons.OKCancel, // buttons MessageBoxIcon.Exclamation, // icons MessageBoxDefaultButton.Button2); // default button You can use the MessageBox static Show( ) method, passing in the message you want to display, the title "Delete Files" as a string, and flags. MessageBoxButtons.OKCancel asks for two buttons: OK and Cancel. MessageBoxIcon.Exclamation indicates that you want to display an exclamation mark icon.
MessageBoxDefaultButton.Button2 sets the second button (Cancel) as the default choice. When the user chooses OK or Cancel, the result is passed back as a System.Windows.Forms.DialogResult enumerated value. You can test this value to see if the user selected OK: if (result == System.Windows.Forms.DialogResult.OK) { If so, you can get the list of fileNames and iterate through it, deleting each as you go: ArrayList fileNames = GetFileList( ); foreach (FileInfo file in fileNames) { try { lblStatus.Text = "Deleting " + txtTargetDir.Text + "\\" + file.Name + "..."; Application.DoEvents( ); file.Delete( ); } catch (Exception ex) { MessageBox.Show(ex.Message); } } lblStatus.Text = "Done."; Application.DoEvents( ); This code is identical to the copy code, except that the method that is called on the file is Delete( ). Example 13-3 provides the commented source code for this example. using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; using System.IO; /// <remarks> /// File Copier - WinForms demonstration program /// (c) Copyright 2001 Liberty Associates, Inc. /// </remarks> namespace FileCopier { /// <summary> /// Form demonstrating Windows Forms implementation /// </summary> public class Form1 : System.Windows.Forms.Form { // < declarations of Windows widgets cut here > /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.Container components = null; /// <summary> /// internal class which knows how to compare /// two files we want to sort large to small, /// so reverse the normal return values.
/// </summary> public class FileComparer : IComparer { public int Compare (object f1, object f2) { FileInfo file1 = (FileInfo) f1; FileInfo file2 = (FileInfo) f2; if (file1.Length > file2.Length) { return -1; } if (file1.Length < file2.Length) { return 1; } return 0; } } public Form1( ) { // // Required for Windows Form Designer support // InitializeComponent( ); // fill the source and target directory trees FillDirectoryTree(tvwSource, true); FillDirectoryTree(tvwTargetDir, false); } /// <summary> /// Fill the directory tree for either the Source or /// Target TreeView. /// </summary> private void FillDirectoryTree( TreeView tvw, bool isSource) { // Populate tvwSource, the Source TreeView, // with the contents of // the local hard drive. // First clear all the nodes. tvw.Nodes.Clear( ); // Get the logical drives and put them into the // root nodes. Fill an array with all the // logical drives on the machine. string[] strDrives = Environment.GetLogicalDrives( ); // Iterate through the drives, adding them to the tree. // Use a try/catch block, so if a drive is not ready, // e.g. an empty floppy or CD, // it will not be added to the tree. foreach (string rootDirectoryName in strDrives) { if (rootDirectoryName != @"C:\") continue; try { // Fill an array with all the first level // subdirectories. If the drive is // not ready, this will throw an exception. DirectoryInfo dir = new DirectoryInfo(rootDirectoryName); dir.GetDirectories( ); TreeNode ndRoot = new TreeNode(rootDirectoryName); // Add a node for each root directory. tvw.Nodes.Add(ndRoot); // Add subdirectory nodes. // If Treeview is the source, // then also get the filenames. if (isSource) { GetSubDirectoryNodes( ndRoot, ndRoot.Text, true); } else { GetSubDirectoryNodes( ndRoot, ndRoot.Text, false); } } // Catch any errors such as // Drive not ready. 
        catch (Exception e)
        {
            MessageBox.Show(e.Message);
        }
    }
}  // close for FillSourceDirectoryTree

/// <summary>
/// Gets all the subdirectories below the passed-in directory node
/// and adds them to the directory tree. The parameters passed in
/// are the parent node for this subdirectory, the full path name
/// of this subdirectory, and a Boolean to indicate whether or not
/// to get the files in the subdirectory.
/// </summary>
private void GetSubDirectoryNodes(
    TreeNode parentNode, string fullName, bool getFileNames)
{
    DirectoryInfo dir = new DirectoryInfo(fullName);
    DirectoryInfo[] dirSubs = dir.GetDirectories();

    // Add a child node for each subdirectory.
    foreach (DirectoryInfo dirSub in dirSubs)
    {
        // do not show hidden folders
        if ((dirSub.Attributes & FileAttributes.Hidden) != 0)
        {
            continue;
        }

        // Each directory contains the full path. We need to split it
        // on the backslashes and only use the last node in the tree.
        // (The backslash must be doubled since it is normally an
        // escape character.)
        TreeNode subNode = new TreeNode(dirSub.Name);
        parentNode.Nodes.Add(subNode);

        // Call GetSubDirectoryNodes recursively.
        GetSubDirectoryNodes(subNode, dirSub.FullName, getFileNames);
    }

    if (getFileNames)
    {
        // Get any files for this node.
        FileInfo[] files = dir.GetFiles();

        // After placing the nodes,
        // now place the files in that subdirectory.
        foreach (FileInfo file in files)
        {
            TreeNode fileNode = new TreeNode(file.Name);
            parentNode.Nodes.Add(fileNode);
        }
    }
}

// < boilerplate code cut here >

/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
    Application.Run(new Form1());
}

/// <summary>
/// Create an ordered list of all the selected files,
/// then copy them to the target directory.
/// </summary>
private void btnCopy_Click(object sender, System.EventArgs e)
{
    // get the list
    ArrayList fileList = GetFileList();

    // copy the files
    foreach (FileInfo file in fileList)
    {
        try
        {
            // update the label to show progress
            lblStatus.Text = "Copying " + txtTargetDir.Text +
                "\\" + file.Name + "...";
            Application.DoEvents();

            // copy the file to its destination location
            file.CopyTo(txtTargetDir.Text + "\\" + file.Name,
                chkOverwrite.Checked);
        }
        catch (Exception ex)
        {
            // you may want to do more than just show the message
            MessageBox.Show(ex.Message);
        }
    }
    lblStatus.Text = "Done.";
    Application.DoEvents();
}

/// <summary>
/// On cancel, exit the application.
/// </summary>
private void btnCancel_Click(object sender, System.EventArgs e)
{
    Application.Exit();
}

/// <summary>
/// Tell the root of each tree to uncheck all the nodes below it.
/// </summary>
private void btnClear_Click(object sender, System.EventArgs e)
{
    // get the topmost node for each drive
    // and tell it to clear recursively
    foreach (TreeNode node in tvwSource.Nodes)
    {
        SetCheck(node, false);
    }
}

/// <summary>
/// Check that the user really does want to delete,
/// then make a list and delete each file in turn.
/// </summary>
private void btnDelete_Click(object sender, System.EventArgs e)
{
    // ask them if they are sure
    System.Windows.Forms.DialogResult result =
        MessageBox.Show(
            "Are you quite sure?",              // msg
            "Delete Files",                     // caption
            MessageBoxButtons.OKCancel,         // buttons
            MessageBoxIcon.Exclamation,         // icons
            MessageBoxDefaultButton.Button2);   // default button

    // if they are sure...
    if (result == System.Windows.Forms.DialogResult.OK)
    {
        // iterate through the list and delete them.
        // get the list of selected files
        ArrayList fileNames = GetFileList();
        foreach (FileInfo file in fileNames)
        {
            try
            {
                // update the label to show progress
                lblStatus.Text = "Deleting " + txtTargetDir.Text +
                    "\\" + file.Name + "...";
                Application.DoEvents();

                // Danger Will Robinson!
                file.Delete();
            }
            catch (Exception ex)
            {
                // you may want to do more than just show the message
                MessageBox.Show(ex.Message);
            }
        }
        lblStatus.Text = "Done.";
        Application.DoEvents();
    }
}

/// <summary>
/// Get the full path of the chosen directory
/// and copy it to txtTargetDir.
/// </summary>
private void tvwTargetDir_AfterSelect(
    object sender, System.Windows.Forms.TreeViewEventArgs e)
{
    // get the full path for the selected directory
    string theFullPath = GetParentString(e.Node);

    // if it is not a leaf, it will end with a backslash;
    // remove the backslash
    if (theFullPath.EndsWith("\\"))
    {
        theFullPath = theFullPath.Substring(0, theFullPath.Length - 1);
    }

    // insert the path in the text box
    txtTargetDir.Text = theFullPath;
}

/// <summary>
/// Mark each node below the current one
/// with the current value of checked.
/// </summary>
private void tvwSource_AfterCheck(object sender,
    System.Windows.Forms.TreeViewEventArgs e)
{
    // Call a recursive method.
    // e.Node is the node which was checked by the user.
    // The state of the check mark is already changed
    // by the time you get here.
    // Therefore, we want to pass along the state of e.Node.Checked.
    SetCheck(e.Node, e.Node.Checked);
}

/// <summary>
/// Recursively set or clear check marks.
/// </summary>
private void SetCheck(TreeNode node, bool check)
{
    // (body reconstructed from how it is called above)
    // set this node's check mark
    node.Checked = check;

    // recurse into any child nodes
    foreach (TreeNode n in node.Nodes)
    {
        SetCheck(n, check);
    }
}

/// <summary>
/// Given a node and an array list,
/// fill the list with the names of all the checked files.
/// </summary>
// Fill the ArrayList with the full paths of
// all the files checked
private void GetCheckedFiles(TreeNode node, ArrayList fileNames)
{
    // if this is a leaf...
    if (node.Nodes.Count == 0)
    {
        // if the node was checked...
        if (node.Checked)
        {
            // get the full path and add it to the arrayList
            string fullPath = GetParentString(node);
            fileNames.Add(fullPath);
        }
    }
    else // if this node is not a leaf
    {
        // recurse into each child node
        foreach (TreeNode n in node.Nodes)
        {
            GetCheckedFiles(n, fileNames);
        }
    }
}

/// <summary>
/// Given a node, return the full path name.
/// </summary>
private string GetParentString(TreeNode node)
{
    // if this is the root node (c:\) return the text
    if (node.Parent == null)
    {
        return node.Text;
    }
    else
    {
        // recurse up and get the path, then add this node and a slash;
        // if this node is the leaf, don't add the slash
        return GetParentString(node.Parent) + node.Text +
            (node.Nodes.Count == 0 ? "" : "\\");
    }
}

/// <summary>
/// Shared by delete and copy:
/// creates an ordered list of all the selected files.
/// </summary>
private ArrayList GetFileList()
{
    // create an unsorted array list of the full file names
    ArrayList fileNames = new ArrayList();

    // fill the fileNames ArrayList with the
    // full path of each file to copy
    foreach (TreeNode theNode in tvwSource.Nodes)
    {
        GetCheckedFiles(theNode, fileNames);
    }

    // Create a list to hold the FileInfo objects
    ArrayList fileList = new ArrayList();

    // for each of the file names, create a FileInfo object and,
    // if the file exists on disk, add it to the list
    // (this loop was garbled in the source and is reconstructed)
    foreach (string fileName in fileNames)
    {
        FileInfo file = new FileInfo(fileName);
        if (file.Exists)
        {
            fileList.Add(file);
        }
    }

    // Create an instance of the IComparer interface
    IComparer comparer = (IComparer) new FileComparer();

    // pass the comparer to the sort method so that the list
    // is sorted by the compare method of comparer.
    fileList.Sort(comparer);
    return fileList;
}
}
}
JavaScript Interview Questions & Answers

- ### What are the possible ways to create objects in JavaScript

  There are many ways to create objects in javascript as below.

  Object constructor: The simplest way to create an empty object is using the Object constructor. Currently this approach is not recommended.

      var object = new Object();

  Object's create method: The create method of Object creates a new object by passing the prototype object as a parameter.

      var object = Object.create(null);

  Object literal syntax: The object literal syntax is equivalent to the create method when it passes null as a parameter.

      var object = {};

  Function constructor: Create any function and apply the new operator to create object instances.

      function Person(name) {
        this.name = name; // (constructor body reconstructed from context)
      }
      var object = new Person("Sudheer");

  Singleton pattern: A Singleton is an object which can only be instantiated one time. Repeated calls to its constructor return the same instance, and this way one can ensure that they don't accidentally create multiple instances.

      var object = new function(){
        this.name = "Sudheer";
      }

- ### What is a prototype chain

  Prototype chaining is used to build new types of objects based on existing ones. It is similar to inheritance in a class based language. The prototype on an object instance is available through Object.getPrototypeOf(object) or the __proto__ property, whereas the prototype on a constructor function is available through object.prototype.

- ### What is the difference between Call, Apply and Bind

  The difference between Call, Apply and Bind can be explained with the examples below.

  Call: The call() method invokes a function with a given this value and arguments provided one by one.

      var employee1 = {firstName: 'John', lastName: 'Rodson'};
      var employee2 = {firstName: 'Jimmy', lastName: 'Baily'};

      function invite(greeting1, greeting2) {
        console.log(greeting1 + ' ' + this.firstName + ' ' + this.lastName + ', ' + greeting2);
      }

      invite.call(employee1, 'Hello', 'How are you?'); // Hello John Rodson, How are you?
      invite.call(employee2, 'Hello', 'How are you?'); // Hello Jimmy Baily, How are you?

  Apply: Invokes the function and allows you to pass in arguments as an array.

      var employee1 = {firstName: 'John', lastName: 'Rodson'};
      var employee2 = {firstName: 'Jimmy', lastName: 'Baily'};

      function invite(greeting1, greeting2) {
        console.log(greeting1 + ' ' + this.firstName + ' ' + this.lastName + ', ' + greeting2);
      }

      invite.apply(employee1, ['Hello', 'How are you?']); // Hello John Rodson, How are you?
      invite.apply(employee2, ['Hello', 'How are you?']); // Hello Jimmy Baily, How are you?

  bind: Returns a new function, allowing you to pass in any number of arguments.

      var employee1 = {firstName: 'John', lastName: 'Rodson'};
      var employee2 = {firstName: 'Jimmy', lastName: 'Baily'};

      function invite(greeting1, greeting2) {
        console.log(greeting1 + ' ' + this.firstName + ' ' + this.lastName + ', ' + greeting2);
      }

      var inviteEmployee1 = invite.bind(employee1);
      var inviteEmployee2 = invite.bind(employee2);
      inviteEmployee1('Hello', 'How are you?'); // Hello John Rodson, How are you?
      inviteEmployee2('Hello', 'How are you?'); // Hello Jimmy Baily, How are you?

  Call and apply are pretty interchangeable. Both execute the current function immediately. You need to decide whether it's easier to send in an array or a comma separated list of arguments. You can remember by treating Call as for comma (separated list) and Apply as for Array. Whereas Bind creates a new function that will have this set to the first parameter passed to bind().

- ### What is the purpose of the array slice method

  The slice() method returns the selected elements in an array as a new array object. It selects the elements starting at the given start argument, and ends at the given optional end argument without including the last element. If you omit the second argument then it selects till the end.

  Some of the examples of this method are:

      let arrayIntegers = [1, 2, 3, 4, 5];
      let arrayIntegers1 = arrayIntegers.slice(0,2); // returns [1,2]
      let arrayIntegers2 = arrayIntegers.slice(2,3); // returns [3]
      let arrayIntegers3 = arrayIntegers.slice(4);   // returns [5]

  Note: The slice method won't mutate the original array; it returns the subset as a new array.

- ### What is the purpose of the array splice method

  The splice() method is used to either add or remove items to/from an array, and then returns the removed items. The first argument specifies the array position for insertion or deletion, whereas the optional second argument indicates the number of elements to be deleted. Each additional argument is added to the array.

  Some of the examples of this method are:

      let arrayIntegersOriginal1 = [1, 2, 3, 4, 5];
      let arrayIntegersOriginal2 = [1, 2, 3, 4, 5];
      let arrayIntegersOriginal3 = [1, 2, 3, 4, 5];

      let arrayIntegers1 = arrayIntegersOriginal1.splice(0,2); // returns [1, 2]; original array: [3, 4, 5]
      let arrayIntegers2 = arrayIntegersOriginal2.splice(3);   // returns [4, 5]; original array: [1, 2, 3]
      let arrayIntegers3 = arrayIntegersOriginal3.splice(3, 1, "a", "b", "c"); // returns [4]; original array: [1, 2, 3, "a", "b", "c", 5]

  Note: The splice method modifies the original array and returns the deleted array.

- ### What is the difference between slice and splice

  Some of the major differences in a tabular form:

  | slice | splice |
  | ----- | ------ |
  | Doesn't modify the original array (immutable) | Modifies the original array (mutable) |
  | Returns a subset of the original array | Returns the deleted elements as an array |
  | Used to pick elements from an array | Used to insert or delete elements to/from an array |

- ### How do you compare Object and Map

  Objects are similar to Maps in that both let you set keys to values, retrieve those values, delete keys, and detect whether something is stored at a key. Due to this reason, Objects have been used as Maps historically. But there are important differences that make using a Map preferable in certain cases.

  - The keys of an Object are Strings and Symbols, whereas they can be any value for a Map, including functions, objects, and any primitive.
  - The keys in Map are ordered while keys added to Object are not.
  Thus, when iterating over it, a Map object returns keys in order of insertion.

  - You can get the size of a Map easily with the size property, while the number of properties in an Object must be determined manually.
  - A Map is an iterable and can thus be directly iterated, whereas iterating over an Object requires obtaining its keys in some fashion and iterating over them.
  - An Object has a prototype, so there are default keys in the map that could collide with your keys if you're not careful. As of ES5 this can be bypassed by using map = Object.create(null), but this is seldom done.
  - A Map may perform better in scenarios involving frequent addition and removal of key pairs.

- ### What is the difference between == and === operators

  JavaScript provides both strict (===, !==) and type-converting (==, !=) equality comparison. The strict operators take the type of variable into consideration, while non-strict operators make type correction/conversion based upon the values of variables. The strict operators follow the below conditions for different types:

  - Two strings are strictly equal when they have the same sequence of characters, same length, and same characters in corresponding positions.
  - Two numbers are strictly equal when they are numerically equal, i.e. having the same number value.

  There are two special cases in this: NaN is not equal to anything, including NaN; and null and undefined are not equal with ===, but equal with ==. i.e, null===undefined --> false but null==undefined --> true.

  Some examples which cover the above cases:

      0 == false   // true
      0 === false  // false
      1 == "1"     // true
      1 === "1"    // false
      null == undefined  // true
      null === undefined // false
      '0' == false  // true
      '0' === false // false
      []==[] or []===[]   // false, refer different objects in memory
      {}=={} or {}==={}   // false, refer different objects in memory

- ### What are lambda or arrow functions

  An arrow function is a shorter syntax for a function expression and does not have its own this, arguments, super, or new.target. These functions are best suited for non-method functions, and they cannot be used as constructors.

- ### What is a first class function

  In Javascript, functions are first class objects. First-class functions means that functions are treated like any other variable: a function can be passed as an argument to other functions, can be returned by another function, and can be assigned as a value to a variable.

      // example of a function assigned to a variable and passed around
      const handler = () => console.log('This is a click handler function');
      document.addEventListener('click', handler);

- ### What is a first order function

  A first-order function is a function that doesn't accept another function as an argument and doesn't return a function as its return value.

      const firstOrder = () => console.log('I am a first order function!');

- ### What is a higher order function

  A higher-order function is a function that accepts another function as an argument or returns a function as a return value.

      const firstOrderFunc = () => console.log('Hello I am a First order function');
      const higherOrder = ReturnFirstOrderFunc => ReturnFirstOrderFunc();
      higherOrder(firstOrderFunc);

- ### What is a unary function

  A unary function (i.e. monadic) is a function that accepts exactly one argument. It stands for a single argument accepted by a function.

      const unaryFunction = a => console.log(a + 10); // Add 10 to the given argument and display the value

- ### What is the currying function

  Currying is the process of taking a function with multiple arguments and turning it into a sequence of functions each with only a single argument. Currying is named after the mathematician Haskell Curry. By applying currying, an n-ary function is turned into a unary function.

  Let's take an example of an n-ary function and how it turns into a currying function:

      const multiArgFunction = (a, b, c) => a + b + c;
      const curryUnaryFunction = a => b => c => a + b + c;
      curryUnaryFunction (1);         // returns a function: b => c => 1 + b + c
      curryUnaryFunction (1) (2);     // returns a function: c => 3 + c
      curryUnaryFunction (1) (2) (3); // returns the number 6

  Curried functions are great to improve code reusability and functional composition.
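The curryUnaryFunction above is written by hand for exactly three arguments. A generic curry helper can be sketched as below; the names curry and add3 are illustrative, and the sketch relies on fn.length (the declared arity) to decide when the wrapped function has enough arguments:

```javascript
// Generic curry helper (illustrative sketch, not from the original answer).
// It keeps collecting arguments until fn.length of them are available,
// then invokes the wrapped function.
const curry = fn => {
  const collect = (...args) =>
    args.length >= fn.length
      ? fn(...args)                             // enough arguments: invoke fn
      : (...rest) => collect(...args, ...rest); // otherwise keep collecting
  return collect;
};

const add3 = (a, b, c) => a + b + c;
const curriedAdd3 = curry(add3);

console.log(curriedAdd3(1)(2)(3)); // 6
console.log(curriedAdd3(1, 2)(3)); // 6
```

Note that this approach breaks down for functions using rest parameters or default values, because those do not count toward fn.length.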
- ### What is a pure function

  A pure function is a function where the return value is only determined by its arguments, without any side effects. i.e, if you call a function with the same arguments 'n' number of times and in 'n' number of places in the application then it will always return the same value.

  Let's take an example to see the difference between pure and impure functions:

      //Impure
      let numberArray = [];
      const impureAddNumber = number => numberArray.push(number);
      //Pure
      const pureAddNumber = number => argNumberArray =>
        argNumberArray.concat([number]);

      //Display the results
      console.log(impureAddNumber(6)); // returns 1
      console.log(numberArray);        // returns [6]
      console.log(pureAddNumber(7)(numberArray)); // returns [6, 7]
      console.log(numberArray);        // returns [6]

  As per the above code snippets, the push function is itself impure: it alters the array and returns a pushed number index which is independent of the parameter value. Whereas concat, on the other hand, takes the array and concatenates it with the other array, producing a whole new array without side effects. Also, the return value is a concatenation of the previous array.

  Remember that pure functions are important as they simplify unit testing without any side effects and no need for dependency injection. They also avoid tight coupling and make it harder to break your application by not having any side effects. These principles come together with the immutability concept of ES6 by giving preference to const over let usage.

- ### What is the purpose of the let keyword

  The let statement declares a block scope local variable. Hence the variables defined in a let statement are limited in scope to the block, statement, or expression in which they are used, whereas variables declared with the var keyword are scoped to the whole enclosing function.

- ### What is the difference between let and var

  You can list out the differences in a tabular format:

  | var | let |
  | --- | --- |
  | Function scoped | Block scoped |
  | Hoisted and initialized with undefined | Hoisted but not initialized (Temporal Dead Zone) |
  | Can be redeclared in the same scope | Cannot be redeclared in the same scope |

- ### What is the reason to choose the name let as a keyword

  Let is a mathematical statement that was adopted by early programming languages like Scheme and Basic. It has been borrowed from dozens of other languages that use let already as a traditional keyword as close to var as possible.

- ### How do you redeclare variables in a switch block without an error

  If you try to redeclare variables in a switch block then it will cause errors because there is only one block. For example, the below code block throws a syntax error as below:

      let counter = 1;
      switch(x) {
        case 0:
          let name;
          break;
        case 1:
          let name; // SyntaxError for redeclaration.
          break;
      }

  To avoid this error, you can create a nested block inside a case clause and create a new block scoped lexical environment.

      let counter = 1;
      switch(x) {
        case 0: {
          let name;
          break;
        }
        case 1: {
          let name; // No SyntaxError for redeclaration.
          break;
        }
      }

- ### What is the Temporal Dead Zone

  The Temporal Dead Zone is a behavior in JavaScript that occurs when declaring a variable with the let and const keywords, but not with var. In ECMAScript 6, accessing a let or const variable before its declaration (within its scope) causes a ReferenceError. The time span when that happens, between the creation of a variable's binding and its declaration, is called the temporal dead zone.

  Let's see this behavior with an example:

      function somemethod() {
        console.log(counter1); // undefined
        console.log(counter2); // ReferenceError
        var counter1 = 1;
        let counter2 = 2;
      }

- ### What is the benefit of using modules

  There are a lot of benefits to using modules in favour of a sprawling, interdependent codebase. Some of the benefits are:

  - Maintainability
  - Reusability
  - Namespacing

- ### What is memoization

  Memoization is a programming technique which attempts to increase a function's performance by caching its previously computed results. Each time a memoized function is called, its parameters are used to index the cache. If the data is present, then it can be returned without executing the entire function. Otherwise the function is executed and then the result is added to the cache.
  Let's take an example of an addition function with memoization:

      const memoizAddition = () => {
        let cache = {};
        return (value) => {
          if (value in cache) {
            console.log('Fetching from cache');
            // Here, cache.value cannot be used, as the property name starts
            // with a number, which is not a valid JavaScript identifier.
            // Hence, it can only be accessed using square bracket notation.
            return cache[value];
          } else {
            console.log('Calculating result');
            let result = value + 20;
            cache[value] = result;
            return result;
          }
        }
      }

      // returned function from memoizAddition
      const addition = memoizAddition();
      console.log(addition(20)); // output: 40 calculated
      console.log(addition(20)); // output: 40 cached

- ### What is Hoisting

  Hoisting is a JavaScript mechanism where variables and function declarations are moved to the top of their scope before code execution. Remember that JavaScript only hoists declarations, not initialisations.

  Let's take a simple example of variable hoisting:

      console.log(message); // output : undefined
      var message = 'The variable Has been hoisted';

  The above code looks to the interpreter as below:

      var message;
      console.log(message);
      message = 'The variable Has been hoisted';

- ### What are classes in ES6

  In ES6, Javascript classes are primarily syntactic sugar over JavaScript's existing prototype-based inheritance.

  For example, the prototype based inheritance written with a function expression is as below:

      function Bike(model, color) {
        this.model = model;
        this.color = color;
      }

      Bike.prototype.getDetails = function() {
        return this.model + ' bike has ' + this.color + ' color';
      };

  Whereas an ES6 class can be defined as an alternative:

      class Bike {
        constructor(color, model) {
          this.color = color;
          this.model = model;
        }

        getDetails() {
          return this.model + ' bike has ' + this.color + ' color';
        }
      }

- ### What are closures

  A closure is the combination of a function and the lexical environment within which that function was declared. i.e, it is an inner function that has access to the outer or enclosing function's variables.

  Let's take an example of the closure concept:

      function Welcome(name) {
        var greetingInfo = function(message) {
          console.log(message + ' ' + name);
        }
        // return the inner function itself (not its result),
        // so it can be called later
        return greetingInfo;
      }
      var myFunction = Welcome('John');
      myFunction('Welcome');    // Output: Welcome John
      myFunction('Hello Mr.');  // Output: Hello Mr. John

  As per the above code, the inner function (greetingInfo) has access to the variables in the outer function scope (Welcome) even after the outer function has returned.

- ### What are modules

  Modules refer to small units of independent, reusable code and also act as the foundation of many JavaScript design patterns. Most JavaScript modules export an object literal, a function, or a constructor.

- ### Why do you need modules

  Below is the list of benefits of using modules in the javascript ecosystem:

  - Maintainability
  - Reusability
  - Namespacing

- ### What is scope in javascript

  Scope is the accessibility of variables, functions, and objects in some particular part of your code during runtime. In other words, scope determines the visibility of variables and other resources in areas of your code.

- ### What is a service worker

  A Service worker is basically a script (JavaScript file) that runs in the background, separate from a web page, and provides features that don't need a web page or user interaction.
  Some of the major features of service workers are rich offline experiences (offline-first web application development), periodic background syncs, push notifications, intercepting and handling network requests, and programmatically managing a cache of responses.

- ### How do you manipulate DOM using a service worker

  A service worker can't access the DOM directly. But it can communicate with the pages it controls by responding to messages sent via the postMessage interface, and those pages can manipulate the DOM.

- ### How do you reuse information across service worker restarts

  The problem with a service worker is that it gets terminated when not in use and restarted when it's next needed, so you cannot rely on global state within a service worker's onfetch and onmessage handlers. In this case, service workers have access to the IndexedDB API in order to persist and reuse data across restarts.

- ### What is IndexedDB

  IndexedDB is a low-level API for client-side storage of larger amounts of structured data, including files/blobs. This API uses indexes to enable high-performance searches of this data.

- ### What is web storage

  Web storage is an API that provides a mechanism by which browsers can store key/value pairs locally within the user's browser, in a much more intuitive fashion than using cookies. Web storage provides two mechanisms for storing data on the client:

  - Local storage: It stores data for the current origin with no expiration date.
  - Session storage: It stores data for one session and the data is lost when the browser tab is closed.

- ### What is a post message

  Post message is a method that enables cross-origin communication between Window objects (i.e, between a page and a pop-up that it spawned, or between a page and an iframe embedded within it). Generally, scripts on different pages are allowed to access each other if and only if the pages follow the same-origin policy (i.e, pages share the same protocol, port number, and host).
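On the receiving side of postMessage, the handler should always check the sender's origin before trusting the data. The sketch below is illustrative (acceptMessage and the origin strings are assumed names, not part of the original answer); the helper is kept pure so the browser wiring stays a one-liner:

```javascript
// Validate a message event's origin before accepting its payload
// (illustrative sketch; acceptMessage is not a browser API).
const acceptMessage = (event, trustedOrigin) =>
  event.origin === trustedOrigin ? event.data : null;

// In a browser you would wire it up like this:
// window.addEventListener('message', e => {
//   const data = acceptMessage(e, 'https://example.com');
//   if (data !== null) console.log('received:', data);
// });
```

Keeping the origin check in a small pure function makes it easy to unit test with plain objects in place of real MessageEvent instances.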
- ### What is a Cookie

A cookie is a piece of data that is stored on your computer to be accessed by your browser. Cookies are saved as key/value pairs. For example, you can create a cookie named username as below,

```javascript
document.cookie = "username=John";
```

- ### Why do you need a Cookie

Cookies are used to remember information about the user profile (such as username). It basically involves two steps,

- When a user visits a web page, the user profile can be stored in a cookie.
- The next time the user visits the page, the cookie remembers the user profile.

- ### What are the options in a cookie

There are a few options available for a cookie,

- By default, the cookie is deleted when the browser is closed, but you can change this behavior by setting an expiry date (in UTC time),

```javascript
document.cookie = "username=John; expires=Sat, 8 Jun 2019 12:00:00 UTC";
```

- By default, the cookie belongs to the current page. But you can tell the browser what path the cookie belongs to using a path parameter,

```javascript
document.cookie = "username=John; path=/services";
```

- ### How do you delete a cookie

You can delete a cookie by setting the expiry date to a date in the past. You don't need to specify a cookie value in this case. For example, you can delete a username cookie on the current page as below,

```javascript
document.cookie = "username=; expires=Fri, 07 Jun 2019 00:00:00 UTC; path=/;";
```

Note: You should define the cookie path option to ensure that you delete the right cookie. Some browsers don't allow you to delete a cookie unless you specify a path parameter.

- ### What are the differences between cookie, local storage and session storage

Below are some of the differences between cookie, local storage and session storage,

| Feature | Cookie | Local storage | Session storage |
| ------- | ------ | ------------- | --------------- |
| Accessed on | Both server-side & client-side | Client-side only | Client-side only |
| Lifetime | As configured using the Expires option | Until deleted | Until the tab is closed |
| SSL support | Supported | Not supported | Not supported |
| Maximum data size | 4KB | 5MB | 5MB |

- ### What is the main difference between localStorage and sessionStorage

localStorage is the same as sessionStorage but it persists the data even when the browser is closed and reopened (i.e, it has no expiration time), whereas in sessionStorage the data gets cleared when the page session ends.
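Because localStorage and sessionStorage are browser-only APIs, the shared Storage semantics can be sketched outside the browser with a minimal in-memory mock (the MemoryStorage class below is a hypothetical stand-in for illustration, not a real API):

```javascript
// Minimal in-memory stand-in for the Web Storage API surface.
class MemoryStorage {
  constructor() { this.store = {}; }
  setItem(key, value) { this.store[key] = String(value); } // values are always stored as strings
  getItem(key) { return key in this.store ? this.store[key] : null; }
  removeItem(key) { delete this.store[key]; }
  clear() { this.store = {}; }
}

const local = new MemoryStorage();   // a real localStorage would survive browser restarts
const session = new MemoryStorage(); // a real sessionStorage is cleared when the tab closes

local.setItem('theme', 'dark');
session.setItem('step', 3); // note: stored as the string "3"
```

In the real APIs the only difference between the two is lifetime; the method surface shown here is identical for both.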
- ### How do you access web storage

The Window object implements the WindowLocalStorage and WindowSessionStorage objects, which have localStorage (window.localStorage) and sessionStorage (window.sessionStorage) properties respectively. These properties create an instance of the Storage object, through which data items can be set, retrieved and removed for a specific domain and storage type (session or local). For example, you can read and write on a local storage object as below,

```javascript
localStorage.setItem('logo', document.getElementById('logo').value);
localStorage.getItem('logo');
```

- ### What are the methods available on session storage

Session storage provides methods for reading, writing and clearing the session data,

```javascript
// Save data to sessionStorage
sessionStorage.setItem('key', 'value');

// Get saved data from sessionStorage
let data = sessionStorage.getItem('key');

// Remove saved data from sessionStorage
sessionStorage.removeItem('key');

// Remove all saved data from sessionStorage
sessionStorage.clear();
```

- ### What is a storage event and its event handler

The StorageEvent is an event that fires when a storage area has been changed in the context of another document, whereas the onstorage property is an EventHandler for processing storage events. The syntax would be as below,

```javascript
window.onstorage = functionRef;
```

Let's take an example usage of the onstorage event handler which logs the storage key and its values,

```javascript
window.onstorage = function(e) {
  console.log('The ' + e.key + ' key has been changed from ' + e.oldValue + ' to ' + e.newValue + '.');
};
```

- ### Why do you need web storage

Web storage is more secure, and large amounts of data can be stored locally without affecting website performance. Also, the information is never transferred to the server. Hence this is a more recommended approach than cookies.
- ### How do you check web storage browser support

You need to check browser support for localStorage and sessionStorage before using web storage,

```javascript
if (typeof(Storage) !== "undefined") {
  // Code for localStorage/sessionStorage.
} else {
  // Sorry! No Web Storage support..
}
```

- ### How do you check web workers browser support

You need to check browser support for web workers before using them,

```javascript
if (typeof(Worker) !== "undefined") {
  // code for Web worker support.
} else {
  // Sorry! No Web Worker support..
}
```

- ### Give an example of a web worker

You need to follow the below steps to start using web workers, with a counting example,

- Create a Web Worker File: You need to write a script to increment the count value. Let's name it counter.js,

```javascript
let i = 0;

function timedCount() {
  i = i + 1;
  postMessage(i);
  setTimeout("timedCount()", 500);
}

timedCount();
```

Here the postMessage() method is used to post a message back to the HTML page.

- Create a Web Worker Object: You can create a web worker object after checking for browser support. Let's name this file web_worker_example.js,

```javascript
if (typeof(w) == "undefined") {
  w = new Worker("counter.js");
}
```

and we can receive messages from the web worker,

```javascript
w.onmessage = function(event) {
  document.getElementById("message").innerHTML = event.data;
};
```

- Terminate a Web Worker: Web workers will continue to listen for messages (even after the external script is finished) until terminated. You can use the terminate() method to stop listening for messages.
```javascript
w.terminate();
```

- Reuse the Web Worker: If you set the worker variable to undefined you can reuse the code,

```javascript
w = undefined;
```

- ### What are the restrictions of web workers on DOM

Web workers don't have access to the below javascript objects, since they are defined in an external file with its own global context,

- Window object
- Document object
- Parent object

- ### What is a promise

A promise is an object that may produce a single value some time in the future, with either a resolved value or a reason that it's not resolved (for example, a network error). It will be in one of 3 possible states: fulfilled, rejected, or pending.

The syntax of promise creation looks like below,

```javascript
const promise = new Promise(function(resolve, reject) {
  // promise description
});
```

The usage of a promise would be as below,

```javascript
const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve("I'm a Promise!");
  }, 5000);
});

promise.then(value => console.log(value));
```

- ### Why do you need a promise

Promises are used to handle asynchronous operations. They provide an alternative approach to callbacks by reducing callback hell and producing cleaner code.

- ### What are the three states of promise

Promises have three states,

- Pending: This is the initial state of the promise before an operation begins.
- Fulfilled: This state indicates that the specified operation was completed.
- Rejected: This state indicates that the operation did not complete. In this case an error value is available as the rejection reason.

- ### What is a callback function

A callback function is a function passed into another function as an argument. This function is invoked inside the outer function to complete an action.
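The next example relies on the browser-only prompt() function; the same callback idea can be sketched in an environment-independent way (the names here are illustrative):

```javascript
// A callback is just a function handed to another function,
// which decides when (and with what arguments) to invoke it.
function outerFunction(name, callback) {
  const greeting = 'Hello ' + name;
  callback(greeting); // invoked inside the outer function
}

let received;
outerFunction('John', function callbackFunction(message) {
  received = message; // runs synchronously here
});
```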
Let's take a simple example of how to use a callback function,

```javascript
function callbackFunction(name) {
  console.log('Hello ' + name);
}

function outerFunction(callback) {
  let name = prompt('Please enter your name.');
  callback(name);
}

outerFunction(callbackFunction);
```

- ### Why do we need callbacks

Callbacks are needed because javascript is an event driven language. That means instead of waiting for a response, javascript keeps executing while listening for other events. Let's take an example with the first function invoking an API call (simulated by setTimeout) and the next function which logs a message,

```javascript
function firstFunction() {
  // Simulate a code delay
  setTimeout(function() {
    console.log('First function called');
  }, 1000);
}

function secondFunction() {
  console.log('Second function called');
}

firstFunction();
secondFunction();
```

Output,

```
Second function called
First function called
```

As observed from the output, javascript didn't wait for the response of the first function, and the remaining code block got executed. So callbacks are used in a way to make sure that certain code doesn't execute until other code finishes execution.

- ### What is a callback hell

Callback hell is an anti-pattern with multiple nested callbacks, which makes code hard to read and debug when dealing with asynchronous logic. Callback hell looks like below,

```javascript
async1(function() {
  async2(function() {
    async3(function() {
      async4(function() {
        ....
      });
    });
  });
});
```

- ### What are server-sent events

Server-sent events (SSE) is a server push technology enabling a browser to receive automatic updates from a server via an HTTP connection without resorting to polling. It is a one-way communication channel - events flow from server to client only. This has been used in Facebook/Twitter updates, stock price updates, news feeds etc.

- ### How do you receive server-sent event notifications

The EventSource object is used to receive server-sent event notifications.
For example, you can receive messages from the server as below,

```javascript
if (typeof(EventSource) !== "undefined") {
  var source = new EventSource("sse_generator.js");
  source.onmessage = function(event) {
    document.getElementById("output").innerHTML += event.data + "<br>";
  };
}
```

- ### How do you check browser support for server-sent events

You can check browser support for server-sent events before using them as below,

```javascript
if (typeof(EventSource) !== "undefined") {
  // Server-sent events supported. Let's have some code here!
} else {
  // No server-sent events supported
}
```

- ### What are the events available for server sent events

Below is the list of events available for server sent events,

| Event | Description |
| ----- | ----------- |
| onopen | It is used when a connection to the server is opened |
| onmessage | This event is used when a message is received |
| onerror | It happens when an error occurs |

- ### What are the main rules of promise

A promise must follow a specific set of rules,

- A promise is an object that supplies a standard-compliant .then() method
- A pending promise may transition into either the fulfilled or rejected state
- A fulfilled or rejected promise is settled and must not transition into any other state
- Once a promise is settled, its value must not change

- ### What is callback in callback

You can nest one callback inside another callback to execute actions sequentially, one by one. This is known as callbacks in callbacks.

```javascript
loadScript('/script1.js', function(script) {
  console.log('first script is loaded');

  loadScript('/script2.js', function(script) {
    console.log('second script is loaded');

    loadScript('/script3.js', function(script) {
      console.log('third script is loaded');
      // after all scripts are loaded
    });
  });
});
```

- ### What is promise chaining

The process of executing a sequence of asynchronous tasks one after another using promises is known as promise chaining.
Let's take an example of promise chaining for calculating the final result,

```javascript
new Promise(function(resolve, reject) {
  setTimeout(() => resolve(1), 1000);
}).then(function(result) {
  console.log(result); // 1
  return result * 2;
}).then(function(result) {
  console.log(result); // 2
  return result * 3;
}).then(function(result) {
  console.log(result); // 6
  return result * 4;
});
```

In the above handlers, the result is passed to the chain of .then() handlers with the below work flow,

- The initial promise resolves in 1 second,
- After that the .then handler is called, logging the result (1) and returning a promise with the value of result * 2.
- After that the value is passed to the next .then handler, logging the result (2) and returning a promise with result * 3.
- Finally the value is passed to the last .then handler, logging the result (6) and returning a promise with result * 4.

- ### What is promise.all

Promise.all is a method that takes an array of promises as an input (an iterable) and returns a single promise that resolves when all the input promises have resolved, or rejects as soon as any one of them rejects. For example, the syntax of the Promise.all method is below,

```javascript
Promise.all([Promise1, Promise2, Promise3])
  .then(result => {
    console.log(result);
  })
  .catch(error => console.log(`Error in promises ${error}`));
```

Note: Remember that the order of the promises (in the output result) is maintained as per the input order.

- ### What is the purpose of the race method in promise

The Promise.race() method returns the promise instance which is first resolved or rejected.
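The Promise.all syntax above can be made concrete with a small runnable sketch (the values and delays are illustrative):

```javascript
// Promise.all resolves with an array of results in input order,
// even though the promises settle at different times.
const p1 = new Promise(resolve => setTimeout(() => resolve('first'), 30));
const p2 = Promise.resolve('second');
const p3 = new Promise(resolve => setTimeout(() => resolve('third'), 10));

const combined = Promise.all([p1, p2, p3]).then(results => results.join(','));
```

Even though p3 settles before p1, the result array still follows the input order, not the settlement order.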
Let's take an example of the race() method where promise2 is resolved first,

```javascript
var promise1 = new Promise(function(resolve, reject) {
  setTimeout(resolve, 500, 'one');
});
var promise2 = new Promise(function(resolve, reject) {
  setTimeout(resolve, 100, 'two');
});

Promise.race([promise1, promise2]).then(function(value) {
  console.log(value); // "two"
  // Both promises will resolve, but promise2 is faster
});
```

- ### What is a strict mode in javascript

Strict Mode is a feature introduced in ECMAScript 5 that allows you to place a program, or a function, in a "strict" operating context. This way it prevents certain actions from being taken and throws more exceptions. The literal expression "use strict"; instructs the browser to run the javascript code in strict mode.

- ### Why do you need strict mode

Strict mode is useful for writing "secure" JavaScript by turning "bad syntax" into real errors. For example, it eliminates accidentally creating a global variable by throwing an error, and also throws an error for assignment to a non-writable property, a getter-only property, a non-existing property, a non-existing variable, or a non-existing object.

- ### How do you declare strict mode

Strict mode is declared by adding "use strict"; to the beginning of a script or a function. If declared at the beginning of a script, it has global scope.

```javascript
"use strict";
x = 3.14; // This will cause an error because x is not declared
```

and if you declare it inside a function, it has local scope,

```javascript
x = 3.14; // This will not cause an error.
myFunction();

function myFunction() {
  "use strict";
  y = 3.14; // This will cause an error
}
```

- ### What is the purpose of double exclamation

The double exclamation or double negation (!!) ensures the resulting type is a boolean. If the value was falsy (e.g. 0, null, undefined, etc.), it will be false; otherwise, true. For example, you can test an IE version using this expression as below,

```javascript
let isIE8 = false;
isIE8 = !!navigator.userAgent.match(/MSIE 8.0/);
console.log(isIE8); // returns true or false
```

If you don't use this expression then it returns the original value,

```javascript
console.log(navigator.userAgent.match(/MSIE 8.0/)); // returns either an Array or null
```

Note: The expression !! is not an operator; it is just the ! operator applied twice.

- ### What is the purpose of the delete operator

The delete operator is used to delete a property as well as its value.

```javascript
var user = { name: "John", age: 20 };
delete user.age;

console.log(user); // {name: "John"}
```

- ### What is the typeof operator

You can use the JavaScript typeof operator to find the type of a JavaScript variable. It returns the type of a variable or an expression.

```javascript
typeof "John Abraham" // Returns "string"
typeof (1 + 2)        // Returns "number"
```

- ### What is undefined property

The undefined property indicates that a variable has not been assigned a value, or has not been declared at all. The type of the undefined value is undefined too.

```javascript
var user; // Value is undefined, type is undefined
console.log(typeof(user)); // undefined
```

Any variable can be emptied by setting its value to undefined.

```javascript
user = undefined;
```

- ### What is null value

The value null represents the intentional absence of any object value. It is one of JavaScript's primitive values. The type of the null value is object. You can empty a variable by setting its value to null.

```javascript
var user = null;
console.log(typeof(user)); // object
```

- ### What is the difference between null and undefined

Below are the main differences between null and undefined,

| Null | Undefined |
| ---- | --------- |
| It is an assignment value which indicates that a variable points to no object | It is not an assignment value; the variable has been declared but not yet assigned a value |
| Type of null is object | Type of undefined is undefined |
| Indicates the absence of a value for a variable | Indicates the absence of the variable itself |
| Converted to zero (0) while performing primitive operations | Converted to NaN while performing primitive operations |

- ### What is eval

The eval() function evaluates JavaScript code represented as a string. The string can be a JavaScript expression, variable, statement, or sequence of statements.

```javascript
console.log(eval('1 + 2')); // 3
```

- ### What is the difference between window and document

Below are the main differences between window and document,

| Window | Document |
| ------ | -------- |
| It is the root level element in any web page | It is the direct child of the window object, also known as the Document Object Model (DOM) |
| By default, the window object is available implicitly in the page | You can access it via window.document or document |
| It has methods like alert() and confirm(), and properties like document and location | It provides methods like getElementById(), getElementsByTagName(), createElement() etc |

- ### How do you access history in javascript

The window.history object contains the browser's history.
You can load previous and next URLs in the history using the back() and forward() methods.

```javascript
function goBack() {
  window.history.back();
}

function goForward() {
  window.history.forward();
}
```

Note: You can also access history without the window prefix.

- ### What are the javascript data types

Below is the list of javascript data types available,

- Number
- String
- Boolean
- Object
- Undefined
- Null
- Symbol
- BigInt

- ### What is isNaN

The isNaN() function is used to determine whether a value is an illegal number (Not-a-Number) or not. i.e, this function returns true if the value equates to NaN; otherwise it returns false.

```javascript
isNaN('Hello'); // true
isNaN('100');   // false
```

- ### What are the differences between undeclared and undefined variables

Below are the major differences between undeclared and undefined variables,

| Undeclared | Undefined |
| ---------- | --------- |
| These variables do not exist in a program and are not declared | These variables are declared in the program but have not been assigned any value |
| If you try to read the value of an undeclared variable, a runtime error is encountered | If you try to read the value of an undefined variable, an undefined value is returned |

- ### What are global variables

Global variables are those that are available throughout the length of the code without any scope. The var keyword is used to declare a local variable, but if you omit it then the variable becomes global.

```javascript
msg = "Hello"; // var is missing, it becomes a global variable
```

- ### What are the problems with global variables

The problem with global variables is the conflict of variable names between local and global scope. It is also difficult to debug and test code that relies on global variables.

- ### What is NaN property

The NaN property is a global property that represents the "Not-a-Number" value. i.e, it indicates that a value is not a legal number. It is very rare to use NaN in a program, but it can appear as a return value in a few cases,

```javascript
Math.sqrt(-1);
parseInt("Hello");
```

- ### What is the purpose of isFinite function

The isFinite() function is used to determine whether a number is a finite, legal number. It returns false if the value is +infinity, -infinity, or NaN (Not-a-Number); otherwise it returns true.
```javascript
isFinite(Infinity);  // false
isFinite(NaN);       // false
isFinite(-Infinity); // false

isFinite(100);       // true
```

- ### What is an event flow

Event flow is the order in which an event is received on the web page. When you click an element that is nested in various other elements, before your click actually reaches its destination, or target element, it must trigger the click event for each of its parent elements first, starting at the top with the global window object. There are two ways of event flow,

- Top to Bottom (Event Capturing)
- Bottom to Top (Event Bubbling)

- ### What is event bubbling

Event bubbling is a type of event propagation where the event first triggers on the innermost target element, and then successively triggers on the ancestors (parents) of the target element in the same nesting hierarchy till it reaches the outermost DOM element.

- ### What is event capturing

Event capturing is a type of event propagation where the event is first captured by the outermost element, and then successively triggers on the descendants (children) of the target element in the same nesting hierarchy till it reaches the innermost DOM element.

- ### How do you submit a form using JavaScript

You can submit a form using JavaScript with document.forms[0].submit(). All the form input's information is submitted using the onsubmit event handler.

```javascript
function submit() {
  document.forms[0].submit();
}
```

- ### How do you find operating system details

The window.navigator object contains information about the visitor's browser and OS details. Some of the OS properties are available under the platform property,

```javascript
console.log(navigator.platform);
```

- ### What is the difference between document load and DOMContentLoaded events

The DOMContentLoaded event is fired when the initial HTML document has been completely loaded and parsed, without waiting for assets (stylesheets, images, and subframes) to finish loading.
Whereas the load event is fired when the whole page has loaded, including all dependent resources (stylesheets, images).

- ### What is the difference between native, host and user objects

Native objects are objects that are part of the JavaScript language defined by the ECMAScript specification. For example, String, Math, RegExp, Object, Function etc are core objects defined in the ECMAScript spec.

Host objects are objects provided by the browser or runtime environment (Node). For example, window, XmlHttpRequest, DOM nodes etc are considered host objects.

User objects are objects defined in the javascript code. For example, user objects created for profile information.

- ### What are the tools or techniques used for debugging JavaScript code

You can use the below tools or techniques for debugging javascript,

- Chrome Devtools
- debugger statement
- Good old console.log statement

- ### What are the pros and cons of promises over callbacks

Below is the list of pros and cons of promises over callbacks,

Pros:

- It avoids callback hell, which is unreadable
- Easy to write sequential asynchronous code with .then()
- Easy to write parallel asynchronous code with Promise.all()
- Solves some of the common problems of callbacks (calling the callback too late, too early, too many times, or swallowing errors/exceptions)

Cons:

- It makes the code a little more complex
- You need to load a polyfill if ES6 is not supported

- ### What is the difference between an attribute and a property

Attributes are defined in the HTML markup whereas properties are defined on the DOM.
For example, the below HTML element has 2 attributes, type and value,

```html
<input type="text" value="Good morning">
```

You can retrieve the attribute value as below,

```javascript
const input = document.querySelector('input');
console.log(input.getAttribute('value')); // Good morning
console.log(input.value);                 // Good morning
```

And after you change the value of the text field to "Good evening", it becomes like,

```javascript
console.log(input.getAttribute('value')); // Good morning
console.log(input.value);                 // Good evening
```

- ### What is same-origin policy

The same-origin policy is a policy that prevents JavaScript from making requests across domain boundaries. An origin is defined as a combination of URI scheme, hostname, and port number. If you enable this policy then it prevents a malicious script on one page from obtaining access to sensitive data on another web page through the Document Object Model (DOM).

- ### What is the purpose of void 0

void(0) is used to prevent the page from refreshing. This is helpful for eliminating an unwanted side-effect, because it returns the undefined primitive value. It is commonly used in HTML documents that use href="javascript:void(0);" within an element. i.e, when you click a link, the browser loads a new page or refreshes the same page, but this behavior will be prevented using this expression. For example, the below link shows a message without reloading the page,

```html
<a href="javascript:void(0);" onclick="alert('Well done!')">Click Me!</a>
```

- ### Is JavaScript a compiled or interpreted language

JavaScript is an interpreted language, not a compiled language. An interpreter in the browser reads over the JavaScript code, interprets each line, and runs it. Nowadays modern browsers use a technology known as Just-In-Time (JIT) compilation, which compiles JavaScript to executable bytecode just as it is about to run.

- ### Is JavaScript a case-sensitive language

Yes, JavaScript is a case sensitive language.
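Case sensitivity can be sketched with two identifiers that differ only in letter case (the names are illustrative):

```javascript
// myVariable and myvariable are two completely independent bindings.
const myVariable = 'camelCase';
const myvariable = 'lowercase';
const areDistinct = myVariable !== myvariable;
```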
The language keywords, variables, function & object names, and any other identifiers must always be typed with a consistent capitalization of letters.

- ### Is there any relation between Java and JavaScript

No, they are two entirely different programming languages and have nothing to do with each other. But both of them are Object Oriented Programming languages and, like many other languages, they follow similar syntax for basic features (if, else, for, switch, break, continue etc).

- ### What are events

Events are "things" that happen to HTML elements. When JavaScript is used in HTML pages, JavaScript can react on these events. Some examples of HTML events are,

- Web page has finished loading
- Input field was changed
- Button was clicked

Let's describe the behavior of the click event for a button element,

```html
<!doctype html>
<html>
  <head>
    <script>
      function greeting() {
        alert('Hello! Good morning');
      }
    </script>
  </head>
  <body>
    <button type="button" onclick="greeting()">Click me</button>
  </body>
</html>
```

- ### Who created javascript

JavaScript was created by Brendan Eich in 1995 during his time at Netscape Communications. It was initially developed under the name Mocha, but the language was officially called LiveScript when it first shipped in beta releases of Netscape.

- ### What is the use of preventDefault method

The preventDefault() method cancels the event if it is cancelable, meaning that the default action or behaviour that belongs to the event will not occur. For example, preventing form submission when clicking on a submit button, or preventing a page navigation when clicking on a hyperlink, are some common use cases.

```javascript
document.getElementById("link").addEventListener("click", function(event) {
  event.preventDefault();
});
```

Note: Remember that not all events are cancelable.

- ### What is the use of stopPropagation method

The stopPropagation method is used to stop the event from bubbling up the event chain.
For example, the below nested divs with the stopPropagation method prevent further event propagation when clicking on the nested div (Div1),

```html
<p>Click DIV1 Element</p>
<div onclick="secondFunc()">DIV 2
  <div onclick="firstFunc(event)">DIV 1</div>
</div>

<script>
  function firstFunc(event) {
    alert("DIV 1");
    event.stopPropagation();
  }

  function secondFunc() {
    alert("DIV 2");
  }
</script>
```

- ### What are the steps involved in return false usage

The return false statement in event handlers performs the below steps,

- First it stops the browser's default action or behaviour.
- It prevents the event from propagating through the DOM.
- It stops callback execution and returns immediately when called.

- ### What is BOM

The Browser Object Model (BOM) allows JavaScript to "talk to" the browser. It consists of the navigator, history, screen, location and document objects, which are children of the window object. The Browser Object Model is not standardized and can change between different browsers.

- ### What is the use of setTimeout

The setTimeout() method is used to call a function or evaluate an expression after a specified number of milliseconds. For example, let's log a message after 2 seconds using the setTimeout method,

```javascript
setTimeout(function() {
  console.log("Good morning");
}, 2000);
```

- ### What is the use of setInterval

The setInterval() method is used to call a function or evaluate an expression at specified intervals (in milliseconds). For example, let's log a message every 2 seconds using the setInterval method,

```javascript
setInterval(function() {
  console.log("Good morning");
}, 2000);
```

- ### Why is JavaScript treated as Single threaded

JavaScript is a single-threaded language, because the language specification does not allow the programmer to write code in which the interpreter can run parts of it in parallel in multiple threads or processes, whereas languages like Java, Go, and C++ allow multi-threaded and multi-process programs.
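The single-threaded, run-to-completion behaviour can be sketched: even a zero-delay timer callback only runs after the current synchronous code has finished:

```javascript
const order = [];

// Queued on the task queue; it cannot interrupt the currently running code.
setTimeout(() => order.push('timer'), 0);

order.push('sync-1');
order.push('sync-2');
// The timer callback has still not run at this point.
```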
- ### What is an event delegation

Event delegation is a technique for listening to events where you delegate a parent element as the listener for all of the events that happen inside it. For example, if you wanted to detect field changes inside a specific form, you can use the event delegation technique,

```javascript
var form = document.querySelector('#registration-form');

// Listen for changes to fields inside the form
form.addEventListener('input', function (event) {
  // Log the field that was changed
  console.log(event.target);
}, false);
```

- ### What is ECMAScript

ECMAScript is the scripting language that forms the basis of JavaScript. ECMAScript is standardized by the ECMA International standards organization in the ECMA-262 and ECMA-402 specifications. The first edition of ECMAScript was released in 1997.

- ### What is JSON

JSON (JavaScript Object Notation) is a lightweight format that is used for data interchange. It is based on a subset of the JavaScript language, in the way objects are built in JavaScript.

- ### What are the syntax rules of JSON

Below is the list of syntax rules of JSON,

- The data is in name/value pairs
- The data is separated by commas
- Curly braces hold objects
- Square brackets hold arrays

- ### What is the purpose of JSON stringify

When sending data to a web server, the data has to be in string format. You can achieve this by converting a JSON object into a string using the stringify() method.

```javascript
var userJSON = { name: 'John', age: 31 };
var userString = JSON.stringify(userJSON);
console.log(userString); // "{"name":"John","age":31}"
```

- ### How do you parse JSON string

When receiving data from a web server, the data is always in string format. You can convert this string value to a javascript object using the parse() method.

```javascript
var userString = '{"name":"John","age":31}';
var userJSON = JSON.parse(userString);
console.log(userJSON); // {name: "John", age: 31}
```

- ### Why do you need JSON

When exchanging data between a browser and a server, the data can only be text.
Since JSON is text only, it can easily be sent to and from a server, and used as a data format by any programming language.

- ### What are PWAs

Progressive web applications (PWAs) are a type of mobile app delivered through the web, built using common web technologies including HTML, CSS and JavaScript. These PWAs are deployed to servers, accessible through URLs, and indexed by search engines.

- ### What is the purpose of clearTimeout method

The clearTimeout() function is used in javascript to clear a timeout which has been set by the setTimeout() function before it fires. i.e, the return value of the setTimeout() function is stored in a variable and is passed into the clearTimeout() function to clear the timer. For example, the below setTimeout method is used to display a message after 3 seconds. This timeout can be cleared by the clearTimeout() method.

```html
<script>
  var msg;
  function greeting() {
    alert('Good morning');
  }
  function start() {
    msg = setTimeout(greeting, 3000);
  }
  function stop() {
    clearTimeout(msg);
  }
</script>
```

- ### What is the purpose of clearInterval method

The clearInterval() function is used in javascript to clear an interval which has been set by the setInterval() function. i.e, the return value of the setInterval() function is stored in a variable and is passed into the clearInterval() function to clear the interval. For example, the below setInterval method is used to display a message every 3 seconds. This interval can be cleared by the clearInterval() method.

```html
<script>
  var msg;
  function greeting() {
    alert('Good morning');
  }
  function start() {
    msg = setInterval(greeting, 3000);
  }
  function stop() {
    clearInterval(msg);
  }
</script>
```

- ### How do you redirect new page in javascript

In vanilla javascript, you can redirect to a new page using the location property of the window object.
The syntax would be as follows,

```javascript
function redirect() {
  window.location.href = 'newPage.html';
}
```

- ### How do you check whether a string contains a substring

There are 3 possible ways to check whether a string contains a substring or not,

- Using includes: ES6 provides the String.prototype.includes method to test whether a string contains a substring,

```javascript
var mainString = "hello", subString = "hell";
mainString.includes(subString);
```

- Using indexOf: In an ES5 or older environment, you can use String.prototype.indexOf, which returns the index of a substring. If the index value is not equal to -1 then it means the substring exists in the main string,

```javascript
var mainString = "hello", subString = "hell";
mainString.indexOf(subString) !== -1;
```

- Using RegEx: The advanced solution is using a regular expression's test method (RegExp.test), which allows testing against regular expressions,

```javascript
var mainString = "hello", regex = /hell/;
regex.test(mainString);
```

- ### How do you validate an email in javascript

You can validate an email in javascript using regular expressions. It is recommended to do validations on the server side instead of the client side, because javascript can be disabled on the client side.

```javascript
function validateEmail(email) {
  var re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(String(email).toLowerCase());
}
```

The above regular expression accepts unicode characters.

- ### How do you get the current url with javascript

You can use the window.location.href expression to get the current url path, and you can use the same expression for updating the URL too. You can also use document.URL for read-only purposes, but this solution has issues in Firefox.
console.log('location.href', window.location.href); // Returns full URL

What are the various url properties of location object

The below Location object properties can be used to access URL components of the page,

- href - The entire URL
- protocol - The protocol of the URL
- host - The hostname and port of the URL
- hostname - The hostname of the URL
- port - The port number in the URL
- pathname - The path name of the URL
- search - The query portion of the URL
- hash - The anchor portion of the URL

How do you get query string values in javascript

You can use URLSearchParams to get query string values in javascript. Let's see an example to get the client code value from the URL query string,

const urlParams = new URLSearchParams(window.location.search);
const clientCode = urlParams.get('clientCode');

How do you check if a key exists in an object

You can check whether a key exists in an object or not using three approaches,

- Using in operator: You can use the in operator to check whether a key exists in an object or not

"key" in obj

and if you want to check if a key doesn't exist, remember to use parenthesis,

!("key" in obj)

- Using hasOwnProperty method: You can use hasOwnProperty to specifically test for properties of the object instance (and not inherited properties)

obj.hasOwnProperty("key") // true

- Using undefined comparison: If you access a non-existing property of an object, the result is undefined. Let's compare the properties against undefined to determine the existence of the property.

const user = {
  name: 'John'
};
console.log(user.name !== undefined); // true
console.log(user.nickName !== undefined); // false

How do you loop through or enumerate javascript object

You can use the for-in loop to loop through a javascript object. You can also make sure that the key you get is an actual property of the object, and doesn't come from the prototype, using the hasOwnProperty method.
var object = {
  "k1": "value1",
  "k2": "value2",
  "k3": "value3"
};

for (var key in object) {
  if (object.hasOwnProperty(key)) {
    console.log(key + " -> " + object[key]); // k1 -> value1 ...
  }
}

How do you test for an empty object

There are different solutions based on ECMAScript versions

- Using Object entries (ES2017+): You can use the object entries length along with the constructor type.

Object.entries(obj).length === 0 && obj.constructor === Object // Since date object length is 0, you need to check the constructor as well

- Using Object keys (ES5+): You can use the object keys length along with the constructor type.

Object.keys(obj).length === 0 && obj.constructor === Object // Since date object length is 0, you need to check the constructor as well

- Using for-in with hasOwnProperty (Pre-ES5): You can use a for-in loop along with hasOwnProperty.

function isEmpty(obj) {
  for (var prop in obj) {
    if (obj.hasOwnProperty(prop)) {
      return false;
    }
  }
  return JSON.stringify(obj) === JSON.stringify({});
}

What is an arguments object

The arguments object is an Array-like object accessible inside functions that contains the values of the arguments passed to that function. For example, let's see how to use the arguments object inside a sum function,

function sum() {
  var total = 0;
  for (var i = 0, len = arguments.length; i < len; ++i) {
    total += arguments[i];
  }
  return total;
}

sum(1, 2, 3) // returns 6

Note: You can't apply array methods on the arguments object. But you can convert it into a regular array as below.

var argsArray = Array.prototype.slice.call(arguments);

How do you make the first letter of a string uppercase

You can create a function which uses a chain of string methods such as charAt, toUpperCase and slice to generate a string with the first letter in uppercase.

function capitalizeFirstLetter(string) {
  return string.charAt(0).toUpperCase() + string.slice(1);
}

What are the pros and cons of for loop

The for-loop is a commonly used iteration syntax in javascript.
It has both pros and cons

#### Pros

- Works in every environment
- You can use break and continue flow control statements

#### Cons

- Too verbose
- Imperative
- You might face off-by-one errors

How do you display the current date in javascript

You can use new Date() to generate a new Date object containing the current date and time. For example, let's display the current date in mm/dd/yyyy format,

var today = new Date();
var dd = String(today.getDate()).padStart(2, '0');
var mm = String(today.getMonth() + 1).padStart(2, '0'); //January is 0!
var yyyy = today.getFullYear();

today = mm + '/' + dd + '/' + yyyy;
document.write(today);

How do you compare two date objects

You need to use the date.getTime() method to compare date values instead of comparison operators (==, !=, ===, and !==)

var d1 = new Date();
var d2 = new Date(d1);
console.log(d1.getTime() === d2.getTime()); //True
console.log(d1 === d2); // False

How do you check if a string starts with another string

You can use ECMAScript 6's String.prototype.startsWith() method to check if a string starts with another string or not. But it is not yet supported in all browsers. Let's see an example of this usage,

"Good morning".startsWith("Good"); // true
"Good morning".startsWith("morning"); // false

How do you trim a string in javascript

JavaScript provides a trim method on string types to trim any whitespace present at the beginning or end of the string.

" Hello World ".trim(); //Hello World

If your browser (<IE9) doesn't support this method then you can use the below polyfill.

if (!String.prototype.trim) {
  (function() {
    // Make sure we trim BOM and NBSP
    var rtrim = /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g;
    String.prototype.trim = function() {
      return this.replace(rtrim, '');
    };
  })();
}

How do you add a key value pair in javascript

There are two possible solutions to add new properties to an object. Let's take a simple object to explain these solutions.
var object = {
  key1: value1,
  key2: value2
};

- Using dot notation: This solution is useful when you know the name of the property

object.key3 = "value3";

- Using square bracket notation: This solution is useful when the name of the property is dynamically determined.

object["key3"] = "value3";

Does the !-- notation represent a special operator

No, that's not a special operator. But it is a combination of 2 standard operators one after the other,

- A logical not (!)
- A prefix decrement (--)

At first, the value is decremented by one and then tested to see if it is equal to zero or not to determine the truthy/falsy value.

How do you assign default values to variables

You can use the logical or operator || in an assignment expression to provide a default value. The syntax looks like below,

var a = b || c;

As per the above expression, variable 'a' will get the value of 'c' only if 'b' is falsy (i.e, if it is null, false, undefined, 0, an empty string, or NaN), otherwise 'a' will get the value of 'b'.

How do you define multiline strings

You can define multiline string literals using the '\' character followed by a line terminator.

var str = "This is a \
very lengthy \
sentence!";

But if you have a space after the '\' character, the code will look exactly the same, but it will raise a SyntaxError.

What is an app shell model

An application shell (or app shell) architecture is one way to build a Progressive Web App that reliably and instantly loads on your users' screens, similar to what you see in native applications. It is useful for getting some initial HTML to the screen fast without a network.

Can we define properties for functions

Yes, we can define properties for functions because functions are also objects.
fn = function(x) {
  //Function code goes here
}

fn.userName = "John"; // Note: the built-in fn.name property is read-only, so use a custom property name

fn.profile = function(y) {
  //Profile code goes here
}

What is the way to find the number of parameters expected by a function

You can use the function.length syntax to find the number of parameters expected by a function. Let's take an example of a sum function to calculate the sum of numbers,

function sum(num1, num2, num3, num4) {
  return num1 + num2 + num3 + num4;
}
sum.length // 4 is the number of parameters expected.

What is a polyfill

A polyfill is a piece of JS code used to provide modern functionality on older browsers that do not natively support it. For example, a Silverlight plugin polyfill can be used to mimic the functionality of an HTML Canvas element on Microsoft Internet Explorer 7.

What are break and continue statements

The break statement is used to "jump out" of a loop. i.e, it breaks the loop and continues executing the code after the loop.

for (i = 0; i < 10; i++) {
  if (i === 5) {
    break;
  }
  text += "Number: " + i + "<br>";
}

The continue statement is used to "jump over" one iteration in the loop. i.e, it breaks one iteration (in the loop), if a specified condition occurs, and continues with the next iteration in the loop.

for (i = 0; i < 10; i++) {
  if (i === 5) {
    continue;
  }
  text += "Number: " + i + "<br>";
}

What are js labels

The label statement allows us to name loops and blocks in JavaScript. We can then use these labels to refer back to the code later. For example, the below code with labels avoids printing the numbers when they are the same,

var i, j;

loop1:
for (i = 0; i < 3; i++) {
  loop2:
  for (j = 0; j < 3; j++) {
    if (i === j) {
      continue loop1;
    }
    console.log('i = ' + i + ', j = ' + j);
  }
}

// Output is:
// "i = 1, j = 0"
// "i = 2, j = 0"
// "i = 2, j = 1"

What are the benefits of keeping declarations at the top

It is recommended to keep all declarations at the top of each script or function.
The benefits of doing this are,

- It gives cleaner code
- It provides a single place to look for local variables
- It is easier to avoid unwanted global variables
- It reduces the possibility of unwanted re-declarations

What are the benefits of initializing variables

It is recommended to initialize variables because of the below benefits,

- It gives cleaner code
- It provides a single place to initialize variables
- It avoids undefined values in the code

What are the recommendations to create new object

It is recommended to avoid creating new objects using new Object(). Instead you can initialize values based on their type to create the objects.

- Assign {} instead of new Object()
- Assign "" instead of new String()
- Assign 0 instead of new Number()
- Assign false instead of new Boolean()
- Assign [] instead of new Array()
- Assign /()/ instead of new RegExp()
- Assign function (){} instead of new Function()

You can define them as in this example,

var v1 = {};
var v2 = "";
var v3 = 0;
var v4 = false;
var v5 = [];
var v6 = /()/;
var v7 = function(){};

How do you define JSON arrays

JSON arrays are written inside square brackets and contain javascript objects. For example, the JSON array of users would be as below,

"users": [
  {"firstName":"John", "lastName":"Abrahm"},
  {"firstName":"Anna", "lastName":"Smith"},
  {"firstName":"Shane", "lastName":"Warn"}
]

How do you generate random integers

You can use Math.random() with Math.floor() to return random integers.
For example, if you want to generate random integers between 1 and 10, the multiplication factor should be 10,

Math.floor(Math.random() * 10) + 1; // returns a random integer from 1 to 10
Math.floor(Math.random() * 100) + 1; // returns a random integer from 1 to 100

Note: Math.random() returns a random number between 0 (inclusive) and 1 (exclusive)

Can you write a random integers function to print integers within a range

Yes, you can create a proper random function to return a random number between min and max (both included)

function randomInteger(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
randomInteger(1, 100); // returns a random integer from 1 to 100
randomInteger(1, 1000); // returns a random integer from 1 to 1000

What is tree shaking

Tree shaking is a form of dead code elimination. It means that unused modules will not be included in the bundle during the build process, and for that it relies on the static structure of ES2015 module syntax (i.e. import and export). Initially this was popularized by the ES2015 module bundler rollup.

What is the need of tree shaking

Tree shaking can significantly reduce the code size of any application. i.e, the less code we send over the wire the more performant the application will be. For example, if we just want to create a "Hello World" application using SPA frameworks then it will take around a few MBs, but by tree shaking it can bring down the size to just a few hundred KBs. Tree shaking is implemented in the Rollup and Webpack bundlers.

Is it recommended to use eval

No, it allows arbitrary code to be run which causes a security problem. As we know, the eval() function is used to run text as code. In most cases, it should not be necessary to use it.

What is a Regular Expression

A regular expression is a sequence of characters that forms a search pattern. You can use this search pattern for searching data in a text.
These can be used to perform all types of text search and text replace operations. Let's see the syntax format now,

/pattern/modifiers;

For example, the regular expression or search pattern for a case-insensitive username would be,

/John/i

What are the string methods available in Regular expression

There are two string methods that accept regular expressions: search() and replace().

The search() method uses an expression to search for a match, and returns the position of the match.

var msg = "Hello John";
var n = msg.search(/John/i); // 6

The replace() method is used to return a modified string where the pattern is replaced.

var msg = "Hello John";
var n = msg.replace(/John/i, "Buttler"); // Hello Buttler

What are modifiers in regular expression

Modifiers can be used to perform case-insensitive and global searches. Let's list down some of the modifiers,

| Modifier | Description |
| ---- | --------- |
| i | Perform case-insensitive matching |
| g | Perform a global match rather than stopping at the first match |
| m | Perform multiline matching |

Let's take an example of the global modifier,

var text = "Learn JS one by one";
var pattern = /one/g;
var result = text.match(pattern); // one,one

What are regular expression patterns

Regular expressions provide a group of patterns in order to match characters. Basically they are categorized into 3 types,

- Brackets: These are used to find a range of characters.
For example, below are some use cases,
  - [abc]: Used to find any of the characters between the brackets (a, b, c)
  - [0-9]: Used to find any of the digits between the brackets
  - (a|b): Used to find any of the alternatives separated with |

- Metacharacters: These are characters with a special meaning.
For example, below are some use cases,
  - \d: Used to find a digit
  - \s: Used to find a whitespace character
  - \b: Used to find a match at the beginning or end of a word

- Quantifiers: These are useful to define quantities.
For example, below are some use cases,
  - n+: Used to find matches for any string that contains at least one n
  - n*: Used to find matches for any string that contains zero or more occurrences of n
  - n?: Used to find matches for any string that contains zero or one occurrence of n

What is a RegExp object

A RegExp object is a regular expression object with predefined properties and methods. Let's see a simple usage of the RegExp object,

var regexp = new RegExp('\\w+');
console.log(regexp);
// expected output: /\w+/

How do you search a string for a pattern

You can use the test() method of a regular expression in order to search a string for a pattern, and return true or false depending on the result.

var pattern = /you/;
console.log(pattern.test("How are you?")); //true

What is the purpose of exec method

The purpose of the exec method is similar to the test method, but it executes a search for a match in a specified string and returns a result array, or null, instead of returning true/false.
var pattern = /you/;
console.log(pattern.exec("How are you?")); //["you", index: 8, input: "How are you?", groups: undefined]

How do you change the style of a HTML element

You can change the inline style or class name of a HTML element using javascript

- Using style property: You can modify inline style using the style property

document.getElementById("title").style.fontSize = "30px";

- Using className property: It is easy to modify the class of an element using the className property

document.getElementById("title").className = "custom-title";

What would be the result of 1+2+'3'

The output is going to be 33. Since 1 and 2 are numeric values, the result of the first two digits is going to be the numeric value 3. The next digit is a string type value, so the addition of the numeric value 3 and the string type value '3' is just going to be the concatenated value 33.

What is a debugger statement

The debugger statement invokes any available debugging functionality, such as setting a breakpoint. If no debugging functionality is available, this statement has no effect.

For example, in the below function a debugger statement has been inserted. So execution is paused at the debugger statement just like a breakpoint in the script source.

function getProfile() {
  // code goes here
  debugger;
  // code goes here
}

What is the purpose of breakpoints in debugging

You can set breakpoints in the javascript code once the debugger statement is executed and the debugger window pops up. At each breakpoint, javascript will stop executing, and let you examine the JavaScript values. After examining values, you can resume the execution of code using the play button.

Can I use reserved words as identifiers

No, you cannot use reserved words as variable, label, object or function names.
Let's see one simple example,

var else = "hello"; // Uncaught SyntaxError: Unexpected token else

How do you detect a mobile browser

You can use a regex which returns a true or false value depending on whether or not the user is browsing with a mobile device. A short pattern is shown below for brevity; production checks typically test against a much longer device list.

window.mobilecheck = function() {
  var userAgent = navigator.userAgent || navigator.vendor || window.opera;
  return /Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(userAgent);
};

How do you detect a mobile browser without regexp

You can detect mobile browsers by simply running through a list of devices and checking if the useragent matches anything. This is an alternative solution to RegExp usage,

function detectmob() {
  var userAgent = navigator.userAgent;
  var devices = ["Android", "iPhone", "iPad", "iPod", "BlackBerry", "Windows Phone", "Opera Mini"];
  for (var i = 0; i < devices.length; i++) {
    if (userAgent.indexOf(devices[i]) !== -1) {
      return true;
    }
  }
  return false;
}

How do you get the image width and height using JS

You can programmatically load the image and check its dimensions (width and height) using Javascript.

var img = new Image();
img.onload = function() {
  console.log(this.width + 'x' + this.height);
}
img.src = '';

How do you make synchronous HTTP request

Browsers provide an XMLHttpRequest object which can be used to make synchronous HTTP requests from JavaScript

function httpGet(theUrl) {
  var xmlHttpReq = new XMLHttpRequest();
  xmlHttpReq.open("GET", theUrl, false); // false for synchronous request
  xmlHttpReq.send(null);
  return xmlHttpReq.responseText;
}

How do you make asynchronous HTTP request

Browsers provide an XMLHttpRequest object which can be used to make asynchronous HTTP requests from JavaScript by passing the 3rd parameter as true.

function httpGetAsync(theUrl, callback) {
  var xmlHttpReq = new XMLHttpRequest();
  xmlHttpReq.onreadystatechange = function() {
    if (xmlHttpReq.readyState == 4 && xmlHttpReq.status == 200)
      callback(xmlHttpReq.responseText);
  }
  xmlHttpReq.open("GET", theUrl, true); // true for asynchronous
  xmlHttpReq.send(null);
}

How do you convert date to another timezone in javascript

You can use the toLocaleString() method to convert a date in one timezone to another.
For example, let's convert the current date to the British English locale with the UTC timezone as below,

var event = new Date();
console.log(event.toLocaleString('en-GB', { timeZone: 'UTC' })); //29/06/2019, 09:56:00

What are the properties used to get size of window

You can use the innerWidth, innerHeight, clientWidth, clientHeight properties of the window, document element and document body objects to find the size of a window. Let's use a combination of these properties to calculate the size of a window or document,

var width = window.innerWidth
  || document.documentElement.clientWidth
  || document.body.clientWidth;

var height = window.innerHeight
  || document.documentElement.clientHeight
  || document.body.clientHeight;

What is a conditional operator in javascript

The conditional (ternary) operator is the only JavaScript operator that takes three operands and acts as a shortcut for if statements.

var isAuthenticated = false;
console.log(isAuthenticated ? 'Hello, welcome' : 'Sorry, you are not authenticated'); //Sorry, you are not authenticated

Can you apply chaining on conditional operator

Yes, you can apply chaining on conditional operators similar to an if … else if … else if … else chain. The syntax is going to be as below,

function traceValue(someParam) {
  return condition1 ? value1
    : condition2 ? value2
    : condition3 ? value3
    : value4;
}

// The above conditional operator is equivalent to:

function traceValue(someParam) {
  if (condition1) { return value1; }
  else if (condition2) { return value2; }
  else if (condition3) { return value3; }
  else { return value4; }
}

What are the ways to execute javascript after page load

You can execute javascript after page load in many different ways,

- window.onload: window.onload = function ...
- document.onload: document.onload = function ...
- body onload: <body onload="script();">

What is the difference between proto and prototype

The __proto__ object is the actual object that is used in the lookup chain to resolve methods, etc.
Whereas prototype is the object that is used to build __proto__ when you create an object with new

( new Employee ).__proto__ === Employee.prototype;
( new Employee ).prototype === undefined;

Give an example where you really need a semicolon

It is recommended to use semicolons after every statement in JavaScript. For example, in the below case it throws an error "... is not a function" at runtime due to a missing semicolon.

// define a function
var fn = function () {
  //...
} // semicolon missing at this line

// then execute some code inside a closure
(function () {
  //...
})();

and it will be interpreted as

var fn = function () {
  //...
}(function () {
  //...
})();

In this case, we are passing the second function as an argument to the first function and then trying to call the result of the first function call as a function. Hence, the second function will fail with a "... is not a function" error at runtime.

What is a freeze method

The freeze() method is used to freeze an object. Freezing an object does not allow adding new properties to the object, and prevents removing or altering the enumerability, configurability, or writability of existing properties. i.e, it returns the passed object and does not create a frozen copy.

const obj = {
  prop: 100
};

Object.freeze(obj);
obj.prop = 200; // Throws an error in strict mode

console.log(obj.prop); //100

Note: It causes a TypeError if the argument passed is not an object.

What is the purpose of freeze method

Below are the main benefits of using the freeze method,

- It is used for freezing objects and arrays.
- It is used to make an object immutable.

Why do I need to use freeze method

In the object-oriented paradigm, an existing API contains certain elements that are not intended to be extended, modified, or re-used outside of their current context. Hence it works like the final keyword which is used in various languages.
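One caveat worth keeping in mind: Object.freeze is shallow, so nested objects stay mutable unless they are frozen recursively. The sketch below illustrates the difference; deepFreeze is a hypothetical helper written here for demonstration, not a built-in.

```javascript
// Object.freeze only freezes the top level: nested objects stay mutable.
const shallow = Object.freeze({ mode: 'production', limits: { maxRetries: 3 } });

// Recursively freeze an object and everything reachable from it
// (deepFreeze is an illustrative helper, not part of the language).
function deepFreeze(obj) {
  for (const key of Object.getOwnPropertyNames(obj)) {
    const value = obj[key];
    if (value && typeof value === 'object') {
      deepFreeze(value); // freeze nested objects first
    }
  }
  return Object.freeze(obj);
}

const deep = deepFreeze({ mode: 'production', limits: { maxRetries: 3 } });
```

With the shallow version, shallow.limits.maxRetries can still be reassigned, while after deepFreeze every nested object reports Object.isFrozen(...) as true.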
How do you detect a browser language preference

You can use the navigator object to detect a browser language preference as below,

var language = navigator.languages && navigator.languages[0] || // Chrome / Firefox
  navigator.language || // All browsers
  navigator.userLanguage; // IE <= 10

console.log(language);

How to convert string to title case with javascript

Title case means that the first letter of each word is capitalized. You can convert a string to title case using the below function,

function toTitleCase(str) {
  return str.replace(
    /\w\S*/g,
    function(txt) {
      return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
    }
  );
}
toTitleCase("good morning john"); // Good Morning John

How do you detect javascript disabled in the page

You can use the <noscript> tag to detect whether javascript is disabled or not. The content inside <noscript> is displayed when JavaScript is disabled, and is typically used to show alternative content when the page is generated with JavaScript.

<script type="text/javascript">
  // JS related code goes here
</script>
<noscript>
  <a href="next_page.html?noJS=true">JavaScript is disabled in the page. Please click Next Page</a>
</noscript>

What are various operators supported by javascript

An operator is capable of manipulating (mathematical and logical computations) a certain value or operand.
There are various operators supported by JavaScript as below,

- Arithmetic Operators: Includes + (Addition), - (Subtraction), * (Multiplication), / (Division), % (Modulus), ++ (Increment) and -- (Decrement)
- Comparison Operators: Includes == (Equal), != (Not Equal), === (Equal with type), > (Greater than), >= (Greater than or Equal to), < (Less than), <= (Less than or Equal to)
- Logical Operators: Includes && (Logical AND), || (Logical OR), ! (Logical NOT)
- Assignment Operators: Includes = (Assignment), += (Add and Assignment), -= (Subtract and Assignment), *= (Multiply and Assignment), /= (Divide and Assignment), %= (Modulus and Assignment)
- Ternary Operator: It includes the conditional (?:) operator
- typeof Operator: It is used to find the type of a variable. The syntax looks like typeof variable

What is a rest parameter

A rest parameter is an improved way to handle function parameters which allows us to represent an indefinite number of arguments as an array. The syntax would be as below,

function f(a, b, ...theArgs) {
  // ...
}

For example, let's take a sum example to calculate over a dynamic number of parameters,

function total(...args) {
  let sum = 0;
  for (let i of args) {
    sum += i;
  }
  return sum;
}

console.log(total(1, 2)); //3
console.log(total(1, 2, 3)); //6
console.log(total(1, 2, 3, 4)); //10
console.log(total(1, 2, 3, 4, 5)); //15

Note: Rest parameters were added in ES2015 or ES6

What happens if you do not use rest parameter as a last argument

The rest parameter should be the last argument, as its job is to collect all the remaining arguments into an array. For example, if you define a function like below it doesn't make any sense and will throw an error.
function someFunc(a, ...b, c) {
  //Your code goes here
  return;
}

What are the bitwise operators available in javascript

Below is the list of bitwise logical operators used in JavaScript

- Bitwise AND ( & )
- Bitwise OR ( | )
- Bitwise XOR ( ^ )
- Bitwise NOT ( ~ )
- Left Shift ( << )
- Sign Propagating Right Shift ( >> )
- Zero fill Right Shift ( >>> )

What is a spread operator

The spread operator allows iterables (arrays / objects / strings) to be expanded into single arguments/elements. Let's take an example to see this behavior,

function calculateSum(x, y, z) {
  return x + y + z;
}

const numbers = [1, 2, 3];

console.log(calculateSum(...numbers)); // 6

How do you determine whether object is frozen or not

The Object.isFrozen() method is used to determine if an object is frozen or not. An object is frozen if all of the below conditions hold true,

- If it is not extensible.
- If all of its properties are non-configurable.
- If all its data properties are non-writable.

The usage is going to be as follows,

const object = {
  property: 'Welcome JS world'
};
Object.freeze(object);
console.log(Object.isFrozen(object));

How do you determine two values same or not using object

The Object.is() method determines whether two values are the same value. For example, the usage with different types of values would be,

Object.is('hello', 'hello'); // true
Object.is(window, window); // true
Object.is([], []) // false

Two values are the same if one of the following holds:

- both undefined
- both null
- both true or both false
- both strings of the same length with the same characters in the same order
- both the same object (means both objects have the same reference)
- both numbers and: both +0, both -0, both NaN, or both non-zero, not NaN, and with the same value

What is the purpose of using object is method

Some of the applications of Object's is method are as follows,

- It is used for comparison of two strings.
- It is used for comparison of two numbers.
- It is used for comparing the polarity of two numbers.
- It is used for comparison of two objects.

How do you copy properties from one object to another

You can use the Object.assign() method which is used to copy the values and properties from one or more source objects to a target object. It returns the target object with the properties and values copied from the source objects. The syntax would be as below,

Object.assign(target, ...sources)

Let's take an example with one source and one target object,

const target = { a: 1, b: 2 };
const source = { b: 3, c: 4 };

const returnedTarget = Object.assign(target, source);

console.log(target); // { a: 1, b: 3, c: 4 }
console.log(returnedTarget); // { a: 1, b: 3, c: 4 }

As observed in the above code, there is a common property (b) between source and target, so its value has been overwritten.

What are the applications of assign method

Below are some of the main applications of the Object.assign() method,

- It is used for cloning an object.
- It is used to merge objects with the same properties.

What is a proxy object

The Proxy object is used to define custom behavior for fundamental operations such as property lookup, assignment, enumeration, function invocation, etc. The syntax would be as follows,

var p = new Proxy(target, handler);

Let's take an example of a proxy object,

var handler = {
  get: function(obj, prop) {
    return prop in obj ? obj[prop] : 100;
  }
};

var p = new Proxy({}, handler);
p.a = 10;
p.b = null;

console.log(p.a, p.b); // 10, null
console.log('c' in p, p.c); // false, 100

In the above code, it uses a get handler which defines the behavior of the proxy when an operation is performed on it

What is the purpose of seal method

The Object.seal() method is used to seal an object, by preventing new properties from being added to it and marking all existing properties as non-configurable. But values of existing properties can still be changed as long as they are writable.
Let's see the below example to understand more about the seal() method

const object = {
  property: 'Welcome JS world'
};
Object.seal(object);
object.property = 'Welcome to object world';
console.log(Object.isSealed(object)); // true
delete object.property; // You cannot delete when sealed
console.log(object.property); //Welcome to object world

What are the applications of seal method

Below are the main applications of the Object.seal() method,

- It is used for sealing objects and arrays.
- It is used to fix the shape of an object: properties cannot be added or removed, but writable values can still be updated.

What are the differences between freeze and seal methods

If an object is frozen using the Object.freeze() method then its properties become immutable and no changes can be made to them, whereas if an object is sealed using the Object.seal() method then changes can still be made to the existing properties of the object.

How do you determine if an object is sealed or not

The Object.isSealed() method is used to determine if an object is sealed or not. An object is sealed if all of the below conditions hold true

- If it is not extensible.
- If all of its properties are non-configurable.
- If its properties are not removable (but not necessarily non-writable).

Let's see it in action

const object = {
  property: 'Hello, Good morning'
};

Object.seal(object); // Using seal() method to seal the object

console.log(Object.isSealed(object)); // checking whether the object is sealed or not

How do you get enumerable key and value pairs

The Object.entries() method is used to return an array of a given object's own enumerable string-keyed property [key, value] pairs, in the same order as that provided by a for...in loop. Let's see the functionality of the Object.entries() method in an example,

const object = {
  a: 'Good morning',
  b: 100
};

for (let [key, value] of Object.entries(object)) {
  console.log(`${key}: ${value}`); // a: 'Good morning'
                                   // b: 100
}

Note: The order is not guaranteed to match the order in which the properties were defined.
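Since Object.entries() returns plain [key, value] arrays, it pairs naturally with Map and with Object.fromEntries (ES2019) for transforming objects. A small sketch, where the prices/discounted names are purely illustrative:

```javascript
const prices = { coffee: 5, tea: 3, juice: 4 };

// Entries convert directly into a Map, which keeps insertion order
// and supports any key type.
const priceMap = new Map(Object.entries(prices));

// Round-tripping through entries lets you transform every value:
// here a 10% discount is applied to each price.
const discounted = Object.fromEntries(
  Object.entries(prices).map(([item, price]) => [item, price * 0.9])
);
```

The same entries/fromEntries round trip also works for filtering keys, by inserting Array.prototype.filter into the chain.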
What is the main difference between Object.values and Object.entries method
The Object.values() method's behavior is similar to the Object.entries() method, but it returns an array of values instead of [key, value] pairs.

```javascript
const object = {
  a: 'Good morning',
  b: 100
};

for (let value of Object.values(object)) {
  console.log(`${value}`); // Good morning
                           // 100
}
```

How can you get the list of keys of any object
You can use the Object.keys() method, which returns an array of a given object's own enumerable property names, in the same order as we get with a normal loop. For example, you can get the keys of a user object,

```javascript
const user = {
  name: 'John',
  gender: 'male',
  age: 40
};

console.log(Object.keys(user)); // ['name', 'gender', 'age']
```

How do you create an object with prototype
The Object.create() method is used to create a new object with the specified prototype object and properties. i.e, It uses an existing object as the prototype of the newly created object.

```javascript
const user = {
  name: 'John',
  printInfo: function () {
    console.log(`My name is ${this.name}.`);
  }
};

const admin = Object.create(user);
admin.name = 'Nick'; // Remember that "name" is a property set on "admin" but not on the "user" object
admin.printInfo(); // My name is Nick
```

What is a WeakSet
WeakSet is used to store a collection of weakly held objects (weak references). The syntax would be as follows,

```javascript
new WeakSet([iterable]);
```

Let's see the below example to explain its behavior,

```javascript
var ws = new WeakSet();
var user = {};
ws.add(user);
ws.has(user); // true
ws.delete(user); // removes user from the set
ws.has(user); // false, user has been removed
```

What are the differences between WeakSet and Set
The main difference is that references to objects in Set are strong while references to objects in WeakSet are weak. i.e, An object in WeakSet can be garbage collected if there is no other reference to it.
Other differences are,

- Sets can store any value whereas WeakSets can store only collections of objects.
- WeakSet does not have a size property, unlike Set.
- WeakSet does not have methods such as clear, keys, values, entries, forEach.
- WeakSet is not iterable.

List down the collection of methods available on WeakSet
Below is the list of methods available on WeakSet,

- add(value): Appends a new object with the given value to the WeakSet.
- delete(value): Deletes the value from the WeakSet collection.
- has(value): Returns true if the value is present in the WeakSet collection, otherwise it returns false.

Note: Unlike arrays, a WeakSet has no length or size; its contents can be garbage collected at any time, so it cannot be counted or iterated.

Let's see the functionality of all the above methods in an example,

```javascript
var weakSetObject = new WeakSet();
var firstObject = {};
var secondObject = {};

// add(value)
weakSetObject.add(firstObject);
weakSetObject.add(secondObject);

console.log(weakSetObject.has(firstObject)); // true
weakSetObject.delete(secondObject);
console.log(weakSetObject.has(secondObject)); // false
```

What is a WeakMap
The WeakMap object is a collection of key/value pairs in which the keys are weakly referenced. In this case, keys must be objects and the values can be arbitrary values. The syntax looks like below,

```javascript
new WeakMap([iterable])
```

Let's see the below example to explain its behavior,

```javascript
var wm = new WeakMap();
var user = {};
wm.set(user, 'John');
wm.has(user); // true
wm.delete(user); // removes user from the map
wm.has(user); // false, user has been removed
```

What are the differences between WeakMap and Map
The main difference is that references to key objects in Map are strong while references to key objects in WeakMap are weak. i.e, A key object in WeakMap can be garbage collected if there is no other reference to it. Other differences are,

- Maps can store any key type whereas WeakMaps can store only collections of key objects.
- WeakMap does not have a size property, unlike Map.
- WeakMap does not have methods such as clear, keys, values, entries, forEach.
- WeakMap is not iterable.

List down the collection of methods available on WeakMap
Below is the list of methods available on WeakMap,

- set(key, value): Sets the value for the key in the WeakMap object. Returns the WeakMap object.
- delete(key): Removes any value associated to the key.
- has(key): Returns a Boolean asserting whether a value has been associated to the key in the WeakMap object or not.
- get(key): Returns the value associated to the key, or undefined if there is none.

Let's see the functionality of all the above methods in an example,

```javascript
var weakMapObject = new WeakMap();
var firstObject = {};
var secondObject = {};

// set(key, value)
weakMapObject.set(firstObject, 'John');
weakMapObject.set(secondObject, 100);

console.log(weakMapObject.has(firstObject)); // true
console.log(weakMapObject.get(firstObject)); // John
weakMapObject.delete(secondObject);
```

What is the purpose of uneval
The uneval() function is used to create a string representation of the source code of an Object. It is a top-level function and is not associated with any object. Let's see the below example to know more about its functionality,

```javascript
var a = 1;
uneval(a); // returns a String containing 1
uneval(function user() {}); // returns "(function user(){})"
```

Note: uneval() is a non-standard function that was only ever implemented in Firefox and has since been removed; it should not be used in production code.

How do you encode an URL
The encodeURI() function is used to encode a complete URI which has special characters, except the (, / ? : @ & = + $ #) characters.

```javascript
var uri = 'шеллы';
var encoded = encodeURI(uri);
console.log(encoded); // %D1%88%D0%B5%D0%BB%D0%BB%D1%8B
```

How do you decode an URL
The decodeURI() function is used to decode a Uniform Resource Identifier (URI) previously created by encodeURI().

```javascript
var uri = 'шеллы';
var encoded = encodeURI(uri);
console.log(encoded); // %D1%88%D0%B5%D0%BB%D0%BB%D1%8B
try {
  console.log(decodeURI(encoded)); // "шеллы"
} catch (e) {
  // catches a malformed URI
  console.error(e);
}
```

How do you print the contents of web page
The window object provides a print() method which is used to print the contents of the current window.
It opens a Print dialog box which lets you choose between various printing options. Let's see the usage of the print method in an example,

```html
<input type="button" value="Print" onclick="window.print()" />
```

Note: In most browsers, it will block while the print dialog is open.

What is the difference between uneval and eval
The uneval function returns the source of a given object, whereas the eval function does the opposite, by evaluating that source code. Let's see an example to clarify the difference,

```javascript
var msg = uneval(function greeting() {
  return 'Hello, Good morning';
});
var greeting = eval(msg);
greeting(); // returns "Hello, Good morning"
```

What is an anonymous function
An anonymous function is a function without a name! Anonymous functions are commonly assigned to a variable name or used as a callback function. The syntax would be as below,

```javascript
function (optionalParameters) {
  //do something
}

const myFunction = function () {
  //Anonymous function assigned to a variable
  //do something
};

[1, 2, 3].map(function (element) {
  //Anonymous function used as a callback function
  //do something
});
```

Let's see the above anonymous function in an example,

```javascript
var x = function (a, b) { return a * b; };
var z = x(5, 10);
console.log(z); // 50
```

What is the precedence order between local and global variables
A local variable takes precedence over a global variable with the same name. Let's see this behavior in an example,

```javascript
var msg = "Good morning";
function greeting() {
  var msg = "Good Evening"; // the local variable shadows the global one
  console.log(msg); // Good Evening
}
greeting();
console.log(msg); // Good morning - the global variable is unchanged
```

What are javascript accessors
ECMAScript 5 introduced javascript object accessors or computed properties through getters and setters. Getters use the get keyword whereas setters use the set keyword.
```javascript
var user = {
  firstName: "John",
  lastName: "Abraham",
  language: "en",
  get lang() {
    return this.language;
  },
  set lang(lang) {
    this.language = lang;
  }
};

console.log(user.lang); // getter accesses lang as en
user.lang = 'fr';
console.log(user.lang); // setter used to set lang as fr
```

How do you define property on Object constructor
The Object.defineProperty() static method is used to define a new property directly on an object, or modify an existing property on an object, and returns the object. Let's see an example to know how to define a property,

```javascript
const newObject = {};

Object.defineProperty(newObject, 'newProperty', {
  value: 100,
  writable: false
});

console.log(newObject.newProperty); // 100

newObject.newProperty = 200; // Throws an error in strict mode due to writable: false (fails silently otherwise)
```

What is the difference between get and defineProperty
Both have similar results unless you use classes. If you use get, the property will be defined on the prototype of the object, whereas using Object.defineProperty() the property will be defined on the instance it is applied to.

What are the advantages of Getters and Setters
Below is the list of benefits of Getters and Setters,

- They provide simpler syntax.
- They are used for defining computed properties, or accessors in JS.
- Useful to provide an equivalence relation between properties and methods.
- They can provide better data quality.
- Useful for doing things behind the scenes with encapsulated logic.

Can I add getters and setters using defineProperty method
Yes, you can use the Object.defineProperty() method to add Getters and Setters.
For example, the below counter object uses increment, decrement, add and subtract properties,

```javascript
var obj = { counter: 0 };

// Define getters
Object.defineProperty(obj, "increment", {
  get: function () { this.counter++; }
});
Object.defineProperty(obj, "decrement", {
  get: function () { this.counter--; }
});

// Define setters
Object.defineProperty(obj, "add", {
  set: function (value) { this.counter += value; }
});
Object.defineProperty(obj, "subtract", {
  set: function (value) { this.counter -= value; }
});

obj.add = 10;      // counter = 10
obj.subtract = 5;  // counter = 5
obj.increment;     // counter = 6
obj.decrement;     // counter = 5
console.log(obj.counter); // 5
```

Note that the increment and decrement getters are used purely for their side effects: reading them updates the counter, while the getters themselves return undefined.

What is the purpose of switch-case
The switch case statement in JavaScript is used for decision making purposes. In a few cases, using the switch case statement is going to be more convenient than if-else statements. The syntax would be as below,

```javascript
switch (expression) {
  case value1:
    statement1;
    break;
  case value2:
    statement2;
    break;
  // ...
  case valueN:
    statementN;
    break;
  default:
    statementDefault;
}
```

The above multi-way branch statement provides an easy way to dispatch execution to different parts of code based on the value of the expression.

What are the conventions to be followed for the usage of switch case
Below is the list of conventions that should be taken care of,

- The expression can be of type either number or string.
- Duplicate values are not allowed for the expression.
- The default statement is optional. If the expression passed to switch does not match any case value then the statement within the default case will be executed.
- The break statement is used inside the switch to terminate a statement sequence.
- The break statement is optional. But if it is omitted, the execution will continue on into the next case.

What are primitive data types
A primitive data type is data that has a primitive value (which has no properties or methods). There were originally 5 types of primitive data types (ES6 later added symbol, and ES2020 added bigint).
- string
- number
- boolean
- null
- undefined
- symbol (added in ES6)
- bigint (added in ES2020)

What are the different ways to access object properties
There are 3 possible ways for accessing the property of an object,

- Dot notation: It uses a dot for accessing the properties, e.g. objectName.property
- Square brackets notation: It uses square brackets for property access, e.g. objectName["property"]
- Expression notation: It uses an expression in the square brackets, e.g. objectName[expression]

What are the function parameter rules
JavaScript functions follow the below rules for parameters,

- The function definitions do not specify data types for parameters.
- They do not perform type checking on the passed arguments.
- They do not check the number of arguments received.

i.e, The below function follows the above rules,

```javascript
function functionName(parameter1, parameter2, parameter3) {
  console.log(parameter1); // 1
}
functionName(1);
```

What is an error object
An error object is a built-in object that provides error information when an error occurs. It has two properties: name and message. For example, the below function logs error details,

```javascript
try {
  greeting("Welcome"); // greeting is not defined, so an error is thrown
} catch (err) {
  console.log(err.name + ": " + err.message);
}
```

When do you get a syntax error
A SyntaxError is thrown if you try to evaluate code with a syntax error.
For example, the below missing quote for the function parameter throws a syntax error,

```javascript
try {
  eval("greeting('welcome)"); // Missing ' will produce an error
} catch (err) {
  console.log(err.name);
}
```

What are the different error names from error object
There are 6 different types of error names returned from the error object,

| Error Name | Description |
| ---- | --------- |
| EvalError | An error has occurred in the eval() function |
| RangeError | An error has occurred with a number "out of range" |
| ReferenceError | An error due to an illegal reference |
| SyntaxError | An error due to a syntax error |
| TypeError | An error due to a type error |
| URIError | An error due to encodeURI() |

What are the various statements in error handling
Below is the list of statements used in error handling,

- try: This statement is used to test a block of code for errors.
- catch: This statement is used to handle the error.
- throw: This statement is used to create custom errors.
- finally: This statement is used to execute code after try and catch, regardless of the result.

What are the two types of loops in javascript

- Entry Controlled loops: In this kind of loop, the test condition is tested before entering the loop body. For example, the for loop and while loop come under this category.
- Exit Controlled loops: In this kind of loop, the test condition is tested or evaluated at the end of the loop body. i.e, the loop body will execute at least once, irrespective of whether the test condition is true or false. For example, the do-while loop comes under this category.

What is nodejs
Node.js is a server-side platform built on Chrome's JavaScript runtime for easily building fast and scalable network applications. It is an event-based, non-blocking, asynchronous I/O runtime that uses Google's V8 JavaScript engine and the libuv library.
What is an Intl object
The Intl object is the namespace for the ECMAScript Internationalization API, which provides language sensitive string comparison, number formatting, and date and time formatting. It provides access to several constructors and language sensitive functions.

How do you perform language specific date and time formatting
You can use the Intl.DateTimeFormat object, which is a constructor for objects that enable language-sensitive date and time formatting. Let's see this behavior with an example,

```javascript
var date = new Date(Date.UTC(2019, 07, 07, 3, 0, 0));
console.log(new Intl.DateTimeFormat('en-GB').format(date)); // 07/08/2019
console.log(new Intl.DateTimeFormat('en-AU').format(date)); // 07/08/2019
```

What is an Iterator
An iterator is an object which defines a sequence and a return value upon its termination. It implements the Iterator protocol with a next() method which returns an object with two properties: value (the next value in the sequence) and done (which is true if the last value in the sequence has been consumed).

How does synchronous iteration work
Synchronous iteration involves an iterable (an object whose [Symbol.iterator]() method returns an iterator) and an iterator (whose next() method returns result objects). In each result object, the value property contains an iterated element and the done property determines whether the element is the last element or not. Let's demonstrate synchronous iteration with an array as below,

```javascript
const iterable = ['one', 'two', 'three'];
const iterator = iterable[Symbol.iterator]();
console.log(iterator.next()); // { value: 'one', done: false }
console.log(iterator.next()); // { value: 'two', done: false }
console.log(iterator.next()); // { value: 'three', done: false }
console.log(iterator.next()); // { value: undefined, done: true }
```

What is an event loop
The event loop monitors the call stack and the queue of callback functions. When an async operation completes, its callback function is pushed into the queue; once the call stack is empty (i.e, the currently executing code has finished), the event loop moves the first callback from the queue onto the call stack for execution.

Note: It allows Node.js to perform non-blocking I/O operations even though JavaScript is single-threaded.
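The deferral described above can be observed with a zero-delay timer: even at 0 ms, the callback is queued and only runs after the currently executing synchronous code has finished.

```javascript
// The setTimeout callback waits in the queue until the call stack is empty,
// so it runs after all the synchronous logs, despite the 0 ms delay.
console.log('start');

setTimeout(() => {
  console.log('timeout callback');
}, 0);

console.log('end');
// Output order: start, end, timeout callback
```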
What is call stack
Call Stack is a data structure for javascript interpreters to keep track of function calls in the program. It has two major actions,

- Whenever you call a function for its execution, you are pushing it to the stack.
- Whenever the execution is completed, the function is popped out of the stack.

Let's take an example and its state representation,

```javascript
function hungry() {
  eatFruits();
}
function eatFruits() {
  return "I'm eating fruits";
}

// Invoke the `hungry` function
hungry();
```

The above code is processed in a call stack as below,

- Add the hungry() function to the call stack list and execute the code.
- Add the eatFruits() function to the call stack list and execute the code.
- Delete the eatFruits() function from our call stack list.
- Delete the hungry() function from the call stack list since there are no items anymore.

What is an event queue
The event queue (also known as the callback queue or task queue) is a data structure that holds the callbacks of completed asynchronous operations (timers, network responses, DOM events, etc.) until the call stack is empty, at which point the event loop moves the next callback onto the call stack for execution.

What is a decorator
A decorator is an expression that evaluates to a function and that takes the target, name, and decorator descriptor as arguments. Also, it optionally returns a decorator descriptor to install on the target object. Note that decorators are a language proposal and require a transpiler (such as Babel or TypeScript) to use. Let's define an admin decorator for user classes at design time,

```javascript
function admin(isAdmin) {
  return function (target) {
    target.isAdmin = isAdmin;
  };
}

@admin(true)
class User {}
console.log(User.isAdmin); // true

@admin(false)
class Guest {}
console.log(Guest.isAdmin); // false
```

What are the properties of Intl object
Below is the list of properties available on the Intl object,

- Collator: These are the objects that enable language-sensitive string comparison.
- DateTimeFormat: These are the objects that enable language-sensitive date and time formatting.
- ListFormat: These are the objects that enable language-sensitive list formatting.
- NumberFormat: Objects that enable language-sensitive number formatting.
- PluralRules: Objects that enable plural-sensitive formatting and language-specific rules for plurals.
- RelativeTimeFormat: Objects that enable language-sensitive relative time formatting.

What is an Unary operator
The unary (+) operator is used to convert a variable to a number. If the variable cannot be converted, it will still become a number but with the value NaN. Let's see this behavior in action,

```javascript
var x = "100";
var y = +x;
console.log(typeof x, typeof y); // string, number

var a = "Hello";
var b = +a;
console.log(typeof a, typeof b, b); // string, number, NaN
```

How do you sort elements in an array
The sort() method is used to sort the elements of an array in place and returns the sorted array. The example usage would be as below,

```javascript
var months = ["Aug", "Sep", "Jan", "June"];
months.sort();
console.log(months); // ["Aug", "Jan", "June", "Sep"]
```

What is the purpose of compareFunction while sorting arrays
The compareFunction is used to define the sort order. If omitted, the array elements are converted to strings, then sorted according to each character's Unicode code point value. Let's take an example to see the usage of the compareFunction,

```javascript
let numbers = [1, 2, 5, 3, 4];
numbers.sort((a, b) => b - a);
console.log(numbers); // [5, 4, 3, 2, 1]
```

How do you reverse an array
You can use the reverse() method to reverse the elements in an array. For example, it can turn a descending-sorted array back into ascending order. Let's see the usage of the reverse() method in an example,

```javascript
let numbers = [1, 2, 5, 3, 4];
numbers.sort((a, b) => b - a); // [5, 4, 3, 2, 1]
numbers.reverse();
console.log(numbers); // [1, 2, 3, 4, 5]
```

How do you find min and max value in an array
You can use the Math.min and Math.max methods on array values to find the minimum and maximum elements within an array.
Let's create two functions to find the min and max value within an array,

```javascript
var marks = [50, 20, 70, 60, 45, 30];

function findMin(arr) {
  return Math.min.apply(null, arr);
}
function findMax(arr) {
  return Math.max.apply(null, arr);
}

console.log(findMin(marks)); // 20
console.log(findMax(marks)); // 70
```

How do you find min and max values without Math functions
You can write functions which loop through an array, comparing each value with the lowest or highest value found so far, to find the min and max values. Let's create those functions to find min and max values,

```javascript
var marks = [50, 20, 70, 60, 45, 30];

function findMin(arr) {
  var length = arr.length;
  var min = Infinity;
  while (length--) {
    if (arr[length] < min) {
      min = arr[length];
    }
  }
  return min;
}

function findMax(arr) {
  var length = arr.length;
  var max = -Infinity;
  while (length--) {
    if (arr[length] > max) {
      max = arr[length];
    }
  }
  return max;
}

console.log(findMin(marks)); // 20
console.log(findMax(marks)); // 70
```

What is an empty statement and purpose of it
The empty statement is a semicolon (;) indicating that no statement will be executed, even if JavaScript syntax requires one. Since there is no action with an empty statement you might think its usage is rare, but the empty statement is occasionally useful when you want to create a loop that has an empty body. For example, you can initialize an array with zero values as below,

```javascript
// Initialize an array a with zeros; the loop body is an empty statement
var a = new Array(5);
for (let i = 0; i < a.length; a[i++] = 0);
```

How do you get metadata of a module
You can use the import.meta object, which is a meta-property exposing context-specific metadata to a JavaScript module. It contains information about the current module, such as the module's URL. In browsers, you might get different metadata than in NodeJS.

```html
<script type="module" src="welcome-module.js"></script>
```

```javascript
console.log(import.meta); // { url: "<the full URL of welcome-module.js>" }
```

What is a comma operator
The comma operator is used to evaluate each of its operands from left to right and returns the value of the last operand.
This is totally different from comma usage within arrays, objects, and function arguments and parameters. For example, the usage for numeric expressions would be as below,

```javascript
var x = 1;
x = (x++, x);
console.log(x); // 2
```

What is the advantage of a comma operator
It is normally used to include multiple expressions in a location that requires a single expression. One of the common usages of this comma operator is to supply multiple parameters in a for loop. For example, the below for loop uses multiple expressions in a single location using the comma operator,

```javascript
for (var a = 0, b = 10; a <= 10; a++, b--) { }
```

You can also use the comma operator in a return statement, where it processes before returning,

```javascript
function myFunction() {
  var a = 1;
  return (a += 10, a); // 11
}
```

What is typescript
TypeScript is a typed superset of JavaScript created by Microsoft that compiles to plain JavaScript. It adds optional static typing to the language. For example, the below greeting function allows only a string type as its argument,

```typescript
function greeting(name: string): string {
  return "Hello, " + name;
}

let user = "Sudheer";
console.log(greeting(user));
```

What are the differences between javascript and typescript
Below is the list of differences between javascript and typescript,

| Feature | TypeScript | JavaScript |
| ---- | --------- | ---- |
| Language paradigm | Object oriented programming language | Scripting language |
| Typing support | Supports static typing | It has dynamic typing |
| Modules | Supported | Not supported |
| Interface | It has the interfaces concept | Doesn't support interfaces |
| Optional parameters | Functions support optional parameters | No support for optional parameters in functions |

What are the advantages of typescript over javascript
Below are some of the advantages of typescript over javascript,

- TypeScript is able to find compile time errors at development time, which ensures fewer runtime errors, whereas javascript is an interpreted language.
- TypeScript is strongly-typed or supports static typing, which allows for checking type correctness at compile time. This is not available in javascript.
- The TypeScript compiler can compile .ts files into older targets such as ES3 and ES5, whereas newer ES6 features of javascript may not be supported in some browsers.

What is an object initializer
An object initializer is an expression that describes the initialization of an Object. The syntax for this expression is represented as a comma-delimited list of zero or more pairs of property names and associated values of an object, enclosed in curly braces ({}). This is also known as literal notation. It is one of the ways to create an object.

```javascript
var initObject = { a: 'John', b: 50, c: {} };

console.log(initObject.a); // John
```

What is a constructor method
The constructor method is a special method for creating and initializing an object created within a class. If you do not specify a constructor method, a default constructor is used. The example usage of a constructor would be as below,

```javascript
class Employee {
  constructor() {
    this.name = "John";
  }
}

var employeeObject = new Employee();

console.log(employeeObject.name); // John
```

What happens if you write constructor more than once in a class
The "constructor" in a class is a special method and it should be defined only once in a class. i.e, If you write a constructor method more than once in a class it will throw a SyntaxError.

```javascript
class Employee {
  constructor() {
    this.name = "John";
  }
  constructor() { // Uncaught SyntaxError: A class may only have one constructor
    this.age = 30;
  }
}

var employeeObject = new Employee();

console.log(employeeObject.name);
```

How do you call the constructor of a parent class
You can use the super keyword to call the constructor of a parent class. Remember that super() must be called before using the 'this' reference. Otherwise it will cause a reference error.
Let's see its usage,

```javascript
class Square extends Rectangle {
  constructor(length) {
    super(length, length);
    this.name = 'Square';
  }

  get area() {
    return this.width * this.height;
  }

  set area(value) {
    // Avoid assigning to this.area here, which would call the setter recursively
    this.width = this.height = Math.sqrt(value);
  }
}
```

How do you get the prototype of an object
You can use the Object.getPrototypeOf(obj) method to return the prototype of the specified object. i.e, The value of the internal [[Prototype]] property. If there are no inherited properties then null is returned.

```javascript
const newPrototype = {};
const newObject = Object.create(newPrototype);

console.log(Object.getPrototypeOf(newObject) === newPrototype); // true
```

What happens If I pass string type for getPrototype method
In ES5, it will throw a TypeError exception if the obj parameter isn't an object. Whereas in ES2015, the parameter will be coerced to an Object.

```javascript
// ES5
Object.getPrototypeOf('James'); // TypeError: "James" is not an object
// ES2015
Object.getPrototypeOf('James'); // String.prototype
```

How do you set prototype of one object to another
You can use the Object.setPrototypeOf() method that sets the prototype (i.e, the internal [[Prototype]] property) of a specified object to another object or null. For example, if you want to set the prototype of a square object to a rectangle object, it would be as follows,

```javascript
Object.setPrototypeOf(Square.prototype, Rectangle.prototype);
Object.setPrototypeOf({}, null);
```

How do you check whether an object can be extendable or not
The Object.isExtensible() method is used to determine if an object is extendable or not. i.e, Whether it can have new properties added to it or not.

```javascript
const newObject = {};
console.log(Object.isExtensible(newObject)); // true
```

Note: By default, all objects are extendable. i.e, New properties can be added or modified.

How do you prevent an object to extend
The Object.preventExtensions() method is used to prevent new properties from ever being added to an object. In other words, it prevents future extensions to the object.
Let's see the usage of this method,

```javascript
const newObject = {};
Object.preventExtensions(newObject); // NOT extendable

try {
  Object.defineProperty(newObject, 'newProperty', { // Adding a new property
    value: 100
  });
} catch (e) {
  console.log(e); // TypeError: Cannot define property newProperty, object is not extensible
}
```

What are the different ways to make an object non-extensible
You can mark an object non-extensible in 3 ways,

- Object.preventExtensions
- Object.seal
- Object.freeze

```javascript
var newObject = {};
Object.preventExtensions(newObject); // Prevented objects are non-extensible
Object.isExtensible(newObject); // false

var sealedObject = Object.seal({}); // Sealed objects are non-extensible
Object.isExtensible(sealedObject); // false

var frozenObject = Object.freeze({}); // Frozen objects are non-extensible
Object.isExtensible(frozenObject); // false
```

How do you define multiple properties on an object
The Object.defineProperties() method is used to define new or modify existing properties directly on an object, returning the object. Let's define multiple properties on an empty object,

```javascript
const newObject = {};

Object.defineProperties(newObject, {
  newProperty1: {
    value: 'John',
    writable: true
  },
  newProperty2: {}
});
```

What is MEAN in javascript
The MEAN (MongoDB, Express, AngularJS, and Node.js) stack is the most popular open-source JavaScript software tech stack available for building dynamic web apps, where you can write both the server-side and client-side halves of the web project entirely in JavaScript.

What Is Obfuscation in javascript
Obfuscation is the deliberate act of creating obfuscated javascript code (i.e, source or machine code) that is difficult for humans to understand. It is somewhat similar to encryption, but a machine can understand the code and execute it.
Let's see the below function before obfuscation,

```javascript
function greeting() {
  console.log('Hello, welcome to JS world');
}
```

And after obfuscation (here using a packer-style obfuscator), it would appear as below,

```javascript
eval(function(p,a,c,k,e,d){e=function(c){return c};if(!''.replace(/^/,String)){while(c--){d[c]=k[c]||c}k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1};while(c--){if(k[c]){p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c])}}return p}('2 1(){0.3(\'4, 7 6 5 8\')}',9,9,'console|greeting|function|log|Hello|JS|to|welcome|world'.split('|'),0,{}))
```

Why do you need Obfuscation
Below are a few reasons for obfuscation,

- The code size will be reduced, so data transfers between server and client will be fast.
- It hides the business logic from the outside world and protects the code from others.
- Reverse engineering is highly difficult.
- The download time will be reduced.

What is Minification
Minification is the process of removing all unnecessary characters (empty spaces are removed) and renaming variables without changing the functionality. It is also a type of obfuscation.

What are the advantages of minification
Normally it is recommended to use minification for heavy traffic and intensive requirements of resources. It reduces file sizes with the below benefits,

- Decreases loading times of a web page
- Saves bandwidth usage

What are the differences between Obfuscation and Encryption
Below are the main differences between Obfuscation and Encryption,

| Feature | Obfuscation | Encryption |
| ---- | --------- | ---- |
| Definition | Changing the form of any data into any other form | Changing the form of information to an unreadable format by using a key |
| A key to decode | It can be decoded without any key | A key is required |
| Target data format | It will be converted to a complex form | Converted into an unreadable format |

What are the common tools used for minification
There are many online/offline tools to minify javascript files,

- Google's Closure Compiler
- UglifyJS2
- jsmin
- javascript-minifier.com/
- prettydiff.com

How do you perform form validation using javascript
JavaScript can be used to perform HTML form validation.
For example, if a form field is empty, the function needs to notify the user and return false to prevent the form from being submitted. Let's perform a user login in an html form,

```html
<form name="myForm" onsubmit="return validateForm()" method="post">
  User name: <input type="text" name="uname">
  <input type="submit" value="Submit">
</form>
```

And the validation on user login is below,

```javascript
function validateForm() {
  var x = document.forms["myForm"]["uname"].value;
  if (x == "") {
    alert("The username shouldn't be empty");
    return false;
  }
}
```

How do you perform form validation without javascript
You can perform HTML form validation automatically without using javascript. The validation is enabled by applying the required attribute, which prevents form submission when the input is empty.

```html
<form method="post">
  <input type="text" name="uname" required>
  <input type="submit" value="Submit">
</form>
```

Note: Automatic form validation does not work in Internet Explorer 9 or earlier.

What are the DOM methods available for constraint validation
The below DOM methods are available for constraint validation on an invalid input,

- checkValidity(): It returns true if an input element contains valid data.
- setCustomValidity(): It is used to set the validationMessage property of an input element.

Let's take a user login form with DOM validations,

```javascript
function myFunction() {
  var userName = document.getElementById("uname");
  if (!userName.checkValidity()) {
    document.getElementById("message").innerHTML = userName.validationMessage;
  } else {
    document.getElementById("message").innerHTML = "Entered a valid username";
  }
}
```

What are the available constraint validation DOM properties
Below is the list of some of the constraint validation DOM properties available,

- validity: It provides a list of boolean properties related to the validity of an input element.
- validationMessage: It displays the message when the validity is false.
- willValidate: It indicates if an input element will be validated or not.
What are the list of validity properties The validity property of an input element provides a set of properties related to the validity of data. - customError: It returns true, if a custom validity message is set. - patternMismatch: It returns true, if an element's value does not match its pattern attribute. - rangeOverflow: It returns true, if an element's value is greater than its max attribute. - rangeUnderflow: It returns true, if an element's value is less than its min attribute. - stepMismatch: It returns true, if an element's value is invalid according to step attribute. - tooLong: It returns true, if an element's value exceeds its maxLength attribute. - typeMismatch: It returns true, if an element's value is invalid according to type attribute. - valueMissing: It returns true, if an element with a required attribute has no value. - valid: It returns true, if an element's value is valid. Give an example usage of rangeOverflow property If an element's value is greater than its max attribute then rangeOverflow property returns true. For example, the below form submission throws an error if the value is more than 100, <input id="age" type="number" max="100"> <button onclick="myOverflowFunction()">OK</button> ```javascript function myOverflowFunction() { if (document.getElementById("age").validity.rangeOverflow) { alert("The mentioned age is not allowed"); } } ``` **[⬆ Back to Top](#table-of-contents)** Is enums feature available in javascript No, javascript does not natively support enums. But there are different kinds of solutions to simulate them even though they may not provide exact equivalents. For example, you can use freeze or seal on object, var DaysEnum = Object.freeze({"monday":1, "tuesday":2, "wednesday":3, ...}) What is an enum An enum is a type restricting variables to one value from a predefined set of constants. JavaScript has no enums but typescript provides built-in enum support. 
```typescript
enum Color {
  RED, GREEN, BLUE
}
```

How do you list all properties of an object
You can use the `Object.getOwnPropertyNames()` method, which returns an array of all properties found directly in a given object. Let's see its usage in an example,

```javascript
const newObject = {
  a: 1,
  b: 2,
  c: 3
};

console.log(Object.getOwnPropertyNames(newObject)); // ["a", "b", "c"]
```

How do you get property descriptors of an object
You can use the `Object.getOwnPropertyDescriptors()` method, which returns all own property descriptors of a given object. The example usage of this method is below,

```javascript
const newObject = {
  a: 1,
  b: 2,
  c: 3
};
const descriptorsObject = Object.getOwnPropertyDescriptors(newObject);
console.log(descriptorsObject.a.writable);     // true
console.log(descriptorsObject.a.configurable); // true
console.log(descriptorsObject.a.enumerable);   // true
console.log(descriptorsObject.a.value);        // 1
```

What are the attributes provided by a property descriptor
A property descriptor is a record which has the following attributes,
- value: The value associated with the property
- writable: Determines whether the value associated with the property can be changed or not
- configurable: Returns true if the type of this property descriptor can be changed and if the property can be deleted from the corresponding object.
- enumerable: Determines whether the property appears during enumeration of the properties on the corresponding object or not.
- set: A function which serves as a setter for the property
- get: A function which serves as a getter for the property

How do you extend classes
The `extends` keyword is used in class declarations/expressions to create a class which is a child of another class. It can be used to subclass custom classes as well as built-in objects. The syntax would be as below,

```javascript
class ChildClass extends ParentClass { ... }
```
Let's take an example of a Square subclass of a Rectangle parent class,

```javascript
class Square extends Rectangle {
  constructor(length) {
    super(length, length);
    this.name = 'Square';
  }

  get area() {
    return this.width * this.height;
  }

  set area(value) {
    // Assigning to this.area here would invoke this setter again and
    // recurse forever, so update the underlying dimensions instead.
    this.width = this.height = Math.sqrt(value);
  }
}
```

How do I modify the url without reloading the page
Assigning a new value to `window.location.href` modifies the url, but it reloads the page. HTML5 introduced the `history.pushState()` and `history.replaceState()` methods, which allow you to add and modify history entries, respectively, without a reload. For example, you can use pushState as below,

```javascript
window.history.pushState('page2', 'Title', '/page2.html');
```

How do you check whether an array includes a particular value or not
The `Array#includes()` method is used to determine whether an array includes a particular value among its entries by returning either true or false. Let's see an example to find an element (numeric and string) within an array.

```javascript
var numericArray = [1, 2, 3, 4];
console.log(numericArray.includes(3)); // true

var stringArray = ['green', 'yellow', 'blue'];
console.log(stringArray.includes('blue')); // true
```

How do you compare scalar arrays
You can use the length property and the every method of arrays to compare two scalar arrays (whose elements are compared directly using ===). The combination of these expressions can give the expected result,

```javascript
const arrayFirst = [1, 2, 3, 4, 5];
const arraySecond = [1, 2, 3, 4, 5];
console.log(arrayFirst.length === arraySecond.length &&
  arrayFirst.every((value, index) => value === arraySecond[index])); // true
```

If you would like to compare arrays irrespective of order then you should sort them first,

```javascript
const arrayFirst = [2, 3, 1, 4, 5];
const arraySecond = [1, 2, 3, 4, 5];
console.log(arrayFirst.length === arraySecond.length &&
  arrayFirst.sort().every((value, index) => value === arraySecond[index])); // true
```

How to get the value from get parameters
The `new URL()` object accepts the url string, and its `searchParams` property can be used to access the get parameters.
Remember that you may need to use a polyfill or `window.location` to access the URL in older browsers (including IE).

```javascript
let urlString = ""; // window.location.href
let url = new URL(urlString);
let parameterZ = url.searchParams.get("z");
console.log(parameterZ); // 3
```

How do you print numbers with commas as thousand separators
You can use the `Number.prototype.toLocaleString()` method, which returns a string with a language-sensitive representation of the number, such as thousand separators, currency, etc.

```javascript
function convertToThousandFormat(x) {
  return x.toLocaleString(); // "12,345.679"
}

console.log(convertToThousandFormat(12345.6789));
```

What is the difference between java and javascript
They are totally unrelated programming languages with no relation between them. Java is statically typed, compiled, and runs on its own VM, whereas JavaScript is dynamically typed, interpreted, and runs in browser and nodejs environments. Let's see the major differences in a tabular format,

| Feature | Java | JavaScript |
| ---- | ---- | ----- |
| Typed | It's a strongly typed language | It's a dynamically typed language |
| Paradigm | Object oriented programming | Prototype based programming |
| Scoping | Block scoped | Function scoped |
| Concurrency | Thread based | Event based |
| Memory | Uses more memory | Uses less memory. Hence it is preferred for web pages |

Does javascript support namespaces
JavaScript doesn't support namespaces by default. So if you create any element (function, method, object, variable) then it becomes global and pollutes the global namespace. Let's take an example of defining two functions without any namespace,

```javascript
function func1() {
  console.log("This is a first definition");
}
function func1() {
  console.log("This is a second definition");
}
func1(); // This is a second definition
```

It always calls the second function definition. In this case, a namespace will solve the name collision problem.
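For instance, the two colliding func1 definitions can coexist once each is attached to its own container object instead of the global scope; a minimal sketch (the names `app`, `utils` and `views` are illustrative):

```javascript
// One top-level object acts as the namespace root; nested objects
// partition it further, so the two func1 names no longer collide.
var app = { utils: {}, views: {} };

app.utils.func1 = function () { return "This is a first definition"; };
app.views.func1 = function () { return "This is a second definition"; };

console.log(app.utils.func1()); // This is a first definition
console.log(app.views.func1()); // This is a second definition
```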
How do you declare a namespace
Even though JavaScript lacks namespaces, we can use Objects and IIFEs to create them.

- Using Object Literal Notation: Let's wrap variables and functions inside an object literal which acts as a namespace. After that you can access them using object notation,

```javascript
var namespaceOne = {
  func1: function () {
    console.log("This is a first definition");
  }
};
var namespaceTwo = {
  func1: function () {
    console.log("This is a second definition");
  }
};
namespaceOne.func1(); // This is a first definition
namespaceTwo.func1(); // This is a second definition
```

- Using IIFE (Immediately invoked function expression): The outer pair of parentheses of an IIFE creates a local scope for all the code inside of it and makes the anonymous function a function expression. Due to that, you can create the same function in two different function expressions to act as a namespace.

```javascript
(function () {
  function fun1() {
    console.log("This is a first definition");
  }
  fun1();
})();

(function () {
  function fun1() {
    console.log("This is a second definition");
  }
  fun1();
})();
```

- Using a block and a let/const declaration: In ECMAScript 6, you can simply use a block and a let declaration to restrict the scope of a variable to that block.

```javascript
{
  let myFunction = function fun1() {
    console.log("This is a first definition");
  };
  myFunction();
}
//myFunction(); // ReferenceError: myFunction is not defined

{
  let myFunction = function fun1() {
    console.log("This is a second definition");
  };
  myFunction();
}
//myFunction(); // ReferenceError: myFunction is not defined
```

How do you invoke javascript code in an iframe from the parent page
Initially the iFrame needs to be accessed using either `document.getElementById` or `window.frames`.
After that, the `contentWindow` property of the iFrame gives access to the targetFunction,

```javascript
document.getElementById('targetFrame').contentWindow.targetFunction();
window.frames[0].frameElement.contentWindow.targetFunction(); // Accessing an iframe this way may not work in the latest versions of Chrome and Firefox
```

How do you get the timezone offset from a date
You can use the `getTimezoneOffset` method of the date object. This method returns the time zone difference, in minutes, from the current locale (host system settings) to UTC,

```javascript
var offset = new Date().getTimezoneOffset();
console.log(offset); // -480
```

How do you load CSS and JS files dynamically
You can create both link and script elements in the DOM and append them as children of the head tag. Let's create a function to add script and style resources as below,

```javascript
function loadAssets(filename, filetype) {
  if (filetype == "css") { // External CSS file
    var fileReference = document.createElement("link");
    fileReference.setAttribute("rel", "stylesheet");
    fileReference.setAttribute("type", "text/css");
    fileReference.setAttribute("href", filename);
  } else if (filetype == "js") { // External JavaScript file
    var fileReference = document.createElement('script');
    fileReference.setAttribute("type", "text/javascript");
    fileReference.setAttribute("src", filename);
  }
  if (typeof fileReference != "undefined")
    document.getElementsByTagName("head")[0].appendChild(fileReference);
}
```

What are the different methods to find HTML elements in DOM
If you want to access any element in an HTML page, you need to start with accessing the document object.
Later you can use any of the below methods to find the HTML element,
- document.getElementById(id): It finds an element by id
- document.getElementsByTagName(name): It finds elements by tag name
- document.getElementsByClassName(name): It finds elements by class name

What is jQuery
jQuery is a popular cross-browser JavaScript library that provides Document Object Model (DOM) traversal, event handling, animations and AJAX interactions by minimizing the discrepancies across browsers. It is widely known for its philosophy of "Write less, do more". For example, you can display a welcome message on page load using jQuery as below,

```javascript
$(document).ready(function () { // It selects the document and applies the function on page load
  alert('Welcome to jQuery world');
});
```

Note: You can download it from jquery's official site or install it from CDNs, like google.

What is V8 JavaScript engine
V8 is an open source high-performance JavaScript engine used by the Google Chrome browser, written in C++. It is also being used in the node.js project. It implements ECMAScript and WebAssembly, and runs on Windows 7 or later, macOS 10.12+, and Linux systems that use x64, IA-32, ARM, or MIPS processors. Note: It can run standalone, or can be embedded into any C++ application.

Why do we call javascript a dynamic language
JavaScript is a loosely typed or dynamic language because variables in JavaScript are not directly associated with any particular value type, and any variable can be assigned/reassigned values of all types.

```javascript
let age = 50; // age is a number now
age = 'old';  // age is a string now
age = true;   // age is a boolean
```

What is a void operator
The `void` operator evaluates the given expression and then returns undefined (i.e., without returning a value).
The syntax would be as below,

```javascript
void (expression)
void expression
```

Let's display a message without any redirection or reload,

```html
<a href="javascript:void(alert('Welcome to JS world'))">Click here to see a message</a>
```

Note: This operator is often used to obtain the undefined primitive value, using "void(0)".

How to set the cursor to wait
The cursor can be set to wait in JavaScript by using the "cursor" property. Let's perform this behavior on page load using the below function,

```javascript
function myFunction() {
  window.document.body.style.cursor = "wait";
}
```

and this function is invoked on page load,

```html
<body onload="myFunction()">
```

How do you create an infinite loop
You can create infinite loops using for and while loops without using any expressions. The for loop construct is the better approach in terms of ESLint and code optimizer tools,

```javascript
for (;;) {}
while (true) {}
```

Why do you need to avoid the with statement
JavaScript's with statement was intended to provide a shorthand for writing recurring accesses to objects. So it can help reduce file size by reducing the need to repeat a lengthy object reference, without a performance penalty. Let's take an example where it is used to avoid redundancy when accessing an object several times,

```javascript
a.b.c.greeting = 'welcome';
a.b.c.age = 32;
```

Using `with`, this turns into:

```javascript
with (a.b.c) {
  greeting = "welcome";
  age = 32;
}
```

But the `with` statement creates performance problems, since one cannot predict whether an identifier will refer to a real variable or to a property inside the with argument.
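A simple alternative that keeps the brevity of `with` without its ambiguity is caching the long reference in a short local variable; a minimal sketch:

```javascript
var a = { b: { c: {} } };

// Alias the deep reference once; every access below is unambiguous,
// unlike inside a with block.
var target = a.b.c;
target.greeting = "welcome";
target.age = 32;

console.log(a.b.c.greeting); // welcome
console.log(a.b.c.age);      // 32
```

Because `target` and `a.b.c` point at the same object, writes through the alias are visible through the original path.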
What is the output of below for loops

```javascript
for (var i = 0; i < 4; i++) { // global scope
  setTimeout(() => console.log(i));
}

for (let i = 0; i < 4; i++) { // block scope
  setTimeout(() => console.log(i));
}
```

The output of the above for loops is 4 4 4 4 and 0 1 2 3

Explanation: Due to the event queue/loop of javascript, the `setTimeout` callback function is called after the loop has been executed. Since the variable i is declared with the `var` keyword it became a global variable, and its value was already 4 by the time the setTimeout callbacks were invoked. Hence, the output of the first loop is 4 4 4 4. Whereas in the second loop, the variable i is declared with the `let` keyword, so it becomes a block scoped variable holding a new value (0, 1, 2, 3) for each iteration. Hence, the output of the second loop is 0 1 2 3.

List down some of the features of ES6
Below is a list of some new features of ES6,
- Support for constants or immutable variables
- Block-scope support for variables, constants and functions
- Arrow functions
- Default parameters
- Rest and Spread Parameters
- Template Literals
- Multi-line Strings
- Destructuring Assignment
- Enhanced Object Literals
- Promises
- Classes
- Modules

What is ES6
ES6 is the sixth edition of the javascript language and it was released in June 2015. It was initially known as ECMAScript 6 (ES6) and later renamed to ECMAScript 2015. Almost all modern browsers support ES6, but for old browsers there are many transpilers, like Babel.js etc.

Can I redeclare let and const variables
No, you cannot redeclare let and const variables. If you do, it throws the below error,

```bash
Uncaught SyntaxError: Identifier 'someVariable' has already been declared
```

Explanation: The variable declaration with the `var` keyword refers to a function scope, and the variable is treated as if it were declared at the top of the enclosing scope due to the hoisting feature.
So all the multiple declarations contribute to the same hoisted variable without any error. Let's take an example of re-declaring variables in the same scope for both var and let/const variables,

```javascript
var name = 'John';
function myFunc() {
  var name = 'Nick';
  var name = 'Abraham'; // Re-declared in the same function block
  alert(name); // Abraham
}
myFunc();
alert(name); // John
```

The block-scoped multi-declaration throws a syntax error,

```javascript
let name = 'John';
function myFunc() {
  let name = 'Nick';
  let name = 'Abraham'; // Uncaught SyntaxError: Identifier 'name' has already been declared
  alert(name);
}
myFunc();
alert(name);
```

Does a const variable make the value immutable
No, the const variable doesn't make the value immutable. But it disallows subsequent assignments (i.e., you can declare with an assignment but can't assign another value later),

```javascript
const userList = [];
userList.push('John'); // Can mutate even though it can't re-assign
console.log(userList); // ['John']
```

What are default parameters
In ES5, we need to depend on logical OR operators to handle default values of function parameters. Whereas in ES6, the default function parameters feature allows parameters to be initialized with default values if no value or undefined is passed. Let's compare the behavior with examples,

```javascript
//ES5
var calculateArea = function (height, width) {
  height = height || 50;
  width = width || 60;
  return width * height;
};
console.log(calculateArea()); // 3000
```

The default parameters make the initialization simpler,

```javascript
//ES6
var calculateArea = function (height = 50, width = 60) {
  return width * height;
};
console.log(calculateArea()); // 3000
```

What are template literals
Template literals or template strings are string literals allowing embedded expressions. These are enclosed by the back-tick (`) character instead of double or single quotes.
In ES6, this feature enables using dynamic expressions as below,

```javascript
var greeting = `Welcome to JS World, Mr. ${firstName} ${lastName}.`;
```

In ES5, you need to break the string like below,

```javascript
var greeting = 'Welcome to JS World, Mr. ' + firstName + ' ' + lastName + '.';
```

**Note:** You can use multi-line strings and string interpolation features with template literals.

**[⬆ Back to Top](#table-of-contents)**

How do you write multi-line strings in template literals
In ES5, you would have to use newline escape characters ('\n') and concatenation symbols (+) in order to get multi-line strings,

```javascript
console.log('This is string sentence 1\n' + 'This is string sentence 2');
```

Whereas in ES6, you don't need to mention any newline sequence character,

```javascript
console.log(`This is string sentence 1
This is string sentence 2`);
```

What are nesting templates
The nesting template is a feature supported within template literals syntax to allow inner backticks inside a placeholder ${ } within the template. For example, the below nesting template is used to display the icons based on user permissions whereas the outer template checks for platform type,

```javascript
const iconStyles = `icon ${ isMobilePlatform() ? '' :
  `icon-${user.isAuthorized ? 'submit' : 'disabled'}` }`;
```

You can write the above use case without nesting template features as well. However, the nesting template feature is more compact and readable.

```javascript
//Without nesting templates
const iconStyles = `icon ${ isMobilePlatform() ? '' :
  (user.isAuthorized ? 'icon-submit' : 'icon-disabled') }`;
```

What are tagged templates
Tagged templates are the advanced form of templates in which tags allow you to parse template literals with a function. The tag function accepts the first parameter as an array of strings and the remaining parameters as expressions. This function can also return a manipulated string based on the parameters.
Let's see the usage of this tagged template behavior for an IT professional's skill set in an organization,

```javascript
var user1 = 'John';
var skill1 = 'JavaScript';
var experience1 = 15;

var user2 = 'Kane';
var skill2 = 'JavaScript';
var experience2 = 5;

function myInfoTag(strings, userExp, experienceExp, skillExp) {
  var str0 = strings[0]; // "Mr/Ms. "
  var str1 = strings[1]; // " is a/an "
  var str2 = strings[2]; // " in "

  var expertiseStr;
  if (experienceExp > 10) {
    expertiseStr = 'expert developer';
  } else if (experienceExp > 5 && experienceExp <= 10) {
    expertiseStr = 'senior developer';
  } else {
    expertiseStr = 'junior developer';
  }

  return `${str0}${userExp}${str1}${expertiseStr}${str2}${skillExp}`;
}

var output1 = myInfoTag`Mr/Ms. ${ user1 } is a/an ${ experience1 } in ${skill1}`;
var output2 = myInfoTag`Mr/Ms. ${ user2 } is a/an ${ experience2 } in ${skill2}`;

console.log(output1); // Mr/Ms. John is a/an expert developer in JavaScript
console.log(output2); // Mr/Ms. Kane is a/an junior developer in JavaScript
```

What are raw strings
ES6 provides a raw strings feature via the `String.raw()` method, which is used to get the raw string form of template strings. This feature allows you to access the raw strings as they were entered, without processing escape sequences.
For example, the usage would be as below,

```javascript
var calculationString = String.raw`The sum of numbers is \n${1+2+3+4}!`;
console.log(calculationString); // The sum of numbers is \n10!
```

If you don't use raw strings, the newline character sequence will be processed, displaying the output in multiple lines,

```javascript
var calculationString = `The sum of numbers is \n${1+2+3+4}!`;
console.log(calculationString); // The sum of numbers is
                                // 10!
```

Also, the raw property is available on the first argument to the tag function,

```javascript
function tag(strings) {
  console.log(strings.raw[0]);
}
```

What is destructuring assignment
The destructuring assignment is a JavaScript expression that makes it possible to unpack values from arrays, or properties from objects, into distinct variables. Let's get the month values from an array using destructuring assignment,

```javascript
var [one, two, three] = ['JAN', 'FEB', 'MARCH'];

console.log(one);   // "JAN"
console.log(two);   // "FEB"
console.log(three); // "MARCH"
```

and you can get user properties of an object using destructuring assignment,

```javascript
var { name, age } = { name: 'John', age: 32 };

console.log(name); // John
console.log(age);  // 32
```

What are default values in destructuring assignment
A variable can be assigned a default value when the value unpacked from the array or object is undefined during destructuring assignment. It helps to avoid setting default values separately for each assignment. Let's take an example for both array and object use cases,

Array destructuring:

```javascript
var x, y, z;

[x = 2, y = 4, z = 6] = [10];
console.log(x); // 10
console.log(y); // 4
console.log(z); // 6
```

Object destructuring:

```javascript
var { x = 2, y = 4, z = 6 } = { x: 10 };

console.log(x); // 10
console.log(y); // 4
console.log(z); // 6
```

How do you swap variables in destructuring assignment
If you don't use destructuring assignment, swapping two values requires a temporary variable.
Whereas using the destructuring feature, two variable values can be swapped in one destructuring expression. Let's swap two number variables in an array destructuring assignment,

```javascript
var x = 10, y = 20;

[x, y] = [y, x];
console.log(x); // 20
console.log(y); // 10
```

What are enhanced object literals
Object literals make it easy to quickly create objects with properties inside the curly braces. For example, ES6 provides a shorter syntax for common object property definitions as below,

```javascript
//ES6
var x = 10, y = 20;
obj = { x, y };
console.log(obj); // {x: 10, y: 20}

//ES5
var x = 10, y = 20;
obj = { x: x, y: y };
console.log(obj); // {x: 10, y: 20}
```

What are dynamic imports
Dynamic imports using the `import()` function syntax allow us to load modules on demand, using promises or the async/await syntax. This feature started as a stage 4 proposal and is now part of the language (ES2020). The main advantage of dynamic imports is a reduction of our bundle's size, the size/payload of our responses, and overall improvements in the user experience. The syntax of dynamic imports would be as below,

```javascript
import('./Module').then(Module => Module.method());
```

What are the use cases for dynamic imports
Below are some of the use cases for using dynamic imports over static imports,
- Import a module on-demand or conditionally. For example, if you want to load a polyfill on a legacy browser

```javascript
if (isLegacyBrowser()) {
  import(···)
    .then(···);
}
```

- Compute the module specifier at runtime. For example, you can use it for internationalization.

```javascript
import(`messages_${getLocale()}.js`).then(···);
```

- Import a module from within a regular script instead of a module.

What are typed arrays
Typed arrays are array-like objects from the ECMAScript 6 API for handling binary data.
JavaScript provides 9 typed array types,
- Int8Array: An array of 8-bit signed integers
- Uint8Array: An array of 8-bit unsigned integers
- Uint8ClampedArray: An array of 8-bit unsigned integers clamped to the range 0 to 255
- Int16Array: An array of 16-bit signed integers
- Uint16Array: An array of 16-bit unsigned integers
- Int32Array: An array of 32-bit signed integers
- Uint32Array: An array of 32-bit unsigned integers
- Float32Array: An array of 32-bit floating point numbers
- Float64Array: An array of 64-bit floating point numbers

For example, you can create an array of 8-bit signed integers as below,

```javascript
const a = new Int8Array();
// You can pre-allocate n bytes
const bytes = 1024;
const b = new Int8Array(bytes);
```

What are the advantages of module loaders
Module loaders provide the below features,
- Dynamic loading
- State isolation
- Global namespace isolation
- Compilation hooks
- Nested virtualization

What is collation
Collation is used for sorting a set of strings and searching within a set of strings. It is parameterized by locale and is Unicode-aware. Let's take the comparison and sorting features,

- Comparison:

```javascript
var list = ["ä", "a", "z"]; // In German, "ä" sorts with "a"; whereas in Swedish, "ä" sorts after "z"
var l10nDE = new Intl.Collator("de");
var l10nSV = new Intl.Collator("sv");
console.log(l10nDE.compare("ä", "z") === -1); // true
console.log(l10nSV.compare("ä", "z") === +1); // true
```

- Sorting:

```javascript
var list = ["ä", "a", "z"]; // In German, "ä" sorts with "a"; whereas in Swedish, "ä" sorts after "z"
var l10nDE = new Intl.Collator("de");
var l10nSV = new Intl.Collator("sv");
console.log(list.sort(l10nDE.compare)); // [ "a", "ä", "z" ]
console.log(list.sort(l10nSV.compare)); // [ "a", "z", "ä" ]
```

What is the for...of statement
The for...of statement creates a loop iterating over iterable objects or elements, such as the built-in String, Array, array-like objects (like arguments or NodeList), TypedArray, Map, Set, and user-defined iterables.
The basic usage of the for...of statement on arrays would be as below,

```javascript
let arrayIterable = [10, 20, 30, 40, 50];

for (let value of arrayIterable) {
  value++;
  console.log(value); // 11 21 31 41 51
}
```

What is the output of below spread operator array

```javascript
[...'John Resig']
```

The output of the array is ['J', 'o', 'h', 'n', ' ', 'R', 'e', 's', 'i', 'g']

Explanation: The string is an iterable type, and the spread operator within an array maps every character of the iterable to one element. Hence, each character of the string becomes an element within an Array.

Is postMessage secure
Yes, postMessages can be considered very secure as long as the programmer/developer is careful about checking the origin and source of an arriving message. But if you try to send/receive a message without verifying its source, that opens the door to cross-site scripting attacks.

What are the problems with postMessage target origin as wildcard
The second argument of the postMessage method specifies which origin is allowed to receive the message. If you use the wildcard "*" as an argument then any origin is allowed to receive the message. In this case, there is no way for the sender window to know if the target window is at the target origin when sending the message. If the target window has been navigated to another origin, the other origin would receive the data. Hence, this may lead to XSS vulnerabilities.

```javascript
targetWindow.postMessage(message, '*');
```

How do you avoid receiving postMessages from attackers
Since any window can post a message to yours, you need to verify the origin of each incoming message against a whitelist of trusted senders in the listener,

```javascript
//Listener
window.addEventListener("message", function (message) {
  if (/^http:\/\/www\.some-sender\.com$/.test(message.origin)) {
    console.log('You received the data from valid sender', message.data);
  }
});
```

Can I avoid using postMessages completely
You cannot avoid using postMessages completely (or 100%). Even if your application doesn't use postMessage considering the risks, a lot of third party scripts use postMessage to communicate with their third party services.
So your application might be using postMessage without your knowledge.

Are postMessages synchronous
The postMessages are synchronous in the IE8 browser, but they are asynchronous in IE9 and all other modern browsers (i.e., IE9+, Firefox, Chrome, Safari). Due to this asynchronous behaviour, we use a callback mechanism when the postMessage is returned.

What paradigm is Javascript
JavaScript is a multi-paradigm language, supporting imperative/procedural programming, object-oriented programming and functional programming. JavaScript supports object-oriented programming with prototypal inheritance.

What is the difference between internal and external javascript
Internal JavaScript: The source code is written within the script tag of the HTML page. External JavaScript: The source code is stored in an external file (with a .js extension) and referred to within the script tag.

Is JavaScript faster than server side script
For client-side computations, yes. Because JavaScript runs on the client it does not require a round trip to the web server for its computation or calculation, so for such tasks JavaScript is typically faster than server-side scripts like ASP, PHP, etc.

How do you get the status of a checkbox
You can read the `checked` property of the selected checkbox in the DOM. If the value is true the checkbox is checked, otherwise it is unchecked. For example, the below HTML checkbox element can be accessed using javascript as below,

```html
<input type="checkbox" id="checkboxname" name="checkboxname" value="Agree"> Agree the conditions<br>
```

```javascript
console.log(document.getElementById('checkboxname').checked); // true or false
```

What is the purpose of the double tilde operator
The double tilde operator (~~) is known as the double NOT bitwise operator. It can be a quicker substitute for Math.floor() for positive numbers; note that it actually truncates toward zero, so for negative numbers the results differ (e.g., `~~(-4.7)` is `-4` whereas `Math.floor(-4.7)` is `-5`).

How do you convert a character to ASCII code
You can use the `String.prototype.charCodeAt()` method to convert string characters to ASCII numbers.
For example, let's find ASCII code for the first letter of 'ABC' string, javascript "ABC".charCodeAt(0) // returns 65 Whereas String.fromCharCode()method converts numbers to equal ASCII characters. javascript String.fromCharCode(65,66,67); // returns 'ABC' What is ArrayBuffer An ArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer. You can create it as below, javascript let buffer = new ArrayBuffer(16); // create a buffer of length 16 alert(buffer.byteLength); // 16 To manipulate an ArrayBuffer, we need to use a “view” object. javascript //Create a DataView referring to the buffer let view = new DataView(buffer); What is the output of below string expression javascript console.log("Welcome to JS world"[0]) The output of the above expression is "W". Explanation: The bracket notation with specific index on a string returns the character at a specific location. Hence, it returns the character "W" of the string. Since this is not supported in IE7 and below versions, you may need to use the .charAt() method to get the desired result. What is the purpose of Error object The Error constructor creates an error object and the instances of error objects are thrown when runtime errors occur. The Error object can also be used as a base object for user-defined exceptions. The syntax of error object would be as below, javascript new Error([message[, fileName[, lineNumber]]]) You can throw user defined exceptions or errors using Error object in try...catch block as below, javascript try { if(withdraw > balance) throw new Error("Oops! You don't have enough balance"); } catch (e) { console.log(e.name + ': ' + e.message); } What is the purpose of EvalError object The EvalError object indicates an error regarding the global eval()function. Even though this exception is not thrown by JavaScript anymore, the EvalError object remains for compatibility. 
The syntax of this expression would be as below,

javascript new EvalError([message[, fileName[, lineNumber]]])

You can throw EvalError within a try...catch block as below,

javascript try { throw new EvalError('Eval function error', 'someFile.js', 100); } catch (e) { console.log(e.message, e.name, e.fileName); // "Eval function error", "EvalError", "someFile.js" }

What are the cases where errors are thrown in strict mode

When you apply the 'use strict'; directive, some of the below cases will throw a SyntaxError before executing the script

- When you use octal syntax

javascript var n = 022;

- Using the with statement
- When you use the delete operator on a variable name
- Using eval or arguments as a variable or function argument name
- When you use newly reserved keywords
- When you declare a function in a block

javascript if (someCondition) { function f() {} }

Hence, the errors from the above cases are helpful to catch mistakes early in development/production environments.

Do all objects have prototypes

No. The base Object.prototype at the top of the prototype chain has a null prototype, and objects created with Object.create(null) have no prototype at all. Every other object inherits from some prototype.

What is the difference between a parameter and an argument

A parameter is the variable name in a function definition, whereas an argument represents the value given to a function when it is invoked. Let's explain this with a simple function

javascript function myFunction(parameter1, parameter2, parameter3) { console.log(arguments[0]) // "argument1" console.log(arguments[1]) // "argument2" console.log(arguments[2]) // "argument3" } myFunction("argument1", "argument2", "argument3")

What is the purpose of some method in arrays

The some() method is used to test whether at least one element in the array passes the test implemented by the provided function. The method returns a boolean value.
Let's take an example to test for any odd elements,

javascript var array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; var odd = element => element % 2 !== 0; console.log(array.some(odd)); // true (an odd element exists)

How do you combine two or more arrays

The concat() method is used to join two or more arrays by returning a new array containing all the elements. The syntax would be as below,

javascript array1.concat(array2, array3, ..., arrayX)

Let's take an example of concatenating veggies and fruits arrays,

javascript var veggies = ["Tomato", "Carrot", "Cabbage"]; var fruits = ["Apple", "Orange", "Pears"]; var veggiesAndFruits = veggies.concat(fruits); console.log(veggiesAndFruits); // Tomato, Carrot, Cabbage, Apple, Orange, Pears

What is the difference between Shallow and Deep copy

There are two ways to copy an object,

Shallow Copy: A shallow copy duplicates only the top level of an object. A new object is created that has an exact copy of the values in the original object. If any of the fields of the object are references to other objects, just the reference addresses are copied i.e., only the memory address is copied.

Example

javascript var empDetails = { name: "John", age: 25, expertise: "Software Developer" }

to create a duplicate

javascript var empDetailsShallowCopy = empDetails //Shallow copying!

if we change some property value in the duplicate one like this:

javascript empDetailsShallowCopy.name = "Johnson"

The above statement will also change the name of empDetails, since we only copied the reference. That means we're losing the original data as well.

Deep copy: A deep copy copies all fields, and makes copies of dynamically allocated memory pointed to by the fields. A deep copy occurs when an object is copied along with the objects to which it refers.
Example

javascript var empDetails = { name: "John", age: 25, expertise: "Software Developer" }

Create a deep copy by using the properties from the original object into a new variable

javascript var empDetailsDeepCopy = { name: empDetails.name, age: empDetails.age, expertise: empDetails.expertise }

Now if you change empDetailsDeepCopy.name, it will only affect empDetailsDeepCopy and not empDetails

How do you create specific number of copies of a string

The repeat() method is used to construct and return a new string which contains the specified number of copies of the string on which it was called, concatenated together.

javascript 'Hello'.repeat(4); // 'HelloHelloHelloHello'

How do you return all matching strings against a regular expression

The matchAll() method can be used to return an iterator of all results matching a string against a regular expression. For example, the below example collects all matches against a regular expression,

javascript let regexp = /Hello(\d?)/g; let greeting = 'Hello1Hello2Hello3'; let greetingList = [...greeting.matchAll(regexp)]; console.log(greetingList[0][0]); //Hello1 console.log(greetingList[1][0]); //Hello2 console.log(greetingList[2][0]); //Hello3

Each entry of the iterator is a match array whose first element is the matched text.

How do you trim a string at the beginning or ending

The trim method of the string prototype is used to trim both sides of a string. But if you want to trim only the beginning or the ending of the string then you can use the trimStart/trimLeft and trimEnd/trimRight methods. Let's see an example of these methods on a greeting message,

javascript var greeting = ' Hello, Goodmorning! '; console.log(greeting); // " Hello, Goodmorning! " console.log(greeting.trimStart()); // "Hello, Goodmorning! " console.log(greeting.trimLeft()); // "Hello, Goodmorning! " console.log(greeting.trimEnd()); // " Hello, Goodmorning!" console.log(greeting.trimRight()); // " Hello, Goodmorning!"

What is the output of below console statement with unary operator

Let's take a console statement with a unary operator as given below,

javascript console.log(+ 'Hello');

The output of the above console log statement returns NaN.
Because the operand is prefixed by the unary plus operator, the JavaScript interpreter will try to convert it into a number type. Since the conversion fails, the value of the statement results in NaN.

Does javascript use mixins

What is a thunk function

A thunk is just a function which delays the evaluation of a value. It doesn't take any arguments but returns the value whenever you invoke it. i.e, it is used to defer a computation until some point in the future.

Let's take a synchronous example,

javascript const add = (x,y) => x + y; const thunk = () => add(2,3); thunk() // 5

What are asynchronous thunks

Asynchronous thunks are useful for making network requests. Let's see an example of a network request,

javascript function fetchData(fn){ fetch('') .then(response => response.json()) .then(json => fn(json)) } const asyncThunk = function (){ return fetchData(function getData(data){ console.log(data) }) } asyncThunk()

The getData function won't be called immediately but will be invoked only when the data is available from the API endpoint. The setTimeout function is also used to make our code asynchronous. The best real-time example is the redux state management library, which uses asynchronous thunks to delay actions to dispatch.

What is the output of below function calls

Code snippet:

javascript const circle = { radius: 20, diameter() { return this.radius * 2; }, perimeter: () => 2 * Math.PI * this.radius }; console.log(circle.diameter()); console.log(circle.perimeter());

Output: The output is 40 and NaN. Remember that diameter is a regular function, whereas the value of perimeter is an arrow function. The this keyword of a regular function (i.e, diameter) refers to the object the method is called on, here the circle object, so this.radius is 20. Whereas the this keyword of the perimeter arrow function refers to the surrounding scope, which is the window object.
Since there is no radius property on the window object it returns undefined, and multiplying a number by undefined returns NaN.

How to remove all line breaks from a string

The easiest approach is using regular expressions to detect and replace newlines in the string. In this case, we use the replace function along with the string to replace with, which in our case is an empty string.

javascript function removeLineBreaks(message) { return message.replace( /[\r\n]+/gm, "" ); }

In the above expression, g and m are the global and multiline flags.

What is the difference between reflow and repaint

A repaint occurs when changes are made which affect the visibility of an element, but not its layout. Examples of this include outline, visibility, or background color. A reflow involves changes that affect the layout of a portion of the page (or the whole page). Resizing the browser window, changing the font, content changing (such as user typing text), using JavaScript methods involving computed styles, adding or removing elements from the DOM, and changing an element's classes are a few of the things that can trigger reflow. Reflow of an element causes the subsequent reflow of all child and ancestor elements as well as any elements following it in the DOM.

What happens with negating an array

Negating an array with the ! character will coerce the array into a boolean. Since arrays are considered to be truthy, negating one returns false.

javascript console.log(![]); // false

What happens if we add two arrays

If you add two arrays together, it will convert them both to strings and concatenate them. For example, the result of adding arrays would be as below,

javascript console.log(['a'] + ['b']); // "ab" console.log([] + []); // "" console.log(![] + []); // "false", because ![] returns false.
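The same toString coercion applies to multi-element arrays, which is why the results can look surprising:

```javascript
// Each array is converted to a string via Array.prototype.toString
// (elements joined by commas), then the two strings are concatenated.
const result = [1, 2] + [3, 4];
console.log(result); // "1,23,4"

// The coercion is equivalent to concatenating the toString results:
const explicit = String([1, 2]) + String([3, 4]);
console.log(explicit); // "1,23,4"
```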
What is the output of prepending the additive operator on falsy values

If you prepend the additive (+) operator to falsy values (null, undefined, NaN, false, ""), the falsy value is converted to a number: zero for null, false and the empty string, but NaN for undefined and NaN. Let's display them on the browser console as below,

javascript console.log(+null); // 0 console.log(+undefined);// NaN console.log(+false); // 0 console.log(+NaN); // NaN console.log(+""); // 0

How do you create self string using special characters

The self string can be formed with the combination of []()!+ characters. You need to remember the below conventions to achieve this pattern.

- Since arrays are truthy values, negating an array produces false: ![] === false
- As per JavaScript coercion rules, adding arrays together will toString them: [] + [] === ""
- Prepending the + operator to an array converts it to the number 0 (+[] === 0); negating that gives true (!+[] === true); prepending + again produces the number 1: +!+[] === 1

By applying the above rules, we can derive the below conditions

javascript ![] + [] === "false" +!+[] === 1

Now the character pattern would be created as below,

javascript s e l f ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ (![] + [])[3] + (![] + [])[4] + (![] + [])[2] + (![] + [])[0] ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ (![] + [])[+!+[]+!+[]+!+[]] + (![] + [])[+!+[]+!+[]+!+[]+!+[]] + (![] + [])[+!+[]+!+[]] + (![] + [])[+[]] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (![]+[])[+!+[]+!+[]+!+[]]+(![]+[])[+!+[]+!+[]+!+[]+!+[]]+(![]+[])[+!+[]+!+[]]+(![]+[])[+[]]

How do you remove falsy values from an array

You can apply the filter method on the array by passing Boolean as a parameter. This way it removes all falsy values (0, undefined, null, false and "") from the array.
javascript const myArray = [false, null, 1, 5, undefined] myArray.filter(Boolean); // [1, 5] // is same as myArray.filter(x => x);

How do you get unique values of an array

You can get unique values of an array with the combination of Set and the spread(...) syntax.

javascript console.log([...new Set([1, 2, 4, 4, 3])]); // [1, 2, 4, 3]

What is destructuring aliases

Sometimes you would like to have a destructured variable with a different name than the property name. In that case, you'll use : newName to specify a name for the variable. This process is called destructuring aliasing.

javascript const obj = { x: 1 }; // Grabs obj.x as { otherName } const { x: otherName } = obj;

How do you map the array values without using map method

You can map the array values without using the map method by just using the from method of Array. Let's map the capital cities from a countries array,

javascript const countries = [ { name: 'India', capital: 'Delhi' }, { name: 'US', capital: 'Washington' }, { name: 'Russia', capital: 'Moscow' }, { name: 'Singapore', capital: 'Singapore' }, { name: 'China', capital: 'Beijing' }, { name: 'France', capital: 'Paris' }, ]; const cityNames = Array.from(countries, ({ capital }) => capital); console.log(cityNames); // ['Delhi', 'Washington', 'Moscow', 'Singapore', 'Beijing', 'Paris']

How do you empty an array

You can empty an array quickly by setting the array length to zero.

javascript let cities = ['Singapore', 'Delhi', 'London']; cities.length = 0; // cities becomes []

How do you round numbers to certain decimals

You can round numbers to a certain number of decimals using the toFixed method from native javascript. Note that toFixed returns a string, not a number.

javascript let pie = 3.141592653; pie = pie.toFixed(3); // "3.142"

What is the easiest way to convert an array to an object

You can convert an array to an object with the same data using the spread(...) operator.
javascript var fruits = ["banana", "apple", "orange", "watermelon"]; var fruitsObject = {...fruits}; console.log(fruitsObject); // {0: "banana", 1: "apple", 2: "orange", 3: "watermelon"}

How do you create an array with some data

You can create an array pre-filled with the same value using the fill method.

javascript var newArray = new Array(5).fill("0"); console.log(newArray); // ["0", "0", "0", "0", "0"]

What are the placeholders from console object

Below is the list of placeholders available on the console object,

- %o — It takes an object,
- %s — It takes a string,
- %d — It is used for a decimal or integer

These placeholders can be used in console.log as below

javascript const user = { "name":"John", "id": 1, "city": "Delhi"}; console.log("Hello %s, your details %o are available in the object form", "John", user); // Hello John, your details {name: "John", id: 1, city: "Delhi"} are available in the object form

Is it possible to add CSS to console messages

Yes, you can apply CSS styles to console messages similar to html text on the web page.

javascript console.log('%c The text has blue color, with large font and red background', 'color: blue; font-size: x-large; background: red');

The text will be displayed as below,

Note: All CSS styles can be applied to console messages.

What is the purpose of dir method of console object

The console.dir() method is used to display an interactive list of the properties of the specified JavaScript object as JSON.

javascript const user = { "name":"John", "id": 1, "city": "Delhi"}; console.dir(user);

The user object is displayed in JSON representation

Is it possible to debug HTML elements in console

Yes, it is possible to get and debug HTML elements in the console just like inspecting elements.
javascript const element = document.getElementsByTagName("body")[0]; console.log(element);

It prints the HTML element in the console,

How do you display data in a tabular format using console object

The console.table() method is used to display data in the console in a tabular format to visualize complex arrays or objects.

js const users = [{ "name":"John", "id": 1, "city": "Delhi"}, { "name":"Max", "id": 2, "city": "London"}, { "name":"Rod", "id": 3, "city": "Paris"} ]; console.table(users);

The data is visualized in a table format,

Note: Remember that console.table() is not supported in IE.

How do you verify that an argument is a Number or not

The combination of the isNaN and isFinite methods is used to confirm whether an argument is a number or not.

javascript function isNumber(n){ return !isNaN(parseFloat(n)) && isFinite(n); }

How do you create copy to clipboard button

You need to select the content (using the .select() method) of the input element and execute the copy command with execCommand (i.e, execCommand('copy')). You can also execute other system commands like cut and paste. Note that execCommand is deprecated; the asynchronous Clipboard API (navigator.clipboard.writeText) is the modern alternative.

javascript document.querySelector("#copy-button").onclick = function() { // Select the content document.querySelector("#copy-input").select(); // Copy to the clipboard document.execCommand('copy'); };

What is the shortcut to get timestamp

You can use new Date().getTime() to get the current timestamp. There is an alternative shortcut to get the value.

javascript console.log(+new Date()); console.log(Date.now());

How do you flatten multi dimensional arrays

Flattening bi-dimensional arrays is trivial with the spread operator.

javascript const biDimensionalArr = [11, [22, 33], [44, 55], [66, 77], 88, 99]; const flattenArr = [].concat(...biDimensionalArr); // [11, 22, 33, 44, 55, 66, 77, 88, 99]

But you can make it work with multi-dimensional arrays by recursive calls,

javascript function flattenMultiArray(arr) { const flattened = [].concat(...arr); return flattened.some(item => Array.isArray(item)) ?
flattenMultiArray(flattened) : flattened; } const multiDimensionalArr = [11, [22, 33], [44, [55, 66, [77, [88]], 99]]]; const flatArr = flattenMultiArray(multiDimensionalArr); // [11, 22, 33, 44, 55, 66, 77, 88, 99]

What is the easiest multi condition checking

You can use indexOf to compare input with multiple values instead of checking each value as one condition.

javascript // Verbose approach if (input === 'first' || input === 1 || input === 'second' || input === 2) { someFunction(); } // Shortcut if (['first', 1, 'second', 2].indexOf(input) !== -1) { someFunction(); }

How do you capture browser back button

The window.onbeforeunload handler is used to capture browser back button events. This is helpful to warn users about losing the current data.

javascript window.onbeforeunload = function() { alert("Your work will be lost"); };

How do you disable right click in the web page

The right click on the page can be disabled by returning false from the oncontextmenu attribute on the body element.

html <body oncontextmenu="return false;">

What are wrapper objects

Primitive values like string, number and boolean don't have properties and methods, but they are temporarily converted or coerced to an object (wrapper object) when you try to perform actions on them. For example, if you apply the toUpperCase() method on a primitive string value, it does not throw an error but returns the uppercase of the string.

javascript let name = "john"; console.log(name.toUpperCase()); // Behind the scenes treated as console.log(new String(name).toUpperCase());

i.e, Every primitive except null and undefined has a wrapper object, and the list of wrapper objects is String, Number, Boolean, Symbol and BigInt.

What is AJAX

AJAX stands for Asynchronous JavaScript and XML and it is a group of related technologies (HTML, CSS, JavaScript, XMLHttpRequest API etc) used to display data asynchronously. i.e. We can send data to the server and get data from the server without reloading the web page.
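An AJAX-style request today is usually written with the fetch API rather than XMLHttpRequest. A minimal sketch (the URL in the usage comment is a hypothetical placeholder, not a real endpoint):

```javascript
// Fetch JSON from a server without reloading the page.
async function loadUsers(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`HTTP error: ${response.status}`);
  }
  return response.json(); // parse the response body as JSON
}

// Usage (in a browser or Node 18+), with a placeholder URL:
// loadUsers('https://example.com/api/users').then(users => console.log(users));
```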
What are the different ways to deal with Asynchronous Code

Below is the list of different ways to deal with asynchronous code.

- Callbacks
- Promises
- Async/await
- Third-party libraries such as async.js, bluebird etc

How to cancel a fetch request

For a long time, one shortcoming of native promises was that there was no direct way to cancel a fetch request. But the AbortController from the specification allows you to use a signal to abort one or multiple fetch calls. The basic flow of cancelling a fetch request would be as below,

- Create an AbortController instance
- Get the signal property of the instance and pass the signal as a fetch option for signal
- Call the AbortController's abort method to cancel all fetches that use that signal

For example, passing the same signal to multiple fetch calls will cancel all requests with that signal,

javascript const controller = new AbortController(); const { signal } = controller; fetch("", { signal }).then(response => { console.log(`Request 1 is complete!`); }).catch(e => { if(e.name === "AbortError") { // We know it's been canceled! } }); fetch("", { signal }).then(response => { console.log(`Request 2 is complete!`); }).catch(e => { if(e.name === "AbortError") { // We know it's been canceled! } }); // Wait 2 seconds to abort both requests setTimeout(() => controller.abort(), 2000);

What is web speech API

Web speech API is used to enable modern browsers to recognize and synthesize speech (i.e, to bring voice data into web apps). This API was introduced by the W3C Community in the year 2012. It has two main parts,

- SpeechRecognition (Asynchronous Speech Recognition or Speech-to-Text): It provides the ability to recognize voice context from an audio input and respond accordingly. This is accessed by the SpeechRecognition interface.
The below example shows how to use this API to get text from speech,

javascript window.SpeechRecognition = window.webkitSpeechRecognition || window.SpeechRecognition; // webkitSpeechRecognition for Chrome and SpeechRecognition for FF const recognition = new window.SpeechRecognition(); recognition.onresult = (event) => { // SpeechRecognitionEvent type const speechToText = event.results[0][0].transcript; console.log(speechToText); } recognition.start();

In this API, the browser is going to ask you for permission to use your microphone

- SpeechSynthesis (Text-to-Speech): It provides the ability to convert text into speech and play it back. This is accessed by the SpeechSynthesis interface. For example, the below code is used to get voice/speech from text,

javascript if('speechSynthesis' in window){ var speech = new SpeechSynthesisUtterance('Hello World!'); speech.lang = 'en-US'; window.speechSynthesis.speak(speech); }

The above examples can be tested in the chrome(33+) browser's developer console.

Note: This API is still a working draft and only available in Chrome and Firefox browsers (and only Chrome implements the full specification)

What is minimum timeout throttling

Both browser and NodeJS javascript environments throttle timers with a minimum delay that is greater than 0ms. That means a delay of 0ms will not fire instantaneously.

Browsers: They have a minimum delay of 4ms. This throttle occurs when successive calls are triggered due to callback nesting (beyond a certain depth) or after a certain number of successive intervals.

Note: Older browsers have a minimum delay of 10ms.

Nodejs: It has a minimum delay of 1ms. This throttle happens when the delay is larger than 2147483647 or less than 1.

The best example to explain this timeout throttling behavior is the order of the below code snippet.
javascript function runMeFirst() { console.log('My script is initialized'); } setTimeout(runMeFirst, 0); console.log('Script loaded');

and the output would be

cmd Script loaded My script is initialized

If you don't use setTimeout, the order of logs will be sequential.

javascript function runMeFirst() { console.log('My script is initialized'); } runMeFirst(); console.log('Script loaded');

and the output is,

cmd My script is initialized Script loaded

How do you implement zero timeout in modern browsers

You can't use setTimeout(fn, 0) to execute the code immediately due to the minimum delay of greater than 0ms. But you can use window.postMessage() to achieve this behavior.

What are tasks in event loop

A task is any javascript code/program which is scheduled to be run by the standard mechanisms, such as initially starting to run a program, running an event callback, or an interval or timeout being fired. All these tasks are scheduled on a task queue. Below is the list of use cases that add tasks to the task queue,

- When a new javascript program is executed directly from the console or run by the <script> element, the task is added to the task queue.
- When an event fires, the event callback is added to the task queue
- When a setTimeout or setInterval is reached, the corresponding callback is added to the task queue

What is microtask

A microtask is javascript code which needs to be executed immediately after the currently executing task/microtask is completed. They are kind of blocking in nature. i.e, The main thread will be blocked until the microtask queue is empty. The main sources of microtasks are Promise.resolve, Promise.reject, MutationObservers, IntersectionObservers etc

Note: All of these microtasks are processed in the same turn of the event loop.
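The interaction between the task queue and the microtask queue can be observed directly: a promise callback (microtask) always runs before a setTimeout callback (task) scheduled at the same time. A small sketch:

```javascript
const order = [];

setTimeout(() => order.push('task (setTimeout)'), 0);          // goes to the task queue
Promise.resolve().then(() => order.push('microtask (promise)')); // goes to the microtask queue
order.push('sync');

// After the current script (itself a task) finishes, the whole microtask
// queue is drained before the next task runs, so the final order is:
// ['sync', 'microtask (promise)', 'task (setTimeout)']
setTimeout(() => console.log(order), 10);
```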
What are different event loops

What is the purpose of queueMicrotask

How do you use javascript libraries in typescript file

It is known that not all JavaScript libraries or frameworks have TypeScript declaration files. But if you still want to use such libraries or frameworks in your TypeScript files without getting compilation errors, the only solution is the declare keyword along with a variable declaration. For example, let's imagine you have a library called customLibrary that doesn't have a TypeScript declaration and has a namespace called customLibrary in the global namespace. You can use this library in typescript code as below,

javascript declare var customLibrary;

At runtime, typescript will provide the type of the customLibrary variable as the any type. Another alternative without using the declare keyword is below

javascript var customLibrary: any;

What are the differences between promises and observables

Some of the major differences in a tabular form

| Promises | Observables |
| -------- | ----------- |
| Emits only a single value at a time | Emits multiple values over a period of time (stream of values ranging from 0 to multiple) |
| Eager in nature; they are going to be called immediately | Lazy in nature; they require subscription to be invoked |
| Promise is always asynchronous even though it resolved immediately | Observable can be either synchronous or asynchronous |
| Doesn't provide any operators | Provides operators such as map, forEach, filter, reduce, retry, and retryWhen etc |
| Cannot be canceled | Canceled by using unsubscribe() method |

What is heap

Heap (or memory heap) is the memory location where objects are stored when we define variables. i.e, This is the place where all the memory allocations and de-allocations take place. Both heap and call-stack are two containers of the JS runtime. Whenever the runtime comes across variables and function declarations in the code it stores them in the heap.
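The eager-vs-lazy distinction between promises and observables can be verified directly: a promise's executor runs the moment the promise is constructed, before anyone attaches a then handler. A minimal sketch:

```javascript
let executorRan = false;

// The executor function runs synchronously inside the constructor,
// even though nothing has subscribed to the result yet.
const promise = new Promise(resolve => {
  executorRan = true;
  resolve(42);
});

console.log(executorRan); // true — the work already started eagerly
```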
What is an event table

Event Table is a data structure that stores and keeps track of all the events which will be executed asynchronously, like after some time interval or after the resolution of some API request. i.e, Whenever you call a setTimeout function or invoke an async operation, it is added to the Event Table. It doesn't execute functions on its own. The main purpose of the event table is to keep track of events and send them to the Event Queue as shown in the below diagram.

What is a microTask queue

Microtask Queue is the queue where all the tasks initiated by promise objects get processed before the callback queue. The microtask queue is processed before the next rendering and painting jobs. But if these microtasks run for a long time then they lead to visual degradation.

What is the difference between shim and polyfill

A shim is a library that brings a new API to an older environment, using only the means of that environment. A polyfill is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively; i.e, a polyfill is a shim for a browser API.

How do you detect primitive or non primitive value type

In JavaScript, primitive types include boolean, string, number, BigInt, null, Symbol and undefined. Whereas non-primitive types include the Objects. But you can easily identify them with the below function,

javascript var myPrimitive = 30; var myNonPrimitive = {}; function isPrimitive(val) { return Object(val) !== val; } isPrimitive(myPrimitive); // true isPrimitive(myNonPrimitive); // false

If the value is a primitive data type, the Object constructor creates a new wrapper object for the value. But if the value is a non-primitive data type (an object), the Object constructor will return the same object.

What is babel

Babel is a JavaScript transpiler that converts ECMAScript 2015+ code into a backwards compatible version of JavaScript for current and older browsers or environments. Some of the main features are listed below,

- Transform syntax
- Polyfill features that are missing in your target environment
- Source code transformations (codemods)

Is Node.js completely single threaded

Node is a single thread, but some of the functions included in the Node.js standard library (e.g, fs module functions) are not single threaded.
i.e, Their logic runs outside of the Node.js single thread to improve the speed and performance of a program.

What are the common use cases of observables

Some of the most common use cases of observables are web sockets with push notifications, user input changes, repeating intervals, etc

What is RxJS

RxJS (Reactive Extensions for JavaScript) is a library for implementing reactive programming using observables that makes it easier to compose asynchronous or callback-based code. It also provides utility functions for creating and working with observables.

What is the difference between Function constructor and function declaration

Functions which are created with the Function constructor do not create closures to their creation contexts; they are always created in the global scope. i.e, such a function can access its own local variables and global scope variables only. Whereas function declarations can access outer function variables (closures) too. Let's see this difference with an example,

Function Constructor:

javascript var a = 100; function createFunction() { var a = 200; return new Function('return a;'); } console.log(createFunction()()); // 100

Function declaration:

javascript var a = 100; function createFunction() { var a = 200; return function func() { return a; } } console.log(createFunction()()); // 200

What is a Short circuit condition

Short circuit conditions are a condensed way of writing simple if statements. Let's demonstrate the scenario using an example. If you would like to login to a portal with an authentication condition, the expression would be as below,

javascript if (authenticate) { loginToPortal(); }

Since javascript logical operators are evaluated from left to right, the above expression can be simplified using the && logical operator

javascript authenticate && loginToPortal();

What is the easiest way to resize an array

The length property of an array is useful to resize or empty an array quickly.
Let's apply the length property on a number array to resize the number of elements from 5 to 2,

javascript var array = [1, 2, 3, 4, 5]; console.log(array.length); // 5 array.length = 2; console.log(array.length); // 2 console.log(array); // [1,2]

and the array can be emptied too

javascript var array = [1, 2, 3, 4, 5]; array.length = 0; console.log(array.length); // 0 console.log(array); // []

What is an observable

An Observable is basically a function that can return a stream of values either synchronously or asynchronously to an observer over time. The consumer can get the values by calling the subscribe() method. Let's look at a simple example of an Observable

javascript import { Observable } from 'rxjs'; const observable = new Observable(observer => { setTimeout(() => { observer.next('Message from a Observable!'); }, 3000); }); observable.subscribe(value => console.log(value));

Note: Observables are not part of the JavaScript language yet but they have been proposed for addition to the language

What is the difference between function and class declarations

The main difference between function declarations and class declarations is hoisting. Function declarations are hoisted but class declarations are not.

Classes:

javascript const user = new User(); // ReferenceError class User {}

Constructor Function:

javascript const user = new User(); // No error function User() { }

What is an async function

An async function is a function declared with the async keyword which enables asynchronous, promise-based behavior to be written in a cleaner style by avoiding promise chains. These functions can contain zero or more await expressions. Let's take the below async function example,

javascript async function logger() { let data = await fetch(''); // pause until fetch returns console.log(data) } logger();

It is basically syntax sugar over ES2015 promises and generators.
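Since await surfaces promise rejections as thrown exceptions, errors inside an async function can be handled with an ordinary try...catch block. A small sketch:

```javascript
let caught = null;

async function risky() {
  // A rejected promise awaited inside an async function throws,
  // so it can be caught just like a synchronous error.
  try {
    await Promise.reject(new Error('something failed'));
  } catch (e) {
    caught = e.message;
  }
}

risky().then(() => console.log(caught)); // "something failed"
```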
How do you prevent promises swallowing errors

While using asynchronous code, JavaScript's ES6 promises can make your life a lot easier without callback pyramids and error handling on every second line. But promises have some pitfalls and the biggest one is swallowing errors by default. Let's say you expect to print an error to the console for all the below cases,

javascript Promise.resolve('promised value').then(function() { throw new Error('error'); }); Promise.reject('error value').catch(function() { throw new Error('error'); }); new Promise(function(resolve, reject) { throw new Error('error'); });

But there are many modern JavaScript environments that won't print any errors. You can fix this problem in different ways,

Add catch block at the end of each chain: You can add a catch block to the end of each of your promise chains

javascript Promise.resolve('promised value').then(function() { throw new Error('error'); }).catch(function(error) { console.error(error.stack); });

But it is quite tedious to type for each promise chain and verbose too.

Add done method: You can replace the first solution's then and catch blocks with a done method (note: done is not part of the ES6 Promise API; it is provided by some promise libraries)

javascript Promise.resolve('promised value').done(function() { throw new Error('error'); });

Let's say you want to fetch data using HTTP and later perform processing on the resulting data asynchronously. You can write a done block as below,

javascript getDataFromHttp() .then(function(result) { return processDataAsync(result); }) .done(function(processed) { displayData(processed); });

If in the future the processing library API changed to synchronous then you can remove the done block as below,

javascript getDataFromHttp() .then(function(result) { return displayData(processDataAsync(result)); })

but if you then forget to add the done block after then, it leads to silent errors.

Extend ES6 Promises by Bluebird: Bluebird extends the ES6 Promises API to avoid the issue in the second solution.
This library has a “default” onRejection handler which will print all errors from rejected Promises to stderr. After installation, you can process unhandled rejections

```javascript
Promise.onPossiblyUnhandledRejection(function(error) {
  throw error;
});
```

and to discard a rejection, just handle it with an empty catch

```javascript
Promise.reject('error value').catch(function() {});
```

What is deno

Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses the V8 JavaScript engine and the Rust programming language.

How do you make an object iterable in javascript

By default, plain objects are not iterable. But you can make an object iterable by defining a Symbol.iterator property on it. Let's demonstrate this with an example:

```javascript
const collection = {
  one: 1,
  two: 2,
  three: 3,
  [Symbol.iterator]() {
    const values = Object.keys(this);
    let i = 0;
    return {
      next: () => {
        return {
          value: this[values[i++]],
          done: i > values.length
        };
      }
    };
  }
};

const iterator = collection[Symbol.iterator]();
console.log(iterator.next()); // → {value: 1, done: false}
console.log(iterator.next()); // → {value: 2, done: false}
console.log(iterator.next()); // → {value: 3, done: false}
console.log(iterator.next()); // → {value: undefined, done: true}
```

The above process can be simplified using a generator function:

```javascript
const collection = {
  one: 1,
  two: 2,
  three: 3,
  [Symbol.iterator]: function* () {
    for (let key in this) {
      yield this[key];
    }
  }
};

const iterator = collection[Symbol.iterator]();
console.log(iterator.next()); // {value: 1, done: false}
console.log(iterator.next()); // {value: 2, done: false}
console.log(iterator.next()); // {value: 3, done: false}
console.log(iterator.next()); // {value: undefined, done: true}
```

What is a Proper Tail Call

First, we should know about tail calls before talking about "Proper Tail Call". A tail call is a subroutine or function call performed as the final action of a calling function.
Whereas a Proper Tail Call (PTC) is a technique where the program or code will not create additional stack frames for a recursion when the function call is a tail call. For example, the below classic or head recursion of a factorial function relies on the stack for each step. Each step needs to be processed up to n * factorial(n - 1)

```javascript
function factorial(n) {
  if (n === 0) {
    return 1;
  }
  return n * factorial(n - 1);
}
console.log(factorial(5)); // 120
```

But if you use a tail-recursive function, it keeps passing all the necessary data down the recursion without relying on the stack.

```javascript
function factorial(n, acc = 1) {
  if (n === 0) {
    return acc;
  }
  return factorial(n - 1, n * acc);
}
console.log(factorial(5)); // 120
```

The above pattern returns the same output as the first one. But the accumulator keeps track of the total as an argument without using stack memory on recursive calls.

How do you check an object is a promise or not

If you don't know whether a value is a promise or not, wrap it as Promise.resolve(value), which returns the same promise when the value is already a promise:

```javascript
function isPromise(object) {
  if (Promise && Promise.resolve) {
    return Promise.resolve(object) == object;
  } else {
    throw "Promise not supported in your environment";
  }
}

var i = 1;
var promise = new Promise(function(resolve, reject) {
  resolve();
});

console.log(isPromise(i)); // false
console.log(isPromise(promise)); // true
```

Another way is to check for the type of the .then() handler

```javascript
function isPromise(value) {
  return Boolean(value && typeof value.then === 'function');
}

var i = 1;
var promise = new Promise(function(resolve, reject) {
  resolve();
});

console.log(isPromise(i)); // false
console.log(isPromise(promise)); // true
```

How to detect if a function is called as constructor

You can use the new.target pseudo-property to detect whether a function was called as a constructor (using the new operator) or as a regular function call.
- If a constructor or function is invoked using the new operator, new.target returns a reference to the constructor or function.
- For regular function calls, new.target is undefined.

```javascript
function Myfunc() {
  if (new.target) {
    console.log('called with new');
  } else {
    console.log('not called with new');
  }
}

new Myfunc(); // called with new
Myfunc(); // not called with new
Myfunc.call({}); // not called with new
```

What are the differences between arguments object and rest parameter

There are three main differences between the arguments object and rest parameters:

- The arguments object is array-like but not an array, whereas rest parameters are array instances.
- The arguments object does not support methods such as sort, map, forEach, or pop, whereas these methods can be used on rest parameters.
- The rest parameters contain only the arguments that haven’t been given a separate name, while the arguments object contains all arguments passed to the function.

What are the differences between spread operator and rest parameter

The rest parameter collects all remaining elements into an array, whereas the spread operator allows iterables (arrays / objects / strings) to be expanded into single arguments/elements. i.e, the rest parameter is the opposite of the spread operator.
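The differences above can be verified with a small sketch (the function names here are illustrative, not from any library):

```javascript
// rest parameter: a real Array instance, so array methods work
function withRest(first, ...rest) {
  return Array.isArray(rest) && rest.map(x => x * 2).length === rest.length;
}

// arguments object: array-like, but not an actual Array
function withArguments() {
  return Array.isArray(arguments);
}

console.log(withRest(1, 2, 3));      // true  (rest is [2, 3], a real array)
console.log(withArguments(1, 2, 3)); // false (arguments is only array-like)

// spread: the opposite direction, expanding an iterable into elements
const middle = [2, 3];
const expanded = [1, ...middle, 4];
console.log(expanded); // [1, 2, 3, 4]
```

Note how the same `...` syntax collects in a parameter list (rest) but expands in an array literal or call site (spread).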
What are the different kinds of generators

There are five kinds of generators:

Generator function declaration:

```javascript
function* myGenFunc() {
  yield 1;
  yield 2;
  yield 3;
}
const genObj = myGenFunc();
```

Generator function expressions:

```javascript
const myGenFunc = function* () {
  yield 1;
  yield 2;
  yield 3;
};
const genObj = myGenFunc();
```

Generator method definitions in object literals:

```javascript
const myObj = {
  *myGeneratorMethod() {
    yield 1;
    yield 2;
    yield 3;
  }
};
const genObj = myObj.myGeneratorMethod();
```

Generator method definitions in class:

```javascript
class MyClass {
  *myGeneratorMethod() {
    yield 1;
    yield 2;
    yield 3;
  }
}
const myObject = new MyClass();
const genObj = myObject.myGeneratorMethod();
```

Generator as a computed property:

```javascript
const SomeObj = {
  *[Symbol.iterator]() {
    yield 1;
    yield 2;
    yield 3;
  }
};
console.log(Array.from(SomeObj)); // [ 1, 2, 3 ]
```

What are the built-in iterables

Below is the list of built-in iterables in javascript:

- Arrays and TypedArrays
- Strings: iterate over each character or Unicode code point
- Maps: iterate over their key-value pairs
- Sets: iterate over their elements
- arguments: an array-like special variable in functions
- DOM collections such as NodeList

What are the differences between for...of and for...in statements

Both for...in and for...of statements iterate over js data structures. The only difference is over what they iterate:

- for..in iterates over all enumerable property keys of an object
- for..of iterates over the values of an iterable object.

Let's explain this difference with an example:

```javascript
let arr = ['a', 'b', 'c'];
arr.newProp = 'newValue';

// keys are the property keys
for (let key in arr) {
  console.log(key);
}

// values are the property values
for (let value of arr) {
  console.log(value);
}
```

How do you define instance and non-instance properties

Instance properties must be defined inside of class methods.
For example, name and age properties are defined inside the constructor as below:

```javascript
class Person {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }
}
```

But static (class) and prototype data properties must be defined outside of the ClassBody declaration. Let's assign the age value for the Person class as below:

```javascript
Person.staticAge = 30;
Person.prototype.prototypeAge = 40;
```

What is the difference between isNaN and Number.isNaN?

- isNaN: The global function isNaN converts the argument to a Number and returns true if the resulting value is NaN.
- Number.isNaN: This method does not convert the argument. It returns true only when the type is Number and the value is NaN.

Let's see the difference with an example:

```javascript
isNaN('hello'); // true
Number.isNaN('hello'); // false
```

How to invoke an IIFE without any extra brackets?

Immediately Invoked Function Expressions (IIFE) require a pair of parentheses to wrap the function which contains a set of statements.

```js
(function(dt) {
  console.log(dt.toLocaleTimeString());
})(new Date());
```

Since both an IIFE and the void operator discard the result of an expression, you can avoid the extra brackets by using the void operator for an IIFE as below:

```js
void function(dt) {
  console.log(dt.toLocaleTimeString());
}(new Date());
```

Is that possible to use expressions in switch cases?

You might have seen expressions used in the switch condition, but it is also possible to use them in switch cases by assigning a true value to the switch condition. Let's see the weather condition based on temperature as an example:

```js
const weather = function getWeather(temp) {
  switch (true) {
    case temp < 0: return 'freezing';
    case temp < 10: return 'cold';
    case temp < 24: return 'cool';
    default: return 'unknown';
  }
}(10);
```

What is the easiest way to ignore promise errors?

The easiest and safest way to ignore promise errors is to void that error. This approach is ESLint friendly too.

```js
await promise.catch(e => void e);
```

How do you style the console output using CSS?
You can add CSS styling to the console output using the CSS format content specifier %c. The console string message can be appended after the specifier and the CSS style passed in another argument. Let's print red color text using console.log and the CSS specifier as below:

```js
console.log("%cThis is a red text", "color:red");
```

It is also possible to add more styles for the content. For example, the font-size can be modified for the above text

```js
console.log("%cThis is a red text with bigger font", "color:red; font-size:20px");
```

Coding Exercise

1. What is the output of below code

```javascript
var car = new Vehicle("Honda", "white", "2010", "UK");
console.log(car);

function Vehicle(model, color, year, country) {
  this.model = model;
  this.color = color;
  this.year = year;
  this.country = country;
}
```

- 1: Undefined
- 2: ReferenceError
- 3: null
- 4: {model: "Honda", color: "white", year: "2010", country: "UK"}

Answer: 4

Function declarations are hoisted similar to any variables. So the placement of the `Vehicle` function declaration doesn't make any difference.

2. What is the output of below code

```javascript
function foo() {
  let x = y = 0;
  x++;
  y++;
  return x;
}

console.log(foo(), typeof x, typeof y);
```

- 1: 1, undefined and undefined
- 2: ReferenceError: X is not defined
- 3: 1, undefined and number
- 4: 1, number and number

Answer

Answer: 3

Of course the return value of `foo()` is 1 due to the increment operator. But the statement `let x = y = 0` declares a local variable x, whereas y is accidentally declared as a global variable. This statement is equivalent to,

```javascript
let x;
window.y = 0;
x = window.y;
```

Since the block scoped variable x is undefined outside of the function, its type will be undefined too. Whereas the global variable `y` is available outside the function, with value 0 and type number.

3.
What is the output of below code

```javascript
function main() {
  console.log('A');
  setTimeout(function print() {
    console.log('B');
  }, 0);
  console.log('C');
}
main();
```

- 1: A, B and C
- 2: B, A and C
- 3: A and C
- 4: A, C and B

Answer

Answer: 4

The statement order is based on the event loop mechanism. The statements follow the below order,

1. At first, the main function is pushed to the stack.
2. Then the browser pushes the first statement of the main function (i.e, A's console.log) to the stack, executing and popping it out immediately.
3. But the `setTimeout` statement is moved to the Browser API to apply the delay for the callback.
4. In the meantime, C's console.log is added to the stack, executed and popped out.
5. The callback of `setTimeout` is moved from the Browser API to the message queue.
6. The `main` function is popped off the stack because there are no statements left to execute.
7. The callback is moved from the message queue to the stack since the stack is empty.
8. The console.log for B is added to the stack and displayed on the console.

4. What is the output of below equality check

```javascript
console.log(0.1 + 0.2 === 0.3);
```

- 1: false
- 2: true

Answer

Answer: 1

This is due to the floating-point math problem. Since floating-point numbers are encoded in binary format, addition operations on them lead to rounding errors. Hence, the comparison of floating points doesn't give expected results. You can find more details of the explanation at 0.30000000000000004.com

5. What is the output of below code

```javascript
var y = 1;
if (function f() {}) {
  y += typeof f;
}
console.log(y);
```

- 1: 1function
- 2: 1object
- 3: ReferenceError
- 4: 1undefined

Answer

Answer: 4

The main points in the above code snippet are,

1. You can see a function expression instead of a function declaration inside the if statement. So it always returns true.
2. Since it is not declared (or assigned) anywhere, f is undefined and typeof f is undefined too.
In other words, it is the same as

```javascript
var y = 1;
if ('foo') {
  y += typeof f;
}
console.log(y);
```

**Note:** It returns 1object for the MS Edge browser

6. What is the output of below code

```javascript
function foo() {
  return
  {
    message: "Hello World"
  };
}
console.log(foo());
```

- 1: Hello World
- 2: Object {message: "Hello World"}
- 3: Undefined
- 4: SyntaxError

Answer

Answer: 3

This is a semicolon issue. Normally semicolons are optional in JavaScript. So if any statement (in this case, return) is missing a semicolon, it is automatically inserted immediately. Since the object literal starts on the line after the return keyword, a semicolon is inserted right after return, so the function returns undefined. Whereas if the opening curly brace is on the same line as the return keyword then the function returns the object as expected.

```javascript
function foo() {
  return {
    message: "Hello World"
  };
}
console.log(foo()); // {message: "Hello World"}
```

7. What is the output of below code

```javascript
var myChars = ['a', 'b', 'c', 'd'];
delete myChars[0];
console.log(myChars);
console.log(myChars[0]);
console.log(myChars.length);
```

- 1: [empty, 'b', 'c', 'd'], empty, 3
- 2: [null, 'b', 'c', 'd'], empty, 3
- 3: [empty, 'b', 'c', 'd'], undefined, 4
- 4: [null, 'b', 'c', 'd'], undefined, 4

Answer

Answer: 3

The `delete` operator will delete the object property but it will not reindex the array or change its length. So the number of elements and the length of the array won't change. If you try to print myChars then you can observe that it doesn't set an undefined value, rather the property is removed from the array. The newer versions of Chrome use `empty` instead of `undefined` to make the difference a bit clearer.

8.
What is the output of below code in latest Chrome javascript var array1 = new Array(3); console.log(array1); var array2 = []; array2[2] = 100; console.log(array2); var array3 = [,,,]; console.log(array3); - 1: [undefined × 3], [undefined × 2, 100], [undefined × 3] - 2: [empty × 3], [empty × 2, 100], [empty × 3] - 3: [null × 3], [null × 2, 100], [null × 3] - 4: [], [100], [] Answer Answer: 2 The latest chrome versions display `sparse array`(they are filled with holes) using this empty x n notation. Whereas the older versions have undefined x n notation. **Note:** The latest version of FF displays `n empty slots` notation. 9. What is the output of below code javascript const obj = { prop1: function() { return 0 }, prop2() { return 1 }, ['prop' + 3]() { return 2 } } console.log(obj.prop1()); console.log(obj.prop2()); console.log(obj.prop3()); - 1: 0, 1, 2 - 2: 0, { return 1 }, 2 - 3: 0, { return 1 }, { return 2 } - 4: 0, 1, undefined Answer Answer: 1 ES6 provides method definitions and property shorthands for objects. So both prop2 and prop3 are treated as regular function values. 10. What is the output of below code javascript console.log(1 < 2 < 3); console.log(3 > 2 > 1); - 1: true, true - 2: true, false - 3: SyntaxError, SyntaxError, - 4: false, false Answer Answer: 2 The important point is that if the statement contains the same operators(e.g, < or >) then it can be evaluated from left to right. The first statement follows the below order, 1. console.log(1 < 2 < 3); 2. console.log(true < 3); 3. console.log(1 < 3); // True converted as `1` during comparison 4. True Whereas the second statement follows the below order, 1. console.log(3 > 2 > 1); 2. console.log(true > 1); 3. console.log(1 > 1); // False converted as `0` during comparison 4. False 11. 
What is the output of below code in non-strict mode

```javascript
function printNumbers(first, second, first) {
  console.log(first, second, first);
}
printNumbers(1, 2, 3);
```

- 1: 1, 2, 3
- 2: 3, 2, 3
- 3: SyntaxError: Duplicate parameter name not allowed in this context
- 4: 1, 2, 1

Answer

Answer: 2

In non-strict mode, regular JavaScript functions allow duplicate named parameters. The above code snippet has duplicate parameters in the 1st and 3rd positions. The value of the first parameter is mapped to the third argument which is passed to the function. Hence, the 3rd argument overrides the first parameter.

**Note:** In strict mode, duplicate parameters will throw a SyntaxError.

12. What is the output of below code

```javascript
const printNumbersArrow = (first, second, first) => {
  console.log(first, second, first);
};
printNumbersArrow(1, 2, 3);
```

- 1: 1, 2, 3
- 2: 3, 2, 3
- 3: SyntaxError: Duplicate parameter name not allowed in this context
- 4: 1, 2, 1

Answer

Answer: 3

Unlike regular functions, arrow functions do not allow duplicate parameters in either strict or non-strict mode. So you can see a `SyntaxError` in the console.

13. What is the output of below code

```javascript
const arrowFunc = () => arguments.length;
console.log(arrowFunc(1, 2, 3));
```

- 1: ReferenceError: arguments is not defined
- 2: 3
- 3: undefined
- 4: null

Answer

Answer: 1

Arrow functions do not have `arguments, super, this, or new.target` bindings. So any reference to the `arguments` variable tries to resolve to a binding in a lexically enclosing environment. In this case, the arguments variable is not defined outside of the arrow function. Hence, you will receive a reference error.
Whereas a normal function provides the number of arguments passed to the function

```javascript
const func = function () {
  return arguments.length;
};
console.log(func(1, 2, 3));
```

But if you still want to use an arrow function then the rest operator on arguments provides the expected arguments

```javascript
const arrowFunc = (...args) => args.length;
console.log(arrowFunc(1, 2, 3));
```

14. What is the output of below code

```javascript
console.log(String.prototype.trimLeft.name === 'trimLeft');
console.log(String.prototype.trimLeft.name === 'trimStart');
```

- 1: True, False
- 2: False, True

Answer

Answer: 2

In order to be consistent with functions like `String.prototype.padStart`, the standard method name for trimming the whitespaces is considered as `trimStart`. Due to web compatibility reasons, the old method name 'trimLeft' still acts as an alias for 'trimStart'. Hence, the prototype for 'trimLeft' is always 'trimStart'

15. What is the output of below code

```javascript
console.log(Math.max());
```

- 1: undefined
- 2: Infinity
- 3: 0
- 4: -Infinity

Answer

Answer: 4

-Infinity is the initial comparand because almost every other value is bigger. So when no arguments are provided, -Infinity is going to be returned.

**Note:** Zero number of arguments is a valid case.

16. What is the output of below code

```javascript
console.log(10 == [10]);
console.log(10 == [[[[[[[10]]]]]]]);
```

- 1: True, True
- 2: True, False
- 3: False, False
- 4: False, True

Answer

Answer: 1

As per the comparison algorithm in the ECMAScript specification (ECMA-262), the above expression is converted into JS as below

```javascript
10 === Number([10].valueOf().toString()); // 10
```

So it doesn't matter how many brackets ([]) are around the number, it is always converted to a number in the expression.

17.
What is the output of below code

```javascript
console.log(10 + '10');
console.log(10 - '10');
```

- 1: 20, 0
- 2: 1010, 0
- 3: 1010, 10-10
- 4: NaN, NaN

Answer

Answer: 2

The concatenation operator (+) is applicable to both number and string types. So if any operand is of string type then both operands are concatenated as strings. Whereas the subtract (-) operator tries to convert the operands to number type.

18. What is the output of below code

```javascript
console.log([0] == false);
if ([0]) {
  console.log("I'm True");
} else {
  console.log("I'm False");
}
```

- 1: True, I'm True
- 2: True, I'm False
- 3: False, I'm True
- 4: False, I'm False

Answer

Answer: 1

With comparison operators, the expression `[0]` is converted to Number([0].valueOf().toString()) which resolves to false. Whereas `[0]` by itself is a truthy value without any conversion because there is no comparison operator.

19. What is the output of below code

```javascript
console.log([1, 2] + [3, 4]);
```

- 1: [1,2,3,4]
- 2: [1,2][3,4]
- 3: SyntaxError
- 4: 1,23,4

Answer

Answer: 4

The + operator is not meant or defined for arrays. So it converts arrays into strings and concatenates them.

20. What is the output of below code

```javascript
const numbers = new Set([1, 1, 2, 3, 4]);
console.log(numbers);

const browser = new Set('Firefox');
console.log(browser);
```

- 1: {1, 2, 3, 4}, {"F", "i", "r", "e", "f", "o", "x"}
- 2: {1, 2, 3, 4}, {"F", "i", "r", "e", "o", "x"}
- 3: [1, 2, 3, 4], ["F", "i", "r", "e", "o", "x"]
- 4: {1, 1, 2, 3, 4}, {"F", "i", "r", "e", "f", "o", "x"}

Answer

Answer: 1

Since a `Set` object is a collection of unique values, it won't allow duplicate values in the collection. At the same time, it is a case sensitive data structure.

21. What is the output of below code

```javascript
console.log(NaN === NaN);
```

- 1: True
- 2: False

Answer

Answer: 2

JavaScript follows the IEEE 754 spec standards. As per this spec, NaNs are never equal for floating-point numbers.

22.
What is the output of below code

```javascript
let numbers = [1, 2, 3, 4, NaN];
console.log(numbers.indexOf(NaN));
```

- 1: 4
- 2: NaN
- 3: SyntaxError
- 4: -1

Answer

Answer: 4

`indexOf` uses the strict equality operator (===) internally and `NaN === NaN` evaluates to false. Since indexOf won't be able to find NaN inside an array, it always returns -1. But you can use the `Array.prototype.findIndex` method to find the index of NaN in an array, or you can use `Array.prototype.includes` (which uses SameValueZero comparison) to check whether NaN is present in an array or not.

```javascript
let numbers = [1, 2, 3, 4, NaN];
console.log(numbers.findIndex(Number.isNaN)); // 4
console.log(numbers.includes(NaN)); // true
```

23. What is the output of below code

```javascript
let [a, ...b,] = [1, 2, 3, 4, 5];
console.log(a, b);
```

- 1: 1, [2, 3, 4, 5]
- 2: 1, {2, 3, 4, 5}
- 3: SyntaxError
- 4: 1, [2, 3, 4]

Answer

Answer: 3

When using rest parameters, trailing commas are not allowed and will throw a SyntaxError. If you remove the trailing comma then it displays the 1st answer

```javascript
let [a, ...b] = [1, 2, 3, 4, 5];
console.log(a, b); // 1, [2, 3, 4, 5]
```

25. What is the output of below code

```javascript
async function func() {
  return 10;
}
console.log(func());
```

- 1: Promise {&lt;resolved&gt;: 10}
- 2: 10
- 3: SyntaxError
- 4: Promise {&lt;rejected&gt;: 10}

Answer

Answer: 1

Async functions always return a promise. But even if the return value of an async function is not explicitly a promise, it will be implicitly wrapped in a promise. The above async function is equivalent to the below expression,

```javascript
function func() {
  return Promise.resolve(10);
}
```

26. What is the output of below code

```javascript
async function func() {
  await 10;
}
console.log(func());
```

- 1: Promise {&lt;resolved&gt;: 10}
- 2: 10
- 3: SyntaxError
- 4: Promise {&lt;resolved&gt;: undefined}

Answer

Answer: 4

The await expression returns value 10 with promise resolution and the code after each await expression can be treated as existing in a `.then` callback.
In this case, there is no return expression at the end of the function. Hence, the default return value of `undefined` is returned as the resolution of the promise. The above async function is equivalent to the below expression,

```javascript
function func() {
  return Promise.resolve(10).then(() => undefined);
}
```

27. What is the output of below code

```javascript
function delay() {
  return new Promise(resolve => setTimeout(resolve, 2000));
}

async function delayedLog(item) {
  await delay();
  console.log(item);
}

async function processArray(array) {
  array.forEach(item => {
    await delayedLog(item);
  });
}

processArray([1, 2, 3, 4]);
```

- 1: SyntaxError
- 2: 1, 2, 3, 4
- 3: 4, 4, 4, 4
- 4: 4, 3, 2, 1

Answer

Answer: 1

Even though “processArray” is an async function, the anonymous function that we use for `forEach` is synchronous. If you use await inside a synchronous function then it throws a syntax error.

28. What is the output of below code

```javascript
function delay() {
  return new Promise(resolve => setTimeout(resolve, 2000));
}

async function delayedLog(item) {
  await delay();
  console.log(item);
}

async function process(array) {
  array.forEach(async (item) => {
    await delayedLog(item);
  });
  console.log('Process completed!');
}

process([1, 2, 3, 5]);
```

- 1: 1 2 3 5 and Process completed!
- 2: 5 5 5 5 and Process completed!
- 3: Process completed! and 5 5 5 5
- 4: Process completed! and 1 2 3 5

Answer

Answer: 4

The forEach method will not wait until all items are finished; it just starts the tasks and moves on. Hence, the last statement is displayed first, followed by a sequence of promise resolutions. But you can control the array sequence using a for..of loop,

```javascript
async function processArray(array) {
  for (const item of array) {
    await delayedLog(item);
  }
  console.log('Process completed!');
}
```

29.
What is the output of below code

```javascript
var set = new Set();
set.add("+0").add("-0").add(NaN).add(undefined).add(NaN);
console.log(set);
```

- 1: Set(4) {"+0", "-0", NaN, undefined}
- 2: Set(3) {"+0", NaN, undefined}
- 3: Set(5) {"+0", "-0", NaN, undefined, NaN}
- 4: Set(4) {"+0", NaN, undefined, NaN}

Answer

Answer: 1

Set has a few exceptions from the regular equality check,

1. All NaN values are considered equal, so the second NaN is not added.
2. "+0" and "-0" here are strings, so they are two distinct values. (The numbers +0 and -0 would be treated as the same value.)

30. What is the output of below code

```javascript
const sym1 = Symbol('one');
const sym2 = Symbol('one');

const sym3 = Symbol.for('two');
const sym4 = Symbol.for('two');

console.log(sym1 === sym2, sym3 === sym4);
```

- 1: true, true
- 2: true, false
- 3: false, true
- 4: false, false

Answer

Answer: 3

Symbol follows the below conventions,

1. Every symbol value returned from Symbol() is unique irrespective of the optional description string.
2. The `Symbol.for()` function creates a symbol in a global symbol registry list. But it doesn't necessarily create a new symbol on every call; it first checks whether a symbol with the given key is already present in the registry and returns that symbol if it is found. Otherwise a new symbol is created in the registry.

**Note:** The symbol description is just useful for debugging purposes.

31. What is the output of below code

```javascript
const sym1 = new Symbol('one');
console.log(sym1);
```

- 1: TypeError
- 2: one
- 3: Symbol('one')
- 4: Symbol

Answer

Answer: 1

`Symbol` is just a standard function and not an object constructor (unlike other primitive wrappers such as new Boolean, new String and new Number). So trying to call it with the new operator results in a TypeError.

32.
What is the output of below code

```javascript
let myNumber = 100;
let myString = '100';

if (!typeof myNumber === "string") {
  console.log("It is not a string!");
} else {
  console.log("It is a string!");
}

if (!typeof myString === "number") {
  console.log("It is not a number!");
} else {
  console.log("It is a number!");
}
```

- 1: SyntaxError
- 2: It is not a string!, It is not a number!
- 3: It is not a string!, It is a number!
- 4: It is a string!, It is a number!

Answer

Answer: 4

The return value of `typeof myNumber` or `typeof myString` is always a truthy string (either "number" or "string"). The ! operator converts that value to a boolean first, so both `!typeof myNumber` and `!typeof myString` evaluate to false, and false is never strictly equal to "string" or "number". Hence the if conditions fail and control goes to the else blocks.

33. What is the output of below code

```javascript
console.log(JSON.stringify({ myArray: ['one', undefined, function(){}, Symbol('')] }));
console.log(JSON.stringify({ [Symbol.for('one')]: 'one' }, [Symbol.for('one')]));
```

- 1: {"myArray":['one', undefined, {}, Symbol]}, {}
- 2: {"myArray":['one', null,null,null]}, {}
- 3: {"myArray":['one', null,null,null]}, "{ [Symbol.for('one')]: 'one' }, [Symbol.for('one')]"
- 4: {"myArray":['one', undefined, function(){}, Symbol('')]}, {}

Answer

Answer: 2

JSON.stringify applies the below constraints to these values,

1. undefined, Functions, and Symbols are not valid JSON values. So those values are either omitted (in an object) or changed to null (in an array). Hence, it returns null values for the array elements.
2. All Symbol-keyed properties are completely ignored. Hence it returns an empty object ({}).

34. What is the output of below code

```javascript
class A {
  constructor() {
    console.log(new.target.name);
  }
}

class B extends A {
  constructor() {
    super();
  }
}

new A();
new B();
```

- 1: A, A
- 2: A, B

Answer

Answer: 2

With constructors, `new.target` refers to the constructor (it points to the class definition of the class which is being initialized) that was directly invoked by new.
This also applies to the case if the constructor is in a parent class and was delegated from a child constructor. 35. What is the output of below code javascript const [x, ...y,] = [1, 2, 3, 4]; console.log(x, y); - 1: 1, [2, 3, 4] - 2: 1, [2, 3] - 3: 1, [2] - 4: SyntaxError Answer Answer: 4 It throws a syntax error because the rest element should not have a trailing comma. You should always consider using a rest operator as the last element. 36. What is the output of below code javascript const {a: x = 10, b: y = 20} = {a: 30}; console.log(x); console.log(y); - 1: 30, 20 - 2: 10, 20 - 3: 10, undefined - 4: 30, undefined Answer Answer: 1 The object property follows below rules, 1. The object properties can be retrieved and assigned to a variable with a different name 2. The property assigned a default value when the retrieved value is `undefined` 37. What is the output of below code javascript function area({length = 10, width = 20}) { console.log(length*width); } area(); - 1: 200 - 2: Error - 3: undefined - 4: 0 Answer Answer: 2 If you leave out the right-hand side assignment for the destructuring object, the function will look for at least one argument to be supplied when invoked. Otherwise you will receive an error `Error: Cannot read property 'length' of undefined` as mentioned above. You can avoid the error with either of the below changes, 1. **Pass at least an empty object:** ``` javascript function area({length = 10, width = 20}) { console.log(length*width); } area({}); ``` 2. **Assign default empty object:** ``` javascript function area({length = 10, width = 20} = {}) { console.log(length*width); } area(); ``` 38. What is the output of below code javascript const props = [ { id: 1, name: 'John'}, { id: 2, name: 'Jack'}, { id: 3, name: 'Tom'} ]; const [,, { name }] = props; console.log(name); - 1: Tom - 2: Error - 3: undefined - 4: John Answer Answer: 1 It is possible to combine Array and Object destructuring. 
In this case, the third element in the array props accessed first followed by name property in the object. 39. What is the output of below code javascript function checkType(num = 1) { console.log(typeof num); } checkType(); checkType(undefined); checkType(''); checkType(null); - 1: number, undefined, string, object - 2: undefined, undefined, string, object - 3: number, number, string, object - 4: number, number, number, number Answer Answer: 3 If the function argument is set implicitly(not passing argument) or explicitly to undefined, the value of the argument is the default parameter. Whereas for other falsy values('' or null), the value of the argument is passed as a parameter. Hence, the result of function calls categorized as below, 1. The first two function calls logs number type since the type of default value is number 2. The type of '' and null values are string and object type respectively. 40. What is the output of below code javascript function add(item, items = []) { items.push(item); return items; } console.log(add('Orange')); console.log(add('Apple')); - 1: ['Orange'], ['Orange', 'Apple'] - 2: ['Orange'], ['Apple'] Answer Answer: 2 Since the default argument is evaluated at call time, a new object is created each time the function is called. So in this case, the new array is created and an element pushed to the default empty array. 41. What is the output of below code javascript function greet(greeting, name, message = greeting + ' ' + name) { console.log([greeting, name, message]); } greet('Hello', 'John'); greet('Hello', 'John', 'Good morning!'); - 1: SyntaxError - 2: ['Hello', 'John', 'Hello John'], ['Hello', 'John', 'Good morning!'] Answer Answer: 2 Since parameters defined earlier are available to later default parameters, this code snippet doesn't throw any error. 42. 
What is the output of the below code?

```javascript
function outer(f = inner()) {
  function inner() { return 'Inner' }
}
outer();
```

- 1: ReferenceError
- 2: Inner

Answer: 1

The functions and variables declared in a function body cannot be referred to from default value parameter initializers. If you still try to access one, a run-time ReferenceError is thrown (i.e., `inner` is not defined).

43. What is the output of the below code?

```javascript
function myFun(x, y, ...manyMoreArgs) {
  console.log(manyMoreArgs)
}

myFun(1, 2, 3, 4, 5);
myFun(1, 2);
```

- 1: [3, 4, 5], undefined
- 2: SyntaxError
- 3: [3, 4, 5], []
- 4: [3, 4, 5], [undefined]

Answer: 3

The rest parameter is used to hold the remaining arguments of a function, and it becomes an empty array if no extra arguments are provided.

44. What is the output of the below code?

```javascript
const obj = {'key': 'value'};
const array = [...obj];
console.log(array);
```

- 1: ['key', 'value']
- 2: TypeError
- 3: []
- 4: ['key']

Answer: 2

Spread syntax can be applied only to iterable objects. By default, objects are not iterable, but they become iterable when used in an array, or with iterating functions such as `map()`, `reduce()`, and `assign()`. If you still try it, it throws `TypeError: obj is not iterable`.

45. What is the output of the below code?

```javascript
function* myGenFunc() {
  yield 1;
  yield 2;
  yield 3;
}
var myGenObj = new myGenFunc;
console.log(myGenObj.next().value);
```

- 1: 1
- 2: undefined
- 3: SyntaxError
- 4: TypeError

Answer: 4

Generators are not constructible. If you still proceed, there will be an error saying "TypeError: myGenFunc is not a constructor".

46.
What is the output of the below code?

```javascript
function* yieldAndReturn() {
  yield 1;
  return 2;
  yield 3;
}

var myGenObj = yieldAndReturn()
console.log(myGenObj.next());
console.log(myGenObj.next());
console.log(myGenObj.next());
```

- 1: { value: 1, done: false }, { value: 2, done: true }, { value: undefined, done: true }
- 2: { value: 1, done: false }, { value: 2, done: false }, { value: undefined, done: true }
- 3: { value: 1, done: false }, { value: 2, done: true }, { value: 3, done: true }
- 4: { value: 1, done: false }, { value: 2, done: false }, { value: 3, done: true }

Answer: 1

A return statement in a generator function will make the generator finish. If a value is returned, it is set as the `value` property of the result object and the `done` property is set to true. When a generator is finished, subsequent `next()` calls return an object of this form: `{value: undefined, done: true}`.

47. What is the output of the below code?

```javascript
const myGenerator = (function *(){
  yield 1;
  yield 2;
  yield 3;
})();

for (const value of myGenerator) {
  console.log(value);
  break;
}

for (const value of myGenerator) {
  console.log(value);
}
```

- 1: 1,2,3 and 1,2,3
- 2: 1,2,3 and 4,5,6
- 3: 1 and 1
- 4: 1

Answer: 4

A generator should not be re-used once its iterator is closed. i.e., Upon exiting a loop (on completion, or via break or return), the generator is closed, and trying to iterate over it again does not yield any more results. Hence, the second loop doesn't print any value.

48. What is the output of the below code?

```javascript
const num = 0o38;
console.log(num);
```

- 1: SyntaxError
- 2: 38

Answer: 1

If you use an invalid digit (outside of the 0-7 range) in an octal literal, JavaScript throws a SyntaxError. In ES5, it treats such an octal literal as a decimal number.

49.
What is the output of the below code?

```javascript
const squareObj = new Square(10);
console.log(squareObj.area);

class Square {
  constructor(length) {
    this.length = length;
  }

  get area() {
    return this.length * this.length;
  }

  set area(value) {
    this.area = value;
  }
}
```

- 1: 100
- 2: ReferenceError

Answer: 2

Unlike function declarations, class declarations are not hoisted. i.e., You need to declare your class first and then access it; otherwise it throws a ReferenceError "Uncaught ReferenceError: Square is not defined".

**Note:** Class expressions are subject to the same hoisting restrictions as class declarations.

50. What is the output of the below code?

```javascript
function Person() { }

Person.prototype.walk = function() {
  return this;
}

Person.run = function() {
  return this;
}

let user = new Person();
let walk = user.walk;
console.log(walk());

let run = Person.run;
console.log(run());
```

- 1: undefined, undefined
- 2: Person, Person
- 3: SyntaxError
- 4: Window, Window

Answer: 4

When a regular or prototype method is called without a receiver for **this**, the method returns the initial `this` value if it is not undefined; otherwise the global window object is returned. In our case, the initial `this` value is undefined, so both methods return the window object.

51. What is the output of the below code?

```javascript
class Vehicle {
  constructor(name) {
    this.name = name;
  }

  start() {
    console.log(`${this.name} vehicle started`);
  }
}

class Car extends Vehicle {
  start() {
    console.log(`${this.name} car started`);
    super.start();
  }
}

const car = new Car('BMW');
console.log(car.start());
```

- 1: SyntaxError
- 2: BMW vehicle started, BMW car started
- 3: BMW car started, BMW vehicle started
- 4: BMW car started, BMW car started

Answer: 3

The super keyword is used to call methods of a superclass. Unlike in some other languages, the super invocation doesn't need to be the first statement; the statements execute in the order they appear in the code.

52.
What is the output of the below code?

```javascript
const USER = {'age': 30};
USER.age = 25;
console.log(USER.age);
```

- 1: 30
- 2: 25
- 3: Uncaught TypeError
- 4: SyntaxError

Answer: 2

Even though the variable is declared as a constant, its content is an object, and the object's contents (e.g., its properties) can be altered. Hence, the change is valid in this case.

53. What is the output of the below code?

```javascript
console.log('🙂' === '🙂');
```

- 1: false
- 2: true

Answer: 2

Emojis are Unicode characters, and the code point for the smile symbol is "U+1F642". Comparing two identical emojis is equivalent to comparing identical strings. Hence, the output is always true.

54. What is the output of the below code?

```javascript
console.log(typeof typeof typeof true);
```

- 1: string
- 2: boolean
- 3: NaN
- 4: number

Answer: 1

The typeof operator on any primitive returns a string value. So even if you apply a chain of typeof operators on the return value, it is always string.

55. What is the output of the below code?

```javascript
let zero = new Number(0);

if (zero) {
  console.log("If");
} else {
  console.log("Else");
}
```

- 1: If
- 2: Else
- 3: NaN
- 4: SyntaxError

Answer: 1

1. The typeof operator on `new Number` always returns object. i.e., `typeof new Number(0)` --> object.
2. Objects are always truthy in an if block.

Hence the above code block always goes to the if section.

56. What is the output of the below code in non-strict mode?

```javascript
let msg = "Good morning!!";
msg.name = "John";
console.log(msg.name);
```

- 1: ""
- 2: Error
- 3: John
- 4: Undefined

Answer: 4

It returns undefined in non-strict mode and throws an Error in strict mode. In non-strict mode, a wrapper object is created and receives the property, but that object disappears immediately, so the property is gone when it is accessed on the next line.
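Several of the answers above describe a fix without showing it. The corrected patterns can be run directly as plain JavaScript (question numbers refer to the quiz above):

```javascript
// Question 35: the rest element goes last, with no trailing comma.
const [x, ...y] = [1, 2, 3, 4];
console.log(x, y); // 1 [ 2, 3, 4 ]

// Question 44: spread an iterable view of the object instead,
// or use object spread into an object literal (a separate feature).
const obj = { key: 'value' };
const keys = [...Object.keys(obj)];
const copy = { ...obj };
console.log(keys, copy); // [ 'key' ] { key: 'value' }

// Question 50: bind() restores the `this` of a detached method.
function Person() {}
Person.prototype.walk = function () { return this; };
const user = new Person();
const boundWalk = user.walk.bind(user);
console.log(boundWalk() === user); // true

// Question 55: unwrap a Number wrapper before testing it,
// since every object (even new Number(0)) is truthy.
const zero = new Number(0);
console.log(typeof zero, Boolean(zero), zero.valueOf()); // object true 0
```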
https://dev.to/capscode/500-javascript-question-answers-with-explanation-29im
Example (from the Berkeley DB guide to transaction processing in C): opening a transactional environment and spawning a thread that calls DB_ENV->txn_checkpoint() once a minute.

```c
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include "db.h"

void *checkpoint_thread(void *);

int
main(void)
{
    int ret;
    u_int32_t env_flags;
    DB_ENV *envp;
    const char *db_home_dir = "/tmp/myEnvironment";
    pthread_t ptid;

    envp = NULL;

    /* Open the environment */
    ret = db_env_create(&envp, 0);
    if (ret != 0) {
        fprintf(stderr, "Error creating environment handle: %s\n",
            db_strerror(ret));
        return (EXIT_FAILURE);
    }

    env_flags = DB_CREATE     |  /* If the environment does not
                                  * exist, create it. */
                DB_INIT_LOCK  |  /* Initialize locking */
                DB_INIT_LOG   |  /* Initialize logging */
                DB_INIT_MPOOL |  /* Initialize the cache */
                DB_THREAD     |  /* Free-thread the env handle. */
                DB_INIT_TXN;     /* Initialize transactions */

    /* Open the environment. */
    ret = envp->open(envp, db_home_dir, env_flags, 0);
    if (ret != 0) {
        fprintf(stderr, "Error opening environment: %s\n",
            db_strerror(ret));
        goto err;
    }

    /* Start a checkpoint thread. */
    if ((ret = pthread_create(
        &ptid, NULL, checkpoint_thread, (void *)envp)) != 0) {
        fprintf(stderr,
            "txnapp: failed spawning checkpoint thread: %s\n",
            strerror(ret));
        goto err;
    }

    /*
     * All other threads and application shutdown code
     * omitted for brevity.
     */
    ...
}

void *
checkpoint_thread(void *arg)
{
    DB_ENV *dbenv;
    int ret;

    dbenv = arg;

    /* Checkpoint once a minute. */
    for (;; sleep(60))
        if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, 0)) != 0) {
            dbenv->err(dbenv, ret, "checkpoint thread");
            exit (1);
        }

    /* NOTREACHED */
}
```
http://idlebox.net/2011/apidocs/db-5.2.28.zip/gsg_txn/C/filemanagement.html
Creating an Advanced Google Maps Component in Ionic 2 By Josh Morony We all know how useful maps can be in a mobile application, and Google Maps is a great option to do just that. Of course, you can use the native Google Maps SDK through the use of a Cordova plugin, but I’m a fan of using the Google Maps JavaScript SDK. I’ve already covered how to do a basic Google Maps implementation in Ionic 2, but when using the JavaScript SDK it’s important to consider: What if the user does not have an Internet connection? It’s not unreasonable to make the maps unavailable if the user does not have an Internet connection, but how do we handle that gracefully? We don’t want an error occurring and breaking the application (because the Google Maps SDK hasn’t been loaded) or otherwise causing the maps not to work, so we need to consider the following: - What if the user does not have an Internet connection? - What if the user doesn’t have an Internet connection initially but does later? - What if the user does have an Internet connection initially but doesn’t later? To handle all of these scenarios, the solution we want to implement will: - Wait until a connection is available before loading the Google Maps SDK, rather than straight away - If the connection becomes unavailable, disable the Google Maps functionality - If the connection becomes available again, enable the Google Maps functionality again I rely on this functionality a lot and have already implemented it in Ionic 1 and Sencha Touch, so now I’m going to cover how to set up the same functionality in Ionic 2. Before We Get Started UPDATE: For a more up to date implementation of Google Maps in Ionic, you should check out this new tutorial. Before you go through this tutorial, you should have at least a basic understanding of Ionic 2 concepts and the differences to Ionic 1. You must also already have Ionic 2 installed on your machine. 
If you’re not familiar with Ionic 2 already, I’d recommend reading my Ionic 2 Beginners Guide first to get up and running and understand the basic concepts.

1. Generating a New Project

Let’s start off by simply generating a new Ionic 2 project by running the following command:

```shell
ionic start ionic2-advanced-maps blank --v2
```

We will eventually be making use of a provider for this application called ConnectivityService. This will handle detecting whether or not a user has an Internet connection available. We will go through the implementation later; for now we will just generate it. Run the following command to generate the provider:

```shell
ionic g provider ConnectivityService
```

In order to use this provider throughout the application we will need to add it to the app.module.ts file. Modify src/app/app.module.ts to reflect the following:

```typescript
import { NgModule } from '@angular/core';
import { IonicApp, IonicModule } from 'ionic-angular';
import { MyApp } from './app.component';
import { HomePage } from '../pages/home/home';
import { ConnectivityService } from '../providers/connectivity-service';

@NgModule({
  declarations: [
    MyApp,
    HomePage
  ],
  imports: [
    IonicModule.forRoot(MyApp)
  ],
  bootstrap: [IonicApp],
  entryComponents: [
    MyApp,
    HomePage
  ],
  providers: [ConnectivityService]
})
export class AppModule {}
```

2. Create the Google Maps Page

First, we are going to work on the home page which we will use to hold our Google Map. We will get a basic implementation of the class set up first, and then build on it. Modify src/pages/home/home.ts so that the constructor injects the services and kicks off the map loading:

```typescript
// ...
  constructor(public navCtrl: NavController, public connectivityService: ConnectivityService) {
    this.loadGoogleMaps();
  }
}
```

We’ve set up a few member variables here which we will make use of later; the only one you need to specifically worry about is the apiKey. This can be omitted, but if you have an API key for Google Maps (which you will need if your app will have a high load) then you can add it here. You will notice that at the end of the constructor we are making a call to loadGoogleMaps.
This is what will trigger all of our logic, but before we move on to that, we also need to create our connectivity service, which is also referenced in the constructor. We are also importing Geolocation from the Ionic Native library, which we will make use of soon, and we are using ViewChild and ElementRef to set up a reference to the map element that we will add to our template later. This way, we can simply add #map to the element in the HTML, and we will be able to set up a reference to it called mapElement by doing this:

```typescript
@ViewChild('map') mapElement: ElementRef;
```

We’ve also added `declare var google;` so that TypeScript won’t throw up any errors when we start using the google object.

3. Creating a Connectivity Service in Ionic 2

We’re going to make this service work both through the browser and when it is running on a device. We can much more accurately detect if the user has an Internet connection on a device if we use the Cordova Network Information plugin, so let’s add that by running the following command:

```shell
ionic plugin add cordova-plugin-network-information
```

We’ll also be making use of the Geolocation API, so feel free to add that plugin as well:

```shell
ionic plugin add cordova-plugin-geolocation
```

If you take a look at src/providers/connectivity-service.ts now it should look like this:

```typescript
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

@Injectable()
export class ConnectivityService {

  constructor(public http: Http) {
    console.log('Hello ConnectivityService Provider');
  }

}
```

This is what is automatically generated by the Ionic CLI, which is a nice starting point, but we are going to have to build on that.
Modify src/providers/connectivity-service.ts to reflect the following:

```typescript
import { Injectable } from '@angular/core';
import { Network } from 'ionic-native';
import { Platform } from 'ionic-angular';

declare var Connection;

@Injectable()
export class ConnectivityService {
  // ...
}
```

We are injecting the Platform service, which is provided by Ionic, to detect whether we are running on iOS or Android (or neither). We want to use the network information plugin to check if the user is online when we are running on a device, so we need to run some different code if they are running through a normal browser.

We’ve simply added two functions to this service which we will be able to call later: one to check if the user isOnline and one to check if the user isOffline. These both really do the same thing, and you could get away with having just the one function if you wanted. If we are running on a device we check the online status by checking navigator.connection.type, and if we are running through a browser we instead check navigator.onLine.

Now all we need to do is import this connectivity service into any class that we want to use it in (like we have already done in home.ts).

4. Load Google Maps only when Online

Ok now that we’ve got our connectivity service sorted we can get back to the logic of our Google Maps page. There’s actually quite a few steps we need to take care of, so I’m going to do a bit of a code dump here and then walk through it.
Modify src/pages/home: NavController, public connectivityService: ConnectivityService) { this.loadGoogleMaps(); } loadGoogleMaps(){ this.addConnectivityListeners(); if(typeof google == "undefined" || typeof google.maps == "undefined"){ console.log("Google maps JavaScript needs to be loaded."); this.disableMap(); if(this.connectivityService.isOnline()){ console.log("online, loading map"); //Load the SDK window['mapInit'] = () => { this.initMap(); this.enableMap(); } let script = document.createElement("script"); script.id = "googleMaps"; if(this.apiKey){ script.src = '' + this.apiKey + '&callback=mapInit'; } else { script.src = ''; } document.body.appendChild(script); } } else { if(this.connectivityService.isOnline()){ console.log("showing map"); this.initMap(); this.enableMap(); } else { console.log("disabling map"); this.disableMap(); } } } initMap(){ this.mapInitialised = true;); }); } disableMap(){ console.log("disable map"); } enableMap(){ console.log("enable map"); } addConnectivityListeners(){ let onOnline = () => { setTimeout(() => { if(typeof google == "undefined" || typeof google.maps == "undefined"){ this.loadGoogleMaps(); } else { if(!this.mapInitialised){ this.initMap(); } this.enableMap(); } }, 2000); }; let onOffline = () => { this.disableMap(); }; document.addEventListener('online', onOnline, false); document.addEventListener('offline', onOffline, false); } } As I mentioned before, the first thing we do is call the loadGoogleMaps function that we have now created. This first checks if the google object is available, which would mean the SDK has already been loaded. If the SDK has not been loaded then we check if the user is online and then we load it by injecting the script into the document. Notice that we also supply a callback function that will trigger when the SDK has been loaded, this way we know when it is safe to start doing stuff with the map. At this point, if there is no Internet connection then we don’t do anything. 
The next thing we do (assuming an Internet connection is available) is run the initMap function. This simply handles creating a new map by using the loaded Google Maps SDK and sets the coordinates to the user's current location using the Geolocation API.

The only other important bit of code here is the addConnectivityListeners function. This will constantly listen for when the user comes back online, and when they do it will trigger the whole loading process we just discussed above (if it has not been completed already). This code will also call the enableMap and disableMap functions every time the Internet connection is lost or gained. Right now these don’t do anything apart from log a message to the console, but you can modify these to take whatever action you need.

5. Add the Map to the Template

If you’ve tried running your code during this process you will have found that no map actually displays on the screen (and you’ll probably get some errors too). This is because we still need to add the map to our template and also add a bit of styling. Modify src/pages/home/home.html to reflect the following:

```html
<ion-header>
  <ion-navbar>
    <ion-title>Map</ion-title>
  </ion-navbar>
</ion-header>

<ion-content>
  <div #map id="map"></div>
</ion-content>
```

Modify map.scss to reflect the following:

```scss
.ios, .md {
  home-page {
    .scroll-content {
      height: 100%;
    }

    #map {
      width: 100%;
      height: 100%;
    }
  }
}
```

Summary

If you compare the code of this tutorial to my other Google Maps tutorial for Ionic 2, you’ll notice that it is a lot more complex. You can’t assume that the user will always have an Internet connection though, so a solution like this is necessary in a production environment. If you were to go with the simpler approach and your user didn’t have an Internet connection when they first opened the app, then it just wouldn’t work at all until they completely restart the app.
The good news is that this process never really changes; now that you’ve created one map component, you should be able to quite easily drop it into any of your applications and just build on top of it. If you’d like to see how to do some more things with Google Maps in Ionic 2, my other tutorial covers how to add markers and info windows to the map.
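The connectivity check at the heart of this tutorial doesn’t depend on Ionic at all. As a rough framework-free sketch (the onDevice, connectionType and browserOnline parameters are illustrative stand-ins for the Platform check, navigator.connection.type and navigator.onLine used above):

```javascript
// Decide online status: on a device, trust the network plugin's
// connection type; in a plain browser, fall back to navigator.onLine.
function isOnline({ onDevice, connectionType, browserOnline }) {
  if (onDevice) {
    return connectionType !== 'none';
  }
  return browserOnline;
}

console.log(isOnline({ onDevice: true, connectionType: 'wifi' }));  // true
console.log(isOnline({ onDevice: true, connectionType: 'none' }));  // false
console.log(isOnline({ onDevice: false, browserOnline: true }));    // true
```

Keeping the decision in one pure function like this also makes it easy to unit test, which is hard to do against the real navigator object.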
https://www.joshmorony.com/creating-an-advanced-google-maps-component-in-ionic-2/
Tips for debugging MIDlet startup issues

This article explains how to debug problems when you attempt to run your Java ME MIDlet on a real device. It covers both symptoms and likely causes.

Cannot Even Download!

Symptom: You are trying to install over the air (OTA, from an HTTP server), and you get a message like "unknown file type", or the JAD file displays in the browser as a text file.

If so, your server might not have the correct MIME types configured. A "MIME type" is the piece of information sent with a file that enables the receiver (like the browser) to know what kind of file it is, and process it correctly. Servers usually work out the MIME type by looking at the file name's extension, and looking that up in their configuration file. The types you need are:

- .jad: text/vnd.sun.j2me.app-descriptor
- .jar: application/java-archive

You may need to ask your server's administrator to add these for you.

Some devices do not like certain characters in the file names:

- Keep to printable ASCII characters (those with codes 33-126).
- Avoid characters that are often forbidden in file names, like "*", "?", etc. Avoid spaces.
- Keep to "a"-"z", "A"-"Z" and "0"-"9". Ideally, keep to lower case, to avoid problems between case-sensitive (Unix/Linux) and case-insensitive (Windows) file systems. Underscores ("_") are usually OK.
- Avoid having more than one "." in the name. Some devices get confused with names like "myapplication.1.0.jar", seeing the file type as "1.0.jar" instead of "jar", and thinking it is not an application.

Symptom: You see a message like "JAR too large" or "invalid JAR" when attempting to download, yet there is more than enough space on the device.

Some devices have maximum file size limits. JAD files should ideally be kept below 5k. JAR files may have device-specific limits. Series 60 devices do not impose JAR size limits, but Series 40 devices do (as do some devices from other manufacturers). Check the device specifications for the maximum JAR size. Check what is in your JAR...
sometimes over-sized JARs are caused by unwanted files creeping into the build process. Source code files and "thumbs.db" are common culprits.

Symptom: During an OTA download, the download stops prematurely, and the application is not installed.

Provided you install a JAD first, the phone knows how big the JAR will be before downloading it, and should not download a JAR it cannot install. While some devices will allow a JAR-only installation, this is not recommended. Use a JAD, ensuring that the MIDlet-Jar-Size is correct.

Network problems can also cause premature download termination. Typical problems are:

- Loss of signal - try downloading somewhere where the signal is stronger
- Switching between 3G and 2G - if you have a good 2G signal but a poor 3G signal, try disabling 3G on the device (usually there is an option in phone network settings; set it to "GSM" rather than "auto")
- Network download limitations - some networks may limit download sizes, so try downloading something smaller (say, less than 64k, and work up to establish the limit)

Sometimes, network limits appear to be specific to a cell, so you may find yourself unable to download a file, while someone in another city, using the same network, can.

Problems Installing by Bluetooth or Infrared on Series 60

Symptom: Installation fails when installing from the message inbox (non-OTA installation).

Sending JAD and JAR files to the Series 60 message inbox and installing from there is a frequent source of problems. Common messages refer to "JAD/JAR mismatch" or "Version already installed". Problems may also occur with signed builds (they appear to be unsigned, or to have invalid signatures).

The JAD and JAR will appear in the inbox as two separate items. Since you can only select one to install, the installation process searches the inbox for the other file. Sometimes, it may find a file from a previous installation (which will not match, or will match a version already installed).
Try emptying the inbox, re-sending the JAD and JAR, and installing again. After installing an application, delete it from the inbox (both items, if you used a JAD as well as a JAR).

Invalid JAR File

Symptom: The JAR downloads OK, but the device reports "invalid JAR file" or "invalid application", either immediately, or when you try to launch the application. Other symptoms might include "unable to start application" or "java.lang.ClassNotFoundException".

The first thing to check is the /META-INF/MANIFEST.MF file in the JAR. If you are installing a JAD, you will need to check that too. These files are plain text files, and can be viewed in Notepad. Remember to check the actual MANIFEST file from the JAR. Extract it using the command-line JAR utility, or WinZip.

These files must contain the MIDlet-Name, MIDlet-Version and MIDlet-Vendor attributes, the MicroEdition-Profile and MicroEdition-Configuration attributes, and a MIDlet-1 attribute of the form "MIDlet-1: display-name, icon-file-name, class-name" (see the example manifest below for the attribute names).

ClassNotFoundException may indicate that the class specified in MIDlet-1 is not present in the JAR. Remember to check what is actually in the JAR, not what you think is there.

Notes:

- Many devices will accept different version number formats, but some will require a three-part version, like "1.0.1".
- Some devices do not like version numbers with many digits in each part. Try reducing the three parts to just one digit each, like "1.2.3".
- The icon-file-name must refer to a PNG file, in the JAR, with the full path name (starting with a "/"). If there is no icon, the file name can be omitted. In this case, both commas are still needed.
- The class-name must refer to a class in the JAR, with the fully qualified name (including any package name), and that class must be a public class that extends javax.microedition.midlet.MIDlet.
- The MIDP and CLDC versions specified must be compatible with the device. Versions are backwards compatible, so a CLDC-1.1 device, for example, is always compatible with CLDC-1.0. A CLDC-1.0 device might refuse to run a CLDC-1.1 application.
- The JAR URL can be relative.
That is, if the JAR is in the same folder on the same server as the JAD, you need only the JAR file name. The JAR size must match the size of the JAR file exactly.

Where the same attribute appears in both JAD and Manifest, the value in each must be identical. Sometimes you get JAD/Manifest mismatch problems even though they do match. This can happen when the device or the network's HTTP proxy server caches an older version, when you're downloading OTA. Look for the browser's option for clearing its cache. If that doesn't help, try uploading the JAD and JAR again, but with different file names.

Example Manifest:

```
MIDlet-1: My Application, , com.me.MyMidlet
MIDlet-Name: My Application
MIDlet-Version: 0.1.0
MIDlet-Vendor: me
MicroEdition-Profile: MIDP-1.0
MicroEdition-Configuration: CLDC-1.0
```

Users of eclipseME need to configure these in the Application Descriptor Editor. There is corresponding documentation for the Eclipse MTJ, which replaces eclipseME.

Symptom: You see a message about "incompatible" or "unsupported version" (it may also be reported as "invalid JAR" or "invalid application").

A particularly common problem is where IDEs have automatically inserted "CLDC-1.1" and "MIDP-2.1" as the configuration and profile versions mentioned above. Try adjusting these to match your device (check the device specifications). Configuration and profile versions should be set to match the requirements of your application.

Application Error

Symptom: Device displays "Application Error" when the MIDlet is launched.

This indicates that an exception has been thrown, and not caught by the application. On some devices, you will be able to find out the name of the exception by selecting "Details".

Possible causes: The phrase "bad or missing stack map" means that your .class files have not been preverified. To run on CLDC VMs, Java class files must be processed by the preverifier tool in the Wireless Toolkit.
Errors relating to "class format" or "class verification" may also indicate this problem.

A common error is to preverify, then obfuscate. Some obfuscators do not recognize CLDC stack maps, and remove them from the .class files to save space (effectively "un-preverifying" the classes). Preverify after you obfuscate. Some obfuscators (such as newer versions of Proguard) are able to preverify for you as part of the process.

Nothing Happens

Symptom: You launch the application, but nothing happens. No error, no exception, nothing.

If you have code in your MIDlet's constructor, try moving it to the startApp() method. Code in the constructor can sometimes behave oddly, since the MIDlet might not be created completely yet.

Avoid doing anything that takes a lot of time in startApp(). Like any event handler, startApp() should return as quickly as possible. Things that take a long time include:

- Reading lots of records from RMS
- Reading files from the JAR
- HttpConnections

Create a separate Thread to perform these tasks. This also means you can provide some feedback to the user. Remember that calls to Display.setCurrent() are not acted upon immediately; you must return from startApp() before anything will be displayed.

No Luck?

Post a question in the Mobile Java Forum. Provide as much detail as possible, including any messages from the device, the device's model, and the contents of your JAD and MANIFEST files.

30 Sep 2009.
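Many of the install failures above boil down to JAD/Manifest attribute mismatches, which can be checked mechanically before deploying. As an illustration (this is not a tool from the article, and the parsing is simplified — real manifests wrap long lines at 72 bytes, which this ignores):

```javascript
// Parse "Key: Value" attribute lines into an object (simplified parser).
function parseAttributes(text) {
  const attrs = {};
  for (const line of text.split('\n')) {
    const i = line.indexOf(':');
    if (i > 0) {
      attrs[line.slice(0, i).trim()] = line.slice(i + 1).trim();
    }
  }
  return attrs;
}

// Report attributes present in both files but with different values --
// the classic cause of "JAD/JAR mismatch" errors.
function findMismatches(jadText, manifestText) {
  const jad = parseAttributes(jadText);
  const manifest = parseAttributes(manifestText);
  return Object.keys(jad).filter((k) => k in manifest && jad[k] !== manifest[k]);
}

const jad = 'MIDlet-Name: My Application\nMIDlet-Version: 0.1.0\nMIDlet-Jar-Size: 24311';
const manifest = 'MIDlet-Name: My Application\nMIDlet-Version: 0.1.1';
console.log(findMismatches(jad, manifest)); // [ 'MIDlet-Version' ]
```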
http://developer.nokia.com/community/wiki/Tips_for_debugging_MIDlet_startup_issues
Southern Seasons Magazine Holiday/Winter 2013-14 - Cover 1 First cover of Southern Seasons Holiday 2013 issue on newsstands December 2013. HOLIDAY 2013 M AG A ZINE THE XIV DALAI LAMA AT EMORY BUYING DIAMONDS ATLANTA HISTORY CENTER’S AT AUCTION SHEFFIELD HALE MCLEOD~HODGES DINING ~ FASHION PARTIES WEDDING LUXURY LIFESTYLES IN THE SOUTH FONDA: GEORGIA ON HER MIND Art by renowned illustrator Chris Silas Neal. Renowned COLLABORATION For generations, we have partnered with trusted advisors to help solve the most complex wealth issues. To find out how the highest caliber wealth advisory team in the business can complement all you offer your client, call Jack Sawyer at 404-760-2100 or visit wilmingtontrust.com. weALTH AdVISoRY | InVeSTMenT MAnAGeMenT | GLoBAL CAPITAL MARKeTS | ReTIReMenT PLAn SeRVICeS Š2013 Wilmington Trust Corporation. basler-fashion.com Lenox Square Mall 路 Atlanta SOUTHERN SEASONS MAGAZINE | 1 LET US REFRESH YOUR FURS FOR WINTER CLEANING & GLAZING ALTERATIONS CUSTOM DESIGN REPAIR PICK-UP & DELIVERY EXCEPTIONAL PRICING ON NEW FURS 4375 Cobb Parkway S.E. • Suite A Atlanta, GA 30339 • 404.659.2257 2 PUT YOUR EVENT IN THE SPOTLIGHT. The food and service are incredible. The space and design are stunning. Naturally, when you host your event here, you can expect a superior experience. The Kessel D. Stelling Ballroom offers the elegance and ambience that are sure to impress both you and your guests. For more information, contact one of our event planners at 770-916-2807 or visit cobbenergycentre.com/perfect. AT L A N TA , G A F O R T H E P E R F EC T P E R F O R M A N C E SOUTHERN SEASONS MAGAZINE | 3 Buckhead. $4,250,000 3270 West Paces Park Drive Deane Johnson 404.202.3522 Buckhead. $3,490,000 1165 West Conway Drive Betsy Akers 404.372.8144 Whitewater Creek. $2,850,000 4309 Sentinel Post Road Betsy Meagher 404.414.8440 Buckhead. $2,395,000 4280 Irma Court Jim Getzinger 404.307.4020 Milton. 
CONTENTS ~ WINTER 2013-14

IN EVERY ISSUE
Letter from the Editor
Letters to the Editor
Toast Worthy Anniversaries
McLeod-Hodges Wedding

PEOPLE & PLACES
Jane Fonda's Stardom & Philanthropy
His Holiness the Dalai Lama in Atlanta
Glitz Gifts for Her
Dazzling Diamond Deals at Auction
Monica Matters: Giving that Counts
Laura Seydel: The V-Day Campaign
Atlanta History Center's Sheffield Hale
PNC Bank's Cindy Widner Wall
Dr. Ronald Goldstein: Faces of Beauty
Grady's New Correll Cardiac Center
Ask Dr. Karin: Healthy Boundaries
Atlanta's Independent Schools
Gallery Views: Holiday Artists Markets
Exhibitions Calendar

SOCIETY
Parties for a Cause
Starfish Ball
On the Horizon
Zoo Atlanta Beastly Feast
Southern Seasons Launch Party
Basler/Bloomingdale Party
Su and Al Longman Party
A Meal to Remember
On the Go with Jenny Pruitt
Cause to Celebrate
Etcetera

CALENDAR
Performing Arts
Fun Around Town

STYLE
Tesa Render-Wallace at Saks
Evening Enchantment: Ravishing Red
Foxy Winter Wraps

TRAVEL
Texas Hill Country

DINING
Star-Studded Sushi at Umi
Dining Guide: Best Bites in Town
Sweet Treats

WEDDING
Gorgeous Gowns

COVER CREDITS
HOLIDAY COVER: JANE FONDA, PHOTO BY FIROOZ ZAHEDI. WINTER COVER: DALAI LAMA, PHOTO BY KAY HINTON/EMORY UNIVERSITY. LATE WINTER COVER: LAUREN MCLEOD AND MATT HODGES, PHOTO BY BARRIE & RIC MERSHON. HAIR BY AMBROSIA SALON: MARY CROSSMAN & SUSAN ANDERSON. MAKEUP: MICHAEL WHITESIDES & MELISSA PARKER.

FEATURES
WOMAN ON A MISSION: In the "Third Act" of her life, Jane Fonda continues to inspire and entertain.
CULTURAL EXCHANGE: The Dalai Lama in partnership with Emory University shares a wealth of ancient knowledge.
HISTORY IN THE MAKING: Sheffield Hale leads the Atlanta History Center into a dynamic future.
SOUTHERN ELEGANCE: Beautiful Atlanta bride Lauren McLeod marries Matt Hodges in an endearing ceremony.

Soaring Sculpture for Atlanta BeltLine
Weighing in at a whopping 13-plus tons, "Iron Column" is a mighty feat for the Atlanta BeltLine. Uniquely created out of the city's old track rails, spikes, plates, switches and anchors, the sculpture transforms artifacts from Atlanta's railroad history into a permanent piece of public art. The work was created by Phil Proctor and installed earlier this year on the Eastside Trail near the Historic Fourth Ward skate park. It recalls the Corinthian columns on the façade of the former Union Station, the city's main railroad station, which was demolished in 1972.
The sculpture was made possible by a generous donation from IIDA Georgia (International Interior Design Association). "We are pleased that our partnership with the Atlanta BeltLine was able to yield such a timeless piece of permanent art for the public to enjoy for years to come," said Ronnie Belizaire, president of IIDA Georgia. "It is partnerships like these that keep the Atlanta BeltLine's momentum strong and capture the spirit of our collective investment in our city," said Paul Morris, president/CEO of Atlanta BeltLine, Inc.

FOR MORE DETAILS, VISIT BELTLINE.ORG. PHOTO: ATLANTA BELTLINE, INC.

SOUTHERN SEASONS MAGAZINE ~ WINTER 2013 ~ VOLUME 8, NUMBER 4

PUBLISHER & EDITOR: Eileen Gordon
ASSOCIATE EDITOR: Ginger Strejcek
PRODUCTION MANAGER: Pamela White
CONTRIBUTING EDITOR: Monica Kaufman Pearson
TRAVEL EDITOR: Vivian Holley
ECO EDITOR: Laura Turner Seydel
DINING EDITOR: Jennifer Bradley Franklin
STYLE EDITOR: Gail O'Neill
Joey McCraw
PSYCHOLOGY EDITOR: Dr. Karin Smithson
SPECIAL CONTRIBUTOR: Dr. Ronald Goldstein
ADVERTISING EXECUTIVES: Lisa Fuller, Stephanie Mellom
STAFF PHOTOGRAPHERS: Jim Fitts, Nancy Jo McDaniel
OFFICE MANAGER: Gail Lanier
WEB SITE DESIGN: Pamela White and Ginger Strejcek
BOARD OF ADVISORS: Elizabeth and Carl Allen, Drs. Dina and John Giesler, Jack Sawyer, Pamela Smart, Dr. Bill Torres, Cindy and Bill Voyles
FOUNDER OF SOUTHERN SEASONS MAGAZINE: Bob Brown

For advertising information please call 404/459-7002.
THE NEW SEASON MAGAZINE, INC. dba SOUTHERN SEASONS MAGAZINE
6480 Roswell Road, Suite B · Atlanta, GA 30328
Fax 404.459.7077 · E-mail: info@southernseasons.net
The subscription rate is $18 for one year; $30 for two years; $42 for three years. Price includes state sales tax.
For advertising rates or subscription information, call 404.459.7002 or visit southernseasons.net to subscribe online.

LETTER FROM THE EDITOR
Enlightening Our City!!

Atlanta's standing as a global hub and a truly international city is highlighted not only by our outstanding residents, but by the people and events that we host. So many internationally known icons and entities are headquartered right here in the sleepy southern town I grew up in. But we're sleepy no more!

In a visionary alliance formed nearly a decade ago, Emory University began the cultural exchange with His Holiness the XIV Dalai Lama. This has become a groundbreaking partnership. The Tibetan culture offers much for us to learn, as its people have, over the centuries, become masters of mind-body medicine and exhibit the longevity to show its validity. There are so many elements to this fascinating curriculum that you must partake of this inspirational read.

Another international celebrity who brightens our city is none other than Jane Fonda, a two-time Oscar- and Emmy-winning actress, who was nominated for another Emmy this year for her portrayal of Leona Lansing in HBO's "The Newsroom." Her professional accolades are enhanced by her philanthropic commitment to some of the greatest needs in our southern community: teen pregnancy and obesity. Her amazingly effective GCAPP has helped to reduce teen pregnancy by a staggering 45% in Georgia, and now Jane has aggressively expanded GCAPP's mission to combat obesity and convey higher self-esteem in teens by raising their awareness of healthy choices.

Our list of historical venues certainly includes the Atlanta History Center. In this remarkable issue, president/CEO Sheffield Hale, an eloquent historian full of southern charm who has taken the helm of this unique cultural center, shares so many fascinating tales.

One of the loveliest weddings in Atlanta took place in August, as Lauren McLeod married Matt Hodges at The Ritz-Carlton, Buckhead. Their love story is enchanting, as Lauren says she fell in love with Matt in just 10 days! He proposed to her in a gorgeous lavender field in the South of France, and the rest, as they say, is happy history!

And as it is our impressive citizens who make us the most generous city in the world, we are delighted to profile Cindy Widner Wall of PNC Bank for her heartfelt support and participation in this city's most respected nonprofit endeavors.

Many other intriguing features and people profiles round out our holiday issue, which I proudly present for your reading pleasure.

Wishing you a wonderful holiday season,
Eileen Gordon
Publisher & Editor

PHOTOS: Jane Fonda and Eileen Gordon; Jane Fonda & the Dalai Lama.

NANCY JO MCDANIEL
Seasonal Specials Around the South

'Tis the season for some holiday fun happening over the river and through the woods. So pack up the car and hit the road to these family-friendly destinations:

The Ritz-Carlton Lodge, Reynolds Plantation. Ice skating, elf tuck-ins, eggnog body scrubs and more await at this AAA Five Diamond resort tucked in the idyllic setting of Lake Oconee in Greensboro. From roasting s'mores and decorating gingerbread houses to savoring seasonal fare, indulging in sensational spa treatments, and gliding around the ice rink, guests can enjoy a truly memorable winter escape. RitzCarltonLodge.com.

Biltmore Estate, Asheville, NC. Marvel at the spectacle of America's largest home decorated with dozens of trees, thousands of ornaments and miles of brightly lit evergreen garland. Biltmore is impeccably decked for the holidays through Jan. 12. The elaborate celebration harkens back to Christmas Eve 1895 – the first time Biltmore's founder, George Vanderbilt, hosted friends and family in his new home. biltmore.com.

Chattanooga, TN. Explore a nocturnal fantasyland at Rock City with a million twinkling lights high atop Lookout Mountain.
Have a Ruby Red Christmas with fresh fallen snow and horse-drawn carriage rides at Ruby Falls. Hop aboard a train for a trip to the North Pole. Go caroling on a river cruise. Shop 'til you drop at the Holiday Market. Chattanooga hosts a flurry of festivities throughout December. chattanoogafun.com/winter.

Callaway Gardens, Pine Mountain, GA. Named one of the "Top 10 Places to See Holiday Lights" by National Geographic Traveler, Callaway Gardens' Fantasy in Lights dazzles through Dec. 30, with eight million lights stretching more than five miles. Robin Lake Beach additionally features two narrated scenes with music and choreographed lights. For holiday shopping, Santa pictures, tasty treats and souvenirs, stop by the Christmas Village. callawaygardens.com.

LETTERS TO THE EDITOR

This is so exciting! I love the cover and can't wait to see the rest of the magazine! What a great way to highlight Karen, the ball, and Shepherd Center all at the same time!
FLORINA NEWCOMB, SPECIAL EVENTS ASSOCIATE, SHEPHERD CENTER FOUNDATION

WOW! I am so excited about the cover. It looks great and the article is beautifully written. Thank you a million times! It is such a great cause and the cover will generate such wonderful publicity for the Ball. The rest of the magazine is just gorgeous!
KAREN SPIEGEL, LEGENDARY PARTY CHAIR

I am sitting here with Southern Seasons Magazine and I am truly amazed by the cover and the article.
I am so honored and grateful that you included us in this edition and I can't thank you enough for it. It is a wedding gift that we can keep for as long as God gives us license to enjoy. Justin loved the magazine and the article. Thank you for making our wedding a very special event. I dreamed of my wedding day for so long and I couldn't ask for a better one. It was magical!
LINA TRIESCH, SAN ANTONIO, TEXAS

The fall issue looks absolutely stunning! Congratulations. All your hard work is very evident and each page is more beautiful than the other.
BILL LOWE, BILL LOWE GALLERY

Love the new covers!!
RONALD GOLDSTEIN, DDS, GOLDSTEIN, GARBER & SALAMA

You just outdid yourself on the beautiful Fall issue of Southern Seasons. The fabulous cover for Shepherd Center, with Karen and the adorable Frosty and Bentley, is absolutely wonderful. Karen looks beautiful – a real lady, with real warmth, vibrant personality and someone whose image reflects the best in volunteer leadership. It is a very special gift to Shepherd Center that you have created. Thank you so very much! We are both thrilled beyond words.
SUSAN TUCKER, TUCKER & ASSOCIATES

Thank you so much for your beautiful page on the Swan House Ball. We are grateful for its success!
BARBARA JOINER, SWAN HOUSE BALL CHAIR

Of course we were proud of the attention you gave the silver anniversary of the Buckhead Coalition in your Fall edition, but of greater importance is the high quality of your periodical with its coverage of so many people and events in our community. To have a sophisticated magazine identified with Buckhead helps further cultivate our image with the brand we desire. It's obvious you work diligently to reach perfection, which must benefit your advertisers equally to your subscribers. This formula for success shows through issue after issue.
SAM MASSELL, PRESIDENT, BUCKHEAD COALITION

Wow, what great coverage in the Fall issue for everyone.
I saw so many familiar faces of so many great folks in the Atlanta spotlight that we have come to know over the years. I really liked all the coverage on our local folks and heroes who do and give so much; I couldn't put it down reading all the articles. Loved the one Monica [Pearson] did defending Paula Deen. Monica is definitely a Class Act. I saw a bride and groom who looked very familiar and, lo and behold, it was Marge and Jack [Sawyer] on their wedding day in 1948. Just had to say how cool that picture is. They are very special folks.
JIM BOEHM, CLASS ACT

Thank you so much for featuring Hublot's Ladies Luncheon in the Fall issue of Southern Seasons! We greatly appreciate your interest.
NATALIE NAJJAR MACKING, SR. ACCOUNT EXECUTIVE, PREMIER AGENCY

Just wanted to say a quick thank you for featuring our "Women On The Run" event in Southern Seasons.
MANUELA "M" IKENZE, EVENT PRODUCER

I want to thank the editor and your staff for the outstanding article and photos that are in your Fall 2013 edition. The quality of your magazine is superb. The editorials and features are above and beyond first rate. I place your magazine among the very best in the industry.
SYLVIA WEINSTOCK, SWEINSTOCK LLC, NEW YORK

Great article [on Hawaii] by Vivian Holley for Southern Seasons Magazine!
FOUR SEASONS RESORT HUALALAI AT HISTORIC KA'UPULEHU

We loved the most recent issue of Southern Seasons, especially the sensational cakes!
CAPEL KANE, LAFORCE + STEVENS, NEW YORK

The magazine is incredible! Robin Meade is a gorgeous woman and what a cover she made! I think you have outdone yourself with this issue – loved reading it from front to back. By the way, I really enjoyed Monica Pearson's article about Paula Deen and I cannot agree more!
SU SO-LONGMAN, CEO, PALLET CENTRAL ENTERPRISES, INC.
GOT A COMMENT? WE'D LOVE TO HEAR FROM YOU. DROP US A LINE AT INFO@SOUTHERNSEASONS.NET OR CALL 404/459-7002.

I'm so grateful to you for the coverage in the Fall issue of Southern Seasons.
ROBIN MEADE, NEWS ANCHOR, HLN

Thank you to Southern Seasons and to Laura Seydel for the wonderful article she wrote and you all ran about Georgia Interfaith Power & Light in the Fall 2013 issue. We are grateful for Laura's witness as she cares for Creation and for Southern Seasons for helping spread this great message. We have already made several great connections and are looking forward to helping even more congregations care for creation.
REV. ALEXIS CHASE, EXECUTIVE DIRECTOR, GEORGIA INTERFAITH POWER & LIGHT

We had so much fun at the Southern Seasons party and the magazine is beautiful. The color quality is terrific – and so are the owners!
BRENDA SMITH, ATLANTA

Thank you so much for the great coverage of the Atlanta Speech School Language & Literacy Gala in your fall issue! We are so appreciative!
CATHERINE MITCHELL JAXON

We are thrilled with the gorgeous feature of Maggie and Brian's wedding!
REICHMAN PHOTOGRAPHY

Because of your generosity, this was our most successful Blue Jean Ball in history, raising nearly $110,000 – a 62% increase from last year. Remember to save the date for next year's annual ball of Crime Stoppers Greater Atlanta: Oct. 18, 2014.
ATLANTA POLICE FOUNDATION

Cheers!

Candy Cane Lane
2-1/2 OZ. VAN GOGH BLUE VODKA
1 OZ. WHITE CRÈME DE MENTHE
1/2 OZ. PEPPERMINT SCHNAPPS
CREAM
SPLASH OF GRENADINE
PEPPERMINT CANDY (GARNISH)
Add a dash of grenadine into the bottom of a chilled martini glass and set aside.
Pour the rest of the ingredients into a cocktail shaker filled with ice. Shake well and strain into the glass with grenadine to create a "swirl" effect. Garnish with peppermint candy.

Mistletoe Martini
1.5 OZ. VIKINGFJORD CHOCOLATE RASPBERRY VODKA
.25 OZ. CRÈME DE CACAO
.75 OZ. GALLIANO RISTRETTO
1 OZ. SWEETENED HEAVY CREAM
Combine first three ingredients in a mixing glass and add ice. Stir, strain and serve up in a cocktail glass, gently pour cream on top so it floats, and garnish with three raspberries on a toothpick.

Go Red for Women!
Created by women, for women, Go Red For Women® is the American Heart Association's national movement to end heart disease in women. It promotes awareness and advocates for more research and swifter action for women's heart health. Here are some ways to get involved:

Feb. 7, 2014: National Wear Red Day. Wear red to support the cause, celebrate survivors and educate women about how to prevent heart disease. goredforwomen.org

Feb. 8, 2014: Go Red Connect Event. Connect with the campaign through free screenings, educational materials, survivors' stories, and other means of creating awareness.

May 20, 2014: Go Red for Women Luncheon, with educational seminars and a keynote speaker. atlantagored.ahaevents.org

FOR INFORMATION ON THE METRO ATLANTA AHA, VISIT HEART.ORG/ATLANTA.

SOBERING STATISTICS ON HEART DISEASE IN WOMEN:
• Heart disease is the No. 1 killer of women, and is more deadly than all forms of cancer combined.
• Heart disease causes 1 in 3 women's deaths each year, killing approximately one woman every minute.
• An estimated 43 million women in the U.S. are affected by heart disease.
• 90% of women have one or more risk factors for developing heart disease.
• Women comprise only 24% of participants in all heart-related studies.
• Since 1984, more women than men have died each year from heart disease.
• The symptoms of heart disease can be different in women and men, and are often misunderstood.
• While 1 in 31 American women dies from breast cancer each year, 1 in 3 dies of heart disease.
• Only 1 in 5 American women believes that heart disease is her greatest health threat.

I WOULDN'T BE HERE WITHOUT GRADY.
IT WAS A BEAUTIFUL SUNDAY AFTERNOON. My entire family was in the car, on the way home from Sunday school, when we had to swerve to avoid a mattress in the road and our car flipped over. My husband found me lying on the expressway, bleeding profusely. I was taken by ambulance to Grady with a shattered pelvis, a broken hip, broken ribs and a collapsed lung. I was very scared. Everyone there was so kind and kept telling me my kids were okay. If it wasn't for Grady, I wouldn't be here today to tell my story.
Jamie Kleber, Trauma Survivor
Fonda's Third Act
MOVIES, TV AND GEORGIA ALWAYS ON HER MIND
BY EILEEN GORDON

PHOTO: STEPHANIE SIMON, JANE FONDA, PAT MITCHELL AND LAURA TURNER SEYDEL.

Jane Fonda is in a great place now, living very much in the present. In this phase of her amazing life, which she refers to as her "Third Act," she is focused, productive and happy.

Following her hiatus from the silver screen during her 10-year marriage to Ted Turner, Jane seems to have effortlessly glided right back into movie stardom as though she had never left. Personally, I think that her body of work in the past decade may be her best yet, as I realize how much I missed her onscreen presence.

NANCY JO MCDANIEL
From Jane's soon-to-be-released movie "This is Where I Leave You."

Her newest film roles are all intriguing characters, brilliantly portrayed. She is an actress! But this is what she does and not all that defines her. She is also a passionate activist and philanthropist. She's deeply committed to adolescent issues, having found a great need among teenage girls when she first moved to Georgia.

As the daughter of Hollywood legend Henry Fonda, Jane Fonda might seem to have led a fairy-tale life full of privilege. But the truth is that her mother died when she was just 12 years old, and this led to the most difficult and challenging years of her young life. As a teenager without her mother's love and guidance, Jane admits to being terribly unhappy and lonely as she tried to figure out who she was and what she wanted from life. She tried to be perfect and to fit into the mold of what she believed was expected of her, but she felt that something was wrong with her because her teen years were so emotionally dysfunctional.

She went on to achieve remarkable success, with a career that took her from fashion model to movie star and fitness guru. As a liberal feminist, she always sparked a few fires along the way. When Jane headed south to Atlanta in the early '90s, she took up arms for something that no one wanted to touch: teen pregnancy. In 1995, she founded the Georgia Campaign for Adolescent Pregnancy and Prevention (GCAPP) to help a target group that she personally and profoundly related to: teenagers.
The organization has since become one of the most effective nonprofits in the Southeast, helping drive a significant 45% decrease in teen pregnancy through amazingly effective grassroots efforts based on expert educational programs.

"From the time I moved to Atlanta and married Ted, I was interested in learning everything I could about this state," Fonda said, "and I actually might have learned more than even Ted knew."

The greater Atlanta region was divided by neighborhoods ranging from extreme wealth to extreme poverty, and the statistics on the poor communities included a nationwide record in teen pregnancy. Another startling factor was teen and adult obesity. The challenges to help teenagers became Fonda's motivation to shake things up.

Fonda learned from medical professionals that the part of the brain in which planning and decision-making take place is still under construction and does not fully develop until the early to mid 20s. This helps explain why some teens engage in risky behavior without thinking about the long-term consequences. Jane puts it this way: "Teenagers are full of raging hormones with a not-yet-mature brain. It's like a Ferrari engine in the body of a Model-T."

During GCAPP's first decade, its mission was to strive to eliminate teen pregnancy and to offer safe living environments for teen moms, where they learn to be good parents for their babies and finish high school while planning and working toward a future, including a career to support themselves and their children.

PHOTO: JANE FONDA, SAM WATERSTON AND JEFF DANIELS.

I'M SERIOUSLY THINKING ABOUT STARTING A LEONA LANSING FAN CLUB
Leona Lansing was the role Jane Fonda was born to play. This character embraces all the best of Jane's real-life strength, wit, sarcasm, brilliance and self-assuredness, as she is the cameo star of "The Newsroom," HBO's cable series about what really goes on inside one of the top cable news networks in the country.
Only the characters' interpersonal relationships are fiction, as writer/producer Aaron Sorkin has brilliantly structured the story lines around actual worldwide news events.

PHOTO: SAM WATERSTON, JANE FONDA AND MARCIA GAY HARDEN.

Fonda's compassion and empathy for Georgia's teenage girls did not just begin or end when she learned the alarming rate of teenage pregnancy in our state.

PHOTOS BY ROSS HENDERSON: GCAPP Board member Sonya Thompson and Daniel Meachum with GCAPP President and CEO Vikki Millender-Morrow. Betsy Feltus, a special guest who came from her Natchez, MS home for the occasion of the Patron Party, looked admiringly at Jane Fonda and at her own daughter Ginny Brewer, who is Patron Committee Co-chair and a GCAPP Board member.

As is Fonda's nature, she set out to learn everything she could about the teen pregnancy explosion within this largely economically challenged population. She consulted every expert she could find to understand this phenomenon so that she could effectively combat it. What she learned is that the socio-economic issues contributing to this were immense, including the lack of parental supervision in mostly single-mother households, where the mother worked a full-time job in addition to raising her kids.

Now was the time to take GCAPP to the next level. And Fonda gathered her incredible team and forged ahead. Still focused on this economically and socially challenged target group of teens, GCAPP has expanded its scope to address the high rate of obesity among adolescents (Georgia has the second highest rate in the nation), teaching young people how to make healthy choices about nutrition and their bodies. The organization likewise tweaked its name to the Georgia Campaign for Adolescent Power & Potential. "Power & Potential" also refers to her efforts to encourage these teens to make healthier choices in relationships by recognizing what a healthy relationship both looks like and feels like.
Through a generous grant to Emory School of Medicine, Fonda established the Jane Fonda Center for Adolescent Reproductive Health in 2001. The goal of the center is to prevent adolescent pregnancy through training and program development. "I am excited to be partnered with the Department of Gynecology and Obstetrics at Emory University, one of the world's leading universities, in providing research, education, training and inspiration to those who guide young people as they develop and mature," Fonda said.

FOR MORE INFORMATION, VISIT GCAPP.ORG OR JANEFONDACENTER.EMORY.EDU.

PHOTO BY ROSS HENDERSON: Danielle Beck and Jarey Milbury with GCAPP Development Director Kathy Egan.

JANE SAID, "Divorce doesn't stop the love." She says she has stayed very close with ex-husband Ted Turner, and her affection and respect for her step-daughter and son-in-law, Laura and Rutherford Seydel, is as close and enduring as any mother-daughter relationship could be.

...Found in Translation
A Wealth of Knowledge
BY EILEEN GORDON

"I am a simple monk" is the way His Holiness the Dalai Lama describes himself. But to the American scholars who have gotten to know him, he is an ambassador for humanity. What he brings to Atlanta's Emory University is an immeasurable cultural and academic exchange between two of the most diverse cultures on the planet.

KAY HINTON / EMORY UNIVERSITY: His Holiness the Dalai Lama, one of the world's most renowned and revered voices for peace and universal ethics, is the spiritual leader of Tibet and the 1989 recipient of the Nobel Peace Prize. He describes himself as a simple Buddhist monk.
“As Tony and I have said numerous times when struggling through the complicated security, timing and protocol logistics of producing these events, it’s not about the lunch,” Kloss said. “Beyond the décor, flowers and artwork, people are truly the centerpiece of these events.” HIS HOLINESS THE XIV DALAI LAMA • • • • • • • • • • • Discovered at age 2 as the reincarnation of the Dalai Lama Began studying as a monk at age 6 Assumed full Dalai Lama duties in 1950 Fled from Tibet in 1959 during the Tibetan Uprising due to fears of retaliation from Chinese government Has lived in India since political exile from Tibet Won Nobel Peace Prize in 1989 Bestowed with over 150 awards and recognitions Co-authored over 110 books Has traveled to more than 67 countries spanning 6 continents to teach Special visits to Emory in 2007, 2010 and 2013 Teaches compassion, peace and inter-religious understanding for all people his October, more than 12,000 guests flocked to the Gwinnett Center and Emory campus for the amazing opportunity to see, hear and witness His Holiness the XIV Dalai Lama, Tenzin Gyatso. “The Visit 2013” marked the Dalai Lama’s third trip to Atlanta as a Presidential Distinguished Professor at Emory University. In a remarkably unique and highly revered collaboration, the school works hand-in-hand with the Dalai Lama to integrate the best of Western and Tibetan Buddhist education through science, religion and mind/body medicine. In short: ancient Tibetan wisdom meets modern scientific understanding. Conceived in 1998, the Emory-Tibet Partnership (ETP) has succeeded tremendously. 
Adding to the cultural and religious exchanges, enterprising work is being done through the Emory-Tibet Science Initiative, bringing a modern science curriculum into the Tibetan monastic education; the Emory-Tibet Medical Science Initiative, studying traditional Tibetan healing in modern research labs; and Cognitively Based Compassion Training, researching the physiological, psychological and behavioral benefits of compassion.

"Our relationship has been a true mutual exchange of knowledge," said Emory University President James W. Wagner. "Emory brings modern science education to Tibetan monastics on campus and throughout India. In turn, the Emory community benefits from the Tibetan Buddhist contemplative traditions of compassion meditation and holistic medicine, with an emphasis on the interplay of mind and body.

"Each community receives the best of what the other has to offer in this intellectual and cultural interchange, creating a foundation for discoveries that expand our understanding of humanity," Wagner continued.

The Dalai Lama applauds the multi-faceted exchange. "This historic work is a testament to Emory's sincere commitment to advancing human knowledge by drawing on the unique and complementary strengths of the Tibetan and Western traditions," he said. "I firmly believe that education is an indispensable tool for the flourishing of human well-being and the creation of a just and peaceful society."

PHOTO: Emory University President James Wagner welcomes the Dalai Lama to Atlanta as a Presidential Distinguished Professor. The Dalai Lama accepted his first university appointment ever at Emory University in 2007.

"For more than 30 years I have been engaged in an ongoing exchange with scientists, exploring what modern scientific knowledge and the time-honored science of mind embodied by the Tibetan tradition can bring to each other's understanding of reality," said His Holiness the XIV Dalai Lama. "This is important because the greatest problems humanity faces today must be addressed not only on a material level, but also on a psychological and emotional level."

During the Dalai Lama's momentous trips to Atlanta, a myriad of public and private events – from talks, panels and summits to cultural celebrations and intimate gatherings – are planned for what has been coined "The Visit." Michael P. Kloss, Emory's Chief of Protocol and executive director of the Office of University Events, orchestrates much of it. In the past six years, he has produced nearly 50 hours of live events with the Dalai Lama for more than 50,000 guests and 300,000 online viewers. The exclusive social events, critical for financial support, are sandwiched between the live public events.

"We've welcomed the world to Atlanta in venues as varied as campus chapels, classrooms, Centennial Olympic Park, and just recently the Gwinnett Center, having outgrown our campus facilities," Kloss said. "One can feel nothing but privileged to be witness to these interactions, and to hear firsthand how lives have changed and more will follow. For the opportunity to play a small part in that greater good, I am deeply appreciative to Emory University."

Event designer Tony Brewer, who has been involved with Emory's private events for the Dalai Lama since 2007, says he has been honored and humbled to play a role. "To be able to provide the gift of beautiful surroundings to His Holiness, who has so enriched my own life, is the highest honor I could wish for," Brewer said.

The Dalai Lama has said that a "need for simple human-to-human relationships is becoming increasingly urgent." Through Emory University's relationship with His Holiness, this pure and sincere hope is being heard across Atlanta and beyond.
SOUTHERN SEASONS MAGAZINE | 33

BRYAN MELTZ / EMORY UNIVERSITY

GIFTS

’Tis the season to sparkle in Tiffany. Pendants with diamonds and gemstones in platinum from the 2013 Blue Book Collection: pear-shaped morganite, tanzanite drop, oval morganite briolette with 18 karat rose gold. Price upon request.

Ring in the New Year with Swarovski’s crystalline cocktail glasses that dazzle with crystal chaton-filled stems and a large faceted clear crystal base. $390, set of two.

The Nirvana Star Clutch by Swarovski glitters with 2,744 clear crystals on black satin on one side and black lambskin on the other. It comes with an exclusive Nirvana-cut crystal closure. Price upon request.

For novelty holiday finds, Bella Bag in Atlanta has a seasonal selection of collectibles, including the Louis Vuitton Red Alma snow globe ($749) and Trunk Porter glass dome ($799).

Love Diamonds? Learn to Love Auctions

Fine jewelry and watches can be acquired through the thriving auction industry at a steal of a deal! High-end auctions are becoming a new standard for the savvy buyer.

A cushion-shape diamond single-stone ring, weighing 14.07 carats, with a GIA certificate stating it is G colour, VS1 clarity. David Morris, London. Estimate: $500,000-700,000. This piece is among the selected jewels from the collection of renowned novelist Barbara Taylor Bradford to be sold at auction at Bonhams New Bond Street headquarters in London on Dec. 5.

RED BARON’S ALWAYS HAS A STUNNING SELECTION OF COVETED WATCHES LIKE THIS BREITLING NAVITIMER 18 KT AND STAINLESS WITH DIAMOND BEZEL CHRONOGRAPH (TOP) AND THIS SWISS FINE CHRONOGRAPH WITH ALLIGATOR BAND (BOTTOM). THIS 25.6 CARAT DIAMOND BRACELET SET IN 18K GOLD WAS RECENTLY SOLD AT THE OCTOBER RED BARON’S AUCTION FOR LESS THAN 20% OF APPRAISED VALUE.

Among the coveted offerings at auction are not just fine art and antiques, but unbelievable bargains in fine jewelry.
It is entirely conceivable that a five-carat-plus diamond dream ring can be acquired at a fraction of the retail cost, as the economic woes have fostered the newest and most popular way of investing in fine jewelry.

Within the past five years, advertisements saying “WE BUY GOLD!” have popped up all over the country. As the economy faltered, hurting the fine jewelry industry, the price of gold and other precious metals has skyrocketed, adding insult to injury for retail jewelers. Many of the fine jewelry stores have grown the “Estate Jewelry” departments within their showrooms, providing substantially discounted items to their clientele.

But the gold buyers who have thrived during this period are not the only businesses to flourish. Highly respected auction houses, including industry leaders like Bonhams, Freeman’s, Red Baron’s and Brunk, have gathered a windfall of diamond jewelry, watches and more, as auctions have become a win-win industry for both buyers and sellers. The auction format ensures that a fair price is paid for merchandise, driven by willing buyers and sellers. And while some gems will trade at a premium, others will trade at a very nice buyer’s price. Though auction houses do charge a commission for facilitating a sale, these fees are far less than retail markups.

HOW DO YOU KNOW IF AN ITEM IS AUTHENTIC?

Purveyors of fine jewelry historically have stamped their merchandise with identifying marks, which provide authentication. Cartier, Tiffany, Bulgari and others can be easily certified by the professional appraisers who work for the auction houses. The karat of the gold (14, 18 and sometimes 24!) can be determined through stamps and chemical testing. Reputable auction houses will accurately appraise and identify the treasures that go through their doors, providing a comfort level to new auction buyers.

A GUIDE TO GETTING IN THE AUCTION LOOP

So, just how do you get the best pieces for the best prices? Be smart about it.
Always buy from a reputable auction house with a proven track record. Most offer a five-day return policy, giving the buyer time to have the item independently appraised. Also make sure the purchases are guaranteed, with a full refund available if the item is not what it is said to be. Do some background research by browsing catalogs, attending auctions, and checking comparable prices. Check out clarity, cut and color charts at the Gemological Institute of America website. Ask questions at the auction house. Be informed about what you want to buy.

A pair of cultured pearl and diamond earclips. Harry Winston, New York. Estimate: $80,000-120,000. These pieces are among the selected jewels from the collection of renowned novelist Barbara Taylor Bradford to be sold at auction at Bonhams New Bond Street headquarters in London on Dec. 5.

FROM FREEMAN’S AUCTION: A lady’s cultured pearl and diamond necklace. Each pearl approx. 8.5-9 mm, accented by a platinum and yellow gold ‘butterfly’ with diamond and colored sapphire set wings. Estimate: $5,000-7,000 / Result: $12,500.

An antique sapphire and diamond brooch. S J Phillips, London. Estimate: $100,000-150,000.

LOOKING TO SELL?

You might have some hidden gems tucked away in your jewelry box. Here are a few items trending in the jewelry market: colored gemstones, particularly emeralds from Colombia, sapphires from Kashmir and rubies from Burma; natural pearls; signed pieces by major designers like Van Cleef & Arpels, Buccellati, David Webb and Harry Winston (especially from older periods like Art Deco); and estate jewelry that has been in private hands for many years, like an old mine cut diamond ring or jeweled jabot pins.

FROM FREEMAN’S AUCTION: An impressive lady’s 14.87 carat fancy yellow diamond ring. Estimate: $225,000-325,000 / Result: $314,500.

Callanwolde Estate open for tours

Put on your walking shoes.
The 12-acre Callanwolde estate has opened its doors for weekday tours, showcasing the 27,000-square-foot mansion as well as the gardens and outer buildings. The impressive Gothic-Tudor Revival style mansion was built in 1920 for one of Atlanta’s preeminent families, Charles Howard Candler, eldest son of Asa Candler, founder of Coca-Cola. Saved from demolition in 1971 by the citizens of DeKalb County, the architectural landmark in the historic Druid Hills neighborhood of Atlanta now serves as a unique fine arts center and hub of cultural activity. It is listed on the National Register of Historic Places.

PHOTOGRAPHY BY DREW NEWMAN

Visitors will have the pleasure of listening to the Aeolian Organ, one of the few of its kind in the country. Specially designed for the house, it comprises 3,742 pipes. FOR DETAILS, VISIT CALLANWOLDE.ORG.

Brunk Auctions
Representing the South for Over 30 Years
Integrity • Discretion • Results
Visit us at • Asheville, North Carolina • 828-254-6846
Call our Atlanta representative Barbara Guillaume at 404-846-2183
Andrew Brunk NCAL 8830, Firm NCAL 3095, Robert S. Brunk NCAL 3041, Robert Ruggiero NCAL 7707

The most interesting antiques auction in the world. Only The Red Baron has an unparalleled eye. Buying and offering at auction the most exceptional antiques and jewelry from around the world. FINE JEWELRY • WATCHES • ANTIQUES • FINE ART • CARS • UNIQUE COLLECTABLES • AND MORE. Red Baron’s Antiques. Pre-register for our January auction. 8655 Roswell Road • Atlanta GA 30350 • 770.640.4604

NANCY JO MCDANIEL

Freeman’s Auction

Freeman’s, America’s oldest auction house, has been a leading fine art and antiques auctioneer and appraiser since 1805.
A local connection to the global market

Its extensive marketing, targeted advertising and the expansive reach of live internet bidding, combined with Freeman’s strategic alliance with Scotland’s oldest auction house, Lyon & Turnbull, provide a strong presence in the global marketplace, attracting a diverse group of bidders.

The Southeast team has a seasoned staff with an in-depth knowledge of all aspects of the auction business. With the strength of its Philadelphia departments, Freeman’s builds on a foundation of specialized knowledge and offers the full gamut of auction services: accurate and competitive cataloging, including pre-sale estimates and reserves; extensive research of property and market trends; in-house photography; catalogues; single-owner auctions; and a commitment to both consignors and buyers to provide seamless client service throughout the auction process.

The appraisal services include verbal auction estimates – free of charge – as well as formal fair market valuations for a variety of needs, including estate planning, estate tax, charitable donations, gift tax, insurance and retail replacement. Freeman’s expertise includes rare books, maps and manuscripts; English and American furniture, silver and decorative arts; decorative Asian arts; European and American paintings and sculpture; modern and contemporary art; jewelry and watches; and photographs and photobooks.

AT YOUR SERVICE IN THE SOUTHEAST

HOLEN MILES LEWIS, a native Atlantan, recently joined Freeman’s as director of Business Development and Trusts & Estates for the Southeast. She was formerly VP of Trusts & Estates for Christie’s in New York City. Her extensive experience ranges from generalist appraisal training to benefit auctioneering. A graduate of The Lovett School and a member of the Atlanta Debutante Club, Holen earned a bachelor’s degree in French from Middlebury College and a master’s degree in the Art Market and Connoisseurship from Christie’s Education in Manhattan.
COLIN CLARKE, Vice President, opened Freeman’s first regional office in Charlottesville in 2004 to focus on clients in the greater Southeast. He has over three decades of art business expertise, including his first post as a restorer for the Royal Collection.

An Invitation to Consign

Whether you are selling a single work of art or an entire collection, Freeman’s, America’s oldest auction house, will help you navigate the consignment process. With locations throughout the United States and the United Kingdom, we are your local connection to the global art market. Freeman’s Southeastern representatives Colin Clarke and Holen Miles Lewis, an Atlanta native, will be in your area the week of January 27 to evaluate fine art, antiques, and jewelry for the spring 2014 auction season. To discuss consignment options or for a complimentary and confidential appointment, please contact: Fine Art & Antiques: Colin Clarke, 434.409.4549, cclarke@freemansauction.com. Fine Jewelry or Trusts & Estates: Holen Miles Lewis, 434.409.0114, hlewis@freemansauction.com. 126 Garrett Street, Charlottesville, VA 22902.

MONICA MATTERS

A gift in need... is a gift indeed

BY MONICA PEARSON

This is the season of lists: party list, grocery list, baking list, Christmas card list, gift list and a list for your lists. So what to give to a girlfriend who has everything? How about a gift to a girl who has nothing?

Just look at this list, the result of research for the Atlanta Women’s Foundation. This list is like a snowball in your face, a shock, cold and hard to accept: a 51 percent four-year graduation rate for Atlanta Public Schools in 2012; between 26 and 36 percent of single female-headed households living below the poverty level in five metro counties; one in five babies in those counties in 2009 born to mothers still in high school or with no diploma; and 81,000 girls living in poverty right now in those five counties of Clayton, Cobb, DeKalb, Fulton and Gwinnett.
Over 27 years, the Atlanta Women’s Foundation has given more than $12 million to organizations that help girls and women. Its annual fundraising luncheon, “Numbers Too Big To Ignore,” recently packed 1,200 women and a few men into the Tom Murphy Ballroom at the Georgia World Congress Center. As always there was an uplifting speaker, but what was most uplifting was that people gave at least $81 apiece to help one of those 81,000 girls. With a credit card, cash or check, they gave a way out of no way – hope, opportunity and a gift that literally could change a child’s life. Talk about a wise investment and a gift to the community.

Women are the carriers of life. Women nurture. But women and girls need access. “AWF supports organizations that lift women and girls up and out of poverty by increasing their access to services and opportunities for advancement.” Those words, written on the luncheon program, provide food for thought and action. Chew on that for a moment, especially as you write out your Christmas gift list.

Ask yourself, does Rhonda really need another necklace, or Lynell another pair of pajamas? Does Helen really need another sweater or scarf she probably will re-gift? Instead, write each of them a heartfelt note. List the qualities you love about them and tell them how your gift to them this year is a gift to the Atlanta Women’s Foundation in their name, to help mold another Rhonda, Lynell or Helen for the future. And all the while hum the song Beyoncé made famous, “Run the World (Girls),” knowing your gift can make that possible – if only they are given the chance. Now add that to your list of lists!

THE POWER IS YOURS

BY LAURA TURNER SEYDEL

Rise UP! Against Violence on V-Day 2014

Marking its 15th anniversary on Feb.
14, 2013, V-Day launched its largest campaign to date, One Billion Rising, inviting ONE BILLION women and those who love them to WALK OUT, DANCE, RISE UP and DEMAND an end to this violence. V-Day wants the world to see our collective strength, our numbers, our solidarity across borders. Over one billion people rose up in over 200 countries, and I am proud that here in Atlanta we had incredible participation and V-Day drew a large crowd to Woodruff Park to Rise.

Globally, one in three women and girls are raped and brutalized in their lifetime – that’s one billion women. These one billion women are our mothers, daughters, sisters and those who will be bringing future generations into the world. Eve Ensler, world-renowned author, playwright and activist, answered the call to this epidemic in 1998 by starting her V-Day campaign to end violence against women.

For 2014, Eve wants to take One Billion Rising deeper and bigger. Through her amazing work at the City of Joy in the DRC (Democratic Republic of the Congo), the rape capital of the world, Eve realized that women’s justice is not an isolated issue. You can’t look at women’s justice without looking at other forms of justice, like racial or climate justice. For example, I recently participated in the International Women’s Earth and Climate Initiative Summit, a solutions-based, multi-faceted effort established to engage women worldwide to take action as powerful stakeholders in climate change and sustainability solutions. One woman spoke about how in her village they were now having to plant crops four times a year due to climate change-related problems. Another woman spoke of water reserves drying up, and yet another on the widespread displacement of the indigenous people in the Amazon. These are burdens that fall disproportionately on the women of the world as the main caretakers of the family and of children.

Sara Blakely and Laura Turner Seydel at the One Billion Rising Rally in Woodruff Park on V-Day 2013.
So this year Eve is asking the world over to look to where you need justice and participate on V-Day in One Billion Rising for Justice. This will be a day of action and activism in which one billion of us will stand together and raise our voices as one to denounce injustice in all its forms against humanity and our planet. Women who are survivors of gender violence and those who love them will come together in community and solidarity outside places where we are entitled to justice. That includes colleges, schools, police stations, government offices, courtrooms, places of worship, military courts, embassies and sites of environmental injustice, as well as our workplaces and our homes. Imagine, one billion women releasing their stories, dancing and speaking out at places where they need justice, where they need an end to violence against women and girls.

Above left: The One Billion Rising Rally in Woodruff Park on Feb. 14, 2013. Above: Sara Blakely, founder of Spanx, at last year’s rally. Left: Pat Mitchell, president and CEO of The Paley Center for Media and V-Board member, interviews Eve Ensler, founder of the V-Day campaign to end violence against women. Below: Rev. Gerald Durley, Laura Seydel, Rev. Bernice King and Bishop Barbara King, who all spoke at the State Capitol during the 2013 One Billion Rising Rally.

It is our moral imperative to end the violence and rape of our bodies and of Mother Earth. One Billion Rising for Justice is a day not only to root out and expose injustice, but a day that empowers and strengthens us, and builds our communities as we come together and create new friendships. I hope you will join me and many of Atlanta’s community leaders on Feb. 14, 2014, for One Billion Rising for Justice Atlanta. I am also excited to share that Eve Ensler will be honored in Atlanta by the King Center on Jan. 18 with the 2014 Coretta Scott King A.N.G.E.L. Award. Muhammad Ali and Khalida Brohi will also be honored at the event (eventbrite.com).
We have a lot to be proud of here in Atlanta, so join me on V-Day 2014 to RISE UP against injustice!

CHECK OUT ONEBILLIONRISING.ORG TO SEE PICTURES AND INSPIRING VIDEO FROM ALL OVER THE WORLD FROM 2013’S ONE BILLION RISING.

Sheffield Hale – Steward of Atlanta’s History

GUARDIAN of Great Southern Tales

A native Atlantan, F. Sheffield Hale brings a lifelong passion for history to his position as president and CEO of the Atlanta History Center. Since stepping into this executive position in 2012, following 25 years of volunteer service with the AHC, Hale has been working diligently on the organization’s mission to connect people, history and culture. “We want to be seen as a valuable, relevant, exciting, convenient, engaging, modern, inclusive, immersive, thought-provoking and welcoming civic and cultural organization that impacts people’s lives and effects positive change in the community,” he said.

SHEFFIELD HALE WITH COSTUMED INTERPRETERS AT THE ATLANTA HISTORY CENTER.

YOUR FAMILY HAS A LONG HISTORY OF PRESERVING ATLANTA’S PAST. TO WHAT DO YOU ATTRIBUTE YOUR PASSION FOR HISTORY?

I grew up in a family of storytellers who related – usually with a great deal of embedded humor – stories of family, and of the people and history in the communities in which they lived. I was surrounded by their images and tales, and they were always a part of my life. History seemed personal and fun, and there was always a message on the importance of sharing family history through stories. That’s what history is – it’s storytelling, and those stories show how we are all the product of the collective efforts of those before us and how their own history can shape our personal lives.

I think the same kind of perspective is important when you are thinking about your relationship to the community in which you live. It was built by people you did not know, and it is important that you appreciate their individual and collective efforts that you enjoy today. I believe that this type of perspective has the desired civic effect of encouraging you to work in similar fashion to make the city a better place for those who will come after us. Institutions like the Atlanta History Center help us understand our history in a way that creates a shared sense of ownership and community that in turn will encourage us all to work for our mutual benefit and for those who will follow.

WHAT FASCINATES YOU ABOUT OUR CITY’S HISTORY?

The ability of Atlanta, from generation to generation, to create outstanding leaders who could see a vision of what a town with no natural reason to exist – no harbor, no navigable rivers, no minerals or particularly fertile land – could become: the center of the South. It did not happen by accident; it took sustained leadership and a lot of brass to build Atlanta into what she is today.

WHY IS THE HISTORIC PRESERVATION OF ATLANTA SO VITAL TO OUR FUTURE?

Preservation provides an antidote to our chronic cultural amnesia by reminding us of the past and promoting the telling of stories which make up our history. Historic preservation speaks both to memory and to the realization that we have an obligation to those who have come before us and will come after us. Historic preservation also provides us with a road map, in the context of the past 60 years of poor land planning and unsustainable design, as to how to structure a city. Human civilization had figured out a pretty good way to organize itself until we threw away many of the key precepts – grid streets, walkability and human-oriented design – when the car gave us the ability to radically reorder our relationships to each other and the landscape.
The reason that people are moving back into cities is that they like the way cities work – the scale, quality and layout that historic city centers and neighborhoods provide.

“Our mission is to connect people, history, and culture in ways that build an informed and inspired community – one that is connected to the past, engaged in the present, and empowered by their ability to shape their community’s future. Unlike any other local institution, we have the distinct ability and unique mission to connect all Atlantans to a greater understanding of both where we live and about those with whom we share and build our community.”

Prior to joining the Atlanta History Center, Sheffield Hale served as Chief Counsel of the American Cancer Society, Inc. and was a Partner practicing corporate law in the firm of Kilpatrick Townsend & Stockton LLP. He serves as a Trustee of the National Trust for Historic Preservation, the Robert W. Woodruff Library of the Atlanta University Center, and Fox Theatre, Inc. He holds a degree in history from the University of Georgia and a J.D. degree from the University of Virginia School of Law.

YOU’VE BEEN INVOLVED WITH THE ATLANTA HISTORY CENTER FOR A QUARTER OF A CENTURY. WHAT HAVE YOU SEEN HAPPENING DURING THIS TIME? WHAT’S ON THE HORIZON?

I’ve witnessed the evolution of this organization in both its infrastructure and its growing commitment to become more outward looking. The Center is at a critical point in which physical and programmatic changes are underway and change is happening. Nevertheless, to truly achieve a successful transformation, we must have a shift that is more than a physical transformation. We are creating a new culture here. Beyond documenting history, we want to emerge as an active, engaging, vibrant voice in telling Atlanta’s stories and connecting our communities through the AHC and all that we do – both on-site and in the community.

We are continuing to work on our capital campaign for the extensive renovation of the Atlanta History Museum building and the signature exhibit on the history of Atlanta. Construction will begin in the summer of 2014. In preparation for the renovation, the AHC has spent the past three years strategically strengthening and building new audiences through increased programming for adults, families and young professionals. One area of focus was to increase the diversity and quantity of our author program series at the Atlanta History Center and the Margaret Mitchell House and, so far, we have seen more than 4,000 lecture attendees over the past year.

WHAT ARE SOME NEW WAYS THAT THE AHC IS CONNECTING WITH THE COMMUNITY?

We have listened to our audiences, and in particular our family audiences. Perception of history and museums, and of the AHC, is our biggest barrier, and often excludes us from being seen as “age appropriate” for children, teens and families. Over the past year, we have increased our signature family festivals from 6 to 11 per year and have seen a 7% increase in our attendance. We attribute much of that growth to the festivals (which include Sheep to Shawl, the Fall Folklife Festival, Day of the Dead and the Holiday Spirit), as well as to the expansion of our monthly toddler program, Magic Mondays, and Homeschool Days.

Ultimately, the AHC strives to play an even more important interpretive role for the metro area by becoming known for its innovative and thought-provoking methods of presenting local and regional stories to a national audience and national stories to a local and regional audience.

The Atlanta History Museum anchors the 33-acre campus of the Atlanta History Center. A capital campaign is underway to redesign the building and its signature Atlanta History exhibition.
Perhaps the most transformative initiative to our programming is Meet the Past, a new multi-year programming initiative, funded by the Goizueta Foundation, that brings to life the stories of real people from history through museum theater and interactive interpretation. As we plan for a presence in the community that is not just driven by exhibitions, Meet the Past is expanding into a regular offering, with characters and museum theater experiences used to share the stories of our past throughout our exhibitions, historic houses and grounds.

For example, the new Smith Family Farm experience allows visitors to encounter a Piedmont Georgia farm facing the challenges of the Civil War on the home front. The visitor becomes a partner in history as they interact with historic characters, including members of the Smith family, enslaved men and women who lived on the farm, neighbors and others. The living history characters share personal stories, perspectives and highlights of what life was like 150 years ago in 1863, engaging visitors in conversations, demonstrations and daily tasks from past times.

One year ago, we launched Party with the Past, a free after-hours program series designed to engage young professionals with the History Center. With the tagline “Free History, Cold Beer,” this series shows that history can have some humor, and it gets us off of our campus and directly into other communities. Each party takes place at a different historic spot in the city, and more than 2,700 people have attended at locations like the Margaret Mitchell House, Auburn Avenue, Oakland Cemetery, the Fox Theatre, Variety Playhouse in Little Five Points, Swan House and Zoo Atlanta. We’re seeing people coming to Party with the Past and then visiting our campus for other events because they want to be more involved. On Jan. 1, 2013, the Franklin M.
Garrett Library and Studio became the new home of the Atlanta branch of StoryCorps, a tremendous partnership between StoryCorps, WABE 90.1 FM and the AHC. This innovative radio program allows people of all walks of life to record and share their stories.

TELL US ABOUT PLANS TO RESHAPE THE MUSEUM, AND THE EVOLUTION OF THE NEW ATLANTA STORY EXHIBITION.

The AHC was founded 87 years ago with a focus on preserving Atlanta’s history through collecting. The opening of the Atlanta History Museum in 1993 was a watershed event. The museum and exhibitions allowed us to expand our focus and provide a broader interpretation of Atlanta’s history through our collections. Through our Kenan Research Center, collections, exhibitions, programming and interpretation, we are weaving local history into a national story.

Our museum’s front presence on West Paces Ferry does not properly portray the AHC of today. Our current capital campaign focuses on improving our front yard presence from West Paces Ferry Road. We want our campus to become an open, vibrant and welcoming place for our visitors and community. How can our street presence become something iconic and inviting while respecting the commercial and residential area of our location? We want to design an engaging visitor experience from the moment they enter our property. I believe we are going to achieve that with a new building design for the museum. At the core of the capital campaign is the reimagining of the current Metropolitan Frontiers exhibition.

The Swan House at the Atlanta History Center is an elegant, classically styled mansion built in 1928 for the Edward H. Inman family, heirs to a cotton brokerage fortune. Tours allow visitors to explore this beautifully restored historic home.
The new Atlanta exhibition, filling the same space and footprint as Metropolitan Frontiers, will employ sustainable best practices for educational and immersive technology while remaining focused on real artifacts and documents from our collection. Presenting the personal stories of those who helped pave the way to the Atlanta we know today, major themes such as race and civil rights and Atlanta’s entrepreneurial spirit will provide the framework for visitors to learn how our city came to be. The exhibition will look at a mix of technology, programming and performances within exhibitions, online interactions, featured objects and more.

HISTORY ASIDE, THE CENTER SERVES AS A VAST GREENSPACE IN THE HEART OF BUCKHEAD, FILLED WITH GARDENS AND TRAILS. HOW ARE YOU LOOKING TO BETTER UTILIZE THIS AREA?

Our grounds are unique, but underutilized. How do we make that asset better available to our community and strengthen the benefit of the greenspace we provide in the heart of Buckhead? These are questions we are asking ourselves daily, and we look forward to finding the right mix of gardens and grounds improvements and programming to attract a new audience of outdoor enthusiasts to the AHC.

We want to continue the improvement we started in our back yard over the past 10 years with the restoration of both historic houses, the construction of the Quarry Garden Bridge, the Connor Brown Discovery Trail and the Mabel Dorn Reeder Amphitheater. We are looking to invest substantial resources in improving the infrastructure, accessibility and interpretation of our 22 acres of greenspace and historic gardens, with a focus on native Georgia and Piedmont plants. There is so much history to be told through our gardens – agricultural, medicinal, native heirloom plants – and this will perhaps be one of our biggest areas of transformation.

The Smith Family Farm at the Atlanta History Center includes the Tullie Smith House, a plantation-plain house built in the 1840s by the Robert Smith family.
SOUTHERN SEASONS MAGAZINE | 47

Southern Savvy
PNC BANK HAS A POWERFUL SOCIAL STRATEGY WITH THE DELIGHTFUL CINDY WIDNER WALL AT THE HELM.

Over the past year, Cindy Widner Wall has become one of the most recognizable faces of PNC Bank, N.A. as this acclaimed brand has expanded into the Southeast. She brings 36 years of experience in financial services to her esteemed position as Senior VP of Wealth Management for PNC's Atlanta market. But this Savannah native is regarded just as highly for her support of the Atlanta community as she is for her investment smarts. She is a fervent champion for Atlanta's humanitarian causes, participating in charitable events all over town. And the fact that PNC is right behind her makes it all the more satisfying.

WHO IS PNC?
PNC is the 5th largest bank in the U.S. and the 7th largest bank-held wealth manager with $122 billion in assets under management (as of 9/30/13). It has a proud 160-year history that dates back to the Pittsburgh Trust and Savings Company founded in 1852. PNC has been in the Atlanta market since March 2012, when it acquired RBC, gaining positions in North Carolina, Alabama and Georgia.

WHY JOIN PNC WEALTH MANAGEMENT®?
PNC has an excellent reputation! I'm a 36-year veteran with experience in retail, marketing, investments, insurance, cash management, risk management and wealth management. Joining PNC Wealth Management provides me an opportunity to have input in establishing our place in the Atlanta and Georgia markets. I've created a cohesive team to deliver comprehensive financial solutions to our clients.

HOW DOES THE PNC WEALTH MANAGEMENT TEAM WORK WITH CLIENTS?
PNC Wealth Management has more than 160 years serving the complex needs of high net worth individuals and families through a planning-based model with a team of professionals in investments, fiduciary, banking and wealth planning. Our disciplined approach is focused specifically on helping our clients achieve their financial goals – whether it's preparing for retirement, sending kids or grandkids to college, financing a second home or meeting charitable inclinations. Our team members all work together to seamlessly deliver the PNC experience to clients in a manner that works best for their goals. All team members are under one roof, allowing interaction among the relationship manager, wealth planner, trust advisor, investment advisor and private banker. The team is tenacious and willing to leave no stone unturned to find the right approach for the client.

HOW DOES PNC MEASURE SUCCESS?
We're continuing to grow our team, build relationships with new clients, and create a presence in the Atlanta market. We're gaining a reputation of delivering an exceptional client experience and earning a place as a client-trusted advisor. We've opened our satellite office in the John's Creek area this year in order to provide a closer experience for the clients in this market.

HOW HAS PNC MESHED WITH YOUR COMMITMENT TO THE ATLANTA COMMUNITY?
PNC's commitment to the Atlanta market and our reception here couldn't be warmer or more outstanding. Georgia has been my home all my adult life, and I'm committed to Atlanta. PNC is committed to the Atlanta community as well. One of our first commitments was a $1.2 million, 3-year grant from our Grow Up Great Foundation, a $350 million, multi-year initiative to help prepare children from birth to age five for school and life. At the local level, the grant expands pre-K art and science programs and quality early education in Georgia. PNC has supported numerous organizations that are vital to Atlanta's history and future, including the Atlanta History Center, the Shepherd Center, the Alliance Theatre, Fernbank Museum, Meals on Wheels Atlanta and a number of other Atlanta causes and organizations. Beyond financial contributions and donations, PNC employees are involved in these same organizations by serving on boards, volunteering and fund development. Through PNC's sponsorship of numerous events at the Shepherd Center, PNC has allowed me to expand my personal dedication to the Shepherd Center, where my nephew received rehabilitative care for a spinal injury. Another personal commitment which PNC supports is my involvement with the Alzheimer's Association. I am honored to be the vice chair this year and will be the chair next year. I'm also thrilled to be selected as one of the dancers for the 2014 Dancing Stars of Atlanta to raise funds in support of Alzheimer's research. I am doing this in honor of my father, for whom I am the primary caretaker as his Alzheimer's further develops.

MEET CINDY WIDNER WALL
A graduate of Armstrong Atlantic State University in Savannah, Cindy has been active in numerous community and church endeavors over the years, from supporting child advocacy and early childhood development to serving as an Elder in the Presbyterian Church. She's involved with the Atlanta History Center, the Shepherd Center, the Alliance Theatre, Fernbank, and Meals on Wheels Atlanta. Prior to joining PNC Wealth Management in March 2012, she worked for two other Atlanta banks, starting originally with the First National Bank of Atlanta. Cindy resides in Buckhead with her husband, James Wall; their two adult daughters, Amanda and Catherine, live nearby.

For more information, please contact PNC at 1-888-762-6226. Insurance may be offered by an affiliate of PNC, or by licensed insurance agencies that are not affiliated with PNC; in either case a licensed insurance affiliate will receive compensation if you choose to purchase insurance through these programs. A decision to purchase insurance will not affect the cost or availability of other products or services from PNC or its affiliates. Hawthorn and PNC do not provide legal or accounting advice and neither provides tax advice in the absence of a specific written engagement for Hawthorn to do so. "PNC Wealth Management," "Hawthorn, PNC Family Wealth" and "PNC Institutional Investments" are registered trademarks of The PNC Financial Services Group, Inc. Investments: Not FDIC Insured. No Bank Guarantee. May Lose Value. Insurance: Not FDIC Insured. No Bank or Federal Government Guarantee. May Lose Value.

THE MANY FACES OF BEAUTY
BY RONALD E. GOLDSTEIN, DDS

The Power of a Woman's Smile

We all do it…some more than others, but why? Perhaps actor and author John Cleese in "The Human Face" said it best: "A genuine smile gives us a warm glow of pleasure. A quick raise of the eyebrows grabs our attention – it is our most common expression of greeting." For many years I have observed that women seem to find it easier than men to flash a big smile. However, when I smile back, I think I am smiling just as much, but evidently not! I recently caught myself looking into my car mirror and I found what I thought was a big smile was just a medium grin. So was this just a fluke? I vowed to continue to test myself in my car mirror when I would let a woman driver go first at a stop sign, or a mother cross the road. I watched patiently. When they offered their friendly smiles and I smiled back, only to steal a look in my rearview mirror, I consistently found mine was anything but a big toothy grin! So now I was confused. Was it really easier for women to suddenly smile? And were women giving a fake smile or a true smile? Psychology author Paul Ekman says, "We smile for many different reasons." But he adds, "There is only one smile that is the true smile." And yet even our "fake" smile can offer benefits.
Perhaps Ekman's "polite smile" of thanks is the one women offer more easily than men. After all, a study by Yale psychologist Marianne LaFrance and others showed that women do tend to smile more than men, except when they are in similar situations. There are masking smiles to cover up what you are feeling, as well as enjoyment smiles. Cynthia Good, CEO of LittlePinkBook.com, agreed with the research that women smile more than men. She states, "Early on, women are taught to be pleasant. Smiling is one way to accomplish that. It's also a way to diffuse tension. Women are often under pressure to come across as 'nice,' and smiling makes a person appear friendly. Also, older women are told to smile so they look more youthful and attractive. In our youth-obsessed culture this comes in handy." Southern hospitality is a known fact, so I wondered if Southern women are taught to smile more when growing up. Atlanta psychiatrist Dr. Sheldon Cohen acknowledged, "In my experience, I think Southern women are taught to smile more and we like to smile back at them so they want to smile even more." Good adds that based on her experience growing up in Los Angeles but now living in Atlanta, "Women in the South smile more frequently. They are taught to be sweet and friendly." Maybe that is just another reason why I love living in Atlanta. In an article by Katy Waldman for Slate, Waldman wrote, "In a raft of studies, women report smiling more than men (and men report smiling less than women). They speak of grinning on the job with strangers, with relatives, in a dazzlingly diverse array of situations. An unscientific scan of high school yearbook photos, newspaper clippings, Facebook pics, and advertisements backs up those studies: Women flash their pearly whites far more frequently than men, at least when someone is taking their picture.
And in simulated job interviews, female participants salt their speech with smiles, while male test subjects are more likely to adopt neutral expressions."

CAN A SMILE PREDICT THE FUTURE?
Psychologist Dr. Dacher Keltner thinks so, and backs up his opinion with four decades of research showing that a smile can indeed predict the future. His study was based on how a single smile in a photograph can and did predict women's happiness four decades later! Keltner and his team studied photographs of young women at age 21 in their yearbook and surveyed them up to their 50s. They coded the presence of two muscle actions: the zygomatic major, which pulls the lip corners up; and the orbicularis oculi, which circles the eye and whose contraction is associated with pleasure. The genuine smile of enjoyment not only makes us feel good but it makes others feel good as well. Just ask Mr. Kadokawa, a smiling school tutor in Japan, who states, "In the past, Japanese culture discouraged smiling but now it is ok and people are learning how to smile." In India, Laughing Club founder Dr. Madan Kataria has many classes teaching people to laugh, which boosts the immune system and helps people to smile more. Even a "forced laugh" helps, since the body doesn't know the difference… It still works!

On a personal note, I have taken thousands of photographs of both men and women before any treatment to improve their smiles. But when asked to smile, 90% of the time I see a fake smile. However, after I have finished making improvements, they don't hesitate to dazzle their true smile. I believe that a major reason is the subconscious realization that they are now proud of how their smile looks.

CHANGING SMILES THROUGH COSMETIC DENTISTRY. HIS MULTIDISCIPLINARY PRACTICE IS IN ATLANTA, GEORGIA.
PHOTOGRAPHY BY DR. RONALD E. GOLDSTEIN
In fact, I have had numerous patients who had been hiding their smile for so long it took retraining of their muscles plus realizing they could and should smile. The realization that their smiles were indeed good enough allowed them to finally let others enjoy seeing it too. Another reason for not smiling was brought out by writer Meredith Lepore, who penned an article in defense of actress Kristen Stewart, who she said is known for not smiling. Stewart explained in the June issue of Vanity Fair that it was her anxiety over not appearing real that caused her not to smile. So, for whatever reason folks hesitate to smile, their decision does affect how others see them and form their opinions about them as well. We humans have facial muscles that enable us to make up to 7,000 distinct expressions, but we only use about 100 of them! So if you feel your smile is holding you back from enjoying life more or keeping you from getting a better job, it may be time to consider changing your smile. It might even change your life as well!

Atlanta model Daisy Santana has a natural smile that encourages viewers to smile back at her.

Correll Cardiac Center: New Center Enhances Scope of Services for Cardiac Patients

ADA LEE AND PETE CORRELL AT THE OPENING OF THE CORRELL CARDIAC CENTER AT GRADY. MEMBERS OF GRADY HEALTH FOUNDATION AND GRADY HEALTH SYSTEM BOARDS JOIN ADA LEE AND PETE CORRELL TO CUT THE RIBBON OFFICIALLY OPENING THE CORRELL CARDIAC CENTER. THE CORRELL CARDIAC CENTER HAS STATE-OF-THE-ART EQUIPMENT AND IS DESIGNED FOR THE BEST QUALITY OF PATIENT CARE.

Grady Health System unveiled its state-of-the-art Correll Cardiac Center this fall, expanding its current services and capabilities to include a 24/7 catheterization lab, an electrophysiology lab and additional family waiting areas. Named for Grady Memorial Hospital Board Chairman A.D.
"Pete" Correll and his wife Ada Lee, the new center occupies 11,000 square feet of renovated space on the second floor of Grady Memorial Hospital. "This is a proud day for Grady and the patients we serve," said Grady President and CEO John Haupert at the ribbon-cutting ceremony on Sept. 4. "It is very fitting that we honor our board chair with this new cardiac center. He has been the heart and soul of Grady's amazing transformation over the last five years."

ADA LEE AND PETE CORRELL CELEBRATE THE CENTER OPENING WITH SOME OF GRADY'S TOP DOCTORS.

Healthy BOUNDARIES
BY DR. KARIN SMITHSON

Overly-needies. Whiners. Emotional vampires. We all have had one of them in our lives – the "friend" that sucks the sunshine out of your afternoon, pulling any positive feelings that you might have brought with you right out of your core, letting your happiness drip down to the ground, right into a negative mudslide. They have hung onto your energy field and used you as a fueling station to meet and treat their needs, while you are left feeling exhausted, sad and de-fueled. So why do you keep letting them back in, only to repeat the same complain-drain-pain cycle with you? You likely know why: Because it's what you've always done, you don't want to confront them, and you don't know how to say "no." Am I right? If you are wondering if you have fallen into a pit of unhealthy quicksand, here is some validation that it might be time to make a change. If you relentlessly hear these phrases each time you hear this person's voice, it's time to talk: 1) "I...I...I...me...me...me..." It's all about them, all of the time. Even when you have a real crisis of your own, the channel somehow shifts to "MeMeTV," tuning your needs out. 2) "Why me? My life is the worst." Always the victim. And if you have actually been victimized, you can bet this person will trump your story with a worse one of their own. 3) "New designer purse?
Glad one of us can afford indulgence." Criticisms are slung at you like a slap in the face, often masked in guilt. All you know is you are constantly defending who you are, without understanding why you end up apologizing and feeling like you should change.

If you just relived your last 20 conversations with "that person," let's talk about the B-word. Yep, I'll say it: Boundaries. It's time to get some before all of the blurred lines send you staggering into insanity. If your friendship doesn't ebb and flow with well-rounded reciprocity, take a long look at what you might be losing to maintain your connection. Is it worth it? Truth is, YOU are half of the relationship, so by adjusting what is allowed to be siphoned from your end, you will shift the entire dynamic. If you are feeling depleted by someone else's needs, the healthiest thing for BOTH of you is to make a change. Staying stuck in the muck and mire with someone does not encourage them to change either. It is time to start protecting the energy field that is yours so that you stop giving all of your fuel to someone who merely turns it into a negative oil spill. It is time for you to start building healthy boundaries.

PHOTO BY MICHAEL STOTHARD. HAIR & MAKEUP BY ANGELA SUNI WEILBAECHER.

ASK DR. KARIN: 8 STEPS TO BUILDING UP BOUNDARIES

1) Decide that you are losing more than you are gaining. When you decide that change is best for your life, you have already conquered the toughest step. Your whole world can shift.

2) Validate WHY you are making the change by telling a trusted friend. By sharing your decision authentically with someone who will support you, you are creating an accountability check-in station, making you more likely to follow through.

3) Start using the "Broken Record Technique." Pick a simple statement that clearly creates a respectful boundary with this person, and say it every time the interaction starts to turn negative. As in, "I am sorry about that. I truly hope you figure it out. If you'll excuse me, I have something that needs to get done." PERIOD. Do not apologize, then excuse yourself. Do this at least five times in a row without fail.

4) Cut it all in half pronto: Conversations, favors, interactions.

5) Step away from the dynamic and let go of the guilt. Remind yourself that you are doing the best thing for both of you by breaking this cycle of negativity.

6) Express love for them. Do all of this respectfully, without degrading that person or creating any more negative energy.

7) Repeat. Repeat. Repeat. The other person will eventually turn to someone else.

8) Spend time with all of the healthy, positive people in your life with all of the new free time you have just gained!

Southern Seasons Magazine presents INDEPENDENT SCHOOLS

ATLANTA INDEPENDENT SCHOOLS – ACADEMIC
Alpharetta International Academy 4772 Webb Bridge Road, Alpharetta. 770/475-0558. aiamontessori.com Arlington Christian School 4500 Ridge Road, Fairburn. 770/964-9871. arlingtonchristian.org Atlanta Academy (The) 85 Mount Vernon Hwy. NE, Atlanta. 404/252-9555. atlantaacademy.com Atlanta Classical Christian Academy 3110 Sports Ave. SE, Smyrna. 770/874-8885. accak12.org Atlanta Country Day School 8725 Dunwoody Place, Suite 2, Atlanta, GA 30350. 770/998-0311. atlantacountrydayschool.com Atlanta International School 2890 North Fulton Dr., Atlanta. 404/841-3840. aischool.org Atlanta School (The) 1015 Edgewood Ave. NE, Atlanta. 404/688-9550. theatlantaschool.com Blessed Trinity Catholic High School 11320 Woodstock Road, Roswell. 678/277-9083. btcatholic.org Brandon Hall School 1701 Brandon Hall Dr., Atlanta. 770/394-8177. brandonhall.org Carmen Adventist School 1330 North Cobb Pkwy., Marietta. 770/424-0606. antfb7.adventistschoolconnect.org Cambridge Academy 2780 Flat Shoals Road, Decatur. 404/241-1321. acambridgeacademy.com Christ the King School 46 Peachtree Way, Atlanta. 404/233-0383.
christking.org Cobb County Christian School 545 Lorene Dr., Marietta. 770/434-1320. openbibleministry.org Cottage School (The) 770 Grimes Bridge Road, Roswell. 770/641-8688. cottageschool.org Covenant Christian School 3130 Atlanta Road, Smyrna. 770/435-1596. ccssmyrna.org Covered Bridge Academy 488 Hurt Road, Smyrna. 770/801-8292. coveredbridgeacademy.com Cumberland Christian Academy 2356 Clay Road, Austell. 770/819-6443. cumberlandchristian.org Davis Academy (The) 8105 Roberts Dr., Atlanta. 770/671-0085. davisacademy.org Dominion Christian High School 4607 Burnt Hickory Road, Marietta. 770/578-8150. dominionchristian.org Donnellan School (The) 4820 Long Island Dr., Atlanta. 404/255-0900. donnellan.org East Cobb Christian School 4616 Roswell Road NE, Marietta. 770/565-0881. eccs.org Eastside Christian School 2450 Lower Roswell Road, Marietta. 770/971-2332. eastsidechristianschool.com Epstein School (The) 335 Colewood Way NW, Atlanta. 404/250-5600. epsteinatlanta.org Faith Lutheran School 2111 Lower Roswell Road, Marietta. 770/973-8921. faithmarietta.com Fellowship Christian School 10965 Woodstock Road, Roswell. 770/992-4975. fellowshipchristianschool.org First Baptist Christian School 2958 North Main St., Kennesaw. 770/422-3254. fbcskennesaw.com First Montessori School of Atl. 5750 Long Island Dr. NW, Atlanta. 404/252-3910. firstmontessori.org Galloway School (The) 215 West Wieuca Road NW, Atlanta. 404/252-8389. gallowayschool.org Greenfield Hebrew Academy 5200 Northland Dr., Atlanta. 404/843-9900. ghacademy.org Heiskell School (The) 3260 Northside Dr. NW, Atlanta. 404/262-2233. heiskell.net Heritage Prep. School of Georgia 1700 Piedmont Avenue NE, Atlanta. 404/815-7711. heritageprep.org High Meadows School 1055 Willeo Road, Roswell. 770/993-2940. highmeadows.org Holy Innocents’ Episcopal School 805 Mount Vernon Hwy., Atlanta. 404/255-4026. hies.org Holy Spirit Preparatory School 4449 Northside Dr., Atlanta. 678/904-2811. 
holyspiritprep.org Landmark Christian School 50 East Broad St., Fairburn. 770/306-0647. landmarkchristianschool.org Lovett School (The) 4075 Paces Ferry Rd. NW, Atlanta. 404/262-3032. lovett.org

PLAYING IT Smart

Small classes, hands-on instruction, creative opportunities and personal accountability make The Cottage School experience a worthwhile investment.

Almost 30 years ago, two teachers with a dream and the desire to help students who learn differently came together to build The Cottage School. Today, the dream of academic and personal independence has come true for hundreds of students, all thanks to Jacque and Joe Digieso. What began in a spartan one-room "classroom" in a Roswell office park has now grown into an expansive 23-acre wooded campus. Overlooking the Chattahoochee River, the campus comprises a full-size gymnasium, five large cottage-style classroom buildings, an outdoor classroom and a mountain bike/trail-run path.

Serving grades 6-12, with enrollment representing more than 10 counties, The Cottage School engages its students in the pursuit of knowledge but with a unique edge. Their days, filled with academic classes, art, music and sports activities, are organized as if they were on the job. Students clock in on a time clock and earn a mock salary that reflects punctuality, preparedness, effective communication and leadership. Students operate within a two-week contract designed to help meet their daily and weekly responsibilities. This real-life structure enables them to manage their work load, social commitments and community involvement.

After a morning of academics, students and teachers shift gears and participate in extended experiential classes. Equine enthusiasts enjoy horseback riding, thespians practice in their drama classes, gardeners enjoy horticulture, and future chefs take culinary arts. Some students leave for their joint enrollment classes at Perimeter College and others enjoy internships at local businesses. Student athletes compete in basketball, volleyball, soccer, golf, tennis and mountain biking. Every student participates in service projects with local nonprofits such as Drake House and tutoring at Mimosa Elementary. Most recently, this small community of 160 students conducted a Back to School Canned Food Drive, which delivered over 2,000 food items to fill the shelves at North Fulton Community Charities, at a time when they needed it the most!

One student recently shared with a tour group, "The Cottage School doesn't prepare me for the next grade. It prepares me for life!" Small classes, engaging hands-on instruction, endless opportunities to be creative, and an emphasis on personal accountability are what make The Cottage School experience a worthwhile investment. Students of every graduating class enter college, art school, apprentice programs and military service empowered, engaged and ready to excel!

Co-founded by Jacque Digieso (above) in 1985, The Cottage School has grown from a one-room "classroom" in Roswell to a 23-acre campus that serves a diverse enrollment of 160 students in grades 6-12. Through the school's unique academic and experiential programming, students emerge as independent, capable and successful young adults.

ATLANTA INDEPENDENT SCHOOLS – ACADEMIC (CONTINUED)
Marist School 3790 Ashford Dunwoody Road, Atlanta. 770/457-7201. marist.com Mt. Bethel Christian Academy 4385 Lower Roswell Road, Marietta. 770/971-0245. mtbethelchristian.org Mt. Paran Christian School 1275 Stanley Road, Kennesaw. 770/578-0182. mtparanschool.com Mt. Vernon Presbyterian School 471 Mt. Vernon Hwy. NE, Atlanta. 404/252-3448. mtvernonschool.org North Cobb Christian School 4500 Lakeview Dr., Kennesaw. 770/975-0252. ncchristian.org Our Lady of Mercy Catholic High School 861 Hwy. 279, Fayetteville. 770/461-2202.
mercycatholic.org Pace Academy 966 W. Paces Ferry Road, Atlanta. 404/262-1345. paceacademy.org Paideia School (The) 1509 Ponce de Leon Ave., Atlanta. 404/377-3491. paideiaschool.org Riverside Military Academy 2001 Riverside Dr., Gainesville. 770/538-2938. 800/GO-CADET. riversidemilitary.com. Roswell Street Baptist Christian School 774 Roswell St., Marietta. 770/424-9824. roswellstreet.com Shiloh Hills Christian School 260 Hawkins Store Road NE, Kennesaw. 770/926-7729. shilohhills.com Shreiner Academy 1340 Terrell Mill Road, Marietta. 770/953-1340. shreiner.com St. Francis Schools 9375 Willeo Road, Roswell. 770/641-8257. 13440 Cogburn Road, Alpharetta. 678/339-9989. saintfrancisschools.com St. John the Evangelist 240 Arnold St., Hapeville. 404/767-4312. sjecs.org St. Joseph School 81 Lacy St., Marietta. 770/428-3328. stjosephschool.org St. Martin's Episcopal School 3110-A Ashford Dunwoody Road, Atlanta. 404/237-4260. stmartinschool.org Trinity School 4301 Northside Pkwy., Atlanta. 404/231-8100. trinityatl.org Walker School (The) 700 Cobb Pkwy. N, Marietta. 770/427-2689. thewalkerschool.org Wesleyan School 5405 Spalding Dr., Peachtree Corners. 770/448-7640. wesleyanschool.org Westminster Schools (The) 1424 W. Paces Ferry Road, Atlanta. 404/355-8673. westminster.net Whitefield Academy 1 Whitefield Dr., Mableton. 678/305-3000. whitefieldacademy.com Woodward Academy 1662 Rugby Ave., College Park. 404/765-4000. woodward.edu Yeshiva Atlanta High School 3130 Raymond Dr., Atlanta. 770/451-5299. yeshivaatlanta.org. Youth Christian School 4967 Brownsville Road, Powder Springs. 770/943-1394. youthchristian.org

ATLANTA INDEPENDENT SCHOOLS – SPECIAL NEEDS
Atlanta Speech School 3160 Northside Pkwy. NW, Atlanta. 404/233-5332. atlantaspeechschool.org Bedford School (The) 5665 Milam Road, Fairburn. 770/774-8001. thebedfordschool.org Brookwood Christian School 4728 Wood St., Acworth. 678/401-5855. brookwoodchristian.com Center Academy 3499 South Cobb Dr., Smyrna.
770/333-1616. centeracademy.com Cumberland Academy of GA 650 Mt. Vernon Hwy. NE, Atlanta. 404/835-9000. cumberlandacademy.org The Elaine Clark Center 5130 Peachtree Industrial Blvd., Chamblee. 770/458-3251. elaineclarkcenter.org Howard School (The) 1192 Foster St. NW, Atlanta. 404/377-7436. howardschool.org Jacob's Ladder Center 407 Hardscrabble Road, Roswell. 770/998-1017. jacobsladdercenter.com Joseph Sams School 280 Brandywine Blvd., Fayetteville. 770/461-5894. josephsamsschool.org Mill Springs Academy 13660 New Providence Road, Alpharetta. 770/360-1336. millsprings.org Porter Academy 200 Cox Road, Roswell. 770/594-1313. porteracademy.org Schenck School (The) 282 Mount Paran Road NW, Atlanta. 404/252-2591. schenck.org Sophia Academy 1199 Mt. Vernon Road, Atlanta. 404/303-8722. sophiaacademy.org Swift School (The) 300 Grimes Bridge Road, Roswell. 678/205-4988. theswiftschool.org. FOR MORE LISTINGS, VISIT PRIVATESCHOOLSDIRECTORY.COM.

School Profiles

BRANDON HALL
Brandon Hall is a 53-year-old college preparatory school for boarding and day students. A national model in research-based education, the school offers small classes, tutoring and learning methodologies to fit every child – all within the framework of rigorous academic studies, an active sports program, and a caring, diverse and international campus community. Head of School: Dr. John L. Singleton. Grades: 5-12. Enrollment: 160. SSPC Member.

CUMBERLAND ACADEMY
Cumberland Academy of Georgia specializes in the needs of children with high-functioning autism, Asperger's, LD, ADD and ADHD. Fully accredited, Cumberland Academy is a private, non-profit, independent school for students in grades 4-12 who have difficulty

-sufficient adults. We recognize and prepare our students for the rapid pace that science, technology, engineering, art and math are having on our attitudes and behaviors.
Engineering for the Future: The Brandon Hall Learning Experience promotes 5 essential intelligence and learning elements: Science, Technology, Engineering, Art and Math (STEAM); Global Experience; Environmental Responsibility; Wellness Intelligence and Service to others. Call 770-394-8177 for enrollment availability. Brandon Hall School: Atlanta's finest college preparatory day and boarding school for grades 5-12.

GALLERY VIEWS

ADAM NEWMAN IS AMONG THE GEORGIA ARTISTS PARTICIPATING IN TRINITY SCHOOL'S SPOTLIGHT ON ART, FEB. 3-8. HIS SCULPTURAL WORKS INCLUDE THIS CAPTIVATING PIECE FROM THE "NEWMANIC TRIBE" SERIES. SCULPTURAL WORK BY TOMMY PAYNE, AT SPOTLIGHT ON ART. "KISS ME" ORNAMENT BY LOCAL ARTIST ANGLYN PASS OF GLAK LOVE IS AMONG THE WORKS AVAILABLE AT THE SPRUILL GALLERY HOLIDAY ARTISTS MARKET, OPEN THROUGH DEC. 23. KELLY BLACKMON PHOTOGRAPHY.

Artists Markets Around Town

Atlanta is a mecca for art and there's no better time to peruse the unique offerings of both emerging and established artists than during the holidays at these annual shows and sales:
• The Spruill Gallery Holiday Artists Market is open through Dec. 23, filled with handcrafted items by local artisans. 4681 Ashford Dunwoody Road. Hours are 10 AM-6 PM Tues.-Sat. (with extended hours until 8 PM on Thurs.), and 12-5 PM Sun. A Jewelry Trunk Show will be Dec. 14. spruillarts.org.
• Trinity School's Spotlight on Art is one of the largest, most diverse art exhibitions and sales in the Southeast, with original works by 350 selected artists at 4301 Northside Pkwy. Though the show is Feb. 3-8, holiday shoppers can browse a special collection from the Trinity Artists Market at the Saks Fifth Avenue Gallery at Phipps Plaza, with items for sale now through Jan. 28. spotlightonart.com.
• Christmas at Callanwolde, Dec.
6-17, includes a Charming ARTifacts sale, featuring art work by Callanwolde instructors and students, in the Conservatory, in addition to a host of seasonal events at the bedazzled mansion at 980 Briarcliff Road NE, Atlanta. Holiday Handmade, a market featuring Atlanta Etsy sellers, will be from 1-4 PM Dec. 14. Open daily. christmasatcallanwolde.org.
• Apple Annie Craft Show, Dec. 6-7, is a juried show of original arts and crafts by more than 100 of the Southeast's finest artisans at St. Ann Catholic Church, 4905 Roswell Road, Marietta. Open 9 AM-7 PM Fri., 9 AM-2 PM Sat. st-ann.org.

Romare Bearden: A Black Odyssey

"Romare Bearden: A Black Odyssey" will be on view Dec. 14-March 9 at the Michael C. Carlos Museum of Emory University. In 1977, Romare Bearden (1911-1988), one of the most powerful and original artists of the 20th century, created a series of collages and watercolors based on Homer's epic poem, "The Odyssey." Bearden's own Odyssey series created an artistic bridge between classical mythology and African American culture. The works were displayed for only two months in New York City before being scattered to private collections and public art museums. This new exhibition from the Smithsonian Institution Traveling Exhibition Service represents the first full-scale presentation of these works outside of New York City.

HOME TO ITHACA, COLLAGE, 1977, COURTESY MOUNT HOLYOKE COLLEGE ART MUSEUM, SOUTH HADLEY, MA. © ROMARE BEARDEN FOUNDATION/LICENSED BY VAGA, NEW YORK. ROMARE BEARDEN: A BLACK ODYSSEY IS ORGANIZED BY THE SMITHSONIAN INSTITUTION TRAVELING EXHIBITION SERVICE IN COOPERATION WITH THE ROMARE BEARDEN FOUNDATION AND ESTATE AND DC MOORE GALLERY; AND SUPPORTED BY A GRANT FROM THE STAVROS NIARCHOS FOUNDATION.

Marco Polo's Epic Adventure

Embark on an epic journey at Fernbank Museum of Natural History, where "Marco Polo: Man & Myth" is making its North American debut through Jan. 5.
This major exhibition brings the incredible 13th-century travels, experiences and legends of Marco Polo to life, highlighting his 24-year trek from Venice to China. The exhibit includes rare and extraordinary objects from private collections and museums in Italy that illuminate the cultural practices, artistic traditions, unique landscapes and unusual animals he encountered. More than 80 objects, including coins, ceramics, artwork, maps and navigational tools, take visitors along the vast and ancient network of trade routes.
AN 8-FOOT MODEL OF A VENETIAN GALLEY SHIP, SIMILAR TO THE ONE SAILED BY THE POLOS TO BEGIN THEIR JOURNEY THROUGH THE MIDDLE EAST AND ASIA.
"… unidentified illnesses, encounters with bandits and thieves, unusual customs and practices, and some of the most spectacular and harsh landscapes in the world."
Exhibitions WINTER
ALAN AVERY ART COMPANY Through Jan. Thomas Hart Benton. 315 East Paces Ferry Road, Atl. Tues.-Sat. 404/237-0370. alanaveryartcompany.com.
ANN JACKSON GALLERY 932 Canton St., Roswell. 770/993-4783. annjacksongallery.com.
ANNE IRWIN FINE ART Through Dec. 31 "Small Works Show," artful treasures. Opening reception: 6-8:30 PM Nov. 29. 690 Miami Circle, #150, Atl. Mon.-Sat. 404/467-1200. anneirwinfineart.com.
ART STATION GALLERIES Through Jan. 11 Holiday exhibits. 5384 Manor Dr., Stone Mountain. Tues.-Sat. 770/469-1105. artstation.org.
ATL. BOTANICAL GARDEN 1345 Piedmont Ave. NE. Open daily. 404/876-5859. atlantabotanicalgarden.org.
ATLANTA CONTEMPORARY ART CENTER Through Dec. 14 Fallen Fruit's "Fallen Fruit of Atlanta," Steven L. Anderson's "Energy Strategies." Jan. 10-March 8 "Coloring," Bill Adams, Paul Stephen Benjamin, Rutherford Chang, Anne Lindberg, Kate Shepherd; "In Translation," Jonathan Bouknight, Ben Schonberger, Nathan Sharratt. Opening reception: 7 PM Jan. 10. 535 Means Street NW, Atl. Tues.-Sat. 404/688-1970. thecontemporary.org.
ATLANTA HISTORY CENTER Through Jan. 1 "Native Lands: Indians and Georgia." 130 West Paces Ferry Road. Open daily. 404/814-4000. atlantahistorycenter.com.
BESHARAT GALLERY Through Jan. 31 Steve McCurry, "Permanent Collection." 175 Peters St. SW, Atlanta. 404/524-4781. besharatgallery.com.
BILL LOWE GALLERY 1555 Peachtree St., Suite 100, Atl. Tues.-Sat. 404/352-8114. lowegallery.com.
BREMAN MUSEUM Through May 26 "Return To Rich's: The Story Behind the Store," interactive exhibit spans 150 years of Rich's history. 1440 Spring St., NW, Atl. 678/222-3700. thebreman.org.
CALLANWOLDE FINE ARTS CENTER GALLERY Jan. 10-March 7 Paintings by Ernesto Torres. 980 Briarcliff Road NE. Mon.-Sat. 404/872-5338. callanwolde.org.
CENTER FOR PUPPETRY ARTS Ongoing "Wild, Wooly & Wonderful," showcase of animal puppets representative of different cultures. 1404 Spring St. NW at 18th, Atl. Tues.-Sun. 404/873-3391. puppet.org.
DK GALLERY December "Small Works" show. Opening: 6-9 PM Dec. 6. 25 W. Park Square, Marietta. Tues.-Sat. 770/427-5377. dkgallery.us.
FERNBANK MUSEUM OF NATURAL HISTORY Through Jan. 5 "Marco Polo: Man & Myth," features a collection of 80-plus objects representing an epic journey that spanned 24 years and thousands of miles. Through Jan. 5 "Winter Wonderland," cultural celebration with decorated trees and artifacts. Opening Feb. 15 "Whales: Giants of the Deep." 767 Clifton Road, Atl. 404/929-6300. fernbankmuseum.org.
FIRST FRIDAY GALLERY WALKS Dec. 6, Jan. 3, Feb. 7 Self-guided walking tour of the galleries of the Roswell Art District, from Plum Tree Village on Canton Street to Elizabeth Way to Oak Street. 6-9 PM the first Friday of the month. 770/594-9511. roswellartdistrict.com.
GEORGIA MUSEUM OF ART Through Jan. 5 "Exuberance of Meaning: The Art Patronage of Catherine the Great." Through Jan. 5 "The Crossroads of Memory: Carroll Cloar and the American South." Through Jan. 5 "L'objet en mouvement: Early Abstract Film" and "Cercle et Carré and the International Spirit of Abstract Art." Through Jan. 12 "The Material of Culture: Renaissance Medals and Textiles from the Ulrich A. Middeldorf Collection." Jan. 25-April 20 "Art Interrupted: Advancing American Art and the Politics of Cultural Diplomacy." 90 Carlton St., Athens, East Campus of UGA, Performing and Visual Arts Complex. 706/542-4662. georgiamuseum.org.
HAGEDORN FOUNDATION GALLERY Through Jan. 4 "Down and Out in the South," Jan Banning. Through Jan. 4 "Monochrome Portraits," Trine Søndergaard. 425 Peachtree Hills Ave. #25, Atl. 404/492-7718. hfgallery.org.
HERITAGE SANDY SPRINGS MUSEUM Through April 2014 "Wit in Wood: The Folk Art of Moses Robinson." 6075 Sandy Springs Circle. 11 AM-2 PM Wed. & Sat. heritagesandysprings.org.
HIGH MUSEUM OF ART Through Jan. 5 "Witness: The Art of Jerry Pinkney." Through Jan. 12 "American Encounters: Genre Painting and Everyday Life." Through Jan. 19 "The Art of the Louvre's Tuileries Garden." Through Feb. 2 The Bunnen Collection of Photography. Through April 13 "Go West!" Art and artifacts from America's romance with the West. Through June 8 "Bangles to Benches: Contemporary Jewelry and Design." Woodruff Arts Center, 1280 Peachtree St. NE, Atl. Tues.-Sun. 404/733-HIGH. high.org.
HUFF HARRINGTON 4240 Rickenbacker Dr., Atl. Mon.-Sat. 404/257-0511. huffharrington.com.
JACKSON FINE ART Through Dec. 21 "Colorshape." 3115 East Shadowlawn Ave., Atl. Tues.-Sat. 404/233-3739. jacksonfineart.com.
LAGERQUIST GALLERY Through Dec. 3 Audubon Interpretations by Laura W. Adams, new paper collage on canvas. 3235 Paces Ferry Place NW, Atl. Tues.-Sat. 404/261-8273. lagerquistgallery.net.
LUMIÈRE GALLERY Through Dec. 21 "New Work – Photography by Vivian Maier," featuring recently released images from the John Maloof Collection. The Galleries of Peachtree Hills, 425 Peachtree Hills Ave., Ste. 29B. Tues.-Sat. 404/261-6100. lumieregallery.net.
MARCUS JEWISH COMMUNITY CENTER MJCCA-Zaban Park, 5342 Tilly Mill Road, Dunwoody. 678/812-4000. atlantajcc.org.
MARIETTA/COBB MUSEUM OF ART Through Dec. 15 "Shadow Circus: The Art of Kirsten Stingle and Lorraine Glessner." Through Dec. 15 The Portrait Society of Atlanta. Jan. 11-March 23 "Friends: Penley, Rossin and Steed." Jan. 11-March 23 "The Barnes Family Trust." 30 Atlanta St. 770/528-1444. mariettacobbartmuseum.org.
MASON MURER FINE ART Through Dec. 31 The Holiday Exhibition. 199 Armour Dr., Atl. Tues.-Sat. 404/879-1500. masonmurer.com.
MICHAEL C. CARLOS MUSEUM Through 2013 "Walking in the Footsteps of our Ancestors," The Melion-Clum Collection of Modern Southwestern Pottery. Through Feb. 2 "Conserving the Memory: The Fratelli Alinari Photographs of Rome." Dec. 14-March 9 "Romare Bearden: A Black Odyssey." Emory University, 571 South Kilgo Circle, Atl. 404/727-4282. carlos.emory.edu.
MOCA GA Through Jan. 18 Katherine Taylor; 2011/12 Working Artist Project; 2012/13 Working Artist Project; Artist Apprentices: Hailey Lowe, Ashley Schick, Nathan Sharratt, Jiovnni Tallington. Feb. 1-March 29 "SCORE: Sports + Art & Design," city-wide, multivenue exhibit. Museum of Contemporary Art of Ga., TULA Art Center, 75 Bennett St. 404/367-8700. mocaga.org.
MODA Through Jan. 26 "Paul Rand: Defining Design," celebrating the master of graphic design. Feb. 6-April 27 "SCORE: Sports + Art & Design," city-wide, multivenue exhibit. Museum of Design Atlanta, 1315 Peachtree St. Tues.-Sun. 404/979-6455. museumofdesign.org.
OGLETHORPE UNIVERSITY MUSEUM OF ART Through Dec. 8 "Picasso, Braque, Léger: 20th Century Modern Masters." "Victor Hugo: Selections from the Schlossberg Collection." "Haddon Sundblom: Santa Paintings." 4484 Peachtree Road, NE, Atl. Tues.-Sun. 404/364-8555. museum.oglethorpe.edu.
PRYOR FINE ART 764 Miami Circle, Suite 132, Atl. Mon.-Sat. 404/352-8775. pryorfineart.com.
R. ALEXANDER GALLERY Dec. 3-20 Holiday Small Works Show. 5933 Peachtree Industrial Blvd., B, Peachtree Corners. Tues.-Fri. 770/609-8662. ralexandergallery.com.
ROSWELL FINE ARTS ALLIANCE Feb. 15-March 1 Member show. Art Center East, 9100 Fouts Road, Roswell. rfaa.org.
SPRUILL GALLERY Through Dec. 23 Holiday Artists Market, unique and locally crafted gifts and decor. Market hours: Tues.-Sun. 4681 Ashford Dunwoody Road. 770/394-4019. spruillarts.org.
SWAN COACH HOUSE GALLERY Through Jan. 4 "Little Things Mean A Lot," holiday show of small works representing a broad range of art and artists. 3130 Slaton Dr., Atl. Tues.-Sat. 404/266-2636. swancoachhouse.com.
TEW GALLERIES Dec. 6-Jan. 6 Rimi Yang & Charles Keiger. Opening reception: 6-9 PM Dec. 6. 425 Peachtree Hills Ave., No. 24, Atl. Mon.-Sat. 404/869-0511. tewgalleries.com.
ALL TIMES AND DATES SUBJECT TO CHANGE. PLEASE CONTACT INDIVIDUAL VENUE FOR CONFIRMATION.
"BALLAST" BY KIRSTEN STINGLE
"Shadow Circus: The Art of Kirsten Stingle and Lorraine Glessner" at Marietta/Cobb Museum of Art
"The Art of the Louvre's Tuileries Garden" at High Museum
JAROSLAV PONCAR (CZECH, BORN 1945), THE TUILERIES GARDEN (LE JARDIN DES TUILERIES), 1985, GELATIN SILVER PRINT, 3 ½ X 14 INCHES, MUSÉE CARNAVALET-HISTOIRE DE PARIS, PH 1916. © JAROSLAV PONCAR
"Paul Rand: Defining Design" at MODA
The Clotheshorse Whisperer
Tesa Render-Wallace may be the newest addition to Saks Fifth Avenue's 5th Avenue Club, but the personal shopper and wardrobe stylist had a reputation for being a clotheshorse whisperer long before relocating to the flagship store at Phipps Plaza. Moreover, her bona fides included the kind of Rolodex and well-heeled following that made the new hire irresistible to Saks' vice president and general manager Cathie Wilson.
"In a world where most people's favorite subject is themselves," notes Wilson, "Tesa's unique gift of wanting to serve and help her clients makes her a rarity, and a real asset to Saks." When you consider Render-Wallace's refined aesthetic (instilled in art college and honed throughout a 23-year career with Neiman Marcus); her sterling reputation for attention to detail, follow-through and superior customer service; and her legendary eye, it's easy to see why she's in constant demand. But when asked what she thinks makes her the darling of the Best-Dressed set, Render-Wallace immediately cites her great ear. "Listening is key," she says. "I think my personality lends itself to being a good stylist because I've always been far more comfortable staying in the background…observing other people and hearing what they had to say." Whether picking up on verbal or non-verbal cues, such focus has proven indispensable when catering to the handful of power clients who entrust Render-Wallace with the logistics of ordering their seasonal wardrobes six to eight months in advance – frequently from the front rows of the collections in Paris, Milan and New York. But Render-Wallace is just as eager to cater to the less-experienced shopper who is in search of a bit more guidance – or even a flight of fancy!
The Clotheshorse BY GAIL O'NEILL
NEVER FULLY DRESSED WITHOUT A SMILE! Tesa Render-Wallace is renowned for both her high-style quotient and infectious laughter.
"Nothing brings me more pleasure than encouraging a client to try on something outside of her comfort zone, then seeing her eyes light up as she gazes at her reflection in the dressing room mirror," says Render-Wallace. Other job perks include the stream of midnight texts and emails to her smartphone from clients saying things like "Thank you!" or "I feel so beautiful tonight!," as they check in from the weddings, charity balls or holiday parties for which their super-stylist has dressed them.
Quite fitting for the woman who claims, "My clients are the first thing on my mind when I wake up every morning, and my last thought before going to sleep every night. I don't dream about clothes. I dream about my clients, in their clothes, at the most important functions in their lives." And what is Render-Wallace's dream for the woman seeking to revitalize her wardrobe and outlook? "Keep it current," she advises. "Every woman can embrace a touch of a trend. And when you try something new that works, one positive response can make you feel like you're 20 years old again!"
TESA'S MUST-HAVE TIPS
• A sexy bootie
• A great pop of color (green and fuchsia are trending)
• A great leather jacket
LAHCEN BOUFEDJI
Pearl white square sequin backless evening gown with crystal and feather collar by Norman Ambrose. normanambrose.com
STYLE
LITTLE DRUMMER BOY Meets Girl
From Napoleon Bonaparte to Canadian Mounties to the Marines, it seems servicemen in uniforms can't get enough of red accents, precision cuts and sparkly embellishments in silver and gold. The same can be said of this season's holiday dressing – where the Little Drummer Boy meets girl to spectacular effect. Hooah!
Ypsilon necklace crafted in 18kt white gold with diamonds. Available at Marinab.com.
Antoinette chandelier earrings crafted in platinum and 26.78 carats of oval, marquise and pear shaped diamonds by Marina B.
Corvallis silver clutch in smooth snake-embossed leather by Tommy Bahama. Available at tommybahama.com
Dress by B Michael America.
Sadie evening dress in cranberry silk gazar by Shoshanna. Available at Tulipano.
Feathery peep-toe evening shoe. Available at Stuart Weitzman.
Black stretch faille cocktail dress with grid mesh front. Check for availability at pamellaroland.com.
Cuff in 18kt white gold with diamonds. Available at Ivankatrumpcollection.com.
Pave Octagonal Drop Earrings in 18kt yellow gold from the Ivanka Trump Collection.
Embroidered tulle evening gown with sleeves and full tulle skirt. Available at basilsoda.com.
Coral sequin and gold bullion embroidered gown with sea pearl fan appliques by Norman Ambrose. normanambrose.com.
Strapless crepe georgette evening dress with embroidered neckline by Basil Soda.
21ST CENTURY FOX
Nothing evokes 1940s' Hollywood glamour like shrugs, stoles and over-sized sleeves. And with the resurgence of fox fur in outerwear, everything old is new again! Look for pencil skirts, ankle wraps and deco jewelry to round out the big picture.
Grey mink and crocodile tote by Nancy Gonzalez. Available by special order from Bergdorf Goodman NY.
African ruby drop necklace in 18kt yellow gold with diamonds. Available at Neiman Marcus Atlanta and Ivankatrumpcollection.com.
Gray wool felt jacket with emerald fox fur, gray wool felt pleated skirt, multi-stripe leather and suede gloves. carolinaherrera.com.
Glamoroso black pony high heels. Available at Stuart Weitzman.
Clipelope bay anaconda clutch. Available at Stuart Weitzman.
Gray wool mélange felt jacket with silver fox, gray wool flannel skirt and gray wool flannel belt, gray leather and brown suede gloves by Carolina Herrera.
Gray wool felt coat with amethyst lamb and fox fur by Carolina Herrera.
Pink granite wool and silk mikado blouse with silver fox fur sleeves and bow detail, meteorite wool and silk mikado skirt with front panel detail by Carolina Herrera.
African ruby drop earrings in 18kt yellow gold with diamonds from the Ivanka Trump Collection.
Horizontal African ruby ring in 18kt yellow gold with diamonds from the Ivanka Trump Collection.
Back Drama
Since launching her couture wedding and evening gown collections in Ashdod, Israel, three years ago, Inbal Dror has been leaving brides and grooms breathless with traditional materials like antique lace, tulle, pearls and Swarovski crystals. But the Queen of Back Drama has also earned a reputation for the number of jaws dropped and pearls clutched, in congregations from Tacoma to Tel Aviv, thanks to her more modern passions for silhouettes featuring low backs, form-fitting cuts and over-the-top glamour.
INBAL DROR
CONTACT INBALDROR.CO.IL/EN OR JOAN PILLOW BRIDAL IN ATLANTA, JOANPILLOWBRIDAL.COM
Anniversaries
Dawn & Ben Elliott, celebrating 25 years. November 13, 1988.
Dottie & Jerry Smith, celebrating 51 years. March 30, 1962.
© BRZOZOWSKA | ISTOCKPHOTO.COM
It was in these gorgeous lavender fields in the South of France that Matt Hodges proposed marriage to Lauren McLeod.
Weddings of the Season
McLeod ~ Hodges
Southern elegance resonated at the summer wedding of Ann Lauren McLeod and Matthew Philip Hodges, who were united in marriage on July 20, 2013, at The Ritz-Carlton, Buckhead. The captivating charm of Rabbi Bradley Lebenberg brought humor and warmth to the traditional Jewish ceremony, which was attended by nearly 400 friends and family members, who came from all over the country as well as abroad.
The fun-loving couple met in London, where Lauren moved to pursue her master's degree in 2009. Upon graduating from the London School of Economics, she went to work for the Royal Bank of Scotland in futures derivative sales. She met Matt, a corporate asset manager who earned his undergraduate and graduate degrees from Warwick University, at a bank social function. Matt proposed to Lauren in July 2012, in a beautiful lavender field in the South of France at the home of her godparents, Maureen and Edward Slater. This memorable setting was captured by Tony Brewer, who masterfully designed and staged both the ceremony and reception, punctuated with florals in shades of lavender to deep purple.
It was Lauren's godfather, Edward, who walked her down the aisle. In an endearing tribute to her father, Lauren's wedding gown was the debutante gown he had selected for her years earlier. The occasion was particularly poignant for both Lauren and her mother, Leslie McLeod, as they celebrated the remarkable life of the late Dr. Hugh McLeod, beloved father and husband, with his ethereal presence symbolized by a brilliant candle in a lantern hung from the chuppah. The dress was modified by Emily Mak of Shanghai, as was Leslie's glamorous off-the-shoulder black lace gown from Susan Lee.
Following the ceremony, guests enjoyed a festive reception with dinner and dancing. Wedding planner and family friend Kandy MacCarthy worked with Hope Nudelman of The Ritz-Carlton, Buckhead for a seamless event. The invitations, wedding program and all of the couple's stationery were brilliantly designed by Harrison Rohr of Exquisite Stationery. The couple enjoyed a five-night "minimoon" at The Ritz-Carlton, Canary Islands in Tenerife, with plans for a two-week honeymoon in the Maldives in November. The newlyweds will continue to reside in London.
PHOTOGRAPHY BY RIC AND BARRIE MERSHON
SOCIETY
HIGH MUSEUM OF ART FOR THE GO WEST! GALA. DECOR BY TONY BREWER & CO. PHOTOGRAPHY BY JANET HOWARD STUDIO
Parties for a Cause
DECEMBER
SANTA FOR SENIORS HOLIDAY LUNCHEON Dec. 3 Annual luncheon at The Estate to benefit Senior Citizen Services of Metropolitan Atlanta. Guests should bring a new, unwrapped gift for a senior to be given during holiday meal deliveries. jperno@mealsonwheelsatlanta.org. scsatl.org.
FORWARD ARTS FOUNDATION-SAKS FIFTH AVENUE FASHION SHOW & LUNCHEON at The St. Regis Atlanta – December 16 Dec. 6 Experience worldclass entertainment, live and silent auctions, and delectable dinner at one of Atlanta’s most spectacular eco benefits at the Georgia Aquarium. CAPTAIN PLANET FOUNDATION BENEFIT GALA captainplanetfoundation.org. Dec. 6 The Marietta Pilgrimage will present a magical evening of dining, music, and dancing at the Marietta/ Cobb Museum of Art. The tour of the Kennesaw Avenue historic district will be Dec. 7-8. 770/4291115. mariettapilgrimage.com. MARIETTA PILGRIMAGE TOUR GALA RED & GREEN SCENE – PARTY WITH A PURPOSE Dec. 12 Meet and mingle with top enthusiasts of sustainability and the built environment at one of Atlanta’s largest holiday industry events, supporting Toys for Tots and the annual community service project. Saks Fifth Avenue V.P. and G.M. Cathie Wilson, FAF Fashion Show chairs Sarah Kennedy and Anne Powers, and Saks Fifth Avenue marketing director Michelle New. and Anne Powers, chairs. For more information, call 404/361-9855. aiaatl.org. JANUARY STARFISH BALL Jan. 4 Atlanta’s most festive Mardi Gras celebration at The St. Regis Atlanta includes a formal dinner and silent auction. The nsoro Foundation annually celebrates the student of the year (the nsoro scholar with the highest overall GPA). Proceeds benefit nsoro Foundation high school graduation programs and the student scholarship fund. Tickets $500. Call 404/574-6763 or visit thenf.org. MAYOR’S MASKED BALL FORWARD ARTS FOUNDATION-SAKS FIFTH AVENUE FASHION SHOW AND LUNCHEON Dec. 16 11:30 AM. Highly anticipated luncheon at The St. Regis Atlanta benefiting the Forward Arts Foundation’s support of the High Museum of Art, the Atlanta History Center and other visual arts institutions. Sarah Kennedy Dec. 21 7 PM. The United Negro College Fund’s signature gala and one of the City of Atlanta’s premiere events of the holiday season will be held at the Atlanta Marriott Marquis. 
The party begins with the Mayor’s VIP reception and silent auction followed by an elegant dinner, Parade of Stars and Dignitaries, dancing and live entertainment. $550 per person. uncf.org. 404/302-8623. Jan. 18 The King Center’s annual awards dinner will be held at the Atlanta Hyatt-Regency Hotel. The award recognizes national and/or international individuals and organizations that exemplify excellence in leadership and have demonstrated a commitment to the principles and philosophy of Dr. Martin Luther King, Jr. Eve Ensler and “One Billion Rising Global Campaign” is being honored with a Coretta Scott King “A.N.G.E.L. Award.” Heavy Weight Boxing SALUTE TO GREATNESS AWARDS DINNER 78 The Host Committee: Back row: Laura Buoch, Pam Murphy, Greg Embry, Anita Patterson, Bonnie Suzy Wasserman , Leadbetter , Susan LeCraw, and Stephanie Rubye Reid , Boswell . Front Bryan Morris, row: Patricia Ginny Millner Terwilliger , , Mary , TracyHataway Dean and Dennis Dean , Liz Rebecca King . McDermott and Randy Korando. ANDRIA LAVINE PHOTOGRAPHY A TASTE OF LOVE at The Ritz-Carlton, Buckhead – February 8 Champ Mohammad Ali will also be honored. thekingcenter.org. Jan. 25 The prestigious 57th annual white-tie ball will be held at the Piedmont Driving Club. Proceeds will benefit the Piedmont Heart Institute’s Center for Aortic Disease. 404/605-3273. PIEDMONT BALL décor incorporating candelabras and hydrangeas, fine food and wine, “sterling” entertainment and dancing. SimplySterling25@att.net. SIMPLY STERLING Jan. 25 25th anniversary celebration of the Sandy Springs Society at the Cobb Energy Centre includes live and silent auctions with special items such as use of the Atlanta Hawks sky box, three nights on a private island in Belize, and tickets to the U.S. Tennis Open. Other highlights are a miniature canvas sale of local well-known artists and art students, elegant Jan. 29 7-9 PM. The Cathedral of St. 
Philip comes alive with celebration for this special evening to kick off the 43rd anniversary of the Antiques Show (Jan. 30-Feb. 2). Guests and sponsors will have the first look at fine antiques while enjoying music, fine wine and delectable offerings. Benefits Crossroads Community Ministries. Visit cathedralantiques.org or call 404/365-1107. CATHEDRAL ANTIQUES SHOW GALA PREVIEW PARTY auction. can enjoy a sneak peek at the artwork and the option to advance purchase works, absentee or proxy bid on Saturday night, private “tours” of the pieces to watch, live music, catered hors d’oeuvres and open bar. $190 includes Saturday night auction ticket. artpapers.org/ cocktails, plus hors d’oeuvres and desserts provided by some of Atlanta’s finest restaurants. $45 advance; $55 at door. For more information, visit artpapers.org/ auction. FEBRUARY Feb. 1 The 15th annual art auction will feature an impressive showcase of work by famed and emerging artists from around the world benefitting ART PAPERS’ awardwinning programs. This see and be seen event features live music or DJ, fabulous people watching, collection-worthy art, high-end ART PAPERS ART AUCTION ART PAPERS ART AUCTION COLLECTORS’ PREVIEW Jan. 31 During the preview guests hopeandwillball. Feb. 1 6 PM. Eleventh annual fundraiser for Children’s Healthcare of Atlanta will be held at The St. Regis Atlanta. The evening will begin with a cocktail reception and silent auction followed by dinner, special guest speaker, live auction and dancing. Proceeds will support nursing training and development through the Pediatric Simulation Center. Liz Shults and Kay Douglass, co-chairs. For more information, visit choa.org/ HOPE AND WILL BALL SOUTHERN SEASONS MAGAZINE | 79 PARTIES FOR A CAUSE AHC MEMBERs GUILD SPRING LUNCHEON Feb. 
5 The Members Guild of the Atlanta History Center annual luncheon will feature a panel of prominent Atlanta women, including Ruth Anthony, Wendy Kopp of US Trust, Vikki Locke, Cynthia Moreland of the nsoro Foundation, Jenny Pruitt of Atlanta Fine Homes, Carol Tome of The Home Depot, and Valery Voyles of Ed Voyles Automotive. Susan Tucker, chair. For tickets, contact Katherine Hoogerwerf at 404/814-4102 or KHoogerwerf@ cathedral antiques show at The St. Regis Atlanta January 30-February 2 AtlantaHistoryCenter.com. Feb. 8 6:30-11 PM. Annual fundraiser for Senior Services North Fulton at the Atlanta Athletic Club, Johns Creek, includes fine dining, dancing, and silent and live auctions. ssnorthfulton.org. OPEN HEARTS FOR SENIORS Feb. 8 Black-tie gala at 103 West includes silent auction, dinner and dancing. Laura and Rutherford Seydel, honorees. Proceeds from the ball go to the Buckhead Rotary Foundation for funding and supporting metro Atlanta and international programs. Tickets $150 per couple. Patron tickets start at $500. Sponsorships: $1500$10,000. For more information, call Kay Quigley at 404/933-6637. Sister Moore ROTARY CLUB OF BUCKHEAD FOUNDATION BALL Executive director of Crossroads Stan Dawson, Cathedral Antiques Show co-chairs Marion Williams and Katherine Wright , and board chairman of Crossroads Wayne Vason. fundraising gala at The Ritz-Carlton, Buckhead featuring gourmet dining, premium wine pairings, live entertainment, silent and live auctions and dancing. Former WSBTV sportscaster Chuck Dowdle, emcee. $300. 404/527-7155. fundraiser for Trinity School at the InterContinental Hotel featuring a seated dinner, live and silent auctions, and entertainment. The Spotlight on Art Artists Market is Feb. 3-8. Proceeds benefit Trinity’s teacher education and scholarship funds. For more information, call 404/231-8119 or visit TASTE OF LOVE GALA Feb. 
8 The Epilepsy Foundation of Georgia presents its signature at the Cobb Energy Centre – January 25 simply sterling epilepsyga.org. BIG-TO-DO Feb. 9 2-6 PM. This fun family event benefiting Visiting Nurse Health System’s Children’s Program will return to Stone Mountain Park. This snow day adventure includes two hours of snow tubing, s’mores over an open fire, plus entertainment for all ages. Kevin and Dawn Dwyer, co-chairs. vnhs.org. spotlightonart.com. ODYSSEY BRUNCH Feb. 18 9:30 AM. 5th annual brunch at The St. Regis Atlanta makes it possible for 300 students to attend this life-changing program for free. Events leading up to the brunch include a Shopping Night at Neiman Marcus on Dec. 4, and a Patron Party on Jan. 23 at the home of Melissa and Craig Allen. Ashley Miller and Swati Patel, co-chairs. Contact Catherine Mitchell Jaxon at 917/701-4091 or cmitchell.jaxon@ gmail.com. odysseyatlanta.org. HAUTE HOUNDS & COUTURE CATS Feb. 10 Tails everywhere will be wagging as calendars need to be marked for the third annual Haute Hounds & Couture Cats event taking place at Saks Fifth Avenue. The ultimate ladies luncheon begins at 11:30 AM and serves as the spring fashion showpiece for the season’s must-have looks. Cindy Voyles, chair. Visit atlantahumane. org or contact Natalie McIntosh Amuse’um 2014 Feb. 22 Honoring Children’s Healthcare of Atlanta, Amuse’um will feature a magical evening of entertaining activities, exciting auctions and live music and dancing to the Latin beat of Orchesta MaCuba! The museum will be transformed into a whimsical, adults-only cultural mecca, as patrons and guests enjoy cocktails 80 Standing: Event co-chair Pam Betz, Sandy Springs Society president Kate Dalba, co-chair Betsy Harrington. Seated event chair Joan Plunkett . 404/974-2828. SPOTLIGHT ON ART GALA Feb. 15 Annual signature inspired from around the world and indulge in international fare. Proceeds support the museum’s early childhood educational programming and community outreach. 
childrensmuseumatlanta.org. 404/659-KIDS [5437]. ATLANTA BALLET BALL Feb. 22 The Atlanta Ballet will present its 34th annual ball at The St. Regis Atlanta. This year’s special honoree is Atlanta Ballet artistic director emeritus Robert “Bobby” Barnett. The evening will offer live music, live and silent auctions and performances by Atlanta Ballet company members and students from the Centre for Dance Education. For reservations, contact Megan DeWitt at 404/873-5811, ext. 208 or mdewitt@ Honoring Atlanta Ballet Artistic Director Emeritus Robert Barnett ATLANTA BALLET BALL at The St. Regis Atlanta February 22 atlantaballet.com. atlantaballet.com/getinvolved/ballet-ball/. ATLANTA HEART BALL Feb. 22 This year’s ball at The RitzCarlton, Buckhead promises to be an engaging evening of fun, bringing community and philanthropic leaders together. The American Heart Association fundraiser celebrates the work and mission, donors and volunteers, and the lives saved and improved because of everyone’s effort. Contact Kelsey Schival at kelsey.schival@heart.org. 678/224-2065. ALL TIMES AND DATES SUBJECT TO CHANGE. PLEASE CONTACT INDIVIDUAL VENUE FOR CONFIRMATION. Atlanta Ballet dancer Kelsey Ebersold with Robert Barnett and Virginia Rich Barnett. ODYSSEY BRUNCH at The St. Regis Atlanta February 18 Host Committee: (standing) Alexandra Walter, Mary Beth Jenkins, Catherine Mitchell Jaxon, Shannon Dixon, Caroline Willis, Ashley Miller and Swati Patel ; (seated) Christine Ragland, Tyler Wynne and Forrest Canton ; (down the stairs) Jennifer Kellett, Molly Caine, Cara Isdell Lee and Christina Whitney . ROSE PHOTOGRAPHY SOUTHERN SEASONS BEN MAGAZINE | 81 CHARLIE MCCULLERS Starfish Ball January 4 at The St. Regis Atlanta he nsoro Educational Foundation celebrates Atlanta’s most festive Mardi Gras charity ball at The St. Regis Hotel on January 4, 2014. King Darrell J. Mays and Queen Lorri McClain will preside over the 5th annual formal New Orleans dinner featuring a silent auction. 
Founded in 2005 by the Mays Family of Atlanta, The nsoro Foundation raises much-needed funding for education programs for children in foster care and students who age out or emancipate from foster care. Individual tickets are $500 and patron sponsorships begin at $1,500. FOR RESERVATIONS, CALL 404/574-6763 OR EMAIL: CYNTHIAMORELAND@THENF.ORG.

King Darrell J. Mays and Queen Lorri McClain.

Presented by the Grady Memorial Hospital Corporation Board of Directors and the Grady Health Foundation Board of Directors. 2014 Gala Co-Chairs Jennifer and Tom Bell, Roz and John Brewer. Georgia Aquarium, 225 Baker Street Northwest, Atlanta, GA 30313. Saturday, March 15, 2014. Formal Black Tie Attire.

On the Horizon

MARCH

RED CARPET GALA March 1 7 PM. Walk the red carpet Academy Awards style at Cumberland Academy of Georgia. This year’s event honors James Ramseur and benefits this special needs school. Guests can enjoy cocktails, dinner, casino gaming and opportunities to bid on great live and silent auction items. cumberlandacademy.org.

HEARING CHILDREN’S VOICES March 8 One of Cobb’s finest and most prestigious black-tie events, the 13th annual elegant gala includes dinner, entertainment and beautiful and fun auction items at the Cobb Galleria Centre. Proceeds benefit SafePath Children’s Advocacy Center. safepath.org.

THE AMERICAN CRAFT SHOW PREVIEW PARTY March 13 6-9 PM. The party kicks off the 25th anniversary of the American Craft Council Atlanta Show (March 14-17) at the Cobb Galleria Centre. Guests will have the chance to mix and mingle with the nation’s top craft artists while enjoying live entertainment, cocktails, a scotch tasting room and other delectable offerings. Benefits Hambidge Center for Creative Arts and Sciences and the ACC. Tickets $75 in advance. craftcouncil.org/atlanta. 678/613-3396.

WHITE COAT GRADY GALA March 15 Jennifer and Tom Bell, and Roz and John Brewer, co-chairs. The 4th annual black-tie fundraiser at the Georgia Aquarium will recognize some of Atlanta’s healthcare heroes. gradyhealthfoundation.org.

CANDLELIGHT BALL March 22 The black-tie fundraising event for CADEF: The Childhood Autism Foundation at the InterContinental Hotel will feature a seated dinner, live and silent auction and entertainment. cadef.org.

HOPE FASHION SHOW March 24 The American Cancer Society Auxiliary will host its 23rd annual fashion show at The Ritz-Carlton, Buckhead. The event includes a sumptuous luncheon followed by a professional runway show. hopefashionshow.org.

HIGH MUSEUM ATLANTA WINE AUCTION March 26-29 Atlanta’s greatest food and wine party and the largest fundraising event for the High Museum of Art, “Legends of the Vine,” features world-renowned winemakers and legendary chefs from across the country. atlantawineauction.org. 404/733-3303. high.org.

“TOSSED OUT TREASURES” PREVIEW PARTY March 27 6-9 PM. The Sandy Springs Society presents the 23rd annual “Tossed Out Treasures” ultimate flea market with an exclusive preview party to kick off the bargain hunter’s event. Guests will be the first to browse and buy the gently used upscale items while enjoying cocktails, hors d’oeuvres and a silent auction. Tickets $30 in advance; $35 at the door. sandyspringssociety.org.

APRIL

SWAN HOUSE BALL April 26 One of Atlanta’s premier social events and the Atlanta History Center’s largest fundraiser celebrates its 29th year on the grounds of the Swan House. Presented by PNC Wealth Management, this year’s ball will honor Governor Nathan Deal and Mayor Kasim Reed. Aimee Chubb, chair. Contact Katherine Hoogerwerf at 404/814-4102 or KHoogerwerf@AtlantaHistoryCenter.com. atlantahistorycenter.com.

MAY

DRISKELL PRIZE DINNER May 2 6:30 PM. Tenth annual dinner at the High Museum of Art in honor of David C. Driskell. The prize will be presented to a scholar whose work contributes to the definition of the African American experience in the visual arts. Proceeds support the David C. Driskell African-American Art Acquisition and Endowment Funds. 404/873-5811, ext. 203.

TECHBRIDGE DIGITAL BALL May 3 6:30 PM-midnight. 14th annual black-tie gala at the Atlanta Marriott Marquis emphasizes the impressive work of TechBridge’s nonprofit clients. Karren Renner and Bill VanCurren, co-chairs. techbridge.org.

ZOO ATLANTA’S BEASTLY FEAST May 3 Guests will be transported to exotic locales as they stroll through the Zoo grounds enjoying delicacies and getting up-close and personal with the inhabitants. Following the walk-through, guests can enjoy bidding on silent and live auction items, a seated dinner under the big tent and dancing to live music. For ticket information, contact Amy Walton at 404/624-5836. zooatlanta.org.

HEARTS WITH HOPE GALA May 10 The 26th annual fundraiser for PADV (Partnership Against Domestic Violence) at The Ritz-Carlton, Buckhead will pay tribute to the mothers who are so often victims and victors of this crime. Jennifer.Highsmith@padv.org.

Cynthia Widner Wall of Presenting Sponsor PNC Wealth Management, Atlanta History Center President and CEO Sheffield Hale, and Swan House Ball chair Aimee Chubb.

Beastly Feast 2014, Ball of the Wild, May 3 at Zoo Atlanta. The 2014 Beastly Feast Committee: Auction Co-chair Gigi Rouland; Auction Co-chair Burch Hanson; Hospitality Chair Kathleen Waldrop; Zoo Atlanta President and CEO Raymond King, holding Mandela the milky eagle owl; local Ford Motor Company Representative and Zoo Atlanta Board of Directors Vice Chair Mark Street; Décor Chair Tony Brewer; and Beastly Feast Co-chairs Michele and Ben Garren, holding the Lanner falcon Savannah. Not pictured: Auction Chair Ginny Brewer and Patron Co-chairs Nicole and Miles Cook. The 2014 Beastly Feast Co-Chairs Ben and Michele Garren feed Abu the reticulated giraffe.
Plans are well underway for the 2014 Beastly Feast, Ball of the Wild, scheduled for May 3 on the grounds of Zoo Atlanta. The Ford Motor Company Fund is the Presenting Sponsor, marking its 29th year supporting the event. Beginning at 6:30 p.m., guests will be transported to exotic locales as they stroll throughout the Zoo grounds enjoying delicacies from favorite local restaurants and getting up-close and personal with the furry and scaly inhabitants. The spring gala is always a highlight of the party season and the 2014 event will celebrate the Zoo’s 125th anniversary as well as its conservation efforts. Following the walk-through, guests can enjoy bidding on silent and live auction items, a seated dinner under the big tent decorated by Tony Brewer and Company, and dancing to live music. The 2014 Beastly Feast Committee includes Co-chairs Michele and Ben Garren, Auction Co-chairs Ginny Brewer, Burch Hanson and Gigi Rouland, Hospitality Chair Kathleen Waldrop, Patron Co-chairs Nicole and Miles Cook, and Décor Chair Tony Brewer. The generous donations from patrons support mission-critical conservation and education efforts, contributing directly to Zoo Atlanta’s reputation as a national leader in animal care and preservation of endangered species. Don’t miss this fun evening in one of Atlanta’s most unique settings! Host tables for 10 are ideal for groups at $5,000 and corporate donors may sponsor tables for 10 at the levels of $7,500, $12,500 and $25,000. Individual tickets start at $450. TO INQUIRE ABOUT BENEFITS OF VARIOUS TABLES AND TICKETS, CONTACT AMY WALTON AT 404/624-5836 OR VISIT

PHOTOGRAPHY BY JIM FITTS

DR. BILL TORRES AND CINDY VOYLES. DELECTABLE CUISINE WAS PROVIDED BY MARY HATAWAY OF SOIREE CATERING, PICTURED HERE WITH BLOOMINGDALE’S GENERAL MANAGER TOM ABRAMS. WALLY ROGERS AND MARI PHILLIPS OF BASLER & BLOOMINGDALE’S.
FABULOUS FALL LAUNCH at the Atlanta History Center

The Fall issue of Southern Seasons Magazine was cause to celebrate. An incredible September evening was filled with great friends and colleagues as the ultimate southern cocktail event was enjoyed by more than 200 guests. Mary Hataway of Soiree Catering provided delicious bites and Tony Brewer’s fine hand dressed the tables with seasonal flowers and gorgeous linens. Bloomingdale’s models strutting Basler’s fall collection offered a sneak peek at the next season’s fashions, and music by Class Act topped off the wonderful night! Special thanks to Sean Thorndike of the Atlanta History Center for his kindness and generosity. PHOTOGRAPHY BY NINH CHAU

KARIN SMITHSON AND LOVETTE RUSSELL. JOE SMITHSON AND BILL VOYLES. CUMBERLAND ACADEMY OF GEORGIA FOUNDING DIRECTOR DEBBI SCARBOROUGH AND DEEDRA HUGHES, PRESIDENT, HUGHES MEDIA. LISA FULLER AND PAM SMART. KAY QUIGLEY, SALLY DORSEY AND EILEEN ROSENCRANTS. KAREN AND JOHN SPIEGEL WITH SUZANNE MOTT DANSBY. NEWLYWEDS BRIAN AND MAGGIE FITZGERALD WERE DELIGHTED TO BE ON THE LATE FALL COVER OF SOUTHERN SEASONS MAGAZINE. SHARON AND CHIP SHIRLEY STAND NEXT TO THE COVER OF THEIR NEW DAUGHTER-IN-LAW, LINA. TONY BREWER PROVIDED GORGEOUS FLORAL AND TABLE DECOR FOR THE PARTY. KANDIS AND ADAM JACKSON ADMIRE THE FALL ISSUE OF SOUTHERN SEASONS MAGAZINE. SANDRA AND DAN BALDWIN WITH GAIL O’NEILL, SOUTHERN SEASONS STYLE EDITOR. TV ANCHORS MONICA PEARSON AND BRENDA WOOD JOINED IN THE FESTIVITIES. TAMMY GROSS AND SOUTHERN SEASONS PUBLISHER AND EDITOR EILEEN GORDON.

Southern Seasons & Bloomies present The Basler Fall & Winter Collection

From the Vixen Vodka cocktails to the gorgeous clothes, guests enjoyed an evening of fabulous fun at the fall/winter Basler Fashion preview. Bloomingdale’s GM Tom Abrams joined Eileen Gordon and Lisa Fuller of Southern Seasons Magazine in welcoming Atlanta’s fashionistas to this ultimate shopping event.
PHOTOGRAPHY BY KIM LINK

LISA FULLER AND KELLY CANNON, U.S. TRUST SR. VICE PRESIDENT, PRIVATE CLIENT ADVISOR. PAM SMART AND LISA FULLER. SOUTHERN SEASONS PUBLISHER AND EDITOR EILEEN GORDON AND FULTON COUNTY COMMISSIONER ROBB PITTS. CHERYL ESPY AND DEBORAH MARSHALL. LESLIE MCLEOD AND BLOOMINGDALE’S GENERAL MANAGER TOM ABRAMS. DANIELLE BERRY AND MARK SQUILLANTE. CARRIE KING, KITSY ROSE AND LEEANN MAXWELL. PEGGY MILAM AND BLOOMINGDALE’S BASLER STYLIST WALLY ROGERS. THEO TYSON, BASLER RETAIL SVP MICHAEL WALKER AND MARA MADDOX, SENIOR PUBLIC RELATIONS MANAGER, BLOOMINGDALE’S.

Longman + Longman

To anyone who knows this endlessly romantic couple, their invitation to celebrate Su and Al Longman’s anniversary and Al’s birthday was sure to be a thrilling evening of fabulous fare and cherished friends. The August night began with individual limousines sent to each guest’s home to transport them to and from the festivities. Tony Conway magically transformed The Estate into lavish splendor with thousands of breathtaking fuchsia orchids, setting the tone for a multi-course gourmet dinner and dancing fit for royalty. The bill of fare began with an opulent caviar station. The divine cuisine included five courses, highlighted by a magnificent four-pound lobster for each guest and watermelon sherbet served on blocks of ice between courses. All of this culinary excellence was accompanied by a variety of exquisite wines and champagnes chosen to enhance each taste. (No surprise that the limo drivers were a very good idea!) Chamber music filled the ballroom during dinner, followed by dancing into the night. Al surprised his lovely bride by having renowned jewelry designer Victor Velyan fly across the country to present her with a custom designed diamond encrusted 18-karat gold panther necklace.

VICTOR VELYAN SURPRISED SU AND AL WITH CUSTOM DESIGNED WORKS OF JEWELRY ART.
The entire evening was continually thrilling and entertaining as many of the Longmans’ dearest friends took center stage to tell endearing anecdotes about the honored pair. As well she should be, Su was a vision and the center of attention in her stunning couture runway gown designed by Marc Jacobs. The crystalline invitations in silk satin boxes were created by none other than the amazing stationer Harrison Rohr. PHOTOGRAPHY BY SARA HANNA

DEBBIE DEAN AND EILEEN ROSENCRANTS. MO AKBAR AND MARK FILLION.

ABOVE LEFT: BLAINE PALMER, SAM HENDERSON AND REBECCA BILY. ABOVE RIGHT: MATTHEW PIEPER, MARGARET EATON, TAMARA SCHMIDLY, SCOTT SCHMIDLY AND STEVE EATON. LEFT: JIMMY AND HELEN CARLOS.

Party in the Kitchen

Open Hand Atlanta hosted the 10th annual Party in the Kitchen at the King Plow Arts Center Event Gallery recently. Four hundred guests enjoyed an exciting evening of music, cocktails and exquisite cuisine prepared by some of Atlanta’s finest chefs. This year’s event was co-hosted by celebrated chefs Kevin Rathbun and Gerry Klaskala, along with community philanthropists Helen S. Carlos and Rebecca Bily. It raised $265,000 for the organization’s community nutrition programs.

SUSAN AND CHRISTO MAKRIDES. KELLY AND JIM WEATHERLY. PAULA AND GEORGE NORTON.

SOS Give Me Five

Celebrating its seventh anniversary this year, Share Our Strength’s® Give Me Five dinner celebrated its most successful night to date, raising $97,000 for Share Our Strength’s No Kid Hungry campaign, a national effort to end childhood hunger in America. Guests enjoyed an exquisite five-course meal prepared by five of Atlanta’s best chefs and complemented with wine and, for the first time, beer pairings by five of the city’s top sommeliers, as well as silent and live auctions and an impactful speech by WNBA star and Share Our Strength supporter Ruth Riley.

KAREN AND MATT REAVES, AND SONNY HAYES.
JACK SAWYER OF SPONSOR WILMINGTON TRUST WITH HONORARY CHAIRS ELIZABETH AND CARL ALLEN AND DR. BILL TORRES. INGRID SAUNDERS JONES.

A Meal to Remember

A Meal to Remember presented a culinary extravaganza in the ballroom of The St. Regis Atlanta with décor of sophisticated crystal, white, silver and black enamel created by event co-chair Tony Conway. Co-chairs Nancy Brown and Marlene Alexander applauded the extraordinary dedication of honorary chairs Elizabeth and Carl Allen, long-time supporters of the organization that prepares and delivers more than 105,000 nutritious meals annually to homes of ill, frail and homebound seniors. Ingrid Saunders Jones was presented the first Corporate Community Service Award, named in her honor for future recipients. The former chair of The Coca-Cola Foundation was saluted for her extraordinary community service to Atlanta.

ABOVE: TRIO OF CO-CHAIRS MARLENE ALEXANDER, TONY CONWAY OF A LEGENDARY EVENT AND ALSO A SPONSOR, AND NANCY BROWN. LEFT: JAMES WALL, CATHERINE WALL, AMANDA WALL, SPONSOR CYNTHIA WIDNER WALL OF PNC WEALTH MANAGEMENT, AND HER FATHER COY WIDNER. PHOTOGRAPHY BY JIM FITTS AND KIM LINK

DR. BILL TORRES, TARA WERTHER, DARRELL MAYS AND JACK SAWYER OF BEST CELLARS DINNER SPONSOR WILMINGTON TRUST. CHAIR EMERITUS JOEL KATZ WITH HIS WIFE KANE.

BEST CELLARS DINNER – A galaxy of stars of Atlanta’s entertainment, philanthropic, sports, professional and volunteer spheres united in support of the T.J. Martell Foundation at the 5th Annual Best Cellars Dinner, held at The Ritz-Carlton, Buckhead. Benefiting the Winship Cancer Institute of Emory University, the evening continued the T.J. Martell Foundation’s mission of funding innovative medical research focused on finding cures for leukemia, cancer and AIDS. PHOTOGRAPHY BY KIM LINK

Co-chairs Leslie Morgan, Molly Berry and Jennifer Morgan.
Rebie Benedict of representing sponsor Harry Norman, Realtors; Laura Spearman; Bonneau Ansley III of The Ansley Group, honorary co-chair and sponsor; Cathy Davis Hall of Harry Norman, Realtors; and Maggie Coulon Catts of Harry Norman, Realtors.

Flea Market Preview Party

Dressed in disco-era ensembles and even blonde wigs reminiscent of the 1970s, Flea Market co-chairs Molly Beery, Jennifer Morgan and Leslie Morgan greeted guests and organized the booths of treasures of home accessories, designer clothes, jewelry, fine art, books, antiques, furniture and bargain-priced finds. An array of enticing items attracted avid bidders to the silent auction at the party. Patricia McLean and preview party co-chair Georgia Schley Ritchie.

MILLENNIAL AGENTS

Recognizing Atlanta’s rank as one of the top U.S. cities preferred by young professionals, the Buckhead North office of Harry Norman, Realtors has expanded its focus to provide additional expert real estate support to single professionals, young couples and families in choosing their next home. At a Continental Breakfast program, Rob Owen welcomed the “Millennial Realtors” Michael Neill, Kyle Gilbert, Kristen Feldman, Clayton Howard, Jackie Smith, Juan Carlos Carrion, Ashley Lee and Katie Brannen.

PRESENTING SPEAKER TODD BANISTER, SENIOR VP ROB OWEN AND MILLENNIAL AGENT CLAYTON HOWARD. MILLENNIAL AGENTS MICHAEL NEILL AND KRISTEN FELDMAN. PHOTOGRAPHY BY ROSS HENDERSON

PHOTOGRAPHY BY KIMBERLY LINK

RIGHT: BALL CO-CHAIRS DOT STOLLER AND SUSAN MCCAFFREY. LEFT: BALL HONOREES CAROLE AND JOHN HARRISON. BELOW LEFT: BALL SPONSOR CYNTHIA WIDNER WALL OF PNC WEALTH MANAGEMENT WITH HER FATHER, COY WIDNER. BELOW RIGHT: LESLIE MCLEOD PRESENTED THE HUGH C. MCLEOD III AWARD TO DR. CARLTON SAVORY.
Crystal Ball

A jewel-like marine vignette, with sparkling open oyster shell revealing an enormous pearl, reflected the Crystal Ball’s theme, “The Magical Sea,” as guests entered The Ritz-Carlton, Buckhead for the 32nd annual gala benefiting the Arthritis Foundation. Co-chaired by Susan McCaffrey and Dot Stoller, the evening saluted Carole and John Harrison, honorees who have provided decade-long support and generosity to the Arthritis Foundation, and Corporate Honoree Northside Hospital, a partner in event team fund-raising and sponsorship generating more than $100,000 toward research, education and public health initiatives. PHOTOGRAPHY BY ROSS HENDERSON AND KIM LINK

EmPower Party

ABOVE: GCAPP PRESIDENT AND CEO VIKKI MILLENDER-MORROW (CENTER) AND CO-CHAIRS ASHLEY MILLER AND ALEXANDRA WALTER. ABOVE RIGHT: DR. SANJAY GUPTA, RECIPIENT OF THE HEALTH PIONEER AWARD, WAS CONGRATULATED BY JANE FONDA AND TED TURNER.

Dazzling in her graciousness, charm and legendary beauty, Jane Fonda, founder and chairman emeritus of GCAPP (the Georgia Campaign for Adolescent Power & Potential), personally welcomed guests to the EmPower Party, GCAPP’s signature fundraising event, at the Georgia Aquarium. After Fonda presented Dr. Sanjay Gupta with GCAPP’s Health Pioneer Award, the two friends engaged in an enlightening on-stage conversation on issues ranging from the alarming rise in obesity to the effects of genetics and behavior on health.

REGINA CHAN OF HONG KONG SOTHEBY’S INTERNATIONAL REALTY, DOUGLAS KERBS OF FULLER SOTHEBY’S INTERNATIONAL REALTY AND JENNY PRUITT OF ATLANTA FINE HOMES SOTHEBY’S INTERNATIONAL REALTY, WITH A CHINESE-ENGLISH INTERPRETER. JENNY PRUITT RECEIVING HER LUXURY REAL ESTATE AWARD FROM JOHN BRIAN LOSH.
Career Milestone

Jenny Pruitt was accorded the prestigious Lifetime Achievement Award by Who’s Who in Luxury Real Estate at an elegant reception held in conjunction with the organization’s Fall Conference in Atlanta. She was presented with the crystal obelisk by John Brian Losh, chairman of Luxury Real Estate, who cited Jenny’s four decades of success in residential real estate and her co-founding of Atlanta Fine Homes Sotheby’s International Realty, which is expected to surpass $1 billion in sales in 2013.

CHARLES LAM OF THE CHINA REAL ESTATE CHAMBER OF COMMERCE-HONG KONG, DR. J.J. PO-AN HSIEH OF HONG KONG POLYTECHNIC UNIVERSITY, AND DAVID BOEHMIG. JENNY PRUITT AND DAVID BOEHMIG AT A LUNCHEON HOSTED BY SOTHEBY’S HONG KONG AUCTION HOUSE.

To Asia & Back with Jenny Pruitt

Sotheby’s Hong Kong celebrates 40th Year

Calling Hong Kong “the most exhilarating and high-powered business and commercial center we’ve ever visited,” Jenny Pruitt, CEO, and David Boehmig, president, of Atlanta Fine Homes Sotheby’s International Realty, returned from the 40th anniversary celebration of Sotheby’s Hong Kong Auction House. During meetings with Sotheby’s wealthiest Asian clients, the two Atlanta real estate leaders presented each with a copy of the Atlanta Fine Homes Sotheby’s International Realty Collectors’ Book, Joie de Vivre. More than 250 Atlanta properties, all valued at over $1.5 million, are showcased. “The Asian leaders are astute business people who recognize Atlanta as a growth center, offering worldwide access through its airport and stability for significant real estate investment and appreciation,” David Boehmig said. PHOTOGRAPHY BY ROSS HENDERSON AND KIM LINK

ATLANTA FINE HOMES SOTHEBY’S INTERNATIONAL REALTY AGENT YETTY ARP AND BOB PRUITT. NANCY SEE, SENIOR VP OF ATLANTA FINE HOMES SOTHEBY’S INTERNATIONAL REALTY, AND LEADING AGENT LESLIE RANSOM.
THERAPIST BETH SASSO WITH THERAPY DOG “FROSTY,” PHILANTHROPIST SPONSORS RAMON AND CAROL TOME, JOHN AND KAREN SPIEGEL, AND THERAPIST JENNIFER SPEER (SEATED), WITH THERAPY DOG “GALION.” HONORARY CO-CHAIR SALLY NUNNALLY (SEATED) IS SURROUNDED BY HER FAMILY, DAY AND CHARLES REDHEAD, HONORARY CO-CHAIR MCKEE NUNNALLY, JOHN REDHEAD, H. MCKEE NUNNALLY, AND ANNA, LAURA AND LIZZIE NUNNALLY. PHOTOGRAPHY BY JIM FITTS

THE LEGENDARY PARTY

Shepherd Center revealed new innovations at “The Future is Now,” the 25th anniversary of The Legendary Party. Held at The Ritz-Carlton, Buckhead, the gala raised over $1 million to benefit the Assistive Technology Center and Animal Assisted Therapy. Party chair Karen Spiegel joined Shepherd Center co-founder James H. Shepherd Jr. in presenting astounding technological progress and therapy programs to more than 500 patrons in attendance.

DARRELL MAYS AND TARA WERTHER. EILEEN ROSENCRANTS, DEBBIE DEAN, SU LONGMAN AND TARA WERTHER. DR. BILL TORRES AND JACK SAWYER.

AN ENGAGING AFFAIR

Jack Sawyer and Dr. Bill Torres hosted an artfully fun celebration in honor of Darrell Mays and his fiancée, Tara Werther, at The Lowe Gallery this fall. The couple was delighted to celebrate their happiness with close friends and family.

CAUSE TO CELEBRATE

1. Cancer Treatment Centers of America at Southeastern Regional Medical Center honored 21 five-year cancer survivors during Celebrate Life 2013, planting a tree for each to symbolize spirit and hope. Annie Stephenson Holsonback of CTCA with celebrant Ruth Gethers-Simil, Anne Meisner of CTCA, and Richard J. Stephenson, founder of CTCA. 2. Kim Lape, Susan Been, Millie Smith and Kelly Loeffler co-hosted a private cocktail party at Gucci’s Phipps Plaza store to celebrate the “Handbag Artisan Corner” and raise money for Make-A-Wish Georgia. 3.
The Patron Party for the Arthritis Foundation’s Crystal Ball was hosted by Tom and Ruth Anthony (center), pictured with Crystal Ball co-chairs Dot Stoller and Susan McCaffrey. Patron Party co-chairs Cheryl Espy, Lisa Fuller, Juli Owens and Brenda Smith orchestrated the ocean-themed event. 4. Patron Campaign co-chairs Boyd and Caroline Leake, Scott Willett of Johnson & Johnson, and Amy Elizabeth Smith at the Patron Party for the Arthritis Foundation’s Crystal Ball. 5. Dazzled by a gallery of contemporary artwork in the law offices of Greenberg Traurig, guests enjoyed an elaborate cocktail and buffet dinner at the Patron Party for the Legendary Party. Legendary Party chair Karen Spiegel and John Spiegel, Cynthia Widner Wall of PNC Wealth Management, and Ramon Tome and Carol Tome of The Home Depot.

The chaplains for Children’s Healthcare of Atlanta at Scottish Rite and Egleston Hospitals were honored at an “Appreciation Tea and Garden Tour” at the historic Griffith-Richard House in Sandy Springs. In attendance were (above) co-host Deane Johnson of Atlanta Fine Homes Sotheby’s International Realty, Dr. Brenda Green of CHOA, the Rev. Steve Yander of St. Joseph’s Hospital, Bernadine and Jean-Paul Richard; and (right) Erika Johnson, co-host Wes Vawter of Atlanta Fine Homes Sotheby’s International Realty and his wife Terry Vawter. KEITH BERRY

In attendance at the Caring For Others benefit at the Georgia Aquarium were CFO Board Chair Joseph Northington, CFO President/CEO Eslene Richmond-Shockley and event designer William Fogler.

The Latin American Association’s 25th Anniversary of Latin Fever Ball at the InterContinental Buckhead was a huge success, raising more than $380,000 to benefit the thousands of individuals that the LAA serves. Guild members Patty Webb, Aida Flamm, Lois Beserra, Barbarella Diaz (Guild Co-Chair), Angelica Guevara Young, Karla Arriola and Del Clark.
KIM LINK

A Meal to Remember co-chairs Marlene Alexander and Tony Conway with sponsor Cynthia Widner Wall of PNC Wealth Management raise their glasses in support of the culinary benefit at the elegant Patron Party, held at The Estate on Piedmont Road.

The Farmer & The Chef benefit at The Ritz-Carlton, Buckhead raised over $230,000 for the March of Dimes. Enjoying the evening were Leslie McLeod, Melbin De La Cruz, Ruby Lucas and event chair Dr. Valerie Montgomery Rice, dean and executive VP of Morehouse School of Medicine. KIM LINK

ETCETERA

1. Pam Longobardi of Atlanta was named the winner of the prestigious Hudgens Prize, which includes a transformational cash award of $50,000 and an invitation for a solo exhibition at the Hudgens Center for the Arts in Duluth. She’s pictured with Teresa Osborn, executive director of the Hudgens. Photo by Jim Fitts. 2. The Phipps Plaza Lilly Pulitzer Store hosted a special shopping event to benefit the Atlanta Speech School’s Language & Literacy Gala. Gala co-chairs Liza Jancik and Mary Anne Massie are pictured with Becky McDaniel, assistant manager of the Lilly Pulitzer Store. Photo by Jim Fitts. 3. Ande Cook, Elyse Defoor, and Mary Stanley were among the crowd of 20,000 people who flocked to the Castleberry Hill neighborhood for Flux Night, a one-night public art celebration presented by Flux Projects. Photo by Raftermen Photography. 4. The 15th Annual Marlow’s Tavern Golf Classic was a huge success, raising more than $52,000 for Special Olympics Georgia. John C. Metz, co-founder of Aqua blue and executive chef and co-founder of Marlow’s Tavern and Sterling Spoon Culinary Management, Special Olympics athlete Josh Jansma and market partner Hank Clark at the golf tournament. 5. The Fox Theatre Institute (FTI), an Atlanta-based outreach program created by The Fox Theatre, presented a check to The President Theatre at a special event to kick off its 2013-2014 restoration projects.
Pictured are Fox Theatre, Inc. board member Carolyn Wills; President Theatre representatives Pattisue Elliott, Regina Garrett, Billy Garrett, Joan Caldwell, Josh Mitchell and Tarver Siebert; and Fox Theatre Institute Program Manager Carmie McDonald. Photo by Philip Sanford. 6. HomeAid Atlanta, in partnership with Rainbow Village and Builder Captain Harcrest Homes, announced the construction of an apartment building for Rainbow Village, a long-term transformational housing community for homeless families in north metro Atlanta. Rev. Nancy Yancey and Norma Nyhoff of Rainbow Village receive items from HomeAid’s Essentials for Young Lives Drive. 7. Skyland Trail president and CEO Beth Finnerty joins gala co-chairs Kelly Loeffler and Betsy Akers at the Patron Party for Skyland Trail’s Benefits of Laughter. The party was held at the Buckhead home of Regina and Steve Hennessy. Photo by Tim Wilkerson Photography. 8. Guests enjoyed an evening of Italian culture at Festa Italiana at the Museum of Design Atlanta, in conjunction with its exhibit, “Barrique: Wine, Design, and Social Change.” Max Salmi, Italian Honorary Consul General Angela della Costanza Turner, Ricardo Cichi, Lavinia Cichi and Director of the National Italian American Foundation for Georgia Leo Pieri. Photo by Carlos Bell Photography. 9. Glyn Weakley Interiors presented a two-day show of Stephanie Kantis’ fashion jewelry. Company rep. Jill Reagan (center) shows Cindy Martin and Lisa Fuller some of the signature chains and gemstone pendants. Photo by Kim Link. 10. John Phillip Short was honored at an elegant reception on the publication of “Magic Lantern Empire: Colonialism and Society in Germany.” The author’s family members Ben Hill IV, Katherine Hill and Ben Hill were on hand to congratulate him. Photo by Ben Rose. 11.
Sarah Cornwell, Quinn Nygren, Beth-Ann Taratoot, Savannah Cernosek and Meredith Jones headed to Whiskey Blue Atlanta’s rooftop for a cocktail party featuring art installations by local artist Allie Hendee. Photo by Dylan York. 12. Amy Nelson, Allie Hendee and Gwen Ross mixed and mingled at the Whiskey Blue cocktail party, which displayed Hendee’s 12-foot-tall dress forms made of natural hide, jute and burlap, topped with elk antlers. The soiree was co-hosted by Morgan Cohen of Morgan Kylee boutique. Photo by Dylan York. 8 9 10 11 SOUTHERN SEASONS MAGAZINE | 101 12 C. MCCULLERS, COURTESY OF ATLANTA BALLET arts “Atlanta Ballet’s Nutcracker” Dec. 6-29 Fox Theatre “Sam the Lovesick Snowman” Jan. 2-Feb. 2 at Center for Puppetry Arts 102 performing “Santaland Diaries” & “Madeline’s Christmas” Through Dec. 31 at Horizon Theatre ENOCH KING, HAROLD M. LEAVER AND LALA COCHRAN IN HORIZON THEATRE’S “THE SANTALAND DIARIES.” WINTER ACT1 THEATRE Christmas,” classic tale of a family coming together for the holidays. 7:30 PM Fri.-Sat., 2 PM Sun. 180 Academy St., Alpharetta. 770/751- & 8 PM Sat., 3 PM Sun. alvinailey. org. foxatltix.com. 855/ATL-TIXX. Through Dec. 22 “Home for ART STATION THEATRE Dec. 5-22 “A Broadway Christmas Carol,” Dickens’ story of Ebenezer Scrooge is mixed with Broadway song parodies. 8 PM Thurs.-Sat., 3 PM Sun. Feb. 20-March 9 “Making God Laugh,” touching family comedy. 5384 Manor Dr., Stone Mountain. 0033. act1theater.com. Jan. 23-Feb. 2 “Brighton Beach Memoirs,” Neil Simon’s classic coming-of-age comedy. 6285-R Roswell Road NE, Sandy Springs Plaza shopping center. act3productions.org. ACT 3 PRODUCTIONS artstation.org. 770/469-1105. ARTS AT EMORY Music Lessons and Carols, Glenn Auditorium, 1672 N. Decatur Road. 8 PM Fri., 4 & 8 PM Sat. Dec. 9 Emory University Symphony Orchestra. Jan. 18 Emory Arts Showcase. Jan. 25 Emory Community Choral Festival. Jan. 31 yMusic. Feb. 6 Emory Jazz Fest: Big Band Night. Feb. 
7 Gary Motley Trio and Barbara Morrison at Jazz Fest. Feb. 13 St. Olaf Choir. Feb. 20 Lang Lang, piano. Theater Jan. 23 Harabel by Gypsee Yo, Mary Gray Munroe Theater, Dobbs University Center, 605 Asbury Circle NE, Atlanta. 7 PM. Jan. 28 Brave New Works. Unless noted, events at Schwartz Center for Performing Arts, 1700 N. Decatur Road, Atlanta. For a comprehensive list of events, visit Dec. 6-7 A Festival of Nine ACTOR’S EXPRESS CENTER FOR PUPPETRY ARTS Dec. 13-22 Libby’s at the Express Jan. 8-Feb. 9 “Six Degrees of Separation,” New York society is rocked by scandal when a young con artist enters its inner circle. 887 W. Marietta St., Atl. actors- express.com. 404/607-7469. Alliance Stage Series Jan. 15-Feb. 9 “The Geller Girls,” world premiere, romantic comedy about two Jewish sisters, living in Atlanta in 1895. Hertz Stage Series Jan. 31-Feb. 23 “In Love and Warcraft,” world premiere, a look at relationships in the digital age. Youth and Families Series Nov. 29-Dec. 29 “A Christmas Carol,” Dickens’ classic tale. Feb. 22-March 9 “Shrek The Musical,” fun fairy tale for family. Woodruff Arts Center, 1280 Peachtree St., NE. 404/733-4650. ALLIANCE THEATRE arts.emory.edu. 404/727-5050. alliancetheatre.org. ATLANTA BALLET Feb. 13-16 Extraordinary dance company performs an exciting collection of premieres, new productions and “Revelations” at the Fox Theatre. 8 PM Thurs.-Fri., 2 ALVIN AILEY AMERICAN DANCE THEATER Nutcracker,” family favorite, live with the Atlanta Ballet Orchestra at the Fox Theatre. Dec. 19 “The Nutty Nutcracker,” wacky spin on the classic at the Fox Theatre. Rated PG-13. Feb. 7-9, 13-15 Jean-Christophe Maillot’s “Roméo et Juliette,” Dec. 6-29 “Atlanta Ballet’s GREG MOONEY GREG MOONEY BRENNA MCCONNELL IN HORIZON THEATRE’S “MADELINE’S CHRISTMAS.” JOAN MARCUS PHYRE HAWKINS, MARK EVANS AND CHRISTOPHER JOHN O’NEILL IN “THE BOOK OF MORMON” FIRST NATIONAL TOUR. “The Book of Mormon” – Jan. 28-Feb. 9 at Fox Theatre Dec. 8 ASO Gospel Christmas Dec. 
13-14 Christmas with the ethereal production redefines the classic love story, staged at Cobb Energy Centre. Feb. 15-16 “Pinocchio,” family ballet at Cobb Energy Centre, 2800 Cobb Galleria Pkwy., Atlanta. BIG CHICKEN CHORUS 404/892-3303. atlantaballet.com. ATLANTA BAROQUE ORCHESTRA Within: A Special Anniversary Retrospective, 4 PM at Roswell Presbyterian Church, 755 Mimosa Blvd. atlantabaroque.org. Jan. 12 Collaborations from 770/993-6316. ATLANTA LYRIC THEATRE Dec. 13-15, 20-22 “Sanders Family Christmas,” starring the original Theatre in the Square cast, staged in the Family Life Hall of First UMC of Marietta, 56 Whitlock Ave. 8 PM Fri.-Sat. & 2 PM Sun. Jan. 3-19 “Duke Ellington’s Sophisticated Ladies,” stylish and brassy retrospective, staged at Cobb Civic Center’s Anderson Theatre, Marietta. 404/377-9948. atlantalyrictheatre.com. ATLANTA OPERA operatic retelling of the famous legend, at Cobb Energy Centre, 2800 Cobb Galleria Pkwy. March 8, 11, 14, 16 “Faust,” atlantaopera.org. 404/881-8885. ATLANTA SYMPHONY HALL Feb. 13 Buddy Guy March 27 David Garrett Celebration Symphony Tour. Dec. 22 Celtic Woman, Christmas ASO, 8 PM Fri., 2 & 8 PM Sat. Dec. 15 ASO Kids’ Christmas, 1:30 & 3:30 PM Sun. Dec. 19-21 A Very Merry Holiday Pops, 8 PM Thurs.-Sat., 2 PM Sat. Dec. 22 Celtic Woman: Home for Christmas, 8 PM. Dec. 30 Widespread Panic’s 2013 Tunes For Tots benefit concert Dec. 31 ASO New Year’s Eve Jan. 4 TDP Alumni Legacy Concert, hosted by Monica Pearson, 7 PM. Free with ticket. Jan. 9, 11 Master and Commander, classical, 8 PM Thurs., 7:30 PM Sat. Jan. 23, 25, 26 Simply Fantastic, classical, 8 PM Thurs., 7:30 PM Sat., 2 PM Sun. Jan. 30-Feb. 1 Purely Russian Drama, classical, 8 PM Thurs. & Fri., 7:30 PM Sat. Feb. 6, 8 Daredevil!, classical, 8 PM Thurs., 7:30 PM Sat. Feb. 7 Paganini: Violin Concerto No. 1, 6:30 PM. Feb. 9 Tchaikovsky Discovers America!, 1:30 & 3:30 PM. Feb. 14, 15 Piano Romance, 8 PM. Feb. 20, 22 All Vaughan Williams, 8 PM Thurs., 7:30 PM Sat. Feb. 
27-March 1 Poetic License, 8 PM Thurs.-Fri., 7:30 PM Sat. Atlanta Symphony Hall, Memorial Arts Building, Woodruff Arts Center, 1280 Peachtree St. NE, Atlanta. 404/733-5000. Dec. 14 Holiday show at Cobb Civic Center, Marietta, with guest quartet Lunch Break. 770/530-2878. bigchickenchorus.org. BUCKHEAD THEATRE Dec. 6 Jason Isbell, St. Paul & the Broken Bones. Dec. 28 North Mississippi Allstars. 3110 Roswell Road, Atl. 404/843-2825. thebuckheadtheatre.com. CALLANWOLDE CONCERT BAND Dec. 8 Holiday POPS!, 3 PM at Decatur First UMC. March 23 Spring Concert, 3 PM at Callanwolde Fine Arts Center, 980 Briarcliff Road, NE. 404/872-5338. calcb.org. COBB ENERGY CENTRE Dec. 7 Brian Setzer Orchestra. Dec. 14-15 Disney Junior Live On Tour! Pirate & Princess Adventure. Dec. 27-29 Shen Yun. Jan. 10 Travis Tritt. Jan. 18 Billy Gardell. Jan. 26 The Midtown Men. Jan. 30 New Orleans! with Aaron Neville & Dirty Dozen Brass Band. Feb. 7-9, 13-15 Atlanta Ballet "Roméo et Juliette." Feb. 15-16 Atlanta Ballet "Pinocchio." Feb. 22 Dinosaur Train Live. Feb. 26 Georgia On My Mind: Celebrating Ray Charles. 2800 Cobb Galleria Pkwy., Atlanta. cobbenergycentre.com. 800/745-3000. CENTER FOR PUPPETRY ARTS Family Series Through Dec. 29 "Rudolph The Red-Nosed Reindeer"™. Jan. 2-Feb. 2 "Stan the Lovesick Snowman." Feb. 6-March 23 "Weather Rocks!" Adults & Teens Feb. 25-March 2 "Great Expectations." Ages 12+. Ongoing "Puppets: The Power of Wonder," a display of 350 puppets from around the world. Tues.-Sun. 1404 Spring St. NW at 18th, Atlanta. 404/873-3391. puppet.org. ATLANTA WIND SYMPHONY Dec. 15 AWS on Holiday, 3 PM at Roswell Cultural Arts Center. atlantawindsymphony.org. ATLANTA SYMPHONY ORCHESTRA Nov. 29-30 Cirque de la Symphonie. Dec. 5, 7 Handel's Messiah, 8 PM Thurs. & 2 PM Sat. Atlanta Symphony Hall, Memorial Arts Building, Woodruff Arts Center, 1280 Peachtree St. NE, Atlanta. atlantasymphony.org. 800/745-3000. ticketmaster.com. Through Dec. 22 "Christmas Canteen 2013," musical revue. Jan. 16-Feb.
9 "Lombardi," the real story of the legendary icon. 128 East Pike St., Lawrenceville. auroratheatre.com. 678/226-6222. AURORA THEATRE CENTERSTAGE NORTH Dec. 13-21 Christmas at Sweet Apple, a heartfelt collection of holiday stories by acclaimed Georgia writer Celestine Sibley. 8 PM Thurs.-Sat., 2 PM Sun. and Sat. (Dec. 21 only). The Art Place-Mountainview, 3330 Sandy Plains Road, Marietta. centerstagenorth.org. 770/516-3330. COBB ENERGY CENTRE Dec. 2 Dave Koz & Friends. Dec. 6 Sinbad. CUMMING PLAYHOUSE Through Dec. 15 "Irving Berlin's White Christmas," holiday musical. Dec. 17 Christmas Classics 2013, 8 PM. Dec. 19 N. Georgia Barber Shop Singers Christmas 2013, 8 PM. Dec. 20 Cumming Playhouse Singers Christmas Concert, 8 PM. Dec. 21 Sounds of Sawnee Christmas Concert, 8 PM. Dec. 22 North Georgia Chamber Symphony Christmas Concert, 3 PM. Dec. 31 The Return, Beatles Tribute Band, 3 & 8 PM. Jan. 3-4 "Me and 4 Others," '50s & '60s rock'n'roll, 8 PM. Jan. 11 Peppino D'Agostino and Carlos Reyes, 3 & 8 PM. Jan. 14-16 "En Mis Palabras" (In my own words), produced by The Atlanta Opera, 10:30 AM & 1 PM. Jan. 18-19 Monroe Crossing, bluegrass, 8 PM Sat., 3 PM Sun. Jan. 24-26 "Magic Jukebox," Mardi Gras Annual Musical Variety Show, 8 PM Fri. & Sat., 3 PM Sun. Feb. 13-March 9 "On Golden Pond," a couple returns to their summer home for the 48th year. Shows at 8 PM Thurs.-Sat. & 3 PM Sun., unless otherwise noted. 101 School St. in the Historic Cumming Public School. 770/781-9178. playhousecumming.com. SOUTHERN SEASONS MAGAZINE | 103 PERFORMING ARTS FABREFACTION THEATRE Feb. 12-March 2 "Shakespeare's R&J," new adaptation of the classic. 999 Brady Ave., Atlanta. 404/876-9468. fabrefaction.org. "Duke Ellington's Sophisticated Ladies" staged by Atlanta Lyric Theatre Jan. 3-19 at Cobb Civic Center. ferstcenter.org. Dec. 14 A Peter White Christmas with Rick Braun and Mindi Abair. Feb. 7-8 Push Dance Company. Feb.
15 Pat Metheny Unity Group. March 1 Carolina Chocolate Drops. Performances at 8 PM, unless noted. 349 Ferst Dr. NW Atlanta at GA. Tech. 404/894-9600. FERST CENTER FOR THE ARTS @ GA. TECH 14TH STREET PLAYHOUSE Dec. 7-8 "The Covering: Let's Get Joyful," holiday stage play, 2 & 7 PM Sat., 2 & 5 PM Sun. Feb. 14-15 "Love Isn't So Simple," experience love and relationships from each side, 8 PM Fri., 3 & 8 PM Sat. 173 14th St. NE, Atlanta. 404/733-5000. 14thstplayhouse.org. FOX THEATRE Dec. 6-29 Atlanta Ballet's Nutcracker. Dec. 16 Christmas with Amy Grant & Vince Gill. Dec. 31 New Year's Eve Comedy with Cedric the Entertainer. Jan. 10 Gregg Allman. Jan. 28-Feb. 9 "The Book of Mormon." Feb. 13-16 Alvin Ailey American Dance Theater. Feb. 21 Robin Thicke. Feb. 28 "Alton Brown Live! The Edible Inevitable Tour." March 4-9 "Once." April 10-27 "The Lion King." 660 Peachtree Street NE, Atlanta. 404/881-2100. foxtheatre.org. ticketmaster.com. GA. ENSEMBLE THEATRE Jan. 9-26 "The Only Light in Reno," world premiere comedy. Feb. 27-March 16 "F. Scott Fitzgerald's The Great Gatsby," Jazz Age classic. Roswell Cultural Arts Center, 950 Forrest St. 770/641-1260. get.org. GA. FESTIVAL CHORUS Dec. 1 Carols by Candlelight, McEachern UMC, Powder Springs. Dec. 5 Carols by Candlelight, Johnson Ferry Baptist Church, Marietta. 7:30 PM. Dec. 10 Christmas Concert, Lenbrook, Peachtree Road, Atlanta. 7:30 PM. 3747 Peachtree Road NE, Atlanta. 404/234-3581. tgafc.org. GWINNETT CENTER Arena Dec. 5 Bill Gaither, 7 PM. Dec. 16 Star 94's Jingle Jam: GA. STATE SCHOOL OF MUSIC Dec. 3 Jazz Combos. Dec. 5 Jazz Band II. Dec. 7-8 16th Annual Gala Holiday Concert, Rialto Center, 8 PM Sat. & 3 PM Sun. Dec. 9 Rialto Youth Jazz Big Band, 7:30 PM. Dec. 16 Rialto Youth Jazz Combos, 7:30 PM. Feb. 2 Bent Frequency, 3 PM. Feb. 9 GSU Symphony Orchestra, 3 PM, Rialto Center. Feb. 9 University Symphony Orchestra, 3 PM, Rialto Center. Feb. 18 neoPhonia New Music Ensemble. Feb.
21 Georgia Bands of Distinction, 7 PM, Rialto Center. Feb. 24 Wind Orchestra & Metropolitan Atlanta Youth Wind Ensemble, 8 PM, Rialto Center. Feb. 25 University Band, 8 PM, Rialto Center. Performances at 8 PM at Kopleff Recital Hall, unless noted. music. Backstreet Boys, The Fray, Avril Lavigne, Goo Goo Dolls. Jan. 18-19 Professional Bull Riders, 8 PM Sat. & 2 PM Sun. Jan. 23 Fresh Beat Band, 7 PM. Feb. 5-9 Ringling Bros. and Barnum & Bailey Circus. March 15 Harlem Globetrotters. Performing Arts Center Nov. 29-Dec. 1 Northeast Atlanta Ballet's "The Nutcracker." Dec. 6-8, 13-15, 20-22 Gwinnett Ballet Theatre's "The Nutcracker." 6400 Sugarloaf Pkwy., Duluth. gwinnettcenter.com. HORIZON THEATRE Through Dec. 31 "The Santaland Diaries," holiday comedy about an out-of-work writer who takes a job as a Macy's Department Store elf. Dec. 7-31 "Madeline's Christmas," family musical based on Ludwig Bemelmans' book. 1083 Austin Ave., Atlanta. 404/584-7450. horizontheatre.com. JAZZ ROOTS: A LARRY ROSEN JAZZ SERIES Dec. 2 Dave Koz & Friends Christmas Tour. Jan. 30 New Orleans Jazz: Aaron Neville & Dirty Dozen Brass Band. Feb. 26 Georgia On My Mind: Celebrating Ray Charles, with Take 6, Nnenna Freelon, Kirk Whalum, Shelly Berg & Clark Atlanta University Band and Singers. Cobb Energy Centre, 2800 Cobb Galleria Pkwy., Atlanta. jazzroots.gsu.edu. 404/413-5901. GA. SYMPHONY ORCHESTRA Dec. 7 Holiday Pops with the GSO & GYSO Choruses, 3 & 8 PM, Murray Arts Center, Kennesaw. Feb. 16 GYSO&C Concert #2, 3 & 7:30 PM, Bailey Center, KSU. georgiasymphony.org. 770/429-7016. Doug Graham, Keena Reddinghunt and Theo Harness. "A Broadway Christmas Carol" Dec. 5-22 at Art Station Theatre. net/atlanta. LEE HARPER & DANCERS HOLIDAY CONCERT Dec. 14 Adult modern dance company performs at 1 PM in the Kellett Theatre, Broyles Arts Center, The Westminster Schools, 1424 W. Paces Ferry Road. Free. westminster.net. leeharperanddancers.com. LIVE! IN ROSWELL SERIES Roswell Cultural Arts Center. Dec.
28 Sixpence None the Richer. Feb. 7 Masters of Soul. 770/594-6232. roswellpresents.com. MARCUS JCC OF ATLANTA Dec. 5-8 "Hershel & The Hanukkah Goblins," family musical presented by Company J. Dec. 16-17 "Annie Jr.," by Company J Youth Ensemble, 6 PM. Jan. 9, 12, 16, 19 "Peter Pan and Wendy," presented by Synchronicity Performance Group. Jan. 26 Jazz at the JCC Series: The Upbeatniks: Beatles Tribute, 7-9 PM. Feb. 15 Jazz at the JCC Series: Michael Feinberg, 8-10 PM. MJCCA-Zaban Park, 5342 Tilly Mill Road, Dunwoody. 678/812-4002. atlantajcc.org. The company with artistic director Robert Battle and associate artistic director Masazumi Chaya. Alvin Ailey American Dance Theater – Feb. 13-16 at Fox Theatre. "ONCE" March 4-9 Musical celebration of life and love, staged at the Fox Theatre. 7:30 PM Tues.-Thurs., 8 PM Fri., 2 & 8 PM Sat., 1 & 6:30 PM Sun. foxtheatre.org/once. PHILIPS ARENA Nov. 30 The Story Tour. Dec. 1 Kanye West. Dec. 14 Pink. Dec. 15 Andrea Bocelli. Dec. 17 Justin Timberlake. Dec. 27 Jay Z. Dec. 31 Widespread Panic. Jan. 9 Jeff Dunham. Feb. 12-17 Ringling Bros. and Barnum & Bailey Circus. Feb. 21 Demi Lovato. Feb. 26 Imagine Dragons. March 15 Harlem Globetrotters. 1 Philips Dr., next to CNN Center. 404/878-3000. philipsarena.com. Pond," relationship drama. 8 PM Thurs.-Sat., 2:30 PM Sun. North DeKalb Cultural Center, 5339 Chamblee Dunwoody Road, Dunwoody. stagedoorplayers.net. 770/396-1726. MICHAEL O'NEAL SINGERS Dec. 8 "A Classy, Brassy Christmas" with Atlanta Symphony Brass Quintet, 3 PM, Roswell UMC, 814 Mimosa Blvd. Dec. 22 Messiah Sing-Along, 3 PM at Roswell UMC. Feb. 9 "Night and Day: Time Pieces," Kaleidoscope, 3 PM, Alpharetta Presbyterian Church. 770/594-7974. mosingers.com. POLK STREET PLAYERS Feb. 14-March 1 "Tuesdays with Morrie" at the Stellar Cellar Theatre, St. James' Episcopal Church, 161 Church St., Marietta. Shows at 8 PM Thurs.-Sat., 2:30 PM Sun. 770/218-9669. polkstreetplayers.com. RIALTO SERIES @ GSU 9TIX. rialtocenter.org. GSU School of Music.
8 PM Sat., 3 PM Sun. Jan. 31 Off the EDGE, Biennial Dance Immersion. Feb. 8 Soweto Gospel Choir from South Africa, 8 PM. Feb. 22 Capitol Steps, musical and political satire straight from Washington D.C., 8 PM. Feb. 28 Johnny Mercer Celebration with Joe Gransden and Carmen Bradford, 8 PM. 80 Forsyth Street NW. 404/413- Dec. 7-8 Gala Holiday Concert, SYNCHRONICITY THEATRE Dec. 6-29 "Peter Pan & Wendy," swashbuckling adventure. Ages 4+. Co-produced with Aurora Theatre. Feb. 7-March 2 "Where the Mountain Meets the Moon," fantasy spin on Chinese folklore. Ages 5+. Staged at 14th Street Playhouse, 173 14th St. NE, Atlanta. 404/484-8636. synchrotheatre.com. "THE BOOK OF MORMON" Jan. 28-Feb. 9 Award-winning musical at the Fox Theatre. 7:30 PM Tues.-Thurs., 8 PM Fri., 2 & 8 PM Sat., 1 & 6:30 PM Sun. Explicit language. broadwayinatlanta.com. THEATRICAL OUTFIT Through Dec. 22 "Gifts of the Magi," heartwarming musical. Jan. 29-Feb. 23 "The Best of Enemies," modern-day parable of transformation and triumph. Balzer Theater at Herren's, 84 Luckie St. NW, Atlanta. 678/528-1500. theatricaloutfit.org. ROSWELL DANCE THEATRE Through Dec. 8 "The Nutcracker," staged at Roswell Cultural Arts Center, 950 Forest St. 770/998-0259. tysod.com. TRUE COLORS THEATRE Feb. 25-March 23 "Race," suspense story, staged at Southwest Arts Center, 915 New Hope Road, Atl. 877/725-8849. truecolorstheatre.org. 7 STAGES Feb. 27-March 23 "Red Badge of Courage," combining puppets, animation, and live actors, co-produced with Kennesaw State University. 8 PM Thurs.-Sat., 5 PM Sun., plus 2 PM March 15 & 22. 1105 Euclid Ave., Atl. 404/523-7647. 7stages.org. WIDESPREAD PANIC'S TUNES FOR TOTS Dec. 30 Benefit concert at Atlanta Symphony Hall. widespreadpanic.com. Take 6 joins an all-star cast of performers. Jazz Roots: Ray Charles Celebration Feb. 26 at Cobb Energy Centre. STAGE DOOR PLAYERS Dec. 6-22 "Ken Ludwig's The Game's Afoot," comedy whodunit. Jan. 24-Feb. 16 "On Golden ALL TIMES AND DATES SUBJECT TO CHANGE.
PLEASE CONTACT INDIVIDUAL VENUE FOR CONFIRMATION. FUN around town WINTER ALPHARETTA CHRISTMAS CELEBRATION Dec. 6 36th annual Christmas Tree Lighting, 5-8 PM at Milton Square City Park, with musical performances, Santa, Christmas Farmer's Market, food trucks, and lighting of a 45-foot live blue spruce with 10,000 lights at 7 PM. 678/297-6078. Dec. 7 Snow on the Square, 1 PM, 2 South Main St. Make snowballs and snowmen, visit Santa and Mrs. Claus in the gazebo, and shop at the Christmas Farmers Market, open from 10:30 AM-4:30 PM. 678/297-6000. awesomealpharetta.com. alpharetta.ga.us. Global Winter Wonderland at Turner Field – Through Jan. 5. story is read aloud. Children will decorate holiday cookies with the chefs. 2:30 PM. 75 Fourteenth St., Atlanta. Reservations required. APPLE ANNIE CRAFT SHOW Dec. 6-7 Annual juried show of original arts and crafts by over 100 of the Southeast's finest artisans, plus homemade soups and goodies, by St. Ann's Women's Guild, at St. Ann Catholic Church, 4905 Roswell Road, Marietta. 9 AM-7 PM Fri., 9 AM-2 PM Sat. st-ann.org. Dec. 29 Panthers. Games at Georgia Dome. atlantafalcons.com. ARCHIBALD SMITH PLANTATION HOME Through Dec. 30 Images of Christmas, with seasonal decor by Roswell Garden Club. Dec. 14 Gingerbread Christmas kids workshops (ages 6 & up), 10:30 & 11:30 AM and 1 PM. $10. Reservations required. Jan. 1-31 "Archie and Gulie: The Smiths Born from Reconstruction" exhibit. 935 Alpharetta St., Roswell. archibaldsmithplantation.org. 770/641-3978. 404/253-3840. ATLANTA HAWKS ATLANTA BELTLINE EASTSIDE 10K Dec. 7 Part of the Atlanta BeltLine Running Series and a Peachtree Qualifier, the race starts at 10 AM at Stoveworks, 112 Krog St., NE Atlanta. Activities at the run/walk include the College Alumni Tailgate Challenge and Neighborhood Challenge. $45. run.beltline.org. ATLANTA BALLET "NUTCRACKER" TEAS Dec.
8, 15, 22 Enjoy tea time at the Four Seasons Atlanta, with seasonal sandwiches and sweets, as dancers from “Atlanta Ballet’s Nutcracker” perform vignettes and the “Nutcracker” Jan. 9-12 Georgia’s largest boating event features hundreds of boats, marine accessories and electronics, boating and fishing seminars, trout pond and more at the Ga. World Congress Center, Hall C, 285 Andrew Young International Blvd., NW, Atlanta. 11 AM-9 PM Thurs.-Fri., 10 AM-9 PM Sat., 10 AM-6 PM Sun. $12 (free, 15 & under). 954/441-3227. atlantaboatshow.com. ATLANTA BOAT SHOW Home Games Dec. 4 L.A. Clippers Dec. 6 Cleveland Dec. 10 Oklahoma City Dec. 13 Washington Dec. 16 L.A. Lakers Dec. 18 Sacramento Dec. 20 Utah Dec. 28 Charlotte Jan. 3 Golden State Jan. 8 Indiana Jan. 10 Houston Jan. 20 Miami Jan. 24 San Antonio Jan. 29 Detroit Feb. 1 Minnesota Feb. 4 Indiana Feb. 8 Memphis Feb. 19 Washington Feb. 22 New York Feb. 25 Chicago Philips Arena. hawks.com. ATLANTA HISTORY CENTER AM-4:30 PM. Celebrate the season with festive family activities, plus “time travel” to holidays past at the Smith Family Farm and Swan House, with living history characters. Jan. 5 Three Kings Day, 1-5 PM. Hispanic holiday festival, held in collaboration with the Mexican Consulate and the Instituto de Mexico, with storytelling, music, performances, food and fun. 130 West Paces Ferry Road, Atlanta. 404/814-4000. Dec. 14 The Holiday Spirit, 10:30 Garden Lights, Holiday Nights Atlanta Botanical Garden – Through Jan. 4 Through Jan. 4 Garden Lights, Holiday Nights, featuring 30 acres of botanical-inspired displays created with nearly 1.5 million lights. Open nightly, 5-10 PM (except. Dec. 24 & 31). Dec. 7 Reindog Parade, 11 AM. Costumed canines strut their stuff (must RSVP to participate), plus a Doggie Expo and pictures with botanical St. Nick. 1345 Piedmont Ave. NE. atlantabotanicalgarden. org. 404/876-5859. ATL. BOTANICAL GARDEN atlantahistorycenter.com. ATLANTA FALCONS Home Games Dec. 15 Redskins ATL. 
JEWISH FILM FEST Jan. 29-Feb. 20 Annual festival showcases an international collection of narrative and documentary films that explore Jewish life, culture and history, with screenings at several area venues. A benefit for the Children's Program of Visiting Nurse Health System (vnhs.org. 404/215-6010). ajff.org. 404/806-9913. Christmas at Callanwolde – Dec. 6-17. BACK-TO-NATURE HOLIDAY MARKET AND FESTIVAL Dec. 7 Browse hand-made and eco-friendly goods from 40 vendors at the Chattahoochee Nature Center, from 10 AM-4 PM. Offerings range from original art and ornaments to candles and clothing. Musical entertainment will be on tap, with lunch and desserts available for purchase, plus free admission to CNC, 9135 Willeo Road, Roswell. 770/992-2055 ext. BRIDAL EXTRAVAGANZA Jan. 26 Browse themed wedding galleries, see a couture fashion show, sample appetizers and cake, meet the city's best wedding experts and more at the Atlanta Convention Center, AmericasMart, Bldg. 2, 230 Sprint St. $20 ($15 adv). beabride.net. demonstrations, live acoustic music and gourmet food trucks. 10 AM-6 PM Sat., 11 AM-5 PM Sun. Plans for a Preview Party, Jan. 24. 980 Briarcliff Road NE. 404/872-5338. callanwolde.org. Jan. 30-Feb. 2 Inspiration Avenue, designer house. 11 AM-5 PM. Feb. 1 Drinks & Antiques, 7-10 PM, enjoy drinks and hors d'oeuvres from Soirée, while browsing the booths. cathedralantiques.org. 404/365-1107. BULLOCH HALL EVENTS Stories of the 1800s, self-guided holiday tours, $8. Dec. 3, 10, 12, 17 Christmas High Teas, 4 PM. Ladies in period dress serve a two-course tea, plus a tour of the house decorated for Christmas. $40. Reservations only. Dec. 7, 14-16, 18-19, 21-23 "A Christmas Carol" performances by Kudzu Players. $15. RSVP. Dec. 14 Christmas for Kids (ages 5-11), with yule log and candy cane hunt, craft, snack, storytelling, and visit from Mr. & Mrs. Claus. 10 AM-noon. $10. Reservations only. Dec.
19 Mittie Bulloch and Theodore Roosevelt Sr.’s 1853 Wedding Reenactment with tours from 6:30-9 PM. Reservations only. $12 ($6, ages 6-18). 180 Bulloch Ave., Roswell. Through Dec. 30 Best Loved callawaygardens.com. Through Dec. 30 22nd annual Fantasy in Lights®, Christmas spectacular with 8 million lights and over a dozen custom scenes in a dazzling outdoor display. Pine Mountain. 1-800-CALLAWAY. CALLAWAY GARDENS CANDLELIGHT HIKE TO THE MILL 236. chattnaturecenter.org. Dec. 14 Mile-long guided night Feb. 8 Annual 10K race at 7:30 AM along the Chattahoochee River in Roswell to benefit the Chattahoochee Nature Center. Also a 1-Mile Fun Run at 8 AM. 10K is official qualifier for the Peachtree Road Race. 770/992-2055. chattnaturecenter.org. CHATTAHOOCHEE CHALLENGE 10K RACE BARRINGTON HALL Make-A-Wreath for Make-AWish, with festive holiday wreaths available for purchase. 1-4 PM. Dec. 7-8 Holiday Bazaar, 11 AM-5 PM Sat., 1-5 PM Sun. Dec. 21 & 23 Cookies with Mrs. Claus, 10 AM & noon. $10 per child. Reservations required. 535 Barrington Dr., Roswell. Dec. 1, 8, 15 Open House for georgiastateparks.org. hike goes inside the Civil War era textile mill ruins of New Manchester, 7-9 PM, at Sweetwater Creek State Park in Lithia Springs. $5 hike, $5 parking. 770/732-5871. CHATTAHOOCHEE NATURE CENTER Market and Festival, 10 AM-4 PM, featuring local, handmade items. Dec. 15 Reindeer Day, 1-4 PM. Meet live reindeer, plus crafts and campfire treats. Feb. 8 Chattahoochee Challenge 10K Race and Fun Run, 8 AM. 9135 Willeo Rd., Roswell. 770/992- Dec. 7 Back to Nature Holiday CATHEDRAL ANTIQUES SHOW & TOUR OF HOMES and most prestigious antiques show features exquisite 18th, 19th and early 20th century antiques (from rugs and furniture to art, porcelain and silver) at The Cathedral of St. Philip, 2744 Peachtree Road, Atlanta. 10 AM-5 PM Thurs.-Fri., 10 AM-4 PM Sun. This 43rd annual show benefits Crossroads Community Ministries. Feb. 
2 Tour of Homes, self-guided tour of beautifully designed homes in distinctive neighborhoods. Additional special events: Jan. 26 First Place Passion Tour, 1-4 PM. Tour five charming houses of young professionals. Jan. 29 Gala Preview Party. Jan. 30 Bobby McAlpine's and Susan Ferrier's talk & book signing. Jan. 30-Feb. 1 Atlanta's oldest southerntrilogy.com. 770/640-3855. roswellgov.com. 2055. chattnaturecenter.org. CHICK-FIL-A BOWL Dec. 31 46th annual rivalry game between the ACC and SEC at 7:30 PM in the Georgia Dome. 404/586-8499. chick-fil-abowl.com. BIG APPLE CIRCUS Jan. 30-Feb. 17 "LUMINOCITY," featuring soaring trapeze artists, flying acrobats, magnificent steeds, juggler extraordinaire, amazing pups and more – with no seat more than 50 feet from the ringside, at Verizon Wireless Amphitheatre, 2200 Encore Pkwy., Alpharetta. bigapplecircus.org. bullochhall.org. 770/992-1731. CALLANWOLDE FINE ARTS CENTER Dec. 6-17 Christmas at Callanwolde. Jan. 25-26 Callanwolde Arts Festival, inaugural indoor arts fest, featuring works by 150 painters, photographers, sculptors, metalwork, glass artists, jewelers and more, plus artist BIG-TO-DO Feb. 9 A snow adventure awaits at Snow Mountain at this annual CHILDREN'S CHRISTMAS PARADE Dec. 7 33rd annual parade in Atlanta, from 10:30 AM-noon. Route starts at Peachtree St. near Baker St., turns right onto Marietta St., turns left and ends on Centennial Olympic Park Drive. choa.org. CHRISTMAS AT CALLANWOLDE Gothic-Tudor mansion decked for the season by Atlanta's top interior and floral designers. Holiday shopping and holiday music take place throughout the event, with daily seasonal workshops. 10 AM-4 PM Mon.-Sat., 11 AM-6 PM Sun., extended hours 5-7:30 PM Wed. $20 ($15 seniors, $12 ages 4-12, free for 3 & under). Dec. 5 VIP Preview Party with Von Grey. 7 PM. $75. RSVP only. Dec. 6 & 13 Musical Winter Nights, 6-9 PM. $30. RSVP only. Dec. 7 & 14 Breakfast with Santa, 9 AM. $20.
RSVP only. Dec. 8 & 15 Teddy Bear Tea, 3 PM. $25. RSVP only. Dec. 11 Cocoa & Caroling, 5:307:30 PM. $5 (free with $20 tour). Dec. 12 Garden Club Day Dec. 14 Holiday Handmade 980 Briarcliff Road NE, Atlanta. 404/872-5338. Dec. 6-17 Tour the magnificent Dec. 14-15 “Pirate & Princess Adventure” features characters from “Sofia the First” and “Jake and the Never Land Pirates” at Cobb Energy Centre, 2800 Cobb Galleria Pkwy., Atlanta. Shows at 7 PM Sat.; 11 AM, 2 & 5 PM Sun. ticketmaster.com. DISNEY JUNIOR LIVE Dec. 31 New Year’s Eve bash at the Hyatt Regency Atlanta with live bands and DJs. 8:30 PM-2 AM. downtowncountdown.net. DOWNTOWN COUNTDOWN EVENING IN BETHLEHEM Dec. 7 Roam through a 1st century marketplace, brought to life by costumed characters and stable animals, and witness a live production that tells the story of the first Christmas at Roswell United Methodist Church, 814 Mimosa Blvd. Free (suggested donation: $10 per family). Indoors. Reserve time for nativity at 5, 6 or 7 PM. 770/993-6218. rumc.com. Virginia-Highland Tour of Homes – Dec. 7-8 800/864-7275. sponsored by America’s State Parks. georgiastateparks.org. Plantation, Greensboro. $30 ($20, 12 & under). ritzcarltonlodge.com. christmasatcallanwolde.org. Through Dec. 22 Holiday fun around the Dahlonega Square, with lighting of the square, hometown parade with Santa’s arrival, caroling, entertainment, wine sampling and live theater. dahlonega.org. 706/864-3513. DAHLONEGA’S OLD FASHIONED CHRISTMAS DECATUR CANDLELIGHT TOUR OF HOMES Dec. 6-7 31st annual holiday homes tour features seven homes and two points of interest around the city of Decatur. 5:30-9:30 PM. $25. decaturtourofhomes.com. DINOSAUR TRAIN LIVE Through Jan. 5 “Marco Polo: Man & Myth,” highlights the epic 13th century journey of the legendary traveler that spanned thousands of miles and more than 24 years. Through Jan. 5 “Winter Wonderland,” festive exhibit of decorated trees and artifact displays celebrates holidays, traditions and cultures. 
Dec. 7, 14, 21 Holly Jolly Film Fest Opening Feb. 15 “Whales: Giants of the Deep,” features life-size models, interactive displays and videos about the marine mammals. 767 Clifton Road. 404/929-6300. fernbankmuseum.org. FERNBANK MUSEUM OF NATURAL HISTORY Ongoing Take an hour-long guided tour of this historical movie palace, rich in performing arts and architectural history. 10 AM-1 PM Mon. & Tues.; 10 & 11 AM Sat. $18 (free, 10 & under). foxtheatre.org. FOX THEATRE TOURS GALLERY GLASS SHOW Dec. 7 10th annual show at Taylor Kenzil Gallery in Roswell features original glass sculpture, functional works and unique ornaments. 1-5 PM. Bring new unwrapped toy for Toys for Tots fundraiser. 16 Elizabeth Way. 770/993-3555. Through Jan. 12 “Moneyville,” hands-on tour through money factory, bank, shopping district, stock market and more. Jan. 29-Feb. 1 Adventures Through Words!, activities & crafts. Open daily. 275 Centennial Olympic Park Dr., NW. 404/659-KIDS. childrensmuseumatlanta.org. IMAGINE IT! CHILDREN’S MUSEUM OF ATLANTA IMAX® THEATRE AT FERNBANK MUSEUM Universe,” a breathtaking tour of deep space through images captured by Hubble and the world’s most powerful telescopes. Through Jan. 23 “Penguins” follows a brave king penguin on the journey of a lifetime. Ongoing Martinis & IMAX®. Enjoy cocktails, films, live music or DJ, and cuisine, 6:30-11 PM Fridays. Tickets: 404/929-6400. 767 Clifton Road, Atlanta. fernbank Through Jan. 2 “Hidden taylorkinzelgallery.com. GEORGIA BRIDAL SHOW Feb. 22 Kids will be transported ticketmaster.com. on a journey back to the Mesozoic, when dinosaurs roamed the earth – and rode on trains. Cobb Energy Centre, 2800 Cobb Galleria Pkwy., Atlanta. Shows at 1 & 4 PM. FIRST DAY HIKES Jan. 
1 Guided hikes at state parks across Georgia to motivate people to exercise outdoors and re-connect with nature and family, as part of the nationwide event. Jan. 5 Find everything needed to create a dream wedding, view photographers' pictures, taste cake and food samples from caterers, watch a fashion show, plan a honeymoon and more at Cobb Galleria Centre, 2 Galleria Pkwy., Atlanta. Noon-5 PM. $15. Jan. 26 Gwinnett Center, 6400 Sugarloaf Pkwy., Duluth. $15. eliteevents.com. museum.org. 404/929-6300. JINGLE BELL RIDE Dec. 14 Gentle pace ride on Riverside/Azalea Multi-Use Trail, followed by hot chocolate and cookies. 3 PM, Riverside Park, Roswell. 770/643-8010. GLOBAL WINTER WONDERLAND Through Jan. 5 View larger-than-life lantern designs of landmarks from countries all over the world at this multicultural theme park at Turner Field, Atlanta. $25 ($19 seniors, $17 ages 5-12, free ages 4 & under). globalwonderland.com. "Hidden Universe" at Fernbank's IMAX. Magnified view of the Sun from one of the world's most powerful telescopes. JINGLE BELL RUN/WALK® Dec. 7 Get in the spirit at the Arthritis Foundation's 5K run/walk at Turner Field, 755 Hank Aaron Dr. SW, Atlanta. Holiday costumes encouraged. arthritis.org. 678/237-4458. HOME FOR THE HOLIDAYS Through Dec. 8 Designer Showhouse & Marketplace with tours, workshops and seasonal activities at a newly constructed designer home at 58 Blackland Road NW, Buckhead. 11 AM-5 PM Thurs.-Sat., 1-5 PM Sun. $25 (free for 10 & under). atlantaholidayhome.com. LAKE LANIER'S MAGICAL NIGHTS OF LIGHTS Through Dec. 31 5-10 PM. Enjoy a 7-mile drive-through tour of animated holiday light displays at Lake Lanier Island Resort, plus a Holiday Village with carnival rides and games, pony rides and sweet treats; and shopping and Santa visits at Santa's Workshop. lakelanierislands.com. ICE SKATING RINK AT THE RITZ-CARLTON LODGE Through Feb. 16 Ice skating sessions daily, from 10 AM-2 PM and 4-8 PM, at Reynolds
MACY’S PINK PIG TRAIN Through Jan. 5 Ride Priscilla 108 Stone Mountain Christmas Through Jan. 1 the Pink Pig as she makes tracks through a life-sized storybook beneath the 1950s-themed Pink Pig Tent on the upper-level parking deck at Lenox Square Mall near Macy’s. 3393 Peachtree Road, Atlanta. $3 per ride. Open daily (except Dec. 25). Benefits Children’s Healthcare of Atlanta. comes to town at Gwinnett Center on Feb. 5-9 and Philips Arena on Feb. 12-17 with “Built to Amaze” show. ringling.com. ticketmaster.com. ROAD ATLANTA Dec. 6-8 NASA 5300 Winder Hwy., Braselton. macys.com/believe. roadatlanta.com. 800/849-RACE. MARIETTA PILGRIMAGE CHRISTMAS HOME TOUR ROSWELL HOLIDAY EVENTS by Roswell Dance Theatre at Roswell Cultural Arts Center, 950 Forrest St. tysod.com. Dec. 6-8 1850s Heirloom Holidays Living History weekend. in the Kennesaw Avenue historic district that are lovingly restored and decorated for the season at this 27th annual award-winning event. Day tour, $25; candlelight tour, $20; combo ticket, $30. Dec. 7-8 Tour private residences Dec. 1-8 “The Nutcracker,” staged mariettapilgrimage.com. 770/429-1115. 770/426-4982. at The Gardens Great Oaks, 786 Mimosa Blvd. 7 PM. $60. 770/992- roswellheirloomholidays.com. Dec. 6 Antebellum Holiday Dinner MARIETTA SQUARE FARMER’S MARKET 1665. Dec. 7 Breakfast with Santa, variety of fresh, locally grown, seasonal produce and garden products from 10 AM-1 PM Sat. (year-round) and noon-3 PM Sun. (April-Nov). North Park Square. Weekends Vendors offer a 770/641-3760. Dec. 7 Holiday Tour of Historic 9-11 AM, Bill Johnson Community Activity Bldg., Roswell Area Park. Homes, including Southern Trilogy. 11 AM-5 PM. $30 ($5, under 12). Living History interpreters and period crafters on tour route, 1-5 PM. roswellhistoricalsociety.org. mariettasquarefarmersmarket. net. 770/499-9393. Jan. 11 Advance Auto Parts’ monster truck series comes to the Georgia Dome in Atlanta. Gates open at 2 PM, show at 7 PM, Pit Party from 2-5 PM. $25-$80. 
monsterjam.com. MONSTER JAM the Square, 5 PM, with carolers, lighting of the Square and Santa Claus. Intersection of Hwy. 9 & Hwy. 120. Living history interpreters on Canton Street and bonfire in Roswell Park, 6-10 PM. 770/992-1665. Dec. 7 Holiday Celebration on Feb. 21-23 Experts in the landscaping and home remodeling industries will showcase the latest services and products, plus radio broadcasts and speakers. Gwinnett Center, 6400 Sugarloaf Pkwy., Duluth. $7 ($6 seniors; free, 12 & under). atlantahomeshow.com. NORTH ATL. HOME SHOW (ages 12 & under), shopping and crafts at Adult Rec Center, Grimes Bridge Road. 6:30-8:30 PM Sat., 1-4 PM Sun. roswellgov.com. roswellgov.com. 770/641-3950. Dec. 7-8 Santa’s Secret Gift Shop Weekends “Sights, Symbols and Stories of Oakland,” 2 PM Sat. & Sun. $10 ($5 students & seniors). 248 Oakland Ave. SE, Atlanta. oaklandcemetery.com. OAKLAND CEMETERY GUIDED WALKING TOURS Barrington Hall. 11 AM-5 PM Sat., 1-5 PM Sun. Dec. 8 Caroling on Canton Street, 5-7:30 PM. Dec. 14 Night at the Polar Express, family movie night, 6:3010 PM at E. Roswell Rec. Center, 9000 Fouts Road. Must RSVP. 770/641-3950. Dec. 7-8 Holiday Bazaar at 770/594-6134. PROFESSIONAL BULL RIDERS ROSWELL GHOST TOUR Ongoing Learn about the unique history and stories of paranormal activity on guided walking tour. 8 PM. Must RSVP. $15 ($10, 12 & under). roswellghosttour.com. Jan. 18-19 Top 35 bull riders in gwinnettcenter.com. pbr.com. the world compete at the Arena at Gwinnett Center, Duluth. 770/649-9922. RHODES HALL SANTA winter wonderland at the “Castle on Peachtree” with holiday music, refreshments, art activities and personal appointments with Santa. Benefits Ga. Trust for Historic Preservation. Must RSVP. 1516 Peachtree St., N.W. Atlanta. Dec. 7-21, weekends Step into a ROSWELL ROOTS Feb. 1-28 A celebration of Black roswellroots.com. History & Culture, with numerous events at a variety of locations. 404/885-7812. georgiatrust.org. Macy’s Pink Pig – Through Jan. 
5. RINGLING BROS. AND BARNUM & BAILEY CIRCUS Feb. 5-9, 12-17 The circus SANTA ON THE SQUARE Dec. 5, 7-8, 14-15, 19-24 Santa arrives in Glover Park on the Marietta Square at 5 PM Dec. 5, followed by tree lighting at 6 PM. Santa's Workshop on select dates. marietta.gov. 770/794-5601. "Moneyville" Through Jan. 12 Imagine It! Children's Museum of Atlanta. SCOTT ANTIQUE MARKETS SHOWS Dec. 12-15, Jan. 9-12, Feb. 6-9 World's largest series of indoor antique shows at Atlanta Expo Center, 3650 Jonesboro Road, SE. 1-6 PM Thurs., 9 AM-6 PM Fri.-Sat., 10 AM-4 PM Sun. 404/361-2000. scottantiquemarket.com. in the Southeast, hosted by Trinity School for 32 years, with original works by 350 selected artists. 4301 Northside Pkwy. Free parking and admission. Opening Night: 6-9 PM Feb. 3. Cocktails & Canvases special evening: 6-9 PM Feb. 8. Feb. 15 Spotlight on Art Gala Auction at InterContinental Buckhead. Saks Gallery Nov. 25-Jan. 28 Saks Fifth Avenue Gallery at Phipps Plaza, with a special collection from Trinity Artists Market for sale. Dec. 7 Holiday Sing-along, 11 AM. Jan. 10 Saks Night Out, 6-8 PM. Ringling Bros. and Barnum & Bailey Circus Feb. 5-17 – Gwinnett Center & Philips Arena. Tues.-Sat., 12-5 PM Sun. Special events include: Family Day, 12-3 PM Dec. 7; Late Night Shopping, 6-8 PM Thursdays; Jewelry Trunk Show, 11 AM-3 PM Dec. 14; Holiday Sale, 10 AM-5 PM Dec. 23. 770/394-4019. spruillarts.org. STONE MTN. CHRISTMAS Through Jan. 1 (select dates) St. Mtn. Park's Crossroads is transformed into a winter wonderland with two million lights, parade, live shows, carolers, Santa, the Snow Angel, Polar Express 4D, sing-a-long train and holiday laser show. Adventure pass, plus park entry fee. stonemountainpark. 866/577-3600. THOMASVILLE'S VICTORIAN CHRISTMAS Dec. 12-13 Journey back to the 1890s for holiday fun in downtown Thomasville with horse-drawn carriage rides, strolling carolers, food, shopping and St. Nick.
6-10 PM. downtownthomasville.com. SEASON OF MAGIC IN PIEDMONT PARK Take a horse-drawn carriage ride through historic Piedmont Park, with festive holiday lights, or enjoy a spin on the carousel. Offered Thurs.-Sun. Carriage: $25 adults, $20 kids. Carousel: $3. Through Jan. 5 (select dates) VININGS JUBILEE TREE LIGHTING & SLEIGH RIDES 6-8 PM, with holiday music, treats, reindeer pony rides, Vinings Express train rides, face painting, balloon twisting and Santa Claus. Dec. 13, 20 Horse-drawn Sleigh Rides, 5-8 PM. Free. 4300 Paces Ferry Road, Atlanta. Dec. 5 Christmas Tree Lighting, piedmontpark.org. spotlightonart.com. 404/2318119. com. 770/498-5690. SNOW MOUNTAIN A snow-packed mountainside of action-packed excitement, with 20 lanes of tubing, snowman building and more. Ticketed event, plus park entry fee. 770/498-5690. Through Feb. 17 (select dates) Through Dec. 23 Peruse unique handcrafted items by local artisans at the Spruill Gallery, 4681 Ashford Dunwoody Road. 10 AM-6 PM SPRUILL GALLERY HOLIDAY ARTISTS MARKET STONE MOUNTAIN PARK Christmas. Through Jan. 1 Stone Mountain Through Feb. 17 Snow Mountain Vehicle entry fee. 770/498-5690. stonemountainpark.com. viningsjubilee.com. VIRGINIA-HIGHLAND TOUR OF HOMES Dec. 7-8 19th annual tour features seven distinct homes in this charming Atlanta neighborhood, with food tastings at each stop. 10 AM-4 PM Sat., 12-4 PM Sun. $30 ($25 adv.) vahitourofhomes.org. stonemountainpark.com. SOUTHERN LIVING SHOWCASE HOME North Atlanta Home Show Feb. 21-23 – Gwinnett Center 4,000-square-foot, customdesigned home overlooking the golf course at Currahee Club on Lake Hartwell in Toccoa. Open Fri.-Sun. $10, benefits several nonprofits. 864/527-0463. Through Dec. 22 Tour a Dec. 5-8 Handmade ceramic works, from functional to sculptural, by more than 40 local artists, at Art Center West, 1355 Woodstock Road, Roswell. 770/641-3990. WORKS IN CLAY HOLIDAY SHOW AND SALE dillardjones.com. Feb. 
3-8 One of the largest, most diverse art exhibitions and sales SPOTLIGHT ON ART ALL TIMES AND DATES SUBJECT TO CHANGE. PLEASE CONTACT INDIVIDUAL VENUE FOR CONFIRMATION. 110 TRA VEL © DEAN FIKAR | ISTOCKPHOTO.COM Wildflower magic in the Texas Hill Country. SOUTHERN SEASONS MAGAZINE | 111 Country If your mind’s-eye images of Texas mainly evoke the endlessly flat landscapes of vintage Western movies and the high-rise skylines of throbbing contemporary capitals, then it’s clear you need a trip to the Hill Country. BY VIVIAN HOLLEY Deep in the heart of Hill 112 A sizeable 15-county swatch between San Antonio and Austin, the Hill Country is nothing at all like either of those Lone Star scenes. Or, for that matter, any other scene. Here rustic, there refined (and sometimes both), it’s a territory with an easy-breezy style and a storied heritage all its own. Set in rolling, bluebonnet-blessed terrain accented by rugged reaches of limestone, spreading oaks, and cypress-shaded creeks are singular attractions and lodgings that guarantee an uncommon getaway. AL RENDON The Westin La Cantera High Country Resort. A taste for luxury? A craving for golf? Head straight to La Cantera, a Westin resort set high on a hilltop with sweeping views including an overlook of the premiere golf experience in South Texas (it’s the only area resort to serve up 36 holes of golf). Also on tap are a pampering spa and a health club, five lagoon-style pools, facilities for kids and teens, and a mile-long nature trail. A complimentary shuttle will whisk you to The Shops at La Cantera for a million-plus square feet of upscale shopping happiness. 
With its red tile roof and tall white walls, the property commands a Texas-big presence, its architectural ambiance inspired by the legendary King Ranch where, it's said, the lady of the house directed builders to create rooms where "anybody could walk in in boots." And so they did at La Cantera, with a seamless fusion of regional and Spanish design elements christened Texas Colonial. Massive doors open into a lofty lobby – picture blue slate floors, loomed rugs, handsomely carved furnishings – where guests gather pre-dinner for icy margaritas beside a giant fireplace. Then they move on to Francesca's at Sunset, as romantic as it sounds, with a menu that ranges from buffalo and antelope to produce and wine from area ranchers, farmers, and vintners.

[Photos: "Sunday Houses," Fredericksburg Herb Farm, Magnolia House. WILDSEED FARMS / FREDERICKSBURG CONVENTION & VISITOR BUREAU]

Next stop, Fredericksburg – a skip away but a different experience altogether. Canny natives have long flocked to this charmer situated near the center of the Hill Country, and if they should prefer to keep it a secret . . . well, who could blame them. The town was settled in 1846 by German immigrants, whose legacy still thrives in the local architectural, cultural and culinary styles, not to mention street names and pioneer museums. Accommodations are a perfect fit for the unpretentious, supremely walkable environs where shops and eateries cluster on and around Main Street. Most distinctive are the "Sunday Houses," historic cottages built by settlers who lived in distant rural areas and used them during weekends in town for trading and attending church. Today, they serve as one-of-a-kind guest houses. In Fredericksburg's wide-ranging collection of B&Bs, a prime candidate is Magnolia House, owned by Claude and Lisa Saunders. Claude happens to be a master in the kitchen, treating guests to bountiful breakfasts and a supply of from-scratch cookies.
Not to be missed is the parade of art galleries, including The Good Art Company, Insight Gallery, Whistle Pik Galleries, and Artisans at Rocky Hill – all on the map of the Art Walk staged the first Friday of each month. Another must is Fischer & Wieser, a shop packed with specialty foods from peach pecan butter to roasted raspberry chipotle sauce. Antiques and interior design emporiums abound, the likes of Carol Hicks Bolton Antiquities, with 14,000 square feet to troll for treasures.

[Photo: Torre di Pietra Vineyards in Fredericksburg.]

Wine tasting is a given. To check out the rapidly exploding winery story in these parts – the current tally is 20 in town and some three dozen in the region – set sights on Wine Road 290. When feet fail, lunch beckons at fun stops such as Wildseed Farms, home to acres of blooms, and The Pink Pig, down home and delicious. Come dinner, make it August E's, or the Farm Haus Bistro, part of a complex with its own cottages and spa.

Don't let the captivating small-town vibe fool you. Along with the low-key lures you get two attractions of national note and justifiable local pride. One is right in town: the outstanding National Museum of the Pacific War, inspired by respect for Fredericksburg's native son, World War II Fleet Admiral Chester W. Nimitz. The other, nearby at Stonewall, is the LBJ Ranch at Lyndon B. Johnson National Historical Park. Known as the Texas White House during the administration of the 36th president, it's an intimate look at Johnson family life, operated by the National Park Service. Meander in your car along the Pedernales River and you'll see the reconstructed birthplace of the president, the one-room schoolhouse where he learned to read, and the comfortable, modest home that was opened for public tours after the 2007 passing of Lady Bird. With the region's year-round calendar of kick-up-your-boots events, it's hard to settle on the ideal season to head for the hills.
But my vote goes to the state's wildflowering springtime – and its magical bonus of bluebonnets.

INFORMATION: (210) 558-6500; (888) 997-3600; (830) 997-0306

Best Road Food in the South

According to AAA, nearly 25% of the U.S. population is likely to hit the road during the holidays. Southern Living has named the best food stops along Southern interstates that are heavily traveled this time of year. From cafes and barbecue joints to meat 'n' threes and bakeries, these fast food alternatives are less than 5 miles off the interstate, so travelers will be back on the road in no time.

I-40: Magpies Bakery, Exit 289, Knoxville, TN. The Feed Bag Restaurant, Exit 174, Farmington, NC. Sunrise Biscuit Kitchen, Exit 270, Chapel Hill, NC.

I-35: Miller's Smokehouse, Exit 294A, Belton, TX. Babe's Chicken Dinner House, Exit 477, Sanger, TX. Fancy That, Exit 109, Norman, OK.

I-95: Wilson's BBQ and Grill, Exit 11A, Emporia, VA. Broad Street Deli & Market, Exit 73, Dunn, NC. Clark's Inn & Restaurant, Exit 98, Santee, SC.

I-77: Local Dish, Exit 85, Fort Mill, SC. Lake View Restaurant, Exit 8, Fancy Gap, VA. Taste of West Virginia Food Court, Exit 45, Beckley, WV.

For even quicker stops, fuel up on one of these regional specialties from gas stations along the way: Blue Bell Ice Cream (TX), Carolina Country Snacks Fried Pork Rinds (NC), Goo Goo Clusters (TN), Lone Star Western Beef Jerky (WV), Zapp's Potato Chips (LA). As for Southern sodas, sip on this: Ale-8-One (KY), Buffalo Rock Golden Ginger Ale (AL), Cheerwine (NC), Nehi (GA), Old Dominion Root Beer (VA).

FOR MORE OF THE SOUTH'S BEST FOOD AND TRAVEL TIPS, VISIT SOUTHERNLIVING.COM.

Home for the Holidays

This holiday season, Davio's Atlanta is throwing a house party. A gingerbread house party, that is! On Sunday, Dec.
8, from 12-2 p.m., children of all ages are invited to participate in a one-of-a-kind workshop with some of Atlanta's renowned pastry chefs and sugar artists. Davio's Pastry Chef Kathleen Miliotis will lead a demonstration and workshop, teaching guests how to build personalized gingerbread houses fit for the North Pole! The event is $30 per child, with free adult admission. Each participating child will receive one gingerbread house to decorate with all the delectable trimmings, as well as delicious afternoon snacks prepared by Executive Chef Richard Lee. This event is sweet as can be with gingerbread, marshmallows, and all kinds of candy. In the generous spirit of the holidays, each family is asked to help in one small way: arrive at the workshop with one unwrapped toy so North Atlanta Toys for Tots can spread the seasonal joy!

TO REGISTER, CALL 404/844-4810. DAVIOS.COM/ATL.

[Photos: Thanksgiving; Feast of the Seven Fishes; New Year's Eve. PHOTOGRAPHY © DAVIO'S]

DINING

Star-Studded Sushi at UMI in Buckhead Plaza
BY JENNIFER BRADLEY FRANKLIN | PHOTOGRAPHY BY SARA HANNA

[Photo: Umi's Lobster Box Roll.]

Sushi can be a little intimidating, even to those who write about food for a living. However, at Umi, Buckhead's newest shrine built for Japan's most well-known – and yet, perhaps most misunderstood – cuisine, it's hard to make a misstep with the spectacular menu. Umi is located just across the Buckhead Plaza courtyard from Chops (using the same packed, but efficient, valets) and once you're tucked into the dimly lit dining room, you'll quickly see what all the buzz is about. Stunning staff scurry about, carrying trays topped with craft cocktails and dishes so colorful you may be tempted to try one of everything (not advisable, unless you have an unlimited expense account – or a trust fund).

[Photos: Chilean Sea Bass Yu-An Yaki. Umi's dining room is bathed in soft light, for a modern look that's at once appetizing and cozy.]
[Photo: Executive Chef Fuyuhiko Ito.]

The menu itself is exquisitely created by Fuyuhiko Ito, who has long been a favorite chef among people who really know their sushi. He gained a staunch following as the man behind the knife at MF Buckhead. The story goes that two of the restaurant's regulars, Charlie Hendon and Farshid Arshid, were so saddened when the posh temple of sushi shuttered in 2012, they offered to open an equally upscale spot to present Ito's pristine dishes. Since Umi opened in May, it's already become a destination for visiting and local celebrities, beautiful people and monied diners. Sir Elton John attended the friends and family pre-opening dinner, visiting movie stars make regular appearances and on any given night the dining room might be filled with guests whose names you'd recognize from reading Atlanta's business paper. Even with the who's who of guests populating the intimate dining room and sushi counter, it's refreshing that each guest seems to get the star treatment. If the menu seems overwhelming, as it did to me and my guest, the expert, friendly staff is only too happy to make recommendations (and patiently answer a litany of questions). For our meal, we started with a beautiful plate of thinly sliced yellowtail, cilantro, ponzu and jalapeno peppers, which gave the dish just enough heat to make the ingredients sing. The madai (a variety of red snapper) carpaccio might have gone unnoticed, except for a suggestion from our server, Michelle. The firm, white fish was simply dressed with sea salt, lemon, olive oil and yuzukosho, a fermented paste of yuzu peels, chili peppers and salt. One of the highlights of the meal wasn't sushi at all: the lobster toban-yaki came with its own petite griddle over blue-hot flames. Nuggets of sweet lobster meat, drenched in insanely delicious soy butter sauce, are allowed to sizzle until they are golden brown and caramelized.
Michelle also recommended the American tuna roll (spicy tuna and cucumber topped with tuna, avocado, spicy mayonnaise and eel sauce) and the sake salmon sashimi, which was so rich one bite was incomparably satisfying. The perfect, authentic pair for our meal came in the form of Akita Homare sake – floral, light and dry – served in a jaunty little carafe. As a finale, you'll want to have the green tea soufflé. Order it as your last savory dish is being delivered, as it takes time for pastry chef Lisa (Ito's wife – the two met and fell in love while working at MF Buckhead, so it's truly a match made in the kitchen) to coax the delicate confection to rise, its grassy, verdant flavor only faintly sweet. When I return (as I'm sure to do), I'll opt for the omakase experience, effectively giving the chef free rein to choose what he would like to serve. Omakase, translated from Japanese, means "I leave it to you," which is the most advisable tack to take when dining with such an accomplished sushi chef at the helm. As I quickly learned, though, at Umi, there's no such thing as a bad menu selection. So check whatever intimidation you feel at the door, and have fun with this beautifully crafted menu.

[Photos: Yuzu meringue square, mochi ice cream crepe with super fruits. Rock shrimp tempura with spicy creamy sauce. Black cod misoyaki (grilled black cod marinated in miso).]

VISIT UMI AT 3050 PEACHTREE ROAD NE, ATLANTA. 404/841-0040. UMIATLANTA.COM.

DINING GUIDE

[Photo: Parish Foods & Goods]

AMERICAN

ABATTOIR CHOPHOUSE 1170 Howell Mill Road, Atl. 404/892-3335. Fresh whole fowl, fish, beef, pork and other game served in a variety of ways. } starprovisions.com.

ANOTHER BROKEN EGG CAFE 2355 Peachtree Road NE, Peachtree Battle Shopping Center, Atl. 404/254-0219. 4075 Old Milton Pkwy., Alpharetta. 770/837-3440. 4300 Paces Ferry Road, Vinings. 770/384-0012. Southern regional cooking with an edge. } anotherbrokenegg.com.

ARIA 490 E. Paces Ferry Road NE, Atl. 404/233-7673. Buckhead hot spot with creative "slow food" served in a sleek space. p }}} aria-atl.com. ★★★

ATLANTA GRILL 181 Peachtree St. NE, Atl. (2nd floor of The Ritz-Carlton, Atlanta), 404/221-6550. Grilled steaks, chops, seafood and Southern-inspired cuisine are served in a warm, clubby atmosphere. p }}}

BACCHANALIA 1198 Howell Mill Road, Atl. 404/365-0410. Great service and generous portions with a heavenly menu of specialties served in a warehouse-chic setting. p h }}} starprovisions.com. ★★★★

BLUE RIDGE GRILL 1261 West Paces Ferry Road, Atl. 404/233-5030. Signature dishes, from grilled Georgia trout and slow-roasted grouper to iron skillet mussels and hickory-grilled rib eye, are served in the cozy comforts of a mountain lodge, with stone fireplace, log walls and red leather booths. p }}} blueridgegrill.com. ★★★

CANOE …/432-2663. Culinary expertise and natural aesthetics come together for a rich, flavorful experience, with a seasonal menu and inviting interior. p }} canoe-atl.com. ★★★

CAPITAL GRILLE-ATLANTA 255 East Paces Ferry Road, Atl. 404/262-1162. Classic steak house offerings, from chops to fresh seafood, in a relaxed atmosphere that features a sweeping view of Buckhead. p }}} thecapitalgrille.com. ★★

EMPIRE STATE SOUTH 999 Peachtree St., Atl. 404/541-1105. A community restaurant that appeals to a broad range, a la celebrated Athens chef Hugh Acheson, with authentic Southern dishes served in a meat-and-three format. p }} empirestatesouth.com.

4TH & SWIFT 621 North Ave. NE, Atl. 678/904-0160. Enjoy such specialties as crispy brussels sprouts, North Georgia apple salad and sticky toffee pudding in a quaint setting, in the former engine room of the Southern Dairies Co. in the Old Fourth Ward. p }} 4thandswift.com.

FLIP BURGER BOUTIQUE 1587 Howell Mill Road, Atl. 404/352-3547. 3655 Roswell Road NE, Atl. 404/549-3298. Unique menu of burgers, sandwiches, sides and salads served in a contemporary, hip space. } flipburgerboutique.com.
GORDON BIERSCH BREWERY RESTAURANT 3242 Peachtree Road NE, Atl., 404/264-0253; 848 Peachtree St. NE, Atl., 404/870-0805. Hand-crafted beer and made-from-scratch food served in a fun atmosphere. p } gordonbierschrestaurants.com.

GRACE 17.20 5155 Peachtree Pkwy., Ste. 320, Norcross. 678/421-1720. Changing menu of fresh seasonal ingredients in a casually elegant setting. p }} grace1720.com. ★★

HAVEN RESTAURANT AND BAR 1441 Dresden Dr., Ste. 160, Atl. 404/969-0700. Casual neighborhood dining in historic Brookhaven, with a fresh seasonal menu and an impressive wine list. p }} havenrestaurant.com. ★★★

HOBNOB NEIGHBORHOOD TAVERN 1551 Piedmont Ave. NE, Atl. 404/968-2288. Comfort pub cuisine and craft beers in a community-driven establishment in Ansley Park. p } hobnobatlanta.com.

HOLEMAN & FINCH PUBLIC HOUSE 2277 Peachtree Road, Atl. 404/948-1175. Hailed as a British gastropub with a Southern accent, with savvy cocktails and a meaty menu. } holeman-finch.com.

HOUSTON'S 2166 Peachtree Road NW, Atl., 404/351-2442; 3321 Lenox Road, Atl., 404/237-7534; 3539 Northside Pkwy., Atl., 404/262-7130; 3050 Windy Hill Road SE, Atl., 770/563-1180. Lavish portions of fresh American fare, from hickory-grilled burgers to tender, meaty ribs. } houstons.com. ★

JCT. KITCHEN & BAR 1198 Howell Mill Road, Ste. 18, Atl. 404/355-2252. A casual, yet upscale setting to enjoy such specialties as angry mussels, chicken and dumplings, fried chicken, truffle-parmesan fries and Georgia peach fried pies. p } jctkitchen.com.

JOEY D'S OAKROOM 1015 Crown Pointe Pkwy., Atl. 770/512-7063. Upscale steakhouse features choice-aged charbroiled steaks, signature sandwiches, salads, pastas, chicken and fish, plus over 400 brands of spirits. p }} JoeyDsOakRoom.com. ★★

LIVINGSTON RESTAURANT AND BAR 659 Peachtree St., Atl., @ Georgian Terrace Hotel. 404/897-5000. Fresh American cuisine in a classy setting. p }} livingstonatlanta.com.

LOBBY BAR AND BISTRO 361 Seventeenth St., Atl. 404/961-7370.
Seasonal menu with a comfort food edge in a casual atmosphere. p } lobbyattwelve.com.

LOCAL THREE 3290 Northside Pkwy NW, Atl. 404/968-2700. Fresh-from-the-farm seasonal fare, from Georgia Mountain Trout and Grilled Hanger Steak to Springer Mountain Farm Chicken Pot Pie, served in a comfy space. p } localthree.com.

MILTON'S CUISINE & COCKTAILS 800 Mayfield Road, Milton. 770/817-0161. Feast on such Southern specialties as sweet potato and shrimp fritters, fried chicken, pork loin and chef's veggie plate in the charming setting of a restored 150-year-old farmhouse and 1930s cottage. p }} miltonscuisine.com.

MODERN RESTAURANT + BAR 3365 Piedmont Road NE, Atl. 404/554-1100. Innovative culinary style with a heavy emphasis on seafood, from butter-poached lobster to wild Scottish salmon, plus special chef tasting menus with wine pairings. Private dining and outdoor patio available. p }} modernbuckhead.com.

[Photo: Aria. GEORGE SANCHEZ]

MOSAIC 3097 Maple Drive, Atl. 404/846-5722. Neighborhood bistro features modern American cuisine with Mediterranean flavors. p }} mosaicatl.com.

MURPHY'S 997 Virginia Ave., Atl. 404/872-0904. Inventive, fresh seasonal fare, excellent service and basement charm. p } murphysatlanta-restaurant.com.

ONE. MIDTOWN KITCHEN 559 Dutch Valley Road, Atl. 404/892-4111. Inventive atmosphere, food and wine served in a renovated urban warehouse space. p } onemidtownkitchen.com. ★★

PARK 75 75 Fourteenth St. NE, Four Seasons Hotel Atlanta. 404/253-3840. An elegant place to enjoy seasonal and regional favorites, from crispy lobster with shiitake sticky rice and Asian vegetables to barbecue "Kobe" short rib with smoked Gouda grits and truffled potatoes. p }} fourseasons.com. ★★★

QUICK GUIDE: p reservations; h dress restrictions; } entrees $10-20; }} entrees $20-30; }}} entrees $30+. SOUTHERN SEASONS STARS: ★ great; ★★ excellent; ★★★ superb; ★★★★ the best.

PAUL'S RESTAURANT 10 Kings Circle, Atl. 404/231-4113.
Chef Paul Albrecht creates new American cuisine and sushi in an open kitchen, from herb crusted flounder filet and roasted lamb shank to batter fried lobster tail. p }}} greatfoodinc.com. ★★★

PUBLIK DRAFT HOUSE 654 Peachtree St., Atl. 404/885-7505. Great gastropub cuisine, from small bites and salads to burgers and entrees, served in a fun atmosphere. p } publikatl.com.

RATHBUN'S 112 Krog St., Atl. 404/524-8280. New American food served with Southern flair in a swanky space at the Stove Works in Inman Park. p }} rathbunsrestaurant.com. ★★★★

RESTAURANT EUGENE 2277 Peachtree Road, Atl. 404/355-0321. Seasonal cuisine and boutique wine combined with gracious service in a sophisticated spot in the Aramore Building. p }}} restauranteugene.com.

RIVER ROOM Post Riverside Town Square, 4403 Northside Pkwy., Atl. 404/233-5455. New American cuisine served in an elegant and modern European atmosphere. p }}} riverroom.com.

SAGE WOODFIRE TAVERN 11405 Haynes Bridge Road, Alpharetta. 770/569-9199. 4505 Ashford Dunwoody Road, Atl. 770/804-8880. City chic yet casual atmosphere featuring contemporary American cuisine with global influences. p }} sagewoodfiretavern.com.

SALT FACTORY 952 Canton St., Roswell. 770/998-4850. Neighborhood gastropub with exceptional food and drink served in a comfy setting, from soups, salads and appetizers to specialty burgers, pizza, pasta, fish and beef. } saltfactorypub.com. ★★★

SALTYARD 1820 Peachtree Road NW, Atl. 404/382-8088. Diverse selection of seasonal dishes, with signature cocktails and craft beer in spirited setting. p } saltyardatlanta.com.

SEASONS 52 90 Perimeter Center West, Dunwoody, 770/671-0052; Two Buckhead Plaza, 3050 Peachtree Road NW, Atl. 404/846-1552. A seasonally changing menu of fresh food grilled over open wood fires and a by-the-glass wine list in a casually sophisticated setting with live piano music in the wine bar. p }} seasons52.com.

SHULA'S 347 GRILL 3405 Lenox Road NE, Atl., Atlanta Marriott Buckhead Hotel lobby. 404/848-7345.
Signature meals from Hall of Fame football coach Don Shula in a casual chic setting. p } shulas347atlanta.com.

SOUTH CITY KITCHEN 1144 Crescent Ave., Atl., 404/873-7358; 1675 Cumberland Pkwy., Suite 401, Vinings, 770/435-0700. The Old South meets the big city, with contemporary Southern cuisine dished out from the exhibition kitchen. p }} southcitykitchen.com. ★★★

SOUTHERN ART 3315 Peachtree Road NE, Atl., InterContinental Buckhead. 404/946-9070. Southern-inspired cuisine and cocktails in a relaxed atmosphere, with an artisan ham bar, vintage pie table, and sophisticated bar and lounge area. Menu highlights: baked oysters with crispy pork belly, chicken and dumpling soup and Low Country seafood platter. p }} southernart.com.

TAP 1180 Peachtree St., Atl. 404/347-2220. A convivial place with innovative comfort food and extensive draft beer and barrel wine selections. p } tapat1180.com.

TERRACE 176 Peachtree St. NW, Atl., The Ellis Hotel. 678/651-2770. Flavorful farm-to-table dishes, from Georgia mountain trout to Amish chicken breast, served in a chic setting. p } ellishotel.com/terrace.

THE CAFE AT THE RITZ-CARLTON, BUCKHEAD 3434 Peachtree Road, Atl. 404/240-7035. Delightful menu, sunny ambiance and live piano music. Seasonal patio seating. p }}} ritzcarlton.com. ★★

THE SUN DIAL RESTAURANT 210 Peachtree St. NW, Atl., The Westin Peachtree Plaza, 404/589-7506. Offers a 360-degree dining experience, 723 feet above the city, with contemporary cuisine and live jazz. p }}} sundialrestaurant.com.

THREE SHEETS 6017 Sandy Springs Cir., Atl. 404/303-8423. A refreshing escape with cocktails, music and small plates. } threesheetsatlanta.com. ★★★

TRUFFLES CAFE 3345 Lenox Road, Atl. 404/364-9050. Upscale gourmet café with a diverse menu of Low Country dishes, fresh fish, center-cut steaks, soups, salads and sandwiches. p } trufflescafe.com.

TWO URBAN LICKS 820 Ralph McGill Blvd., Atl. 404/522-4622.
Fiery cooking with wood-roasted meats and fish, plus a touch of New Orleans and barbecue, in a chic warehouse. p }} twourbanlicks.com.

VILLAGE TAVERN 11555 Rainwater Dr., Alpharetta. 770/777-6490. Fresh fish, pastas, salads, chicken, steaks and chops in an upscale, casual setting. p }} villagetavern.com.

[Photo: ONE. midtown kitchen]

WATERSHED ON PEACHTREE 1820 Peachtree Road, NW, Atl. 404/809-3561. Southern-inspired menu in farmhouse-chic setting, from fried pimento cheese sandwich to bone-in ribeye with black truffle gravy. p }} watershedrestaurant.com.

WOODFIRE GRILL 1782 Cheshire Bridge Road, Atl. 404/347-9055. Menu follows a farm-to-table philosophy, with specialties like pan-roasted wild striped bass and wood-grilled quail. p }} woodfiregrill.com.

YEAH! BURGER 1168 Howell Mill Road, Suite E. 404/496-4393. 1017 North Highland Ave., Virginia-Highland. 404/437-7845. Organic, eco-friendly burger restaurant offers customizable burgers in a fast-casual, family-friendly format. } yeahburger.com.

[Photo: Bistro Niko]

ASIAN

AJA 3500 Lenox Road, Atl. 404/231-0001. Modern Asian kitchen with sushi, dim sum and entrees served family-style. Red and black walls and dimmed lighting add to the exotic atmosphere. p }} h2sr.com. ★★★

BRAZILIAN

FIRE OF BRAZIL 118 Perimeter Center West, Atl., 770/551-4367. 218 Peachtree St. NW, Atl. 404/525-5255. Marinated slow roasted choice cuts of meat prepared in the centuries-old Brazilian tradition. p }}} fireofbrazil.com.

FOGO DE CHAO 3101 Piedmont Road, Buckhead. 404/266-9988. Delectable cuts of fire-roasted meats, gourmet salads and fresh vegetables, and a variety of side dishes. p }}} fogodechao.com. ★★★

CHINESE

CANTON HOUSE 4825 Buford Hwy., Chamblee. 770/936-9030. Authentic Chinese cuisine in a spacious dining room with efficient, friendly service. } icantonhouse.com. ★★★★

CHOPSTIX 4279 Roswell Road NE, Atl. 404/255-4868. Upscale dining with lively piano bar. p } chopstixatlanta.net. ★★★

P.F. CHANG'S CHINA BISTRO 7925 North Point Pkwy., Alpharetta, 770/992-3070; 500 Ashwood Pkwy., Atl., 770/352-0500; 3333 Buford Dr., Buford, 678/546-9005. Enjoy diced chicken wrapped in lettuce leaves, orange-peel beef with chili peppers, and wok-fried scallops with lemon sauce in a stylish space. p }} pfchangs.com.

THE REAL MANDARIN HOUSE 6263 Roswell Road, Atl. 404/255-5707. Upscale Asian dining with dishes ranging from chicken and beef to seafood and pork. } ★★

CREOLE

MCKINNON'S LOUISIANE RESTAURANT 3209 Maple Dr., Atl. 404/237-1313. Louisiana seafood dishes reflect the delicately refined cooking of New Orleans and the pungent, highly seasoned dishes of the Cajun Bayou. p }} mckinnons.com.

ECLECTIC

SHOUT 14th and Peachtree Road at Colony Square, Atl. 404/846-2000. Dine on tapas or sip a martini on the rooftop lounge at this ultra-hip hotspot. p } h2sr.com.

TWIST 3500 Peachtree Road NE, Atl. 404/869-1191. Creative cuisine, from sushi and seafood to satays and wraps, served in a 300-seat dining room with a centerstage bar. Patio dining available. p } h2sr.com.

AQUA BLUE 1564 Holcomb Bridge Road, Roswell. 770/643-8886. Choose from sushi, seafood, steaks and chops in a soothing setting. p }} aquablueatl.com. ★★

FRENCH

FRENCH AMERICAN BRASSERIE 30 Ivan Allen Jr. Blvd., Atl. 404/266-1440. Feast on French cuisine and American chops in the dining room or enjoy a cocktail on the canopied rooftop terrace overlooking the city skyline. p }} fabatlanta.com. ★★★★

LA PETITE MAISON 6510 Roswell Road, Sandy Springs. 404/303-6600. French bistro, serving everything from filet mignon to grilled salmon, in a charming setting. } lapetitemaisonbistro.com. ★★

NIKOLAI'S ROOF 255 Courtland St., Atl. 404/221-6362. Fantastic fare in elegant surroundings with attentive service and spectacular skyline views. p }}} nikolaisroof.com. ★★★
★★ JOLI KOBE BAKERY & BISTRO 5600 Roswell Road NE, Atl., 404/843-3257; 1545 QUICK GUIDE p reservations h dress restrictions } entrees $10-20 }} entrees $20-30 }}} entrees $30+ SOUTHERN SEASONS STARS ★ great ★★ excellent ★★★ superb ★★★★ the best | 123 FUSION FRENCH SOUTHERN SEASONS MAGAZINE Pointe. 404/888-8709. Italian cooking with a contemporary twist, in a relaxed atmosphere. p }} lapietracucina.com. MAGGIANO’S LITTLE ITALY 3368 Peachtree Road, Atl., 404/816-9650; 4400 Ashford Dunwoody Road, Atl., 770/8043313. Divine dining in a nostalgic setting reminiscent of pre-World War II Little Italy. p } maggianos.com. MEDICI 2450 Galleria Pkwy., Atl., Renaissance Waverly Hotel. 770/953-4500. Mediterranean-inspired Tuscan grill with herb-rubbed prime steaks, hand-crafted pastas and market-fresh seafood. p }} renaissancewaverly. p }} portofinobistro.com. PRICCI 500 Pharr Road, Atl. 404/2372941. Creative menu, dramatic interior and friendly service. Enjoy wood-fired pizza, tortelli pasta, beef short rib ravioli and roasted Mediterranean sea bass. p h }} buckheadrestaurants.com. ★★★★ SOTTO SOTTO 313 N. Highland Ave. NE, Atl. 404/523-6678. Italian dishes served with a creative twist in a revived brick storefront. p }} sottosottorestaurant.com. SUGO 408 S. Atlanta St., Roswell, 770/6419131; 625 W. Crossville Road, Roswell, 770/817-4230; 10305 Medlock Bridge Road, Duluth, 770/817-8000. Authentic cuisine served with gracious hospitality, from Mediterranean mussels to Greek pizza. p } sugorestaurant.com. ★★★ TAVERNA FIORENTINA 3324 Cobb Pkwy., Atl. 770/272-9825. Tuscan bistro presents authentic Florentine dishes and contemporary classics in an intimate dining room. p }} tavernafiorentina.com.. VENI VIDI VICI 41 Fourteenth St., Atl. Flip Burger Boutique Peachtree St. NE, Atl., 404/870-0643. Great neighborhood spot for coffee and dessert, Sunday brunch or a meal, from almond chicken curry salad to potato crusted salmon. p } jolikobe.com. 
MARKET W Atlanta-Buckhead, Atl., 3377 Peachtree Road NE. 404/523-3600. Chef JeanGeorges Vongerichten reinvents classic dishes with an eclectic flair, from Maine lobster with crispy potatoes and spicy aioli to bacon wrapped shrimp with avocado and passion fruit mustard. p }} marketbuckhead.com. 10 DEGREES SOUTH 4183 Roswell Road, Atl. 404/705-8870. South African restaurant offers a cultural fusion of cuisine, from calamari and lamb chops to sosaties and chicken curry, in lively setting. p }} 10degreessouth.com. Italian cuisine, from homemade pastas and pizzas to grilled dishes, served in a charming setting, with an expansive wine list. p }} baraondaatlanta.com. DAVIO’S NORTHERN ITALIAN STEAKHOUSE 3500 Peachtree Road NE, Atl. 404/844-4810. Simple, regional Italian foods with a focus on the grill, from aged steaks to unique pasta creations and signature veal chop. p }} davios, Suite 15, Atl. 404/892-1414. Fresh seasonal cuisine is created with country French, Mediterranean and Italian influences. p }} starprovisions.com. IL LOCALINO 467 N. Highland Ave., Atl. 404/222-0650. Flavorful food in a fun setting, with cozy dimensions, eclectic decor and warm hospitality. p }} localino.info. ★★★★ LA GROTTA 2637 Peachtree Road, Atl, 404/231-1368; 4355 Ashford Dunwoody Road NE, Dunwoody, 770/395-9925. Enjoy a three-course dinner in an intimate place overlooking a beautiful garden. p h }} lagrottaatlanta.com. ★★★★ LA PIETRA CUCINA 1545 Peachtree St. NE (Beverly Road), Atl., One Peachtree GREEK KYMA 3085 Piedmont Road, Atl. 404/2620 124 404/875-8424. Heavenly cuisine, extensive wine list, attentive service and warm ambience. Specialties include veal lasagne and pappardelle with pulled rotisserie duck. p h }} buckheadrestaurants.com. ★★★ JAPANESE KOBE STEAKS 5600 Roswell Road, Sandy Springs. 404/256-0810. Hibachi cooking in a fun atmosphere, where chefs prepare meals at the table. }} kobesteaks.net. ★★ MO MO YA 3861 Roswell Road, Atl. 404/261-3. 
Dine on some of the freshest, most authentic sushi in the city in intimate booths. } sushihuku.com. Veni Vidi Vici PERSIAN RUMI’S KITCHEN 6152 Roswell Road, Atl. 404/477-2100. Chef Ali Mesghali’s fresh Persian dishes, from kabobs and dolmeh to fresh-baked flat bread, served in an intimate dining room with attentive hospitality. } rumisrestaurant.com. COAST SEAFOOD AND RAW BAR 111 West Paces Ferry Road, Atl. 404/869-0777. Fresh seafood and island cocktails in a casual setting, with signature seafood boil, fresh catch entrees and a variety of raw or steamed oysters, clams and mussels. p } h2sr.com. GOLDFISH 4400 Ashford Dunwoody Road, Perimeter Mall. 770/671-0100. Seafood, sushi and steaks in a spectacular setting that features a 600-gallon saltwater aquarium and live music. p }} h2sr.com. ★★★ LURE 1106 Crescent Ave. NE, Atl. 404/881-1106. Contemporary fish house serving only the freshest ingredients delivered daily, from smoked seafood platter to fried oyster slider. p }} lure-atlanta.com. RAY’S IN THE CITY 240 Peachtree St., Atl. 404/524-9224. Enjoy a selection of the freshest seafood, made-to-order sushi and MEDITERRANEAN ECCO 40 Seventh St., Atl. 404/347-9555. A bold approach to seasonal European cuisine, from paninis, pastas and pizza to fig-glazed lamb loin, all served in a warm, welcoming setting. p }} ecco-atlanta.com. ★★★ MILAN MEDITERRANEAN BISTRO & GRILL 3377 Peachtree Road, Atl., Crowne Plaza. 678/553-1900. Mediterranean dining in a casually elegant setting, from mahi mahi with port-glazed figs and grilled salmon romesco to filet of beef Monte Carlo. p }} SEAFOOD MOROCCAN IMPERIAL FEZ MOROCCAN 2285 Peachtree Road, Atl. 404/351-0870. An oasis of good food and entertainment with traditional cuisine including fresh legumes, meats and fish. p }}} imperialfez.com.
ATLANTA FISH MARKET 265 Pharr Road, Atl. 404/262-3165. Southeast’s largest selection of fresh seafood offered in a neighborhood setting. Specialties include Hong Kong sea bass, cashew crusted swordfish and blackened mahi mahi. p h }} buckheadrestaurants.com. ★★★★ ATLANTIC SEAFOOD COMPANY 2345 Mansell Road, Alpharetta. 770/640-0488. Contemporary atmosphere showcases modern American seafood flown in fresh daily. p }}} atlanticseafoodco.com. C&S SEAFOOD AND OYSTER BAR 3240 Cobb Pkwy., Atl. 770/272-0999. Fresh seafood, a well-stocked raw bar and classic prime steaks in an elegant setting, with classic cocktails. p }} candsoysterbar.com. NEW ORLEANS PARISH: FOODS & GOODS 240 North Highland Ave., Atl. 404/681-4434. New Orleans-inspired, bi-level restaurant and market in the beautifully restored 1890s Atlanta Pipe and Foundry Company terminal building. p } PARISHatl.com. hand-cut steaks, in a casual yet elegant setting. p }} raysinthecity.com. RAY’S ON THE RIVER 6700 Powers Ferry Road, Atl. 770/955-1187. A palate-pleasing menu, an award-winning wine list and a romantic view of the Chattahoochee assure a delightful dining experience. p h }} raysontheriver.com. ★★★ SEABASS KITCHEN 6152 Roswell Road NE, Atl. 404/705-8880. A Mediterranean-flavored menu of delicious dishes, with market-fresh seafood, from Red Snapper to Black Sea Bass, as well as certified prime beef and braised lamb shank, served in an upscale casual setting with exceptional service. p }} seabasskitchen.com. THE OPTIMIST 914 Howell Mill Road, Atl. 404/477-6260. Upscale seafood with playful flavor combinations served in a beautiful space, with an experienced staff, well-rounded wine list and upbeat vibe. p }} theoptimistrestaurant.com. Lure CANTINA TAQUERIA & TEQUILA BAR 3280 Peachtree Road, Atl., Terminus 100. 404/892-9292. Mexican cuisine with housemade tortilla chips and salsa and specialties ranging from stewed pork with hominy to fish tacos and enchiladas. p } h2sr.com.
NOCHE 1000 Virginia Ave., Atl. 404/815-9155. 705 Town Blvd., Atl. 404/364-9448. 2580 Paces Ferry Road, Atl. 770/432-3277. 3719 Old Alabama Road, Johns Creek. 770/777-9555. Bold Southwestern cuisine with a hint of seafood and game, and a high-energy bar. p } h2sr.com. SOUTHWESTERN ALMA COCINA 191 Peachtree St. NE, Atl. 404/968-9662. Dine on green chorizo tostadas, bay scallop ceviche and braised goat huaraches in a sophisticated and spirited venue. p } alma-atlanta.com. STEAKHOUSES BLACKSTONE 4686 S. Atlanta Road, Smyrna. 404/794-6100. Top-quality steaks, fresh seafood, award-winning wine list and great service, with an ambience suited for upscale dining and after-dinner cocktails. p }} blackstoneatlanta.com. ★★★ BLT STEAK 45 Ivan Allen Jr. Blvd., Atl., filet mignon, batter-fried lobster tail and lump crab cake, are served on the upper level Chops steakhouse and lower-level Lobster Bar. p h }}} buckheadrestaurants.com. ★★★★ Cantina HAL’S 30 Old Ivy Road, Atl. 404/261-0025. Award-winning steak prepared over an open flame grill, plus fresh seafood, pasta, veal, lamb and fish, soups, salads and sashimi, as well as a list of 200 wines. p }} kevinrathbunste; 3379 Peachtree Road, Atl., 404/816-6535. Generous portions of USDA prime aged beef, as well as fresh fish, with classic sides ranging from creamed spinach to cheese mashed potatoes. p h }}} newyorkprime.com. ★★★ PRIME 3393 Peachtree Road NE, Atl., Lenox Square. 404/812-0555. Superior prime-aged beef, sushi bar and seafood offered in a casually chic setting. p } h2sr.com. ★★★ RAY’S ON THE CREEK 1700 Mansell Road, Alpharetta. 770/649-0064. North Fulton’s award-winning steakhouse delivers with prime steaks, fresh seafood and fine wines. p h }}} raysrestaurants.com. RUTH’S CHRIS STEAKHOUSE 5788 Roswell Road NW, Sandy Springs, 404/255-0035; 267 Marietta St., Embassy Suites Hotel (Centennial Park), Atl., 404/223-6500; 3285 Peachtree Road NE, Embassy Suites Buckhead, Atl., 404/365-0660.
Revered by steak connoisseurs around the globe for its USDA prime, aged Midwestern corn-fed beef, extraordinary STRIP 245 Eighteenth St., Atl. 404/385-2005. Great steak and sushi with multi-level dining, lounge and patios in a super hip setting, with nightly DJ and open air rooftop deck. p }} h2sr.com. THE PALM 3391 Peachtree Road, Atl., Westin Hotel. 404/814-1955. Prime cuts of beef and jumbo lobsters are served in a casual setting, with a caricature gallery of famous faces. p }}} thepalm.com. ★★★ PETER VITALE Park 75 and Thai fusion dishes with an artistic flair, reminiscent of the grand style of the ’40s and ’50s. p h }}} nanfinedining.com. ★★ RICE 1104 Canton St., Roswell, 770/640-0788; 1155 Hammond Dr., Sandy Springs, 770/817-9800. Grilled New Zealand lamb, Atlantic salmon, pad Thai and a variety of authentic Thai dishes. p } goforthai.com. TAMARIND SEED 1197 Peachtree St. NE, Ste. 110, Atlanta. 404/873-4888. Savor authentic Thai, fresh curry and herb spices, meat, seafood and vegetables in an upscale setting, with specialties such as roasted duck breast, braised lamb tenderloin and Chilean sea bass. p }}} tamarindseed.com. THAI HUNAN GOURMET 6070 Sandy Springs Circle NE, Atl. 404/303-8888. Enjoy a variety of authentic Thai and Chinese cuisine in a relaxing setting. p } hunangourmetrestaurant.com. ★★ NAN THAI FINE DINING 1350 Spring St. NW, Atl. 404/870-9933. Rich, tasty Thai treats For a heavenly cup of hot chocolate, swirl a Peppermint Hot Chocolate on a Stick into a mug of steaming milk or water. Handmade in the San Francisco kitchen of Ticket Chocolate, the block of fine Belgian milk chocolate is paired with an old fashioned peppermint stick for a cafe worthy drink. theticketkitchen.com.
5 Almost too cute to eat, these festive and fun Holiday Ladybugs from John & Kira’s delight in two sensational flavors: delicate spiced caramel chocolates (green) and almond and hazelnut praline chocolates (red), beautifully bundled in a red boutique box. johnandkiras.com. sweet 5 Indulge in a deliciously rich journey of flavor with Godiva’s Cake Truffle Flight, featuring six select truffles: Birthday Cake, Pineapple Hummingbird Cake, Cheesecake, Red Velvet Cake, Chocolate Lava Cake and Lemon Chiffon Cake. godiva.com. 3 New England chocolatier Harbor Sweets teamed up with food veteran Lora Brody to create the divine Salt & Ayre line, featuring truffles in Chai, Café au Lait, Hazelnut and Espresso flavors, and dark chocolate-covered sea salt pieces filled with Caramel, Crystallized Ginger and Almond Buttercrunch. harborsweets.com.
http://issuu.com/southernseasons/docs/holiday-winter13
Kickstarter: Pinball Arcade: Star Trek The Next Generation (project update feed)

2015-05-10: Pinball Arcade coming back to 360! This post is for backers only. Please visit Kickstarter.com and log in to read.

2014-09-13: Addams Family Kickstarter! The Addams Family Kickstarter is now a reality! We wouldn't be able to do these big license tables without the help of our amazing fans. We are going to need your help to bring this table to the Pinball Arcade Collection. Head on over to the Kickstarter page and check it out!

2013-11-04: Pinball Arcade Now Available On Steam! Hello Backers, we have some great news for those of you who chose PC/Steam as your platform of choice! The Steam version of Pinball Arcade has just been released, and your long-awaited tables have already been unlocked. 1. If you don't already have Steam on your computer, you can download it from the Steam website. 2. Once installed, you will need to create a Steam account (this is different than your FarSight/Pinball Arcade account). 3. Search the store for Pinball Arcade, then download and install. If you have any concerns at all, feel free to email us at support@pinballarcade.com. Thanks everyone for helping us bring this amazing piece of pinball history to a PC near you! Enjoy!

2013-08-28: Rosa Shelton. Hello everyone, it is with regret that I have left FarSight Studios.
You will be very well taken care of by Mike Lindsey. Believe it or not, I will miss y'all (even if some of us "argued"). Keep flippin'. Warm Regards, Rosa

2013-08-24: PC Version. We wanted to update you that PC/Steam beta testing has begun. The PC version should be out in less than a month. For those of you who have been patient enough to wait, this is exciting news.

2013-08-15: Exclusive STTNG Tournament Entry Has Begun. Hello everyone, just a reminder that the Star Trek: The Next Generation tournament has begun for mobile device users and PSN. If you're at that tier level, beginning at $75, you are granted entry into this tournament. The tournament ends on August 29. We wish you all luck. If you have questions about entering the tournament, please contact me at rosa@farsightstudios.com. We appreciate your support and, again, good luck. As with the Twilight Zone, the top 20% of winners will receive Season Pro Pack One, which you can gift to family and/or friends if you already have it! The ultimate winners of both tournaments will receive a signed, framed translite for STTNG.

2013-07-05: Help us Kickstart the incredible Terminator 2: Judgement Day table! Hello Star Trek: The Next Generation backers! We hope you're enjoying seeing the Star Trek table playing in all of its digital glory. We are very thankful for your support! We'd like to let you know about a new Kickstarter project we've created to fund the license costs for the incredible Terminator 2: Judgement Day table. It's one of our favorite tables ever!
We have a unique opportunity to secure the licenses we need to preserve this table in the Pinball Arcade, but we must act now; if we don't, the opportunity will be lost and it might be quite some time (if ever!) before we'd be able to put this together again. We've created a video explaining the new project; you can watch it by following the link on the project page. We'd love to count you as a Backer again!

2013-05-09: Star Trek. Hello everyone, it seems in going through Kickstarter emails that some of you have not been answered. Please email kickstarter@farsightstudios.com with any unresolved issues. We'll do our best to take care of you as soon as possible. You've been more than patient!

2013-02-25: BIG THANKS TO EVERYONE!!! We are thrilled to announce that this wonderful piece of pinball history is now available in the Pinball Arcade for iOS, Mac, Android, and Amazon worldwide! If you're a backer, you should have already received an email from us asking which platform(s) you want your rewards on. To claim your free Star Trek table on iOS, Mac, Amazon, and Android, simply log out of your Pinball Arcade account, close the app, and log back in. Easy as that! We have submitted Star Trek: The Next Generation on the PlayStation 3 and PS Vita to Sony in both North America and Europe; the table should be available mid-March. If you are a PlayStation 3 or PS Vita backer, we will be emailing you your PSN codes just as soon as we get them from Sony. Due to the bankruptcy of our Xbox 360 publisher last year, we are temporarily unable to release new tables on that platform.
We are currently seeking bankruptcy court approval to transfer the game to a different publisher, and we will release Star Trek: The Next Generation on the Xbox 360 as soon as that happens. Thanks again for backing us and making this fantastic table a reality!

2012-11-02: UPDATE #5! Thanks to all our fantastic supporters and backers! Because of your outstanding efforts, we can post this video showing you our progress on The Twilight Zone table. It is coming along very nicely and without a hitch! Enjoy!
https://www.kickstarter.com/projects/1067367405/pinball-arcade-star-trek-the-next-generation/posts.atom
Define a main function, then flesh out your main function as below. The purpose of the function is to make a Circle and then have it move randomly around the screen until the user clicks in the window. You will need to make a Circle object, and can look up the syntax for it in the documentation.

def main():
    # assign to win a GraphWin object made with title, width, height, and the value False
    # (note the final parameter is not one we have used before. It makes it so the
    # window doesn't attempt to redraw after every change).
    # call the move method on the object referred to by thing, passing in dx and dy
    # tell the window to update its display (call win.update())
    # if win.checkMouse() is not None
        # break out of the while loop
    # close the window

Set up a call to your main function inside the conditional that runs only if the file is run on the command line. Always put these two lines at the bottom of your code.

if __name__ == "__main__":
    main()

Run your test.py program. The shape should bounce around the screen in Brownian motion, which is another name for random movement. For the function that creates your list of shapes:

    # use the function random.choice with shapes as its input
    # return the shapes list

Note the structure of the above function. First, create an empty list. Second, create all of the basic shape objects needed to create the complex shape. For each of the basic shape objects, append it to the list. Third, return the list. All of the primitive objects support the draw, move, and undraw methods; write list versions of these in your .py file for draw, move, and undraw. The skeletons are below.
def draw( objlist, win ):
    """ Draw all of the objects in objlist in the window (win) """
    # for each thing in objlist
        # call the draw method on thing with win as the argument

def move( objlist, dx, dy ):
    """ Move all of the objects in objlist by dx in the x-direction and dy in the y-direction """
    # for each item in objlist
        # call the move method on item with dx and dy as arguments

def undraw( objlist ):
    """ Undraw all of the objects in objlist """
    # for each thing in objlist
        # call the undraw method on thing

When you are done with that, go back to your test file to continue the project.
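Filled in, the three skeletons above are one-line loops that just forward to each object's own method. A sketch, using a minimal stand-in class in place of the graphics library's shapes (FakeShape and its attribute names are illustrative only, not part of the lab):

```python
def draw(objlist, win):
    """Draw all of the objects in objlist in the window (win)."""
    for thing in objlist:
        thing.draw(win)

def move(objlist, dx, dy):
    """Move all of the objects in objlist by dx and dy."""
    for item in objlist:
        item.move(dx, dy)

def undraw(objlist):
    """Undraw all of the objects in objlist."""
    for thing in objlist:
        thing.undraw()

# Minimal stand-in so the helpers can be exercised without a display window;
# a real run would use graphics-library objects (Circle, Rectangle, ...) instead.
class FakeShape:
    def __init__(self):
        self.x, self.y, self.drawn = 0, 0, False
    def draw(self, win):
        self.drawn = True
    def move(self, dx, dy):
        self.x += dx
        self.y += dy
    def undraw(self):
        self.drawn = False
```

Because each helper only calls the shape's own method, the same three functions work unchanged for any mix of primitive objects in the list.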
http://cs.colby.edu/courses/F17/cs151/labs/lab06/
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++11 status. Section: 20.3.1.2.3 [unique.ptr.dltr.dflt1] Status: C++11 Submitter: Howard Hinnant Opened: 2008-12-07 Last modified: 2016-02-10 Priority: Not Prioritized View all issues with C++11 status.

Discussion:

Consider:

derived* p = new derived[3];
std::default_delete<base[]> d;
d(p); // should fail

Currently the marked line is a run-time failure. We can make it a compile-time failure by "poisoning" operator()(U*).

[ Post Summit: ] Recommend Review.

[ Batavia (2009-05): ] We agree with the proposed resolution. Move to Tentatively Ready.

Proposed resolution:

Add to 20.3.1.2.3 [unique.ptr.dltr.dflt1]:

namespace std {
  template <class T> struct default_delete<T[]> {
    void operator()(T*) const;
    template <class U> void operator()(U*) const = delete;
  };
}
https://cplusplus.github.io/LWG/issue938
On Mon, Sep 26, 2011 at 11:45 AM, Mike Meyer <mwm at mired.org> wrote: > When this conversation started discussing namespaces, it occurred to me that > we've had a number of suggestions for "statement-local" namespaces shot > down. It seems that they'd solve this case as well as the intended case. I > don't expect this to be acceptable, but since it solves part of this problem > as well as dealing with the issues for which it was originally created, I > thought I'd point it out. See PEP 3150 - I've written fairly extensively on the topic of statement local namespaces, including their potential application to this use case :) There are some fairly significant problems with the idea, which are articulated in the PEP. If you'd like to discuss it further, please start a new thread so this one can stay focused on the more limited 'nonlocal' suggestion. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
https://mail.python.org/pipermail/python-ideas/2011-September/011827.html
28 August 2012 07:15 [Source: ICIS news] SINGAPORE (ICIS)--China’s Sinopec Henan Oilfield is expected to restart its 50,000 tonne/year Group I base oils plant at Nanyang in Henan province, in north China, on 2 September, a company source said on Tuesday. The company shut the plant on 26 August for maintenance, the source said, adding that some stocks will continue to be supplied in the local market. However, the shutdown will not have a significant impact on the Group I base oils markets in central and north China. Prices of Group I base oils are expected to firm in September as a result of rising demand from lubricant plants during the consumption peak, the sources said.
http://www.icis.com/Articles/2012/08/28/9590198/chinas-sinopec-henan-oilfield-to-restart-group-i-base-oils-unit.html
I am using gxt demo sample code of GroupingView but with the editable Grid. On editing a Row, the textfield jumps up by one level. attached the... Hi, Cell tabbing is not working properly after finishing the edition of a cell. If you press tab after the edition of a cell the cursor will NOT go... We noticed that below error often throwing in the server log, but not sure about the root cause of the element. ... Hi, I have a mix-charts with Bar, Scatter with Line series. Whenever we hide Bar-series its hide the tool tip for that series. But whenever we hide... Hi, I'm re-configuring a grid which uses a live grid view with a new store and column model. If I reconfigure the grid when the scroller is on top... When Placed the Mouse over on any menu items and do Mouse scroll Up/Down , menu items in Menu scroll loosing its focus. Steps to follow: 1. Click... Thanks Collin, I installed the library manually with mvn install:install-file... and it worked. However, I need to ask for your help with... In the method "hide(TextButton buttonPressed)" the beforeHideEvent is called before hideButton is set. This is causing getHideButton() to return null... Required Information Version(s) of Ext GWT «Ext GWT 3.0 Release Candidate» Browser versions and OS (and desktop environment, if applicable) ... Hi everybody, I am working on a big application which will use GXT. l do not know how to change/customize the month name (eg. December to Dec,... Hi! I create button as image in GXT. Then, I create menu and add this menu in that button. So, sometime when I click on button, the menu opens. But... Hi, guys. I found that ProgressBar doesn't update first time it's used. It doesn't repaint upon updateText(), which is the strongest repaint... Version(s) of Ext GWT Ext GWT 3.0.0b Browser versions and OS (and desktop environment, if applicable) Internet Explorer 8, Windows XP FF17...
Hello, The Window component that I use has Resizable flag set to false but the resize cursor on the edges still appear. Is there any way to change... I've been trying to get the BorderLayoutContainer to work following the examples. It seems that it simply ignores the layoutData I pass. I've even... create a check box tree (tri state) with ~10K leafs, 4-5 levels of depth ~20 elements per level evenly distributed (till we have 10K leafs) ... Hello. Here is my code public class FramedPanelTest implements EntryPoint { @Override public void onModuleLoad() { Version(s) GXT 3.0.1 Browser versions and OS Firefox, Windows XP Virtual Machine No Hello all, When you create a new BarSeries and put only one item at store, that bar will fill all chart space. This happens with bar or column. I... Good day. Source code: ... List<ColumnConfig<MerchantNotification, ?>> ccs = new LinkedList<ColumnConfig<MerchantNotification, ?>>(); ...
http://www.sencha.com/forum/forumdisplay.php?84-Sencha-GXT-Bugs/page22&order=desc
How To Install and Configure OpenShift Origin on CentOS 6.5

Introduction

OpenShift is Red Hat's Platform-as-a-Service (PaaS) that allows developers to quickly develop, host, and scale applications in a cloud environment. OpenShift Origin is the open source upstream of OpenShift. It has built-in support for a variety of languages, runtimes, and data layers, including Java EE6, Ruby, PHP, Python, Perl, MongoDB, MySQL, and PostgreSQL. You can add new runtimes and frameworks to OpenShift with custom or community cartridges. Easily scaling your web applications is a primary reason to run them on OpenShift Origin.

NOTE: Throughout this tutorial, user input will be highlighted in red.

How OpenShift Works

OpenShift Roles

There are four roles used on the OpenShift platform. While it is not critical that you know what each role does for this tutorial, if you wish to deploy a cluster of servers to offer high availability, load balancing, etc., you will need to understand the functions these roles provide. In our tutorial, we'll be configuring a single server to run all of these roles.

Broker
The Broker role consists of the OpenShift Broker RPMs and an MCollective client. The Broker serves as a central hub of the OpenShift deployment, and provides a web interface where users can manage their hosted applications.

DBServer
This role consists of the MongoDB database that the Broker uses to track users and applications.

MsgServer
The MsgServer role includes the ActiveMQ server plus an MCollective client.

Node
The Node role is assigned to any host that will actually be used to store and serve OpenShift-hosted applications. oo-install supports the deployment of Nodes as part of an initial installation and as part of a workflow to add a new Node to an existing OpenShift deployment.

The OpenShift Architecture: In this tutorial, we will set up our first OpenShift host running all of the roles required by OpenShift.
How it works from a client's perspective

- A client wants to visit the site app-owner.apps.example.com.
- The client's browser requests the DNS record for the domain.
- The DNS server responds with the IP address of a Node hosting the application.
- The client's browser sends a GET request to the Node.
- The Node maps the request to the desired application.
- The application itself responds to the request directly.

How did the DNS server know which Node is running the application? The developer connects to a Broker to create and manage an application. When the owner modifies an application, the Broker sends a message to the DNS server with the new information. This information includes the domain(s) being used for the application and which Nodes are hosting the application. Because of this automation, it is a requirement for OpenShift to have control over the DNS zone of the domain or subdomain used for apps.

OpenShift uses the BIND DNS server. If you have an existing BIND DNS server, you can configure OpenShift to work with it. However, in this tutorial, we will cover the process of using a new DNS server configured automatically by the OpenShift Origin installer. If you would prefer to use an existing BIND DNS server, you can read instructions for setting up DNS in the OpenShift Origin Comprehensive Deployment Guide.

DNS Configuration

For the remainder of this tutorial we will be using the following domains. Substitute these with your own, and feel free to use a personalized naming convention.

- example-dns.com - used for our nameservers
  - ns1.example-dns.com
  - ns2.example-dns.com
- example.com
  - apps.example.com - used for OpenShift applications
  - openshift.example.com - used for OpenShift hosts
    - master.openshift.example.com - the hostname of our Droplet

Prerequisites

Droplet Requirements

- 1GB Droplet or larger

The installation of OpenShift is fairly resource-intensive, and some packages can exceed 512 MB of RAM usage. You should use a 1 GB or larger Droplet.
If you have any issues registering the cartridges at the end of the installer, chances are some packages failed to install due to the lack of memory. This can be confirmed by examining the installation logs. To check the installation log:

cat /tmp/openshift-deploy.log

Supported Operating Systems

- CentOS 6.5 64-bit (standard DigitalOcean image)

OpenShift Origin 4 is supported on 64-bit versions of Red Hat Enterprise Linux (RHEL) 6.4 or higher and CentOS 6.4 or higher. It is not supported on Fedora, RHEL 7.x, or CentOS 7.x. A minimal installation of RHEL / CentOS is recommended to avoid package incompatibilities with OpenShift. This tutorial will use the standard DigitalOcean CentOS 6.5 x64 image on a 1 GB Droplet.

Installer Dependencies

The following utilities are required by the OpenShift Origin installer. This tutorial will show you how to install Ruby. The other packages are already installed by default with the DigitalOcean CentOS 6.5 image.

- curl
- ruby - 1.8.7 or greater
- ssh - if you are deploying to systems other than the installer host

Root Access

The rest of this tutorial will assume you are connected to your server with the root user account, or a user account with sudo privileges. To enter the root shell from another account:

sudo su

Step One — Install Updates

Before proceeding, it is always a good idea to make sure you have the latest updates installed. To install updates:

yum update

Step Two — Install Preferred Text Editor

You can use your favorite text editor throughout this tutorial; however, the examples will use Nano. Install Nano with:

yum install nano

When you are done editing a file in Nano, press Ctrl+X, press Y to save, and press Enter to overwrite the existing file.

Step Three — Install Ruby

Ruby is not installed by default on a minimal CentOS 6.5 installation. To install Ruby:

yum install ruby

Step Four — Set Your Hostname

We need to make sure our hostname is configured correctly and resolves to our local machine.
If this is configured incorrectly, Puppet will not be able to deploy some required services. To check the current hostname:

hostname

It should show the URL you want to use for the OpenShift control panel. In our case, this is master.openshift.example.com.

Open the file /etc/sysconfig/network:

nano /etc/sysconfig/network

Edit the file to suit your needs:

NETWORKING=yes
HOSTNAME=master.openshift.example.com

Upon next reboot, your hostname will be updated. We will reboot after a few more steps.

Step Five — Make Hostname Resolve to localhost

This will ensure that Puppet can resolve the hostname correctly during the installation. Next, open the file /etc/hosts:

nano /etc/hosts

Add your hostname to the 127.0.0.1 line:

127.0.0.1 master.openshift.example.com localhost localhost.localdomain
::1 localhost6 localhost6.localdomain6

Step Six — Enable SELinux

SELinux (Security-Enhanced Linux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including United States Department of Defense-style mandatory access controls (MAC). This kernel module is a requirement for OpenShift to isolate applications securely. For more information on SELinux, and advanced configurations that should be done before using OpenShift in a production environment, please see the series linked below. While the series is based on CentOS 7, the principles and deployment process are the same.

For the purposes of this tutorial, we will enable SELinux by setting it to enforcing mode. Open /etc/sysconfig/selinux:

nano /etc/sysconfig/selinux

Change SELinux to enforcing:

SELINUX=enforcing
This configuration will take a few minutes, and the installation itself can take up to an hour, although you don't have to babysit the server for that part. To start the installer: sh <(curl -s) (Optional) Installation Options The command line options are useful for larger and Enterprise deployments. If you have predefined configuration files or have an existing Puppet installation, you can use these options to speed up the installation process. Since this is our first deployment on a single server, we will not be using any of the options listed below. However, it's useful to know what functions these options provide if you need to scale your Openshift deployment in the future. For more information you can check the official documentation. -a --advanced-mode Enable access to message server and db server customization -c --config-file FILEPATH The path to an alternate config file -w --workflow WORKFLOW_ID The installer workflow for unattended deployment --force Ignore workflow warnings and automatically install missing RPMs -l --list-workflows List the workflow IDs for use with unattended deployment -e --enterprise-mode Show OpenShift Enterprise options (ignored in unattended mode) -s --subscription-type TYPE The software source for installation packages -u --username USERNAME Login username -p --password PASSWORD Login password --use-existing-puppet For Origin; do not attempt to install the Puppet module -d --debug Enable debugging messages Step Eight — Answer Installer Questions OpenShift Origin uses an interactive installation process. There are quite a few questions to answer, so pay attention! The questions are shown below, with the user input in red. Welcome to OpenShift. This installer will guide you through a basic system deployment, based on one of the scenarios below. Select from the following installation scenarios. You can also type '?' for Help or 'q' to Quit: 1. Install OpenShift Origin 2. Add a Node to an OpenShift Origin deployment 3. 
Generate a Puppet Configuration File
Type a selection and press <return>: 1

The installer will prompt you for an installation scenario. Enter 1 and press Enter.

DNS — Install a new DNS Server

----------------------------------------------------------------------
DNS Configuration
----------------------------------------------------------------------

First off, we will configure some DNS information for this system. Do you want me to install a new DNS server for OpenShift-hosted applications, or do you want this system to use an existing DNS server? (Answer 'yes' to have me install a DNS server.) (y/n/q/?) y

For this tutorial we want to deploy a new DNS server, so enter y and press Enter.

DNS — Application Domain

All of your hosted applications will have a DNS name of the form:
<app_name>-<owner_namespace>.<all_applications_domain>
What domain name should be used for all the hosted apps in your OpenShift system? |example.com| apps.example.com

Enter the domain you would like to use for your hosted applications, which in this example is apps.example.com, and press Enter.

DNS — OpenShift Hosts Domain

|example.com| openshift.example.com

Enter the domain you would like to use for your OpenShift Hosts, which in this example is openshift.example.com, and press Enter.

DNS — FQDN of the Name Server

Hostname (the FQDN that other OpenShift hosts will use to connect to the host that you are describing): master.openshift.example.com

Since we are hosting the DNS on the same Droplet, we will use this machine's Fully Qualified Domain Name. Enter your host's FQDN, which in this example is master.openshift.example.com, and press Enter.

DNS — SSH Host Name

Hostname / IP address for SSH access to master.openshift.example.com from the host where you are running oo-install. You can say 'localhost' if you are running oo-install from the system that you are describing: |master.openshift.example.com| localhost

Using current user (root) for local installation.
This is the hostname used to perform the installation of OpenShift. Since we are installing to the same Droplet running the installer, we can use localhost. Enter localhost, and press Enter.

DNS — IP Address Configuration

If you have private networking enabled, you will need to use the WAN interface/IP address for any host you wish to assign the Node role. Since we are only installing to a single host in this tutorial, make sure you use eth0 as your interface for this host. In a large setup with multiple Brokers and DBServers, you would use the private networking interface for those hosts only. Attempting to use the private interface on a Node will cause an IP address error during deployment.

Detected IP address 104.131.174.112 at interface eth0 for this host. Do you want Nodes to use this IP information to reach this host? (y/n/q/?) y

Normally, the BIND DNS server that is installed on this host will be reachable from other OpenShift components using the host's configured IP address (104.131.174.112). If that will work in your deployment, press <enter> to accept the default value. Otherwise, provide an alternate IP address that will enable other OpenShift components to reach the BIND DNS service on this host: |104.131.174.112| 104.131.174.112

That's all of the DNS information that we need right now. Next, we need to gather information about the hosts in your OpenShift deployment. For the purposes of this tutorial we will use the default settings.

Broker Configuration

----------------------------------------------------------------------
Broker Configuration
----------------------------------------------------------------------

Do you already have a running Broker? (y/n/q) n
Okay. I'm going to need you to tell me about the host where you want to install the Broker.
Do you want to assign the Broker role to master.openshift.example.com? (y/n/q/?) y
Okay. Adding the Broker role to master.openshift.example.com.
That's everything we need to know right now for this Broker.
Do you want to configure an additional Broker? (y/n/q) n
Moving on to the next role.

The installer will now ask us to set up a Broker. In this example we do not have any Brokers yet, so we will install the role on master.openshift.example.com.

Node Configuration

----------------------------------------------------------------------
Node Configuration
----------------------------------------------------------------------

Do you already have a running Node? (y/n/q) n
Okay. I'm going to need you to tell me about the host where you want to install the Node.
Do you want to assign the Node role to master.openshift.example.com? (y/n/q/?) y
Okay. Adding the Node role to master.openshift.example.com.
That's everything we need to know right now for this Node.
Do you want to configure an additional Node? (y/n/q) n

The installer will now ask us to set up a Node. In this example we do not have any Nodes yet, so we will install the role on master.openshift.example.com. At this point the installer will also ask you to configure the user accounts. In this example we chose to have the installer generate the credentials for us.

Username and Password Configuration

Do you want to manually specify usernames and passwords for the various supporting service accounts? Answer 'N' to have the values generated for you (y/n/q) n

If you would like to manually configure the usernames and passwords used for your deployment, you can do that here. In our example we decided to have them automatically generated for us. Enter n, and press Enter. Pay attention to the output. You will need the values in the "Account Settings" table later in the tutorial, specifically the OpenShift Console User and the OpenShift Console Password.

Account Settings
+----------------------------+------------------------+
| OpenShift Console User     | demo                   |
| OpenShift Console Password | S94XXXXXXXXXXXXXXXH8w  |
...
Finish Deployment

Host Information
+------------------------------+------------+
| Hostname                     | Roles      |
+------------------------------+------------+
| master.openshift.example.com | Broker     |
|                              | NameServer |
|                              | Node       |
+------------------------------+------------+

Choose an action:
1. Change the deployment configuration
2. View the full host configuration details
3. Proceed with deployment
Type a selection and press <return>: 3

When you are satisfied with the configuration, enter 3, and press Enter.

Repository Subscriptions

Do you want to make any changes to the subscription info in the configuration file? (y/n/q/?) n
Do you want to set any temporary subscription settings for this installation only? (y/n/q/?) n

For the purposes of this tutorial we will use the default mirrors. Enter n and press Enter for both questions.

Pre-Flight Check

The following RPMs are required, but not installed on this host:
* puppet
* bind
Do you want me to try to install them for you? (y/n/q) y

The installer will now perform a pre-flight check. If you need any packages installed, such as Puppet and BIND in our example, enter y and press Enter.

Note: Once you answer this question, Puppet will run for up to an hour on your server to configure OpenShift Origin. Here's some example output:

master.openshift.example.com: Running Puppet deployment for host
Error: Could not uninstall module 'openshift-openshift_origin'
Module 'openshift-openshift_origin' is not installed
master.openshift.example.com: Puppet module removal failed. This is expected if the module was not installed.
master.openshift.example.com: Attempting Puppet module installation (try #1)
Warning: Symlinks in modules are unsupported. Please investigate symlink duritong-sysctl-0.0.5/spec/fixtures/modules/sysctl/manifests->../../../../manifests.
Warning: Symlinks in modules are unsupported.
Please investigate symlink duritong-sysctl-0.0.5/spec/fixtures/modules/sysctl/lib->../../../../lib.
master.openshift.example.com: Puppet module installation succeeded.
master.openshift.example.com: Cleaning yum repos.
master.openshift.example.com: Running the Puppet deployment. This step may take up to an hour.

The installer will now perform the rest of the deployment. You may see some warnings during this process, like the symlink warnings above. These are normal and will not affect the deployment. This process can take upwards of an hour to complete.

Redeploying

If Puppet did not configure everything correctly the first time, you can re-run the Puppet deployment without running the entire configuration again. If you see an error when you first access the OpenShift Origin dashboard, you'll probably want to do this.

Run the installer again:

sh <(curl -s)

This time, you'll select the third option, to generate a new Puppet configuration file. Not all of the output is shown below - just the questions and answers.

Select from the following installation scenarios. You can also type '?' for Help or 'q' to Quit:
1. Install OpenShift Origin
2. Add a Node to an OpenShift Origin deployment
3. Generate a Puppet Configuration File
Type a selection and press <return>: 3

Choose an action:
1. Change the deployment configuration
2. View the full host configuration details
3. Proceed with deployment
Type a selection and press <return>: 3

Do you want to make any changes to the subscription info in the configuration file? (y/n/q/?) n
Do you want to set any temporary subscription settings for this installation only? (y/n/q/?) n

Make a note of the file name shown in the output:

Puppet template created at /root/oo_install_configure_master.openshift.example.com.pp
To run it, copy it to its host and invoke it with puppet: `puppet apply <filename>`.
All tasks completed.
oo-install exited; removing temporary assets.
Run the Puppet configuration, using the file name you were given:

puppet apply /root/oo_install_configure_master.openshift.example.com.pp

Step Nine — Test Your OpenShift Deployment

Your OpenShift installation is now complete. You can test your OpenShift deployment by visiting the console URL in a web browser. OpenShift will be using a self-signed certificate, so you will have to add an exception for this in your web browser. If you didn't note the credentials before, scroll back up to the "Account Settings" output section, and use the OpenShift Console User and OpenShift Console Password to log in.

Account Settings
+----------------------------+------------------------+
| OpenShift Console User     | demo                   |
| OpenShift Console Password | tARvXXXXXXXmm5g        |
| MCollective User           | mcollective            |
| MCollective Password       | dtdRNs8i1pWi3mL9JsNotA |
| MongoDB Admin User         | admin                  |
| MongoDB Admin Password     | RRgY8vJd2h5v4Irzfi8kkA |
| MongoDB Broker User        | openshift              |
| MongoDB Broker Password    | 28pO0rU8ohJ0KXgpqZKw   |
+----------------------------+------------------------+

If you can log into the console but see an error, you may need to redeploy the Puppet configuration. See the previous section for details.

Step Ten — Configure Your Domains for OpenShift

In general you will want to follow your domain registrar's documentation for creating your DNS entries. For the nameserver domains, you will want to substitute the IP address of your OpenShift host or BIND DNS server. In our example we created two name server records that point to the same IP. This is because most domain registrars will require a minimum of two NS records. In this tutorial we did not set up a secondary BIND DNS server.

example-dns.com
A Record | ns1.example-dns.com => 104.131.174.112
A Record | ns2.example-dns.com => 104.131.174.112

Direct the application domain to use the OpenShift DNS servers you just set up.

example.com
NS Record | ns1.example-dns.com.
NS Record | ns2.example-dns.com.

Note: For testing purposes, you can also just point your app domain or subdomain to the OpenShift server's IP address, since we're deploying only a single OpenShift Origin server at this time.

Now you will be able to access the OpenShift Console from the domain name of your Broker. In our example we used master.openshift.example.com. You will have to add an exception for the self-signed certificate again with the new domain. For in-depth information on configuring your DNS records, please see the tutorials listed below.

How To Create Vanity or Branded Nameservers with DigitalOcean Cloud Servers
How To Set Up and Test DNS Subdomains with DigitalOcean's DNS Panel
How to Point to DigitalOcean Nameservers From Common Domain Registrars

Step Eleven — Create Your First Application

In the OpenShift Origin console, click Create your first application now on the Applications page. Click PHP 5.4 to select it as your cartridge. Since this is your first application, you will also have to specify a domain name. In our example we used demo.apps.example.com with the application name of php. The final URL will be php-demo.apps.example.com. Leave the rest of the default settings. Click Create Application. It may take a couple of minutes to initialize the application. Once this process is complete, you can click visit app in the browser to see the test application. You will be presented with the default PHP cartridge page. This page will also give you useful information on how to edit and deploy applications using OpenShift.

Conclusion

We have successfully deployed a single-server OpenShift Origin environment. This server has all four OpenShift roles applied to it. It is also configured to be a DNS server. We configured one domain (example-dns.com) used for our nameserver pointers. We configured a second domain (example.com) used to resolve applications and OpenShift hosts.
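The hosted-application naming scheme the installer described (<app_name>-<owner_namespace>.<all_applications_domain>) can be sketched as a tiny helper. This is purely illustrative, not part of OpenShift itself:

```python
# Illustrative only: compose the hosted-app DNS name from the pieces
# the installer asked about during DNS configuration.
def app_url(app_name, owner_namespace, apps_domain):
    return "{0}-{1}.{2}".format(app_name, owner_namespace, apps_domain)

# The example application from Step Eleven:
print(app_url("php", "demo", "apps.example.com"))  # php-demo.apps.example.com
```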
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-openshift-origin-on-centos-6-5
Creating AWS Lambda using Python 3.6

Carvia Tech | August 31, 2019 | AWS Tutorials

In this article we will set up a simple AWS Lambda function using the Python 3.6 runtime and invoke it from the AWS CLI.

Prerequisites

AWS account setup (Create a free account at)
Familiarity with Python 3.6
No IDE required, as we can write the Python code directly in the AWS Lambda console

Create a lambda function

Go to the Lambda section of the AWS console and click on the create function button. We need to select the following:

Author from scratch
Function name - python-lambda-demo
Runtime as Python 3.6

Python code for our lambda function

We can create a simple Python function that prints the character frequency from a given input string.

import json

def lambda_handler(event, context):
    text = 'Input {}'.format(event['text'])
    return {
        'statusCode': 200,
        'input': text,
        'frequency': get_char_frequency(text)
    }

def get_char_frequency(text):
    freq = {}
    ls = [i for i in list(text.lower()) if i != ' ']
    for i in ls:
        if i in freq:
            freq[i] += 1
        else:
            freq[i] = 1
    return freq

We need to put the above code in the inline editor of the lambda function and hit the save button. Now we are ready to test the lambda function.

Configure Input

The AWS Lambda console provides testing functionality; we need to configure a test event for it. After that, save the test event and click on the Test button. We will see the results on the screen itself.

Invoking the lambda using aws cli

We can use the AWS CLI to invoke the lambda, provided you have the AWS CLI set up on your system.
You will have to follow the official Amazon AWS documentation to set up the AWS CLI on your target platform:

$ aws lambda invoke --function-name python-lambda-demo --payload '{"text": "my name is carvia"}' output.txt
$ cat output.txt
{
  "statusCode": 200,
  "input": "Input my name is carvia",
  "frequency": {"i": 3, "n": 2, "p": 1, "u": 1, "t": 1, "m": 2, "y": 1, "a": 3, "e": 1, "s": 1, "c": 1, "r": 1, "v": 1}
}

Other invocation mechanisms

We can use other mechanisms to invoke the lambda function; for example, we can set up an API Gateway or SQS trigger that will invoke it. We will cover the API Gateway trigger in this article.

Exposing Lambda through API Gateway

From the left-hand side of the lambda configuration page, we need to select API Gateway as the trigger. Now we need to configure the necessary inputs for API Gateway, specifically:

Creating a new API or using an existing API
Security: Open / Open with Key or IAM
API name: this will appear in API Gateway

After that we are all set. We will get the final invocation URL after we save the changes. For demo purposes, we will keep this API open and hit it from Postman. That's all for this article. Stay tuned.
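Since the handler is plain Python, it can also be sanity-checked locally before any AWS or CLI setup. The standalone snippet below reproduces the handler from earlier and calls it with a dictionary standing in for the test event:

```python
# Self-contained local check of the lambda handler shown earlier.
# No AWS is involved; the event is just a dict.

def get_char_frequency(text):
    freq = {}
    for ch in text.lower():
        if ch != ' ':
            freq[ch] = freq.get(ch, 0) + 1
    return freq

def lambda_handler(event, context):
    text = 'Input {}'.format(event['text'])
    return {
        'statusCode': 200,
        'input': text,
        'frequency': get_char_frequency(text),
    }

result = lambda_handler({'text': 'my name is carvia'}, None)
print(result['input'])           # Input my name is carvia
print(result['frequency']['i'])  # 3
```

The printed frequency matches the output.txt shown above, which is a quick way to confirm the deployed function behaves the same as the local copy.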
https://www.javacodemonk.com/creating-aws-lambda-using-python-3-6-7803cc53
take input 2147395599 as example
Expect output is 46339 (46339^2 - 2147395599 = -92678) but i think 46340 is better (46340^2 - 2147395599 = 1)
and why can i change int sqrt(int x) to int sqrt(float x) without any compile error?
My code comes from Quake-III Arena:

class Solution {
public:
    int sqrt(int x) {
        float xf = x;
        float xhalf = 0.5f * xf;
        int i = *(int*)&xf;            // reinterpret the float's bits as an int
        i = 0x5f375a86 - (i >> 1);     // magic-number initial guess
        xf = *(float*)&i;
        xf = xf*(1.5f - xhalf*xf*xf);  // Newton step, repeating increases accuracy
        xf = xf*(1.5f - xhalf*xf*xf);  // Newton step, repeating increases accuracy
        return (int)(fabs(1/xf));      // 1/rsqrt(x) approximates sqrt(x)
    }
};

you should return the floor of actual value;
input : 8
return : 2 (not 3) because 2*2 < 8

The question is designed for the world of integers (without even taking the decimals into consideration). So you only need to output the integer part. As for the Quake III code, it is fast alright, but it is more like a hack. I don't think it is an answer the interviewers look for. First of all, it's basically unexplainable; if the interviewer asks you: "okay... please explain how you came up with this magic number 0x5f375a86", or "can you please show me how this magic number leads to a good initial guess?" how would you respond? Secondly, it shows the interviewer that you are probably one hard-core game programmer, but it does not tell them how much you understand about binary search. Thirdly, you have to remember that number for the interview. So all in all, this is not a practical answer.
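For completeness, here is a minimal binary-search version of the floor square root the replies recommend (written in Python for brevity; the same idea carries over directly to C++, where you would use a 64-bit type for mid*mid to avoid overflow):

```python
# Floor of sqrt(x) by binary search: find the largest m with m*m <= x.
def my_sqrt(x):
    lo, hi = 0, x
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid * mid <= x:
            lo = mid + 1   # mid is still a candidate; look higher
        else:
            hi = mid - 1   # mid overshoots; look lower
    return hi              # hi ends at the largest valid candidate

print(my_sqrt(8))           # 2
print(my_sqrt(2147395599))  # 46339
```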
https://discuss.leetcode.com/topic/1540/this-problem-is-not-very-good
On 11/17/06, fizban <fizban at paranoici.org> wrote:
> 1* take req.uri, str() it (just in case?) and split('/') it.
> [stuff = str(req.uri).split('/')]

There's no need to str() it. It's already a string. It will also have been URL-decoded. However req.uri is not UTF-8 decoded, in case you deal with Unicode URLs. If you care about that you should probably do

try:
    uri = req.uri.decode('utf8')
except UnicodeDecodeError:
    raise apache.SERVER_RETURN, apache.HTTP_BAD_REQUEST

Doing a split('/') is fine, assuming of course you are not overriding the default Apache configuration of the AllowEncodedSlashes directive.

> 2* take stuff[1], see if isalpha(), if so see if stuff[1] is in a tuple
> (contains all the valid "sections"). if it is, we assume stuff[1] is
> safe to deal with. if not, we return a custom 404.

Watch out for empty parts. For example if the url contains /////

The isalpha() test is fine. If you've unicode decoded it just be aware that isalpha() will also allow non-latin letters too.

> 3* if stuff[1] is valid, and it is in a tuple containing a list of
> special sections with a matching function, we run that function
> [eval("%s(%s)" % (section, "req"))].

This should be secure, if you are definitely checking the string against a known acceptable list of them. But it's bad Python form! eval should only be used as a last resort (unless you don't care about form/style). It's almost never necessary. This is a bad habit encouraged by PHP that you will want to unlearn quickly. The simplest way to do this without eval is to change your list into a dictionary. Assuming you have something like

allowed_sections = ['one', 'two', 'three']

change it to

def one(req):
    ...
def two(req):
    ...
def three(req):
    ...
section_mapping = {
    'one': one,
    'two': two,
    'three': three
}

and then rather than eval call them as

try:
    section_mapping[section](req)
except KeyError:
    raise apache.SERVER_RETURN, apache.HTTP_NOT_FOUND

Another common way is to put all your functions inside a class and use getattr and such. You might want to get the O'Reilly book "Python Cookbook" to learn more techniques of Python programming.

> some of these functions take other
> arguments, like a (pre-validated with similar approach) stuff[2], or
> req.args (same here). otherwise we run some other routine, by parsing
> and req.writing a template.
> [stuff[2] or req.args are this time matched against regular expressions,
> to see if they fit the arguments taken by the section functions]

It is very common to use regular expressions to parse URIs. Many python web frameworks do this, and no reason you shouldn't as well. Particularly in Python the named expression ?P<..> syntax is quite useful to keep your regex code readable,

m = re.match(r'/one/(?P<size>\d+)', url)
size = m.group('size')

> Do you guys think it's a decent approach in terms of "security"? Would
> you take any other validation steps? As I said I'm really new to python
> and mod_python, so since the website has some huge userbase, I'm really
> worried about security..

Lets just say that what you've shown us shouldn't be insecure, but we can't say it's secure either. There's so much that's not even talked about. For example the whole user authentication, SSL, use of database queries, embedded/hardcoded passwords (which are a definite no-no, especially if you have PythonDebug turned on).

> We are not using (for various reasons) sql db,
> only templates and local xml basically, so sql inj. is not an issue.

Okay, but you still might want to worry about other kinds of injection. Such as pathname injection, javascript injection, etc. What if the URL contains the characters "<![CDATA[" for example. That could really mess up some XML processors.
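Putting the advice together outside of mod_python, a stand-alone sketch of the whitelist-plus-dictionary dispatch looks like this. The section names, and the plain dict standing in for `req`, are made up purely for illustration:

```python
import re

# Stand-alone sketch: whitelist the section, dispatch via a dictionary
# instead of eval, and pull arguments out of the URI with a named group.
def one(req):
    return 'one:size=%s' % req.get('size')

def two(req):
    return 'two'

section_mapping = {'one': one, 'two': two}

# [a-z]+ plays the role of isalpha(); the optional \d+ group is the
# pre-validated numeric argument.
ROUTE = re.compile(r'^/(?P<section>[a-z]+)(?:/(?P<size>\d+))?/?$')

def handle(uri):
    m = ROUTE.match(uri)
    if m is None:
        return '404'
    try:
        return section_mapping[m.group('section')](m.groupdict())
    except KeyError:
        return '404'   # valid-looking section, but not in the whitelist

print(handle('/one/42'))  # one:size=42
print(handle('/bogus'))   # 404
```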
Just be cautious of where the data came from and how you use it and you should be fine.

> Since the site re-design will force us to change all the URI, I have
> setup some other function to see if str(req.uri) matches moved or
> deleted pages, if so we return 410 or 301 messages. 404 give the
> impression of a messed up site. Is str(req.uri) safe enough to be passed
> as argument to the notfound() or moved() functions I've made?

Sending a 301 is a very good thing when you move URLs around. For instance the googlebot indexer when seeing a 301 will be more likely to trust the new URLs, carrying forward all your earned rankings, etc. Also 301's are used by some browsers' bookmark systems, so that bookmarks are automatically updated. As for 404s, don't worry about any "impression". Use the correct code for the correct situation. The only case where I might deviate from this is to send 302 rather than the recommended 307, since many old browsers (IE) don't understand 307. Also you may want to use the symbolic names, such as apache.HTTP_NOT_FOUND, provided by the mod_python.apache module rather than numbers, as it makes your code more readable.

Good luck.
--
Deron Meranda
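As an aside, the moved/deleted lookup described in that last question can be sketched as a simple table. The URLs and status tuple shape here are made up for illustration, not part of mod_python:

```python
# Illustrative lookup for old URLs: moved pages get 301 plus the new
# location, deleted pages get 410 Gone, everything else is 404.
MOVED = {'/old-page': '/new-page'}
DELETED = {'/retired-page'}

def status_for(uri):
    if uri in MOVED:
        return (301, MOVED[uri])
    if uri in DELETED:
        return (410, None)
    return (404, None)

print(status_for('/old-page'))      # (301, '/new-page')
print(status_for('/retired-page'))  # (410, None)
```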
http://modpython.org/pipermail/mod_python/2006-November/022630.html
The Amazon SNS platform (SNS) is a web service that enables you to send an SMS or a text-enabled mobile push notification to targeted subscribers. Patreon is a membership platform that helps artists & creators have a direct relationship with their biggest fans, get recurring revenue for their work, and create works on their own terms.

It's easy to connect Amazon SNS + Patreon without coding knowledge, and start creating your own business flow. For example, you can create new rows on Google Sheets for each new pledge on Patreon, or send a Slack channel message for new Patreon supporters.

Triggers:
- Amazon SNS: Triggers when you add a new subscription.
- Amazon SNS: Triggers when you add a new topic.
- Patreon: Triggers when a new member is created, either by pledging or by following a campaign.
- Patreon: Triggers when a new pledge is received on a campaign.

Actions:
- Amazon SNS: Create a new message.
- Amazon SNS: Create a new topic.

Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push-notification service for applications of all sizes. Amazon SNS makes it simple and cost-effective to push notifications to mobile devices, applications, and email subscribers. You can use the API interfaces in your applications to publish messages to subscribed endpoints such as mobile devices or SQS. Amazon SNS offers developers a highly scalable, reliable, and cost-effective service for distributing notifications from applications to users via email, SMS, social media, push notifications, or webhooks. With Amazon SNS's flexible targeting options, you can send notifications to individual endpoints or to groups of subscribers organized within topics.
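The topic/subscriber fan-out just described can be pictured with a toy in-memory sketch. This is purely illustrative; real SNS delivers to endpoints over HTTP, SMS, email, and so on:

```python
# Toy in-memory picture of SNS-style fan-out: one publish, and every
# subscribed endpoint receives its own copy of the message.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        for endpoint in self.subscribers:
            endpoint(message)

delivered = []
topic = Topic("new-photo-album")
topic.subscribe(lambda m: delivered.append("push:" + m))  # e.g. mobile push
topic.subscribe(lambda m: delivered.append("sms:" + m))   # e.g. SMS

topic.publish("Album posted!")
print(delivered)  # ['push:Album posted!', 'sms:Album posted!']
```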
You can also subscribe an existing topic to receive delivery of real-time notifications. You can easily integrate the push notification service with your existing application using the web service API, which is available in Java, .NET, PHP, Python, Ruby, Go, Node.js, and C#/.NET Core. The Amazon SNS console provides an easy visual interface for creating topics and subscribing endpoints. Once you've subscribed endpoints to a topic, you can publish notifications without having to write any code.

Push notifications are delivered from a topic to one or more subscriber endpoints and are designed for messaging short-lived events between your server and your app. For example, if your app has a new photo album that customers want to know about, you can create a topic with the photos and then send a notification from your server to that topic when the album is posted.

For more information, see Amazon SNS in the Amazon Web Services General Reference.

Patreon is a membership platform that makes it easy for artists and creators to get paid. It allows artists and creators to receive funding directly from their fans. There are many ways you can go about using Patreon:

You can sign up as a creator first, then use Patreon as your own payment processor. Let's say you're an artist who wants to sell prints of your work online. By signing up for Patreon as a creator first, you get access to tools that help you build a following around your work and start selling prints of your work right away. As you do so, you'll gain access to more features on Patreon itself. Then, if you choose to start charging patrons on your own site using Patreon's hosted checkout, it's easy! Just switch on Patreon's hosted checkout for your account.
You'll be able to connect your own payment processor account, like Stripe or PayPal, with Patreon's hosted checkout instantly, with no additional setup required.

As an artist or creator, you can make your content available for patrons first, before releasing it anywhere else. This helps you build a strong community around your work before it hits the mainstream market — and perhaps even start earning revenue from it before anyone else sees it! You can choose whether or not to charge patrons for your content; if you don't charge anything, all of your content will be free on Patreon too. And if you're already making money from selling things like music or merch online, you can turn your patrons into customers by adding checkout functionality directly into your Patreon page, and then processing payments through Patreon's hosted checkout. The best part? You can charge different amounts for different tiers of rewards (for example, $1 per song download).

If you're just starting out as an artist or creator, you can use Patreon's hosted checkout as a way to get paid by your fans for creating great content. You could also use Patreon as a way to get preorders for work before it's finished — in other words, ask people to pledge money toward whatever it is you're creating next (an album, a comic book, a novel).

Patreon also empowers creators with tools that help them run their business:

· You can set up monthly subscription payments from patrons so that you always have an income stream.
· You can integrate with PayPal or use Stripe's API to process payments outside of Patreon if you want to process payments yourself instead of using Patreon's hosted checkout.
· You can offer exclusive experiences like private chats or early access to new content as rewards for higher-level pledges — and tie content delivery directly to that reward level (for example: "As a patron at this level I get everything I need every month but I'll only get the next 3 videos released once I become a patron").
Integration of Amazon SNS and Patreon

First step. Create an Amazon Web Services account

To create an Amazon Web Services account, visit and click the Sign Up Now button. Then follow these steps: click the Sign Up button and fill in the required fields –> click the Create a new AWS account button. Now sign in with your new account credentials –> click Sign In Using Our Secure Server. If you don't have an account yet, fill in the required information and click the Continue button –> click the Create a new AWS account button. Once signed in, confirm your account by filling in the required information –> click the Confirm My Identity button.

Second step. Create Topic & Endpoint for Amazon SNS

To create the topic and endpoint for Amazon SNS, go to the Amazon SNS console, select the Topics tab and then click the Create Topic button. Fill in the required fields and click the Create Topic button –> wait while AWS creates the topic. Then click the Manage Endpoints tab. Then click the Create Endpoint button –> fill in the required information and click the Create Endpoint button. Now we have created a topic & endpoint for Amazon SNS.

Third step. Subscribe Endpoint to Topic

To subscribe the endpoint to the topic, go back to the Amazon SNS console, select the Topics tab and click on the "Name_of_the_Topic" topic name –> click on the Subscriptions subtab. Click the Subscribe To button –> select the endpoint name –> fill in the required information and click the Subscribe button –> wait while AWS creates the subscription. Now we have subscribed an endpoint to a topic.

Fourth step. Publish notification from server

To publish a notification from the server, we first need to create a new file called pubsubclient.py under the PYTHONPATH directory with the following contents:

#!/usr/bin/python
import sys
import json
import boto3

# Set the necessary client variables here
# Namespace URI (the topic ARN)
NS_URI = "arn:aws:sns:us-east-1:123456789012:your-topic-name"

def main():
    sns = boto3.client('sns')
    try:
        # Publish the message to the topic via boto3's SNS publish call
        response = sns.publish(
            TopicArn=NS_URI,
            Message=json.dumps({"message": "Hello World!"})
        )
        print(response)
    except Exception:
        print("Error publishing message.")
        sys.exit(1)

if __name__ == "__main__":
    main()

Now go back to the terminal window running the Python project and type the command below to publish a notification from the server:

$ sudo python pubsubclient.py
2017-09-15 14:34:11 Successfully published message Hello World!

Now we have published a notification from the server, and we can integrate this with our project.

In this blog post I have explained how we can integrate Amazon Simple Notification Service (Amazon SNS) with our project using the Python programming language through the Boto3 library. In the next post I will show how we can integrate Amazon Simple Notification Service (Amazon SNS) with our project using NodeJS through the Nodemailer library.

The process to integrate Amazon SNS and Patreon may seem complicated and intimidating. This is why Appy Pie Connect has come up with a simple, affordable, and quick solution to help you automate your workflows.
https://www.appypie.com/connect/apps/amazon-sns/integrations/patreon
hello, I recently inherited a rather large converted code base ( vb6 -> vb.net ) and am trying to come to grips with the use of 'modules'. First, my application is a Windows Forms application; with this said, are there 'best practices' that I could use as a reference? The real concern I have is that I have run into a massive 'utility'-like module that holds a great deal of things ( user credentials, random methods, connection strings, etc. ), making all of it available throughout the application. Is this normal in Windows Forms? The majority of my experience is in web applications and I do not recall experiencing something like this before... I am kind of lost as to how to make this heap of code more manageable so it can be digested and used properly. I am not opposed to the module itself, but I personally am leaning toward breaking some of this functionality into classes ( User class, DataAccess class, etc. ) BUT I want to give what VB.NET has to offer a fair shake before doing so. At the least, I could break the module into a few modules with a more specific direction, but I don't see the benefit of not breaking them into classes ( I have read that .NET creates a class from the module... ). I am looking for some solid advice! Thanks

Modules were used quite heavily in VB6, and in older versions even more so, but are used little if at all in most .NET applications. I would break the module into classes where possible. Converted code is often less than ideal and in most cases will benefit from being updated to newer methods and removing any dependency on outdated runtime and COM components that can be carried over during conversion. After converting a couple of projects myself I ended up rewriting them, with the end result containing no modules. Always use [code][/code] tags when posting code.

I found a few good threads about modules which really explained a lot. My intention is to go to C#, but with the use of modules I thought I was screwed.
If others are experiencing a similar issue: a module compiles to a sealed class with static members ( Shared in VB.NET ). Sorry for the C# example.

namespace ModuleTest
{
    public sealed class TestModule
    {
        public static void DoSomething() { }
    }
}

I definitely agree with the converted code being less than ideal. We now have a lot of things wrapped in 'VB6.Format()', which was also new to me.
http://forums.codeguru.com/showthread.php?514656-Help-understanding-VB-(-conversion-amp-philosophy-)&p=2025368
Talk:Proposed features/Directional Prefix & Suffix Indication Contents Vid the Kid's cited practices Firstly: I need to update my "osm-abbr.html" to reflect my current practices. Secondly: yes, I have recently adopted the practice of removing directional prefixes from street names, though I may consider retaining them in some situations where casual usage typically includes them; for example, the extremities of High Street and Broad Street in Columbus. Usually at the same time, I've been expanding street "type" suffixes, and adding a tag called abbr_name which holds the street name as abbreviated on signage. My current practice is to leave the directional prefix out of those too — though they are present on the signs, usually in the same font as the type suffix which may be smaller in size than the core name — mostly to reinforce the idea of using abbr_name in rendering cases where a shorter label is desired. Of course, I insist that the addr:street tag of the Karlsruhe Schema should include the directional prefix, and be abbreviated according to USPS practices (which may occasionally differ from signed abbreviations). Vid the Kid 02:50, 18 August 2010 (BST) Expanded name keys There seem to be a lot of additions and variations to the name key. They can be of the form foo_name, which I guess specify various kinds of alternate names, and then there are name:bar, which usually specifies different languages. This proposal seems to introduce a lot of name:bar to break the name into pieces. Yet, when mappers unabbreviate street names while keeping the abbreviated version in another tag, I think abbr_name may be more common than name:abbr. Maybe the latter is better? Anyway, as for this particular proposal there should probably be some clarification as to where the default name value should fall between name:full and abbr_name/name:abbr. 
Vid the Kid 02:50, 18 August 2010 (BST) - The idea behind using "name:bar" is to make it clear that "bar" applies to "name" and not necessary "alt_name" if one exists, etc. I think "name:abbr" is better. I'm not sure if I fully understand your question, but "name:prefix" should still be used even if "abbr_name" is used. Kevina 03:34, 18 August 2010 (BST) Examples I'll say right up front that it will be hard to convince mappers to regularly break a street name into multiple keys, though I do recognize the utility of this proposal. I think it would be helpful to include, in the proposal itself, examples of how all the name-related tags for a single street (or each for a few streets) might look. That will help people get a firmer concept in their minds about what exactly is proposed. Vid the Kid 02:50, 18 August 2010 (BST) - Added, I used two roads from your area, let me now if I got them right. Also if you know of an example of a street with both a directional prefix that should most likely not be kept, and a suffix that should be kept, please add it too the table. Kevina 09:02, 18 August 2010 (BST) Storing it in the addr: namespace As it's part of the address information I'd rather see it added to addr:full or addr:prefix/suffix. It's not strictly part of the name of the object (a street). Grille Chompa - When breaking down an address the prefix will still be considered part of the street name and not of the address. Also addr seams like it would apply to points, not ways. (Also can you please remember to tag you posts with ~~~~, I added your name to you post). Kevina 10:29, 23 August 2010 (BST) Other possible uses One additional use that comes to mind is "Northbound" and the like, for use on divided highways. For example, "Northbound Northeast Expressway" would be tagged as name=Northeast Expressway; name:prefix=Northbound; name:full=Northbound Northeast Expressway. – Minh Nguyễn (talk, contribs) 05:27, 24 August 2010 (BST) Next step? 
It seems like the main objection on the mailing list is that this should be done purely on a local basis, making sure to consult official documents. The next step is voting, right? FWIW I agree with this proposal, I think it's appropriate for the area where I map. Evil saltine 23:14, 6 March 2011 (UTC) In Greece our streets are "Street Foobar" or "Avenue Foobar" In Greece we got a problem as of late because some mappers want to remove "Street" or "Avenue" from the name of a street or avenue because of routing problems. I don't want the "Avenue" or "Street" name to be removed so I want to have a tag that keeps this information. 1st real world example: "Οδός Νικ. Παρασκευά" 'Οδός'='Street' 'Νικ. Παρασκευά'=the name of the street Οδός is very common in urban areas. On almost every sign you got Οδός on it in front of the name of the street. 2nd real world example: "Λεωφόρος Γ' Σεπτεμβρίου" 'Λεωφόρος'='Avenue' 'Γ' Σεπτεμβρίου'=name of the avenue Λεωφόρος is less common. So I'm considering putting Λεωφόρος and Οδός in name:prefix and the name of the avenue/street in name:suffix. I only plan to use this scheme in Greece. logictheo 07:54, 10 May 2012 (BST) - I'd suggest to introduce a pair of new tags. status_name=* for "Οδός", "Λεωφόρος" and such things as "city", "town", "river" etc. And pure_name=* for the name of an object. The "name" tag retain for the full name. --Surly 12:12, 11 May 2012 (BST) Routing Problems I'm concerned that routing apps may have problems with address ambiguity if they are not programmed to be aware of this proposal. For example the address "300 South 200 West" is a distinct, single location, but if you set the name tag to "200 West" (as the street sign indicates), the way will have two places where house number 300 could be -- one North and one South. Only the name:prefix will disambiguate whether the number is North or South, and routing apps tend to require a numeric house number and an unambiguous street name. 
To resolve this, would you consider changing the proposal so it is backwards-compatible, for example use "name:core" for the "200 West" part as indicated on the sign, but leave "name" as the full, unambiguous name -- "South 200 West". That way a renderer may choose to show only the "core" part of the name. I am aware that the ambiguity for routing can be resolved by judicious use of the addr:street tag in Karlsruhe schema-mapped addresses (eg. addr:housenumber="250", addr:street="W 400 S" next to a road with name="400 South") but I am still concerned about how a routing app should determine that "W 400 S" is the same road as "400 South" on its display - it would require relation-based mapping of addresses which is a lot more work and less common than addr:street-based addressing, or it would also require careful tagging to ensure that the matching short_name (eg. "W 400 S") appears on the associated road. I'd like to see more discussion about the possible impacts of this proposal before mappers continue to make the changes. Right now in downtown Salt Lake City there is a mix of both, sometimes on the same road (West 400 South in one direction and 400 South in the other). It will be easy to automate the conversion if this naming scheme is adopted. - tedp (talk) 06:22, 4 July 2013 (UTC)
http://wiki.openstreetmap.org/wiki/Talk:Proposed_features/Directional_Prefix_%26_Suffix_Indication
The way to write American music is simple. All you have to do is be an American and then write any kind of music you wish. -- Virgil Thompson In the same vein, the way to write Eclipse plugins is simple too. You have to write an application and "attach" it to Eclipse. But like music, you first have to learn a lot of things before you can write a masterpiece. So let's get started. In this article we'll discuss a few simple GUI elements: As examples of these, we'll modify slightly our existing plugin, and write a utility class that we'll need down the road. One can't just add widgets to Eclipse's user interface anywhere. It has to be done at specific, named, documented places. These are called extension points. In a basic Eclipse installation, there are hundreds of extension points available. Plugins themselves can publish new extension points. Let's have a look at the list. Open Invokatron's plugin.xml file and go to the Extensions tab. The items shown in the All Extensions tree list are the different widgets of your plugins, categorized by the extension points in which they appear. For example, the Invokatron Editor is inside an Editor (org.eclipse.ui.editor) extension point. Figure 1 shows the tab in question. org.eclipse.ui.editor Figure 1. Extensions tab Now press the Add button. A list (shown in Figure 2) appears. Figure 2. New extension wizard, list of extensions Scroll up and down this list to see the many extension points available. You'll notice that there are two kinds of extension points: those with extension templates (identified by the plus-sign icon), and those without. Most of the often-used extension points come with templates to help us developing extensions. The one selected in Figure 2, Action Set, has a template named "Hello World" action set. When you select a template, a short description appears. The next page asks for the parameters to the template. 
Now close the wizard and go back to the Extensions tab. Select the Invokatron Editor. You'll notice in this tab the information that we entered in the wizard for the Invokatron Editor in the last article. Normal extensions need a unique identifier (Id field), a name for display (Name field), and the extension point where it belongs (Point field). Extensions created from templates (like the Invokatron Editor) need more parameters. Extension points without a template need more information as well, but it must be entered in the text editor.

Now that we know what extension points are, let's add an extension. The first one we're going to add is a toolbar button. This button will invoke the New Invokatron wizard we've created. There are three steps in adding a toolbar button:

1. Declare a new extension

We already know how to do this. Simply go back to the plugin.xml editor, under the Extensions tab. Click Add. Toolbar buttons fall under the org.eclipse.ui.actionSets extension point. Don't use the template. Click Finish directly. Enter the Id, Name, and Point for this new action (the Point is org.eclipse.ui.actionSets). Go to the plugin.xml tab; Eclipse added a new section to this file.

2. Augment the declaration with specific tags

This new extension is a bit naked. Let's add tags under here. How do you know which tags are allowed? You can right-click on elements in the All Extensions tree and go to the New menu; this will give you a list. Or you can look in the Eclipse documentation under Action Sets. Here we see that inside of the <extension> tag we can put a single <actionSet> tag. This can contain zero or more <menu> tags, followed by zero or more <action> tags, and an optional <description> tag. But the most important tag is <action>. This can describe both toolbar buttons and menu items. (We'll discuss the <menu> tag later on.)
Below is a snippet of XML code for the toolbar button we're going to add. The new code is the <action> element. We will dissect this code right after.

<extension
      id="NewInvokatronAction"
      name="New Invokatron Document Action"
      point="org.eclipse.ui.actionSets">
   <actionSet
         id="invokatron.actionSet"
         label="Invokatron Actions"
         visible="true">
      <action
            id="invokatron.wizard.RunWizardAction"
            toolbarPath="org.eclipse.ui.workbench.file/new.ext"
            icon="icons/InvokatronIcon16.gif"
            tooltip="Starts the New Invokatron Document Wizard."
            class="invokatron.wizard.RunWizardAction">
      </action>
   </actionSet>
</extension>

All of this is also available graphically in the plugin.xml editor, but we're looking at the XML to be able to see the full text of the fields. The <actionSet> tag shown here has only one action. An action is an object that represents an item in a menu or a button in a toolbar. Action attributes are too many to list here, but they are all documented in the online help. The most interesting attributes are: id, toolbarPath, icon, tooltip, and class.

A toolbar path indicates where to add a toolbar button. Since anyone can create toolbars, and sometimes a button can contain sub-items, we refer to this location with a hierarchical list of identifiers. Here's a list of often-used toolbars, with their paths:

- org.eclipse.ui.workbench.file (group markers: new.ext, save.ext, print.ext, build.ext)
- org.eclipse.ui.workbench.navigate
- org.eclipse.debug.ui.launchActionSet
- org.eclipse.ui.edit.text.actionSet.presentation
- org.eclipse.search.searchActionSet
- org.eclipse.jdt.ui.JavaElementCreationActionSet
- Team (CVS)

If you provide the toolbar ID without a marker ID, your button will be added in a new toolbar right next to this one. Creating toolbars is as simple as using a new toolbar name. This new toolbar will then be available to add on the Eclipse GUI. You will sometimes see plugins using the toolbar path "Normal." This is the old naming convention.
Using this in Eclipse 3 will create a new toolbar named "Normal." If you create a new toolbar ID, your toolbar is added after the File toolbar. Notice the New group marker of the File toolbar. This is where we'll add our button. Since the marker ID is new.ext, the complete path is:

org.eclipse.ui.workbench.file/new.ext

3. Write the action delegate class

The last step is to write a bit of Java to implement the action. This class is called an action delegate.

package invokatron.wizard;

import org.eclipse.jface.action.Action;
import org.eclipse.jface.action.IAction;
import org.eclipse.jface.viewers.ISelection;
import org.eclipse.jface.wizard.WizardDialog;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.ui.IWorkbenchWindow;
import org.eclipse.ui.IWorkbenchWindowActionDelegate;
import org.eclipse.ui.PlatformUI;

public class RunWizardAction extends Action
        implements IWorkbenchWindowActionDelegate {

    /** Called when the action is created. */
    public void init(IWorkbenchWindow window) {
    }

    /** Called when the action is discarded. */
    public void dispose() {
    }

    /** Called when the action is executed. */
    public void run(IAction action) {
        InvokatronWizard wizard = new InvokatronWizard();
        Shell shell = PlatformUI.getWorkbench()
                .getActiveWorkbenchWindow().getShell();
        WizardDialog dialog = new WizardDialog(shell, wizard);
        dialog.create();
        dialog.open();
    }

    /** Called when objects in the editor are selected or deselected. */
    public void selectionChanged(IAction action, ISelection selection) {
    }
}

© 2018, O'Reilly Media, Inc.
http://www.onjava.com/pub/a/onjava/2005/03/30/eclipse.html
- 03 Apr, 2013 (1 commit)
  - using namespace boost is no longer used in pkt6_unittest.cc
  - copyright years updated
  - Pkt6::getRelayOption() documented
  - several methods are now better commented
  - ChangeLog now refers to libdhcp++, not b10-dhcp6
- 06 Mar, 2013 (2 commits)
  - Jeremy C. Reed authored, reviewed by jelte via jabber. Changes some log message descriptions, some function arguments, and some output.
- 13 Dec, 2012 (2 commits)
- 06 Dec, 2012 (1 commit)
- 03 Dec, 2012 (6 commits)
- 30 Nov, 2012 (2 commits)
- 29 Nov, 2012 (1 commit)
- 28 Nov, 2012 (3 commits)
  - Stephen Morris authored
- 27 Nov, 2012 (5 commits)
- 26 Nov, 2012 (3 commits)
https://gitlab.isc.org/sebschrader/kea/-/commits/1eb1178479503a5daf0f0ba43bf729d64c6ecd7c/src/lib/dhcp/tests/option_custom_unittest.cc
2009-09-11 12:27:12 8 Comments The meaning of both eludes me.
@Brad Solomon 2019-08-16 13:53:08 There are some very clear definitions sprinkled throughout K&R (2nd edition); it helps to put them in one place and read them as one:

@Puneet Purohit 2013-01-03 06:54:23 Declaration means giving a name and type to a variable (in the case of a variable declaration), e.g.: or giving a name, return type, and parameter types to a function without a body (in the case of a function declaration), e.g.: whereas definition means assigning a value to a variable (in the case of a variable definition), e.g.: or providing/adding a body (functionality) to a function, which is called a function definition, e.g.: Many times declaration and definition can be done together, as: and: In the above cases we define and declare the variable i and the function max().

@Puneet Purohit 2013-01-03 06:57:17 the actual meaning of definition is to assign a value/body to a variable/function, whereas declaration means to provide a name and type to a variable/function

@Lightness Races in Orbit 2013-04-14 17:36:51 You can define something without assigning it a value.

@Puneet Purohit 2013-04-15 09:03:06 @LightnessRacesinOrbit ....can you please give an example for that...thank you

@Lightness Races in Orbit 2013-04-15 11:19:59 Just like this: int x;

@Puneet Purohit 2013-04-15 11:27:21 it's a declaration of variable x, not its definition

@Lightness Races in Orbit 2013-04-15 14:10:45 No, it is both. You are confusing definition with initialisation.

@Michael Kristofik 2009-09-11 13:53:50 From the C++ standard section 3.1: The next paragraph states (emphasis mine) that a declaration is a definition unless...
... it declares a function without specifying the function's body:
... it declares a static member within a class definition:
... it declares a class name:
... it contains the extern keyword without an initializer or function body:
... or is a typedef or using statement.
Now for the big reason why it's important to understand the difference between a declaration and a definition: the One Definition Rule. From section 3.2.1 of the C++ standard:

@RJFalconer 2014-03-04 13:27:46 "declares a static member within a class definition" - This is true even if the static member is initialised, correct? Can we make the example struct x { static int b = 3; };?

@Kyle Strand 2014-08-14 17:08:47 @RJFalconer You're correct; initialization does not necessarily turn a declaration into a definition (contrary to what one might expect; certainly I found this surprising). Your modification to the example is actually illegal unless b is also declared const. See stackoverflow.com/a/3536513/1858225 and daniweb.com/software-development/cpp/threads/140739/… .

@Victor Zamanian 2014-10-07 13:52:17 This is interesting to me. According to your answer, it seems that in C++ a declaration is also a definition (with exceptions), whereas in the C standard it is phrased from the other perspective (C99, section 6.7, Declarations): "A definition of an identifier is a declaration for that identifier that: [followed by criteria for different cases]". Different ways to look at it, I suppose. :)

@Gab是好人 2016-02-11 14:45:27.

@Santosh 2014-03-12 18:01:50 Find similar answers here: Technical Interview Questions in C. A declaration provides a name to the program; a definition provides a unique description of an entity (e.g. type, instance, and function) within the program. Declarations can be repeated in a given scope; a declaration introduces a name into a given scope.
A declaration is a definition unless:
A definition is a declaration unless:

@Jeet Parikh 2018-08-08 04:06:13 Stages of an executable generation: In stage 2 (translator/compiler), declaration statements in our code tell the compiler that these are things we are going to use later and that their definitions can be found elsewhere, and stage (3) (the linker) needs the definitions to bind the things together.

@LinuxBabe 2018-03-07 23:06:30 According to the GNU C library manual ()

@Karoly Nyisztor 2018-02-20 18:56:26 To understand the nouns, let's focus on the verbs first.
declare - to announce officially; proclaim
define - to show or describe (someone or something) clearly and completely
So, when you declare something, you just tell what it is. This line declares a C function called sum that takes two arguments of type int and returns an int. However, you can't use it yet. When you provide how it actually works, that's the definition of it.

@plinth 2009-09-11 18:20:35 Declaration: "Somewhere, there exists a foo." Definition: "...and here it is!"

@Gab是好人 2016-02-11 14:41:46.

@princio 2017-10-03 15:30:37 To understand the difference between declaration and definition we need to see the assembly code: and this is only the definition: As you can see, nothing changes. A declaration is different from a definition because it gives information used only by the compiler. For example, uint8_t tells the compiler to use the asm instruction movb. See that: a declaration has no equivalent instruction, because it is not something to be executed. Furthermore, a declaration tells the compiler the scope of the variable. We can say that a declaration is information used by the compiler to establish the correct use of the variable, and to know for how long some memory belongs to a certain variable.

@hdante 2017-05-10 04:54:23 A declaration presents a symbol name to the compiler. A definition is a declaration that allocates space for the symbol.
@Sridharan 2017-01-04 12:13:02 Declaration: a declaration associates the variable with a type. The following are some examples of declaration. Now a function declaration: note the semicolon at the end of the function, which says it is only a declaration. The compiler knows that somewhere in the program that function will be defined with that prototype. Now, if the compiler gets a function call like this, it will throw an error saying that there is no such function, because it doesn't have any prototype for that function. Note the difference between the two programs.

Program 1: In this, the print function is declared and defined as well, since the function call comes after the definition.

Now see the next program.

Program 2: Here the declaration is essential, because the function call precedes the definition, so the compiler must know whether there is any such function. So we declare the function, which informs the compiler.

Definition: this part of defining a function is called the definition. It says what to do inside the function. Now with the variables: sometimes declaration and definition are grouped into a single statement like this.
Think about it; when you are instantiating something, you also need to tell the compiler that that thing exists and what its traits are right? @Jason K. 2016-10-09 23:15:02 My favorite example is "int Num = 5" here your variable is 1. defined as int 2. declared as Num and 3. instantiated with a value of five. We A class or struct allows you to change how objects will be defined when it is later used. For example When we learn programming these two terms are often confused because we often do both at the same time. @Jason K. 2016-10-09 23:25:33 I do not understand why so many people upvoted sbi's answer. I did upvote the answer by bjhend, which was quite good, concise, accurate and much more timely than mine. I was sad to see that I was the first person to do so in 4 years. @achoora 2014-11-13 11:44:27 The concept of Declaration and Definition will form a pitfall when you are using the extern storage class because your definition will be in some other location and you are declaring the variable in your local code file (page). One difference between C and C++ is that in C you the declarations are done normally at the beginning of a function or code page. In C++ it's not like that. You can declare at a place of your choice. @sbi 2015-07-31 22:27:27 This confuses declaration with definition and is plain wrong. @sbi 2009-09-11 12:43:03 A declaration introduces an identifier and describes its type, be it a type, object, or function. A declaration is what the compiler needs to accept references to that identifier. These are declarations: A definition actually instantiates/implements this identifier. It's what the linker needs in order to link references to those entities. These are definitions corresponding to the above declarations: A definition can be used in the place of a declaration. An identifier can be declared as often as you want. Thus, the following is legal in C and C++: However, it must be defined exactly once. 
If you forget to define something that's been declared and referenced somewhere, then the linker doesn't know what to link references to and complains about a missing symbols. If you define something more than once, then the linker doesn't know which of the definitions to link references to and complains about duplicated symbols. Since the debate what is a class declaration vs. a class definition in C++ keeps coming up (in answers and comments to other questions) , I'll paste a quote from the C++ standard here. At 3.1/2, C++03 says: 3.1/3 then gives a few examples. Amongst them: To sum it up: The C++ standard considers struct x;to be a declaration and struct x {};a definition. (In other words, "forward declaration" a misnomer, since there are no other forms of class declarations in C++.) Thanks to litb (Johannes Schaub) who dug out the actual chapter and verse in one of his answers. @San Jacinto 2009-09-11 12:56:33 Is that multiple-declaration legal according to the STANDARD, or your compiler, or what? I cannot do that multiple times in the same scope. @Steve Jessop 2009-09-11 13:09:38 @unknown: either your compiler is broken of you have mis-copied sbi's code. For example, 6.7.2(2) in N1124: "All declarations that refer to the same object or function shall have compatible type; otherwise, the behavior is undefined." @Steve Jessop 2009-09-11 13:10:51 @unknown: or possibly it's issuing a warning, which you're promoting to an error. @San Jacinto 2009-09-11 13:13:18 yes, it was a bad compiler. i tried it again on GCC and it workd. hint: don't use imagecraft's c compiler. @Steve Jessop 2009-09-11 13:13:35 "If you define something more than once, then the linker doesn't know which of the definitions to link references to". 
Although you can get away with that if the definitions are in different translation units, are equivalent, and and have appropriate modifiers telling the linker it's OK to fold them (principally "inline") @sbi 2009-09-11 13:39:19 @onebyone: Yes, there are exceptions, notably inlined functions that the compiler won't inline for whatever reason. However, I didn't want to add exceptions to the answer if the questioner doesn't know the difference between declaration and definition. I was quite surprised about all the misleading answers (that have since disappeared). I hadn't thought such misconceptions are so wide-spread. @Brian Postow 2009-09-11 13:59:55 I would say that "int i;" is also a declaration, and you never actually DEFINE an int variable... but other than that, +1 @David Thornley 2009-09-11 14:05:06 @Brian: "extern int i;" says that i is an int somewhere, don't worry about it. "int i;" means that i is an int, and its address and scope is determined here. @sbi 2009-09-11 14:09:23 @Brian: You're wrong. extern int iis a declaration, since it just introduces/specifies i. You can have as many extern int iin each compilation unit as you want. int i, however, is a definition. It denotes the space for the integer to be in this translation unit and advices the linker to link all references to iagainst this entity. If you have more or less than exactly one of these definitions, the linker will complain. @Steve Jessop 2009-09-11 14:14:47 @Brian int i;in file/global scope or function scope is a definition both in C and C++. In C because it allocates storage, and in C++ because it does not have the extern specifier or a linkage-specification. These amount to the same thing, which is what sbi says: in both cases this declaration specifies the object to which all references to "i" in that scope must be linked. 
@Johannes Schaub - litb 2009-09-11 16:54:04 @unknown, beware you cannot redeclare members in class scope: struct A { double f(int, double); double f(int, double); }; is invalid, of course. It's allowed elsewhere though. There are some places where you can declare things, but not define, too: void f() { void g(); } is valid, but not the following: void f() { void g() { } };. What is a definition and what is a declaration follows subtle rules when it comes to templates - beware! +1 for a good answer though.

@Marc van Leeuwen 2014-05-26 22:44:04 My main issue with this answer is that the initial explanation that a definition is what the linker needs is unhelpful in the case of a class definition. The linker never needs to link to a class itself (which, like a typedef, is really of declarative nature only); it may however link to a (static) class instance, to class methods, or to the class vtable, for none of which the class definition is used. But the compiler does need to see more than a class declaration for many things. So calling it a class definition is really just a matter of convention, not justified by the given explanation.

@Thomson 2014-08-01 06:12:11 "A definition can be used in the place of a declaration." This may be incorrect. It is only legal if no duplicated definition will be introduced.

@sbi 2014-08-02 07:21:53 @Thomson: "An identifier can be declared as often as you want. [...] However, it must be defined exactly once."

@Thomson 2014-08-02 14:10:54 @sbi Thanks for the clarification. I know the statement I quoted is correct with some condition. Just the condition is a little far from the quoted sentence.

@Koray Tugay 2015-05-21 15:53:40 Quoting from this page: pubs.opengroup.org/onlinepubs/7908799/xsh/unistd.h.html "The following are declared as functions and may also be defined as macros. Function prototypes must be provided for use with an ISO C compiler." What is meant by 'Function prototypes must be provided for use with an ISO C compiler.'?
@Koray Tugay 2015-05-21 15:55:49 Quoting from your answer: "then the linker doesn't know what to link references to". Do you mean "then the linker doesn't know what the link references to"?

@sbi 2015-05-23 07:00:48 @Koray: I didn't even mention C's macros, because they are such strange beasts. Basically, you cannot declare a macro, you can only define it. But the preprocessor is not a real compiler, but a simple text processor anyway... In K&R C, functions didn't declare their parameters, so all you needed as a declaration was their name. In ISO C, you need proper declarations. I suppose this is what the comment you quoted refers to. And, no, I meant that sentence to be the way I wrote it.

@Destructor 2015-08-25 08:34:37 @sbi: why does ideone.com/wIBBTi fail in compilation? There can be as many declarations of an identifier as I want, but there must be exactly one definition, as I understand it. I haven't defined or called any of the functions. Why an error in the program? What is the reason?

@Destructor 2015-08-25 08:35:48 @JohannesSchaub-litb: you say that double f(int, double); double f(int, double); is allowed elsewhere but not in class scope. Then why isn't it allowed at global scope? Why does ideone.com/wIBBTi fail in compilation?

@sbi 2015-08-25 13:39:08 @Pravasi: You declare two functions which have the same name, but differ in their return types. In C++, this is not allowed; you can only overload functions when their parameters differ.

@Zebrafish 2016-12-24 01:04:02 @sbi I wanted to edit your answer, but I'm not confident enough to. I've tried putting extern before a struct or class and it seems to be in the category with the functions, i.e., adding extern doesn't seem to make a difference. I'm talking about the bit: // no extern allowed for type declarations - at the top of your answer.

@YuZ 2017-02-28 09:10:26 Does int x; implicitly initialize x to zero whereas extern int x does not?

@sbi 2017-02-28 17:55:37 @user3921720: Nope.
@YuZ 2017-03-02 16:15:54 Then why is int x; a definition and extern int x a declaration?

@sbi 2017-03-27 14:28:06 Because one defines a variable x of type int, while the other declares x to be a variable of type int which is to be defined elsewhere? shrug I really don't know what to say here. (Have you tried reading my answer? It explains this.)

@陳 力 2017-12-05 10:42:08 May mislead newbies: int x also includes a declaration, because definition is a subset of declaration.

@sbi 2017-12-09 09:43:15 @czxyl: "A definition can be used in the place of a declaration."

@rehctawrats 2018-02-08 08:46:54 Would be great if someone could extend this answer to include the meaning of initialization, such as stackoverflow.com/q/23345554/6060872

@It'sPete 2013-07-02 22:46:52 This is going to sound really cheesy, but it's the best way I've been able to keep the terms straight in my head:

Declaration: Picture Thomas Jefferson giving a speech... "I HEREBY DECLARE THAT THIS FOO EXISTS IN THIS SOURCE CODE!!!"

Definition: Picture a dictionary; you are looking up Foo and what it actually means.

@legends2k 2013-06-26 19:43:06 C++11 Update

Since I don't see an answer pertinent to C++11, here's one. A declaration is a definition unless it declares a/n:

    opaque enum - enum X : int;
    template - template<typename T> class MyArray;
    function without a body - int add(int x, int y);
    alias declaration - using IntVector = std::vector<int>;
    static assert declaration - static_assert(sizeof(int) == 4, "Yikes!");

Additional clauses inherited from C++03 by the above list:

    function declaration without a body - int add(int x, int y);
    declaration containing an extern specifier or a linkage-specification - extern int a; or extern "C" { ... };
    static data member declaration in a class - class C { static int x; };
    class name declaration - struct Point;
    typedef declaration - typedef int Int;
    using-declaration - using std::cout;
    using-directive - using namespace NS;

A template-declaration is a declaration. A template-declaration is also a definition if its declaration defines a function, a class, or a static data member.
Examples from the standard which differentiate between declaration and definition that I found helpful in understanding the nuances between them:

@Marcin Gil 2009-09-11 12:30:09 From wiki.answers.com:

The term declaration means (in C) that you are telling the compiler about the type and size (and, in the case of a function declaration, the types and sizes of its parameters) of any variable, user-defined type, or function in your program. No space is reserved in memory for any variable in the case of a declaration. However, the compiler knows how much space to reserve in case a variable of this type is created. For example, the following are all declarations:

Definition, on the other hand, means that in addition to all the things that a declaration does, space is also reserved in memory. You can say "DEFINITION = DECLARATION + SPACE RESERVATION". The following are examples of definitions:

see Answers.

@sbi 2009-09-11 12:37:35 This, too, is wrong (although much closer than the others): struct foo {}; is a definition, not a declaration. A declaration of foo would be struct foo;. From that, the compiler doesn't know how much space to reserve for foo objects.

@San Jacinto 2009-09-11 12:43:32 sbi, my answer reflects what you are saying. In your example, you define a foo as an empty structure. I fail to understand how our examples are technically different.

@Steve Jessop 2009-09-11 13:01:44 @Marcin: sbi is saying that "compiler knows how much space to reserve in case a variable of this type is created" is not always true. struct foo; is a declaration, but it does not tell the compiler the size of foo. I'd add that struct _tagExample { int a; int b; }; is a definition. So in this context it is misleading to call it a declaration. Of course it is one, since all definitions are declarations, but you seem to be suggesting that it is not a definition. It is a definition, of _tagExample.

@David Thornley 2009-09-11 14:07:56 @Marcin Gil: Which means that the "Answers" wiki is not always accurate.
I have to downvote for misinformation here.

@sbi 2009-09-11 14:14:43 So we have an answer copied straight from MSDN (adatapost's) and one from answers.com, and both are misleading or even plain wrong. What do we learn from this?

@Steve Jessop 2009-09-11 14:18:27 We learn that what adatapost quoted is true but does not (IMO) really answer the question. What Marcin quoted is false. Quoting the standards is true and answers the question, but is very difficult to make head or tail of.

@sbi 2009-09-11 14:23:20 @onebyone: A very nice summary indeed! (I, however, had reinforced what I learned as a student and later tried to hammer into my students: Copying without thinking might lead to a disaster. :^>)

@Marcin Gil 2009-09-11 18:15:31 @David Thornley - not a problem :) This is what this site is about. We select and verify info.

@Johannes Schaub - litb 2009-09-11 18:25:45 It should be noted that struct foo { int a; }; in C is not a definition. C doesn't know struct definitions. Conversely, a typedef is a definition in C.

@bjhend 2012-04-17 18:15:50 Rule of thumb: A declaration tells the compiler how to interpret the variable's data in memory. This is needed for every access. A definition reserves the memory to make the variable exist. This has to happen exactly once before first access.

@Lightness Races in Orbit 2013-04-14 17:38:53 This only holds for objects. What about types and functions?

@user565367 2011-01-07 04:42:05 A definition means the actual function is written out; a declaration means simply declaring the function, e.g.

and this is the definition of the function myfunction

@sbi 2013-04-24 12:57:08 And what about types and objects?

@Johannes Schaub - litb 2009-09-11 18:15:37 There are interesting edge cases in C++ (some of them in C too). Consider

That can be a definition or a declaration, depending on what type T is:

In C++, when using templates, there is another edge case. The last declaration was not a definition.
It's the declaration of an explicit specialization of the static member of X<bool>. It tells the compiler: "If it comes to instantiating X<bool>::member, then don't instantiate the definition of the member from the primary template, but use the definition found elsewhere". To make it a definition, you have to supply an initializer.

@user154171 2009-09-11 14:46:45 Couldn't you state, in the most general terms possible, that a declaration is an identifier for which no storage is allocated, and a definition actually allocates storage for a declared identifier? One interesting thought - a template cannot allocate storage until the class or function is linked with the type information. So is the template identifier a declaration or definition? It should be a declaration, since no storage is allocated, and you are simply 'prototyping' the template class or function.

@sbi 2009-09-11 15:09:58 Your definition isn't per se wrong, but "storage definition" always seems awkward when it comes to function definitions. Regarding templates: This template<class T> struct foo; is a template declaration, and so is this template<class T> void f();. Template definitions mirror class/function definitions in the same way. (Note that a template name is not a type or function name. One place where you can see this is that you cannot pass a template as another template's type parameter. If you want to pass templates instead of types, you need template template parameters.)

@user154171 2009-09-11 15:59:27 Agreed that 'storage definition' is awkward, especially regarding function definitions. The declaration is int foo() and the definition is int foo() {//some code here..}. I usually need to wrap my small brain around concepts I am familiar with - 'storage' is one such way to keep it straight, to me at least... :)

@Steve Jessop 2009-09-11 14:03:38 From the C99 standard, 6.7(5):

A declaration specifies the interpretation and attributes of a set of identifiers.
A definition of an identifier is a declaration for that identifier that:

From the C++ standard, 3.1(2):

A declaration is a definition unless it declares a function without specifying the function's body, it contains the extern specifier or a linkage-specification and neither an initializer nor a function-body, it declares a static data member in a class declaration, it is a class name declaration, or it is a typedef declaration, a using-declaration, or a using-directive.

Then there are some examples. So interestingly (or not, but I'm slightly surprised by it), typedef int myint; is a definition in C99, but only a declaration in C++.

@sbi 2009-09-11 14:20:11 @onebyone: Regarding the typedef, wouldn't that mean that it could be repeated in C++, but not in C99?

@Steve Jessop 2009-09-11 14:35:26 That's what surprised me, and as far as a single translation unit is concerned, yes, there is that difference. But clearly a typedef can be repeated in C99 in different translation units. C doesn't have an explicit "one definition rule" like C++, so the rules it does have just allow it. C++ chose to change it to a declaration, but also the one definition rule lists what kinds of things it applies to, and typedefs aren't one of them. So repeats would be allowed in C++ under the ODR as it's worded, even if a typedef were a definition. Seems unnecessarily picky.

@Steve Jessop 2009-09-11 14:35:57 ... but I'd guess that the list in the ODR actually lists all the things it's possible to have definitions of. If so, then the list is actually redundant, and is just there to be helpful.

@sbi 2009-09-11 15:03:36 What does the std's ODR definition say about class definitions? They have to be repeated.

@Johannes Schaub - litb 2009-09-11 20:52:59 I suspect the rationale for it is that a typedef declares just a name, without producing anything beyond it (like a type, an object, or something). So it's just a declaration, much like a using-declaration.
@Johannes Schaub - litb 2009-09-11 20:55:08 However, note that you can have multiple definitions of the same namespace, although it sounds odd: namespace A { } namespace A { }

@Steve Jessop 2009-09-12 12:21:47 @sbi: The ODR says "(1) No translation unit shall contain more than one definition of any ... class type" and "(5) There can be more than one definition of a class type ... in a program provided that each definition appears in a different translation unit", and then some extra requirements which amount to "the definitions are the same".

@Steve Jessop 2009-09-12 12:24:00 @litb: yep, namespaces aren't mentioned in the first clause of the ODR. Indeed I often use that in header files, where I have groups of functions; I close and re-open the namespaces between them, so that each "section" of the header file stands alone.

@sbi 2009-09-12 13:35:18 @onebyone: I always thought the ODR deals with definitions across translation units. I'm surprised it's limited to TUs.

@Steve Jessop 2009-09-12 13:52:45 @sbi: it deals with both. Clause 1 is about what can't be duplicated in a single translation unit. Clause 5 lists some things which can be duplicated in the program provided they're in different units. There are other wondrous clauses too large to fit in this margin^Hcomment.

@sbi 2009-09-12 21:58:38 @onebyone: I tried to dig through the Holy Paper several times, but my understanding of English seems to lack too much for me to understand standardese. <sigh> But then again, maybe I'm just not the kind of person who reads legalese, no matter what language it comes in...

@Steve Jessop 2009-09-13 10:32:49 @sbi: I've certainly never read the standard cover-to-cover, but the index is pretty good :-)

@Destructor 2016-02-03 16:40:08 @SteveJessop: update your answer according to the C11 standard because, as you know, C11 also allows repeated typedefs.
@adatapost 2009-09-11 12:35:30 Declaration

Definition

@sbi 2009-09-11 13:03:17 Um, isn't it that you can even define classes and enums in each compilation unit? At least I put class definitions into my headers and include them all over. Er, class foo {}; is a class definition, isn't it?

@David Thornley 2009-09-11 14:01:49 Yes. However, "class foo;" is a declaration. It tells the compiler that foo is a class. "class foo {};" is a definition. It tells the compiler exactly what sort of class foo is.

@sbi 2009-09-11 14:17:17 @David: Right. And since we all put class and enum definitions into our header files, it's class and enum definitions, not declarations, that can be repeated for each compilation unit. That makes adatapost's answer, um, misleading.

@Johannes Schaub - litb 2009-09-11 16:56:42 The exception is class member names, which may be used before they're declared.

@sbi 2009-09-11 17:41:35 @litb: I don't think that's true: class blah { foo bar(); typedef int foo; }; gives a compile-time error. What you mean is that member function definitions are, even when they are defined within the class' definition, parsed as if they were defined right behind the class' definition.

@Johannes Schaub - litb 2009-09-11 18:00:16 Yeah, that's what I meant. So you can do the following: struct foo { void b() { f(); } void f(); } - f is visible even though not declared yet. The following works too: struct foo { void b(int = bar()); typedef int bar; };. It's visible before its declaration in "all function bodies, default arguments, constructor ctor-initializers". Not in the return type :(

@sbi 2009-09-11 19:27:36 @litb: It isn't visible before its declaration; it's only that the use of the identifier is moved behind the declaration. Yeah, I know, the effect is the same for many cases. But not for all cases, which is why I think we should use the precise explanation. -- Oops, wait. It is visible in default arguments? Well, that surely wreaks havoc with my understanding. Dammit!
<pouts>
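To pull the thread's terminology together, here is a small self-contained sketch (the names Foo and twice are illustrative, not from the original question):

```cpp
struct Foo;             // declaration: Foo names a class type; size unknown
struct Foo { int a; };  // definition: the compiler now knows Foo's layout

int twice(int x);                   // function declaration: no body
int twice(int x) { return 2 * x; }  // function definition: body supplied

typedef int Int;  // a declaration in C++ (C99 counts it as a definition);
typedef int Int;  // repeating it in the same scope is legal in C++
```

The class and typedef lines illustrate why the ODR discussion above matters: class definitions may (and must) be repeated across translation units, while variable and function definitions may appear exactly once in the program.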
Utility function to debug with gdb and MPI. More...

#include <vtkBreakPoint.h>

Utility function to debug with gdb and MPI. Wherever you need to set a break point inside a piece of code run by MPI:

Step 1: call vtkBreakPoint::Break() in the code.
Step 2: start MPI; each process will display its PID and sleep.
Step 3: start gdb with the PID: gdb --pid=<PID>
Step 4: set a breakpoint at the line of interest: (gdb) b <option>
Step 5: go out of the sleep: (gdb) set var i=1

Original instructions at the OpenMPI FAQ: 6.1. Attach to individual MPI processes after they are running.

Definition at line 41 of file vtkBreakPoint.h.

The process falls asleep until the local variable `i' is set to a value different from 0 inside a debugger.
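The mechanism behind vtkBreakPoint::Break() can be sketched without VTK or MPI. This is an illustrative reimplementation, not the VTK source, and the max_wait_seconds parameter is an addition for demonstration only (the real Break() waits indefinitely):

```cpp
#include <cstdio>
#include <unistd.h>  // getpid(), sleep()

// Print the PID, then spin until a debugger attaches and runs:
//   (gdb) set var i=1
// Pass a negative max_wait_seconds to wait forever (the vtkBreakPoint
// behavior); returns the final value of i (0 if the timeout expired).
int break_point(int max_wait_seconds) {
    volatile int i = 0;  // volatile: force a fresh read on every iteration
    std::printf("PID %d waiting for debugger...\n",
                static_cast<int>(getpid()));
    while (i == 0 && max_wait_seconds != 0) {
        sleep(1);
        if (max_wait_seconds > 0)
            --max_wait_seconds;
    }
    return i;
}
```

Each MPI rank would call break_point(-1), after which you attach with gdb --pid=<PID> and set i to a nonzero value to let that rank continue.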
On Sat, Jan 26, 2013 at 2:31 AM, Jason Smith <jhs@iriscouch.com> wrote: > > Paul, we are in broad agreement, so therefore you are correct by > definition. :) > +1 > JavaScript has seen a huge explosion of tools to parse, process, rewrite, > etc. JavaScript source code. Esprima is the state of the art (at least in > my peer group) and I have had success with Uglify. > > The builtin view server might become much more sophisticated about how it > handles users' code. (This is actually independent of the Node.js and V8 > discussion.) > > So it is possible the "minor modifications" developers must make could be > automated and done transparently on the CouchDB side, perhaps during a > transitional phase. > > But long-term, we all agree to move away from "naked functions" since they > are not valid JavaScript. > Whole hearted +1 to moving away. I just want to try and avoid the "rewrite all your code" step if possible. While I'm not super enthused about the source code analysis path, I could see using it during a transitionary stage with the intent of just helping users manage the upgrade. > > > On Fri, Jan 25, 2013 at 9:36 PM, Paul Davis <paul.joseph.davis@gmail.com>wrote: > > > On Fri, Jan 25, 2013 at 8:51 AM, Jason Smith <jhs@iriscouch.com> wrote: > > > > > I agree with Benoit, *however* those are out-of-scope for my branch. The > > > experiment is to explore *build* dependencies, nothing else. The Node.js > > > component is simply 100% compatible with couchjs, providing the same API > > > for all the code in share/server/*.js. > > > > > > I am intrigued about hybrid Node/Couchapps, or new ways to do views and > > > build apps, but that's not related to this branch. This branch is to > > > improve building, not running. > > > > > > > > Definitely agree here. We should focus on the view engine and leave the > > other bits until a later time. 
Definitely agree that we should look into > > feasibility before we rush from one tar pit to another (if that's how it > > turns out). > > > > > > > Paul, > > > > > > Even today people have more success installing Node.js version 0.8 than > > > libmozjs version anything. Node.js ships binaries for every platform. The > > > situation will only improve. Node.js is a rocket ship. > > > > > > > > This is definitely possible but I'm not sure I'm going to take it as a > > matter of fact. We never hear from the people that don't have issues > > installing things so there could be a bit of a confirmation bias on which > > is more easily installed. That said, even though spidermonkey does have > > packages I agree that its definitely not turn key. And if Node provides > > binaries for a huge swath of platforms that may be more than enough for us > > to stop caring on the ease of installation issue. > > > > > > > My suspicion is Node.js simply moves so fast as a general-purpose tool > > that > > > it can or will provide everything needed for a replacement view server. > > > Should we even replace the view server? Change the protocol? Embed JS as > > a > > > NIF? All good questions. This branch will (slightly) inform those larger > > > discussions. > > > > > > Will you permit a prediction with no prior research: Node.js has a very > > > good sandboxing story. Some package out there has already exposed the V8 > > > API in a convenient way. Perhaps more people need Node.js sandboxing than > > > the size of the entire CouchDB community. The npm registry serves 100 > > > million requests per month and that number doubles every five months. > > > > > > I would like to see a clear definition of "sandboxing." I proposed a > > > definition in JIRA. Without clearly defining what it means to be "secure" > > > or "sandboxed" it is hard to give a good answer. 
> > > > > > > > Easy: code stored in a design doc is executed in an environment that has > > access limited to a specific set of white listed APIs in its execution > > context. > > > > My personal feeling? CouchDB has had AFAIK zero security issues with the > > > JavaScript VM. Over how many years? That is huge. Nobody comes home from > > > work and tells their spouse, "CouchDB protected my data today. Again." No > > > office has a chalkboard saying "952 days without a CouchDB security > > > breach." But they should. If Node.js cannot provide the same guarantees, > > > that is a deal breaker. > > > > > > > > +1 > > > > > > > Finally, IMO, the anonymous function issue is not CouchDB's problem. That > > > is the view server's problem. In the 1.x API, a view server must support > > > "function() {}" full-stop. Personally, I think JavaScript is a better > > place > > > to provide compatibility. Couch sends you a string. Parse the string, > > wrap > > > it in parens, wave a chicken, whatever. My implementation (optimized for > > > shipping date), creates a Function() with a body of "return (" + > > the_code + > > > ")". So it is a metafunction or something. I run that function and now I > > > have the map, reduce, filter, etc. function the user expects to run. > > > Instead, one might use Esprima to parse the source code and from there > > > anything can happen. My point (and opinion) is, the code that handles the > > > anonymous function problem should be JavaScript, not C or Autotools. > > > > > > > > > > > +1 in general. I agree with your sentiment but not on some of the > > specifics. While I think you're quite right that the place to deal with > > this issue is inside the view server itself and its not "CouchDB the > > Erlang/C/Autotools project" that should be managing this. That said it very > > much is "CouchDB the project"'s problem in that we need to provide clear > > coherent directions on exactly what works where and how to fix things. 
> > > > I could see this direction being a very clear "In 1.x/2.x you have to make > > minor modifications too all of your JS code" on one end of the spectrum vs > > a "We should be able to transparently upgrade your code" on the other end > > with the most likely "your code will just work if you don't use external > > helper functions". Just so long as we have a concise statement on the > > expected behavior for all of our users. > > > > > > > > > > On Fri, Jan 25, 2013 at 6:40 PM, Paul Davis <paul.joseph.davis@gmail.com > > > >wrote: > > > > > > > I preface this with the fact that I'm fairly interested in exploring > > > this. > > > > I think Jason's sentiments are right on in that SpiderMonkey has proven > > > to > > > > be a bit of a burden for us to depend on especially from the point of > > > view > > > > of instructing new users on how to satisfy that dependency. > > > > > > > > That said, the biggest hurdles that come to mind: > > > > > > > > How much better is the node.js package story? A random node I just > > logged > > > > into doesn't have it immediately available. Are there > > debs/rpms/whatevers > > > > available from the node project? > > > > > > > > I've only semi sorta investigated sandboxing. This I think is going to > > be > > > > the biggest issue. With spidermonkey we were able to be very specific > > on > > > > what enters the JS VM's world. With node we're going to be in the > > "remove > > > > all the things" territory which is always more worrisome. I know > > there's > > > > some code/module/thing in node for this but I'm under the impression > > that > > > > its not fully fleshed out. And the example project I saw that used it > > > > spawned a subprocess for every call which is untenable (though that > > could > > > > be something we change obviously). > > > > > > > > I'm not at all concerned about dropping E4X support but are there any > > > other > > > > issues we'll have to deal with? 
How about the anonymous function issues > > > we > > > > have with spidermonkey trunk? Not that !node will save us from dealing > > > with > > > > that particular nugget of awesomeness. > > > > > > > > Node has a history of API breakage though it appears to have been > > > > stabilizing in that respect. How much do we have to worry about that > > > these > > > > days? > > > > > > > > When do I get a unicorn? > > > > > > > > Most of those are serious questions. > > > > > > > > > > > > On Fri, Jan 25, 2013 at 5:18 AM, Jason Smith <jhs@iriscouch.com> > > wrote: > > > > > > > > > This > > > > > > > > > > > > > > > > > > > > > -- > > > Iris Couch > > > > > > > > > -- > Iris Couch
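The "metafunction" trick Jason describes in this thread (wrap the stored source in return (...) inside a new Function body) can be sketched as follows; compileViewFunction is an illustrative name, not actual CouchDB view-server code:

```javascript
// A design doc stores a map function as a bare anonymous-function string,
// e.g. "function (doc) { ... }". That is not a valid statement on its own,
// but it *is* a valid expression, so wrapping it in "return ( ... )" and
// invoking the resulting Function hands back the callable.
function compileViewFunction(source) {
  var metafunction = new Function('return (' + source + ');');
  return metafunction();
}

var map = compileViewFunction('function (doc) { return doc.value * 2; }');
console.log(map({ value: 21 })); // prints 42
```

An alternative, as suggested above, would be to parse the string with a tool like Esprima and rewrite it, which gives finer control at the cost of a full parse.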
I have a large input file I need to read from, so I don't want to use enumerate, fo.readlines(), or a plain for line in fo: loop.

input_file.txt:

    3 # No of tests that will follow
    3 # No of points in current test
    1 # 1st x-coordinate
    2 # 2nd x-coordinate
    3 # 3rd x-coordinate
    2 # 1st y-coordinate
    4 # 2nd y-coordinate
    6 # 3rd y-coordinate
    ...

    x, y = [], []
    with open(input_file) as f:
        T = int(next(f))
        for _ in range(T):
            N = int(next(f))
            for i in range(N):
                x.append(int(next(f)))
            for i in range(N):
                y.append(int(next(f)))

This might not be the shortest possible solution but I believe it is "pretty optimal".

    def parse_number(stream):
        return int(next(stream).partition('#')[0].strip())

    def parse_coords(stream, count):
        return [parse_number(stream) for i in range(count)]

    def parse_test(stream):
        count = parse_number(stream)
        return list(zip(parse_coords(stream, count), parse_coords(stream, count)))

    def parse_file(stream):
        for i in range(parse_number(stream)):
            yield parse_test(stream)

It will eagerly parse all coordinates of a single test, but each test will only be parsed lazily as you ask for it. You can use it like this to iterate over the tests:

    if __name__ == '__main__':
        with open('input.txt') as istr:
            for test in parse_file(istr):
                print(test)

Better function names might be desired to better distinguish eager from lazy functions. I'm experiencing a lack of naming creativity right now.
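To see the parser's output shape without a file on disk, the answer's functions can be exercised against an in-memory stream (io.StringIO stands in for the open file; the sample data is made up):

```python
import io

def parse_number(stream):
    # Next line, minus any trailing "# comment", converted to an int.
    return int(next(stream).partition('#')[0].strip())

def parse_coords(stream, count):
    return [parse_number(stream) for _ in range(count)]

def parse_test(stream):
    count = parse_number(stream)
    return list(zip(parse_coords(stream, count), parse_coords(stream, count)))

def parse_file(stream):
    # Lazily yield one test at a time; each test pairs the x and y columns.
    for _ in range(parse_number(stream)):
        yield parse_test(stream)

sample = io.StringIO(
    "1 # No of tests that will follow\n"
    "2 # No of points in current test\n"
    "1 # 1st x-coordinate\n"
    "3 # 2nd x-coordinate\n"
    "2 # 1st y-coordinate\n"
    "4 # 2nd y-coordinate\n"
)
print(list(parse_file(sample)))  # prints [[(1, 2), (3, 4)]]
```

Because parse_file is a generator, wrapping it in list() here forces all tests; in the large-file case you would instead consume one test per loop iteration, which is the point of the lazy design.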
A flaw in runc (CVE-2019-5736), announced last week, allows container processes to "escape" their containment and execute programs on the host operating system. The good news is that well-configured SELinux can stop it.

Do you enable SELinux (setenforce 1)?

A hypothetical attack using this security hole would start with an attacker packaging a container image with exploit code. If an admin makes the mistake of deploying this container image using tools like podman run or docker run, or through an orchestrator like Kubernetes, then the executable could overwrite the runc command on the host, giving the attacker full access to the host. Similar attacks are available when users exec into a running exploited container image. This exploit affects any container engine (CRI-O, Podman, Docker, Containerd, Buildah) that uses the runc container runtime.

Potentially, if an application running in an individual container gets hacked over the network, the attacker could then download the exploit code and break out of the container. Either way, it's bad news.

So how can SELinux prevent this? The vulnerability allows processes within the container to overwrite the /proc/self/exe file, which is meant to point to the currently running executable. For example, the next command shows you /proc/self/exe pointing to "ls", since you used ls to call it:

    $ ls -l /proc/self/exe
    lrwxrwxrwx. 1 dwalsh dwalsh 0 Feb 10 05:45 /proc/self/exe -> /usr/bin/ls

In this exploit, security researchers discovered that it is possible to write to /proc/self/exe when it is pointing to runc, in order to modify that executable. Updated versions of runc now fix this issue.

First, this exploit should not be possible if you run your containers as non-root, which is the Red Hat OpenShift default. This is why almost all containers should be run as non-root.
But for the cases where your containers have to run as root, you are vulnerable if your system is running setenforce 0 (with SELinux disabled).

SELinux to the rescue

SELinux can block this exploit, as long as you are running in enforcing mode.

Note: The post to the openwall list is incorrect when it states that containers run as container_runtime_t by default. The tester ran tests on moby-engine, which was not running with the --selinux-enabled flag. By enabling this flag, moby-engine would have been protected as well. The container engine packages Podman, Docker, Buildah, and CRI-O enable SELinux by default, and were protected. A bug has been opened with the moby-engine package to change its default to --selinux-enabled. Docker-ce was also vulnerable, since it does not run with --selinux-enabled by default.

On an SELinux system, container processes are launched as the SELinux type container_t. SELinux policy defines that the only ordinary files container_t types can write to are labeled container_file_t. The default file label of runc is container_runtime_exec_t, so a container_t process is denied when it attempts to write to it.

When an attacker tricks a user into running the exploit, SELinux will block the exploit and generate an AVC that looks like:

    feb 10 16:08:43 localhost.localdomain audit[1727]: AVC avc: denied { write } for pid=1727 comm="copy" name="runc" dev="sda3" ino=135490239 scontext=system_u:system_r:container_t:s0:c43,c93 tcontext=system_u:object_r:container_runtime_exec_t:s0 tclass=file permissive=0

In the example you'll see SELinux denying the container, running as container_t, attempting to write over runc, which is labeled container_runtime_exec_t. SELinux blocks the attempted write.

Several of the known container exploits found so far are file system exploits (exploits that happen when the process inside the container gets access to the file system on the host system).
SELinux can be an effective tool to block these types of exploits. Watch this video on SELinux and Containers from Devconf.CZ 2019.

User namespaces would also protect against this exploit

User namespaces could also protect against this exploit, since a process running as root inside of the container is actually running as non-root outside of the container. Since runc is owned by root, the process would not be allowed to overwrite it. Sadly, almost no one runs with user namespaces enabled, because of limitations in Linux file systems' support for them. User namespace features have been added to Podman, and the CRI-O team plans to add features to allow easier running of containers with user namespace separation.

Note: Podman has the capability to run containers in rootless mode, meaning that root inside of the container is actually running as the UID of the user running Podman. In this case the exploit would not work, since runc would be owned by real root on the system.

Conclusion

Please patch all versions of runc or docker-runc in your environment.

- Don't run random container images from the internet. Only run trusted content.
- Always run containers as non-root if at all possible (the OpenShift default).
- Run your container engines in user namespaces if at all possible.
- Always run with SELinux in enforcing mode, and ensure the container engine is running with SELinux separation enabled in production.

Luckily, Red Hat Enterprise Linux, Fedora, and CentOS systems run with SELinux enabled by default. As I often recommend, do not run containers on systems with SELinux disabled or in permissive mode.

About the author

Daniel Walsh has worked in the computer security field for over 30 years. Dan is a Senior Distinguished Engineer at Red Hat. He joined Red Hat in August 2001. Dan has led the Red Hat Container Engineering team since August 2013, but has been working on container technology for several years.
https://www.redhat.com/es/blog/latest-container-exploit-runc-can-be-blocked-selinux
Help with a Java Program!

I am a beginner at Java programming. I came across this problem and am unable to solve it. Please help! This is the problem:

For a scholarship organization: a scholarship can be allotted to only one student per family. There is a collection of 20 Student objects (class given below). You are supposed to write a method that returns a List with distinct last names. E.g., if the students are John Doe, John Lincoln, Amy Doe, Foo Bar, then the output should return the Student objects for John Doe, John Lincoln, Foo Bar. You are allowed to change the Student class.

The method signature should be: List<Student> findStudentsWithDistinctLastNames(List<Student> allStudents)

Also write another method that finds out how many students applied from each family (i.e. same last name). The output of this method can be a Map.

public class Student {; } }
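One possible answer, sketched below. The Student class in the question was garbled, so this uses a minimal stand-in with first- and last-name fields; apart from the requested method signature, the class and method names are my own. Keeping the first student seen per last name in a LinkedHashMap preserves input order, matching the John Doe / John Lincoln / Foo Bar example:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DistinctFamilies {

    // Minimal stand-in for the garbled Student class from the question.
    public static class Student {
        private final String firstName;
        private final String lastName;

        public Student(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        public String getLastName() { return lastName; }

        @Override
        public String toString() { return firstName + " " + lastName; }
    }

    // Keep only the first student seen for each last name.
    public static List<Student> findStudentsWithDistinctLastNames(List<Student> allStudents) {
        Map<String, Student> firstPerFamily = new LinkedHashMap<>();
        for (Student s : allStudents) {
            firstPerFamily.putIfAbsent(s.getLastName(), s);
        }
        return new ArrayList<>(firstPerFamily.values());
    }

    // Second part: how many students applied from each family (same last name).
    public static Map<String, Integer> countByFamily(List<Student> allStudents) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (Student s : allStudents) {
            counts.merge(s.getLastName(), 1, Integer::sum);
        }
        return counts;
    }
}
```

For the example input, findStudentsWithDistinctLastNames returns the students John Doe, John Lincoln, and Foo Bar, and countByFamily maps Doe to 2.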
https://thenewboston.com/page.php?pid=1947&action=follow&b3cfe21c9c4a406c3e9f8623eb12c2f2=1
From: Yuriy Koblents-Mishke (yurakm_at_[hidden])
Date: 2007-03-03 18:29:03

1. Never thought about enforcing derivation before, but yes, it looks as if it may be useful. Especially if the utility class does not increase the size of the final objects.

2. The technique of using a protected constructor and virtual inheritance to stop derivation does not look compiler-specific, limited to gcc 4.1.1. Just in case, I tested your code under MinGW gcc 3.4.2 and under MSVC++ 8. Yes, the utility stops derivation. The following line is OK:

Foo foo;

but the line

Goo goo;

does not compile.

3. If the final class is itself derived, using the utility involves multiple inheritance. Fortunately, it looks as if in this context it is fine. The utility class does not have data members, and, in any case, "diamond" inheritance is impossible.

4. I still do not see why you need a protected destructor: is it not enough to have a protected constructor? Why is it bad to destroy an object of a final class by pointer?

5. It probably would be better to put the utility in a nested namespace under boost::. See, for example, noncopyable.hpp.

Sincerely, Yura.

On 3/3/07, Manuel Fiorelli <manuel.fiorelli_at_[hidden]> wrote:
> Is there any interest in a small utility which lets developers define
> pseudo-final classes, meaning
> that it is not possible to instantiate objects belonging to derived classes.
>
> Unfortunately, I could test my tool only on a Linux platform with the
> compiler gcc 4.1.1 and I am not
> sure the technique I used is portable to other compilers.
>
> Usage --------------------------------------------------------------------//
> Suppose you want to define a final class Foo, then you
> have to write
>
> class Foo : BOOST_NON_DERIVABLE
> {
> // ....
> };
>
> Now, you try to subclass it
>
> class Goo : public Foo
> {
> // ...
> };
>
> When you use inheritance it is quite obvious you want to instantiate
> objects of the classes involved in the hierarchy; thus,
> somewhere you will probably write
>
> Goo goo; // a variant or a member of type Goo
>
> But the compiler will produce an error like this:
>
> utility/nonderivable.hpp: In constructor 'Goo::Goo()':
> utility/nonderivable.hpp:23: error:
> 'boost::nonderivable_helper::nonderivable_helper()' is protected
> main.cpp:14: error: within this context
> utility/nonderivable.hpp:30: error:
> 'boost::nonderivable_helper::~nonderivable_helper()' is protected
> main.cpp:14: error: within this context
> main.cpp:14: error: no matching function for call to 'Foo::Foo()'
> main.cpp:9: note: candidates are: Foo::Foo(int)
> main.cpp:8: note: Foo::Foo(const Foo&)
> main.cpp: In function 'int main()':
> main.cpp:26: note: synthesized method 'Goo::Goo()' first required here
> utility/nonderivable.hpp: In destructor 'Goo::~Goo()':
> utility/nonderivable.hpp:30: error:
> 'boost::nonderivable_helper::~nonderivable_helper()' is protected
> main.cpp:14: error: within this context
> main.cpp: In function 'int main()':
> main.cpp:26: note: synthesized method 'Goo::~Goo()' first required here
>
> Best regards,
> Manuel Fiorelli
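For reference, a portable variant of the technique under discussion can be written with a CRTP-style friend declaration, so that only the intended final class can construct the virtual base. This is a sketch, not the posted utility: the names are mine, and it requires C++11 for the `friend T;` syntax, whereas the original relied on gcc 4.1.1's stricter access checking.

```cpp
#include <cassert>

namespace demo {

// Only T itself may construct or destroy this virtual base.
template <typename T>
class make_final {
private:
    make_final() {}
    ~make_final() {}
    friend T;  // C++11 extended friend declaration
};

}  // namespace demo

class Foo : public virtual demo::make_final<Foo> {
public:
    Foo() {}
};

// class Goo : public Foo { };
// Goo goo;  // would not compile: as the most-derived class, Goo must
//           // construct the virtual base make_final<Foo> directly, but
//           // its constructor is private and only Foo is a friend.

bool construct_foo() {
    Foo foo;  // fine: Foo is the most-derived class and the friend
    (void)foo;
    return true;
}
```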
https://lists.boost.org/Archives/boost/2007/03/117362.php
Re: A buffer mapped to a file?

- From: "Skywing [MVP]" <skywing_NO_SPAM_@xxxxxxxxxxxxxxxxxxx>
- Date: Wed, 28 Mar 2007 16:39:22 -0400

You could try a named pipe, if the program really just calls CreateFile and reads or writes to the resulting handle, but as soon as a program tries to do something like query the file size, set the file pointer, or so forth, that will break. So whether it will work really tends to be program-specific. The most compatible solution is to create a temp file on disk and delete it when done, I would imagine.

-- Ken Johnson (Skywing)
Windows SDK MVP

"rep_movsd" <rep.movsd@xxxxxxxxx> wrote in message news:1175105745.453461.126290@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Hi

It's quite simple to map a file to memory using CreateFileMapping() and MapViewOfFile(). Now I need the converse.... Basically I have a buffer of data and need to feed it into an API which expects a filename. I can think of 2 ways off the top of my head, but I'm not sure about the details.

1) Launch a process with redirected stdin / stdout, make the API read from "CON:" or "\.\\Device\Console" or whatever, and the parent process will feed the data into the <stdin> of the launched process.

2) Make some type of FIFO or pipe..., but do such objects have names that are within the filesystem namespace, and will the ReadFile() API cope with them?

As far as I can recall, in Linux it was quite simple to make a fifo right on the filesystem and feed arbitrary data via it into any program that needed it as a file. Any way to do this under Win32?

Regards
Vivek
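Ken's "most compatible solution" (write the buffer to a temp file, then delete it when done) can be sketched with portable stdio as below. On Win32 you would typically use GetTempFileName and CreateFile instead, optionally with FILE_FLAG_DELETE_ON_CLOSE so the file vanishes when the last handle is closed. The helper names here are hypothetical:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Write a buffer to a temporary file, hand the file's name to an API
 * that insists on a filename, then delete the file afterward. */
static int feed_buffer_via_temp_file(const char *buf, size_t len,
                                     int (*api_taking_filename)(const char *))
{
    char path[L_tmpnam];
    FILE *f;
    int rc;

    if (!tmpnam(path))
        return -1;
    f = fopen(path, "wb");
    if (!f)
        return -1;
    if (fwrite(buf, 1, len, f) != len) {
        fclose(f);
        remove(path);
        return -1;
    }
    fclose(f);

    rc = api_taking_filename(path);
    remove(path);  /* delete the temp file when done */
    return rc;
}

/* Stand-in for "an API which expects a filename": reads the file back. */
static int read_back(const char *path)
{
    char b[16] = {0};
    size_t n;
    FILE *f = fopen(path, "rb");

    if (!f)
        return -1;
    n = fread(b, 1, sizeof b - 1, f);
    fclose(f);
    return (n == 5 && strcmp(b, "hello") == 0) ? 0 : -1;
}
```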
http://www.tech-archive.net/Archive/Development/microsoft.public.win32.programmer.kernel/2007-03/msg00313.html
This wasn't the first time this has happened to me, so with this laptop I decided to go all out and get something quality, even if it cost more up front; the time (and possibly data) loss resulting from faulty hardware can be just as significant. My brand-new Macbook Pro (with the 17" matte display!) arrived last month and I immediately installed Vista via Bootcamp and set up my Windows development tools.

One of our goals is to make Natural Selection 2 and the toolset work natively across Windows, OS X and Linux. With this in mind, whenever I write code I try to keep portability in mind. This factored into our decision to use wxWidgets to build all of our tools and to keep any platform-specific code (Direct3D for example) abstracted so that it can easily be replaced. This has all existed in the theoretical realm though, because I've never actually ported anything to OS X or Linux. Since I have a shiny new computer that runs OS X, I figured it was time to put that to the test.

While a complete port is out of the question at the moment, I wanted to get my foot in the door so I could begin to understand the issues, like how to organize the files that contain platform-specific code, any gotchas associated with wxWidgets, etc. For my weekend porting project I decided to tackle the Builder application. This is our tool which is responsible for automatically converting all of our art assets into game-ready formats. Of all of our projects, this particular one depends on the fewest libraries (just wxWidgets) and only has a small bit of Windows-specific code which is isolated in a single class. I also spent a lot of effort when writing the Builder to make it platform independent, so I was looking forward to seeing the fruits of that labor. Ultimately, getting the Builder running on OS X was a pretty easy process, although it took me both Saturday and Sunday to get it worked out.
The first step was compiling wxWidgets for the Mac, which turned out to be pretty simple with the supplied config script and makefile. Then came the task of creating my own makefile for compiling and linking our project. I haven't written a makefile since college, and a makefile for a real project is quite a bit different than the ones I had done in the past or the simple samples you can find on the Internet. Presumably this is why there are so many alternatives out there to writing the makefile by hand (icompile, bakefiles, automake or using XCode). I looked into these a bit, but ultimately decided I'd have an easier time just writing the makefile myself.

Once I had the makefile sorted out, I quickly found a number of places where my C++ code wasn't portable to the g++ compiler. I also discovered a few errors in the code which hadn't been caught by the Microsoft compiler. These were all really easy to fix, and this was exactly the kind of knowledge I was hoping to discover with this process.

The final step was linking and creating a usable OS X executable file. The linking process gave me the most trouble, because I made the rookie debugging mistake of getting hung up on one possible explanation of what I was seeing. At the end of my linking process, I was left with the single error message "Undefined symbol _main". Since the main function is supplied by wxWidgets, I figured this was a problem with the way I compiled the library or how I was linking to it. A few hours of head banging later, I took a step back from the problem and realized the "main" function was there, but the way my code was written put it inside a namespace. Moving it outside the namespace fixed the linker problem and I had an executable file at last.

Unfortunately this executable file didn't behave like a normal OS X program. When it started, I would see the window for the Builder, but there was no menu at the top of the screen and the window didn't respond to any mouse clicks.
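(A quick aside on that linker error before continuing.) The entry point the linker searches for is ::main at global scope; a main defined inside a namespace is just an ordinary function, which is exactly why the build failed with "Undefined symbol _main" until it was moved out. A tiny illustration, with names of my own choosing:

```cpp
#include <cassert>
#include <iostream>

namespace builder {

// Had this function been named "main", it would only be builder::main,
// an ordinary function rather than the program entry point, and the
// link would fail with "Undefined symbol _main".
int run() {
    std::cout << "builder running" << std::endl;
    return 0;
}

}  // namespace builder

// The fix from the post, in miniature: the real entry point must sit at
// global scope, and can simply forward into the namespace:
//
//     int main() { return builder::run(); }
```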
I was lucky with this problem, and after a little bit of research I stumbled across the solution; OS X applications are actually directories, and inside that directory is where the actual executable image goes. With a few quick changes to my makefile, everything was packaged properly and I could run the Builder properly! There are a few bits of the Builder that don't work properly on the Mac yet, but I'll be working on those soon. I also haven't rewritten the platform specific piece (which detects when files on disk have been modified), but I do have a good solution now for how to organize these files in our code base. People often think that Direct3D is the biggest barrier to porting a game engine, but we actually have very little Direct3D code. Direct3D represents only about 2,500 lines of code out of almost 100,000 in the engine. The bigger task is all of the small things, and this process has helped me flush out a bunch of them that should make things smoother going forward. We're not sure if we'll have enough time to release an OS X or Linux version simultaneously with the Windows version, but it's something we're very interested in doing. But I'm feeling good about the portability of the engine, and since our entire game is written in Lua it doesn't need to be ported at all!

Cool, good luck with the game

Less chatting, moore gameplay videos :D

i HATE linking. it doesn't tell you whats wrong so you spend hours pulling your hair out at 2am trying to figure it out. linking errors = programmer's worst nightmare

why would you waste time making this for a mac, macs are the world's crappiest computers that can't play games; oh my bad, there is possibly minesweeper...

Blog.wolfire.com ^^^ you should read this along with everyone else who thinks this. i'm not a mac fan at all, but indie devs supporting mac/linux is a great thing (for devs and consumers alike).

Will there be linux support?
The chances are if it supports OS X it will support Linux with relatively little work. If and when this is ported to a mac, I will order it right away. Period.
https://www.moddb.com/news/one-small-step-for-a-man-one-giant-leap-for-mac-kind
Learn about the new and enhanced features in Visual Studio 2013 that make testing Windows Store applications quicker and easier. Video available in: Chinese (Simplified), Chinese (Traditional), Czech, French, German, Italian, Japanese, Korean, Polish, Portuguese (Brazil), Russian, Spanish, and Turkish.

The RangeTestMethod shown in this video isn't in the Microsoft.VisualStudio.TestPlatform.UnitTestFramework namespace. Please advise. Thanks, Michael

Thanks @mscherotter, I found an article regarding the absence of DataTestMethodAttribute... It is only available for the WinRT / WinPhone8-specific unit tests. I assume the same applies for RangeTestFramework. Unfortunately, I could not include the URL reference because Channel 9 kept rejecting my post, marking it as spam, but you can google DataTestMethodAttribute, and eventually you'll find a Stack Overflow article on it. Meanwhile, it looks like NUnit supports range tests and data tests, so I guess I'll switch over to NuGet instead.

I don't have this section "Windows Store" (I have VS 2013). Why?
https://channel9.msdn.com/Series/Visual-Studio-2012-Premium-and-Ultimate-Overview/Testing-Windows-Store-Applications
I'm happy to see the huge growth of community-contributed code - things like RSS.NET, sharpziplib, ftp classes to tide us over 'til .NET 2.0, etc. One thing that bothers me about it is the namespaces. The .NET System namespaces are beautifully organized, but community / open source code namespaces are an anarchistic babel. Those that originate from a big company usually start with the company name; those that come from a larger project usually take the project's name. One-off code snips / hobbyist / micro-projects usually contain a random concatenation of some or all of the following words: monkey, alien, squishy, bug, fuzzy, code, util, works, MyNamespace, namespace, ware, example, contrib, and lib: monkeyCode, fuzzyAlienWare, utilLib, bugware, etc. This is the case I'm talking about.

I can sort of see the point of company-based or project-based namespaces. I don't see any value in random namespace names, though. They don't tell anyone anything, and they look suspect to managers and system admins - "What's this bugworksLib.dll?" I understand there may be concern about twenty different Community.Network.Ftp namespaces floating around, but .NET is smart enough to figure it out, or at least warn you at compile time. I'd like to see a community standard best practice for community code namespaces - anything remotely professional would do: Community, Shared, Code, etc. Thoughts?

good luck with that...

I'm not expecting everyone to follow it, but I'd bet if there was a general best practices recommendation out there then a lot of people would use it. Whenever I post code, I waste time trying to decide what the namespace should be. I'd prefer if someone else wasted that time once and for all.

+1 for Community.[ProjectName]

I agree. I like the idea of [Organisation].[ProjectName]... or [Organisation].[ProductName]... If there was a sensible recommendation I'd probably follow it and encourage others to do so.
I have a modest proposal for you: how about NO F'ING NAMESPACE! They're little more than vanity plates for your code. For example, I built this class MhtBuilder. It lets you duplicate the "Save as single file" functionality in IE using 100% managed code, and I posted it on CodeProject. It's not going to cure cancer or anything, but it's useful, not that common, and worth sharing. Do I really NEED to call this class..

JeffAtwoodIsSoCool.MhtBuilder
AwesomeAtwoodWare.MhtBuilder
AtwoodHeavyIndustries.MhtBuilder

C'mon. Let's stop kidding ourselves here. How many namespace collisions are we going to have, where you just happen to have a class called MhtBuilder in your project? If you're distributing *signed binaries with no source*, then you can arguably make a case that you need a namespace. Otherwise, stop with the veiled ego tripping, call your class what it does, and have the cojones to leave it at that.

That's an interesting spin Jeff. For me I'm not so worried about collisions. More about using a standard way to keep my stuff organised. And a standard would mean I don't need to think about it when creating a namespace. Often I have classes called the same (and it makes sense in these situations), so I definitely need to use namespaces.

At work I use CompanyAbbreviation.Something where Something is specific to the code. If it's library code, I'll try to use something similar to the System namespace. For example, my socket server class is: Company.Net.Sockets.SocketServerBase

Haacked

Phil - I think that's evolving into a standard for corporate code. What about for general community code, though? I think Megacorp.Net.SocketServerBase is becoming general practice for corporate code, and Socketworks.Net.SocketServerBase is expected if the code is part of a Socketworks project / product, but if I post code for a Socket Server Base, what would you expect the namespace to be?
So if I'm writing a freeware WinForms application, do you think I should code its classes as being in a "Community.<ApplicationName>" namespace (and subnamespaces therein)? Or is this something you'd really only worry about with components and/or class libraries?

I'm mostly just concerned with components, class libraries, and classes which are shared as code. I can see going with the Community.* namespaces if people will use the code later.

Jon Galloway recently pointed out something that's been bothering me for a while: [...]

do you think it's that bad:

NAnt: NAnt.Core.dll (NAnt.Core namespace), NAnt.DotNetTasks (NAnt.DotNetTasks namespace)
NDoc: NDoc.Core.dll (NDoc.Core namespace)

and others like NUnit seem similarly well organised. Naming the assembly after the namespace is a pretty good practice generally - and is what's (mostly) used by the framework libraries.
http://weblogs.asp.net/jgalloway/archive/2005/01/06/347876.aspx
As has seemingly become traditional, your editor moderated a panel of kernel developers (James Bottomley, Christoph Hellwig, Greg Kroah-Hartman, and Andrew Morton, this year). We discussed a wide range of topics, but the subject that caught the attention of the wider world was the "graybeards" question. As a result, we've been treated to a number of lengthy discussions on just why the kernel is no longer attracting energetic young developers. The only problem is: the kernel doesn't really have that problem.

Every three-month development cycle involves well over 1000 developers, a substantial portion of whom are first-time contributors. Nearly 20% of these contributors are working on their own time, because they want to. There does not appear to be a problem attracting developers to the kernel project; indeed, if anything, the problem could be the reverse: the kernel is such an attractive project that it gets an order of magnitude more developers than just about anything else. Your editor has heard it said in the past that Linux as a whole might be better off if some of those developers were to devote a bit more time to user-space projects, most of which would be delighted to have them.

The question the panel discussed, instead, had to do with the graying of the ranks of the kernel's primary subsystem maintainers. Many of those developers wandered into kernel development in the 1990's, when there were opportunities everywhere. A decade or more later, those developers are still there; James Bottomley suggested they may stay in place until they start dying off. Entrenched graybeards at high levels could, someday, start to act as a brake on kernel development as a whole. That does not appear to be a problem now; it's just something to watch out for in the future.

Andrew Morton raised an interesting related issue: as the core developers gain experience and confidence, they are happy to put code of increasing complexity into the kernel.
That could indeed make it harder for new developers to come up to speed; it might also not bode well for the long-term maintainability of the kernel in general. Dan Frye's keynote reflecting on IBM's 10+ years of experience with Linux was easily one of the best of the day. IBM's experience has certainly not been 100% smooth sailing; there were a lot of mistakes made along the way. As Dan put it, it is relatively easy for a company to form a community around itself, but it's much harder - and more valuable - to join an established community under somebody else's control. A number of lessons learned were offered, starting with an encouragement to get projects out into the community early and to avoid closed-door communications. IBM discovered the hard way that dumping large blocks of completed code into the kernel community was not going to be successful. The community must be involved earlier than that. To help in that direction, IBM prohibited the use of internal communications for many projects, forcing developers to have their discussions in public forums. Beyond that, companies need to demonstrate an ongoing commitment to the community - no drive-by submissions. Developers need to remain immersed in the community in order to build the expertise and respect needed to get things done. Companies, Dan says, should manage their developers, but they should not attempt to manage the maintainers those developers are working with. In general, influence in the project should be earned through expertise, skills, and a history of engagement - and not through attempts to control things directly. One interesting final point is that what matters is results, not whose code gets merged. To that end, IBM has reworked things internally to reward developers who have managed to push things forward, even if somebody else's code gets merged in the end. 
This is an important attitude which should become more widely adopted; it certainly leads to a more team-oriented and productive kernel community in the long term. Chris DiBona's talk on Google and the community was always going to be a bit of a hard sell; Greg Kroah-Hartman's discussion of Android and the community had happened just two days before. Additionally, Chris was scheduled last in the day, immediately after Josh Berkus gave a high-energy version of his how to destroy your community talk. So perhaps Chris cannot be blamed for his decision to dump his own slides and, instead, give a talk to Josh's slides - running backward. Chris did eventually move on to his real talk. There was discussion of Google's contributions to the community, including 915 projects which have been released so far and something like 200 people creating patches at any given time. There are over 300,000 projects on code.google.com now. Google has also supported the community by providing infrastructure to sites like kernel.org and, of course, the Summer of Code program. In short: Chris doesn't think that Google has much to apologize for; indeed, the contrary is true. This extends to the Android situation, which, Chris says, he's not really unhappy with. The targeted users for Android are different, and, in any case, it always takes a long time to get a real community going. That said, more of the Android code should indeed get into the mainline, and Google should be doing a better job. Part of the problem, it seems, is finding "masochists" who are willing to do the work of trying to upstream the code. The truth of the matter is that Chris's talk failed to satisfy a lot of people; much of it came off as "Google is doing good stuff, we know best, we're successful, and we're going to continue doing things the same way." Starting with his playing with Josh's slides, Chris gave the impression that he wasn't taking the community's concerns entirely seriously. 
On the other hand, the announcement at the end of his talk that Google was giving a Nexus One phone to every attendee almost certainly served to mute the criticism somewhat. Outside of the sessions, your editor had the opportunity to talk with some of Google's Android kernel developers; the folks actually doing the work have a bit of a different take on things. They are working flat-out to create some very nice Linux-based products, and they are being successful at it, but they are in the bind that is familiar to so many embedded systems developers: the product cycles are so short and the deadlines are so tight that there just isn't time to spend on getting code upstream. That said, they are trying and intend to try harder. We are starting to see some results; for example, the NVIDIA Tegra architecture code has just been posted for review and merging. The Android developers seem to feel that they have been singled out for a level of criticism which few other embedded vendors - including those demonstrating much worse community behavior - have to deal with. It can be a dismaying and demotivating thing. Your editor would like to suggest that, at this point, the Android developers have heard and understood the message that the community has tried to communicate to them. There is a good chance that things will start to get better in the near future. Perhaps it's time to back off a little bit and focus on helping them to get their code merged when they get a chance to submit it. As promised, this article has barely scratched the surface of what happened at the 2010 Collaboration Summit. In particular, the large presence of MeeGo (which had a separate, day-long session dedicated to it) has been passed over, though some of that will be made up for in a separate article. In general, it was a good collection of people who do not always get a chance to mingle in the same hallway, all backed by the Linux Foundation's seamless "it all just works" organization. 
Improved collaboration within our community should indeed be the result of this event.

The Linux Foundation's Collaboration Summit (LFCS) is focused on, well, collaboration, so it is no surprise that a recent high-profile collaborative effort in the Linux world, MeeGo, had a strong presence at the conference. Both sides of the merger of Moblin and Maemo, Intel and Nokia, had representatives giving keynote speeches about MeeGo and how it intends to interact with the community. Since the project is hosted by the foundation, it makes sense that it would devote a good portion of LFCS—a day-long MeeGo track in addition to the keynotes—to the mobile distribution. The focus of both speakers was on developing MeeGo in the open and working closely with upstream projects, rather than targeting MeeGo itself.

Nokia has been working with Linux and free software for a number of years, since 2002 or 2003, and it has been a "learning exercise". Its early realization that it needed to be part of the community, and not "just use the code", was important. But the integration process, where the various components that made up Maemo were built and collected into a release, was not open to community involvement. That is something that will change for MeeGo: "We are going to build the MeeGo platform in the open". It is a "huge change" that is going on "right now, as I speak". The idea of doing that "may seem trivial" to LFCS attendees, he said, "but it is a big deal with us".

Sousou, who is the director of Intel's Open Source Technology Center, echoed that idea. Working in the open will make collaboration easier, but "you will see the messes, and we are OK with that". One of the keys to making that work will be to focus on the upstream projects, he said. It took Intel "some time to figure it out", but downstream projects must "contribute and use the open source model".
There are "hundreds" of Intel engineers working on MeeGo, Sousou said, but most of the work is not actually in MeeGo itself. "It's happening upstream", at kernel.org, freedesktop.org, and others. He doesn't want to see kernel patches, for example, submitted to MeeGo, "submit it upstream". It's all about "working with upstream and contributing upstream — there is nothing more". Both speakers talked about governing MeeGo in the open, with steering committee meetings on IRC. Jaaksi notes that there is still some adjustment that Nokia needs to make. He gets email from other employees about seeing MeeGo roadmap plans on the Internet; they are worried about competitors getting that kind of formerly secret information. He tells them: "Yes, that's how we do things". Jaaksi notes that Palm had gotten products out earlier than Nokia, "with our code", and that was "not their fault, [but] our fault" by being too slow to market. Google has also used Maemo code, and "we hope to use theirs". A concern is that MeeGo will give competitors an advantage, but he believes that it is the companies which participate in the project that will see the most benefit. That concern may not be relevant for most of the people in attendance, he said, but within Nokia, there is a question on how to differentiate itself. Sousou listed oFono and ConnMan as two projects where the two companies had already worked together. For MeeGo, they complement each other well, Jaaksi said. Nokia brings experience working with mobile handsets, ARM, and the phone companies, while Intel has "so much knowledge about the Linux kernel". Both have good teams that "know how to work in open source and combine open source with business". Choosing the "best of breed" components from Moblin and Maemo—or elsewhere—for MeeGo is something that both speakers stressed—there is a general sense that the project is trying to avoid "turf wars". 
But it's not just Intel and Nokia, as the MeeGo project is looking for more contributors, which is another thing that both speakers emphasized. Because of their close working relationship, it was relatively straightforward to merge Maemo and Moblin into one project. They didn't bring in other companies at the start, because, in Jaaksi's opinion, "it would have taken too much time" to get agreement with more companies. He said that MeeGo wants to "demonstrate that it is an open source project" that others can participate in. He listed multiple companies that have become involved since the MeeGo announcement, including hardware vendors, Linux distributors, embedded Linux vendors, and so on. The two main participants have decided on a blueprint of the architecture, which includes Qt as a platform for application developers, but the design of the system is an "ongoing process that we invite people to participate in", Jaaksi said. "Now is the time to join MeeGo", he said, and there is much to be done, but there is little risk for others because they have made a commitment to do things in the open. Both stressed that there is a simple, open governance model. But, as an audience member pointed out, there is a veto power that Intel and Nokia have over the project. The audience member wondered if a community can still be built around that veto. Jaaksi responded that things "will be fixed if we need to fix them" and that changes will be made for anything that becomes an obstacle. Currently, MeeGo is focused on getting more people involved and having "simple governance". Earlier in his talk, he said that the governance needed to stay out of the way to maintain the speed of development. There are 200-300 participants in the IRC meetings, Sousou said, and anyone can get involved. "If you contribute, you can help make decisions", Jaaksi said, but MeeGo will "make some mistakes going forward". 
The veto power for Nokia and Intel was one of the things that LFCS participants grumbled about in the "hallway track". There is concern that community input will be ignored. One area that is particularly sensitive is the choice of RPM as the mandatory package format for MeeGo-branded devices. Debian/Ubuntu-oriented developers felt slighted by that requirement, and there seems to be no room to change that decision, which gave rise to concerns about governance. Assuming it wants only one package format, there is no "good" choice for the project, as either plausible choice would irritate some. Beyond that, attendees seemed interested, some even excited, by the prospects of MeeGo. Some consolidation in the mobile Linux arena is to be welcomed. In the end, though, it will be the MeeGo devices that will largely decide its fate. While Moblin and Maemo are available, neither has gained the widespread device availability that Android is starting to enjoy. Most participants seemed to be taking a "wait and see" approach, with the sense that many will be watching developments fairly closely.

April 21, 2010

"[...] we have made it hard for people to take notice of what we are doing with the Linux Desktop since none of the brands are identifiable as "belonging to the same thing". Instead we end up with microbrands that nearly no one outside of the server room or the hardcore F/OSS community recognizes."

The LWN site code does not allow for comments on surveys directly, and we noticed that some of you had thoughts about it. Thanks for the email, but you can now also comment below. We are aware that some of the survey questions can have multiple interpretations or don't cover every situation, and for that we apologize.

Page editor: Jonathan Corbet

Security

New vulnerabilities

A race condition was found in the way mod_auth_shadow used an external helper binary to validate user credentials (username / password pairs).
A remote attacker could use this flaw to bypass intended access restrictions, resulting in the ability to view and potentially alter resources which should otherwise be protected by authentication.

A Debian bug report notes that Gource creates its log file with a predictable name (/tmp/gource-$(UID).tmp), which a malicious user could use to overwrite arbitrary files via a symlink attack, with the privileges of the user running Gource.

The ip_evictor function in ip_fragment.c in libnids, as used in dsniff and possibly other products, allows remote attackers to cause a denial of service (NULL pointer dereference and crash) via crafted fragmented packets.

memcached.c in memcached allows remote attackers to cause a denial of service (daemon hang or crash) via a long line that triggers excessive memory allocation.

A format string flaw was found in scsi-target-utils' tgtd daemon. A remote attacker could trigger this flaw by sending a carefully-crafted Internet Storage Name Service (iSNS) request, causing the tgtd daemon to crash.

Page editor: Jake Edge

Kernel development

Brief items

There have been no stable updates since April 1.

In a simplified form, ondemand works like this: every so often the governor wakes up and looks at how busy the CPU is. If the idle time falls below a threshold, the CPU frequency will be bumped up; if, instead, there is too much idle time, the frequency will be reduced. By default, on a system with high-resolution timers, the minimum idle percentage is 5%; CPU frequency will be reduced if idle time goes above 15%. The minimum percentage can be adjusted in sysfs (under /sys/devices/system/cpu/cpuN/cpufreq/); the maximum is wired at 10% above the minimum. This governor has been in use for some time, but, as it turns out, it can create performance difficulties in certain situations. Whenever the system workload alternates quickly between CPU-intensive and I/O-intensive phases, things slow down.
That's because the governor, on seeing the system go idle, drops the frequency down to the minimum. After the CPU gets busy again, it runs for a while at low speed until the governor figures out that the situation has changed. Then things go idle and the cycle starts over. As it happens, this kind of workload is fairly common; "git grep" and the startup of a large program are a couple of examples. Arjan van de Ven has come up with a fix for this governor which is quite simple in concept. The accounting of "idle time" is changed so that time spent waiting for disk I/O no longer counts. If a processor is dominated by a program alternating between processing and waiting for disk operations, that processor will appear to be busy all the time. So it will remain at a higher frequency and perform better. That makes the immediate problem go away without, says Arjan, significantly increasing power consumption. But, Arjan says, "there are many things wrong with ondemand, and I'm writing a new governor to fix the more fundamental issues with it." That code has not yet been posted, so it's not clear what sort of heuristics it will contain. Stay tuned; the demand for ondemand may soon be reduced significantly.

MD maintainer Neil Brown has now taken a step in this direction with the posting of his dm-raid456 module, a RAID implementation for the device mapper which is built on the MD code. This patch set has the potential to eliminate a bunch of duplicated code, which can only be a good thing. It also brings some nice features, including RAID6 support, multiple-target support, and more to the device mapper layer. This is early work which, probably, is not destined for the next merge window. The response from the device mapper side has been reasonably positive, though. So, with luck, we'll someday have both subsystems using the same RAID code.
Kernel development news

Several tracing presentations were in evidence at this year's Embedded Linux Conference, including Mathieu Desnoyers's "hands-on tutorial" on the Linux Trace Toolkit next generation (LTTng). Desnoyers showed how to use LTTng to solve real-world performance and latency problems, while giving a good overview of the process of Linux kernel problem solving. The target was embedded developers, but the presentation was useful for anyone interested in finding and fixing problems in Linux itself, or in applications running atop it.

Desnoyers has been hacking on the kernel for many years, and was recently awarded his Ph.D. in computer engineering based largely on his work with LTTng. Since then, he has started his own company, EfficiOS, to do consulting work on LTTng as well as to help various customers diagnose their performance problems. In addition to LTTng itself, he also developed the Userspace RCU library that allows user-space applications to use the Read-Copy-Update (RCU) data synchronization technique that is used by the kernel.

LTTng consists of three separate components: patches to the Linux kernel to add tracepoints along with the LTTng infrastructure and ring buffer, the LTT control user-space application, and the LTTV GUI interface for viewing trace output. Each is available on the LTTng web site and there are versions for kernels going back to 2.6.12. There is extensive documentation as well.

The lockless trace clock is "one major piece of LTTng that is architecture dependent". This clock is a high-precision timestamp, derived from the processor's cycle counter, that is placed on each trace event. The timestamp is coordinated between separate processors, allowing system-wide correlation of events. On processors with frequency scaling and non-synchronized cycle counters, like some x86 systems, the trace clock can get confused when the processor operating frequency changes, so that feature needs to be disabled before tracing.
Desnoyers noted that Nokia had funded work for LTTng to properly handle frequency scaling and power management features of the ARM OMAP3 processor, but that it hasn't yet been done for x86.

A portion of the talk was about the tradeoffs of various tracing strategies. Desnoyers described the factors that need to be considered when deciding how to trace a problem, including what kind of bug it is, how reproducible it is, how much tracing overhead the system can tolerate, the availability of a system to reproduce it on, whether it occurs only on production systems, and so on. Each of these things "impact the number of tracing iterations available". It may be that, because the bug occurs infrequently or is on a third-party production system, one can only get a single tracing run, "or you may have the luxury of multiple trace runs".

Based on those factors, there are different kinds of tracing to choose from in LTTng. At the top level, you can use producer-consumer tracing, where all of the enabled events are recorded to a filesystem; "flight recorder" mode, where the events are stored in fixed-size memory buffers and newer events overwrite the old; or both of these modes can be active at once. There are advantages and disadvantages to each, of course, starting with higher overhead for producer-consumer traces. But the amount of data which can be stored for those types of traces is generally much higher than for flight recorder traces.

Because there is generally an enormous amount of data in a trace, Desnoyers described several techniques to help home in on the problem area. In LTTng, events are grouped by subsystem into "channels" and each channel can have a different buffer size for flight recorder mode. That allows more backlog for certain kinds of events, while limiting others. In addition, instrumentation (trace events, kernel markers) can be disabled to reduce the amount of trace data that is generated.
Another technique is to use "anchors", specific events in the trace that serve as the starting point for analysis; an anchor is often used by going backward in the trace from that point. The anchors can either come from instrumentation, like the LTTng user-space tracer or kernel trace events, or they can be generated by the analysis itself. The longest timer interrupt jitter (i.e., how far from the nominal time it actually happened) is an example he gave of one kind of analysis-generated anchor. A related idea is "triggers", which are a kind of instrumentation with a side effect. By using ltt_trace_stop("name"), a trigger can stop the tracing when a particular condition occurs in the kernel. Using lttctl from user space is another way to stop a trace. Triggers are particularly helpful for flight recorder traces, he said.

Desnoyers also did a demonstration of LTTng on two separate kinds of problems, both involving the scheduler. One was to try to identify sources of audio latency by running a program with a periodic timer that expired every 10ms. The code was written to put an anchor into the trace data every time the timer missed by more than 5ms. He ran the program, then moved a window around on the screen, which caused delays to the timer of up to 60ms. Using that data, he brought up the LTTV GUI to look at what was going on. It was a bit hard to follow exactly what he was doing, but eventually he narrowed it down to the scheduling of the X.org server. He then instrumented the kernel scheduler with a kernel marker to get more information on how it was making its decisions. Kernel markers were proposed for upstream inclusion some time back, but were roundly criticized for cluttering up the kernel with things that looked like debug printk() calls. Markers are good for ad hoc tracepoints, but "don't try to push it upstream", he warned. The other demo was to look at a buffer underrun in ALSA's aplay utility. The same scheduler marker was used to investigate that problem as well.
Perhaps the hardest question was saved for the end of the presentation: what is the status of LTTng getting into the mainline? Desnoyers seemed upbeat about the prospects of that happening, partly because there has been so much progress made in the kernel tracing area. Most of the instrumentation side has already gone in, and the tracer itself is ongoing work that he now has time to do. He gave a "status of LTTng" presentation at the Linux Foundation Collaboration Summit (LFCS), which was held just after ELC, and there was agreement among some of the tracing developers to work together on getting more of LTTng into the kernel.

Desnoyers does not see a problem with having multiple tracers in the kernel, so long as they use common infrastructure and target different audiences. Ftrace is "more oriented toward kernel developers", he said, and it has different tracers for specific purposes. LTTng, on the other hand, is geared toward users and user-space programmers who need a look into the kernel to diagnose their problems. With Ftrace developer Steven Rostedt participating in both the ELC and LFCS talks—and agreeing with many of Desnoyers's ideas—the prospects look good for at least parts of LTTng to make their way into the mainline over the next year.

There are two distinct ways in which writeback is done in contemporary kernels. A series of kernel threads handles writeback to specific block devices, attempting to keep each device busy as much of the time as possible. But writeback also happens in the form of "direct reclaim," and that, it seems, is where much of the trouble is. Direct reclaim happens when the core memory allocator is short of memory; rather than cause memory allocations to fail, the memory management subsystem will go casting around for pages to free. Once a sufficient amount of memory is freed, the allocator will look again, hoping that nobody else has swiped the pages it worked so hard to free in the meantime.
Dave Chinner recently encountered a problem involving direct reclaim which manifested itself as a kernel stack overflow. Direct reclaim can happen as a result of almost any memory allocation call, meaning that it can be tacked onto the end of a call chain of nearly arbitrary length. So, by the time that direct reclaim is entered, a large amount of kernel stack space may have already been used. Kernel stacks are small - usually no larger than 8KB and often only 4KB - so there is not a lot of space to spare in the best of conditions. Direct reclaim, being invoked from random places in the kernel, cannot count on finding the best of conditions. The problem is that direct reclaim, itself, can invoke code paths of great complexity. At best, reclaim of dirty pages involves a call into filesystem code, which is complex enough in its own right. But if that filesystem is part of a union mount which sits on top of a RAID device which, in turn, is made up of iSCSI drives distributed over the network, the resulting call chain may be deep indeed. This is not a task that one wants to undertake with stack space already depleted. Dave ran into stack overflows - with an 8K stack - while working with XFS. The XFS filesystem is not known for its minimalist approach to stack use, but that hardly matters; in the case he describes, over 3K of stack space was already used before XFS got a chance to take its share. This is clearly a situation where things can go easily wrong. Dave's answer was a patch which disables the use of writeback in direct reclaim. Instead, the direct reclaim path must content itself with kicking off the flusher threads and grabbing any clean pages which it may find. There is another advantage to avoiding writeback in direct reclaim. The per-device flusher threads can accumulate adjacent disk blocks and attempt to write data in a way which minimizes seeks, thus maximizing I/O throughput. 
Direct reclaim, instead, takes pages from the least-recently-used (LRU) list with an eye toward freeing pages in a specific zone. As a result, pages flushed by direct reclaim tend to be scattered more widely across the storage devices, causing higher seek rates and worse performance. So disabling writeback in direct reclaim looks like a winning strategy. Except, of course, we're talking about virtual memory management code, and nothing is quite that simple. As Mel Gorman pointed out, no longer waiting for writeback in direct reclaim may well increase the frequency with which direct reclaim fails. That, in turn, can throw the system into the out-of-memory state, which is rarely a fun experience for anybody involved. This is not just a theoretical concern; it has been observed at Google and elsewhere. Direct reclaim is also where lumpy reclaim is done. The lumpy reclaim algorithm attempts to free pages in physically-contiguous (in RAM) chunks, minimizing memory fragmentation and increasing the reliability of larger allocations. There is, unfortunately, a tradeoff to be made here: the nature of virtual memory is such that pages which are physically contiguous in RAM are likely to be widely dispersed on the backing storage device. So lumpy reclaim, by its nature, is likely to create seeky I/O patterns, but skipping lumpy reclaim increases the likelihood of higher-order allocation failures. So various other solutions have been contemplated. One of those is simply putting the kernel on a new stack-usage diet in the hope of avoiding stack overflows in the future. Dave's stack trace, for example, shows that the select() system call grabs 1600 bytes of stack before actually doing any work.
Once again, though, there is a tradeoff here: select() behaves that way in order to reduce allocations (and improve performance) for the common case where the number of file descriptors is relatively small. Constraining its stack use would make an often performance-critical system call slower. Beyond that, reducing stack usage - while being a worthy activity in its own right - is seen as a temporary fix at best. Stack fixes can make a specific call chain work, but, as long as arbitrarily-complex writeback paths can be invoked with an arbitrary amount of stack space already used, problems will pop up in other places. So a more definitive kind of fix is required; stack diets may buy time but will not really solve the problem.

One common suggestion is to move direct reclaim into a separate kernel thread. That would put reclaim (and writeback) onto its own stack where there will be no contention with system calls or other kernel code. The memory allocation paths could poke this thread when its services are needed and, if necessary, block until the reclaim thread has made some pages available. Eventually, the lumpy reclaim code could perhaps be made smarter so that it produces less seeky I/O patterns.

Another possibility is simply to increase the size of the kernel stack. But, given that overflows are being seen with 8K stacks, an expansion to 16K would be required. The increase in memory use would not be welcome, and the larger allocations required to provide those stacks would put more pressure on the lumpy reclaim code. Still, such an expansion may well be in the cards at some point.

According to Andrew Morton, though, the real problem is to be found elsewhere. In his view, the problem is not how direct reclaim is behaving; it is, instead, the fact that direct reclaim is happening as often as it is in the first place. If there were less need to invoke direct reclaim, the problems it causes would be less pressing.
So, if Andrew gets his way, the focus of this work will shift to figuring out why the memory management code's behavior changed and fixing it. To that end, Dave has posted a set of tracepoints which should give some visibility into how the writeback code is making its decisions. Those tracepoints have already revealed some bugs, which have been duly fixed. The main issue remains unresolved, though. It has already been named as a discussion topic for the upcoming filesystems, storage, and memory management workshop (happening with LinuxCon in August), but many of the people involved are hoping that this particular issue will be long-solved by then.

April 21, 2010

This article was contributed by Steven Rostedt

Tracepoints within the kernel facilitate the analysis of how the kernel performs. The flow of critical information can be followed and examined in order to debug a latency problem, or to simply figure out better ways to tune the system. The core kernel tracepoints, like the scheduler and interrupt tracepoints, let the user see when and how events take place inside the kernel. Module developers can also take advantage of tracepoints; if their users or customers have problems, the developer can have them enable the tracepoints and analyze the situation. This article will explain how to add tracepoints in modules that are outside of the core kernel code.

In Part 1, the process of creating a tracepoint in the core kernel was explained. Part 2 explained how to consolidate tracepoints with the use of DECLARE_EVENT_CLASS() and DEFINE_EVENT(), and went over the field macros of TP_STRUCT__entry and the function helpers of TP_printk(). This article takes a look at how to add tracepoints outside of the core kernel, which can be used by modules or architecture-specific code, takes a brief look at some of the magic behind the TRACE_EVENT() macro, and gives a few more examples to get your feet wet with using tracepoints.
For tracepoints in modules or in architecture-specific directories, having trace header files in the global include/trace/events directory may clutter it. The result would be to put files like mips_cpu.h or arm_cpu.h, which are not necessary for the core kernel, into that directory. It would end up something like the old include/asm-*/ setup. Also, if tracepoints went into staging drivers, putting staging header files in the core kernel code base would be a bad design. Because trace header files are handled very differently than other header files, the best solution is to have the header files placed at the location where they are used. For example, the XFS tracepoints are located in the XFS subdirectory in fs/xfs/xfs_trace.h.

But, some of the magic of define_trace.h is that it must be able to include the trace file that included it (the reason for TRACE_HEADER_MULTI_READ). As explained in Part 1, the trace header files start with the cpp conditional:

    #if !defined(_TRACE_SCHED_H) || defined(TRACE_HEADER_MULTI_READ)
    #define _TRACE_SCHED_H

Part 1 explained that one and only one of the C files that include a particular trace header will define CREATE_TRACE_POINTS before including the trace header. That activates the define_trace.h that the trace header file includes. The define_trace.h file will include the header again, but will first define TRACE_HEADER_MULTI_READ. As the cpp condition shows, this define will allow the contents of the trace header to be read again.

For define_trace.h to include the trace header file, it must be able to find it. To do this, some changes need to be made to the Makefile where the trace file is included, and that file will need to tell define_trace.h not to look for it in the default location (include/trace/events). To tell define_trace.h where to find the trace header, the Makefile must define the path to the location of the trace file.
One method is to extend CFLAGS to include the path:

    EXTRA_CFLAGS = -I$(src)

But that affects CFLAGS for all files that the Makefile builds. If it is desired to only modify the CFLAGS for the C file that has CREATE_TRACE_POINTS defined, then the method used by the net/mac80211/Makefile can be used:

    CFLAGS_driver-trace.o = -I$(src)

The driver-trace.c file contains the CREATE_TRACE_POINTS define and the include of driver-trace.h that contains the TRACE_EVENT() macros for the mac80211 tracepoints.

To demonstrate how to add tracepoints to a module, I wrote a simple module, called sillymod, which just creates a thread that wakes up every second, performs a printk, and records the number of times that it has done so. I will look at the relevant portions of the files, but the full file contents are also available: module, Makefile, the module with tracepoint, and the trace header file.

The first step is to create the desired tracepoints. The trace header file is created the same way as the core trace headers described in Part 1, with a few more additions. The header must start by defining the system to which all tracepoints within the file will belong:

    #undef TRACE_SYSTEM
    #define TRACE_SYSTEM silly

This module creates a trace system called silly. Then the special cpp condition is included:

    #if !defined(_SILLY_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
    #define _SILLY_TRACE_H

The linux/tracepoint.h file is included, and finally the TRACE_EVENT() macros; one in this example:

    #include <linux/tracepoint.h>

    TRACE_EVENT(me_silly,

        TP_PROTO(unsigned long time, unsigned long count),

        TP_ARGS(time, count),

        TP_STRUCT__entry(
            __field( unsigned long, time )
            __field( unsigned long, count )
        ),

        TP_fast_assign(
            __entry->time = jiffies;
            __entry->count = count;
        ),

        TP_printk("time=%lu count=%lu", __entry->time, __entry->count)
    );

    #endif /* _SILLY_TRACE_H */

The above is the same as what was described in Part 1 for core kernel tracepoints.
After the #endif things become a bit different. Before including the define_trace.h file the following is added:

    /* This part must be outside protection */
    #undef TRACE_INCLUDE_PATH
    #define TRACE_INCLUDE_PATH .
    #define TRACE_INCLUDE_FILE silly-trace
    #include <trace/define_trace.h>

The TRACE_INCLUDE_PATH tells define_trace.h not to look in the default location (include/trace/events) for the trace header, but instead to look in the include search path. By default, define_trace.h will include a file defined by TRACE_SYSTEM. The TRACE_INCLUDE_FILE tells define_trace.h that the trace header is called silly-trace.h (the .h is automatically added to the end of TRACE_INCLUDE_FILE).

To add the tracepoint to the module, the module now includes the trace header. Before including the trace header it must also define CREATE_TRACE_POINTS:

    #define CREATE_TRACE_POINTS
    #include "silly-trace.h"

The tracepoint can now be added to the code.

    set_current_state(TASK_INTERRUPTIBLE);
    schedule_timeout(HZ);
    printk("hello! %lu\n", count);
    trace_me_silly(jiffies, count);
    count++;

Finally the Makefile must set the CFLAGS so that the include search path contains the local directory where the silly-trace.h file resides.

    CFLAGS_sillymod-event.o = -I$(src)

One might believe the following would also work without modifying the Makefile, if the module resided in the kernel tree:

    #define TRACE_INCLUDE_PATH ../../path/to/trace/header

But using a path name in TRACE_INCLUDE_PATH other than '.' runs the risk of the path containing a macro. For example, if XFS defined TRACE_INCLUDE_PATH as ../../fs/xfs/linux-2.6, it would fail. That is because the Linux build #defines the name linux to nothing, which would make the path be ../../fs/xfs/-2.6. Now the trace event is available.
    [mod] # insmod sillymod-event.ko
    [mod] # cd /sys/kernel/debug/tracing
    [tracing] # ls events
    block   ext4    header_event  i915  jbd2  module  sched  skb       timer
    enable  ftrace  header_page   irq   kmem  power   silly  syscalls  workqueue
    [tracing] # ls events/silly
    enable  filter  me_silly
    [tracing] # echo 1 > events/silly/me_silly/enable
    [tracing] # cat trace
    # tracer: nop
    #
    #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
    #              | |       |          |         |
        silly-thread-5377  [000]  1802.845581: me_silly: time=4304842209 count=10
        silly-thread-5377  [000]  1803.845008: me_silly: time=4304843209 count=11
        silly-thread-5377  [000]  1804.844451: me_silly: time=4304844209 count=12
        silly-thread-5377  [000]  1805.843886: me_silly: time=4304845209 count=13

Once the define_trace.h file can safely locate the trace header, the module's tracepoints can be created. To understand why all this manipulation is needed, a look at how define_trace.h is implemented may clarify things a bit. For those that dare to jump into the mystical world of the C preprocessor, take a look into include/trace/ftrace.h. But be warned, what you find there may leave you a bit loony, or at least think that the ones that wrote that code were a bit loony (in which case, you may be right).

The include/trace/define_trace.h file does some basic set up for the TRACE_EVENT() macro, but for a tracer to take advantage of it, the tracer must have a header file included by define_trace.h to do the real work (as both Ftrace and perf do).

While I was working on my Masters, a professor showed me a trick with cpp that lets one map strings to enums using the same data:

    #define DOGS { C(JACK_RUSSELL), C(BULL_TERRIER), C(ITALIAN_GREYHOUND) }

    #undef C
    #define C(a) ENUM_##a
    enum dog_enums DOGS;

    #undef C
    #define C(a) #a
    char *dog_strings[] = DOGS;

    char *dog_to_string(enum dog_enums dog)
    {
            return dog_strings[dog];
    }

The trick is that the macro DOGS contains a sub-macro C() that we can redefine to change the behavior of DOGS. This concept is key to how the TRACE_EVENT() macro works.
All the sub-macros within TRACE_EVENT() can be redefined to cause TRACE_EVENT() to create different code from the same information. Part 1 described the requirements needed to create a tracepoint: one set of data (in the TRACE_EVENT() definition) must be able to do several things. Using this cpp trick, it is able to accomplish just that.

The tracepoint code created by Mathieu Desnoyers required that a DECLARE_TRACE(name, proto, args) be defined in a header file, and that in some C file DEFINE_TRACE(name) be used. TRACE_EVENT() now does both jobs. In include/linux/tracepoint.h:

    #define TRACE_EVENT(name, proto, args, struct, assign, print) \
        DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))

The PARAMS() macro lets proto and args contain commas and not be mistaken as multiple parameters of DECLARE_TRACE(). Since tracepoint.h must be included in all trace headers, this makes the TRACE_EVENT() macro fulfill the first part of the tracepoint creation. When a C file defines CREATE_TRACE_POINTS before including a trace header, the define_trace.h becomes active and performs:

    #undef TRACE_EVENT
    #define TRACE_EVENT(name, proto, args, tstruct, assign, print) \
        DEFINE_TRACE(name)

That is not enough, however, because define_trace.h is included after the TRACE_EVENT() macros are used. For this code to have an impact, the TRACE_EVENT() macros must be read again. The define_trace.h does some nasty C preprocessor obfuscation to be able to include the file that just included it:

    #define TRACE_HEADER_MULTI_READ
    #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)

The defining of TRACE_HEADER_MULTI_READ will let the trace header be read again (and this is why it is needed in the first place). The TRACE_INCLUDE(TRACE_INCLUDE_FILE) is more cpp macro trickery that will include the file that included define_trace.h.
As explained in previous articles, this macro will use either TRACE_SYSTEM.h or, if it is defined, TRACE_INCLUDE_FILE.h, and will include the file from include/trace/events/ if TRACE_INCLUDE_PATH is not defined. I'll spare the reader the ugliness of the macros used to accomplish this. For the more masochistic reader, feel free to look at the include/trace/define_trace.h file directly.

When the file is included again, the TRACE_EVENT() macro will be processed again, but with its new meaning.

The above explains how tracepoints are created, but it only creates the tracepoint itself; it does nothing to add it to a tracer's infrastructure. For Ftrace, this is where the ftrace.h file included by define_trace.h comes into play. (Warning: the ftrace.h file is even more bizarre than define_trace.h.) The macros in ftrace.h create the files and directories found in tracing/events. ftrace.h uses the same tricks explained earlier, redefining the macros within the TRACE_EVENT() macro as well as redefining the TRACE_EVENT() macro itself. How ftrace.h works is beyond the scope of this article, but feel free to read it directly, if you don't have any allergies to backslashes.

If you change directories to the debugfs filesystem mount point (usually /sys/kernel/debug) and take a look inside tracing/events, you will see all of the trace event systems defined in your kernel (i.e. the trace headers that defined TRACE_SYSTEM):

    [tracing] # ls events
    block  enable  ftrace  header_event  header_page  irq  kmem
    module  power  sched  skb  syscalls  timer  workqueue

As mentioned in Part 2, the enable files are used to enable a tracepoint. The enable file in the events directory can enable or disable all events in the tracing system; the enable file in one of the system subdirectories can enable or disable all events within that system; and the enable file within a specific event directory can enable or disable that event.
Note: writing a '1' into any of the enable files will enable all events within that directory and below; writing a '0' will disable all events within that directory and below.

One nice feature about events is that they also show up in the Ftrace tracers. If an event is enabled while a tracer is running, those events will show up in the trace. Enabling events can make the function tracer even more informative:

    [tracing] # echo 1 > events/sched/enable
    [tracing] # echo function > current_tracer
    [tracing] # head -15 trace
    # tracer: function
    #
    #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
    #              | |       |          |         |
                Xorg-1608  [001]  1695.236400: task_of <-update_curr
                Xorg-1608  [001]  1695.236401: sched_stat_runtime: task: Xorg:1608 runtime: 402851 [ns], vruntime: 153144994503 [ns]
                Xorg-1608  [001]  1695.236402: account_group_exec_runtime <-update_curr
                Xorg-1608  [001]  1695.236402: list_add <-enqueue_entity
                Xorg-1608  [001]  1695.236403: place_entity <-enqueue_entity
                Xorg-1608  [001]  1695.236403: task_of <-enqueue_entity
                Xorg-1608  [001]  1695.236404: sched_stat_sleep: task: gnome-terminal:1864 sleep: 639071 [ns]
                Xorg-1608  [001]  1695.236405: __enqueue_entity <-enqueue_entity
                Xorg-1608  [001]  1695.236406: hrtick_start_fair <-enqueue_task_fair
                Xorg-1608  [001]  1695.236407: sched_wakeup: task gnome-terminal:1864 [120] success=1 [001]
                Xorg-1608  [001]  1695.236408: check_preempt_curr <-try_to_wake_up

Combining the events with tricks from the function graph tracer, we can find interrupt latencies, and which interrupts are responsible for long latencies:

    [tracing] # echo do_IRQ > set_ftrace_filter
    [tracing] # echo 1 > events/irq/irq_handler_entry/enable
    [tracing] # echo function_graph > current_tracer
    [tracing] # cat trace
    # tracer: function_graph
    #
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     0)   ==========> |
     0)               |  do_IRQ() {
     0)               |  /* irq_handler_entry: irq=30 handler=iwl3945 */
     0)   ==========> |
     0)               |  do_IRQ() {
     0)               |  /* irq_handler_entry: irq=30 handler=iwl3945 */
     0) + 22.965 us   |  }
     0)   <========== |
     0) ! 148.135 us  |  }
     0)   <========== |
     0)   ==========> |
     0)               |  do_IRQ() {
     0)               |  /* irq_handler_entry: irq=1 handler=i8042 */
     0) + 45.347 us   |  }
     0)   <========== |

Writing do_IRQ into set_ftrace_filter makes the function tracer trace only the do_IRQ() function. Then the irq_handler_entry tracepoint is activated and the function_graph tracer is selected. Since the function graph tracer shows the time a function executed, we can see how long the interrupts ran. The function graph tracer alone only shows that the do_IRQ() function ran, not which interrupt it serviced. By enabling the irq_handler_entry event, we now see which interrupt was running. The above shows that my laptop's iwl3945 interrupt, which handles the wireless communication, caused a 148 microsecond latency!

Tracepoints are a very powerful tool, but to make them useful, they must be flexible and trivial to add. Adding TRACE_EVENT() macros is quite easy, and they are popping up all over the kernel. The 2.6.34-rc3 kernel currently has over 300 TRACE_EVENT() macros defined; 341 as of this writing. The code that implements trace events uses lots of cpp tricks to accomplish its task, but the complexity of the implementation simplified the usage of tracepoints. The rule of thumb in creating the TRACE_EVENT() macro was: make the use of the TRACE_EVENT() macro as simple as possible, even if that makes its implementation extremely complex. Sometimes, the easier something is to use, the more complexity there is in creating it. Now a developer does not need to know how the TRACE_EVENT() macro works; he or she only needs to know that the work has been done for them. Adding TRACE_EVENT()s is easy, and any developer can now take advantage of them.
Page editor: Jonathan Corbet

This article was contributed by Koen Vervloesem

Two operating systems are becoming more and more popular in the BSD world: one is PC-BSD, a KDE-based FreeBSD derivative that strives to be the Ubuntu of the BSDs, and the other is DragonFly BSD, a FreeBSD fork that aims to provide single-system image clustering in the long term.

In 2003, FreeBSD developer Matthew Dillon created DragonFly BSD as a fork of FreeBSD 4.8. He did this because he didn't agree with the direction FreeBSD 5 was taking in the domains of threading and symmetric multiprocessing. Since then, the DragonFly BSD kernel has diverged significantly from its mother kernel, for example by adding a Light Weight Kernel Threads (LWKT) implementation and a virtual kernel similar to User Mode Linux. However, there is still close collaboration between DragonFly BSD and FreeBSD, and FreeBSD device drivers are regularly imported into DragonFly BSD. The operating system has also ported some functionality from NetBSD and OpenBSD.

After some preliminary work in version 1.12, DragonFly BSD added a new clustering filesystem in version 2.0, which was finally considered production-ready in version 2.2: HAMMER. There are also ports of the filesystem to Linux and Mac OS X, both using FUSE. HAMMER is a modern filesystem with fine-grained snapshots, integrity checking, instant crash recovery, and networked mirroring. It's no coincidence that this sounds a lot like ZFS: Dillon investigated ZFS and for a while it looked like he would port it to DragonFly BSD, but in the end he wasn't satisfied with the design and wrote his own, more cluster-oriented filesystem.
The main reason for this was simple: DragonFly BSD's goal is transparent clustering, which needs a multi-master replicated environment. In this type of environment, ZFS doesn't quite fit the bill, as Dillon explained on the DragonFly kernel mailing list:

    HAMMER's approach to redundancy is logical replication of the entire
    filesystem. That is, wholely independant copies operating on different
    machines in different locations.

HAMMER is the default filesystem now, but it's not recommended for storage media smaller than 50 GB; DragonFly BSD uses UFS for small media. HAMMER supports filesystems up to 1 exabyte, and each HAMMER filesystem can span up to 256 disks, which can be added to an existing filesystem to let it grow.

Users don't need to manually take snapshots: the system automatically writes historical data during each filesystem sync, which happens every 30 to 60 seconds. Prior versions of files and directories are then accessible by appending @@ and a 64-bit hexadecimal transaction ID to the file name. In this way, users can even cd into a prior version of a directory. The system administrator can choose a history retention policy to prevent the filesystem from filling up too quickly, and explicit snapshots can also be made. Although HAMMER is considered production-ready by its developers, it's still a relatively young filesystem with the occasional serious bug. For example, soon after the 2.6 release a serious HAMMER corruption issue came up.

New releases of the operating system occur approximately twice a year. The latest release is DragonFly BSD 2.6.1. Three of the most interesting new features are swapcache, tmpfs, and POSIX message queues. The first is a mechanism that allows the operating system to use a fast SSD to cache data and/or metadata for hard drive filesystems, which should improve disk performance dramatically. Swapcache works on all filesystems (e.g. HAMMER, UFS, or NFS) and is a simple turn-on-and-forget type of feature.
The man page of swapcache has an extensive description of the new functionality, along with an analysis of some real-life examples. The memory filesystem tmpfs is a port from NetBSD. After loading the tmpfs driver as a kernel module at boot time, the user can create tmpfs filesystems. The data is backed by swap space when there's not enough free memory, but the metadata is stored in kernel memory. The tmpfs man page recommends that a modern DragonFly BSD platform reserve a large amount of swap space to accommodate tmpfs and other subsystems. The DragonFly BSD developers also ported the POSIX message queues API from NetBSD 5, which allows processes to exchange data in the form of messages.

DragonFly BSD is distributed as a live CD ISO or USB image that lets users check their system for hardware compatibility before installation. For now, there are only versions for the i386 and x86_64 architectures. The web site also mentions a DVD ISO that is able to show a full live X environment, but in the 2.6 release this has been replaced by a GUI USB image that boots into the desktop. At the time of this writing, the GUI USB image was not yet available due to some problems.

DragonFly BSD uses BSD Installer, a console-based system installation and configuration tool that is more user-friendly than FreeBSD's sysinstall. BSD Installer started its life as DragonFly BSD's installer, but it has since been ported to FreeSBIE and pfSense. The installation is straightforward, although for non-US users it's slightly annoying that the choice of keyboard layout is presented only after choosing the root password and adding a user. After installation and configuration, the system reboots into a minimal console-based BSD system.

Instead of FreeBSD ports, DragonFly BSD uses NetBSD's pkgsrc as its official package management system. This frees the DragonFly BSD developers from having to maintain a large collection of third-party software, and pkgsrc is designed with portability in mind.
Users can install over 9000 binary packages in DragonFly BSD (with the pkg_radd command) or build them from source. For users who want to run software that exists for Linux but not for BSD, DragonFly BSD has a Linux emulation layer: the Linuxulator in the 2.6 release even runs Java and Flash, at least on the i386 architecture.

The project's web site has a lot of documentation, both for users and developers, including some specific howtos. There's also an extensive but slightly out-of-date handbook based on the FreeBSD handbook, and a less extensive but more up-to-date new handbook. The handbook guides the user through the installation, but it also has chapters about "UNIX Basics" (which is a mix of DragonFly BSD basics and general UNIX basics), the pkgsrc packaging system, configuring X, and more advanced topics like jails, security, the kernel, and virtual kernels. Readers who want to keep an eye on the project but don't have the time to read the mailing lists can read The DragonFly BSD Digest by Justin Sherrill. Some Gource visualizations of the DragonFly BSD development nicely show what the small but active group of developers is doing. We looked at Gource and other visualization programs earlier this month.

For a relatively small (45 committers) and lesser-known BSD operating system, DragonFly BSD is surprisingly good, and development of new features happens remarkably fast. Maybe this small scale is the reason why the project is so innovative. The fact that the BSD Certification Group has added knowledge about the operating system to the requirements of the BSD Associate certification is another sign that DragonFly BSD is here to stay. But ultimately, the road to its goal is still paved with a lot of work.
Page editor: Rebecca Sobol

This article was contributed by Nathan Willis

The fact that it worked demonstrates that the developer and end-user open source community is eager to get together. But that fact guarantees no automatic success; along the way the TXLF planning team met challenges that anyone investigating launching their own regional show could learn from — as well as opportunities where the open source community could build tools useful for a wide range of all-volunteer projects.

The genesis for TXLF was a series of independent conversations along the lines of "there should be a regional community Linux show in Austin," mostly by Matt Ray of Zenoss and myself, with other people. Eventually both Ray and I had that conversation with Ilan Rabinovitch of SCALE, who told us to start talking to each other.

Gathering all of the interested parties in one place was the first challenge. There is little you can do other than put the word out in every conceivable medium and see what happens — the group contacted individual free software hackers, business contacts, and every regional LUG and developers' group with an active presence on the Internet. As the collection of interested parties expanded, counting out the tasks involved in putting such an event together became the next hurdle. Many of the team had personal experience participating in at least some aspect of behind-the-scenes conference work — as an exhibitor, a speaker, or a volunteer — but as no one had the freedom to work full-time on the task list, organizing it became an ad-hoc affair.
I eventually took on the role of trying to keep the geographically dispersed team coordinated on the to-do list, and worked on raising sponsorships, marketing, working with exhibitors, and helping to develop the program.

At the practical level, the biggest obstacle a community-run event faces is inertia. There are at least three types. First, all of the volunteers are likely to be enthusiastic about the idea of the event, but individuals generally cannot devote their time to working on it until they personally cross the "I'm sure it's going to happen" threshold. For some it is lower than for others — particularly those with behind-the-scenes conference experience. The challenge is for the team to move forward in the early stages and thus grow the pool of volunteers who can make time to pitch in. For each individual, though, the inertia is not lack of interest in helping out on the project; it is simply human nature to put tasks on the back burner until the project becomes real enough to move up in priority.

The second type of inertia is just a lack of group structure. In the case of TXLF, very few of the volunteers knew each other well, none had worked together before, and as a result it was at times difficult to come to a consensus on nuts-and-bolts decisions such as "should there be a free ticket option, or should all tickets be priced," or, "who should we invite as a keynote speaker?" In most cases, there is no right or wrong answer and often no strong opinions, so in a democratic group the best option is often to simply vote and make do with the results.

The third type of inertia, though, is the biggest challenge: the seemingly intractable problem of having three or four mission-critical decisions, all of which must be made first. In the case of TXLF, it was the date, the venue, and the sponsors, a chicken-and-egg-and-rooster problem, if you will.
Selecting the date first risks eliminating the availability of key sponsors or venues; selecting the venue first limits the choice of dates and means gambling on the ability to raise enough sponsorship to rent the venue; gathering sponsors first is impossible because without a date and a venue, they do not know their availability and may doubt whether or not the event will genuinely come to pass. Other events or volunteer projects may face a different combination; in any case there is no simple answer. TXLF selected the date first, attempting to minimize interference with other FOSS and local events — even so, the date selected ended up conflicting with COSSFEST.

Once in the planning process, however, the challenges become all practical. Arguably the biggest impediment to planning a new event is that there is no definitive guide to the process. There is a generally-accepted list of the "large scale" components — asking sponsors for support, putting out a call-for-participation, opening registration, arranging for audio/video at the venue, setting up the network, publicizing the event, etc. — but nothing in the way of a step-by-step guide that can break the tasks down for easier group consumption.

Fortunately, most of the existing regional open source conferences do much of their own planning in public (from mailing lists to wikis), which makes for good reference material at the early stages. Even better, all of the organizers are dyed-in-the-wool enthusiasts who will offer their assistance to answer questions, refer questions to others, and in many cases actually pitch in. TXLF received a tremendous amount of help from the SCALE organizers (some of whom even volunteered in person), as well as from the teams behind LinuxFest Northwest, the Florida Linux Show, Southeast LinuxFest, and Ohio LinuxFest.
Speaking to someone who has organized an event before gives a new team more specific insight into the process of dealing with the contractors and volunteers to make arrangements: for example, knowing that certain companies will not agree to sponsor an event until there is a public call-for-participation, learning how to negotiate counting concession sales against the venue rental price, or what sort of wording needs to go into the exhibitor agreement for an expo booth. The TXLF group came by most of this knowledge through person-to-person conversation and mailing list or IRC discussion.

A considerably better approach for the future is the FOSSEvents.org site, which SCALE's Gareth Greenaway discussed in a session at TXLF. FOSSEvents.org is a newly launched site sponsored by the Peer-Directed Projects Center (best known for Freenode) that hopes to serve as a focal point for community-run free and open source software events, whether conferences, hackfests, or informal meet-ups. The plans include several services, such as a central location where speakers willing to present at FOSS events can register their availability and contact information, but one of its high-priority tasks is building a wiki-style guide to the event planning process. FOSSEvents.org already provides a searchable calendar of such events, which is itself a valuable resource. A number of people in the audience for Greenaway's presentation raised their hands when asked if they were planning an event of their own, so the service is certainly needed.

At present, the TXLF organizers and all on-site volunteers are attempting to collect and process their own observations and feedback on the event, both for institutional knowledge and to better share with other groups like FOSSEvents.org. The challenge, as always, includes time, but it also stems from the organization software tools themselves.
One of the biggest surprises of coordinating the first-year event was seeing firsthand where free and non-free software fit the bill. In some areas, there was no surprise — the networking team built the wired and wireless network at the venue with open source, as one would expect. All of the design and pre-press work for the fliers, ads, shirts, and program guide was done with Inkscape, Gimp, and especially Scribus, which evidently surprises some who are not as familiar with those applications. There are even a few open source conference-management packages to handle tasks like registration, call-for-participation, and scheduling. ConMan from the Utah Open Source Conference is one such package that is still being developed; TXLF used SCALE's SCALEreg specifically for its registration, on-site check-in, and badge-printing capabilities.

On the other hand, at multiple points the group found itself using closed-source solutions — particularly for collaboration — solely because there was (or at the very least, appeared to be) no viable open source alternative. This started at the very beginning, when the initial organizers needed a mailing list. Setting up a Google Groups list is free and fast; sadly the same cannot be said of any open source list service. If you have a server and can set up an instance of Mailman, you can create as many lists as you want — however, this is of no help before you have a domain name and a server. GNOME, KDE, GNU, and the various Linux distributions all host free mailing lists for their constituent projects, but at none of them can an interested party simply walk up and start their own list. Commercial services like Google and Yahoo offer this to any user; free software services like Mozilla Messaging, or perhaps Mailman itself with its highly desirable list.org domain, are way behind.

Similarly, when it comes to collaborative work on documents, there is not yet an open source offering to compete with Google Docs.
There are several collaborative text editors, but no spreadsheets — a vital need for budget tracking and program development. The TXLF team set up a MediaWiki installation (as is a common first step in any site launch), but wikis make for terrible collaboration tools. Wiki markup is, at its best, a weaker and ill-defined substitute for basic HTML, but more importantly, wikis are too often used as an amalgam of a public content management system (CMS), team task-tracker, and personal notebook. They lack the access control, hierarchy, and editorial features of a proper CMS, and the real project-planning capabilities of a task management application.

There are several open source project management suites that a team could use to keep track of deadlines, deliverables, important documents (such as contracts), and contacts. Here again, though, most are behind the curve when compared to free software-as-a-service offerings, and in most cases the projects do not offer a free hosted solution at all. TXLF, like most volunteer projects, had to run its site on what we were given: a donated virtual host with only part-time support provided by a volunteer, and no choice over the application server or frameworks made available. It is easy to say "install it on your server," but without money and a systems administrator, that is rarely an option. Eventually, the TXLF group moved to tracking some multi-person tasks on Zoho, which offered the best compromise of features and limitations.

Perhaps these examples highlight something that the open source community rarely talks about: building tools for non-development tasks. If you intend to start a software project, you can sign up for free project hosting at a wide variety of services (from completely free software to those that are free of charge, but with closed code); you can get mailing lists, web forums, issue tracking, and release management.
On the other hand, you do not get a customer (or "constituent") relationship management (CRM) tool, a shared iCalendar service, or collaborative document editing. Moreover, if you want to start a project that does not involve code — say, design, documentation, or translation — you may not be eligible for an account at all.

Before working with TXLF, there were software applications I had only a tenuous awareness of; since the conference I have grown in appreciation for them. At the start, I managed all of my personal to-do items for the event the way I do for writing assignments and other personal projects: with VTODO feeds organized within Thunderbird/Lightning. But that, of course, does not scale to multiple people, nor does it expose task dependencies or other tools to keep larger projects on deadline. While it is easy to keep in touch with a group on IRC, for large-scale projects one will eventually need collaborative document sharing and editing. Finally, while it seems simple enough for an individual to keep track of one year's sponsorship discussions via IMAP folders, that answer does not offer the flexibility of a CRM system, which multiple users can contribute to and draw on to assist the project.

The TXLF team did not find a document management or CRM system to use during the 2010 planning cycle; although Zoho worked well enough for multi-user task tracking, it offered neither of the other features. Finding a free solution encompassing both is on the to-do list for next year.

Anyone considering starting an open source or Linux conference in their local area should take the plunge and do so; as long as you are comfortable scaling the size of the event to the number of volunteers and potential attendees in the area, it is within reach. Quite a few people I have met while wearing the "Press" badge at other Linux conferences have shared the opinion that these weekend-based, community-driven events are the wave of the future.
Unlike large-scale and corporate-run conferences, they tend to be very low-cost and draw on the ever-growing numbers of home Linux users and home-based telecommuters. The odds are that if there is not already a regional Linux show close enough that you don't mind driving to it, there are other people in your area who feel the same way. Hopefully the FOSSEvents.org project will make it easier to start from scratch, assess the viability of such a project, and make cost- and time-appropriate plans; if nothing else, you know it has been done many times before and the community is willing to share its knowledge.

Progress on the tools front is probably a longer-term goal. I am aware that several of the larger-scale non-profit software projects are themselves grappling with CRM and document-management application selection, so the community recognizes the need. Building truly free web services is a topic getting increasing coverage in the press, blog world, and conference circuit, but it has not yet reached critical mass. Nevertheless, if we do not hang a shingle outside and offer to tell our neighbors about open source software, who will?
# Nock

HTTP mocking and expectations library.

## Table of Contents

- How does it work?
- Install
- Usage
- READ THIS! - About interceptors
- Specifying hostname
- Specifying path
- Specifying request body
- Specifying request query string
- Specifying replies
- Specifying headers
- HTTP Verbs
- Support for HTTP and HTTPS
- Non-standard ports
- Repeat response n times
- Delay the response body
- Delay the response
- Delay the connection
- Socket timeout
- Chaining
- Scope filtering
- Conditional scope filtering

## How does it work?

Nock works by overriding Node's `http.request` function. Also, it overrides `http.ClientRequest` too to cover for modules that use it directly.

## Install

```
$ npm install --save-dev nock
```

### Node version support

The latest version of nock supports all currently maintained Node versions, see the Node Release Schedule. Here is a list of past nock versions with respective node version support.

## Usage

In your test, you can set up your mocking object like this:

```js
const nock = require('nock')

const scope = nock('https://api.github.com')
  .get('/repos/atom/atom/license')
  .reply(200, {
    license: {
      key: 'mit',
      name: 'MIT License',
      spdx_id: 'MIT',
      url: '',
      node_id: 'MDc6TGljZW5zZTEz',
    },
  })
```

This setup says that we will intercept every HTTP call to `https://api.github.com`. It will intercept an HTTPS GET request to `/repos/atom/atom/license`, reply with a status 200, and the body will contain a (partial) response in JSON.

## READ THIS! - About interceptors

When you set up an interceptor for a URL and that interceptor is used, it is removed from the interceptor list. This means that you can intercept 2 or more calls to the same URL and return different things on each of them. It also means that you must set up one interceptor for each request you are going to have, otherwise nock will throw an error because that URL was not present in the interceptor list. If you don't want interceptors to be removed as they are used, you can use the `.persist()` method.

## Specifying hostname

The request hostname can be a string or a RegExp.
```js
const scope = nock('http://example.com')
  .get('/resource')
  .reply(200, 'domain matched')

const scope = nock(/example\.com/)
  .get('/resource')
  .reply(200, 'domain regex matched')
```

Note: You can choose whether or not to include the protocol in the hostname matching.

## Specifying path

The request path can be a string, a RegExp or a filter function, and you can use any HTTP verb.

Using a string:

```js
const scope = nock('http://example.com')
  .get('/resource')
  .reply(200, 'path matched')
```

Using a regular expression:

```js
const scope = nock('http://example.com')
  .get(/source$/)
  .reply(200, 'path using regex matched')
```

Using a function:

```js
const scope = nock('http://example.com')
  .get(uri => uri.includes('cats'))
  .reply(200, 'path using function matched')
```

## Specifying request body

You can specify the request body to be matched as the second argument to the `get`, `post`, `put` or `delete` specifications. There are five types of second argument allowed:

- **String**: nock will exact-match the stringified request body with the provided string.
- **Buffer**: nock will exact-match the stringified request body with the provided buffer.
- **RegExp**: nock will test the stringified request body against the provided RegExp.
- **JSON object**: nock will deep-compare the request body with the provided object.
- **Function**: the function receives the request body and returns `true` if it should be considered a match, e.g. `body => body.id === '123'`.

## Specifying request query string

Nock understands query strings. Search parameters can be included as part of the path:

```js
nock('http://example.com')
  .get('/users?foo=bar')
  .reply(200)
```

They can also be specified via the `query` method, for example as an object:

```js
nock('http://example.com')
  .get('/users')
  .query({ foo: 'bar' })
  .reply(200, { results: [{ id: 'pgte' }] })
```

A `URLSearchParams` instance can be provided:

```js
const params = new URLSearchParams({ foo: 'bar' })

nock('http://example.com')
  .get('/')
  .query(params)
  .reply(200)
```

Nock supports passing a function to `query`. The function determines if the actual query matches or not:

```js
nock('http://example.com')
  .get('/users')
  .query(actualQueryObject => actualQueryObject.foo === 'bar')
  .reply(200, { results: [{ id: 'pgte' }] })
```

A query string that is already URL encoded can be matched by passing the `encodedQueryParams` flag in the options when creating the Scope.
```js
nock('http://example.com', { encodedQueryParams: true })
  .get('/users')
  .query('foo%5Bbar%5D%3Dhello%20world%21')
  .reply(200, { results: [{ id: 'pgte' }] })
```

## Specifying replies

You can specify the return status code for a path on the first argument of reply like this:

```js
const scope = nock('http://example.com')
  .get('/users/1')
  .reply(404)
```

You can also specify the reply body as a string:

```js
const scope = nock('http://www.google.com')
  .get('/')
  .reply(200, 'Hello from Google!')
```

or as a JSON-encoded object:

```js
const scope = nock('http://example.com')
  .get('/')
  .reply(200, {
    username: 'pgte',
    email: 'pedro.teixeira@gmail.com',
    _id: '4324243fsd',
  })
```

or even as a file:

```js
const scope = nock('http://example.com')
  .get('/')
  .replyWithFile(200, __dirname + '/replies/user.json', {
    'Content-Type': 'application/json',
  })
```

Instead of an object or a buffer you can also pass in a callback to be evaluated for the value of the response body:

```js
const scope = nock('http://example.com')
  .post('/echo')
  .reply(201, (uri, requestBody) => requestBody)
```

An asynchronous callback that takes an error-first callback as its last argument also works:

```js
const scope = nock('http://example.com')
  .post('/echo')
  .reply(201, (uri, requestBody, cb) => {
    fs.readFile('cat-poems.txt', cb) // Error-first callback
  })
```

You can also return the full response (status code, body, and optional headers) from the callback:

```js
const scope = nock('http://example.com')
  .post('/echo')
  .reply((uri, requestBody) => {
    return [
      201,
      'THIS IS THE REPLY BODY',
      { header: 'value' }, // optional headers
    ]
  })
```

or, use an error-first callback that also gets the status code:

```js
const scope = nock('http://example.com')
  .post('/echo')
  .reply((uri, requestBody, cb) => {
    setTimeout(() => cb(null, [201, 'THIS IS THE REPLY BODY']), 1000)
  })
```

A Stream works too:

```js
const scope = nock('http://example.com')
  .get('/cat-poems')
  .reply(200, (uri, requestBody) => {
    return fs.createReadStream('cat-poems.txt')
  })
```

### Access original request and headers

If you're using the reply callback style, you can access the original client request using `this.req` like this:

```js
const scope = nock('http://example.com')
  .get('/cat-poems')
  .reply(function (uri, requestBody) {
    console.log('path:', this.req.path)
    console.log('headers:', this.req.headers)
    // ...
  })
```

Note: Remember to use a normal `function` in that case, as arrow functions use the enclosing scope for `this` binding.
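To make the body-matching rules from "Specifying request body" concrete, here is a small standalone sketch of a matcher that dispatches on the five argument types. It mimics the documented behavior only; it is not nock's actual code, and `bodyMatches` is an invented name:

```javascript
// Standalone sketch of the five request-body matching modes (string,
// Buffer, RegExp, plain object, function). Illustrative only.
function bodyMatches(spec, actual) {
  if (typeof spec === 'function') return Boolean(spec(actual))
  if (spec instanceof RegExp) return spec.test(String(actual))
  if (Buffer.isBuffer(spec)) return spec.equals(Buffer.from(String(actual)))
  if (typeof spec === 'object' && spec !== null) {
    // Deep comparison via a JSON round-trip; good enough for a sketch.
    return JSON.stringify(spec) === JSON.stringify(actual)
  }
  return String(spec) === String(actual)
}

console.log(bodyMatches('hello', 'hello')) // prints true
console.log(bodyMatches(/ell/, 'hello')) // prints true
console.log(bodyMatches({ id: '123' }, { id: '123' })) // prints true
console.log(bodyMatches(body => body.id === '123', { id: '123' })) // prints true
```

Note that the RegExp and Buffer checks must come before the generic object check, since both are objects in JavaScript.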
Replying with errors

You can reply with an error like this:

nock('http://example.com')
  .get('/cat-poems')
  .replyWithError('something awful happened')

JSON error responses are allowed too:

nock('http://example.com')
  .get('/cat-poems')
  .replyWithError({
    message: 'something awful happened',
    code: 'AWFUL_ERROR',
  })

Note: This will emit an error event on the request object, not the reply.

Specifying headers

Header field names are case-insensitive

Per the HTTP/1.1 4.2 Message Headers specification, all message headers are case insensitive, and thus internally Nock uses lower-case for all field names even if some other combination of cases was specified either in the mocking specification or in the mocked requests themselves.

Specifying Request Headers

You can specify the request headers like this:

const scope = nock('http://example.com', {
  reqheaders: {
    authorization: 'Basic Auth',
  },
})
  .get('/')
  .reply(200)

Or you can use a regular expression or function to check the header values. The function will be passed the header value.

const scope = nock('http://example.com', {
  reqheaders: {
    'X-My-Headers': headerValue => headerValue.includes('cats'),
    'X-My-Awesome-Header': /Awesome/i,
  },
})
  .get('/')
  .reply(200)

If reqheaders is not specified, or if host is not part of it, Nock will automatically add a host value to the request headers. If no request headers are specified for mocking then Nock will automatically skip matching of request headers. Since the host header is a special case which may get automatically inserted by Nock, its matching is skipped unless it was also specified in the request being mocked.

You can also have Nock fail the request if certain headers are present:

const scope = nock('http://example.com', {
  badheaders: ['cookie', 'x-forwarded-for'],
})
  .get('/')
  .reply(200)

When invoked with this option, Nock will not match the request if any of the badheaders are present.
Basic authentication can be specified as follows:

const scope = nock('http://example.com')
  .get('/')
  .basicAuth({ user: 'john', pass: 'doe' })
  .reply(200)

Specifying Reply Headers

You can specify the reply headers like this:

const scope = nock('https://api.github.com')
  .get('/repos/atom/atom/license')
  .reply(200, { license: 'MIT' }, { 'X-RateLimit-Remaining': 4999 })

Or you can use a function to generate the header values. The function will be passed the request, response, and response body (if available). The body will be either a buffer, a stream, or undefined.

const scope = nock('http://example.com')
  .get('/')
  .reply(200, 'Hello World!', {
    'Content-Length': (req, res, body) => body.length,
    ETag: () => `${Date.now()}`,
  })

Default Reply Headers

You can also specify default reply headers for all responses like this:

const scope = nock('http://example.com')
  .defaultReplyHeaders({
    'X-Powered-By': 'Rails',
    'Content-Type': 'application/json',
  })
  .get('/')
  .reply(200, 'The default headers should come too')

Or you can use a function to generate the default header values:

const scope = nock('http://example.com')
  .defaultReplyHeaders({
    'Content-Length': (req, res, body) => body.length,
  })
  .get('/')
  .reply(200, 'The default headers should come too')

Including Content-Length Header Automatically

When using scope.reply() to set a response body manually, you can have the Content-Length header calculated automatically.

const scope = nock('http://example.com')
  .replyContentLength()
  .get('/')
  .reply(200, { hello: 'world' })

NOTE: this does not work with streams or other advanced means of specifying the reply body.

Including Date Header Automatically

You can automatically append a Date header to your mock reply:

const scope = nock('http://example.com')
  .replyDate()
  .get('/')
  .reply(200, { hello: 'world' })

Or provide your own Date object:

const scope = nock('http://example.com')
  .replyDate(new Date(2015, 0, 1))
  .get('/')
  .reply(200, { hello: 'world' })

HTTP Verbs

Nock supports any HTTP verb, and it has convenience methods for the GET, POST, PUT, HEAD, DELETE, PATCH, OPTIONS and MERGE HTTP verbs.
You can intercept any HTTP verb using .intercept(path, verb [, requestBody [, options]]):

const scope = nock('http://example.com')
  .intercept('/path', 'PATCH')
  .reply(304)

Support for HTTP and HTTPS

By default nock assumes HTTP. If you need to use HTTPS you can specify the protocol like this:

const scope = nock('https://secure.example.com')
// ...

Non-standard ports

You are able to specify a non-standard port like this:

const scope = nock('http://example.com:8081')

Repeat

To repeat a response a given number of times, use .times():

nock('http://example.com')
  .get('/')
  .times(4)
  .reply(200, 'Ok')

To repeat this response for as long as nock is active, use .persist().

Delay

You are able to delay the response. delayConnection(1000) is equivalent to delay({ head: 1000 }).

Socket timeout

You are able to specify the number of milliseconds that your connection should be idle, to simulate a socket timeout.

nock('http://example.com')
  .get('/')
  .socketDelay(2000) // 2 seconds
  .reply(200, '<html></html>')

To test a request like the following:

req = http.request('http://example.com', res => {
  // ...
})
req.setTimeout(1000, () => {
  req.abort()
})
req.end()

NOTE: the timeout will be fired immediately, and will not leave the simulated connection idle for the specified period of time.

Chaining

You can chain behaviour like this:

const scope = nock('http://example.com')
  .get('/users/1')
  .reply(404)
  .post('/users', {
    username: 'pgte',
    email: 'pedro.teixeira@gmail.com',
  })
  .reply(201, {
    ok: true,
    id: '123ABC',
  })

Scope filtering

You can filter the scope (protocol, domain or port) of nock through a function. The filtering function is accepted at the filteringScope field of the options argument.

This can be useful if you have a node module that randomly changes subdomains to which it sends requests, e.g., the Dropbox node module behaves like this.

const scope = nock('https://api.dropbox.com', {
  filteringScope: scope => /^https:\/\/api[0-9]*\.dropbox\.com/.test(scope),
})
  .get('/1/metadata/auto/Photos?include_deleted=false&list=true')
  .reply(200)

Conditional scope filtering

You can also choose to filter out a scope based on your system environment (or any external factor). The filtering function is accepted at the conditionally field of the options argument.

This can be useful if you only want certain scopes to apply depending on how your tests are executed.
const scope = nock('http://example.com', {
  conditionally: () => true,
})

Path filtering

You can also filter the URLs based on a function. This can be useful, for instance, if you have random or time-dependent data in your URL.

You can use a regexp for replacement, just like String.prototype.replace:

const scope = nock('http://example.com')
  .filteringPath(/password=[^&]*/g, 'password=XXX')
  .get('/users/1?password=XXX')
  .reply(200, 'user')

Or you can use a function:

const scope = nock('http://example.com')
  .filteringPath(path => '/ABC')
  .get('/ABC')
  .reply(200, 'user')

Note that scope.filteringPath is not cumulative: it should only be used once per scope.

Request Body filtering

You can also filter the request body based on a function. This can be useful, for instance, if you have random or time-dependent data in your request body.

You can use a regexp for replacement, just like String.prototype.replace:

const scope = nock('http://example.com')
  .filteringRequestBody(/password=[^&]*/g, 'password=XXX')
  .post('/users/1', 'data=ABC&password=XXX')
  .reply(201, 'OK')

Or you can use a function to transform the body:

const scope = nock('http://example.com')
  .filteringRequestBody(body => 'ABC')
  .post('/', 'ABC')
  .reply(201, 'OK')

If you don't want to match the request body, you should omit the body argument from the method function:

const scope = nock('http://example.com')
  .post('/some_uri') // no body argument
  .reply(200, 'OK')

Request Headers Matching

If you need to match requests only if certain request headers match, you can.

const scope = nock('http://example.com')
  .matchHeader('accept', 'application/json')
  .get('/')
  .reply(200, {
    data: 'hello world',
  })

You can also use a regexp for the header body.

const scope = nock('http://example.com')
  .matchHeader('User-Agent', /Mozilla\/.*/)
  .get('/')
  .reply(200, {
    data: 'hello world',
  })

You can also use a function for the header body.
const scope = nock('http://example.com')
  .matchHeader('content-length', val => val >= 1000)
  .get('/')
  .reply(200, {
    data: 'hello world',
  })

Optional Requests

By default every mocked request is expected to be made exactly once, and until it is it'll appear in scope.pendingMocks(), and scope.isDone() will return false (see expectations). In many cases this is fine, but in some (especially cross-test setup code) it's useful to be able to mock a request that may or may not happen. You can do this with optionally(). Optional requests are consumed just like normal ones once matched, but they do not appear in pendingMocks(), and isDone() will return true for scopes with only optional requests pending.

A boolean can also be passed to optionally(), which is useful for turning optionality on or off dynamically:

const example = nock('http://example.com')

const getMock = optional =>
  example
    .get('/pathC')
    .optionally(optional)
    .reply(200)

getMock(true)
example.pendingMocks() // []
getMock(false)
example.pendingMocks() // ["GET http://example.com:80/pathC"]

Allow unmocked requests on a mocked hostname

If you need some requests on the same host name to be mocked and some others to really go through the HTTP stack, you can use the allowUnmocked option like this:

const scope = nock('http://example.com', { allowUnmocked: true })
  .get('/my/url')
  .reply(200, 'OK!')

// GET /my/url => goes through nock
// GET /other/url => actually makes request to the server

Note: When applying {allowUnmocked: true}, if the request is made to the real server, no interceptor is removed.

Expectations

Every time an HTTP request is performed for a scope that is mocked, Nock expects to find a handler for it. If it doesn't, it will throw an error.

Calls to nock() return a scope which you can assert by calling scope.done(). This will assert that all specified calls on that scope were performed.

Example:

const scope = nock('http://www.google.com')
  .get('/')
  .reply(200, 'Hello from Google!')

// do some stuff

setTimeout(() => {
  // Will throw an assertion error if meanwhile a "GET" was
  // not performed.
  scope.done()
}, 5000)

.isDone()

You can call isDone() on a single expectation to determine if the expectation was met:

const scope = nock('http://example.com')
  .get('/')
  .reply(200)

scope.isDone() // will return false

It is also available in the global scope, which will determine if all expectations have been met:

nock.isDone()

.cleanAll()

You can cleanup all the prepared mocks (could be useful to cleanup some state after a failed test) like this:

nock.cleanAll()

.abortPendingRequests()

You can abort all currently pending requests like this:

nock.abortPendingRequests()

.persist()

You can make all the interceptors for a scope persist by calling .persist() on it:

const scope = nock('http://example.com')
  .persist()
  .get('/')
  .reply(200, 'Persisting all the way')

Note that while a persisted scope will always intercept the requests, it is considered "done" after the first interception.

If you want to stop persisting an individual persisted mock you can call persist(false):

const scope = nock('http://example.com')
  .persist()
  .get('/')
  .reply(200, 'ok')

// Do some tests ...

scope.persist(false)

You can also use nock.cleanAll(), which removes all mocks, including persistent mocks.

To specify an exact number of times that nock should repeat the response, use .times():

const scope = nock('http://example.com')
  .get('/')
  .times(4)
  .reply(200, 'Ok')

.activeMocks()

You can see every mock that is currently active (i.e. might potentially reply to requests) in a scope using scope.activeMocks(). A mock is active if it is pending, optional but not yet completed, or persisted. Mocks that have intercepted their requests and are no longer doing anything are the only mocks which won't appear here.

You probably don't need to use this - it mainly exists as a mechanism to recreate the previous (now-changed) behavior of pendingMocks().

console.error('active mocks: %j', scope.activeMocks())

It is also available in the global scope:

console.error('active mocks: %j', nock.activeMocks())

.isActive()

Your tests may sometimes want to deactivate the nock interceptor. Once deactivated, nock needs to be re-activated to work.
You can check if the nock interceptor is active or not by using nock.isActive(). Sample:

if (!nock.isActive()) {
  nock.activate()
}

Logging

Nock can log matches if you pass in a log function like this:

const scope = nock('http://example.com')
  .log(console.log)

Restoring

You can restore the HTTP interceptor to the normal unmocked behaviour by calling:

nock.restore()

note 1: restore does not clear the interceptor list. Use nock.cleanAll() if you expect the interceptor list to be empty.

note 2: restore will also remove the http interceptor itself. You need to run nock.activate() to re-activate the http interceptor. Without re-activation, nock will not intercept any calls.

Activating

Only for cases where nock has been deactivated using nock.restore(), you can reactivate the HTTP interceptor to start intercepting HTTP calls using:

nock.activate()

note: To check if the nock HTTP interceptor is active or inactive, use nock.isActive().

Turning Nock Off (experimental!)

You can bypass Nock completely by setting the NOCK_OFF environment variable to "true". This way you can have your tests hit the real servers just by switching on this environment variable.

$ NOCK_OFF=true node my_test.js

Enable/Disable real HTTP requests

By default, any requests made to a host that is not mocked will be executed normally. If you want to block these requests, nock allows you to do so.

Disabling requests

For disabling real HTTP requests:

nock.disableNetConnect()

So, if you try to request any host not 'nocked', it will throw a NetConnectNotAllowedError.
nock.disableNetConnect()

const req = http.get('http://google.com/')

req.on('error', err => {
  console.log(err)
})

// The returned `http.ClientRequest` will emit an error event (or throw if you're not listening for it)
// This code will log a NetConnectNotAllowedError with message:
// Nock: Disallowed net connect for "google.com:80"

Enabling requests

For enabling any real HTTP requests (the default behavior):

nock.enableNetConnect()

You could allow real HTTP requests for certain host names by providing a string or a regular expression for the hostname, or a function that accepts the hostname and returns true or false:

// Using a string
nock.enableNetConnect('amazon.com')

// Or a RegExp
nock.enableNetConnect(/(amazon|github)\.com/)

// Or a Function
nock.enableNetConnect(
  host => host.includes('amazon.com') || host.includes('github.com')
)

When you're done with the test, you probably want to set everything back to normal:

nock.cleanAll()
nock.enableNetConnect()

Recording

This is a cool feature: Guessing what the HTTP calls are is a mess, especially if you are introducing nock on your already-coded tests. For those cases where you want to mock an existing live system you can record and playback the HTTP calls like this:

nock.recorder.rec()
// Some HTTP calls happen and the nock code necessary to mock
// those calls will be output to the console

Recording relies on intercepting real requests and responses and then persisting them for later use.

In order to stop recording you should call nock.restore() and recording will stop.

ATTENTION!: when recording is enabled, nock does no validation, nor will any mocks be enabled. Please be sure to turn off recording before attempting to use any mocks in your tests.

dont_print option

If you just want to capture the generated code into a var as an array you can use:

nock.recorder.rec({
  dont_print: true,
})

// ... some HTTP calls

const nockCalls = nock.recorder.play()

The nockCalls var will contain an array of strings representing the generated code you need.
Copy and paste that code into your tests, customize at will, and you're done! You can call nock.recorder.clear() to remove already-recorded calls from the array that nock.recorder.play() returns. (Remember that you should do this one test at a time.)

output_objects option

In case you want to generate the code yourself or use the test data in some other way, you can pass the output_objects option to rec:

nock.recorder.rec({
  output_objects: true,
})

// ... some HTTP calls

const nockCallObjects = nock.recorder.play()

The returned call objects have the following properties:

scope - the scope of the call including the protocol and non-standard ports (e.g. 'https://github.com:12345')
method - the HTTP verb of the call (e.g. 'GET')
path - the path of the call (e.g. '/pgte/nock')
body - the body of the call, if any
status - the HTTP status of the reply (e.g. 200)
response - the body of the reply, which can be a JSON, string, hex string representing binary buffers or an array of such hex strings (when handling content-encoded reply headers)
headers - the headers of the reply
reqheaders - the headers of the request

If you save this as a JSON file, you can load them directly through nock.load(path). Then you can post-process them before using them in the tests. For example, to add request body filtering (shown here fixing timestamps to match the ones captured during recording):

nocks = nock.load(pathToJson)
nocks.forEach(function(nock) {
  nock.filteringRequestBody = (body, aRecordedBody) => {
    if (typeof body === 'string' && typeof aRecordedBody === 'string') {
      const recordedBodyResult = /timestamp:([0-9]+)/.exec(aRecordedBody)
      if (recordedBodyResult) {
        const recordedTimestamp = recordedBodyResult[1]
        return body.replace(
          /(timestamp):([0-9]+)/g,
          (match, key, value) => `${key}:${recordedTimestamp}`
        )
      } else {
        return body
      }
    }
  }
})

Alternatively, if you need to pre-process the captured nock definitions before using them (e.g. to add scope filtering) then you can use nock.loadDefs(path) and nock.define(nockDefs). Shown here is scope filtering for the Dropbox node module, which constantly changes the subdomain to which it sends the requests:

// Pre-process the nock definitions as scope filtering has to be defined before the nocks are defined (due to its very hacky nature).
const nockDefs = nock.loadDefs(pathToJson)

nockDefs.forEach(def => {
  // Do something with the definition object e.g. scope filtering.
  def.options = {
    ...def.options,
    filteringScope: scope => /^https:\/\/api[0-9]*\.dropbox\.com/.test(scope),
  }
})

// Load the nocks from pre-processed definitions.
const nocks = nock.define(nockDefs)

enable_reqheaders_recording option

If you want to record the request headers as well, pass the enable_reqheaders_recording option in the recorder.rec() options.

nock.recorder.rec({
  dont_print: true,
  output_objects: true,
  enable_reqheaders_recording: true,
})

Note that even when request headers recording is enabled Nock will never record user-agent headers. user-agent values change with the version of Node and the underlying operating system, and are thus useless for matching, as all that they can indicate is that the user agent isn't the one that was used to record the tests.

logging option

Nock will print using console.log by default (assuming that dont_print is false). If a different function is passed into logging, nock will send the log string (or object, when using output_objects) to that function. Here's a basic example.

const appendLogToFile = content => {
  fs.appendFile('record.txt', content)
}

nock.recorder.rec({
  logging: appendLogToFile,
})

use_separator option

By default, nock will wrap its output with the separator string <<<<<<-- cut here -->>>>>> before and after anything it prints, whether to the console or a custom log function given with the logging option.

To disable this, set use_separator to false.

nock.recorder.rec({
  use_separator: false,
})

.removeInterceptor()

This allows removing a specific interceptor. This can be either an interceptor instance or options for a URL. It's useful when there's a list of common interceptors shared between tests, where an individual test requires one of the shared interceptors to behave differently.
Examples:

nock.removeInterceptor({
  hostname: 'localhost',
  path: '/mockedResource',
})

nock.removeInterceptor({
  hostname: 'localhost',
  path: '/login',
  method: 'POST',
  proto: 'https',
})

const interceptor = nock('http://example.org').get('somePath')
nock.removeInterceptor(interceptor)

Events

A scope emits the following events:

emit('request', function(req, interceptor, body))
emit('replied', function(req, interceptor))

Global no match event

You can also listen for no match events like this:

nock.emitter.on('no match', req => {})

Nock Back

Fixture recording support and playback.

Setup

You must specify a fixture directory before using, for example:

In your test helper:

const nockBack = require('nock').back
nockBack.fixtures = '/path/to/fixtures/'
nockBack.setMode('record')

Options

nockBack.fixtures: path to fixture directory
nockBack.setMode(): the mode to use

Usage

By default, if the fixture doesn't exist, nockBack will create a new fixture and save the recorded output for you. The next time you run the test, if the fixture exists, it will be loaded in.

The this context of the callback function will have a property scopes to access all of the loaded nock scopes.

const nockBack = require('nock').back
const request = require('request')
nockBack.setMode('record')

nockBack.fixtures = __dirname + '/nockFixtures' // this only needs to be set once in your test helper

// recording of the fixture
nockBack('zomboFixture.json', nockDone => {
  request.get('http://zombo.com', (err, res, body) => {
    // do your tests
    nockDone()
  })
})

If your tests are using promises then use nockBack like this:

return nockBack('promisedFixture.json').then(({ nockDone, context }) => {
  // do your tests returning a promise and chain it with
  // `.then(nockDone)`
})

Options

As an optional second parameter you can pass the following options:

before: a preprocessing function, gets called before nock.define
after: a postprocessing function, gets called after nock.define
afterRecord: a postprocessing function, gets called after recording.
It is passed the array of scopes recorded and should return the intact array, a modified version of the array, or, if custom formatting is desired, a stringified version of the array to save to the fixture.

recorder: custom options to pass to the recorder

Example

function prepareScope(scope) {
  scope.filteringRequestBody = (body, aRecordedBody) => {
    if (typeof body === 'string' && typeof aRecordedBody === 'string') {
      const recordedBodyResult = /timestamp:([0-9]+)/.exec(aRecordedBody)
      if (recordedBodyResult) {
        const recordedTimestamp = recordedBodyResult[1]
        return body.replace(
          /(timestamp):([0-9]+)/g,
          (match, key, value) => `${key}:${recordedTimestamp}`
        )
      } else {
        return body
      }
    }
  }
}

nockBack('zomboFixture.json', { before: prepareScope }, nockDone => {
  request.get('http://zombo.com', function(err, res, body) {
    // do your tests
    nockDone()
  })
})

Modes

To set the mode, call nockBack.setMode(mode) or run the tests with the NOCK_BACK_MODE environment variable set before loading nock. If the mode needs to be changed programmatically, the following is valid:

nockBack.setMode(nockBack.currentMode)

wild: all requests go out to the internet, don't replay anything, doesn't record anything
dryrun: The default, use recorded nocks, allow http calls, doesn't record anything, useful for writing new tests
record: use recorded nocks, record new nocks
lockdown: use recorded nocks, disables all http calls even when not nocked, doesn't record

Common issues

Got retries failed requests by default, which can interfere with mocks; passing retry: 0 to got invocations will disable retrying, e.g.:

await got('http://example.com/', { retry: 0 })

If you need to do this in all your tests, you can create a module got_client.js which exports a custom got instance:

const got = require('got')

module.exports = got.extend({ retry: 0 })

This is how it's handled in Nock itself (see #1523).

Axios

To use Nock with Axios, you may need to configure Axios to use the Node adapter as in the example below:

import axios from 'axios'
import nock from 'nock'
import test from 'ava' // You can use any test framework.

// If you are using jsdom, axios will default to using the XHR adapter which
// can't be intercepted by nock. So, configure axios to use the node adapter.
//
// References:
//
axios.defaults.adapter = require('axios/lib/adapters/http')

test('can fetch test response', async t => {
  // Set up the mock request.
  const scope = nock('http://localhost')
    .get('/test')
    .reply(200, 'test response')

  // Make the request. Note that the hostname must match exactly what is passed
  // to `nock()`. Alternatively you can set `axios.defaults.host = 'http://localhost'`
  // and run `axios.get('/test')`.
  await axios.get('http://localhost/test')

  // Assert that the expected request was made.
  scope.done()
})

Debugging

Nock uses debug, so just run with the environment variable DEBUG set to nock.*.

$ DEBUG=nock.* node my_test.js

Contributing

Thanks for wanting to contribute! Take a look at our Contributing Guide for notes on our commit message conventions and how to run tests.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Contributors

Thanks goes to these wonderful people (emoji key):

This project follows the all-contributors specification. Contributions of any kind welcome!

Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]

License
29 October 2010 17:21 [Source: ICIS news]

LONDON (ICIS)--It is hardly a typical petrochemical company given its financial rollercoaster ride, but LyondellBasell's operating performance mirrors recent market dynamics.

As for the here and now, industry conditions for October look reasonable, CEO Jim Gallogly said on Friday. "We expect to see the typical seasonal impacts in the Refining and Oxyfuels area as well as end-of-year holiday reduced sales to some customers," he added. "With these anticipated impacts, our outlook for the quarter is somewhat tempered compared to the strong second and third quarters," he said.

That is the current sentiment across much of the sector, and across the wider chemical industry. Most executives hope, even believe, that the slowdown is largely seasonal. Most would expect economic and industrial output growth to slow in the developed world in the coming quarters.

Petrochemical companies still have to carefully match supply with demand. Working capital has to be tightly managed: money and goods are not flowing through the system in the way that they once were. But producers can feel a lot more comfortable running towards the end of 2010 than they have since the autumn of 2008.

The second and third quarters were particularly strong, and the LyondellBasell results, along with those of other firms reporting this week, reflect that. Producer volumes and prices have been encouraging.

"We achieved excellent results in the third quarter as most of our segments performed very well," Gallogly said. "Globally, our Olefins & Polyolefins results were approximately equal to the strong results of the second quarter. As a result, we again generated significant cash during the quarter and further improved our liquidity," he added.

Cash generation is strong for many sector companies which have cut back on overheads and capital spending. Sensibly, the money is largely being used to reduce debts and improve liquidity.
There is the question of capital expenditure. Maintenance capex should not have been compromised during the downturn, but some plants have been put under a great deal of strain as they have been pushed harder to meet rising customer demand. Reliability and operability are a concern not just for producers when demand is running strong, but also for customers who fear a run-up in prices should some production fail.

Parts of the industry still seem to be struggling to determine just how much sustainable supply is needed to match demand. This reflects the uncertainty that dogs governments, businesses and individuals alike. Just what does the 'new normal' look like? The latter part of 2010 is unlikely to tell.

Moderating demand growth for chemicals and constrained operating rates appear to be the norm. Most of the industry is certainly not running on all cylinders. In large part, profits are being made on price rather than on sustainable volumes, and that has to be a concern.

For much of 2010, a volatile industry has been surprisingly stable and shown steady returns. It is far from certain that such a degree of stability can be maintained in 2011.
StdInParse demonstrates streaming XML data from standard input.

The StdInParse sample parses an XML file from standard input and prints out the number of elements in the file. To run StdInParse, enter the following:

StdInParse < <XML file>

The following parameters may be set from the command line:

Usage:
    StdInParse [options] < <XML file>

This program demonstrates streaming XML data from standard input. It then uses the SAX Parser, and prints the number of elements, attributes, spaces and characters found in the input, using the SAX API.

Options:
    -v=xxx      Validation scheme [always | never | auto*].
    -n          Enable namespace processing. Defaults to off.
    -s          Enable schema processing. Defaults to off.
    -f          Enable full schema constraint checking. Defaults to off.
    -?          Show this help.

  * = Default if not provided explicitly.

-v=always will force validation
-v=never will not use any validation
-v=auto will validate if a DOCTYPE declaration or a schema declaration is present in the XML document

Make sure that you run StdInParse in the samples/data directory. Here is a sample output from StdInParse:

cd xerces-c-3.1.4/samples/data
StdInParse -v=always < personal.xml
stdin: 10 ms (37 elems, 12 attrs, 134 spaces, 134 chars)

Running StdInParse without the validating parser gives a different result because ignorable white-space is counted separately from regular characters:

StdInParse -v=never < personal.xml
stdin: 10 ms (37 elems, 12 attrs, 0 spaces, 268 chars)

Note that the sum of spaces and characters in both versions is the same.