In a desktop .NET application, the conventional choice for a local relational database is SQL Server Compact Edition, and of course there's the option to use SQLite and other third-party engines. What are the options for a .NET Metro-style application? SQL CE appears to be unavailable — is there an alternative? In fact, the whole System.Data namespace appears to be gone, so no LINQ to SQL or Entity Framework either? What about HTML5 IndexedDB, which appears to be exposed to Metro HTML/JS applications — can that be used from .NET somehow? Apparently, the Extensible Storage Engine Win32 API (also known as "JET Blue") is still available in Metro applications. C++ ones can use it directly via #include <esent.h>; .NET applications would have to use P/Invoke. It doesn't give you SQL or any other kind of high-level relational querying constructs, but it does provide key lookup, transactions, multiple indices per table, and multi-column indices. Let's be clear: SQL CE exists in Windows 8. It resides not in Program Files but in Windows\System32, making it look even more embedded than before. Windows 7 does not have sqlcecompact40.dll in System32, so this is definitely new. System.Data and System.Data.Linq both reside in C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5. You can add references to those DLLs by hand, but getting the application to compile is a guessing game. It seems that when you first open your project and do nothing else, you can add a reference to those DLLs anywhere and compile the app. If you remove the DLLs and try to add them back, you're hit with a "A reference to '<4.5 framework directory>' could not be added" error. If by some chance you can't add them via Visual Studio, you can easily just add the HintPath by hand. 
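For reference, adding a HintPath by hand means editing the .csproj in a text editor. A sketch of what such an entry might look like — the folder path and assembly names come from the post above, while the ItemGroup shape is just the standard MSBuild reference syntax, not something confirmed to work for Metro projects:

```xml
<ItemGroup>
  <Reference Include="System.Data">
    <HintPath>C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Data.dll</HintPath>
  </Reference>
  <Reference Include="System.Data.Linq">
    <HintPath>C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\System.Data.Linq.dll</HintPath>
  </Reference>
</ItemGroup>
```

MSBuild will probe the HintPath when it cannot resolve the reference through the normal assembly search paths.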
My application now compiles, but I also ran into an issue where building the AppX wasn't working properly, and it gave cryptic "Payload cannot contain two of the same dll"-type messages. It was as if it was trying to include both the 32-bit DLL (the one I referenced) and the 64-bit one at the last second. It included DLLs I wasn't touching by hand, like System.Data.OracleClient and System.Transactions, so it was definitely some artifact of the build process I have yet to pin down. The main problem I'm dealing with right now is how to create a proper connection string, since SQL CE won't initialize correctly without one. SQL CE is probably still looking for hardcoded C:\ references, so the ApplicationData samples may not work as desired. I may try creating SQL CE 4 databases in Windows 7, moving them to Windows 8, and just referencing them locally, but I'm running into that same problem there too. This close! Don't hesitate to comment about anything you run into, and I'm definitely up for some offline collaboration if anybody wants to pool resources. This is a thick forest of monsters, and going it alone is proving much more challenging.
http://codeblow.com/questions/local-storage-of-structured-data-in-win8-metro-style-applications/
Yoga Tutorial: Using a Cross-Platform Layout Engine Learn about Yoga, Facebook’s cross-platform layout engine that helps developers write layout code in a style akin to Flexbox Version - Swift 4, iOS 11, Xcode 9 Yoga is a cross-platform layout engine based on Flexbox that makes working with layouts easy. Instead of using Auto Layout for iOS or using Cascading Style Sheets (CSS) on the web, you can use Yoga as a common layout system. Initially launched as css-layout, an open source library from Facebook in 2014, it was revamped and rebranded as Yoga in 2016. Yoga supports multiple platforms including Java, C#, C, and Swift. Library developers can incorporate Yoga into their layout systems, as Facebook has done for two of their open source projects: React Native and Litho. However, Yoga also exposes a framework that iOS developers can directly use for laying out views. In this tutorial, you’ll work through core Yoga concepts, then practice and expand on them by building the FlexAndChill app. Even though you’ll be using the Yoga layout engine, it will be helpful for you to be familiar with Auto Layout before reading this tutorial. You’ll also want to have a working knowledge of CocoaPods to include Yoga in your project. Unpacking Flexbox Flexbox, also referred to as CSS Flexible Box, was introduced to handle complex layouts on the web. One key feature is the efficient layout of content in a given direction and the “flexing” of its size to fit a certain space. Flexbox consists of flex containers, each having one or more flex items: Flexbox defines how flex items are laid out inside of a flex container. Content outside of a flex container and inside of a flex item is rendered as usual. Flex items are laid out in a single direction inside of a container (although they can be optionally wrapped). This sets the main axis for the items. The opposite direction is known as the cross axis. 
Flexbox allows you to specify how items are positioned and spaced on the main axis and the cross axis. justify-content specifies the alignment of items along the container’s main axis. The example below shows item placements when the container’s flex direction is row: flex-start: Items are positioned at the beginning of the container. flex-end: Items are positioned at the end of the container. center: Items are positioned in the middle of the container. space-between: Items are evenly spaced, with the first item placed at the beginning and the last item placed at the end of the container. space-around: Items are evenly spaced, with equal spacing around them. align-items specifies the alignment of items along the container’s cross axis. The example shows item placements when the container’s flex direction is row, which means the cross axis runs vertically: The items are vertically aligned at the beginning, center, or end of the container. These initial Flexbox properties should give you a feel for how Flexbox works. There are many more you can work with. Some control how an item stretches or shrinks relative to the available container space. Others can set the padding, margin, or even size. Flexbox Examples A perfect place to try out Flexbox concepts is jsFiddle, an online playground for JavaScript, HTML and CSS. Go to this starter JSFiddle and take a look. You should see four panes: The code in the three editors drives the output you see in the lower right pane. The starter example displays a white box. Note the yoga class selector defined in the CSS editor. These represent the CSS defaults that Yoga implements. Some of the values differ from the Flexbox w3 specification defaults. For example, Yoga defaults to a column flex direction, and items are positioned at the start of the container. Any HTML elements that you style via class="yoga" will start off in “Yoga” mode. 
<div class="yoga" style="width: 400px; height: 100px; background-color: white; flex-direction:row;"> </div> The div‘s basic style is yoga. Additional style properties set the size and background, and override the default flex direction so that items will flow in a row. In the HTML editor, add the following code just above the closing div tag: <div class="yoga" style="background-color: #cc0000; width: 80px;"></div> This adds a yoga styled, 80-pixel wide, red box to the div container. Tap Run in the top menu. You should see the following output: Add the following child element to the root div, right after the red box’s div: <div class="yoga" style="background-color: #0000cc; width: 80px;"></div> This adds an 80-pixel wide blue box. Tap Run. The updated output shows the blue box stacked to the right of the red box: Replace the blue box’s div code with the following: <div class="yoga" style="background-color: #0000cc; width: 80px; flex-grow: 1;"></div> The additional flex-grow property allows the box to expand and fill any available space. Tap Run to see the updated output with the blue box stretched out: Replace the entire HTML source with the following: <div class="yoga" style="width: 400px; height: 100px; background-color: white; flex-direction:row; padding: 10px;"> <div class="yoga" style="background-color: #cc0000; width: 80px; margin-right: 10px;"></div> <div class="yoga" style="background-color: #0000cc; width: 80px; flex-grow: 1; height: 25px; align-self: center;"></div> </div> This adds padding around the child items, adds a right margin to the red box, sets the height of the blue box, and aligns the blue box to the center of the container. Tap Run to view the resulting output: You can view the final jsFiddle here. Feel free to play around with other layout properties and values. Yoga vs. Flexbox Even though Yoga is based on Flexbox, there are some differences. Yoga doesn’t implement all of CSS Flexbox. It skips non-layout properties such as setting the color. 
Yoga has modified some Flexbox properties to provide better Right-to-Left support. Lastly, Yoga has added a new AspectRatio property to handle a common need when laying out certain elements such as images. Introducing YogaKit While you may want to stay in wonderful world wide web-land, this is a Swift tutorial. Fear not, the Yoga API will keep you basking in the afterglow of Flexbox familiarity. You’ll be able to apply your Flexbox learnings to your Swift app layout. Yoga is written in C, primarily to optimize performance and for easy integration with other platforms. To develop iOS apps, you’ll be working with YogaKit, which is a wrapper around the C implementation. Recall that in the Flexbox web examples, layout was configured via style attributes. With YogaKit, layout configuration is done through a YGLayout object. YGLayout includes properties for flex direction, justify content, align items, padding, and margin. YogaKit exposes YGLayout as a category on UIView. The category adds a configureLayout(block:) method to UIView. The block closure takes in a YGLayout parameter and uses that info to configure the view’s layout properties. You build up your layout by configuring each participating view with the desired Yoga properties. Once done, you call applyLayout(preservingOrigin:) on the root view’s YGLayout. This calculates and applies the layout to the root view and subviews. Your First Layout Create a new Swift iPhone project with the Single View Application template named YogaTryout. You’ll be creating your UI programmatically so you won’t need to use storyboards. Open Info.plist and delete the Main storyboard file base name property. Then, set the Launch screen interface file base name value to an empty string. Finally, delete Main.storyboard and LaunchScreen.storyboard. 
Open AppDelegate.swift and add the following to application(_:didFinishLaunchingWithOptions:) before the return statement: window = UIWindow(frame: UIScreen.main.bounds) window?.rootViewController = ViewController() window?.backgroundColor = .white window?.makeKeyAndVisible() Build and run the app. You should see a blank white screen. Close the Xcode project. Open Terminal and enter the following command to install CocoaPods if you don’t already have it: sudo gem install cocoapods In Terminal, go to the directory where YogaTryout.xcodeproj is located. Create a file named Podfile and set its content to the following: platform :ios, '10.3' use_frameworks! target 'YogaTryout' do pod 'YogaKit', '~> 1.5' end Run the following command in Terminal to install the YogaKit dependency: pod install You should see output similar to the following: Analyzing dependencies Downloading dependencies Installing Yoga (1.5.0) Installing YogaKit (1.5.0) Generating Pods project Integrating client project [!] Please close any current Xcode sessions and use `YogaTryout.xcworkspace` for this project from now on. Sending stats Pod installation complete! There is 1 dependency from the Podfile and 2 total pods installed. From this point onwards, you’ll be working with YogaTryout.xcworkspace. Open YogaTryout.xcworkspace then build and run. You should still see a blank white screen. Open ViewController.swift and add the following import: import YogaKit This imports the YogaKit framework. Add the following to the end of viewDidLoad(): // 1 let contentView = UIView() contentView.backgroundColor = .lightGray // 2 contentView.configureLayout { (layout) in // 3 layout.isEnabled = true // 4 layout.flexDirection = .row layout.width = 320 layout.height = 80 layout.marginTop = 40 layout.marginLeft = 10 } view.addSubview(contentView) // 5 contentView.yoga.applyLayout(preservingOrigin: true) This code does the following: - Creates a view and sets the background color. 
- Sets up the layout configuration closure. - Enables Yoga styling during this view’s layout. - Sets various layout properties including the flex direction, frame size, and margin offsets. - Calculates and applies the layout to contentView. Build and run the app on an iPhone 7 Plus simulator. You should see a gray box: You may be scratching your head, wondering why you couldn’t have simply instantiated a UIView with the desired frame size and set its background color. Patience my child. The magic starts when you add child items to this initial container. Add the following to viewDidLoad() just before the line that applies the layout to contentView: let child1 = UIView() child1.backgroundColor = .red child1.configureLayout{ (layout) in layout.isEnabled = true layout.width = 80 } contentView.addSubview(child1) This code adds an 80-pixel wide red box to contentView. Now, add the following just after the previous code: let child2 = UIView() child2.backgroundColor = .blue child2.configureLayout{ (layout) in layout.isEnabled = true layout.width = 80 layout.flexGrow = 1 } contentView.addSubview(child2) This adds a blue box to the container that’s 80 pixels wide but that’s allowed to grow to fill out any available space in the container. If this is starting to look familiar, it’s because you did something similar in jsFiddle. Build and run. You should see the following: Now, add the following statement to the layout configuration block for contentView: layout.padding = 10 This sets a padding for all the child items. Add the following to child1‘s layout configuration block: layout.marginRight = 10 This sets a right margin offset for the red box. Finally, add the following to child2‘s layout configuration block: layout.height = 20 layout.alignSelf = .center This sets the height of the blue box and aligns it to the center of its parent container. Build and run. You should see the following: What if you want to center the entire gray box horizontally? 
Well, you can enable Yoga on contentView‘s parent view which is self.view. Add the following to viewDidLoad(), right after the call to super: view.configureLayout { (layout) in layout.isEnabled = true layout.width = YGValue(self.view.bounds.size.width) layout.height = YGValue(self.view.bounds.size.height) layout.alignItems = .center } This enables Yoga for the root view and configures the layout width and height based on the view bounds. alignItems configures the child items to be center-aligned horizontally. Remember that alignItems specifies how a container’s child items are aligned in the cross axis. This container has the default column flex direction. So the cross axis is in the horizontal direction. Remove the layout.marginLeft assignment in contentView‘s layout configuration. It’s no longer needed as you’ll be centering this item through its parent container. Finally, replace: contentView.yoga.applyLayout(preservingOrigin: true) With the following: view.yoga.applyLayout(preservingOrigin: true) This will calculate and apply the layout to self.view and its subviews which includes contentView. Build and run. Note that the gray box is now centered horizontally: Centering the gray box vertically on the screen is just as simple. Add the following to the layout configuration block for self.view: layout.justifyContent = .center Remove the layout.marginTop assignment in contentView‘s layout configuration. It won’t be needed since the parent is controlling the vertical alignment. Build and run. You should now see the gray box center-aligned both horizontally and vertically: Rotate the device to landscape mode. Uh-oh, you’ve lost your center: Fortunately, there’s a way to get notified about device orientation changes to help resolve this. 
Add the following method to the end of the class: override func viewWillTransition( to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) { super.viewWillTransition(to: size, with: coordinator) // 1 view.configureLayout{ (layout) in layout.width = YGValue(size.width) layout.height = YGValue(size.height) } // 2 view.yoga.applyLayout(preservingOrigin: true) } The code does the following: - Updates the layout configuration with the size of the new orientation. Note that only the affected properties are updated. - Re-calculates and applies the layout. Rotate the device back to portrait mode. Build and run the app. Rotate the device to landscape mode. The gray box should now be properly centered: You can download the final tryout project here if you wish to compare with your code. Granted, you’re probably mumbling under your breath about how you could have built this layout in less than three minutes with Interface Builder, including properly handling rotations: You’ll want to give Yoga a fresh look when your layout starts to become more complicated than you’d like and things like embedded stack views are giving you fits. On the other hand, you may have long abandoned Interface Builder for programmatic layout approaches like layout anchors or the Visual Format Language. If those are working for you, no need to change. Keep in mind that the Visual Format Language doesn’t support aspect ratios whereas Yoga does. Yoga is also easier to grasp once you understand Flexbox. There are many resources where you can quickly try out Flexbox layouts before building them out on iOS with Yoga. Advanced Layout Your joy of building white, red, and blue boxes has probably worn thin. Time to shake it up a bit. In the following section, you’ll take your newly minted Yoga skills to create a view similar to the following: Download and explore the starter project. It already includes the YogaKit dependency. 
The other main classes are: - ViewController: Displays the main view. You’ll primarily be working in this class. - ShowTableViewCell: Used to display an episode in the table view. - Show: Model object for a show. Build and run the app. You should see a black screen. Here’s a wireframe breakdown of the desired layout to help plan things out: Let’s quickly dissect the layout for each box in the diagram: - Displays the show’s image. - Displays summary information for the series with the items laid out in a row. - Displays title information for the show with the items laid out in a row. - Displays the show’s description with the items laid out in a column. - Displays actions that can be taken. The main container is laid out in a row. Each child item is a container with items laid out in a column. - Displays tabs with items laid out in a row. - Displays a table view that fills out the remaining space. As you build each piece of the layout you’ll get a better feel for additional Yoga properties and how to fine tune a layout. Open ViewController.swift and add the following to viewDidLoad(), just after the shows are loaded from the plist: let show = shows[showSelectedIndex] This sets the show to be displayed. Aspect Ratio Yoga introduces an aspectRatio property to help lay out a view if an item’s aspect ratio is known. AspectRatio represents the width-to-height ratio. Add the following code right after contentView is added to its parent: // 1 let episodeImageView = UIImageView(frame: .zero) episodeImageView.backgroundColor = .gray // 2 let image = UIImage(named: show.image) episodeImageView.image = image // 3 let imageWidth = image?.size.width ?? 1.0 let imageHeight = image?.size.height ?? 
1.0 // 4 episodeImageView.configureLayout { (layout) in layout.isEnabled = true layout.flexGrow = 1.0 layout.aspectRatio = imageWidth / imageHeight } contentView.addSubview(episodeImageView) Let’s go through the code step-by-step: - Creates a UIImageView - Sets the image based on the selected show - Teases out the image’s size - Configures the layout and sets the aspectRatio based on the image size Build and run the app. You should see the image stretch vertically yet respect the image’s aspect ratio: FlexGrow Thus far you’ve seen flexGrow applied to one item in a container. You stretched the blue box in a previous example by setting its flexGrow property to 1. If more than one child sets a flexGrow property, then the child items are first laid out based on the space they need. Each child’s flexGrow is then used to distribute the remaining space. In the series summary view, you’ll lay out the child items so that the middle section takes up twice as much leftover space as the other two sections. Add the following after episodeImageView is added to its parent: let summaryView = UIView(frame: .zero) summaryView.configureLayout { (layout) in layout.isEnabled = true layout.flexDirection = .row layout.padding = self.padding } This code specifies that the child items will be laid out in a row and include padding. Add the following just after the previous code: let summaryPopularityLabel = UILabel(frame: .zero) summaryPopularityLabel.text = String(repeating: "★", count: showPopularity) summaryPopularityLabel.textColor = .red summaryPopularityLabel.configureLayout { (layout) in layout.isEnabled = true layout.flexGrow = 1.0 } summaryView.addSubview(summaryPopularityLabel) contentView.addSubview(summaryView) This adds a popularity label and sets its flexGrow property to 1. 
Build and run the app to view the popularity info: Add the following code just above the line that adds summaryView to its parent: let summaryInfoView = UIView(frame: .zero) summaryInfoView.configureLayout { (layout) in layout.isEnabled = true layout.flexGrow = 2.0 layout.flexDirection = .row layout.justifyContent = .spaceBetween } This sets up a new container view for the summary label child items. Note that the flexGrow property is set to 2. Therefore, summaryInfoView will take up twice as much remaining space as summaryPopularityLabel. Now add the following code right after the previous block: for text in [showYear, showRating, showLength] { let summaryInfoLabel = UILabel(frame: .zero) summaryInfoLabel.text = text summaryInfoLabel.font = UIFont.systemFont(ofSize: 14.0) summaryInfoLabel.textColor = .lightGray summaryInfoLabel.configureLayout { (layout) in layout.isEnabled = true } summaryInfoView.addSubview(summaryInfoLabel) } summaryView.addSubview(summaryInfoView) This loops through the summary labels to display for a show. Each label is a child item to the summaryInfoView container. That container’s layout specifies that the labels be placed at the beginning, middle, and end. Build and run the app to see the show’s labels: To tweak the layout to get the spacing just right, you’ll add one more item to summaryView. Add the following code next: let summaryInfoSpacerView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 1)) summaryInfoSpacerView.configureLayout { (layout) in layout.isEnabled = true layout.flexGrow = 1.0 } summaryView.addSubview(summaryInfoSpacerView) This serves as a spacer with flexGrow set to 1. summaryView has 3 child items. The first and third child items will take 25% of any remaining container space while the second item will take 50% of the available space. Build and run the app to see the properly tweaked layout: More Examples Continue building the layout to see more spacing and positioning examples. 
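Before moving on, the flexGrow arithmetic from the summary view (one part, two parts, one part of the leftover space) can be sketched as a tiny standalone function. This is a simplified model written in JavaScript, since Flexbox is easiest to poke at on the web; it is not Yoga's actual algorithm, which also handles flexShrink, flexBasis, and min/max constraints:

```javascript
// Simplified model of flex-grow: distribute the container's leftover
// space among children in proportion to their flexGrow values.
function distributeMainAxis(containerSize, children) {
  const used = children.reduce((sum, c) => sum + c.basis, 0);
  const totalGrow = children.reduce((sum, c) => sum + (c.flexGrow || 0), 0);
  const leftover = Math.max(0, containerSize - used);
  return children.map(c => {
    const share = totalGrow > 0 ? (c.flexGrow || 0) / totalGrow : 0;
    return c.basis + leftover * share;
  });
}

// The summaryView case: popularity label, info view, and spacer with
// flexGrow of 1, 2, 1 (intrinsic sizes set to zero for simplicity).
console.log(distributeMainAxis(400, [
  { basis: 0, flexGrow: 1 },
  { basis: 0, flexGrow: 2 },
  { basis: 0, flexGrow: 1 },
])); // → [ 100, 200, 100 ], i.e. 25%, 50%, 25% of the space
```

The same function reproduces the earlier red/blue box example: a fixed 80-point item with flexGrow 0 keeps its size, while a sibling with flexGrow 1 absorbs all the leftover space.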
Add the following just after the summaryView code: let titleView = UIView(frame: .zero) titleView.configureLayout { (layout) in layout.isEnabled = true layout.flexDirection = .row layout.padding = self.padding } let titleEpisodeLabel = showLabelFor(text: selectedShowSeriesLabel, font: UIFont.boldSystemFont(ofSize: 16.0)) titleView.addSubview(titleEpisodeLabel) let titleFullLabel = UILabel(frame: .zero) titleFullLabel.text = show.title titleFullLabel.font = UIFont.boldSystemFont(ofSize: 16.0) titleFullLabel.textColor = .lightGray titleFullLabel.configureLayout { (layout) in layout.isEnabled = true layout.marginLeft = 20.0 layout.marginBottom = 5.0 } titleView.addSubview(titleFullLabel) contentView.addSubview(titleView) The code sets up titleView as a container with two items for the show’s title. Build and run the app to see the title: Add the following code next: let descriptionView = UIView(frame: .zero) descriptionView.configureLayout { (layout) in layout.isEnabled = true layout.paddingHorizontal = self.paddingHorizontal } let descriptionLabel = UILabel(frame: .zero) descriptionLabel.font = UIFont.systemFont(ofSize: 14.0) descriptionLabel.numberOfLines = 3 descriptionLabel.textColor = .lightGray descriptionLabel.text = show.detail descriptionLabel.configureLayout { (layout) in layout.isEnabled = true layout.marginBottom = 5.0 } descriptionView.addSubview(descriptionLabel) This creates a container view with horizontal padding and adds a child item for the show’s detail. Now, add the following code: let castText = "Cast: \(showCast)"; let castLabel = showLabelFor(text: castText, font: UIFont.boldSystemFont(ofSize: 14.0)) descriptionView.addSubview(castLabel) let creatorText = "Creators: \(showCreators)" let creatorLabel = showLabelFor(text: creatorText, font: UIFont.boldSystemFont(ofSize: 14.0)) descriptionView.addSubview(creatorLabel) contentView.addSubview(descriptionView) This adds two items to descriptionView for more show details. 
Build and run the app to see the complete description: Next, you’ll add the show’s action views. Add a private helper method to the ViewController extension: func showActionViewFor(imageName: String, text: String) -> UIView { let actionView = UIView(frame: .zero) actionView.configureLayout { (layout) in layout.isEnabled = true layout.alignItems = .center layout.marginRight = 20.0 } let actionButton = UIButton(type: .custom) actionButton.setImage(UIImage(named: imageName), for: .normal) actionButton.configureLayout{ (layout) in layout.isEnabled = true layout.padding = 10.0 } actionView.addSubview(actionButton) let actionLabel = showLabelFor(text: text) actionView.addSubview(actionLabel) return actionView } This sets up a container view with an image and label that are center-aligned horizontally. Now, add the following after the descriptionView code in viewDidLoad(): let actionsView = UIView(frame: .zero) actionsView.configureLayout { (layout) in layout.isEnabled = true layout.flexDirection = .row layout.padding = self.padding } let addActionView = showActionViewFor(imageName: "add", text: "My List") actionsView.addSubview(addActionView) let shareActionView = showActionViewFor(imageName: "share", text: "Share") actionsView.addSubview(shareActionView) contentView.addSubview(actionsView) This creates a container view with two items created using showActionViewFor(imageName:text). Build and run the app to view the actions. Time to lay out some tabs. Add a new method to the ViewController extension: func showTabBarFor(text: String, selected: Bool) -> UIView { // 1 let tabView = UIView(frame: .zero) tabView.configureLayout { (layout) in layout.isEnabled = true layout.alignItems = .center layout.marginRight = 20.0 } // 2 let tabLabelFont = selected ? 
UIFont.boldSystemFont(ofSize: 14.0) : UIFont.systemFont(ofSize: 14.0) let fontSize: CGSize = text.size(attributes: [NSFontAttributeName: tabLabelFont]) // 3 let tabSelectionView = UIView(frame: CGRect(x: 0, y: 0, width: fontSize.width, height: 3)) if selected { tabSelectionView.backgroundColor = .red } tabSelectionView.configureLayout { (layout) in layout.isEnabled = true layout.marginBottom = 5.0 } tabView.addSubview(tabSelectionView) // 4 let tabLabel = showLabelFor(text: text, font: tabLabelFont) tabView.addSubview(tabLabel) return tabView } Going through the code step-by-step: - Creates a container with center-aligned horizontal items. - Calculates the desired font info based on whether the tab is selected. - Creates a view to indicate that a tab is selected. - Creates a label representing the tab title. Add the following code after actionsView has been added to contentView (in viewDidLoad): let tabsView = UIView(frame: .zero) tabsView.configureLayout { (layout) in layout.isEnabled = true layout.flexDirection = .row layout.padding = self.padding } let episodesTabView = showTabBarFor(text: "EPISODES", selected: true) tabsView.addSubview(episodesTabView) let moreTabView = showTabBarFor(text: "MORE LIKE THIS", selected: false) tabsView.addSubview(moreTabView) contentView.addSubview(tabsView) This sets up the tab container view and adds the tab items to the container. Build and run the app to see your new tabs: The tab selection is non-functional in this sample app. Most of the hooks are in place if you’re interested in adding it later. You’re almost done. You just have to add the table view to the end. 
Add the following code after tabsView has been added to contentView: let showsTableView = UITableView() showsTableView.delegate = self showsTableView.dataSource = self showsTableView.backgroundColor = backgroundColor showsTableView.register(ShowTableViewCell.self, forCellReuseIdentifier: showCellIdentifier) showsTableView.configureLayout{ (layout) in layout.isEnabled = true layout.flexGrow = 1.0 } contentView.addSubview(showsTableView) This code creates and configures a table view. The layout configuration sets the flexGrow property to 1, allowing the table view to expand to fill out any remaining space. Build and run the app. You should see a list of episodes included in the view: Where To Go From Here? Congratulations! If you’ve made it this far, you’re practically a Yoga expert. Roll out your mat, grab the extra special stretch pants, and just breathe. You can download the final tutorial project here. Check out the Yoga documentation to get more details on additional properties not covered, such as Right-to-Left support. The Flexbox specification is a good resource for more background on Flexbox. Flexbox learning resources is a really handy guide for exploring the different Flexbox properties. I do hope you enjoyed reading this Yoga tutorial. If you have any comments or questions about this tutorial, please join the forum discussion below!
https://www.raywenderlich.com/530-yoga-tutorial-using-a-cross-platform-layout-engine
In this chapter, we will focus on the comparison between multiprocessing and multithreading.

Multiprocessing is the use of two or more CPU units within a single computer system. It is the best approach to get the full potential out of our hardware, by utilizing the full number of CPU cores available in our computer system.

Multithreading is the ability of a CPU to manage the use of the operating system by executing multiple threads concurrently. The main idea of multithreading is to achieve parallelism by dividing a process into multiple threads.

The following table shows some of the important differences between them −

While working with concurrent applications, there is a limitation present in Python called the GIL (Global Interpreter Lock). The GIL never allows us to utilize multiple cores of the CPU, and hence we can say that there are no true threads in Python. The GIL is a mutex (mutual exclusion lock) which makes things thread safe. In other words, the GIL prevents multiple threads from executing Python code in parallel. The lock can be held by only one thread at a time, and a thread must acquire the lock before it can execute.

With the use of multiprocessing, we can effectively bypass the limitation caused by the GIL − by using multiprocessing, we are utilizing the capability of multiple processes, and hence we are utilizing multiple instances of the GIL. Due to this, we are no longer restricted to executing the bytecode of one thread within our programs at any one time.

The following three methods can be used to start a process in Python within the multiprocessing module −

Fork is a standard command found in UNIX. It is used to create new processes called child processes. A child process runs concurrently with the process that created it, called the parent process. These child processes are also identical to their parent processes and inherit all of the resources available to the parent. 
The following system calls are used while creating a process with fork:

fork() − A system call generally implemented in the kernel. It is used to create a copy of the calling process.

getpid() − This system call returns the process ID (PID) of the calling process.

The following Python script example will help you understand how to create a new child process and get the PIDs of the child and parent processes:

```python
import os

def child():
    n = os.fork()
    if n > 0:
        print("PID of Parent process is : ", os.getpid())
    else:
        print("PID of Child process is : ", os.getpid())

child()
```

Output:

```
PID of Parent process is :  25989
PID of Child process is :  25990
```

Spawn means to start something new. Hence, spawning a process means the creation of a new process by a parent process. The parent process continues its execution asynchronously, or waits until the child process ends its execution. Follow these steps for spawning a process:

- Import the multiprocessing module.
- Create the process object.
- Start the process activity by calling the start() method.
- Wait until the process has finished its work and exits by calling the join() method.

The following example of a Python script helps in spawning three processes:

```python
import multiprocessing

def spawn_process(i):
    print('This is process: %s' % i)
    return

if __name__ == '__main__':
    Process_jobs = []
    for i in range(3):
        p = multiprocessing.Process(target=spawn_process, args=(i,))
        Process_jobs.append(p)
        p.start()
        p.join()
```

Output:

```
This is process: 0
This is process: 1
This is process: 2
```

The forkserver mechanism is only available on those UNIX platforms that support passing file descriptors over Unix pipes. Consider the following points to understand the working of the forkserver mechanism:

- A server is instantiated when the forkserver mechanism is used for starting a new process.
- The server then receives commands and handles all the requests for creating new processes.
- For creating a new process, our Python program sends a request to the forkserver, and it creates a process for us.
At last, we can use this newly created process in our programs.

The Python multiprocessing module allows us to have daemon processes through its daemonic option. Daemon processes, or processes that are running in the background, follow a similar concept to daemon threads. To execute a process in the background, we need to set the daemonic flag to true. The daemon process will continue to run as long as the main process is executing, and it will terminate after finishing its own execution or when the main program is killed.

Here, we are using the same example as used for daemon threads. The only difference is the change of module from multithreading to multiprocessing and setting the daemonic flag to true. However, there is a change in output, as shown below:

```python
import multiprocessing
import time

def nondaemonProcess():
    print("starting my Process")
    time.sleep(8)
    print("ending my Process")

def daemonProcess():
    while True:
        print("Hello")
        time.sleep(2)

if __name__ == '__main__':
    nondaemonProcess = multiprocessing.Process(target=nondaemonProcess)
    daemonProcess = multiprocessing.Process(target=daemonProcess)
    daemonProcess.daemon = True
    nondaemonProcess.daemon = False
    daemonProcess.start()
    nondaemonProcess.start()
```

Output:

```
starting my Process
ending my Process
```

The output is different from the one generated by daemon threads, because the process in non-daemon mode has an output. Hence, the daemonic process ends automatically after the main program ends, to avoid the persistence of running processes.

We can kill or terminate a process immediately by using the terminate() method. We will use this method to terminate a child process, created with the help of a function, immediately before it completes its execution.
```python
import multiprocessing
import time

def Child_process():
    print('Starting function')
    time.sleep(5)
    print('Finished function')

P = multiprocessing.Process(target=Child_process)
P.start()
print("My Process has terminated, terminating main thread")
print("Terminating Child Process")
P.terminate()
print("Child Process successfully terminated")
```

Output:

```
My Process has terminated, terminating main thread
Terminating Child Process
Child Process successfully terminated
```

The output shows that the program terminates before the execution of the child process that was created with the help of the Child_process() function. This implies that the child process has been terminated successfully.

Every process in the operating system has a process identity known as a PID. In Python, we can find out the PID of the current process with the help of the following command:

```python
import multiprocessing
print(multiprocessing.current_process().pid)
```

The following example of a Python script helps find out the PID of the main process as well as the PID of a child process:

```python
import multiprocessing
import time

def Child_process():
    print("PID of Child Process is: {}".format(multiprocessing.current_process().pid))

print("PID of Main process is: {}".format(multiprocessing.current_process().pid))
P = multiprocessing.Process(target=Child_process)
P.start()
P.join()
```

Output:

```
PID of Main process is: 9401
PID of Child Process is: 9402
```

We can create threads by sub-classing the threading.Thread class. In addition, we can also create processes by sub-classing the multiprocessing.Process class. For using a process in a subclass, we need to consider the following points:

- We need to define a new subclass of the Process class.
- We can override the __init__(self [,args]) method.
- We need to override the run(self [,args]) method to implement what the process should do when it runs.
- We need to start the process by invoking the start() method.
```python
import multiprocessing

class MyProcess(multiprocessing.Process):
    def run(self):
        print('called run method in process: %s' % self.name)
        return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        P = MyProcess()
        jobs.append(P)
        P.start()
        P.join()
```

Output:

```
called run method in process: MyProcess-1
called run method in process: MyProcess-2
called run method in process: MyProcess-3
called run method in process: MyProcess-4
called run method in process: MyProcess-5
```

For simple parallel processing tasks in our Python applications, the multiprocessing module provides us the Pool class. The following methods of the Pool class can be used to spin up a number of child processes within our main program:

apply() − This method is similar to the .submit() method of ThreadPoolExecutor. It blocks until the result is ready.

apply_async() − When we need parallel execution of our tasks, we use the apply_async() method to submit tasks to the pool. It is an asynchronous operation that will not lock the main thread until all the child processes are executed. It returns a result object, and when the result becomes ready, a callback (if supplied) is applied to it. The callback must complete immediately; otherwise, the thread that handles the results will get blocked.

map() − Just like the apply() method, it also blocks until the result is ready. It is equivalent to the built-in map() function: it splits the iterable data into a number of chunks and submits them to the process pool as separate tasks.

map_async() − It is a variant of the map() method, as apply_async() is to the apply() method.

The following example will help you implement a process pool for performing parallel execution. A simple calculation of the square of a number is performed by applying the square() function through a multiprocessing.Pool. Then pool.map() is used to submit the 5 tasks, because the input is a list of integers from 0 to 4. The result is stored in p_outputs and printed.
```python
import multiprocessing

def square(n):
    result = n * n
    return result

if __name__ == '__main__':
    inputs = list(range(5))
    p = multiprocessing.Pool(processes=4)
    p_outputs = p.map(square, inputs)
    p.close()
    p.join()
    print('Pool :', p_outputs)
```

Output:

```
Pool : [0, 1, 4, 9, 16]
```
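The apply_async() variant described above can be sketched the same way. The sketch below assumes the default start method; the callback name collect is just an illustrative choice:

```python
import multiprocessing

def square(n):
    return n * n

def collect(result):
    # Callback invoked in the parent process when the worker's result
    # arrives; keep it fast, or the result-handling thread gets blocked.
    print('Got result:', result)

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2)
    async_result = pool.apply_async(square, (3,), callback=collect)
    print('Square of 3 is', async_result.get())  # get() blocks until ready
    pool.close()
    pool.join()
```

Unlike map(), apply_async() returns immediately with an AsyncResult, so the main process can keep working and only synchronize when it actually needs the value.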
https://www.tutorialspoint.com/concurrency_in_python/concurrency_in_python_multiprocessing.htm
on a lenny system, recently, someone else went in and reconfigured the authentication used for the mail system, and mail is currently working fine. i now want to add a new email client (IMP, part of the horde framework) but i obviously need to configure it to match whatever the new mail authentication protocol is. where can i find that?

for those who've actually worked with horde and IMP, here's what looks like the relevant snippet from the horde/imp/config/servers.php file:

$servers['imap'] = array(
    'name' => 'IMAP Server',
    'server' => 'localhost',
    'hordeauth' => false,
    'protocol' => 'imap/notls',
    'port' => 143,
    'folders' => '',
    'namespace' => '',
    'maildomain' => '[deleted].com',
    'smtphost' => 'localhost',
    'realm' => '',
    'preferred' => '',
    'dotfiles' => false,
    'hierarchies' => array()
);

i have good reason to believe that those settings are no longer correct, i just need to know how to figure out what they *should* be on this system.

rday

--
========================================================================
Robert P. J. Day                               Waterloo, Ontario, CANADA
Linux Consulting, Training and Kernel Pedantry.
Web page:
Twitter:
========================================================================
https://lists.debian.org/debian-user/2009/11/msg00230.html
We are gonna use our Intel Edison with an Arduino shield and a Grove shield to connect as many sensors as you want. GitHub :

Step 1: Communicate With Our Edison Through a Serial COM

First, let's communicate with our Edison through a serial COM. Install the correct drivers for Edison (). Then open your fav term software, we are gonna use Tera Term (), and set the correct COM port for Edison and the baud rate to 115200. And that's it, you are now talking with your Edison board. If nothing appears on your TeraTerm screen, just tap Enter. Then just write: root, to log in to your Linux Yocto.

Step 2: Installing a Repo

We are gonna use an unofficial repo for Intel Edison, please just follow the instructions from the next link: Once you complete the steps from the link, use your fav text editor on Linux. I'm using "nano"; to install it just write: opkg install nano.

Step 3: Sensors

The UV sensor that we used is a Grove UV sensor. You can see the wiki here:. We totally recommend checking this ebook to know more about Edison programming in a Linux environment: take a look at the libraries mraa and upm. You can find amazing examples in the next link for a Python environment:. In addition we're gonna use a Grove temperature sensor, just to provide more information to the user. Wiki:

Step 4: Connecting Sensors to Edison Boards

So, let's start to connect our Edison and sensors.

- Be careful with the way that you connect your wires to the shield. I recommend pairing the GND pin with the black wire, so it will be easier to identify each pin on the shield and on your sensor.

- Both are analog sensors, so you have to connect them to analog inputs, something like this.

- If you want to add a status LED you can do it. I added an RGB LED, connected in this way: each pin is a different color, and the longest pin is ground.
So, you will have something like this:

Step 5: Now open a new python file with: nano exe.py and paste the next code:

```python
import time, sys, signal, atexit
import mraa
# Lib for UV Sensor
import pyupm_guvas12d as upmUV
# Lib for Temp Sensor
import pyupm_grove as upmTemp

# Status Led Variables
RedLed = mraa.Gpio(3)
GreenLed = mraa.Gpio(4)

# Init sensors
myUVSensor = upmUV.GUVAS12D(0)
temp = upmTemp.GroveTemp(1)

# Operating voltage for UV sensor
GUVAS12D_AREF = 5.0
SAMPLES_PER_QUERY = 1024

# Set Led pins to OUTPUT
RedLed.dir(mraa.DIR_OUT)
GreenLed.dir(mraa.DIR_OUT)

# Handler for error exit
def SIGINTHandler(signum, frame):
    raise SystemExit

# Handler for ctrl+c
def exitHandler():
    RedLed.write(0)
    GreenLed.write(0)
    print "Exiting"
    sys.exit(0)

# Init our handlers
atexit.register(exitHandler)
signal.signal(signal.SIGINT, SIGINTHandler)

while(1):
    # Read temp sensor
    celsius = temp.value()
    # Read UV sensor
    s = myUVSensor.value(GUVAS12D_AREF, SAMPLES_PER_QUERY)
    s = s / 200
    print s
    # Turn on green LED if UV is OK
    if (s < 4):
        RedLed.write(0)
        GreenLed.write(1)
    # Turn on red LED if UV is not OK
    # You can set your own thresholds
    elif (s > 4):
        GreenLed.write(0)
        RedLed.write(1)
    # Print temp
    print celsius
    time.sleep(.5)
```

Run your exe.py file with: python exe.py

Step 6: Results

Now you have a little sun station that will indicate with a red LED if the sun is getting rude, or a green LED if the sun is happy! We take information about UV recommendations from the National Weather Service of the US:

Try it and tell me your results. BTW… INTEL ROCKS!

2 Discussions

3 years ago

So cool! How long have you been working with the Intel Edison? How do you like it compared to other micro controllers?

Reply 3 years ago

Hi, let me remember... Amm.. Exactly one day, haha. It's a great tool, you have all the capabilities of a Raspberry Pi, but more integration for communications. It's a perfect tool to develop projects, very stable btw. I totally recommend programming in Python if you want to send data using MQTT.
https://www.instructables.com/id/Intel-Edison-Sun-Station-UV-and-Temp-With-Python/
NAME
       udevadm trigger [options]
           Request device uevents, usually used to replay events at system coldplug.

       --verbose
           Print the list of devices which will be triggered.

       --dry-run
           Do not actually trigger the event.

       --retry-failed
           Trigger only the events which failed during a previous run.

       --socket=path
           Pass the synthesized events to the specified socket, instead of triggering a global
           kernel event. All available event values will be sent in the same format the kernel
           sends an uevent, or RUN+="socket:path" sends a message. If the first character of the
           specified path is an @ character, an abstract namespace socket is used, instead of an
           existing socket file.

       --env=KEY=value
           Pass an additional environment key to the event. This works only with the --socket
           option.

       udevadm settle [options]
           Watches the udev event queue, and exits if all current events are handled.

       --timeout=seconds
           Maximum number of seconds to wait for the event queue to become empty. The default
           value is 180 seconds.

       --env=KEY=value
           Set global variable.

       --max_childs=value
           Set the maximum number of events udevd will handle at the same time.

       --max_childs_running=value
http://manpages.ubuntu.com/manpages/intrepid/man8/udevadm.8.html
This is the mail archive of the elfutils-devel@sourceware.org mailing list for the elfutils project.

Mark Wielaard <mjw@redhat.com> writes:

> On Thu, 2015-04-16 at 17:42 +0200, Petr Machata wrote:
>> Mark Wielaard <mjw@redhat.com> writes:
>> > The only disadvantage of that seems to be that it would immediately
>> > introduce a v2 variant if we do it after a public release.
>>
>> I don't even think it would. Adding a new constructor doesn't break any
>> existing clients. API-wise it would be stable as well, this adds use
>> cases, doesn't change any (not even by way of implicit conversions from,
>> say, Dwarf_Die * to unit_iterator).
>
> That is a nice surprise. Looking for more background on C++ ABI
> compatibility issues I found the following overview handy:

Yes, that's a great resource.

> So you can always add new functions, including constructors, to classes
> and don't break ABI.

Non-virtual member functions, including constructors, are essentially the good old C functions with funny names. Things break around virtuals.

> But because our classes are now not pimpl'd we have
> to be careful not to change/add/delete any state data member field or we
> will break ABI (and will have to introduce a new v2 namespace).

Yes.

>> As long as your iteration is limited to a well-formed sub-tree (i.e. you
>> don't want to e.g. start at one leaf node and progress through half the
>> CU to another leaf node), the simple vector of offsets would, I think,
>> be still enough to keep all the context that you need. There might be
>> code that assumes that iteration starts and ends at CU DIEs, but that
>> could be transparently fixed.
>
> So, this does concern me a little. We have to be absolutely sure that
> keeping the state as a vector of offsets is all we ever need.

The only problem that I see is what to do with the m_cuit. For DIE-to-DIE iteration, it would probably be set to unit_iterator::end(). That now represents an end-iterator.
So the end iterator would be represented as m_die.addr == NULL instead--no valid DIE will have a NULL address.

>> I'm not sure. I don't think so, at least you would need a different way
>> of keeping track of context.
>
> Could we maybe abstract away the m_stack context (maybe pimpl it)?

Yes, if you decide to overload the iterator to do both logical and raw iteration. I don't think that's the right approach.

>> Personally I think adding a new iterator that builds on the raw one
>> is the best option.
>
>> One could also make the new iterator configurable, like Frank says in
>> his mail. Then you would have a single tree iterator type that can
>> behave either way.
>
> Somehow this appeals more to me. But I admit to not know why. Less
> classes feels better somehow. Or maybe it is just that the name
> die_tree_iterator sounds more generic than it actually is. Those might
> be totally misguided feelings though.

I meant the new iterator. Splitting the concerns into a raw iterator and a high-level one that builds upon the raw one absolutely seems the right approach to me. The raw iterator doesn't end up paying the overhead for logical iteration, and the complexities of both raw and logical iteration will be kept to their respective silos. Just to make sure I'm not talking out of my ass, I put together a logical_die_tree_iterator. It's on pmachata/iterators.

>> I still think exposing raw iterator is the right way forward, because it
>> is a logical building block. A logical tree iterator can be built on
>> top of that even without C support, if that's what's deemed best.
>
> If none of the above convinced you otherwise then lets just go with the
> code you wrote now. But only on the condition that you come up with a
> non-stupid name for the future "logical_die_tree_iterator" :)

Note that the raw/logical distinction applies at least to child_iterator as well, maybe also to unit_iterator.
I think something can be done to make logical_die_tree_iterator::move configurable, so that it's useful in the child iterator as well. (One could almost write a template for logical iteration that is parametrized by iterator type, but not quite: for the tree iterator one needs to skip the root that it's constructed from, but with the child iterator that's ignored implicitly. Besides, templates are an ABI nightmare.)

Regarding the names, I don't have any good ideas. In dwgrep I ended up with raw and cooked, which is in hindsight perhaps too cute. Maybe we can rename the current bunch to raw_*, and add the logical ones without a prefix.

Thanks,
Petr
https://sourceware.org/ml/elfutils-devel/imported/msg02378.html
Hello, I work in a Linux environment. In the current application I work on, I will have several consumers for some form of data. In my design each consumer will have its own concurrent_queue and the data generator will push data into the corresponding queues. When a consumer receives the data, it will call parallel_for to push its own consumers to make some calculations with the data. In order to have these separate queues work on their separate thread/task, may I create TBB tasks? Or should I go with pthreads? Should I consider queue consuming an I/O-bound task, so that I shouldn't consider TBB tasks? Thanks in advance.

Hello, from my perspective, the application you described resembles several pipeline schemes applied simultaneously. Specifically for such application designs, TBB Flow Graph was introduced in the library. With Flow Graph there is no need to introduce concurrent_queues; use flow graph nodes for this purpose instead. Consider a multifunction_node that has many queue_nodes as successors. In its body the multifunction_node decides in which successor(s) (queue_node) to put the next message. After every queue_node comes a function_node that makes some calculations with the received data. At the beginning of the graph there could be a source_node that fetches the data from some source (i.e. consumes it) and passes it to the multifunction_node. Regards, Aleksei

Thank you for the prompt reply. It is correct that a flow graph or pipeline might help here for my requirements. However, I am not used to those, so in order to have a fast solution, I wanted to continue with the queue. So do you have any comments on the solution with the concurrent queue? Thanks and regards.

Hi, could you please provide more info regarding your design then? I want to understand why you really need concurrent queues. Why not simply call parallel_for whenever the data is received, and not pass it through the concurrent queue first?
Regards, Aleksei

Hello, this is an option pricing engine. The application has several underlying instruments, and each instrument has several options defined under it. Market data is listened to by the application. Each change in market data for a specific instrument triggers pricing calculations for the options defined under that specific instrument. So the market data listener will put the market data change events in the corresponding queue of the specific underlying, and when the underlying gets that market data update from its specific queue, it will trigger price calculations for its options (through parallel_for). In conclusion, parallel_for will be used for the calculations of a specific instrument's options. But in order to have concurrency for different instruments, I plan to have the queues defined per instrument. As soon as the market data listener reads a market data update, it will write it to the specific queue and continue with parsing the next message (which might be an update for another instrument). regards. cinar.

Then whenever the data is put in the queue you can enqueue a TBB task that will retrieve that data from the queue and do further processing. However, please keep in mind that TBB tasks are a low-level interface and their direct usage is discouraged. The other variant that I see is to create additional threads that will wait on the "pop" operation of concurrent_bounded_queues. But in this case every thread will be dedicated to servicing its own queue and you would need to manage thread synchronization on your own. Whenever data is available, the thread could call, for example, the parallel_for algorithm for price calculation. I still encourage you to read the TBB Flow Graph component's documentation (...) as it should be very useful in your case. Regards, Aleksei

Hello Aleksei, regarding TBB flow graph usage for my use case: as soon as new market data is received, I will create a graph and fire it.
But shouldn't I wait for that graph to finish (wait_for_all()) and then process the next market data in this use case? Then I can't have parallel processing of market data for different instruments. Could you help me here?

Hello cinar, glad to hear that you decided to give a chance to a flow graph-based design for your app. First of all, you do not need to create a graph each time you receive new market data. It is enough to create it once and then put the data into it (through the try_put() API) each time a new portion of data needs to be processed. Of course, in order to be sure that a particular try_put() has made it all the way through the graph, graph.wait_for_all() must be called. However, it is not required that every wait_for_all() call correspond to every start of the graph. One can start the graph many times by putting messages (perhaps, concurrently) into its starting node(s) and then call wait_for_all() only once. This guarantees that all the messages that were put have been processed by the graph. In this sense, all the messages will be processed in parallel relative to each other by the graph.

As far as I understand, in your particular case with several instruments and several options defined for each instrument, you would like to separate the thread that listens to changes in market data from the thread that "consumes these updates" from the listener. I would try to have a sort of concurrent queue between them. The listener puts the data into the queue, while the consumer thread gets the data, try_put()s it into the graph and calls wait_for_all() on it. Now, once the thread returns from this blocking call, it goes back to the concurrent queue to consume (put into the graph again) all the updates that have been put into the queue by the listener while the consumer was busy in the wait_for_all() call.
In addition, you could make a sort of dispatching node (try multifunction_node) at the beginning of the graph, which will decide what specific instrument the changed data belongs to, and put the message into the successor node that corresponds to the calculation of options for that specific instrument. Regards, Aleksei

Hello Aleksei, I plan to create the solution as you suggested. I will have the multifunction_node and it will send the market data to the related underlying's function node, in which I will have the parallel_for for the option calculations. I wanted to get experienced with the multifunction_node and created the following solution as a proof of concept. As the function_nodes are created with the serial flag, I expected the numbers in the output to be in ascending order. But this isn't the case; the numbers are written in a random order. What do I miss here? Could you comment? regards.

```cpp
#include "tbb/flow_graph.h"
#include <chrono>
#include <thread>

using namespace tbb::flow;

typedef multifunction_node<int, tbb::flow::tuple<int, int> > multi_node;

struct MultiBody {
    void operator()(const int &i, multi_node::output_ports_type &op) {
        if (i % 2) {
            bool x = std::get<1>(op).try_put(i);  // put to odd queue
        } else {
            bool x = std::get<0>(op).try_put(i);  // put to even queue
        }
    }
};

int main() {
    graph g;

    tbb::flow::function_node<int, tbb::flow::continue_msg, tbb::flow::queueing>
        first_worker(g, tbb::flow::serial, [](const int &data) {
            printf("Process data with first worker: %d\n", data);
        });

    tbb::flow::function_node<int, tbb::flow::continue_msg, tbb::flow::queueing>
        second_worker(g, tbb::flow::serial, [](const int &data) {
            printf("Process data with second worker: %d\n", data);
        });

    multi_node node1(g, unlimited, MultiBody());
    make_edge(output_port<0>(node1), first_worker);
    make_edge(output_port<1>(node1), second_worker);

    for (int i = 0; i < 1000; ++i) {
        node1.try_put(i);
    }
    g.wait_for_all();
}
```

Hello cinar, multifunction_node has unlimited parallelism, which means that invocations of its body can be done concurrently. And when things are happening concurrently, all bets are off. Therefore, one possible situation is that the second body invocation of the multifunction_node completes first and puts its message to the corresponding function_node. Now imagine that a similar thing could happen with 1000 invocations. So, depending on how the OS schedules threads, their number, core states, etc., it could be perfectly legal for the thousandth message to come first to the function_node. In order to have ordering of messages, one would need to use the "sequencer_node" (...). This is the node that maintains global ordering across messages. Regards, Aleksei

Hello Aleksei, right after I wrote the message, I understood the situation. Thanks for your comment. If I don't use the multifunction node and instead create one function node (configured to run serially) per underlying, then as soon as I get the market data, I can find the corresponding underlying and try_put this market data to that node; inside the node I will call parallel_for for each of the options owned by that underlying. So, in summary, N function_nodes for N underlyings with parallel_for inside the function_nodes will be the solution for my requirements. Do you agree? regards.

Hi cinar, yes, sounds reasonable to me, except that I cannot understand why you need to make the function_nodes have serial semantics? Are these nodes really processing shared data? I would also try to make them with unlimited concurrency (perhaps, with unavoidable data copying) and look at which solution provides more throughput of the system. Regards, Aleksei.

hello Aleksei, market data for a specific underlying instrument must be processed serially (i.e. the price which happens first must be processed first.)
Consider the following scenario: for a specific underlying, the following two prices come one after the other: 10.0 and then 10.1. If I allow the function_node (which exists separately for each underlying instrument) to have parallel semantics, 10.1 might be processed first and then 10.0. The last processed price would then be 10.0, but the last price for the underlying is 10.1. So there is no shared data, but the order of the market data is important. Do I miss something here? regards.

Hi cinar, you got it right, if function_node.try_put() is done from the same thread, first for the 10.0 and then for the 10.1 price. Regards, Aleksei

hello Aleksei, after your recommendations, I started to use the flow graph for my async needs. I have several classes that have function nodes. All of the function nodes are leaf nodes, i.e. they are not connected to any other nodes. You said that I can have only one graph in the entire application. I have several questions, but the first one: I have code like this:

```cpp
Underlying* ul = ulit->second;
if (depthData.side == Side::Buy) {
    bool b = (ul->getBidworker())->try_put(depthData);
} else if (depthData.side == Side::Sell) {
    bool b = (ul->getAskworker())->try_put(depthData);
}
```

Inside the worker code, I made a mistake and caused an infinite loop. I thought that only those ul objects that have the infinite loops would have the problem (i.e. not processing any more data through these workers). But the application completely stopped, i.e. no other ul object received data through their workers. What do I miss here? Should I then have separate graph objects? Isn't this design correct for async behavior? regards.
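The per-instrument ordering constraint discussed in this thread is independent of TBB. As a rough language-agnostic sketch of the same idea in Python (the class and method names below are made up for illustration, not any library API), one single-worker executor per underlying keeps updates for one instrument serial and in arrival order, while different instruments proceed in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

class PerKeySerialDispatcher:
    """Run tasks sharing a key serially, in submission order, while
    tasks for different keys run in parallel on separate workers."""

    def __init__(self):
        self._executors = {}  # one single-worker executor per key

    def submit(self, key, fn, *args):
        # max_workers=1 guarantees FIFO, one-at-a-time execution per key
        ex = self._executors.setdefault(key, ThreadPoolExecutor(max_workers=1))
        return ex.submit(fn, *args)

    def shutdown(self):
        for ex in self._executors.values():
            ex.shutdown()  # waits for queued tasks to finish
```

With this shape, prices 10.0 then 10.1 submitted for the same underlying are always applied in that order, which is the property the serial function_node provides in the TBB design.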
https://community.intel.com/t5/Intel-oneAPI-Threading-Building/consuming-TBB-concurrent-queue/td-p/1149270
Parse, log and continue

I'm using cssutils in a script for making a survey of the way CSS is used in some circumstances. A nice feature of cssutils is that it will validate the CSS rules. I came across this nasty rule on the Yahoo! Canada web site:

{{{
!CSS
.y-lang-tgl { background-color:#fof6fb;}
}}}

Currently my code is along these lines for this part:

{{{
!python
def getStyleElementRules(self, htmltext):
    """Given an htmltext, return the CSS rules contained in the content"""
    compiledstyle = ""
    tree = etree.HTML(htmltext)
    styleelements = tree.xpath('//style')
    for styleelt in styleelements:
        if styleelt.text != None:
            compiledstyle = compiledstyle + styleelt.text
        else:
            logging.debug("STYLE ELEMENT NONE")
    if compiledstyle != None:
        cssutils.ser.prefs.indentClosingBrace = False
        cssutils.ser.prefs.keepComments = False
        cssutils.ser.prefs.lineSeparator = u''
        cssutils.ser.prefs.omitLastSemicolon = False
        stylesheet = cssutils.parseString(compiledstyle)
    else:
        raise ValueError("STYLE ELEMENT: no CSS Rules")
    return stylesheet
}}}

In this case I get "ValueError: invalid literal for int() with base 16: 'fo'" for the invalid rule. I would love, in these circumstances, to be able to catch all style errors, log the event, but continue the processing. Basically CSSStyleSheet could return CSSStyleSheet.cssRules and CSSStyleSheet.bogusRules, where bogusRules would be CSS rules which are really not understandable. OR maybe there is another way.

did you look into the feature? Not sure if that would help. I think there was a patch for the ValueError problem, but sadly I am currently a bit busy with other projects :( But maybe you could try the current bitbucket version? Let me know if this helps, maybe I could find the time to at least make a release with the ValueError fix. To be frank though, I won't have time for a bogusRules feature, simply because I would have no time for it.

I could install the new version from bitbucket. No worries. I'm currently using. Thanks.
If I have time after this program I'm working on, I might send you pull requests. btw, the new release 0.9.10 should at least parse the original css you tried (wrong hex color). Does not change your real question though, sorry.
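Pending a bogusRules-style feature, the immediate problem can be worked around by trapping the error per value and continuing. A minimal, self-contained sketch of the parse-log-continue pattern (this uses a hypothetical hex-color helper and plain stdlib, not the cssutils API):

```python
import logging

def parse_hex_color(token):
    """Parse a six-digit hex color like 'f0f6fb' into an RGB tuple.
    On invalid input such as 'fof6fb' (the Yahoo! typo above), log the
    event and return None instead of letting ValueError abort the run."""
    try:
        return tuple(int(token[i:i + 2], 16) for i in (0, 2, 4))
    except ValueError:
        logging.warning("invalid hex color %r - skipped", token)
        return None
```

The same try/except-and-log wrapper could be placed around the `cssutils.parseString` call itself, collecting the failing inputs into the "bogus" bucket the reporter asks for.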
https://bitbucket.org/cthedot/cssutils/issues/19/parse-log-and-continue
I've created a function which creates a new password, but I want a function to store it in a text file, from which it will be read when the program is started. Also, how do I make functions to edit or delete the password? Since the password has to be read back from the file to edit it and then stored again, I don't know how to create them.

def new_password():
    password = input("Enter a password: ")
    verify = input("Re-enter your password: ")
    while password != verify:
        print("Enter your password correctly.")
        password = input("Enter a password: ")
        verify = input("Re-enter your password: ")
    print("Password created")
    return password
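One way to approach the storage part: write the password to a text file after creation, and read it back at startup. A rough sketch (the file name and plain-text storage here are just for illustration; a real program should store a salted hash, never the raw password):

```python
def save_password(password, path="password.txt"):
    # Overwriting the file with a new value also serves as "edit":
    # create or change the password by writing, delete it by removing the file.
    with open(path, "w") as f:
        f.write(password)

def load_password(path="password.txt"):
    # Returns the stored password, or None if none has been created yet.
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return None
```

Editing then becomes load_password, prompt the user to confirm the old value, and save_password with the new one; deleting is simply os.remove(path).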
https://www.daniweb.com/programming/software-development/threads/361245/help-how-to-edit-delete-and-store-a-password
Love the material AppBar? Do you want to add more color to the app bar? Here's a gradient AppBar. It works just like the normal AppBar, also with actions, back buttons, and titles. So it's just your normal AppBar, but with a twist!

Add this to your package's pubspec.yaml file:

dependencies:
  gradient_app_bar: ^0.1.3

You can install packages from the command line with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:gradient_app_bar/gradient_app_bar.dart';

We analyzed this package on Aug 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status "completed". Detected platforms: Flutter (references Flutter and has no conflicting libraries).

Suggestions:
- Format lib/gradient_app_bar.dart: run flutter format to format it.
- The description is too long (-10 points). Search engines display only the first part of the description. Try to keep the value of the description field in your package's pubspec.yaml file between 60 and 180 characters.
https://pub.dev/packages/gradient_app_bar
This code shows you how to build a fast and performing control using C# and .NET 2.0. I wrote a similar control as an ActiveX once, using C++, ATL, and GDI, and wondered if it is possible to write performing code using .NET and GDI+. I needed it for another project, so I wrote this little control to show that it actually works.

The code consists of a C# application and a custom control. The custom control really is the interesting part. We derive from Control, as this doesn't give us all these properties we don't actually need, like a UserControl would, for example.

public partial class AGauge : Control

Well, there are still properties that show up in the designer that are not necessary. In C#, you can use the new keyword to get rid of them (Shadows in VB).

public new Boolean AllowDrop, AutoSize, ForeColor, ImeMode

For properties that you want to use but with a different behaviour, you can use the override keyword (if overridable) to tell the program to call this overridden property instead of the implementation of the base class, which in our case is the implementation in Control.

public override System.Drawing.Color BackColor..
public override System.Drawing.Font Font..
public override System.Windows.Forms.ImageLayout BackgroundImageLayout..

To be able to further customize the control in the designer, we need to add some properties of our own. E.g.,

[System.ComponentModel.Browsable(true),
 System.ComponentModel.Category("AGauge"),
 System.ComponentModel.Description("The value.")]
public Single Value..

The Browsable attribute tells the designer whether to show the property in the toolbox. The Category attribute tells the designer where to show the property if the categorized view is selected, and the Description attribute adds a description to the property that the designer can show in the toolbox.

An event can carry additional information that is sent to the "listening" program, e.g., the form's event handler for this event. We want the event to carry the number of the range the needle is in (if it changes from being in one range to being in another). To add some data to the event, we derive from the standard event args and add a variable which is initialized in the constructor. This will hold the extra information sent along.

public class ValueInRangeChangedEventArgs : EventArgs
{
    public Int32 valueInRange;
    public ValueInRangeChangedEventArgs(Int32 valueInRange)
    {
        this.valueInRange = valueInRange;
    }
}

The event handler "listening" for our event needs to be of a type that "understands" our event. With the delegate statement, we define this type.

public delegate void ValueInRangeChangedDelegate(Object sender, ValueInRangeChangedEventArgs e);

[Description("This event is raised if the value falls into a defined range.")]
public event ValueInRangeChangedDelegate ValueInRangeChanged;

The event is of the type we defined in the delegate statement. The Description attribute enables the designer to show a description for the event in the Toolbox.

The constructor is called when the control is created, e.g., before it will be shown in the designer. Here, we set the style of the control to enable double buffering. This isn't really necessary since we will do our own double buffering, but it doesn't hurt to do so.

public AGauge()
{
    InitializeComponent();
    SetStyle(ControlStyles.OptimizedDoubleBuffer, true);
}

We need to override some of the member functions. First, we override OnPaintBackground to ensure that the background is not painted each time the control is refreshed; this uses too much CPU even if double buffering is enabled. One drawback is that we need to handle the drawing of a background image ourselves, but this isn't too much of a problem.

protected override void OnPaintBackground(PaintEventArgs pevent)
{
}

If the control is resized, we need to refresh it, so we override OnResize.

protected override void OnResize(EventArgs e)
{
    drawGaugeBackground = true;
    Refresh();
}

The global variable drawGaugeBackground is set to true to tell the control to completely redraw itself. Refresh forces the control to redraw, or, if you like, to call OnPaint; under the hood, a Windows message is sent, but this is a different story.

Finally, we need to override OnPaint to show some output to the user. This is what our control really does: it shows output to the user. It doesn't handle user input like a scrollbar would; a scrollbar would override OnMouseMove, OnMouseDown, OnKeyPressed, and so on. OnPaint is the heart of our control.

protected override void OnPaint(PaintEventArgs pe)

OnPaint, which is called every time the control is redrawn, e.g., if the value of the gauge changed, determines whether it should completely redraw itself or simply paint the background part with the performant function DrawImage. If the background hasn't changed, it only needs to draw the needle, thus avoiding costly GDI+ functions being called every time. The background changes if, e.g., a property like a color has changed, or the control is resized.

So it really is possible to write fast and performing controls with GDI+ if we use double buffering and blitting (DrawImage). If you like VB better than C#, you can search for "SpeedyHMI" on SourceForge; this project I wrote contains this gauge written in VB. Download, build, and run.
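The core optimization, rebuilding the expensive background only when something actually invalidates it, is language-neutral. Here is a toy sketch of that caching rule (in Python, with stand-in objects; no GDI+ involved):

```python
class GaugeSketch:
    """Mimics the article's OnPaint logic: rebuild the background only
    when marked dirty, otherwise reuse the cached image (the cheap
    DrawImage-style path); the needle would be drawn on top each time."""

    def __init__(self):
        self.background = None
        self.dirty = True       # set by OnResize / property changes
        self.rebuilds = 0       # counts the expensive renders

    def _render_background(self):
        self.rebuilds += 1      # costly GDI+ drawing would happen here
        return "background"

    def paint(self):
        if self.dirty or self.background is None:
            self.background = self._render_background()
            self.dirty = False
        return self.background  # blit the cached image, then draw the needle
```

However many times paint() runs, the expensive path executes only when the dirty flag has been raised, which is exactly why the control stays responsive while the value (and hence the needle) changes rapidly.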
http://www.codeproject.com/Articles/17559/A-fast-and-performing-gauge?fid=384340&df=90&mpp=50&sort=Position&spc=Relaxed&tid=4576514&PageFlow=FixedWidth
If you have programmed in any other language before, you likely wrote some functions that "kept state". For those new to the concept, a state is one or more variables that are required to perform some computation but are not among the arguments of the relevant function. Object-oriented languages, like C++, make extensive usage of state variables within objects in the form of member variables. Procedural languages, like C, use variables declared outside the current scope to keep track of state. In Haskell, however, such techniques cannot be applied so directly, as they rely on mutable variables. Consider, for instance, rolling dice with the pseudo-random facilities of the IO monad:

import Control.Monad
import System.Random

rollDiceIO :: IO (Int, Int)
rollDiceIO = liftM2 (,) (randomRIO (1,6)) (randomRIO (1,6))

That function rolls two dice. Here, liftM2 is used to make the non-monadic two-argument function (,) work within a monad. The (,) is the non-infix version of the tuple constructor. Thus, the two die rolls will be returned (in IO) as a tuple.

Getting Rid of the IO Monad

Threading a generator through successive calls by hand is tedious. It is also error-prone: what if we pass one of the middle generators to the wrong line in the where clause? What we really need is a way to automate the extraction of the second member of the tuple (i.e. the new generator) and feed it to a new call to random. This is where the State monad comes into the picture.

Introducing State

Note: In this chapter we will use the state monad provided by the module Control.Monad.Trans.State of the transformers package. By reading Haskell code in the wild, you will soon meet Control.Monad.State, a module of the closely related mtl package. The differences between these two modules need not concern us at the moment; everything we discuss here also applies to the mtl variant.

The Haskell type State describes functions that consume a state and produce both a result and an updated state, which are given back in a tuple:

newtype State s a = State { runState :: s -> (a, s) }

Notice that we defined the data type with the newtype keyword, rather than the usual data. newtype can be used only for types with just one constructor and just one field. It ensures that the trivial wrapping and unwrapping of the single field is eliminated by the compiler. For that reason, simple wrapper types such as State are usually defined with newtype. Would defining a synonym with type be enough in such cases? Not really, because type does not allow us to define instances for the new data type, which is what we are about to do...

Instantiating the Monad

To use State with our dice, we specialize the state to the pseudo-random generator type (the functions get and put below retrieve and replace the current state, and evalState runs a state processor on an initial state and returns the result):

type GeneratorState = State StdGen

A value of type GeneratorState Int is then a processor of the generator state that yields an Int. The generator state itself is produced by the mkStdGen function. Note that GeneratorState does not specify what type of values we are going to extract, only the type of the state. We can now produce a function that, given a StdGen generator, outputs a number between 1 and 6.

rollDie :: GeneratorState Int
rollDie = do
    generator <- get
    let (value, newGenerator) = randomR (1,6) generator
    put newGenerator
    return value

- First, we take the pseudo-random generator out of the monadic context with get, so that we can manipulate it.
- Then, we use the randomR function to produce an integer between 1 and 6 using the generator we took; we also store the new generator graciously returned by randomR.
- We then set the state to be the newGenerator using the put function, so that the next call will use a different pseudo-random generator;
- Finally, we inject the result into the GeneratorState monad using return.

We can finally use our monadic die:

> evalState rollDie (mkStdGen 0)
6

Why have we involved monads and built such an intricate framework only to do exactly what fst $ randomR (1,6) already does? Well, consider the following function:

rollDice :: GeneratorState (Int, Int)
rollDice = liftM2 (,) rollDie rollDie

We obtain a function producing two pseudo-random numbers in a tuple. Note that these are in general different:

> evalState rollDice (mkStdGen 666)
(6,1)

Under the hood, the monads are passing state to each other. It was previously very clunky using randomR (1,6) because we had to pass state manually. Now, the monad is taking care of that for us. Assuming we know how to use the lifting functions, constructing intricate combinations of pseudo-random numbers (tuples, lists, whatever) has suddenly become much easier.

Pseudo-random values of different types

We can write a similarly "agnostic" function (analogous to rollDie) that provides a pseudo-random value of unspecified type (as long as it is an instance of Random):

getRandom :: Random a => GeneratorState a
getRandom = do
    generator <- get
    let (value, newGenerator) = random generator
    put newGenerator
    return value

Compared to rollDie, this function does not specify the Int type in its signature and uses random instead of randomR; otherwise, it is just the same. getRandom can be used for any instance of Random:

> evalState getRandom (mkStdGen 0) :: Bool
True
> evalState getRandom (mkStdGen 0) :: Char
'\64685'
> evalState getRandom (mkStdGen 0) :: Double
0.9872770354820595
> evalState getRandom (mkStdGen 0) :: Integer
2092838931

Indeed, it becomes quite easy to conjure all these at once:

allTypes :: GeneratorState (Int, Float, Char, Integer, Double, Bool, Int)
allTypes = liftM (,,,,,,) getRandom
           `ap` getRandom
           `ap` getRandom
           `ap` getRandom
           `ap` getRandom
           `ap` getRandom
           `ap` getRandom

Here we are forced to use the ap function, defined in Control.Monad, since there exists no liftM7 (the standard libraries only go to liftM5). As you can see, ap fits multiple computations into an application of the (lifted) n-element-tuple constructor (in this case the 7-item (,,,,,,)). To understand ap further, look at its signature:

ap :: (Monad m) => m (a -> b) -> m a -> m b

Remember then that the type a in Haskell can be a function as well as a value, and compare to:

> :type liftM (,,,,,,) getRandom
liftM (,,,,,,) getRandom :: (Random a1) =>
    State StdGen (b -> c -> d -> e -> f -> g -> (a1, b, c, d, e, f, g))

The monad m is obviously State StdGen (which we "nicknamed" GeneratorState), while ap's first argument is the function b -> c -> d -> e -> f -> g -> (a1, b, c, d, e, f, g). Applying ap over and over (in this case 6 times), we finally get to the point where b is an actual value (in our case, a 7-element tuple), not another function. To sum it up, ap applies a function-in-a-monad to a monadic value (compare with liftM, which applies a function not in a monad to a monadic value).

So much for understanding the implementation. The function allTypes provides pseudo-random values for all default instances of Random; an additional Int is inserted at the end to prove that the generator is not the same, as the two Ints will be different.

> evalState allTypes (mkStdGen 0)
(2092838931,9.953678e-4,'\825586',-868192881,0.4188001483955421,False,316817438)
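For contrast with the monadic version, the state threading that State hides corresponds to passing the generator along by hand. A sketch of that plumbing (shown in Python, with a toy linear congruential generator standing in for StdGen, purely to make the tuple-passing explicit):

```python
def next_seed(seed):
    # Toy linear congruential step standing in for StdGen's next state.
    return (seed * 1103515245 + 12345) % (2 ** 31)

def roll_die(seed):
    """Return (value in 1..6, new seed) - the tuple shape that
    Haskell's randomR returns and the State monad threads for us."""
    new = next_seed(seed)
    return new % 6 + 1, new

def roll_dice(seed):
    # Manual threading: each call must receive the previous call's seed.
    a, seed = roll_die(seed)
    b, seed = roll_die(seed)
    return (a, b), seed
```

Every function in this style must accept the seed and hand the new one on; forgetting to do so once silently repeats a roll. That bookkeeping is exactly what get and put remove from the Haskell code above.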
http://en.m.wikibooks.org/wiki/Haskell/Understanding_monads/State
SMO - .NET 4.0

I created a new application with VS2010, which by default uses .NET 4.0. I added some canned functions from an existing application which uses .NET 3.5. The canned functions use SMO, so I added the 10.0.0.0 SMO from the SDK that I have on my system. When I run the application it throws a "can't use 2.0 with 4.0" error. Is there a new SDK for SQL that uses .NET 4.0? Or another workaround? The SQL SDK I am using is August 2008.

Error message: Microsoft.SqlServer.Management.Smo.FailedOperationException: ExecuteNonQuery failed for Database 'TM-Data'. ---> System.IO.FileLoadException: Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.

John J. Hughes

Hi John,

Based on your description and the error message, I think the issue is more related to the .NET Framework. Please refer to the following threads:

Please remember to mark the replies as answers if they help and unmark them if they provide no help. Welcome to the All-In-One Code Framework! If you have any feedback, please tell us.

Jian,

Thanks for the response. I disagree that it is a .NET Framework issue per se. If the SQL group updates their SDK files to use .NET 4.0, I won't have a problem, so the question stands: is there, or is there going to be, an update to the SMO SDK? Now, as you have pointed out, there are many references to "useLegacyV2RuntimeActivationPolicy", and I agree it might work if placed in the machine.config, but it does not work in my case in the app.config. I will not have access to my clients' machines to update the machine.config file. If there is another way of telling the system to use both, I could try that, but I have not found it yet. Most of the solutions I have found are for web applications, and mine is WPF-based.
I am also concerned that if I require .NET 4.0 to be installed, which means .NET 2.0/3.0/3.5 doesn't have to exist (if I understand 4.0 correctly), setting the legacy value will cause a failure, which will require me to tell them to install 3.5 along with 2.0. That sort of defeats the "4.0 not requiring 2.0" concept. Thanks, but still looking for an answer.

John J. Hughes

Hi, I have the same problem. The ExecuteNonQuery method of the Database class failed; however, I was able to create tables and stored procedures using the Table and Procedure classes. I was only able to resolve the problem by downgrading to the 3.5 framework. I'm curious why Microsoft has not provided new SMO libraries for the 3.5 and 4.0 frameworks. There isn't much point upgrading to the new framework if the libraries we are dependent on are not being upgraded.

Thanks. I tried that and it did not work, but it is possible I put it in the wrong place? I was using a WPF application in .NET 4. Since then I have moved all my applications back to 3.5 SP1, since all my applications use SMO to some extent.

John J. Hughes

Hi Vindy,

I have the same problem trying to run CREATE PROCEDURE statements from a file with GO statements between each (in a .NET 4.0 project). I "discovered" SMO and thought I had found the solution, but it does not work, and creating an app.config with the attributes that you specified does not work. (The attribute useLegacyV2RuntimeActivationPolicy is not found in the schema.) Also, when I run my application with such a config file, no exception is thrown; it simply does not work. Can you please provide an update as to whether Microsoft has yet signed off on SMO usage with .NET 4.0? I want to close the door on this solution once and for all if, indeed, it is not to be used. Also, can you inform us of alternatives? As an administrator, I need to be able to install a database and its objects via a script file. I would like to avoid rewriting everything to use "exec sp_executesql". (I have huge scripts to run.)
Thank you!

Hello bk1234,

I was wondering if you would be so kind as to share the code of your solution. I cannot find a way to call ExecuteNonQuery for something called a Procedure class, which you've indicated is in the SMO collection of namespaces. (I have changed my target framework to 3.5 from 4.0 already, in preparation for trying out your solution.)

Thank you in advance, Peg

Trying to deploy the latest version of our product with .NET 4, I just ran into this mixed-mode problem and see all the postings from last summer (including yours) regarding the use of the "useLegacy" parameter. It seems like an ugly workaround, with potential problems if you are not careful. It also seems it was made available mostly to support those shops that are dragging their feet moving their assemblies to .NET 4. I find it hard to believe (or maybe not) that the offender in my case is Microsoft themselves, due to my need for SMO. To make it worse, this is January 2011 now. I am getting suspicious that this is a bad sign that maybe SMO is about to be shelved. I hope I'm wrong, but I find it hard to believe Microsoft can't get around to rebuilding their SMO assemblies with .NET 4. Can anyone at Microsoft calm our concerns that SMO is dying? If so, any idea when we can expect a .NET 4 compatible set of assemblies?

Since SMO is part of SQL Server, my guess would be when they release SQL 2012, unless it has 4.5. Most likely they will release SQL 2012 with 4.0 and shortly thereafter release .NET 4.5, and we will have the same problem with 4.5 as we now have with 4.0. You could try downloading the preview 3; maybe it has it. I have not had time to deal with it.

John J. Hughes II

Hi bkejser_,

I was just looking at the examples on the smolite.com site, but couldn't determine whether it will also work for running "*.sql" script files without the annoying "GO" errors, etc. Does SMOLite support running "*.sql" script files without modification? Thanks!

bkejser

Neat, will look at it, thanks...

John J. Hughes II

I just downloaded and added SMOLite to my project's references; however, the only way that I found to create a server object is by using the Server.ConnectionContext.ServerInstance property. Unfortunately, I too need to specify other connection parameters like timeout, username, password or trusted connection, etc. Is there any other way that I can instantiate the Server class? It would be perfect if I could use a SqlConnection object instead of just the ServerInstance. Thanks for putting this library together. The documentation seems really good too :) Did you use Sandcastle by any chance?

- Edited by Miguel Guzman, Thursday, February 09, 2012 7:08 PM

Hi, yes, Sandcastle produces the documentation. A second constructor on the Server class has been added to accept a SqlConnection instance. This is on the 1.1.4 release. Thanks.

If your project uses .NET Framework 4.0, you need to add the 10.0 version of Microsoft.SqlServer.Smo as a reference, which you can find in your installation directory, such as c:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies.

coder

Hello, have you tried copying version 11.0 locally? (Click on your reference, right-click, select Properties, and change the value of Copy Local from false to true.) Usually it works. With a little annoyance, your executable will be bigger. Advantage: it works for 10.5 and 11.0. I used it to build applications against 90 (2005) and 2008. Have a nice day.

Mark this post as helpful if it provides any help. Otherwise, leave it as it is.

It is now 2013; I have Visual Studio 2012 and SQL Server 2012 installed, and I have the same problem. The config file workaround doesn't work for test projects (without a hack). If there were an alternative approach to running an SQL script from C#, I would gladly go with that, but I can't see one.

EDIT: to answer my own question - I ended up using EF and code. Not ideal for scripting database create/fill type operations.
- Edited by acarlonvsn1, Monday, July 01, 2013 5:14 AM (update)

You do not need SMO to execute a SQL script in .NET. Can't you just use a SqlConnection and SqlCommand from the System.Data.SqlClient namespace? The SqlCommand can take a string that has all the stuff from a .sql file in it. Then you just call the ExecuteNonQuery method on the SqlCommand object. Am I missing something?

Ben Miller - SQL Server MVP - @DBADuck

Hi, I have already changed my app.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0"/>
  </startup>
</configuration>

I have also set the target framework to 4.0 and prerequisites to 4.0, but I still have the same problem. Anyone have ideas?
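The SqlConnection/SqlCommand suggestion has one catch that the thread keeps running into: GO is not T-SQL but a client-side batch separator, so a script file has to be split into batches before each one is sent through ExecuteNonQuery. The splitting rule itself is simple; here is a sketch of it (in Python, purely to illustrate the logic independently of the C# data-access code):

```python
import re

# Matches a line consisting only of the batch separator GO
# (optionally followed by a trailing comment), case-insensitively.
GO_LINE = re.compile(r"^\s*GO\s*(--.*)?$", re.IGNORECASE)

def split_sql_batches(script):
    """Split a T-SQL script into the batches between GO separators;
    each batch would then be run via its own ExecuteNonQuery call."""
    batches, current = [], []
    for line in script.splitlines():
        if GO_LINE.match(line):
            if current:
                batches.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))
    return [b for b in batches if b.strip()]
```

This is the same job sqlcmd and SMO's ExecuteNonQuery do internally (a full implementation would also skip GO inside string literals and comments, and honor the optional repeat count after GO).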
https://social.technet.microsoft.com/Forums/sqlserver/en-US/533f7044-1109-4b7a-a697-2621f23017d6/smo-net-40?forum=sqlsmoanddmo
Apache OpenOffice (AOO) Bugzilla - Issue 93503
Calc export of non-gregorian date to Excel loses format
Last modified: 2017-05-20 11:11:12 UTC

Steps to reproduce:
1. Start OpenOffice.org Calc
2. Tools -> Options -> Language Settings -> Languages and set the Locale setting to Thai
3. Type a date (e.g. 15/05/2008) in a cell
4. Click the cell and choose Format -> Cells -> Numbers, then change the following:
   4.1) Change Category to Date
   4.2) Change Language to Thai

Created attachment 56252 [details] Before save as Microsoft Excel 97/2000/XP (.xls)
Created attachment 56253 [details] After open in Excel
Created attachment 56254 [details] the bug document

Problem still exists in DEV300_m52 (OOo 3.2 code line). Confirming issue. Adjusting summary. Old summary "Calc export of Buddhist date to Excel is broken" could be misinterpreted as "this is a regression"...

Created attachment 70648 [details] fix date format export to excel

This is the patch to fix this issue.

diff -rup OOO320_m19_ori/sc/source/filter/excel/xestyle.cxx OOO320/sc/source/filter/excel/xestyle.cxx
--- OOO320_m19_ori/sc/source/filter/excel/xestyle.cxx 2010-05-26 23:30:44.000000000 +0700
+++ OOO320/sc/source/filter/excel/xestyle.cxx 2010-07-16 17:50:08.188256700 +0700
@@ -1327,8 +1327,15 @@ String XclExpNumFmtBuffer::GetFormatCode
     }
     aFormatStr = pEntry->GetMappedFormatstring( *mpKeywordTable, *mxFormatter->GetLocaleData() );
+    xub_StrLen pos;
     if( aFormatStr.EqualsAscii( "Standard" ) )
         aFormatStr.AssignAscii( "General" );
+    else if( ( pos = aFormatStr.SearchAscii("[~buddhist]", 0) ) != STRING_NOTFOUND )
+        aFormatStr.ReplaceAscii( pos, 11, "[$-107041E]", 11 );
+    else if( ( pos = aFormatStr.SearchAscii("[~hijri]", 0) ) != STRING_NOTFOUND )
+        aFormatStr.ReplaceAscii( pos, 8, "[$-1060401]", 11 );
+    else if( ( pos = aFormatStr.SearchAscii("[~persian]", 0) ) != STRING_NOTFOUND )
+        aFormatStr.ReplaceAscii( pos, 10, "[$-3060429]", 11 );
 }
 }
 else

reassigning for Daniel to evaluate the patch

The patch has a little drawback: it does not respect quoted
string inside the number format, e.g. the format "[~buddhist]" 0 gives numbers like [~buddhist] 1234 instead of 1234 in the cell. But I think we can live with that, as correct handling would require a complete format code parser...

This should be done in the number formatter instead, as Writer/Word also use those format codes. SvNumberformat::GetMappedFormatstring() in svl/source/numbers/zformat.cxx would be the right place, acting upon NF_SYMBOLTYPE_CALENDAR in the switch(pType[j]). Simply replacing the calendar string with a fixed "[$-...]" notation may not always work as desired, as the value in that notation depends on the actual locale selected; I'm not sure what happens in Excel if those don't match. Better would be to use the locale set at the number format and construct the proper string. For MS-LCID documentation in this context see for example However, we may live with fixed values for a first attempt if that sounds too complicated.

Thanks er, we will find a better solution by looking at SvNumberformat::GetMappedFormatstring() next. Accepting the issue. Trying to find the best solution for this problem.
Created attachment 70926 [details] fix buddhist calendar export in excel format

Patch to fix the export of the Buddhist calendar:

diff -rup OOO320m19-ori/svtools/source/numbers/zformat.cxx OOO320m19/svtools/source/numbers/zformat.cxx
--- OOO320m19-ori/svtools/source/numbers/zformat.cxx 2010-05-26 23:30:44.000000000 +0700
+++ OOO320m19/svtools/source/numbers/zformat.cxx 2010-08-02 17:35:47.000000000 +0700
@@ -4128,14 +4128,6 @@ String SvNumberformat::GetMappedFormatst
     }
     const SvNumberNatNum& rNum = NumFor[n].GetNatNum();
-    //
-    }
     USHORT nAnz = NumFor[n].GetnAnz();
     if ( nSem && (nAnz || aPrefix.Len()) )
@@ -4195,6 +4187,34 @@ String SvNumberformat::GetMappedFormatst
             aStr += '"';
         }
         break;
+    case NF_SYMBOLTYPE_CALDEL :
+        if ( pStr[j].EqualsAscii("[~") )
+        {
+( "0D07041E", aStr.Len() );
+            }
+            else
+            {
+                aStr.InsertAscii( "0107041E", aStr.Len() );
+            }
+        }
+        else // ignore other calendar
+        {
+            j++; // step to skip "]"
+        }
+        j++;
+    }
+    else
+    {
+        aStr += pStr[j];
+    }
+    break;
     default:
         aStr += pStr[j];
 }

I fixed this issue (Excel export with the Buddhist calendar) with this patch. The patch checks for "[~buddhist]" and [NatNum1] and converts them into an appropriate MS-LCID, e.g. "[$-0107041E]". It works successfully when opening the xls file in Excel. Please take a look at the patch to see whether we're on the right track. However, when I open the exported xls file with OpenOffice.org, the number format is converted to "[$-41E]" by the import filter, which does not restore any native number or calendar options. I will fix this import filter issue in another issue report. BTW, can you guide me to where the code is?

@tantai_thanakanok: Thanks, some nitpicks though:

0. Please always attach diffs as files of type text/patch; pasting diffs into the comment field may introduce unintended line breaks and other quirks.

1. The patch removes the Thai T NatNum modifier exported as "t" to Excel; that is unacceptable. This modifier is a specialty of the Thai Excel version we import and export.
Any specific reason you removed that?

2. When encountering NF_SYMBOLTYPE_CALDEL there is no need to additionally check for "[~"; NF_SYMBOLTYPE_CALDEL is only set in that context.

For import, the number format scanner/parser is used; see svl/source/numbers/zforscan.cxx with the strategic entry point ImpSvNumberformatScan::ScanFormat(). Be warned, that is hard stuff ;-) Btw, I suggest you use a recent DEV300 milestone for development; things change quite a bit between releases, and the number formatter isn't even in module svtools anymore but in svl instead.

Created attachment 71799 [details] fix excel import and export

In the export filter, I removed the code that checks the Thai T NatNum modifier and exports it as "t" to Excel, because in Excel the "t" format will format Thai numbers using native digits correctly in the Thai locale only. So I check the NatNum specifier and export it as an LCID to format the Thai number shape correctly in all locales. In the import filter, I added a check for the LCID and convert it into an appropriate calendar ID and NatNum before checking for the symbol type. I tried to add support for other calendar IDs in the export and import filters. However, I have the following problems in the export filter:

- for [~hijri], in GetMappedFormatString(), I always get LANGUAGE_ENGLISH_US in the cell, while actually it should be LANGUAGE_ARABIC_XXXX. So I cannot add the right numeral shape code into the LCID.
- for [~persian], in GetMappedFormatString(), the calendar ID [~persian] disappears from the format code. I don't know when and why.

DR->ER: patch changes the number formatter, please have a look.

Created attachment 72126 [details] fixed the case that excel exports a date with locale id [$-xxx0000]

We have built a Windows binary with the above patch and tested it both internally and with a few volunteers. The patched binary works correctly. ER: can you please review the code?

@tantai_thanakanok: 1. What exactly does this change in SvNumberformat::ImpGetLanguageType() attempt to fix?
- return (nNum && (cToken == ']' || nPos == nLen)) ? (LanguageType)nNum :
+ return ((cToken == ']' || nPos == nLen)) ? (LanguageType)nNum :

2. I don't like the change made to SvNumberformat::ImpNextSymbol():

+ xub_StrLen pos;
+ if ( ( pos = rString.SearchAscii( "[$-", 0 ) ) != STRING_NOTFOUND )
+ {
+     if ( rString.GetChar(pos+10) == ']' )
+     {
+         if ( rString.GetChar(4) == '0' && rString.GetChar(5) == '7' )
+         {
+             rString.InsertAscii( "[~buddhist]", pos+11 );
+         }
+         if ( rString.GetChar(3) == 'D' )
+         {
+             rString.InsertAscii( "[NatNum1]", pos+11 );
+         }
+     }
+ }

because it tries to search for the "[$-" substring upon each and every call of the method, completely ignoring the state machine used otherwise in the method. The case of "[$-" is already detected with case SsGetBracketed and cToken case '$' and rString.GetChar(nPos) == '-', where eSymbolType = BRACKET_SYMBOLTYPE_LOCALE is set. Furthermore, since it always searches from position 0, it detects the condition multiple times and could insert the new strings each time the subconditions are met. Currently this does not happen, because the LanguageType value is only 16 bit and not a full LCID; hence the result the calling method reinserts (the scanned value in proper case) is shorter than the scanned D07xxxx (which might in fact be considered a bug), and the problem doesn't occur.

- }

This condition happens only if either the user did input such a format or, more likely, the document was imported from a file originating from a Thai Excel version. For the latter case, in round-trip scenarios the 't' has to be preserved for older Excel versions. This was a requirement of the Thai user community. A conversion of NatNum==1 in LANGUAGE_THAI _without_ NF_KEY_THAI_T being set to "[$-D00041E]" may be fine though.

Thank you. I'll take a good look at your comments.

Created attachment 75035 [details] contains a date with locale id 0 ( [$-1070000] )

Steps to create a date with locale id 0:
1.) Open Microsoft Excel
2.) Type a date, e.g. 18/11/2010
3.)
right click > format cells > Number > Date, choose 14/03/2544
4.) check in category custom; the LCID is [$-1070000]
5.) click ok
6.) save in xls format
7.) open the xls file in OpenOffice.org Calc; the format code has a locale id 0 ([$-0])

@er
> 1. What exactly does this change in SvNumberformat::ImpGetLanguageType()
> attempt to fix?
> - return (nNum && (cToken == ']' || nPos == nLen)) ? (LanguageType)nNum :
> + return ((cToken == ']' || nPos == nLen)) ? (LanguageType)nNum :

The reason for this fix is to allow nNum == 0, i.e. LocaleId == 0, which means "system locale". The above attached file/use-case shows that this is possible. We've made the fix while also making sure that there's no other side-effect, e.g. cases with an invalid locale id still return the same.

Created attachment 75211 [details] updated version

> 2. I don't like the change made to SvNumberformat::ImpNextSymbol()

Corrected in the updated version of the patch.

> I guess the example should actually be written as [$-D07041E]dd/mm/yy because the number format '0' doesn't make sense for date.

And the result, after applying the patch, will be [$-41E][NatNum1][~buddhist]dd/mm/yy. Is this OK? Since in the current implementation, you can't use the format [$-41E][~buddhist][NatNum1]dd/mm/yy at all. The number format dialog will accept the format code if you enter it manually, but after you click the green checkmark it will do nothing. The sequence "[$-41E][~buddhist][NatNum1]dd/mm/yy" happens to be illegal in the current implementation. Using the sequence "[$-41E][NatNum1][~buddhist]dd/mm/yy" will produce the desired effect. We can fix this but it requires a lot more modifications.

>';
>- }

The above code is modified:

  && !LCIDInserted )
+ {
+     aStr.InsertAscii( "[$-D00041E]", 0 );
+ }

Which means that the filter will accept the number format "t0" (processed elsewhere) from Excel and convert it to "[$-41E][NatNum1]0" in Calc.
When converted back to Excel it will become "[$-D00041E]0" (not "t0"), which when imported into Calc will become "[$-41E][NatNum1]0" again. The difference is that when exporting to Excel, the code will no longer generate "t0" but the more correct (right?) "[$-D00041E]0".

@er: could you please review the patch.
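For orientation, the extended LCID being discussed packs three pieces into one bracketed hex string: a numeral-shape digit, a two-digit calendar code, and the 16-bit language id. The positional checks from the patch can be restated as a small sketch (hedged: the field names and mappings below are illustrative, not OOo's actual tables):

```javascript
// Hedged sketch: decompose an extended Excel LCID such as "[$-D07041E]".
// Per the discussion above: digit 0 is the numeral-shape code
// ('D' -> native digits, i.e. [NatNum1]), digits 1-2 are the calendar code
// ('07' -> [~buddhist]), and digits 3-6 are the 16-bit language id
// ('041E' = Thai, '0000' = system locale). Mapping names are illustrative.
function parseExtendedLcid(code) {
  const m = /^\[\$-([0-9A-F]{7})\]$/.exec(code);
  if (!m) return null; // plain short forms like "[$-41E]" are not extended LCIDs
  const d = m[1];
  return {
    natNum1: d[0] === 'D',
    calendar: d.slice(1, 3) === '07' ? 'buddhist' : 'gregorian',
    languageId: d.slice(3),
  };
}

console.log(parseExtendedLcid('[$-D07041E]'));
// -> { natNum1: true, calendar: 'buddhist', languageId: '041E' }
```

Under this reading, the "[$-1070000]" from the attached test case decodes to a buddhist calendar with language id 0000, i.e. exactly the "system locale" case the change to ImpGetLanguageType() is meant to allow.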
https://bz.apache.org/ooo/show_bug.cgi?id=93503
21 November 2007 16:18 [Source: ICIS news]

By Ed Cox

LONDON (ICIS news)--If ethylene (C2) sellers have their way then the European market can expect to see bi-monthly and first-quarter contract pricing above €1,000/tonne ($1,471/tonne) for the first time, said sources on Wednesday.

Bi-monthly talks for December-January were already under way, with one seller saying that a €100/tonne hike would not now be sufficient to cover the latest surge in upstream oil and naphtha costs.

A record high naphtha trade was reported on Tuesday at $837/tonne CIF (cost, insurance, freight) NWE (northwest Europe).

January Brent crude levels were hovering in the high $94s/bbl. WTI was above $97/bbl, once again prompting talk that it would break $100/bbl for the first time.

The October-November bi-monthly contract was agreed at €945/tonne FD (free delivered) NWE, the same level as the fourth-quarter contract.

Another ethylene producer had already signalled last week that €100/tonne would be the minimum possible increase, before the latest rise in costs.

The bi-monthly system has come under the microscope from other sellers and buyers not involved. One large producer and one consumer have already withdrawn this year, leaving just two sellers. One of the remaining two producers has already signalled that it also expected to withdraw, probably in a few months' time.

The main three declared buyers who settle prices are all in the polyvinyl chloride (PVC) business. One consumer already said a €50/tonne hike might be possible but another was not yet ready to reveal a target.

"It is true that 2007 was a better year for PVC margins than 2006 but we were still always following behind the ethylene increases," said the source. "The whole talk from sellers is based on oil and naphtha but they are not the only ones to suffer from this."
Even those players not involved in the bi-monthly talks will cast more than one eye over the number, which will inevitably be mentioned in subsequent quarterly talks. “The December-January settlement will once again test the credibility of the system. A significant increase is needed but we have been here before and the bi-monthly number has not been the right one. This time it’s crucial,” said one major manufacturer which remained focused on quarterly business. Most of the involved parties, however, have said they are happy with the system, praising it as an alternative or an additional pricing mechanism, which is more fluid than the quarterly system. This means the bi-monthly system has more of a merchant market nature than the quarterly system which is more focused on the typically backward integrated derivative polyethylene, advocates of the system have
http://www.icis.com/Articles/2007/11/21/9080621/europe-c2-could-break-1000t-for-first-time.html
Can you give an example of how to hook in to the Entity Framework data source provider?

Seconding MikeH's comment. Please provide an example. This leads on to more general feedback. Whilst it is great that you're regularly releasing updates, can you please ensure all new features are documented, as they aren't much use if I don't know how to use them (and if they already are documented, these blog announcements would be a great place to link to updated documentation). Looking forward to seeing what is possible with the new public providers. Cheers.

Did you guys ever officially include the OData T4 for C# functionality described here: blogs.msdn.com/…/announcing-odata-t4-for-c-preview-1.aspx What I'm trying to do is to auto-generate Knockout.js bindable javascript / coffeescript classes using OData metadata and T4 templates. Is this possible?

Great stuff. Any chance the Entity Framework would ever be able to support properties that are only part of the C# classes but not of the database itself? e.g. any sort of computed properties / complex types / classes and properties that would automatically fall back to the reflection provider? Also, is there any way of adding an "AfterSave" interceptor? This way you could trigger certain C# actions, such as sending an email, after a save operation was done on a certain resource.

@DotNetWise Would appreciate such a feature as well!

I just updated my WCF DS Toolkit fork with 5.5.0, if anyone is interested. Get it at: github.com/oising/WCFDS-Toolkit — there's a link to the NuGet package too.

@Josh Good question! I'm also using the prerelease version of the T4 template, looking forward to a released one.

@MikeH/Stu – I totally hear you. Getting better at documentation is something we're working on, and I encourage you to continue to comment if we DON'T get those samples/documents available to you.
So often it's easy for us to focus on getting the next feature out when there are hundreds of features people don't use simply because they aren't documented.

@Josh/Chris – T4 has been on the back burner as we've had other priorities. We're debating a lot right now about how we hook into VS and what that experience looks like. At any rate, the T4 experience is likely not going to get a lot of love in the next six months. I do apologize if that's a disappointment – it's certainly a disappointment to me!

@DotNetWise – While I can't comment on EF feature requests, they are much better than we are at taking both pull requests as well as feature requests. As far as the AfterSave interceptor, I'd like to see our extensibility story get a lot better in general, but as with many other things, there are finite resources and we wind up focusing on a limited set of things. I would hope our direction with the public providers will give you what you're looking for, so maybe we should let that mature through the next release before considering next steps. And just endorsing Oisin's work: he's done an awesome job updating the WCF DS toolkit and I'm sure he'd love to have both feedback and contributions. The toolkit is a great way to get some of the extensibility points we don't currently provide.

Hello, I have the following trouble (WCF Data Services 5.5.0):
• I have one project "A" containing complex type classes.
• I have one project "B" containing entity type classes.
• I have one project "C" where I have a custom WCF Data Service implementation based on the class types of "B", and they reference complex types in "A".

I don't know why, if I directly create resource types with one "NamedNamespace", the metadata refers to the resource types with the CLR type (InstanceType) namespace.
In consequence, when I would like to have all the resource types under this "NamedNamespace", I get 3 schemas with namespaces: "A" because of the complex type classes, "B" because of the entity type classes, and "NamedNamespace". I am not sure exactly why: it could be because of the resource type namespaces I built, or because of the namespace of the assembly containing the custom WCF Data Service implementation (they are the same this time). The worst part is that when I add a service reference I get an error, because the generated code namespaces are not recognized in code! Regards

Hello again: I forgot to include other questions with my last post.
1 – I have received errors browsing the custom data service when I don't set the resources as ReadOnly. I don't know why, but I think I am missing something.
2 – I had to refuse to use the public reflection provider because it does not allow me to skip the SetReadOnly step to be able to add more logic to the resource definitions. I commented about my scenario on the WCF Data Services 5.5.0 prerelease. Regards

I switched to using the WebApi OData solution as this seemed to be where MS was putting in more effort, but now you guys have just done a new release. Can you help me decide which one I should be using?! please!!!!

renew documentation for custom provider dev. please!!!! some words about ASYNC on SERVER SIDE.

Excellent Post.

Hi, I've found an issue similar to the one reported here: stackoverflow.com/…/wcf-dataservice-metadata-typemismatchrelationshipconstraint What seems to be happening is that when a FK is based on more than one column, WCF Data Services somehow forgets which column in one table links to the column in the other table, and if they have different datatypes, the reported error happens. I've posted a longer comment last weekend but it never showed up, so if you have both comments waiting to be published, choose one 🙂
https://blogs.msdn.microsoft.com/astoriateam/2013/05/30/wcf-data-services-5-5-0-release/
Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: SCM-ACTIVITY-1.1
- Component/s: SCM Activity
- Labels: None
- Number of attachments :

Description

I would love to see support for TFS for the SCM Activity plugin for Sonar.

Activity

The Team Foundation Power Tools ship with a command-line version of annotate called "tfpt.exe annotate". This has a /noprompt option to direct the output to the console. These links explain in more detail how Annotate can be used like Blame is used. Does this help?

So, if I correctly understand, then the command "tfpt annotate /noprompt filename" will produce output like this:

3 hatusr01 3/13/2006 public class Example {
4 buckh 4/13/2006 int field;
3 hatusr01 3/13/2006 }

? So, I can add support for TFS. And I'll come back to you for testing it.

I will happily test it

Done. A new snapshot version was deployed, so you can test it. In order to get it to work, "tfpt" should be in PATH.

I'm trying to use the SCM Activity Plugin but it fails because TFPT.exe (Power Tools 2008) does NOT use the format mentioned here. This is the format I get for tfpt annotate /noprompt:

3 public class Example {
4 int field;
3 }

As you can see, only the changeset is returned. As we previously discussed, I need the following information:
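As an aside, a line of the four-column annotate output above can be parsed naively. The sketch below (hypothetical JavaScript; the actual SCM Activity plugin is written in Java) also shows why the changeset-only output from Power Tools 2008 defeats such a parser:

```javascript
// Hedged sketch: parse one line of "tfpt annotate /noprompt" output in the
// "<changeset> <user> <date> <source line>" format shown in this thread.
function parseAnnotateLine(raw) {
  const m = /^(\d+)\s+(\S+)\s+(\d+\/\d+\/\d+)\s+(.*)$/.exec(raw);
  if (!m) return null; // e.g. Power Tools 2008, which emits only the changeset
  return { changeset: m[1], author: m[2], date: m[3], line: m[4] };
}

console.log(parseAnnotateLine('3 hatusr01 3/13/2006 public class Example {'));
// -> { changeset: '3', author: 'hatusr01', date: '3/13/2006', line: 'public class Example {' }

console.log(parseAnnotateLine('3 public class Example {'));
// -> null (the 2008 format carries no author or date to blame with)
```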
http://jira.codehaus.org/browse/SONARPLUGINS-373
Scrimage

This readme is for the 2.0.x versions. For 1.4.x please see the old README1.4.md.

A typical use case for this library would be creating thumbnails of images uploaded by users in a web app, or resizing a set of images to have a consistent size, or optimizing PNG uploads by users to apply maximum compression, or applying a grayscale filter in a print application. Scrimage mostly builds on the functionality provided by java.awt.* along with selected other third party libraries.

Image Operations

These operations all operate on an existing image, returning a copy of that image. The more complicated operations have a link to more detailed documentation.

Quick Examples

Reading an image, scaling it to 50% using the Bicubic method, and writing out as PNG

val in = ... // input stream
val out = ... // output stream
Image.fromStream(in).scale(0.5, Bicubic).output(out) // an implicit PNG writer is in scope by default

Reading an image from a java File, applying a blur filter, then flipping it on the horizontal axis, then writing out as a Jpeg

val inFile = ... // input File
val outFile = ... // output File
Image.fromFile(inFile).filter(BlurFilter).flipX.output(outFile)(JpegWriter()) // specified Jpeg

Padding an image with a 20 pixel border around the edges in red

val in = ... // input stream
val out = ... // output stream
Image.fromStream(in).pad(20, Color.Red)

Enlarging the canvas of an image without scaling the image. Note: the resize methods change the canvas size, and the scale methods are used to scale/resize the actual image. This terminology is consistent with Photoshop.

val in = ... // input stream
val out = ... // output stream
Image.fromStream(in).resize(600,400)

Scaling an image to a specific size using a fast non-smoothed scale

val in = ... // input stream
val out = ... // output stream
Image.fromStream(in).scaleTo(300, 200, FastScale)

Writing out a heavily compressed Jpeg thumbnail

implicit val writer = JpegWriter().withCompression(50)
val in = ... // input stream
val out = ... // output stream
Image.fromStream(in).fit(180,120).output(new File("image.jpeg"))

Printing the sizes and ratio of the image

val in = ... // input stream
val out = ... // output stream
val image = Image.fromStream(in)
println(s"Width: ${image.width} Height: ${image.height} Ratio: ${image.ratio}")

Converting a byte array in JPEG to a byte array in PNG

val in: Array[Byte] = ... // array of bytes in JPEG say
val out = Image(in).write // default is PNG
val out2 = Image(in).bytes // an implicit PNG writer is in scope by default with max compression

Converting an input stream to a PNG with no compression

implicit val writer = PngWriter.NoCompression
val in: InputStream = ... // some input stream
val out = Image.fromStream(in).stream

Input / Output

Scrimage supports loading and saving of images in the common web formats (currently png, jpeg, gif, tiff). In addition it extends java's image.io support by giving you an easy way to compress / optimize / interlace the images when saving.

To load an image simply use the Image companion methods on an input stream, file, filepath (String) or a byte array. The format does not matter as the underlying reader will determine that. Eg,

val in = ... // a handle to an input stream
val image = Image.fromInputStream(in)

To save an image, Scrimage requires an ImageWriter. You can use this implicitly or explicitly. A PngWriter is in scope by default.

val image = ... // some image
image.output(new File("/home/sam/spaghetti.png")) // use implicit writer
image.output(new File("/home/sam/spaghetti.png"))(writer) // use explicit writer

To set your own implicit writer, just define it in scope and it will override the default:

implicit val writer = PngWriter.NoCompression
val image = ... // some image
image.output(new File("/home/sam/spaghetti.png")) // use custom implicit writer instead of default

If you want to override the configuration for a writer then you can do this when you create the writer. Eg:

implicit val writer = JpegWriter().withCompression(50).withProgressive(true)
val image = ... // some image
image.output(new File("/home/sam/compressed_spaghetti.png"))

Metadata

Scrimage builds on the metadata-extractor project to provide the ability to read metadata. This can be done in two ways. Firstly, the metadata is attached to the image if it was available when you loaded the image from the Image.fromStream, Image.fromResource, or Image.fromFile methods. Then you can call image.metadata to get a handle to the metadata object. Secondly, the metadata can be loaded without an Image being needed, by using the methods on ImageMetadata.

Once you have the metadata object, you can invoke directories or tags to see the information.

Format Detection

If you are interested in detecting the format of an image (which you don't need to do when simply loading an image, as Scrimage will figure it out for you) then you can use the FormatDetector. The detector recognises PNG, JPEG and GIF.

FormatDetector.detect(bytes) // returns an Option[Format] with the detected format if any
FormatDetector.detect(in) // same thing from an input stream

iPhone Orientation

Apple iPhones have this annoying "feature" where an image taken when the phone is rotated is not saved as a rotated file. Instead the image is always saved as landscape with a flag set to whether it was portrait or not. Scrimage will detect this flag, if it is present on the file, and correct the orientation for you automatically. Most image readers do this, such as web browsers, but you might have noticed some things do not, such as intellij.

Note: This will only work if you use Image.fromStream, Image.fromResource, or Image.fromFile, as otherwise the metadata will not be available.

X11 Colors

There is a full list of X11 defined colors in the X11Colorlist class. This can be imported with import X11Colorlist._ and used when you want to programmatically specify colours, and gives more options than the standard 20 or so that are built into java.awt.Color.

Migration from 1.4.x to 2.0.0

The major difference in 2.0.0 is the way the outputting works. See the earlier input/output section on how to update your code to use the new writers.

Changelist:
- Changed output methods to use typeclass approach
- Removal of MutableImage and replacement of AsyncImage with ParImage
- Introduction of "Pixel" abstraction for methods that operate directly on pixels
- Addition of metadata
- Addition of io package

Benchmarks

Some noddy benchmarks comparing the speed of rescaling an image. I've compared the basic getScaledInstance method in java.awt.Image with ImgScalr and Scrimage. ImgScalr delegates to awt.Graphics2D for its rendering. Scrimage adapts the methods implemented by Morten Nobel.

The code is inside src/test/scala/com/sksamuel/scrimage/ScalingBenchmark.scala. The results are for 100 runs of a resize to a fixed width / height. As you can see, ImgScalr is the fastest for a simple rescale, but Scrimage is much faster than the rest for a high quality scale.

Including Scrimage in your project

Scrimage is available on maven central. There are several dependencies. One is the scrimage-core library which is required. The others are scrimage-filters and scrimage-io-extra. They are split because the image filters is a large jar, and most people just want the basic resize/scale/load/save functionality. The scrimage-io-extra package brings in readers/writers for less common formats such as BMP, Tiff or PCX.

Note: The canvas operations are now part of the core library since 2.0.1

Scrimage is cross compiled for scala 2.12, 2.11 and 2.10. If using SBT then you want:

libraryDependencies += "com.sksamuel.scrimage" %% "scrimage-core" % "2.x.x"
libraryDependencies += "com.sksamuel.scrimage" %% "scrimage-io-extra" % "2.x.x"
libraryDependencies += "com.sksamuel.scrimage" %% "scrimage-filters" % "2.x.x"

Maven:

<dependency>
    <groupId>com.sksamuel.scrimage</groupId>
    <artifactId>scrimage-core_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>com.sksamuel.scrimage</groupId>
    <artifactId>scrimage-io-extra_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>com.sksamuel.scrimage</groupId>
    <artifactId>scrimage-filters_2.11</artifactId>
    <version>2.1.0</version>
</dependency>

If you're using maven you'll have to adjust the artifact id with the correct scala version. SBT will do this automatically when you use %% like in the example above.

Filters

Scrimage comes with a wide array (or Iterable ;) of filters. Most of these filters I have not written myself, but rather collected from other open source imaging libraries (for compliance with licenses and / or attribution - see file headers), and either re-written them in Scala, wrapped them in Scala, fixed bugs or improved them.

Some filters have options which can be set when creating the filters. All filters are immutable. Most filters have sensible default options as default parameters. Click on the small images to see an enlarged example.

Composites

Scrimage comes with the usual composites built in. This grid shows the effect of compositing palm trees over a US mailbox. The first column is the composite with a value of 0.5f, and the second column with 1f. Note, if you reverse the order of the images then the effects would be reversed. The code required to perform a composite is simply:

val composed = image1.composite(new XYZComposite(alpha), image2)

Click on an example to see it full screen.

License

This software is licensed under the Apache 2 license, quoted below. Copyright 2013-2015.
https://index.scala-lang.org/sksamuel/scrimage/scrimage-core/2.1.4?target=_2.10
WMI Schemas

While the WMI Object Model defines how programs work with WMI, the WMI schemas define the actual implementation of WMI objects. Consider an analogy of a driving manual versus a map. A driving manual explains the techniques of driving a car, whereas a map illustrates where the destinations are and how to get to them. The driving manual is analogous to the object model, while maps are analogous to schemas. Understanding WMI schemas allows you to understand the relationships among the objects that WMI manages. Part of a WMI schema is illustrated in Figure B.3. In this case, specific types of network adapters are defined by extending a general definition of network adapters (CIM_NetworkAdapter).

Figure B.3 Part of a WMI schema. The Desktop Management Task Force.

Some important concepts to understand about WMI schemas are:

Namespace
Contains classes and instances. Namespaces are not physical locations; they are more like logical databases. Namespaces can be nested. Within a namespace, there can be other namespaces that define subsets of objects.

Class
A definition of objects. Classes define the properties, their data types, methods, associations, and qualifiers for both the properties and the class as a whole.

Instance
A particular manifestation of a class. Instances are more commonly thought of as data. Because instances are objects, the two terms are often used interchangeably. However, instances are usually thought of in the context of a particular class, whereas objects can be of any class.

MOF
Managed Object Format.

MOF file
A definition of namespaces, classes, instances, or providers, but in a text file. For more information, see the "Using MOF Files" section later in this appendix.

MOF compiling
Parsing a MOF file and storing the details in the WMI repository.

CIM
Common Information Model. For more information, see the Desktop Management Task Force Web site at.

Association
A WMI-managed relationship between two or more WMI objects.
You can extend a schema by adding new classes and properties that are not currently provided by the schema. For information about extending the WMI schema, see the WMI Tutorial at.

For More Information

Did you find this information useful? Please send your suggestions and comments about the documentation to smsdocs@microsoft.com.
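To make the MOF concepts above concrete, here is a hedged, hypothetical fragment (invented for illustration; not taken from any shipping schema) showing a namespace pragma, a class definition with a key property, and an instance:

```mof
// Hypothetical MOF fragment (illustrative only)
#pragma namespace("\\\\.\\root\\default")

class Sample_Disk
{
    [key] string DriveLetter;
    uint64 FreeSpace;
};

instance of Sample_Disk
{
    DriveLetter = "C:";
    FreeSpace = 1048576;
};
```

Compiling such a file (for example with the MOF compiler, mofcomp.exe) stores the class and instance definitions in the WMI repository — the "MOF compiling" step defined above.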
https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2003/cc180287(v=technet.10)?redirectedfrom=MSDN
by Sam Selikoff, November 1, 2016

Last month I hosted Ember NYC's project night, and the audience and I built a sticky chatbox component together. My goal wasn't to end up with a prefabricated solution for everyone to use; instead, I wanted to work through the problem as a group, discussing our thought process and opinions as we went along. I built this component last summer as part of a client prototype and found it an interesting and fun challenge. It looks like this:

Normally, when a scrollable <div> gets new content, its scrollbar is unaffected. You can see on the left that to keep reading, the user must scroll the chatbox each time a new message comes in. The sticky chatbox on the right is different. When the user is scrolled to the bottom, new messages appear and "push" the chat log up, so the user doesn't have to keep scrolling. But, if the user scrolls back up to read through older messages, the chat box doesn't snap to the bottom. The scrollbar stays in place, so the user isn't interrupted while reading through the log. This is the behavior found in most modern chat apps like Slack and Twitch.

When writing complex components, I like to start by identifying the various states in which the component can exist. Identifying state can be tricky; sometimes I find it helpful to try to explain how the interface should behave as if I were talking to a non-technical business or product person. How might we talk about this component together? When the user is scrolled to the bottom, new messages should show up. If they scroll up to read old messages, the chat should stay still. From this plain-English description, the states of the component jump out at us: the user is either scrolled to the bottom of the chat, or they are not.

Of course, we can think of other states in which the component could exist -- for example, if the user was scrolled to the top -- but those states aren't relevant here, since they don't affect behavior. The only states that affect behavior are the two we've listed.
Given these possible states, I gave my component an isScrolledToBottom boolean property that I could use to adjust the component's scrolling behavior. I then needed to update this property every time the state of the component changed. How might I achieve this?

The first thing that came to mind was an addon I had used in previous projects: DockYard's Ember In Viewport. This addon lets you render a component to the screen that fires an action whenever that component enters or exits the viewport. Sounds like just what I needed. If I rendered this component at the end of the chat list, I'd then be able to know whenever the user reached the bottom, and set the state accordingly. If they started scrolling up to read old messages, the component would leave the viewport, and I'd be able to use another action to update the state.

So, I wrote a simple {{in-viewport}} component using the mixin from the addon. You can see the full implementation of that component in the Twiddle below. I then used it in my component's template:

<!-- chat-box.hbs -->
<ul>
  {{#each messages as |message|}}
    <li>{{message.text}}</li>
  {{/each}}

  {{in-viewport
    did-enter=(action (mut isScrolledToBottom) true)
    did-exit=(action (mut isScrolledToBottom) false)}}
</ul>

All that remained was to write the component's behavior. If the user was scrolled to the bottom, the component's <ul> should scroll down each time a new message was rendered. The scrolling should happen after the new message was appended to the DOM — sounds like a perfect use case for the didRender hook:

// chat-box.js
import Ember from 'ember';

export default Ember.Component.extend({
  didRender() {
    this._super(...arguments);

    if (this.get('isScrolledToBottom')) {
      this.$('ul')[0].scrollTop = this.$('ul')[0].scrollHeight;
    }
  }
});

Et voilà! Our chat box lets the user read through the backlog, and then autoscrolls when they're all caught up.
To my delight, several members of the group from the project night suggested a completely different strategy. The idea was simple: check the state of the scrollbar the moment a new message arrives. If the scrollbar was at the bottom, autoscroll the chatbox; otherwise, leave it alone. We still needed to store the state of the scrollbar, so we kept the isScrolledToBottom property; but now, we needed to set this property whenever the component was about to re-render. It took a bit of experimentation. We started out by trying to calculate the scroll position at the beginning of the didRender hook. The problem here is that in didRender, the chatbox had already been updated -- so even if the user had been scrolled to the bottom, the fact that the new message had already been appended meant they no longer were. Eventually we realized that we needed to calculate the scroll position just before the new message was added to the DOM. We pulled up the guides for a component's re-render lifecycle hooks: Both willUpdate and willRender seemed like good candidates. Looking at the documentation for each, we found that willRender is called on both initial render and re-renders, while willUpdate is only called on re-renders. Since we only cared about new messages, we went with willUpdate. After a little more experimentation, we were able to write a formula to calculate the state of the scrollbar. 
We then used this formula to set the component's state in willUpdate:

import Ember from 'ember';

export default Ember.Component.extend({
  willUpdate() {
    this._super(...arguments);

    let box = this.$('ul')[0];
    let isScrolledToBottom = box.scrollTop + box.clientHeight === box.scrollHeight;

    this.set('isScrolledToBottom', isScrolledToBottom);
  },

  didRender() {
    this._super(...arguments);

    if (this.get('isScrolledToBottom')) {
      this.$('ul')[0].scrollTop = this.$('ul')[0].scrollHeight;
    }
  }
});

Now the state would be correct even after the new messages were appended, so the code in didRender worked just as before. Cool! Here's the Twiddle:

After going through both solutions, Luke Melia pointed out that spying on scroll behavior is quite expensive (which is why the Ember In Viewport addon makes you explicitly opt-in to this behavior). He said that using the first approach could significantly affect performance, especially on mobile. In many cases, then, the willUpdate solution would be the superior choice.

For our demo app, the willUpdate solution was sufficient — the only time we used the isScrolledToBottom property was when re-rendering the list. If you open the Twiddle, however, you'll notice that the state of our component can "lie": If you scroll the chatbox after a new message has been rendered, you'll notice that the isScrolledToBottom property won't change right away; in fact, it won't update to reflect the "true" state of the scrollbar until the next message arrives. If we were to add additional behavior to this component that relied on isScrolledToBottom being accurate, we could run into some issues.

How might this happen? You could imagine updating the interface to show an indicator that new messages had arrived. You'd want that indicator to clear once the user had read through all the messages.
In this case, there could be a long time between when the user had caught up and when the next message arrived, so the interface could fall "out of sync" with the actual state of the user's behavior. This is just one example of something that could affect your decision. Different approaches often favor competing goals, like performance versus accuracy. It's up to you to decide which strategy is most appropriate based on the unique priorities and needs of your application.

Building the sticky chatbox as a group helped us all see the problem with a bit more clarity. We learned:
- The willRender and willUpdate hooks are a great place to take measurements or perform visual calculations on a component's DOM before Ember re-renders it.
- The didRender hook is useful if you need to update a component's DOM in response to a re-render, for example after the component receives new attrs.

So reference the API docs often, keep pairing, and if you're in New York be sure to join us at Ember NYC's next Project Night!
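The scroll arithmetic at the heart of both approaches can be restated as a pure function; here is a small sketch with a plain object standing in for the DOM element:

```javascript
// A box is "scrolled to bottom" when the bottom edge of the visible window
// (scrollTop + clientHeight) reaches the total content height (scrollHeight).
function isScrolledToBottom(box) {
  return box.scrollTop + box.clientHeight === box.scrollHeight;
}

// Simulate a chatbox the user has scrolled all the way down:
const box = { scrollTop: 300, clientHeight: 200, scrollHeight: 500 };
console.log(isScrolledToBottom(box)); // true

// A new message grows the content before any scrolling happens -- which is
// exactly why the measurement must be taken in willUpdate, not didRender:
box.scrollHeight += 40;
console.log(isScrolledToBottom(box)); // false
```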
https://embermap.com/notes/63-building-a-sticky-chatbox
extern int __stdcall myfunction(int n, double x);

Then you compile and make a library of your Fortran subroutines, and link it to the VC++ project. Then use myfunction() in your .cpp file.

Thanks.

```c
/* File CMAIN.C */
#include <stdio.h>

extern int __stdcall FACT(int n);
extern void __stdcall PYTHAGORAS(float a, float b, float *c);

main()
{
    float c;

    printf("Factorial of 7 is: %d\n", FACT(7));
    PYTHAGORAS(30, 40, &c);
    printf("Hypotenuse if sides 30, 40 is: %f\n", c);
}
```

```fortran
C     File FORSUBS.FOR
C     This is your Fortran file; use the Fortran compiler to
C     create a library from this.
      INTEGER*4 FUNCTION Fact (n)
      INTEGER*4 n [VALUE]
      INTEGER*4 i, amt
      amt = 1
      DO i = 1, n
          amt = amt * i
      END DO
      Fact = amt
      END

      SUBROUTINE Pythagoras (a, b, c)
      REAL*4 a [VALUE]
      REAL*4 b [VALUE]
      REAL*4 c [REFERENCE]
      c = SQRT (a * a + b * b)
      END
```

As you see, you can specify [VALUE] or [REFERENCE], and Digital Visual Fortran creates a compatible interface. Two: if your Fortran code does not specifically say [VALUE], the default is by reference, so:

extern void __stdcall PYTHAGORAS (float *a, float *b, float *c);

But the above is for .c files, so if you are doing a C++ project (C++, not C), then:

extern "C" { void __stdcall PYTHAGORAS (float *a, float *b, float *c); }

to say "this is C" in C++. So those are the three problems you may have; other than these it should run smoothly.
https://www.experts-exchange.com/questions/10037271/Combine-Fortran-program-and-C-program-into-one.html
Recently, I published my very first library as an NPM module and I'd like to share my learnings and struggles with you. The module I'm talking about is ella-math, which is a library for vector and matrix calculations. These are useful for transformations in graphics programming, like scaling, moving and rotating shapes, or projecting things into 3D space.

Why?

To be honest, there really is no need to build my own vector/matrix calculation library. There are many of these already out there. But I wanted to do it anyway, just in order to learn how all this works.

Getting started

The language I've chosen to work with is TypeScript. I initialized the project from scratch:

```shell
mkdir ella-math
cd ella-math
git init
npm init
npm i typescript -D
./node_modules/.bin/tsc --init
```

In order to continue, I had to learn about the basics of the module systems used in JavaScript and by node.js and NPM.

Module systems

In the first place, NPM (and node.js) uses a module format that is called CommonJS. CommonJS syntax looks like this:

```javascript
// a vector class
class Vec {
  /* ... */
}

// a matrix class
class Mat {
  /* ... */
}

module.exports = { Vec, Mat };
```

In node.js, you can use the module by using require():

```javascript
const { Vec } = require('ella-math');

const a = new Vec(1, 2, 3);
const b = new Vec(4, 5, 6);
const c = a.add(b); // (5, 7, 9)
```

Unfortunately, if you try to use it directly in the browser by embedding the library via a <script> tag, that does not work, because the browser doesn't know about module or require. One early approach to get a module system into the browser without additional build steps was RequireJS. RequireJS also provides a require() function, but as the scripts are loaded asynchronously, the CommonJS syntax could not be used. So, RequireJS came up with a module definition format that looks different and wasn't compatible with CommonJS. It is called Asynchronous Module Definition (AMD).
RequireJS isn't very common anymore and you will only find it in legacy projects. One selling point for RequireJS was that it even ran on IE6.

So, there were two different module formats out there. And sometimes, you don't want to use a module system at all and just use the library via a <script> tag. A fix for this is called Universal Module Definition (UMD). The UMD format provides the best user experience because it supports CommonJS, AMD and also an export via a global variable when no module system is available. The catch is, a UMD module looks quite messy:

```javascript
(function (global, factory) {
  typeof exports === 'object' && typeof module !== 'undefined'
    ? factory(exports)
    : typeof define === 'function' && define.amd
    ? define(['exports'], factory)
    : ((global = global || self), factory((global.Ella = {})));
})(this, function (exports) {
  exports.Mat = Mat;
  exports.Vec = Vec;

  Object.defineProperty(exports, '__esModule', { value: true });
});
```

Because this is a mess, the ECMAScript specification came up with another module system: ES modules (ESM). It introduced the import and export keywords into the JavaScript language, which looks a lot cleaner:

```javascript
export class Vec {
  /* ... */
}

export class Mat {
  /* ... */
}
```

On the consumer side:

```javascript
import { Vec, Mat } from './ella-math.esm.js';
```

```html
<script type="module" src="index.esm.js"></script>
```

Because this specification introduced new keywords, older browsers cannot understand this flavor of JavaScript. To address this, there is the type="module" attribute that tells the browser the JavaScript is using the modern module system. Old browsers (like IE11) don't execute it at all.

So, in an ideal world, you would just use the modern module specification. But NPM uses CommonJS in the first place. Additionally, you may want to provide a way to directly use your library in the browser. So, I ended up still using the UMD module format but also providing the modern ES module format.
Compiling to UMD and ESM

To compile my library to a UMD module, I am using Rollup along with the official TypeScript plugin. Rollup supports both the UMD format and the ES module specification out of the box, so I can provide both flavors. The Rollup configuration (rollup.config.js) looks like this:

```javascript
import typescript from '@rollup/plugin-typescript';

export default {
  input: 'src/ella.ts',
  output: [
    {
      file: 'dist/ella.esm.js',
      format: 'es',
    },
    {
      file: 'dist/ella.umd.js',
      format: 'umd',
      name: 'Ella',
    },
  ],
  plugins: [typescript()],
};
```

My scripts section in the package.json looks like this:

```json
{
  "scripts": {
    "test": "jest",
    "docs": "typedoc && touch docs/.nojekyll",
    "build:types": "tsc -t esnext --moduleResolution node -d --emitDeclarationOnly --outFile dist/ella.d.ts src/ella.ts",
    "build:js": "rollup -c rollup.config.js",
    "build:minjs": "terser dist/ella.umd.js --compress --mangle > dist/ella.umd.min.js",
    "build": "npm run build:js -s && npm run build:minjs -s && npm run build:types -s"
  }
}
```

Because the official Rollup TypeScript plugin does not work with emitting type definitions, I'm using an additional build step to generate a .d.ts file containing all type definitions for TypeScript. Currently, the declaration and sourceMap options in tsconfig throw an error when using @rollup/plugin-typescript. There is a rewrite of the TypeScript plugin, rollup-plugin-typescript2, that fixes the issue with type definitions and source maps, but I haven't tried it.

To also provide a minified bundle, I'm using terser, which works quite well together with Rollup. Additionally, I'm using typedoc to generate API documentation from code comments using JSDoc notation.

Specify files for publishing

To specify which files are going to be published, you configure everything inside your package.json. The entry file goes in the main field, the type definitions go into the types field. Finally, you define a files field with an array containing the files and folders to be included:

```json
{
  /* ... */
  "main": "dist/ella.umd.js",
  "types": "dist/ella.d.ts",
  "files": ["src", "dist"]
  /* ... */
}
```

Publishing on NPM

After you have created an account on NPM, you can log in to your account inside your command line shell:

```shell
npm login
```

Then, choose a package name and an initial version in your package.json:

```json
{
  "name": "ella-math",
  "version": "1.0.0"
  /* ... */
}
```

You will have to choose a package name that is unique. To finally release your package on NPM, do:

```shell
npm publish
```

Counting versions up

After you have made your changes, make sure everything is committed and you are on the main branch. npm provides a built-in versioning tool:

```shell
npm version major # v1.0.0 -> v2.0.0
npm version minor # v1.0.0 -> v1.1.0
npm version patch # v1.0.0 -> v1.0.1

# publish the new version
npm publish
```

Use major for breaking changes (every time your API changes). The version command also creates a git tag for the specific version inside your repo. To push it to your remote repo, use:

```shell
git push
git push --tags
```

The released project

So, I must say, publishing a library on NPM that was created in TypeScript is not super straightforward. You can check out the whole project at GitHub:

A demo using this library is on CodePen:

Posted on by: Lea Rosema

Product Engineer by day, creative coder by night. Working at SinnerSchrader.

Discussion

Good write-up! Is there a specific reason why you include the src folder in your package? The way I do it is to only have dist files in the npm package, and on the other hand gitignore those for the GitHub repo. As in:

- repo: sources and config only
- package: dist only

Your consumers might enjoy the smaller package size if you leave out sources from the bundle :-)

Good point, I think excluding src from the package is the right way. Initially, I thought of a scenario where, when using TypeScript, one could somehow import directly from the ts sources and let the bundling be done by one's own transpiling toolchain, but that's out of scope of NPM, I guess.
At least, there is no clean way to do that via NPM (yet).

*cough* deno.land *cough* :-)

Thanks for the excellent article and background. Do you think creating npm modules from TypeScript is tedious? Why was UMD the winner? Are there easier ways if we just stick to esm2015?

Of course, I'd favor ESM over UMD, but I've had some issues using ESM, so I stuck to UMD for now. Main reason: coding playgrounds like JSFiddle or CodePen. Although you can use ESM in CodePen, it does not work as soon as you select some sort of transpiler for your code (Babel, TypeScript, CoffeeScript, ...). To get it to work with TypeScript, you will need to include the script as a UMD module in the settings panel. Also, the LTS version of node still displays the ExperimentalWarning. I guess as soon as it is safe to get rid of the UMD build, things may get easier.

Would you agree the whole JavaScript module system is a mess? Angular has its decorator class NgModule, which works, but it seems difficult because the error messages are very bad.

Great work! Just to say, I usually install my modules directly from GitHub with npm, in this case.

Cool, didn't know about that :)

I have started a project in pure js and npm (not yarn): github.com/nazimboudeffa/vitaminx

Any chance for a tutorial on how to perform matrix operations?

Best motivation ✌️

Epic

Good stuff! Perhaps I could use this for some machine learning tasks.

I've just learnt about the documentation generator today, but it seems you still have to write more TSDoc (or JSDoc).

Yes, it's not complete yet. Also, the JSDoc (or TSDoc) generates API documentation, which is nice for autocomplete and looking up functions in an API reference. But there should also be some kind of detailed documentation about how to get started using it.
https://dev.to/terabaud/i-created-and-my-first-typescript-library-and-published-it-on-npm-44c
This C program finds the length of a string without using the built-in function. That is, it counts the total number of characters present in a given string without calling a library routine such as strlen(). The built-in functions are those already provided by the standard library, which can be used directly.

Here is the source code of the C program that finds the length of a string without using the built-in function. The C program is successfully compiled and run on a Linux system. The program output is also shown below.

```c
/*
 * C program to find the length of a string without using the
 * built-in function
 */
#include <stdio.h>

void main()
{
    char string[50];
    int i, length = 0;

    printf("Enter a string \n");
    /* note: gets() is unsafe and was removed in C11; fgets() is preferred */
    gets(string);

    /* keep going through each character of the string till its end */
    for (i = 0; string[i] != '\0'; i++) {
        length++;
    }

    printf("The length of a string is the number of characters in it \n");
    printf("So, the length of %s = %d\n", string, length);
}
```

```
$ cc pgm52.c
$ a.out
Enter a string
Sanfoundry
The length of a string is the number of characters in it
So, the length of Sanfoundry = 10
```
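The same walk can be written with pointer arithmetic instead of an index counter. This variant is my own sketch, not part of the original program:

```c
#include <stddef.h>

/* Walk to the terminating '\0'; the distance walked is the length. */
size_t my_strlen(const char *s)
{
    const char *p = s;
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}
```

Called with "Sanfoundry", my_strlen returns 10, matching the program output shown above.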
http://www.sanfoundry.com/c-program-length-string-without-built-in-function/
Searching Active Directory with .NET (Visual Studio 2005) - Posted: Nov 02, 2005 at 6:56 PM

Federal Developer Evangelist Robert Shelton takes you through a 12-minute walkthrough/demonstration of how to search Active Directory for users, groups, and other AD objects. This demonstration uses the DirectoryServices namespace of the .NET framework. The demonstration uses Visual Studio 2005, but the code will also work as written for Visual Studio 2003.

You can find the code at my blog:

My other AD screencasts:
- Adding users to AD with .NET
- Adding groups and users to groups with .NET
- AD SearchFilter (querying) syntax:
- List of SearchScope options:

~ Robert Shelton

I couldn't find the VB code on the net, so I just ported it myself:

```vb
' If you want to search in a specific path, here's the right spot.
' Just insert the path into "As New DirectoryEntry("LDAP://OU=Accounting,DC=World,DC=com")"
Dim Entry As New DirectoryEntry
Dim Searcher As New DirectorySearcher(Entry)
Dim AdObj As SearchResult

Searcher.SearchScope = SearchScope.Subtree
Searcher.Filter() = "(ObjectClass=user)"

For Each AdObj In Searcher.FindAll
    Label1.Text = Label1.Text & "CN=" & AdObj.Properties("CN").Item(0) & " | Path=" & AdObj.Path & "<br>"
Next
```

I coded it with ASP.NET for a web application. But the app does exactly the same as the first example. I hope you can use it.
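The Searcher.Filter string follows LDAP search filter syntax, and any user-supplied value spliced into such a filter should be escaped. Here is a small, language-agnostic sketch in Python; the escaping table follows RFC 4515, but the helper names are mine and are not part of the DirectoryServices API:

```python
# Minimal LDAP filter builder (helper names are hypothetical).
# RFC 4515 requires escaping backslash, *, parentheses and NUL in values.
_ESCAPES = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}


def escape_filter_value(value: str) -> str:
    """Escape a raw value for safe use inside an LDAP search filter."""
    return "".join(_ESCAPES.get(ch, ch) for ch in value)


def user_filter(cn_prefix: str) -> str:
    """Filter for user objects whose CN starts with the given prefix."""
    return f"(&(objectClass=user)(cn={escape_filter_value(cn_prefix)}*))"
```

The (&...) form combines clauses with logical AND, just like combining "(ObjectClass=user)" with a name clause in the VB example.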
http://channel9.msdn.com/Blogs/RobertShelton/Searching-Active-Directory-with-NET-Visual-Studio-2005?format=progressive
Red Hat Bugzilla – Bug 595162
Upgrade libisofs from 0.6.20 to 0.6.32
Last modified: 2014-02-02 17:14:39 EST

Description of problem:
Please upgrade libisofs from 0.6.20 to 0.6.32 so that we don't ship an already heavily outdated and old libisofs library in RHEL 6.

Version-Release number of selected component (if applicable):
libisofs-0.6.20-2.1.el6

How reproducible:
Every time

Actual results:
libisofs-0.6.20-2.1.el6

Expected results:
libisofs-0.6.32-1.el6 or newer.

Here we have the changes in the API:

662: Thomas Schmitt 2010-05-03  {release-0.6.32} Version leap to 0.6.32
661: Thomas Schmitt 2010-05-01  Removed most of the development remarks of 0.6.31
660: Thomas Schmitt 2010-04-30  Corrected calls of functions iso_lsb(), iso_msb(), iso_bb() which used
659: Thomas Schmitt 2010-04-29  Making an educated guess whether the boot images contain a boot info table.
658: Thomas Schmitt 2010-04-25  Closed memory leak about boot catalog node.
657: Thomas Schmitt 2010-04-25  New API calls el_torito_get_isolinux_options(), el_torito_get_boot_media_type()
656: Thomas Schmitt 2010-04-23  Fixed a bug introduced with previous revision 655.
655: Thomas Schmitt 2010-04-23  New API calls el_torito_set_id_string(), el_torito_get_id_string(),
654: Thomas Schmitt 2010-04-22  Bug fix of previous revision 653:
653: Thomas Schmitt 2010-04-22  Added support for multiple boot images.
652: Thomas Schmitt 2010-04-20  Changed new API call from iso_image_set_boot_platform_id() to
651: Thomas Schmitt 2010-04-20  New API call iso_image_set_boot_platform_id().
650: Thomas Schmitt 2010-04-17  Version leap to 0.6.31
649: Thomas Schmitt 2010-04-17  {release-0.6.30} Version leap to 0.6.30
648: Thomas Schmitt 2010-04-17  Updated genealogy of isohybrid MBR production.
647: Thomas Schmitt 2010-04-16  New API calls iso_read_opts_load_system_area() and iso_image_get_system_area()
646: Thomas Schmitt 2010-04-15  Enhanced configure tests for iconv. Now aborting if not available.
645: Thomas Schmitt 2010-04-14  Extended effect of iso_write_opts_set_pvd_times() parameter uuid to
644: Thomas Schmitt 2010-04-13  Implemented no_force_dots and separate omit_version_numbers for
643: Thomas Schmitt 2010-04-10  New bit1 of iso_write_opts_set_system_area() options.
642: Thomas Schmitt 2010-04-07  New API call iso_write_opts_set_pvd_times().
641: Thomas Schmitt 2010-04-06  New API call iso_write_opts_set_system_area() acts like mkisofs option -G
640: Thomas Schmitt 2010-02-14  Adjusted copyright and license statements in single files.
639: Thomas Schmitt 2010-02-13  Added copyright statements to technical specs in doc directory.
638: Thomas Schmitt 2010-02-12  Updated license situation of make_isohybrid_mbr.c
637: Thomas Schmitt 2010-02-12  Changed comments from "Linux" to "GNU/Linux" where appropriate.
636: Thomas Schmitt 2010-02-10  Version leap to 0.6.29
635: Thomas Schmitt 2010-02-10  {release-0.6.28} Version leap to 0.6.28
634: Thomas Schmitt 2010-02-08  Updated license statement about our legal view and future licenses.
633: Thomas Schmitt 2010-02-08  Wrapped #endif mark into comment characters.
632: Thomas Schmitt 2010-02-08  One more safety precaution about checksum indice.
631: Thomas Schmitt 2010-02-08  Bug fix: Random checksum index could sneak in via boot catalog node
630: Thomas Schmitt 2010-02-08  Corrected a wrong constant with checksum indice of Iso_File_Src.
629: Thomas Schmitt 2010-02-08  Reacted on compiler warnings from gzpLinux on kernel 2.6
628: Thomas Schmitt 2010-02-05  Avoiding unnecessary use of pthread_exit()
627: Thomas Schmitt 2010-02-04  Forcing use of /usr/local on FreeBSD by LDFLAGS and CPPFLAGS.
626: Thomas Schmitt 2010-01-27  Changed leftover text which disallowed GPLv3.
625: Thomas Schmitt 2010-01-27  Removed more occurences of old restriction to GPLv2.
624: Thomas Schmitt 2010-01-20  Version leap to 0.6.27
623: Thomas Schmitt 2010-01-20  {release-0.6.26} Version leap to 0.6.26
622: Thomas Schmitt 2010-01-19  Updated copyright year and removed ban to derive GPLv3 or later.
621: Thomas Schmitt 2010-01-19  Removed outdated defunct code piece
620: Thomas Schmitt 2010-01-17  Bug fix: Invalid checksum tags were preserved when the new session produced
619: Thomas Schmitt 2010-01-11  More graceful reaction on filesystems where ACL are not enabled.
618: Thomas Schmitt 2010-01-10  Described scdbackup checksum tags in checksums..txt
617: Thomas Schmitt 2010-01-07  Changed configure test for zlib from compress2() to compressBound()
616: Thomas Schmitt 2009-12-31  Invalidating checksum buffer in case that image generation gets cancled.
615: Thomas Schmitt 2009-12-31  Introduced a default definition for PATH_MAX.
614: Thomas Schmitt 2009-12-04  Clarified that absolute paths to the local filesystem are expected.
613: Thomas Schmitt 2009-10-08  Version leap to 0.6.25
612: Thomas Schmitt 2009-10-08  {release-0.6.24} Version leap to 0.6.24
611: Thomas Schmitt 2009-10-08  Removed now unused function util.c:strcopy()
610: Thomas Schmitt 2009-10-07  Bug fix: short Rock Ridge names got stripped of trailing blanks when loaded
609: Thomas Schmitt 2009-10-07  Removed the remaining single blanks from empty PVD id strings.
608: Thomas Schmitt 2009-10-05  Added code for repairing "_" in all three PVD id file names.
607: Thomas Schmitt 2009-10-05  Avoided to convert empty PVD components copyright_file_id, abstract_file_id, or
606: Thomas Schmitt 2009-10-05  Avoided to return NULL by API calls iso_image_get_volset_id(), ...,
605: Thomas Schmitt 2009-09-17  Expanded new API call iso_write_opts_set_scdbackup_tag
604: Thomas Schmitt 2009-08-31  New API call iso_write_opts_set_scdbackup_tag()
603: Thomas Schmitt 2009-08-30  Updated description of libisofs checksum processing
602: Thomas Schmitt 2009-08-25  Version leap to 0.6.23
601: Thomas Schmitt 2009-08-25  {release-0.6.22} Version leap to 0.6.22

$ rpm -q libisofs
libisofs-0.6.32-1.el6.x86_64
$ ls -l /usr/lib64/libisofs.so*
lrwxrwxrwx. 1 root root     18 May 26 12:35 /usr/lib64/libisofs.so -> libisofs.so.6.28.0
lrwxrwxrwx. 1 root root     18 May 26 12:35 /usr/lib64/libisofs.so.6 -> libisofs.so.6.28.0
-rwxr-xr-x. 1 root root 288080 May 26 12:28 /usr/lib64/libisofs.so.6.28.0

Red Hat Enterprise Linux Beta 2 is now available and should resolve the problem described in this bug report. This report is therefore being closed with a resolution of CURRENTRELEASE. You may reopen this bug report if the solution does not work for you.
https://bugzilla.redhat.com/show_bug.cgi?id=595162
> On Feb 18, 2015, at 12:14 AM, Alex K <address@hidden> wrote:
>
> On Fri Feb 13 2015 at 1:53:40 PM Ben Abbott <address@hidden> wrote:
>
>> On Feb 13, 2015, at 9:14 AM, Ben Abbott <address@hidden> wrote:
>>
>>> On Feb 13, 2015, at 2:26 AM, Alex K <address@hidden> wrote:
>>>
>>> On Wed Feb 11 2015 at 4:36:55 PM Ben Abbott <address@hidden> wrote:
>>> > On Feb 11, 2015, at 1:39 PM, Alex K <address@hidden> wrote:
>>> >
>>> > Hi!
>>> >
>>> > Is it possible to run a custom function every time any plot is changed?
>>> > For example, when running `plot`, `axis` etc., if they modify a figure on
>>> > the screen I would like to detect that and save it to a new file.
>>> >
>>> > I use gnuplot as backend, so any solution involving configuring gnuplot
>>> > will also work for me.
>>> >
>>> > Thanks!
>>> > Alex
>>>
>>> If you can provide more details, someone may have a more efficient solution
>>> ... but for a general solution I'd add a listener to the figure's "children"
>>> property.
>>>
>>> See "help addlistener" to see how to do that.
>>>
>>> Ben
>>>
>>> Hi Ben,
>>>
>>> Thanks for the reply! For more context, I'm trying to make Octave auto-save
>>> plots to files on the disk every time they are updated. I've tried
>>> addlistener but it doesn't seem to work:
>>>
>>> function my_handler
>>>   fprintf ("my_handler called\n");
>>> endfunction
>>> addlistener (gcf, "children", {@my_handler, "my string"})
>>> sombrero
>>>
>>> the handler is not getting called.
>>>
>>> Best,
>>> Alex
>>
>> Ok, that's embarrassing. I get no error, and it doesn't work for me either.
>>
>> There are many examples of this working correctly in Octave's sources (take
>> a look at the m-file code of plotyy or subplot for examples).
>>
>> Does anyone see a problem with Alex's example?
>>
>> Ben
>
> I filed a bug report.
>
> Ben
>
> I guess I found a way to get notified when the plot changes - it has
> a "__modified__" property that I can add a listener to.
> However, if I call "print" from the handler, it doesn't work, giving a strange
> Ghostscript error. Any ideas?
>
> Code:
>
> function my_handler(h, dummy)
>   if (strcmp(get(h, "__modified__"), "off"))
>     disp 'saving snapshot...';
>     print(gcf, 'snapshot.png', '-debug', '-dpng', '-S640,480')
>   endif
> endfunction
>
> addlistener (gcf, "__modified__", @my_handler);
>
> Error message:
>
> plot(sin(-10:0.1:10))
> saving snapshot...
> Ghostscript command: ''
> gnuplot-pipeline: ' ; rm /tmp/oct-eyO49W.eps'
> GPL Ghostscript 9.10: Unrecoverable error, exit code 1
> rm: cannot remove '/tmp/oct-eyO49W.eps': No such file or directory
> ---------- output begin ----------
> Error: /undefinedfilename in (/tmp/oct-eyO49W.eps)
> Operand stack:
>
> Execution stack:
>    %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval--
>    2 %stopped_push --nostringval-- --nostringval-- --nostringval--
>    false 1 %stopped_push
> Dictionary stack:
>    --dict:1178/1684(ro)(G)-- --dict:0/20(G)-- --dict:77/200(L)--
> Current allocation mode is local
> Last OS error: No such file or directory
> ----------- output end ----------
> error: print: failed to print
> error: called from:
> error:   /usr/share/octave/3.8.1/m/plot/util/private/__gnuplot_print__.m at line 165, column 7
> error:   /usr/share/octave/3.8.1/m/plot/util/print.m at line 423, column 14
> error:   my_handler at line 15, column 5
>
> Thanks!
> Alex

Try adding a "drawnow ()" to the handler before the print command.

Ben
https://lists.gnu.org/archive/html/help-octave/2015-02/msg00054.html
12 December 2008 11:37 [Source: ICIS news]

By Nigel Davis

LONDON (ICIS news)--The sharp downturn of the past months is being felt across broad swathes of the sector, so pockets of demand and margin resilience are not easy to spot. Some segments, however, might be expected to do better than the crowd. The industrial gases and agrochemicals businesses (excluding fertilizers) spring to mind.

This is a time to accentuate the positives, certainly, though a healthy dose of realism would not go amiss when talking to investors and other stakeholders. The chemicals sector may not be the "flavour of the month" with investors. Indeed, this could be a particularly tough period on the world's stock markets for all of them. But some companies are better placed than most. Some segments look more robust in the face of what is likely to be a tough recession.

"We understand that the company wants to highlight its strengths, but we remain concerned that there wasn't enough focus on the weaknesses - at least not when it comes to managing investor expectations," Credit Suisse chemicals analysts said.

AkzoNobel makes decorative paints and industrial coatings and has a specialty chemicals business. At the start of this year it completed its €11bn acquisition of UK-based paints maker ICI. It has had a good story to tell: one of an expanded product portfolio, new geographical reach and consolidation. Until the tide turned against chemicals, it was one of the most strongly promoted chemicals stocks. But times change, and AkzoNobel is falling out of favour with analysts who worry about the firm's exposure to chemicals industrial demand and, increasingly, to industry-led market weakness.

True, the former ICI paints business is strong in places like

The company argues that its emerging market turnover and profitability are strong. It is the world's largest coatings supplier. The breadth of markets served by the group as a whole gives it a degree of stability.
But it is a degree of stability in what are turning out to be extremely turbulent times. AkzoNobel's financials need not necessarily be cyclical. The company rates 74% of the business as having very low to low exposure to cyclical end-markets and 26% moderate to high. Coatings margins have proved to be relatively resilient.

The company had a strong first three quarters and last month was reconfirming the full-year outlook: EBITDA (earnings before interest, tax, depreciation and amortisation) before any exceptional items, and in constant currencies, close to last year's pro-forma level of €1.87bn, and a new floor to the dividend of €1.80 a share.

It did admit that the market for paper chemicals was softening and that polymer chemicals were falling short of previous performance. It said that demand remained healthy across its base chemicals and surface chemistry businesses. But it is just these areas that worry the financial community.

"We do not see significant risk of Akzo missing its FY08 guidance given the strong performance over the first nine months of the year," Credit Suisse said. "However, we continue to see potential for disappointment into 2009."

Not surprisingly, those concerns rest on the performance of the specialty chemicals and the performance coatings segments. Investors have paid a lot of attention to the decorative coatings side of the business but perhaps not so much to these two other important parts of AkzoNobel.

Price and volume declines in these businesses are inevitable in a deflationary and recessionary environment, Credit Suisse said, particularly for a business so geared to slowing markets. Price deflation and lower raw material costs probably won't be enough to prevent a fall in margins, as the AkzoNobel CFO is reported to have said on Monday. Credit Suisse sees earnings declines of 9.4% in 2009, though the consensus view is still for some growth.
The recession is going to hit hard across a large part of the chemicals sector and there are few defensive stocks. Not surprisingly, analysts suggest agrochemicals and industrial gases as being the most attractive parts of the sector now. Companies like AkzoNobel have a lot to prove in 2009. Just how resilient are they in a global downturn? And just how important, as times get harder, is the potential for stronger synergies with newly acquired businesses? AkzoNobel is challenged to make the ICI acquisition work, to learn from it and grow with it. At the same time, it is moving into a period where previous widespread restructuring in its former coatings businesses and in chemicals must be expected to prove its
http://www.icis.com/Articles/2008/12/12/9178979/insight-akzonobels-broad-base-will-be-tested-in-2009.html
Hello all,

I believe it is an easy thing to do but I haven't figured out how to convert between coordinate systems using transData or transAxes. Here is the simple_plot.py:

```python
import numpy
import pylab

x = numpy.arange(0.0, 1.0+0.01, 0.01)
y = numpy.cos(2*2*numpy.pi*x)
pylab.plot(x, y)

# Here I want to transform y1 to axis scale between 0 and 1.
# Also, I want to transform axis scale, say 0.25, to a corresponding
# y value in the data coordinates.

pylab.show()
```

Currently I am doing it manually, scaling things with axis limits, etc. I believe the neat thing is to use the transforms API. Can somebody explain to me how it is done with transforms? I am using 0.98.1.

Thanks in advance,
Nihat
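For reference, on a linear axis the conversion the transforms API performs boils down to a linear rescaling against the axis limits. This is the "manual" version the question mentions, written out as a pair of functions (with matplotlib's transforms you would instead compose ax.transData with ax.transAxes.inverted()):

```python
def data_to_axes(y, ymin, ymax):
    """Map a data-coordinate value to the 0..1 axes fraction (linear axis)."""
    return (y - ymin) / (ymax - ymin)


def axes_to_data(frac, ymin, ymax):
    """Map a 0..1 axes fraction back to data coordinates (linear axis)."""
    return ymin + frac * (ymax - ymin)
```

For the cosine plot above, the y-limits are roughly -1 to 1, so an axes fraction of 0.25 corresponds to the data value -0.5.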
https://sourceforge.net/p/matplotlib/mailman/attachment/64ea6fd20806281408q65c73440ne92616779aa9a7c@mail.gmail.com/1/
This is a guest blog post by. Last October we challenged our PyBites' audience to make a web app to better navigate the Daily Python Tip feed. In this article, I'll share what I built and learned along the way.

In this article you will learn:

- How to clone the project repo and set up the app.
-.

If you want to follow along, reading the code in detail (and possibly contribute), I suggest you fork the repo. Let's get started.

Project Setup

First, namespaces are one honking great idea, so let's do our work in a virtual environment. Using Anaconda I create it like so:

```shell
$ virtualenv -p <path-to-python-to-use> ~/virtualenvs/pytip
```

Create a production and a test database in Postgres:

```
$ psql
psql (9.6.5, server 9.6.2)
Type "help" for help.

# create database pytip;
CREATE DATABASE
# create database pytip_test;
CREATE DATABASE
```

We'll need credentials to connect to the database and the Twitter API (create a new app first). As per best practice, configuration should be stored in the environment, not the code.
Put the following env variables at the end of ~/virtualenvs/pytip/bin/activate, the script that handles activation/deactivation of your virtual environment, making sure to update the variables for your environment:

```shell
export DATABASE_URL='postgres://postgres:password@localhost:5432/pytip'
# twitter
export CONSUMER_KEY='xyz'
export CONSUMER_SECRET='xyz'
export ACCESS_TOKEN='xyz'
export ACCESS_SECRET='xyz'
# if deploying it set this to 'heroku'
export APP_LOCATION=local
```

In the deactivate function of the same script, I unset them so we keep things out of the shell scope when deactivating (leaving) the virtual environment:

```shell
unset DATABASE_URL
unset CONSUMER_KEY
unset CONSUMER_SECRET
unset ACCESS_TOKEN
unset ACCESS_SECRET
unset APP_LOCATION
```

Now is a good time to activate the virtual environment:

```shell
$ source ~/virtualenvs/pytip/bin/activate
```

Clone the repo and, with the virtual environment enabled, install the requirements:

```shell
$ git clone && cd pytip
$ pip install -r requirements.txt
```

Next, we import the collection of tweets with:

```shell
$ python tasks/import_tweets.py
```

Then, verify that the tables were created and the tweets were added:

```
$ psql

\c pytip

pytip=# \dt
          List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | hashtags | table | postgres
 public | tips     | table | postgres
(2 rows)

pytip=# select count(*) from tips;
 count
-------
   222
(1 row)

pytip=# select count(*) from hashtags;
 count
-------
    27
(1 row)

pytip=# \q
```

Now let's run the tests:

```
$ pytest
========================== test session starts ==========================
platform darwin -- Python 3.6.2, pytest-3.2.3, py-1.4.34, pluggy-0.4.0
rootdir: realpython/pytip, inifile:
collected 5 items

tests/test_tasks.py .
tests/test_tips.py ....

========================== 5 passed in 0.61 seconds ==========================
```

And lastly run the Bottle app with:

```shell
$ python app.py
```

Browse to and voilà: you should see the tips sorted descending on popularity.
Clicking on a hashtag link at the left, or using the search box, you can easily filter them. Here we see the pandas tips, for example. I made the design with MUI, a lightweight CSS framework that follows Google's Material Design guidelines.

Implementation Details

The DB and SQLAlchemy

I used SQLAlchemy to interface with the DB to prevent having to write a lot of (redundant) SQL. In tips/models.py, we define our models - Hashtag and Tip - that SQLAlchemy will map to DB tables:

from sqlalchemy import Column, Sequence, Integer, String, DateTime
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Hashtag(Base):
    __tablename__ = 'hashtags'
    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    name = Column(String(20))
    count = Column(Integer)

    def __repr__(self):
        return "<Hashtag('%s', '%d')>" % (self.name, self.count)

class Tip(Base):
    __tablename__ = 'tips'
    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    tweetid = Column(String(22))
    text = Column(String(300))
    created = Column(DateTime)
    likes = Column(Integer)
    retweets = Column(Integer)

    def __repr__(self):
        return "<Tip('%d', '%s')>" % (self.id, self.text)

In tips/db.py, we import these models, and now it's easy to work with the DB, for example to interface with the Hashtag model:

def get_hashtags():
    return session.query(Hashtag).order_by(Hashtag.name.asc()).all()

And:

def add_hashtags(hashtags_cnt):
    for tag, count in hashtags_cnt.items():
        session.add(Hashtag(name=tag, count=count))
    session.commit()

Query the Twitter API

We need to retrieve the data from Twitter. For that, I created tasks/import_tweets.py. I packaged this under tasks because it should be run in a daily cronjob to look for new tips and update stats (number of likes and retweets) on existing tweets. For the sake of simplicity I have the tables recreated daily. If we start to rely on FK relations with other tables, we should definitely choose update statements over delete+add.
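The session object used by get_hashtags() and add_hashtags() is created when tips/db.py is imported; that boilerplate isn't shown above, so here is a minimal sketch of it. Assumptions: the real app builds its engine from the DATABASE_URL env variable (Postgres), while this sketch uses an in-memory SQLite URL so it runs standalone, and the engine/Session names are mine, not necessarily the project's.

```python
# Sketch of the engine/session setup implied by tips/db.py (names assumed).
from sqlalchemy import Column, Integer, Sequence, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Hashtag(Base):
    __tablename__ = 'hashtags'
    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    name = Column(String(20))
    count = Column(Integer)

# in-memory SQLite instead of the app's DATABASE_URL, to keep this runnable
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)  # create the tables
Session = sessionmaker(bind=engine)
session = Session()

# same pattern as add_hashtags() / get_hashtags() above
session.add(Hashtag(name='pandas', count=7))
session.add(Hashtag(name='flask', count=3))
session.commit()
names = [h.name for h in session.query(Hashtag).order_by(Hashtag.name.asc()).all()]
print(names)  # -> ['flask', 'pandas']
```

Swapping the SQLite URL for the Postgres DATABASE_URL is the only change needed to point this at the real database.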
We used this script in the Project Setup. Let's see what it does in more detail.

First, we create an API session object which we pass to tweepy.Cursor. This feature of the API is really nice: it deals with pagination, iterating through the timeline. For the number of tips - 222 at the time I write this - it's really fast. The exclude_replies=True and include_rts=False arguments are convenient because we only want Daily Python Tip's own tweets (not replies or re-tweets).

Extracting hashtags from the tips requires very little code. First, I defined a regex for a tag:

TAG = re.compile(r'#([a-z0-9]{3,})')

Then, I used findall to get all tags. I passed them to collections.Counter, which returns a dict-like object with the tags as keys and counts as values, ordered in descending order by values (most common first). I excluded the too-common python tag, which would skew the results.

def get_hashtag_counter(tips):
    blob = ' '.join(t.text.lower() for t in tips)
    cnt = Counter(TAG.findall(blob))
    if EXCLUDE_PYTHON_HASHTAG:
        cnt.pop('python', None)
    return cnt

Finally, the import_* functions in tasks/import_tweets.py do the actual import of the tweets and hashtags, calling the add_* DB methods of the tips package.

Make a Simple Web App with Bottle

With this pre-work done, making a web app is surprisingly easy (or not so surprising if you have used Flask before). First of all, meet Bottle:

Bottle is a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.

Nice. The resulting web app comprises fewer than 30 LOC and can be found in app.py. For this simple app, a single method with an optional tag argument is all it takes. Similar to Flask, the routing is handled with decorators. If called with a tag it filters the tips on tag, else it shows them all. The view decorator defines the template to use. Like Flask (and Django), we return a dict for use in the template.
@route('/')
@route('/<tag>')
@view('index')
def index(tag=None):
    tag = tag or request.query.get('tag') or None
    tags = get_hashtags()
    tips = get_tips(tag)
    return {'search_tag': tag or '', 'tags': tags, 'tips': tips}

As per the documentation, to work with static files, you add this snippet at the top, after the imports:

@route('/static/<filename:path>')
def send_static(filename):
    return static_file(filename, root='static')

Finally, we want to make sure we only run in debug mode on localhost, hence the APP_LOCATION env variable we defined in Project Setup:

if os.environ.get('APP_LOCATION') == 'heroku':
    run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))
else:
    run(host='localhost', port=8080, debug=True, reloader=True)

Bottle Templates

Bottle comes with a fast, powerful and easy to learn built-in template engine called SimpleTemplate. In the views subdirectory I defined a header.tpl, index.tpl, and footer.tpl. For the tag cloud, I used some simple inline CSS increasing tag size by count, see header.tpl:

% for tag in tags:
<a style="font-size: {{ tag.count/10 + 1 }}em;" href="/{{ tag.name }}">#{{ tag.name }}</a>
% end

In index.tpl we loop over the tips:

% for tip in tips:
<div class='tip'>
<pre>{{ !tip.text }}</pre>
<div class="mui--text-dark-secondary"><strong>{{ tip.likes }}</strong> Likes / <strong>{{ tip.retweets }}</strong> RTs / {{ tip.created }} / <a href="{{ tip.tweetid }}" target="_blank">Share</a></div>
</div>
% end

If you are familiar with Flask and Jinja2, this should look very familiar. Embedding Python is even easier, with less typing (% ... vs {% ... %}). All CSS, images (and JS if we'd use it) go into the static subfolder. And that's all there is to making a basic web app with Bottle. Once you have the data layer properly defined, it's pretty straightforward.

Add Tests with pytest

Now let's make this project a bit more robust by adding some tests.
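As a taste of what such a test can look like, here is a standalone sketch of a hashtag-extraction test. Assumptions: the real helper takes tweet objects (as shown earlier) and lives in tests/test_tips.py; here it takes a plain string so the snippet needs no fixtures and runs on its own.

```python
# Standalone sketch of a hashtag-extraction test (simplified: the helper
# takes a plain string here instead of tweet objects, so no fixtures needed).
import re
from collections import Counter

TAG = re.compile(r'#([a-z0-9]{3,})')

def get_hashtag_counter(blob):
    cnt = Counter(TAG.findall(blob.lower()))
    cnt.pop('python', None)  # exclude the too-common #python tag
    return cnt

def test_get_hashtag_counter():
    tips = """#python is great
    love #pandas, and #pandas again
    also a #flask tip"""
    cnt = get_hashtag_counter(tips)
    assert cnt.most_common(1) == [('pandas', 2)]
    assert 'python' not in cnt

test_get_hashtag_counter()
print('ok')
```

Run under pytest, the same function would be picked up automatically thanks to the test_ prefix.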
Testing the DB required a bit more digging into the pytest framework, but I ended up using the pytest.fixture decorator to set up and tear down a database with some test tweets.

Instead of calling the Twitter API, I used some static data provided in tweets.json. And, rather than using the live DB, in tips/db.py I check if pytest is the caller (sys.argv[0]). If so, I use the test DB. I probably will refactor this, because Bottle supports working with config files.

The hashtag part was easier to test (test_get_hashtag_counter) because I could just add some hashtags to a multiline string. No fixtures needed.

Code quality matters - Better Code Hub

Better Code Hub guides you in writing, well, better code. Before writing the tests the project scored a 7. Not bad, but we can do better: I bumped it to a 9 by making the code more modular, taking the DB logic out of app.py (the web app) and putting it in the tips folder/package (refactorings 1 and 2). Then, with the tests in place, the project scored a 10.

Conclusion and Learning

Our Code Challenge #40 offered some good practice:

- I built a useful app which can be expanded (I want to add an API).
- I used some cool modules worth exploring: Tweepy, SQLAlchemy, and Bottle.
- I learned some more pytest because I needed fixtures to test interaction with the DB.
- Above all, having to make the code testable, the app became more modular, which made it easier to maintain. Better Code Hub was of great help in this process.
- I deployed the app to Heroku using our step-by-step guide.

We Challenge You

The best way to learn and improve your coding skills is to practice. At PyBites we solidified this concept by organizing Python code challenges. Check out our growing collection, fork the repo, and get coding! Let us know if you build something cool by making a Pull Request of your work. We have seen folks really stretching themselves through these challenges, and so did we. Happy coding!
Contact Info: I am Bob Belderbos from PyBites, you can reach out to me by: - Twitter: - GitHub: - Slack: upon request (send us an email). What do you think?
https://realpython.com/blog/python/building-a-simple-web-app-with-bottle-sqlalchemy-twitter-api/
In this example we are going to describe swapping two numbers in Java without using a third variable. We use the nextInt() method of the Scanner class to read integer values from the command prompt. The swap is based on a simple arithmetic operation, described below in three steps; the same logic is applied in the program.

Here is how the numbers are swapped. First we take two variables "a" and "b", initially a = 15 and b = 20.

Step 1: a = a + b, giving a = 15 + 20 = 35
Step 2: b = a - b, giving b = 35 - 20 = 15
Step 3: a = a - b, giving a = 35 - 15 = 20

Finally a = 20 and b = 15.

Example

package SwappingNumber;

import java.util.Scanner;

public class SwappingNumber {
    public static void main(String args[]) {
        int a, b;
        System.out.println("Enter the a value:");
        Scanner in = new Scanner(System.in);
        a = in.nextInt();
        System.out.println("Enter the b value:");
        b = in.nextInt();
        System.out.println("Before swapping\na = " + a + "\nb = " + b);
        // swap without a third variable
        a = a + b;
        b = a - b;
        a = a - b;
        System.out.println("After swapping\na = " + a + "\nb = " + b);
    }
}
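A side note beyond the original tutorial: the a + b trick still swaps correctly even when a + b overflows, because Java's int arithmetic wraps around. Another well-known no-temporary alternative is the XOR swap, sketched here:

```java
public class XorSwap {
    public static void main(String[] args) {
        int a = 15, b = 20;
        // XOR swap: no temporary variable and no arithmetic
        a = a ^ b;
        b = a ^ b;  // b now holds the original a
        a = a ^ b;  // a now holds the original b
        System.out.println("a = " + a + " b = " + b);  // prints: a = 20 b = 15
    }
}
```

Both tricks fail in the same subtle way if you try to swap a variable with itself through two aliases, so the plain temporary-variable swap remains the safest everyday choice.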
http://roseindia.net/java/beginners/swapping-of-two-numbers-in-java.shtml
In this tutorial we will learn to implement a Marathi/Hindi font in an Android application which consists of a TextView to display the text.
- To implement a Marathi/Hindi font in an Android application, follow the steps given below:
- First of all, open up Eclipse (with the Android Developer Tools):
- Go to File->New->Android Application Project
- You will get a dialog box where you have to provide the details for your application:
- After filling the details you will have the following view of your dialog box:
- Now click on the Next button; you will get another dialog box where you just have to uncheck the first icon:
- After this click on the Next button; it will ask to select an Activity. So, select Blank Activity and click on the Next button:
- Your Blank Activity will get created, so now click on the Finish button:
- Your project will be created as shown below:
- Now download whichever font you like, for example the Marathi/Devanagari font "Mangal", and paste it in the assets folder as shown in the image below:
- Now let's design our layout. So, go to the activity_main.xml file and drag and drop the TextView as shown in the image:
- Now save your work by pressing Ctrl + S.
- Now go to your MainActivity.java file and write the following code in it:
- Now go to your activity_main.xml file and insert your Marathi / Hindi text in the text property of the TextView as shown in the image.
- Now select your project and run it as shown:
- Thus you will have the following output on your emulator.
- For inserting an image you can copy and paste the image into the res->drawable folder.

import android.app.Activity;
import android.graphics.Typeface;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        TextView text_view = (TextView) findViewById(R.id.textView1);
        // load the Devanagari font from the assets folder
        Typeface font = Typeface.createFromAsset(getAssets(), "mangal.ttf");
        text_view.setTypeface(font);
    }

    @Override
    public void onStop() {
        super.onStop();
    }
}

Thus, we have successfully learnt how to implement a Hindi font in an Android application.
https://blog.eduonix.com/android-tutorials/how-to-implement-hindi-font-in-android-application/
Anvil components can raise events. For example, when a Button is clicked, it raises the ‘click’ event. Click on ‘Form1’ in the App Browser to go back to your UI. Click on your ‘Submit’ Button, and scroll to the bottom of the Properties Panel. You’ll see a list of events for the Button. Click the blue arrows next to ‘click’. You will be taken to the Form Editor’s ‘Code’ view. This is where you write your client-side Python code that runs in the browser. You’ll see a submit_button_click method on your Form that looks like this: def submit_button_click(self, **event_args): """This method is called when the button is clicked""" pass This is the Python method that runs when the ‘Add an article’ Button is clicked. For example, to display a simple popup with the text ‘You clicked the button’, edit your submit_button_click function as follows: def submit_button_click(self, **event_args): # Display a popup that says 'You clicked the button' alert("You clicked the button")
https://anvil.works/learn/tutorials/feedback-form/chapter-4/10-set-up-an-event-handler.html
sorry about this email, it is quite off-topic, but I know a lot of good hackers hang out here, and I'm quite stuck on this little problem. I have been going at this problem from a multitude of angles, and I settled on using pass-by-reference after trial and error with returning char*.

This code:

#include <stdio.h>
#include <string.h>  /* for strtok */
#include <iostream>

void test(char ***foo) {
    **foo = strtok("test1,test2", ",");
}

void test2(char ***foo) {
    **foo = "Spin the big yellow thing";
}

int main() {
    char **bar;
    test2(&bar);
    printf("%s\n", *bar);
    test(&bar);
    printf("%s\n", *bar);
    return 0;
}

results in this:

Spin the big yellow thing
Segmentation fault

If anyone can give me a tip about why this segfaults when using strtok, I'd be very happy :) I have no backtrace, as my system doesn't have gdb installed, but I will install it later on (tomorrow) if I get a chance.

Cheers,
--
Morten

On Fri, 07 May 2004 01:48:29 +0200, Morten Nilsen <morten@...> said:
> **foo = "Spin the big yellow thing";
> results Spin the big yellow thing
> Segmentation fault
>
> If anyone can give me a tip about why this segfaults when using strtok,
> I'd be very happy :)

Subtle bug. ;) First, we point **foo at a string constant. Then we call strtok() on such a string. Two things to note:

1) strtok *modifies* the string to keep track of its state between calls (so it knows where to pick up parsing on the next call).
2) gcc by default will put string constants in a non-writable memory segment so they can't be accidentally splatted.

Solutions:

1) **foo = strdup("Spin the big yellow thing");
This will malloc() some space and copy the string into it. Remember that for good programming practice, you will need to free() that string if/when you're done with it.

2) If using gcc, compile with -fwritable-strings. This doesn't fix your real problem, but at least you won't crash.
https://sourceforge.net/p/enlightenment/mailman/enlightenment-devel/thread/409ACECD.60203@nilsen.com/
getexecname(3C)

SYNOPSIS
#include <stdlib.h>
const char *getexecname(void);

DESCRIPTION
The getexecname() function returns the pathname (the first argument of one of the exec family of functions; see exec(2)) of the executable that started the process. Normally this is an absolute pathname, as the majority of commands are executed by the shells that append the command name to the user's PATH components. If this is not an absolute path, the output of getcwd(3C) can be prepended to it to create an absolute path, unless the process or one of its ancestors has changed its root directory or current working directory since the last successful call to one of the exec family of functions.

RETURN VALUES
If successful, getexecname() returns a pointer to the executable's pathname; otherwise, it returns 0.

USAGE
The getexecname() function obtains the executable pathname from the AT_SUN_EXECNAME aux vector. These vectors are made available to dynamically linked processes only. A successful call to one of the exec family of functions will always have AT_SUN_EXECNAME in the aux vector. The associated pathname is guaranteed to be less than or equal to PATH_MAX, not counting the trailing null byte that is always present.

ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

SEE ALSO
exec(2), getcwd(3C), attributes(5)
http://backdrift.org/man/SunOS-5.10/man3c/getexecname.3c.html
Update Button Toast alpha animation to use Nine Old Androids
Status: RESOLVED WONTFIX
People: Reporter: Margaret; Assigned: dreamerkiwi (Mentored)
Tracking: Firefox (flags not tracked)
Whiteboard: [mozweekend]
Attachments: 3 attachments

When bug 1044133 lands, we can switch from using our own gecko animation library to NineOldAndroids. We should update the alpha animation in ButtonToast.java. I think Claas started working on this during the mozweekend.de. Claas, do you want to take this bug?
Whiteboard: [mozweekend]

Sorry for the delay, but yes: I would like to take this bug.

Nice, over to you then.
Assignee: nobody → mozilla

I changed ButtonToast.java to use NineOldAndroids rather than the gecko animation library. I have tested it and got the same animations as before. If this (minimal) patch is okay, I would have an additional one which enriches the class with explanatory Javadoc.
Flags: needinfo?(margaret.leibovic)

This would be my proposal for better documentation (plus extraction of the animation parts into two methods fadeIn/fadeOut). It has patch 1044275.diff as a prerequisite.

Comment on attachment 8635611 [details] [diff] [review]
ButtonToast.java - use NineOldAndroids

Review of attachment 8635611 [details] [diff] [review]:
-----------------------------------------------------------------
I tested this out locally, and when I panned the page to make the toast disappear, it looks like it immediately hides, as opposed to fading out like it did before. Do you see that when testing locally? I'm going to redirect this review to liuche, since she has more recent experience than me with working with these animations.
Attachment #8635611 - Flags: review?(margaret.leibovic) → review?(liuche) Comment on attachment 8635615 [details] [diff] [review] ButtonToast.java - better documentation Review of attachment 8635615 [details] [diff] [review]: ----------------------------------------------------------------- This is generally looking fine, but I don't think we need the new helper methods. ::: mobile/android/base/widget/ButtonToast.java @@ +191,5 @@ > + /** > + * Fades in our toast. > + * @param immediate true if toast should be shown immediately. > + */ > + private void fadeIn(boolean immediate) { This method name is a bit misleading, since the toast won't actually "fade in" if the immediate parameter is true. This helper method only has one consumer, so I don't think we should split this out. Same goes with the `fadeOut` helper method below. @@ +201,5 @@ > animator.start(); > } > > + /** > + * Hides our toast and stores the reason. The reason isn't stored anywhere, so this isn't quite correct. The reason is just passed along to the onToastHidden listener. @@ +256,5 @@ > + * Returns true if the toast is not visible. > + */ > + public boolean isGone() { > + return (mView.getVisibility() == View.GONE); > + } I don't think this helper method does enough to be worth the extra layer of complexity. Let's just leave the visibility check inline above as it was before. @@ +265,2 @@ > public boolean isVisible() { > return (mView.getVisibility() == View.VISIBLE); I do see that we have this corresponding `isVisible` method, but this is used by external consumers, which is why I think it makes sense to keep this. Attachment #8635615 - Flags: review?(margaret.leibovic) → feedback+ (Also, thanks for your patches! And thanks for helping to update the code comments!) 
Flags: needinfo?(margaret.leibovic) Comment on attachment 8635611 [details] [diff] [review] ButtonToast.java - use NineOldAndroids Review of attachment 8635611 [details] [diff] [review]: ----------------------------------------------------------------- ::: mobile/android/base/widget/ButtonToast.java @@ +127,5 @@ > > mView.setVisibility(View.VISIBLE); > int duration = immediate ? 0 : mView.getResources().getInteger(android.R.integer.config_longAnimTime); > > + ValueAnimator animator = ObjectAnimator.ofFloat(mView, "alpha", 0.0f, 1.0f); Nit: make this final @@ +150,5 @@ > mView.clearAnimation(); > if (immediate) { > mView.setVisibility(View.GONE); > } else { > + ObjectAnimator animator = ObjectAnimator.ofFloat(mView, "alpha", 1.0f, 0.0f); Nit: final Attachment #8635611 - Flags: review?(liuche) → review+ Assignee: mozilla → dreamerkiwi I merged Claas' patches and refactored them according to your reviews. I tested the animations locally with "Open in New Tab". Thanks for the patch Kim, however we're actually deprecating ButtonToast, and switching to Android snackbars. I'm sorry that this bug slipped through the cracks for being closed WONTFIX! If you're looking for something else, you can check , or if you want a good first JS bug, you can take a look at bug 1225563! Let me know if you have any other questions. Status: NEW → RESOLVED Last Resolved: 4 years ago Resolution: --- → WONTFIX
https://bugzilla.mozilla.org/show_bug.cgi?id=1044275
Need 2014. Why? Because I love maps. I also love the sp package that they created. I don't have a strong working knowledge of it yet, but I have found a paper by Zack Almquist to be very useful in working through the basics. Here's a quick example using US census data. Note that if you want to download the Almquist packages that they are huge.

library(UScensus2010)
library(UScensus2010county)

What I'd like to do is see where the oldest Americans live. This package will give us 51 shapefiles, so doing this by state is straightforward. For openers, let's have a look at Florida. To spare myself typing the crazy field names, I'll code a couple of helper functions.

data(florida.county10)

Over65 = function(dfCensus) {
  Over65 = dfCensus$H0170009 + dfCensus$H0170010 + dfCensus$H0170011
  Over65 = Over65 + dfCensus$H0170019 + dfCensus$H0170020 + dfCensus$H0170021
  Over65
}

PercentOver65 = function(dfCensus) {
  PercentOver65 = Over65(dfCensus)/dfCensus$H0170001
}

florida.county10$Over65 = Over65(florida.county10)
florida.county10$PercentOver65 = PercentOver65(florida.county10)

The UScensus choropleth() function works well for me most of the time, but I get an error on the breaks that I use often enough that I've coded my own. This is probably something that I'm doing wrong, but it's a nice exercise to get the choropleths to look right. Actually, I'm fairly sure that I figured out how to set the colors from some code that I read on is.R()'s advent calendaR from last year.

library(RColorBrewer)
library(classInt)

MyChoropleth = function(sp, dem, palette, ...) {
  df = sp@data
  brks = classIntervals(df[, dem], n = length(palette), style = "quantile")
  brks = brks$brks
  sp$MyColor = palette[findInterval(df[, dem], brks, all.inside = TRUE)]
  plot(sp, col = sp$MyColor, axes = F, ...)
}

myPalette = brewer.pal(9, "Blues")
MyChoropleth(florida.county10, "PercentOver65", myPalette, border = "transparent")

So there are fewer seniors in the southernmost part of the state. Given the urban character of a city like Miami, that seems consistent. But how does Florida compare to the rest of the US? This will take a bit more work. First, we'll need to load all of the states' data and calculate the over-65 percent. We'll then need to merge all of the states into one very large spatial polygons data frame.

data(states.names)
lower48 = states.names[!states.names %in% c("alaska", "hawaii", "district_of_columbia")]
lower48 = paste0(lower48, ".county10")
data(list = lower48[1])
spLower48 = get(lower48[1])
rm(list = lower48[1])
for (i in 2:48) {
  data(list = lower48[i])
  spLower48 = spRbind(spLower48, get(lower48[i]))
  rm(list = lower48[i])
}
spLower48$PercentOver65 = PercentOver65(spLower48)
MyChoropleth(spLower48, "PercentOver65", myPalette, border = "transparent")

This code works when I run it in RStudio, but I'm not able to get it to run with knitr. Suggestions welcome. In the meantime, I've had to upload the image to WordPress the old fashioned way. The resolution is fairly dreadful. Given the lateness of the hour and the length of time it takes R to process all that data and imagery, I'm going to leave well enough alone. Suggestions for improvement are welcome. What's there suggests that Florida has a well-earned reputation for being a favorite place to live for seniors, but there are plenty of older folks in the midwest, as well as Arizona and Nevada. If I had my way, I'd retire somewhere in Europe. It's probably the only way I'll collect anything from my German pension. Tomorrow I'll begin a lengthy look at the career of actor Michael Caine.

citation("UScensus2010county")
## To cite UScensus2000 in publications use:
##
## Zack W. Almquist (2010). US Census Spatial and Demographic Data
## in R: The UScensus2000 Suite of Packages.
## Journal of Statistical Software, 37(6), 1-31. URL.
##
## A BibTeX entry for LaTeX users is
##
## @Article{,
##   title = {US Census Spatial and Demographic Data in {R}: The {UScensus2000} Suite of Packages},
##   author = {Zack W. Almquist},
##   journal = {Journal of Statistical Software},
##   year = {2010},
##   volume = {37},
##   number = {6},
##   pages = {1--31},
##   url = {},
## }

Attached packages: classInt_0.1-21, RColorBrewer_1.0-5, UScensus2010county_1.00, UScensus2010_0.11, foreign_0.8-55, maptools_0.8-27, sp_1.0-13. Loaded via a namespace (and not attached): class_7.3-9, digest_0.6.3, e1071_1.6-1, evaluate_0.4.7, formatR_0.9, grid_3.0.2, lattice_0.20-23, RCurl_1.95-4.1, stringr_0.6.2, tools_3.0.2, XML_3.98-1.1, XMLRPC_0.3...
http://www.r-bloggers.com/24-days-of-r-day-2/
Unanswered: Wrap a widget to add another widget next to it

Hello everyone ! I'm facing a problem. I want to turn this (in the .ui.xml file, it's a widget):

Code:
<b:SomeWidget ui:

into this:

Code:
<a:HelpWrapper
<b:SomeWidget ui:
</a:HelpWrapper>

In order to do it, I wrote a little class called "HelpWrapper":

Code:
public class HelpWrapper extends Composite implements HasWidgets.ForIsWidget {

    /** The container */
    private final HorizontalLayoutContainer container;

    /** The help icon widget */
    private HelpWidget helpWidget;

    /** The wrapped widget */
    private Widget wrappedWidget = null;

    /** The constructor */
    public HelpWrapper() {
        // We create & initialize the container
        container = new HorizontalLayoutContainer();
        // We create & add a spacing between the widget and the quick help
        container.add(new LabelToolItem(" "));
        // We create & add the quick help widget
        helpWidget = new HelpWidget();
        container.add(helpWidget);
        initWidget(container);
    }

    /**
     * @param text The tooltip text
     */
    public void setText(String text) {
        if (helpWidget != null) {
            helpWidget.setTooltipHTML(text);
        }
    }

    /**
     * @param size The size of the icon
     */
    public void setSize(IconSize size) {
        helpWidget.resizeIcon(size);
    }

    @Override
    public void add(Widget w) {
        // We add the widget and the help widget
        if (wrappedWidget == null) {
            wrappedWidget = w;
            container.insert(wrappedWidget, 0);
            container.add(new LabelToolItem(" "));
            container.add(helpWidget);
        }
    }

    @Override
    public void clear() {
        // We clear the container
        container.clear();
    }

    @Override
    public Iterator<Widget> iterator() {
        return container.iterator();
    }

    @Override
    public boolean remove(Widget w) {
        boolean ok = container.remove(w);
        if (ok) {
            wrappedWidget = null;
        }
        return ok;
    }

    @Override
    public void add(IsWidget w) {
        add(w.asWidget());
    }

    @Override
    public boolean remove(IsWidget w) {
        return remove(w.asWidget());
    }
}

But for some reason, the height of the original widget is not preserved :-( From this
: base.png, I get this: fail.png. But what I wanted is this: win.png. What did I do wrong :-( ? Thank you in advance !

I haven't run your code, but it appears that you are wrapping up a child that needs size to work correctly in a new parent (HelpWrapper) that does not properly pass along sizing information from its parent. Without knowing more details about your situation, is there a reason you aren't using the HorizontalLayoutContainer or HBoxLayoutContainer? HLC would be good if you are using a parent to size the children, whereas HBoxLC would be better if the children already have sizes. CssFloatLayoutContainer is another option, which will behave much like HBoxLC.

Html tags can have display as 'inline' or 'block' - this would be another way to think about solving this. However, setting sizes on an inline element needs to be done carefully. GXT almost always favors using block elements with sizing and positioning - these are the basis for most layout container classes. FlowLayoutContainer, as opposed to the others mentioned earlier, does not do any of this sizing, but leaves it to the html elements and their styling to position and size themselves. If you set two children to be display:inline in it, they will line up, one after the other, but I'm not sure that is what you mean by inline in this case.

Try out HLC (remember to use layout data!), HBoxLC, and CssFloatLC, and see if one of those might make sense instead of crafting your own. If it doesn't, consider posting enough code to allow others to run it - we can run the container you made, but not the other things it lives in or that go in it.
http://www.sencha.com/forum/showthread.php?256357-Wrap-a-widget-to-add-another-widget-next-to-it
Making a button
Posted on March 1st, 2001

Making a button is quite simple: you just call the Button constructor with the label you want on the button. (You can also use the default constructor if you want a button with no label, but this is not very useful.) Usually you'll want to create a handle for the button so you can refer to it later.

The Button is a component, like its own little window, that will automatically get repainted as part of an update. This means that you don't explicitly paint a button or any other kind of control; you simply place them on the form and let them automatically take care of painting themselves. So to place a button on a form you override init() instead of overriding paint():

//: Button1.java
// Putting buttons on an applet
import java.awt.*;
import java.applet.*;

public class Button1 extends Applet {
  Button b1 = new Button("Button 1"),
         b2 = new Button("Button 2");
  public void init() {
    add(b1);
    add(b2);
  }
} ///:~

It's not enough to create the Button (or any other control). You must also call the Applet add() method to cause the button to be placed on the applet's form. This seems a lot simpler than it is, because the call to add() actually decides, implicitly, where to place the control on the form. Controlling the layout of a form is examined shortly.
https://www.codeguru.com/java/tij/tij0135.shtml
So you want to use AWS Cognito to authenticate users and have your user pool, identity pool, and app client all set up in the AWS console. The next question is: how can you connect this with your React based frontend? While there are a few ways to go about doing this, this post is going to give you a brief overview on how to do this via a library called AWS-Amplify. AWS-Amplify is an open source project managed by AWS, described as "a declarative JavaScript library for application development using cloud services." I liked this particular library because it has a client-first approach and abstracts away some of the setup required in the JavaScript SDK. My favorite features of Amplify are Authentication (via Cognito), API (via API Gateway), and Storage (via S3), but this library has a lot more to offer than just those features. This post will focus on how to authenticate users from a React based frontend, more specifically user signup with an email address verification step.

The Setup

First you'll need to set up a config file in your /src folder to reference your already created AWS resources (in this case the user pool, identity pool, and app client id). The file will look something like this:

src/config.js

export default {
  cognito: {
    REGION: 'YOUR_COGNITO_REGION',
    USER_POOL_ID: 'YOUR_USER_POOL_ID',
    APP_CLIENT_ID: 'YOUR_APP_CLIENT_ID',
    IDENTITY_POOL_ID: 'YOUR_IDENTITY_POOL_ID'
  }
};

Then in your index.js file, where you set up your React app, you'll need to configure AWS Amplify.
It'll look similar to this:

```javascript
// src/index.js
import React from 'react';
import ReactDOM from 'react-dom';
// Router import added here: the render call below uses <Router>
import { BrowserRouter as Router } from 'react-router-dom';
import Amplify from 'aws-amplify';
import config from './config';
import App from './App';

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: config.cognito.REGION,
    userPoolId: config.cognito.USER_POOL_ID,
    identityPoolId: config.cognito.IDENTITY_POOL_ID,
    userPoolWebClientId: config.cognito.APP_CLIENT_ID
  }
});

ReactDOM.render(
  <Router>
    <App />
  </Router>,
  document.getElementById('root')
);
```

The mandatorySignIn property is optional, but is a good idea if you are using other AWS resources via Amplify and want to enforce user authentication before accessing those resources. Also note that for now having a separate config file might seem a bit of overkill, but once you add in multiple resources (i.e. Storage, API, Pub Sub etc.) you'll want that extra config file to keep things easy to manage.

Implementation Overview

The signup flow will look like this:

- The user submits what they'll use for login credentials (in this case email and password) via a signup form, and a second form to type in a confirmation code will appear.
- Behind the scenes, the Amplify library will sign the user up in Cognito.
- Cognito will send a confirmation code email to the user's signup email address to verify that the email address is real.
- The user will check their email > get the code > type the code into the confirmation form.
- On submit, Amplify will send the information to Cognito, which then confirms the signup. On successful confirmation, Amplify will sign the user into the application.

Implementation Part 1

First, in your signup form component, you'll need to import Auth from the Amplify library like this:

```javascript
import { Auth } from 'aws-amplify';
```

As you create your form, I'd suggest using local component state to store the form data. It'll look like your typical form, with the difference being using the Amplify methods in your handleSubmit function whenever the user submits the form.
The handleSubmit function will look like this:

```javascript
handleSubmit = async event => {
  event.preventDefault();
  try {
    const newUser = await Auth.signUp({
      username: this.state.email,
      password: this.state.password
    });
    this.setState({ newUser });
  } catch (error) { // renamed from `event` to avoid shadowing the submit event
    if (error.code === 'UsernameExistsException') {
      const tryAgain = await Auth.resendSignUp(this.state.email);
      this.setState({ newUser: tryAgain });
    } else {
      alert(error.message);
    }
  }
};
```

On success, Amplify returns a user object after the signUp method is called, so I've decided to store this object in my component's local state so the component knows which form to render (the signup or the confirmation).

Before we continue, let's go over a quick edge case. If our user refreshes the page when on the confirmation form and then tries to sign up again with the same email address, they'll receive an error that the user already exists and will need to sign up with a different email address. The catch block demonstrates one way of handling that possibility, by resending the signup code to the user if that email is already present in Cognito. This will allow the user to continue using the same email address should they refresh the page or leave the site before entering the confirmation code.

Implementation Part 2

So now the user is looking at the confirmation form and has their confirmation code to type in. We'll need to render the confirmation form. Similar to the signup form, it'll look like a typical form, with the exception being the function that is called whenever the user submits the confirmation form.
The handleSubmit function for the confirmation form will look similar to this when using Amplify:

```javascript
handleConfirmationSubmit = async event => {
  event.preventDefault();
  try {
    await Auth.confirmSignUp(this.state.email, this.state.confirmationCode);
    await Auth.signIn(this.state.email, this.state.password);
    this.props.isAuthenticated(true);
    this.props.history.push("/");
  } catch (error) {
    alert(error.message);
  }
};
```

So it is taking in the form data, using Amplify to confirm the user's email address via the confirmation code, and signing in the user if successful. You can then verify whether a user is signed in via props at the route level if you'd like. In this case, I arbitrarily named it isAuthenticated and redirected the user to the root path.

The complete docs for using the Auth feature of Amplify can be found here. We've only scratched the surface in this post, so go forth and explore all of the different features that Amplify has to offer. I've found it has a very nice declarative syntax and is very readable for folks who are new to a codebase. For building further on your React-based serverless applications, I highly recommend Stackery for managing all of your serverless infrastructure, backed up by seamless, git-based version control.
https://www.tefter.io/bookmarks/55360/readable
C and its children are the Esperanto of programming languages. Just good enough to be used, just bad enough to inspire countless revisions, and even with vastly superior successors it'll probably never die, just due to the inertia of having been there first.

Does anyone have a link to a transcript?

You don't pay for the features you don't use, so there's no harm in implementing the features the right way for those who wish to use them. For example, D provides a boat-load of features, but if you want to write straight C, you can. You don't have to pay the price of garbage collection, bounds checking, and so forth if you don't want to. The C++ mentality means that there's a much greater penalty for using advanced features. Not only a penalty in performance, but also usability and maintainability. No wonder developers are scared to use it as anything beyond C with Classes.

As already said, blame the programmer, not the tool.

```cpp
o.Lock();
// stuff
o.unLock();
```

The fact that you need to invoke the unLock() method explicitly, which is prone to error, is the reason for using the macro in the first place.

Why not? Placing the closing bracket of your for loop is prone to error, and if you leave it out completely, I imagine the resulting compiler error being non-trivial to track down. Coverity is useful for verifying arbitrary locking conventions.

Why make the bounds-checked array the default array? Because you very rarely want to reference outside the bounds of an array, and you very rarely care enough about the minimal overhead to warrant the risk of bugs and vulnerabilities. For those times where you want to linearly traverse a 2D array or where you need to squeeze everything out of that inner loop, you can opt for the simple array.

1) It's less prone to error than having to write a full method call, since you're likely to get a graphical hint from the editor you're using. In any case, it's not more prone to error than omitting any other closing bracket.
2) The compiler might emit a non-trivial error; this depends on the compiler, but at least it will complain. A far better situation than it not complaining at all.

Quite a controversial statement from someone who advocates the compiler having all the features one needs... then use an external tool to do something the compiler can already do? ;-)

I don't agree, I don't want to pay for what I don't use. But in any case, "default array" is nonsense: declaring the type of an object is up to me; it's me who decides whether or not to use a certain type, certainly not the compiler. Therefore, wherever I want I can use the bounds-checked array. That is to say, the "default bounds-checked array" is a policy I, the programmer, have to and can adopt without any extra effort.

Just like in Java, and it would be totally legal C++, provided the macro synchronized expanded to something like this:

```cpp
#define synchronized(o) for((o).Lock(); (o).isLocked(); (o).unLock())
```

... and where would the .Lock() method come from? c++.lang.Object? (sorry, couldn't resist :-)

For being part of the language, they are surprisingly unaware of how the language works. Macros will gladly mess up anything, without worrying about things like namespaces. Having the following code messed up, just because "Sun" is #define'd when you try to compile on Solaris, is really not very helpful.

```cpp
namespace x {
  class C {
  private:
    enum { Mon, Tue, Wed, Thu, Fri, Sat, Sun };
  };
}
```

Sure, but then you're just using macros for the sake of using macros, or to change the appearance of the code. In both those cases, macros actually do a good job.

The page you linked to does not correctly mimic the C# "lock". If you use the same syntax in C++ as C#, the scope of the lock will be different from that in C#. Since the lock macro is outside the block that should be locked, the lock won't go out of scope when the block ends, and the unlocking will take place at a later time, which may or may not be a problem.
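The back-and-forth above about synchronized-style macros and forgetting unLock() is exactly what RAII scope guards address in C++. A minimal sketch, assuming a hypothetical lockable type that follows the thread's o.Lock()/o.unLock() naming convention (both types below are made up for illustration):

```cpp
#include <cassert>

// Hypothetical lockable type matching the o.Lock()/o.unLock() convention
// used in the thread (names assumed, not from any real library).
struct Mutex {
    int depth = 0;
    void Lock()   { ++depth; }
    void unLock() { --depth; }
};

// RAII scope guard: the destructor releases the lock even on early return
// or exception, so unLock() can never be forgotten.
class ScopedLock {
    Mutex& m_;
public:
    explicit ScopedLock(Mutex& m) : m_(m) { m_.Lock(); }
    ~ScopedLock() { m_.unLock(); }
    ScopedLock(const ScopedLock&) = delete;
    ScopedLock& operator=(const ScopedLock&) = delete;
};

int depth_inside(Mutex& m) {
    ScopedLock guard(m);   // locked here...
    return m.depth;        // ...unlocked automatically when guard dies
}
```

The standard library later codified this exact pattern as std::lock_guard (C++11), which is why the "forgot to unlock" argument against C++ largely went away without needing a synchronized keyword.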
rayiner, your example in C++ would be best implemented with an iterator or with the foreach macro described earlier. All your example proves is that when there's no link between syntax and semantics, the compiler cannot optimize unnecessary code away; therefore the solution is to create that link, which is what an iterator pattern does.

"Just good enough to be used, just bad enough to inspire countless revisions, and even with vastly superior successors it'll probably never die just due to the inertia of having been there first."

That sounds exactly like a description of the average programmer who is incapable of programming C or its children, because they require nappies in the runtime environment to keep making up for the buggy, diseased code they end up slapping together.

Vastly superior successors? Do you have ANY experience with real development? There's a time and a place for C and C++, just like there's a time and a place for Java and C#, just like there's a time and a place for machine code and assembly programming. There are a number of tasks that I wouldn't want to do in any other language than C. It is simply the best tool for some jobs. I'd really like to hear about some of these vastly superior languages, so I can write my kernel drivers in them. I'd really like to hear about some of these vastly superior languages, so I can write cellphone applications in them. Please, share with me what these wonderful panacea languages are, so that I can write my toolkit libraries in them, and let my customers call them from whatever language they use.

When people talk of vastly superior languages, they are really talking about a shift in language application. People can do much more much quicker in Java/C#/Perl/Python/Ruby than in C/C++ (speaking from experience as a former Java/C/C++ programmer). Thus the application space for C/C++ shrinks, people move to more modern languages, and they begin to frame their discussions wrt those languages.

Ahh, finally.
Someone bringing an intelligent comment to the discussion. I'm so tired of GC-jerks going around talking about the death of C/C++, and how inferior they are. They are little children running around shouting how much better their G.I. Joe is than your He-Man. Yeah, G.I. Joe's got guns and fighter jets, but He-Man has a magic sword, and Battle Cat, AND advanced technology. Er, ummm, yeah - sorry for the tangent. But the point is, they have their application domain, where they make sense, where they're the best tool (so far) for the job.

Nothing in and of itself. The problem is when you have lots of people working on the same code. Some people will write C++ like you do, some people will go template-happy and use them for everything, a third person will write C++ in a completely different way. And much confusion will arise.

Take this link, educate yourself, then re-read your post, sir:

And I quote: "The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada"

Objective-C is virtually unused and unsupported outside of OS X. Fortran is a niche language these days and only used within certain high-performance computing circles. Ada is even less used and less supported than Fortran in the real world. Java could be a contender, but the gcc support is woefully behind and incomplete. So I fail to see what your link is supposed to prove. C and C++ are in fact the only widely supported languages that compile to native machine code.

Who said anything about higher level? You don't have to use a virtual machine just because you don't want to use C++. There are the likes of Pascal/Modula-2/Ada. They have their own flaws, but for many purposes they are very good. And you won't lose any performance. (!C++) != dog_slow

The C family of languages was not chosen as the de facto standard because they were so great; it just happened that way.
They are good at low-level programming, but most people shouldn't be doing that (yes, even games programmers, beyond the odd shader) - that's what OSs are for, after all. Ada is the only one out of the list I've seen, and it certainly looks very nice, and it is also used in embedded applications from what I've seen. I think I read Boeing used it in one of their planes even.

I'm not saying (!C++) == dog_slow. I'm saying compiled C++ produces pretty darn fast and reasonably (though not always true...) compact binaries compared to *most* of the languages people are saying are the new wonder-bread languages. Especially things like C#, Java, Ruby, etc. I don't hear people saying "let's all use Pascal now" or "let's all use Ada now"; I hear "let's all use C#" or "Java" or whatever the new fancy-pants language is. D is probably the only exception to the list of new wonder-bread languages that appears to be worth squat. It needs more bindings to popular libraries; otherwise it looks very nice.

It would be smarter to put the same energy into D. () Yes, let's dump the hundreds of millions of lines of code written in this tried and tested, standardised language, and put lots of resources into a brand new, proprietary language supported by only one company, and which (as far as I can tell) doesn't even have a standard library. What a great idea! Well done!

Edited 2007-08-13 19:28

The title of this thread is "The Next Generation C++". The language war is officially on-topic.

I'm totally with you on D. It has two FOSS implementations, one of them in GCC. D is binary-compatible with anything written in C, including your native libraries. Besides macros, C code compiles unmodified in D, and there are scripts to convert your #includes and headers to imports and modules. The feature set is more like Java and C# than C++, but it compiles to native code. It lets you go as low as you want, from pointers and manual memory management down to the inline assembler.
But it also has template meta-programming, multithreading primitives, garbage collection, and even built-in unit testing. The tradeoff is no C++ compatibility. So, unfortunately, it can't use C++ libraries as you've described. Because of its design, a language must implement almost the entire C++ semantics and compiler in order to link to C++. For this reason, the only language that can link to C++ is, well, C++. D is designed to simplify compiler development, allowing for better optimization and more compliant implementations. A large subset of D (minus inline assembly and other low-level features) is suitable for compiling to bytecode for various virtual machines. It's everything that C++ should have been, and the only downside is that it's not C++. Inertia sucks.

"...it can't use C++ libraries as you've described..." Sorry, I had C++ on the brain; I did mean C libraries. Thanks for the catch.

I use or have used all of the above (except C#), so believe me when I say C++ and Java are frustrating. My use tends to focus on prototypes, so it is well I have moved on. Their learning/re-learning curve is steep, and as such, they are not well-suited for newbies, casual users, and those that don't use them every day. That is not to say that they don't have their place; they do. I'm finding my preferred suite to be C, D and Python. They are not perfect, but they seem to give me the "right" blend of learning curve, RAD, performance, maintainability, cleanliness (for tutorial purposes), and multi-platform I want. YMMV. (I'm still not happy with GUIs and the amount of work required for quick, ad hoc projects. Mebbie someday that will change.) Anyway, nice discussion, and hopefully our world will continue to improve. ...and yeah, inertia does suck. ;-)
I love C++ for this exciting interplay between high-level abstraction and low-level brute force. If you know the C++ language well enough, you get exactly the same excellent low-level control as with C. The object-oriented aspects, such as templates and inheritance, do not bring any overhead -- actually the assembly code doesn't even "remember" those abstractions. Sure, a FEW features of the C++ language do introduce an overhead. These are: virtual methods, and runtime type identification. But you can easily avoid them if this overhead is unacceptable for your program (in most cases where this overhead matters, templates can do the job at compile-time). Without using them, you can still enjoy most of the aspects of C++. I know I developed a whole C++ template library (Eigen) without using them at all. Good for you. Fortunately, it's still easy to do that sort of stuff, if a little verbose. There is a closure syntax under discussion for C++09, as well as a range-based for loop (i.e. foreach in certain other languages). What changes do you propose for the template syntax? So far as debugging, C++09 improves that by leaps and bounds with the introduction of concepts and concept maps. Those are a way to provide a sort of type-checking to templates, somewhat similar in idea to interfaces but without locking users into a rigid class hierarchy (and allowing templates to still work with both native and user-defined data types that match a particular concept). With the exception of automatic vs manual memory management, I'm not so sure I believe this line anymore. The only reason I'm ever able to do things quickly in Python/PHP/Java is because the standard library is huge and contains a bazillion functions and classes useful for a variety of things that C++ doesn't. 
As soon as you drop in a nice utility library for C++ that includes things like HTTP/SMTP wrappers, a large assortment of string and vector utility functions, and so on, C++ makes it just as easy to do things as those other languages. Automatic memory management can even be had thanks to GCs like the Boehm GC. There are clearly some cases where C++ still isn't the best bet (text-file processing, where Perl is king, for example), but that's just fine. No language is going to do EVERYTHING perfectly. That's why we have multiple languages. Use Perl for what Perl is good for, use Python for what Python is good for, use C++ for what C++ is good for, etc. There's nothing worse among programmers than being a "one trick pony" (a programmer who is only proficient in one language).

I never expected to see so many passionate comments against C++... I'm not joking, I always thought that programmers really liked it. I, for one, feel pretty comfortable with it, but most of the time it depends on the framework/libs/toolkits I'm using. Some of them are great (Qt) and some of them really suck (no examples, I'm not in the mood for a flamewar). BTW, I also use Ruby, PHP, plain good old C, and a handful of other languages. For me, languages are simple tools, some of them more suited for certain tasks than others, that's all.

You take the blue pill, the story ends and you wake up believing whatever you want to believe. You take the red pill, you stay in wonderland and I show you just how deep the rabbit hole goes.

blue pill := C++
red pill := D

The whole thing boils down to whether you want to keep link compatibility with C++ (I said C++, not C) and chain yourself, or lose it and free yourself.

About the negative views, maybe it has to do with the extensive delays when something becomes popular. For a long time, C++ was on top. What did people do during all that if they didn't like it? And now with Java, how many people hate it?
And right now, I don't dislike MS as much as I used to, and more people nowadays don't even know who they are. Before that, IBM was truly hated. This is just an opinion, but I think the perceived lock-in of things that are too popular angers a lot of people (or the lack of choice).

About C++, I still want properties and named parameters for functions, objects and templates. But Bjarne says no way and will never put them in. I'm very sad!

"About C++, I still want properties and named parameters for functions, objects and templates. But Bjarne says no way and will never put them in. I'm very sad!"

And I must say I agree with him. Both issues can be solved as a library solution. Here's simple property code:

Named parameters are mentioned in "C++ Template Metaprogramming: Concepts, Tools, and Techniques ...", Abrahams, Gurtovoy.

Works on Windoze, OSX, Linux (GDC), dmd, ... it has Wx (WxD)... Perfect! .V

I love C++. I've made a very good living as a contract programmer sorting out the utter messes that people create in this mongrel of a language, and now Bjarne is `crafting' my meal-ticket right through to retirement! Mr Stroustrup, you should have a PayPal link on your website. I definitely owe you a bung.

I would call myself fairly C++ knowledgeable, and I use it often for user-mode programming. But once I had to debug someone else's code written in C++, and the following were my horrors:

1. Templates inside templates inside templates... Nested templates are the worst thing designed in C++.
2. Operator overloading - While looking at the code, I really did not know whether a + was a function call that could throw or a simple addition.
3. Exception handling - It just makes large projects so much harder to debug, because you never know, once an internal function throws, where the exception will get caught. And the handler that catches the exception usually doesn't have any idea what to do with it.
4.
smart pointer type stuff - C++ doesn't have garbage collection, and people shouldn't try to mimic that. With all the auto_ptr crap associated with nested templates, finding where and when an object leaked was a nightmare.

After this C++ experience, I became quite anti-complex-C++. I like some of the features it provides, like data abstraction using classes etc., but it is becoming increasingly complex. For myself, I only use the following C++ features:

1. Basic C features
2. Classes
3. Derived classes with single inheritance, or multiple inheritance rarely
4. Well-designed STL-type templates
5. Virtual functions where appropriate, but not very often

The things I avoid like anything are:

1. Nested templates (and in general all templates except STL)
2. Exception handling - I hate the compiler unwinding the call stack etc. It is so much harder to debug.
3. auto_ptr-style memory management - It is never right, and when it is wrong, it is a nightmare to maintain.
4. Operator overloading - I avoid it mostly, but I know for some math-related tasks it is nice. Use with caution... don't overdo it. One person overloaded operator >> for sending data on a socket, and that is too much for me.
5. Crap stuff like virtual constructors etc. and all the *extra smart* C++ recipes available in some books.

And yeah, if you really want to know how twisted C++ can be, read the Effective C++ series. After reading that, I realized how complex C++ can get and how to never get in that trap.

Edited 2007-08-14 02:03
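A few comments up, someone claims properties "can be solved as a library solution", but the "simple property code" they point to didn't survive in the thread. A minimal sketch of the idea (my own illustration - the Property name and the clamping setter are made up, not the code originally referenced):

```cpp
#include <cassert>

// A C#-style property emulated as a pure library type: writes and reads
// go through setter/getter logic without any new language syntax.
template <typename T>
class Property {
    T value_{};
public:
    // Setter: clamps to non-negative, just to show logic runs on write.
    Property& operator=(const T& v) {
        value_ = (v < T{}) ? T{} : v;
        return *this;
    }
    // Getter: implicit conversion lets the property read like a field.
    operator const T&() const { return value_; }
};

struct Window {
    Property<int> width;  // used like a plain field: w.width = 640;
};
```

Real library solutions of the era were fancier (proxy objects bound to user-supplied get/set functions), but the principle is the same: field-like syntax, method-like behavior, no compiler change.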
"1. Nested Templates (and in general all templates except STL)
2. Exception handling .... It is so much harder to debug.
3. auto_ptr style memory management - It is never right and when it is wrong, it is a nightmare to maintain.
4. Operator overloading - I avoid it mostly .... Use with caution... don't overdo it.
5. Crap stuff like virtual constructor etc etc and all the *extra smart* C++ recipes ...
And yeah, if you really want to know how twisted C++ can be, read the Effective C++ series."

Nice to see that someone agrees with me... I've been saying this for years, taking the flames. If you need/want a more powerful language than C++, go for it, but don't complain when you can't twiddle bits fast or access your system as easily.

Why is it better to write 5 lines of code instead of 25, if:
- it can't be read or maintained even by me
- portability is shaky
- if it doesn't compile, it's easier to switch to the 25-line version than figure out why
- if it doesn't work, it really can't be debugged
- it takes just as long to write and longer to compile.

I do use templates, but I don't nest them; I throw exceptions, but very rarely out of a function; I'm leery of auto_ptr-type things I didn't write; and banning Boost from production code was worth it for the saved research time alone. I call this method "Aim low and SHIP!" and it's served several companies (and me) very well.

'depressing'

I took a look at the web site. It's not a language design, it's a buffet of features based on the idiosyncratic tastes of the authors.

C is, perhaps, the best language ever designed. That is to say, it is the closest to doing what its designer intended of any language I've ever used, and I've used a few dozen languages over the years. C++ is an abomination. It was a poor attempt to graft classes onto C that grew like Topsy, and the one thing it doesn't need is more growing. Were it not for Koenig's idea of using C++ as a 'safer C', the language would be unusable. Java is just like C++, only different.

I wholly agree about this (the 'D' part). I was hoping for something innovative and ended up disappointed. Case in point is the massive number of keywords. It's more a language expansion than a new refactored language. Smorgasbord it is.

What can be done to "expand" C++ is also limited as is. It seems writing a fully compliant compiler is a daunting task. C++ is not totally an abomination.
Strict discipline and having smart policies for what should and shouldn't be used are needed. It can be fairly expressive and powerful. C++ was a proving ground for a lot of good technologies, especially efficient generics. The implementation has been somewhat wanting in some areas. I seriously doubt much more can be done with the current C++ framework. Time for a new, better, smarter 'C' offspring. Maybe 'D lite' or something.

Edited 2007-08-14 03:51 UTC

... every problem looks like a nail. This applies to many C++ programmers - they use C++ for (almost) everything, even when it is not appropriate. C++ has its uses, but they are few. For low-level OS stuff, plain C is better (it is more predictable, and objects don't really have much use in low-level OS code), and for programming complex systems, SML, OCaml, Haskell and C# are far better. Don't you wish your mail program and web browser were made in a language where buffer overflows can't happen? Do you see any reason these need to be programmed in a close-to-the-iron programming model? Before you say "performance", please look at the figures: once you get to something more complex than micro-benchmarks, SML (using the MLton compiler) and OCaml produce code that matches typical C++ for performance.

As for C++ syntax and semantics, my main gripe is with the syntax. As far as I know, no C++ parser is fully compliant with the standard (which gives informal disambiguation rules instead of an unambiguous grammar). This is not only a problem for compiler writers, it is also a problem for programmers: if the syntax is so complex, there is a good chance that even programmers can't read or write code correctly, and you risk that different compilers parse the same code differently.

Lack of runtime bounds checks, manual memory management etc. bothers me less. It allows the programmer close control over what code is generated, which is good when it is really needed.
It rarely is, though, and few programmers understand C++ well enough to really do it (once they go outside the plain C core of C++).

C++ is not a language. C++ is something like four programming languages, each with its own syntax, each with its own ideology, each one designed to solve a scope of problems, slapped together with no coherent vision whatsoever. Let's see. In C++ we have:

- C
- Macros
- object-oriented additions to C
- Templates

While I really like C (with all its problems and catches), there is no gain for me in using C++ over C. The additional effort I have when using 'proper C++' weighs far more than the gains. Not to mention the problems with the ABI that still exist as of 2007. For low-level problems I use C. For rapid prototyping I use Python. For high-level problems I use Java (or C#).

Now D, however... It is everything C++ should have been: object orientation, garbage collection, generics, collection iterators (foreach). And while D gives me all the sweets of a managed language like Java or C#, it actually compiles to native machine code. Yay! Compatibility with C. Yay! I see no need in riding a dying horse. Why try to correct all the mistakes made in C++ when there is a worthy successor ready? By the way, the gdc package for Debian should be available in unstable as of yesterday, I think.

Edited 2007-08-14 08:10

Pfeifer, you have object orientation in C++ too, you can have garbage collection, you have generics (templates), you can have foreach**. In short, all you've listed from D is available in C++ as well. Tell me again why I should use D?

**

Edited 2007-08-14 08:26
Maybe I didn't make it very clear in my first post; I don't like C++ because it consists of four (five if you need IDL) different programming languages, all forced together with a big roll of duct tape. Providing core features of a programming language through third-party "addons" is not going to help. Even Stroustrup realizes this and tries to push C++ forward (see tfa). D gives you binary compatibility to C (and thus access to all your old C libraries), gives you real interface definitions, includes (optional) garbage collection, a versatile collection iterator, generics/templates, all roled into one coherent language, one concept. But D is ready. C++09 will arrive 2009. Why wait when you can have D now? They are not part of "the core" because they don't need to be part of the core: as shown, they can be implemented in libraries. That's not an hack, that's how it's supposed to be. If you want those features, just use them, no one is stopping you: they're already implemented, tested and well working. Would it turn out to be C++ then? It is like Linux 1.0 which is very fast and snappy. Then you start to add functions and suddenly it is getting big and slow. Would this apply to D as well? I think that C++ is an abomination. You can do some very complex things in it, but that takes years of practice. Why not turn to a simpler language that allows you to do those complex things in a simple way then? I heard that it took years before a C++ compiler came that could compile ANSI C++. And still C++ compilers have problems following ANSI? How complex is C++ then? Jesus. And if C++ dies, you have spent much time learning those weird C++ things that gets useless in other languages. You have wasted your time. "Then you start to add functions and suddenly it is getting big and slow." Actually, that's one of the things C++ doesn't suffer from. (unless you by "slow" means the evolution of the language). 
C++ does a pretty good job of making sure you don't pay for features you don't use. There have been a million similar discussions on the net C vs C++ vs Java, etc. Programming language is a tool and you need better tools to create more complex software. Programmers have been using the same old tools for decades and the quality of software they create is abysmal. New languages/methodologies need to reflect the fact that the future systems need to support fine grained multithreading if they are to effectively utilise multicore CPUs with UMA or NUMA hardware configurations. Parallel programming is a difficult challenge and the current programming languages are not suited for such tasks. "Next generation C++" sounds like a farce to me. I think in order to get the optimal performance and rigidity, the hardware, system software and development tools, need to be designed as a unity, with references to one another. Creating a programming language in near vacuum will not achieve the flexibility required for high-performance, reliable and complex software. Parallel programming you say. Let's see: OpenMP auto-parallelization works with C/C++/Fortran (yes, Fortran!). Intel's new TBB library (low level lock-free/fine grained locks algorithms and data structures) works with C++. SIMD (SSE, 3dNow, AltiVec) based parallelization -- languages with inline assembly: C/C++. Doesn't sound like farce to me. C was like DOS: Broken, flawed, and an smashing success. C++ was like Windows 95: Backwards compatible with its predecessor but for the same reason ugly, overcomplex and full of gotchas. Ada was like OS/2: A superior alternative to its competitor but not backwards compatible. Not a bad choice from a technical standpoint but pretty much dead by now. Objective C was like a Mac: A world of its own. Pretty nice, but nobody notices. Java was like Unix: Ubiquitous, fairly well engineered, well entrenched in the big corps, will be around forever. 
C# was like Windows XP: a clean-room implementation of already-done ideas that somehow managed to coexist with its predecessors while gradually eating them. C++09 is like Windows ME: C++ with bells and whistles that fixes none of the glaring problems of its predecessor, piling even more shit on top of it instead. No wonder people are running away from the sinking ship that is C++09 and looking at alternatives such as D.
http://www.osnews.com/comments/18451
IWbemServices::DeleteClass method

The IWbemServices::DeleteClass method deletes the specified class from the current namespace. If a dynamic instance provider is associated with the class, the provider is unregistered and is no longer called for that class. Any classes that derive from the deleted class are also deleted, and their associated providers are unregistered. All outstanding static instances of the specified class and its subclasses are also deleted when the class is deleted. If a dynamic class provider provides the class, the success of the deletion depends on whether the provider supports class deletion.

Parameters

- strClass [in] Name of the class targeted for deletion.
- lFlags [in] One of the following values can be set. WBEM_FLAG_RETURN_IMMEDIATELY causes this to be a semisynchronous call; for more information, see Calling a Method. A separate flag indicates that the caller is a push provider.
- ppCallResult [out] If NULL, this parameter is not used. If ppCallResult is specified, it must be set to point to NULL on entry.

Return value

This method returns an HRESULT indicating the status of the method call. The following list describes the values that can be contained in the HRESULT. On failure, you can obtain any available information from the COM function GetErrorInfo. COM-specific error codes may also be returned if network problems cause you to lose the remote connection to Windows Management.

- WBEM_E_ACCESS_DENIED The current user does not have permission to delete classes.
- WBEM_E_FAILED This indicates other, unspecified errors.
- WBEM_E_INVALID_CLASS The specified class does not exist.
- WBEM_E_CLASS_HAS_CHILDREN Deleting this class would invalidate a subclass.
- WBEM_E_INVALID_OPERATION Deletion is not supported for the specified class. It may have been a system class or a class supplied by a dynamic provider that does not support class deletion.
- WBEM_E_INVALID_PARAMETER A specified parameter is not valid.
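As a rough illustration of how the semisynchronous flag and the call-result object described above fit together, here is a pseudocode sketch (my own, not SDK sample code; the class name and variable names are placeholders):

```
// pServices is an already-connected IWbemServices pointer.
pCallResult = NULL
hr = pServices->DeleteClass("MyCustomClass",          // strClass
                            WBEM_FLAG_RETURN_IMMEDIATELY,  // lFlags
                            NULL,                      // context
                            &pCallResult)              // must point to NULL on entry
if SUCCEEDED(hr):
    // The call returned immediately; poll the call result for the
    // final status of the delete operation.
    pCallResult->GetCallStatus(timeout, &status)
    // status now holds one of the HRESULT values listed above,
    // e.g. WBEM_E_INVALID_CLASS if the class does not exist.
    pCallResult->Release()
```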
https://msdn.microsoft.com/en-us/library/aa392099(v=vs.85).aspx
Here is an overview:
- CancelSave RevitAddInUtility reference
- Set up the RvtSamples application
- Fix errors in RvtSamples.txt
- Download RvtSamples and RevitLookup

I already described this same process during the Revit 2013 timeframe. Let's see if anything changed, or, better still, improved.

Compile and Install RevitLookup

My first step is always to compile, install and test RevitLookup. This went completely smoothly, so I have nothing to report on that. Please see below for a link to the version I compiled.

Set up the Revit API Assembly Paths

Before the introduction of Revit Onebox, I used to make copies of the Revit API assemblies for the different flavours. With the advent of Onebox, I opted for the simpler solution of using the Revit API DLLs path updater RevitAPIDllsPathUpdater.exe. Simply launch it, enter the sample location and DLL folder, and let it do its job. In my case, I entered the following paths:
- C:\a\lib\revit\2014\SDK\Samples\RevitAPIDllsPathUpdater.exe
- Sample location: C:\a\lib\revit\2014\SDK\Samples
- DLL folder: C:\Program Files\Autodesk\Revit Architecture 2014

It completes and reports that it "Replaced 169 files, skipped 0 files."

First Compilation Run Causes Expected Errors

With the correct assembly paths in place, it is time to open the Visual Studio solution SDKSamples2014.sln and compile the samples. The first run reports "Rebuild All: 166 succeeded, 3 failed, 0 skipped". This is no surprise, because there are some expected errors.

Set up the RevitAddInUtility Assembly Path

The first error is caused by the RevitAddInUtilitySample and says: "The type or namespace name 'Autodesk' could not be found (are you missing a using directive or an assembly reference?)" The RevitAddInUtilitySample references the RevitAddInUtility assembly.
It is also located in the Revit API assembly path, but RevitAPIDllsPathUpdater.exe does not take it into account, so you have to open that project and set the assembly path manually instead. In my case, the correct reference path to it is C:\Program Files\Autodesk\Revit Architecture 2014\RevitAddInUtility.dll.

PointCurveCreation Office Reference

The Massing PointCurveCreation sample references Microsoft.Office.Interop.Excel in order to interact with spreadsheets. I have not set up Office on my virtual machine, so that caused an error saying "The type or namespace name 'Office' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?)" For a quick ad hoc solution to this, I simply unloaded this one project for the time being.

CancelSave RevitAddInUtility Reference

The next problem occurs in the Events CancelSave sample: "The type or namespace name 'RevitAddIns' does not exist in the namespace 'Autodesk' (are you missing an assembly reference?)" Same resolution as above: set the RevitAddInUtility assembly path.

Wow! That was it! No more errors, just 345 warnings saying 'There was a mismatch between the processor architecture of the project being built "MSIL" and the processor architecture of the reference "RevitAPI, Version=2014'. I assume you can ignore those. This was a smoother compilation than ever before.

Set up the RvtSamples Application

I always install RvtSamples to load all the other SDK samples in case I ever want to test something in one of them. It saves you from installing them one by one, which might prove a lengthy process, there being well over a hundred of them. To do so, I first add two files to the RvtSamples project, its add-in manifest and its text file listing all the samples to load. In the text file, I replace the samples path by my real installation folder, globally replacing "Z:\SDK2013\Samples\" by "C:\a\lib\revit\2014\SDK\Samples\".
At the end, I add placeholders for my two include files, for the Autodesk Developer Network ADN and The Building Coder sample collections:

##include C:\a\lib\revit\2014\adn\src\AdnSamples.txt
#include C:\a\lib\revit\2014\bc\BcSamples.txt

The ADN samples are commented out, because we have not completed their migration yet. I already migrated The Building Coder samples to Revit 2014, though, so that include file is already active. I need to add the RvtSamples assembly path to its add-in manifest and install that in the Revit Add-Ins folder, and we are set to go.

Fix errors in RvtSamples.txt

As usual, the list of samples to load specified by RvtSamples.txt is not perfectly set up. Here are some of the add-ins causing errors on my system:
- RotateFramingObjects
- ProjectUnit (missing)
- GenericModelCreation
- ElementViewer
- PointCurveCreation (my fault)
- TraverseSystem
- CreateShared
- BarDescriptions (missing)
- StructuralLayerFunction

ProjectUnit is missing, presumably because the unit API changed in Revit 2014 and the sample has been removed. It is still listed in RvtSamples.txt, and should be removed there as well. Most of the others are caused by VB.NET samples listed in their 'bin' subfolder, whereas their assembly DLL really lives in 'bin/Debug', at least on my system, and vice versa in the case of CreateShared. After my first clean-up pass, the following still cause problems:
- GenericModelCreation
- PointCurveCreation (my fault)
- TraverseSystem
- CreateShared
- BarDescriptions (missing)
- StructuralLayerFunction

Again, these are almost all VB.NET samples. Something strange is going on with those. There is a testing switch you can set in the RvtSamples source code, actually: bool testClassName = true; Setting it to true turns up more errors:
- DeleteObject
- HelloRevit
- RotateFramingObjects
- MaterialProperties
- SlabProperties
- CreateBeamsColumnsBraces
- StructuralLayerFunction

I fixed some of these, but not all.
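For readers who have not written an add-in manifest before, a Revit .addin manifest for RvtSamples looks roughly like the sketch below. This is my own illustration, not the file shipped with the SDK: the assembly path, the GUID and the class name are placeholders that you would replace with your real values.

```xml
<?xml version="1.0" encoding="utf-8"?>
<RevitAddIns>
  <AddIn Type="Application">
    <Name>RvtSamples</Name>
    <!-- Path to the compiled RvtSamples assembly; adjust to your setup. -->
    <Assembly>C:\a\lib\revit\2014\SDK\Samples\RvtSamples\CS\bin\Debug\RvtSamples.dll</Assembly>
    <!-- Any unique GUID identifying this add-in; this one is a placeholder. -->
    <AddInId>11111111-2222-3333-4444-555555555555</AddInId>
    <!-- Fully qualified name of the external application class; illustrative only. -->
    <FullClassName>Revit.SDK.Samples.RvtSamples.CS.Application</FullClassName>
    <VendorId>TBC_</VendorId>
  </AddIn>
</RevitAddIns>
```

The file is saved with a .addin extension in the Revit Add-Ins folder, as mentioned above.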
Anyway, I'll stop fixing this for today, because I have other things to do as well. RvtSamples loads now, and most of the samples are available.

Download RvtSamples and RevitLookup

Here is my current version of RvtSamples, and also of RevitLookup, which was missing from some intermediate versions of the SDK samples. The RvtSamples package includes both the original and my modified add-in manifest and RvtSamples.txt sample list. You can compare them to see the changes I applied, and add analogous changes of your own for any other samples that you wish to activate. I hope this is of use to you.

This article was picked for TenLinks Daily. Actually, taking a closer look at the CAD digest listing, I currently count 53 of The Building Coder articles listed there. I was not aware of that before. Surprise, surprise :-)
http://thebuildingcoder.typepad.com/blog/2013/04/compiling-the-revit-2014-sdk.html
You may recall that templates generate content recursively. After the data is generated, each result is used as the new reference point for a nested iteration of the template. This is usually used to generate content in a tree or menu. Both the RDF and XML datasource types support recursion. For example, using this XML datasource:

<people>
  <group name="Male">
    <person name="Napoleon Bonaparte"/>
    <person name="Julius Caesar"/>
    <person name="Ferdinand Magellan"/>
  </group>
  <group name="Female">
    <person name="Cleopatra"/>
    <person name="Laura Secord"/>
  </group>
</people>

We could display this data in a flat list by using the right query:

<query expr="group/person/">

Or, we could display one level for the two groups, and use another level for each person:

<groupbox type="menu" datasources="people.xml" ref="*" querytype="xml">
  <template>
    <query expr="*"/>
    <action>
      <vbox uri="?" class="indent">
        <label value="?name"/>
      </vbox>
    </action>
  </template>
</groupbox>

In this simplified example, the XPath expression just gets the list of child elements of the reference node. For the outermost iteration, a vbox is created with a child label. Since the initial reference node is the root of the XML source document, the results are two elements, one for each group element. However, a further step is done to retrieve an additional level of nodes. As each group has children itself, each result (in this case, each group) becomes the reference point for a further iteration. The same query is executed again but using the groups generated from the previous execution of the query. This time, the query generates a result for each person in the XML source. The content of the action body is again generated for each result, but instead of being inserted inside the outermost groupbox, this new content is inserted inside the content generated from the previous iteration. The content is always inserted directly inside the element with the uri attribute.
The result is output like the following:

<groupbox>
  ...
  <vbox id="row2" container="true" empty="false" class="indent">
    <label value="Male"/>
    <vbox id="row4" class="indent"><label value="Napoleon Bonaparte"/></vbox>
    <vbox id="row5" class="indent"><label value="Julius Caesar"/></vbox>
    <vbox id="row6" class="indent"><label value="Ferdinand Magellan"/></vbox>
  </vbox>
  <vbox id="row3" container="true" empty="false" class="indent">
    <label value="Female"/>
    <vbox id="row7" class="indent"><label value="Cleopatra"/></vbox>
    <vbox id="row8" class="indent"><label value="Laura Secord"/></vbox>
  </vbox>
</groupbox>

Note how similar content corresponding to the action body is created for the groups as well as for the people. The template builder has also added container and empty attributes to the groups. This is done for all nodes that have children, to indicate that the node contains generated children as well as whether the node is empty. These hints are used for trees, but they can also be used in a stylesheet to provide a different appearance for containers with children, empty containers, as well as non-containers. In this example, both the parent groups and the child people are displayed the same way. You could use multiple rules as well, in order to generate different output for each level. In this next example, an assign element is used to assign the local name of the node to the variable ?type. In an XPath expression, the period refers to the context node. For an assign element, the context is the result node. The local-name function retrieves the tag of the element without the namespace prefix. In this case, there isn't a namespace prefix, so the name function could be used instead.
<vbox datasources="people.xml" ref="*" querytype="xml">
  <template>
    <query expr="*">
      <assign var="?type" expr="local-name(.)"/>
    </query>
    <rule>
      <where subject="?type" rel="equals" value="group"/>
      <action>
        <groupbox uri="?">
          <caption label="?name"/>
        </groupbox>
      </action>
    </rule>
    <rule>
      <action>
        <label uri="?" value="?name"/>
      </action>
    </rule>
  </template>
</vbox>

The first rule contains a where clause which matches only those results that have a type of group. As the type is bound to the local name of the result node, this will match only the first level of results from the XML data, that is, those with the group tag. The second rule has no where conditions, so it matches all remaining results. The output for groups is a groupbox with a caption containing the name. The output for non-groups is a label. You could further expand this process for other levels.

Disabling Recursion

The recursion on a template occurs automatically. In fact, it has occurred with all of the examples so far. However, in most cases, there either aren't any children or the next iteration of the query doesn't return any results, so no output is generated. Sometimes, you will not want a template to generate recursive content. You can do this by adding a flag:

<vbox datasources="people.xml" ref="*" querytype="xml" flags="dont-recurse">

Here, the flags attribute is set to dont-recurse. This disables the recursion and only generates one level of results.
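The same recursive generation scheme can be mimicked outside of XUL. The following Python sketch is my own illustration (it is not part of this guide, and it uses ElementTree rather than the real template builder): it runs the same child-element query at each level and nests each new batch of output inside the content generated by the previous iteration, just as the builder does.

```python
import xml.etree.ElementTree as ET

PEOPLE = """<people>
  <group name="Male">
    <person name="Napoleon Bonaparte"/>
    <person name="Julius Caesar"/>
  </group>
  <group name="Female">
    <person name="Cleopatra"/>
  </group>
</people>"""

def generate(ref, depth=0):
    """Mimic the template builder: run the query ('*', i.e. all child
    elements) against the reference node, emit one vbox/label per result,
    then recurse using each result as the new reference point."""
    lines = []
    for result in ref.findall("*"):  # corresponds to <query expr="*"/>
        indent = "  " * depth
        lines.append(f'{indent}<vbox class="indent"><label value="{result.get("name")}"/>')
        lines.extend(generate(result, depth + 1))  # recursive iteration
        lines.append(f"{indent}</vbox>")
    return lines

root = ET.fromstring(PEOPLE)
output = "\n".join(generate(root))
print(output)
```

Each person's vbox ends up nested inside its group's vbox, analogous to the generated content shown earlier.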
https://developer.mozilla.org/en-US/docs/Archive/Mozilla/XUL/Template_Guide/Using_Recursive_Templates
MarkupValidator/XML Limitations

This page contains the draft text for a new page that may potentially be added to the MarkupValidator website. It is being placed on this wiki at the suggestion of Olivier Thereaux in order to allow others from the validator mailing list to contribute to its development. Please feel free to join in.

The Issue

Currently, the MarkupValidator's XHTML validation results page contains the following line: Note: The Validator XML support has some limitations. The link points to a page on the OpenJade website that contains highly technical information on OpenSP's limitations. Rather than having the link point directly to the OpenJade website, I think it would be less confusing to the people who use the validator (and post questions to the mailing list ;-) ) if the link pointed to a more user-friendly page that was hosted on the validator website. The page would list some of the validator's more popular XML limitations in a way that was easy for users of the validator to understand. This intermediate page would then link to the OpenJade website.

Draft text

In addition to validating documents against their specified DTD, the W3C validator also tests documents that use an XHTML doctype for XML well-formedness. At present, the validator's support for checking XML well-formedness has some known limitations. This means that certain documents that are labeled as valid by the validator will cause conforming XML user agents to throw fatal errors. Most notably, many web browsers will simply refuse to load pages that are served as XML (using the application/xhtml+xml MIME type) if they are not well-formed. Instead they simply display an error. Below are some of the more well-known limitations.

The use of "&" and "<" as data is marked as a warning. It should be an error. In XML and SGML the ampersand and left angle-bracket characters have special meaning: they are used to introduce entity or character references (e.g. &quot; or &#52;) and tags.
Examples:

We need more R & D
should be replaced with
We need more R &amp; D

Everyone knows that 1 < 2
should be replaced with
Everyone knows that 1 &lt; 2

Please refer to the XML Specification for more detailed information.

XML declaration

If the XML declaration is present, it must appear at the very beginning of the document, i.e. it must not be preceded by anything, not even by whitespace or comments, and it must match the production for XMLDecl. However, the W3C validator considers <?xml encoding="utf-8" version="1.0"?> to be valid.

Adjacent attribute specifications

In XML documents, attribute specifications must be separated by whitespace, according to the productions for EmptyElemTag and STag. However, the W3C validator considers documents that violate this to be valid.

"--" in comments

"--" must not appear inside a comment, according to the production for Comment. However, the W3C validator considers such comments to be valid.

Character encoding declaration in meta element only

The XHTML 1.0 specification states that, even for XHTML documents that are delivered as text/html, it is insufficient to declare their character encoding in a meta element. The W3C validator accepts such a declaration anyway.

Namespace declaration

The XHTML 1.0 specification requires a strictly conforming document to contain an xmlns declaration for the XHTML namespace. The W3C validator does not enforce this criterion.

System identifier in PUBLIC doctype declaration

In XML, unlike SGML, a PUBLIC doctype declaration cannot have only an FPI (formal public identifier); it must also have an SI (system identifier). In XML mode, OpenSP (and thus the validator) marks this as a warning; it should be an error.

Next well-known limitation goes here

Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nunc porta. In sem neque, bibendum ac, sagittis vitae, malesuada ac, velit. Aenean nec metus sed wisi condimentum placerat. Etiam ullamcorper. Nam eu mi quis diam pretium vehicula. Nulla non nulla at diam convallis aliquam. Quisque non elit non lacus vehicula lobortis.
Nam arcu mauris, mattis et, hendrerit sit amet, laoreet vitae, pede. Nulla consequat, magna et elementum vestibulum, sem nibh ultrices turpis, in faucibus turpis velit vel arcu. Vestibulum nonummy posuere purus.
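The well-formedness limitations listed above are easy to reproduce with any conforming XML parser. This short Python sketch (my own illustration, not part of the draft) feeds a few of the constructs the validator accepts to Python's built-in expat-based parser, which correctly rejects them as fatal errors:

```python
import xml.etree.ElementTree as ET

def well_formed(doc):
    """Return True if doc parses as well-formed XML, False otherwise."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

# Unescaped "&" used as data: a fatal error in XML.
assert not well_formed("<p>We need more R & D</p>")
assert well_formed("<p>We need more R &amp; D</p>")

# Unescaped "<" used as data: a fatal error in XML.
assert not well_formed("<p>1 < 2</p>")
assert well_formed("<p>1 &lt; 2</p>")

# "--" inside a comment is forbidden by the Comment production.
assert not well_formed("<p><!-- a -- b --></p>")

# An XML declaration that does not match the XMLDecl production
# (version must come before encoding).
assert not well_formed('<?xml encoding="utf-8" version="1.0"?><p/>')

print("all checks behave as the XML specification requires")
```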
http://www.w3.org/wiki/MarkupValidator/XML_Limitations
import "runtime"

Package runtime contains operations that interact with Go's runtime system, such as functions to control goroutines. It also includes the low-level type information used by the reflect package; see reflect's documentation for the programmable interface to the run-time type system.

The GORACE environment variable configures the race detector, for programs built using -race; see the race detector documentation for details.

Package files: alg.go atomic_pointer.go cgo.go cgo_mmap.go cgo_sigaction.go cgocall.go cgocallback.go cgocheck.go chan.go compiler.go complex.go cpuflags.go cpuflags_amd64.go cpuprof.go cputicks.go debug.go debugcall.go debuglog.go debuglog_off.go defs_linux_amd64.go env_posix.go error.go extern.go fastlog2.go fastlog2table.go float.go hash64.go heapdump.go iface.go lfstack.go lfstack_64bit.go lock_futex.go malloc.go map.go map_fast32.go map_fast64.go map_faststr.go mbarrier.go mbitmap.go mcache.go mcentral.go mem_linux.go mfinal.go mfixalloc.go mgc.go mgclarge.go mgcmark.go mgcscavenge.go mgcstack.go mgcsweep.go mgcsweepbuf.go mgcwork.go mheap.go mprof.go msan0.go msize.go mstats.go mwbbuf.go netpoll.go netpoll_epoll.go os_linux.go os_linux_generic.go os_linux_noauxv.go os_nonopenbsd.go panic.go plugin.go print.go proc.go profbuf.go proflabel.go race0.go rdebug.go relax_stub.go runtime.go runtime1.go runtime2.go rwmutex.go select.go sema.go signal_amd64x.go signal_linux_amd64.go signal_sighandler.go signal_unix.go sigqueue.go sigqueue_note.go sigtab_linux_generic.go sizeclasses.go slice.go softfloat64.go stack.go string.go stubs.go stubs2.go stubs3.go stubs_amd64x.go stubs_linux.go symtab.go sys_nonppc64x.go sys_x86.go time.go timestub.go timestub2.go trace.go traceback.go type.go typekind.go utf8.go vdso_elf64.go vdso_linux.go vdso_linux_amd64.go write_err.go

const ( c0 = uintptr((8-sys.PtrSize)/4*2860486313 + (sys.PtrSize-4)/4*33054211828000289) c1 = uintptr((8-sys.PtrSize)/4*3267000013 + (sys.PtrSize-4)/4*23344194077549503) ) type algorithms - known to
compiler const ( alg_NOEQ = iota alg_MEM0 alg_MEM8 alg_MEM16 alg_MEM32 alg_MEM64 alg_MEM128 alg_STRING alg_INTER alg_NILINTER alg_FLOAT32 alg_FLOAT64 alg_CPLX64 alg_CPLX128 alg_max ) const ( maxAlign = 8 hchanSize = unsafe.Sizeof(hchan{}) + uintptr(-int(unsafe.Sizeof(hchan{}))&(maxAlign-1)) debugChan = false ) Offsets into internal/cpu records for use in assembly. const ( offsetX86HasAVX2 = unsafe.Offsetof(cpu.X86.HasAVX2) offsetX86HasERMS = unsafe.Offsetof(cpu.X86.HasERMS) offsetX86HasSSE2 = unsafe.Offsetof(cpu.X86.HasSSE2) offsetARMHasIDIVA = unsafe.Offsetof(cpu.ARM.HasIDIVA) ) const ( debugCallSystemStack = "executing on Go runtime stack" debugCallUnknownFunc = "call from unknown function" debugCallRuntime = "call from within the Go runtime" debugCallUnsafePoint = "call not at safe point" ) const ( debugLogUnknown = 1 + iota debugLogBoolTrue debugLogBoolFalse debugLogInt debugLogUint debugLogHex debugLogPtr debugLogString debugLogConstString debugLogStringOverflow debugLogPC debugLogTraceback ) const ( // debugLogHeaderSize is the number of bytes in the framing // header of every dlog record. debugLogHeaderSize = 2 // debugLogSyncSize is the number of bytes in a sync record. 
debugLogSyncSize = debugLogHeaderSize + 2*8 ) const ( _EINTR = 0x4 _EAGAIN = 0xb _ENOMEM = 0xc _PROT_NONE = 0x0 _PROT_READ = 0x1 _PROT_WRITE = 0x2 _PROT_EXEC = 0x4 _MAP_ANON = 0x20 _MAP_PRIVATE = 0x2 _MAP_FIXED = 0x10 _MADV_DONTNEED = 0x4 _MADV_FREE = 0x8 _MADV_HUGEPAGE = 0xe _MADV_NOHUGEPAGE = 0xf _SA_RESTART = 0x10000000 _SA_ONSTACK = 0x8000000 _SA_RESTORER = 0x4000000 _SA_SIGINFO = 0x4 _SIGHUP = 0x1 _SIGINT = 0x2 _SIGQUIT = 0x3 _SIGILL = 0x4 _SIGTRAP = 0x5 _SIGABRT = 0x6 _SIGBUS = 0x7 _SIGFPE = 0x8 _SIGKILL = 0x9 _SIGUSR1 = 0xa _SIGSEGV = 0xb _SIGUSR2 = 0xc _SIGPIPE = 0xd _SIGALRM = 0xe _SIGSTKFLT = 0x10 _SIGCHLD = 0x11 _SIGCONT = 0x12 _SIGSTOP = 0x13 _SIGTSTP = 0x14 _SIGTTIN = 0x15 _SIGTTOU = 0x16 _SIGURG = 0x17 _SIGXCPU = 0x18 _SIGXFSZ = 0x19 _SIGVTALRM = 0x1a _SIGPROF = 0x1b _SIGWINCH = 0x1c _SIGIO = 0x1d _SIGPWR = 0x1e _SIGSYS = 0x1f _FPE_INTDIV = 0x1 _FPE_INTOVF = 0x2 _FPE_FLTDIV = 0x3 _FPE_FLTOVF = 0x4 _FPE_FLTUND = 0x5 _FPE_FLTRES = 0x6 _FPE_FLTINV = 0x7 _FPE_FLTSUB = 0x8 _BUS_ADRALN = 0x1 _BUS_ADRERR = 0x2 _BUS_OBJERR = 0x3 _SEGV_MAPERR = 0x1 _SEGV_ACCERR = 0x2 _ITIMER_REAL = 0x0 _ITIMER_VIRTUAL = 0x1 _ITIMER_PROF = 0x2 _EPOLLIN = 0x1 _EPOLLOUT = 0x4 _EPOLLERR = 0x8 _EPOLLHUP = 0x10 _EPOLLRDHUP = 0x2000 _EPOLLET = 0x80000000 _EPOLL_CLOEXEC = 0x80000 _EPOLL_CTL_ADD = 0x1 _EPOLL_CTL_DEL = 0x2 _EPOLL_CTL_MOD = 0x3 _AF_UNIX = 0x1 _F_SETFL = 0x4 _SOCK_DGRAM = 0x2 ) const ( _O_RDONLY = 0x0 _O_CLOEXEC = 0x80000 ) const ( // Constants for multiplication: four random odd 64-bit numbers. 
m1 = 16877499708836156737 m2 = 2820277070424839065 m3 = 9497967016996688599 m4 = 15839092249703872147 ) const ( fieldKindEol = 0 fieldKindPtr = 1 fieldKindIface = 2 fieldKindEface = 3 tagEOF = 0 tagObject = 1 tagOtherRoot = 2 tagType = 3 tagGoroutine = 4 tagStackFrame = 5 tagParams = 6 tagFinalizer = 7 tagItab = 8 tagOSThread = 9 tagMemStats = 10 tagQueuedFinalizer = 11 tagData = 12 tagBSS = 13 tagDefer = 14 tagPanic = 15 tagMemProf = 16 tagAllocSample = 17 ) Cache of types that have been serialized already. We use a type's hash field to pick a bucket. Inside a bucket, we keep a list of types that have been serialized so far, most recently used first. Note: when a bucket overflows we may end up serializing a type more than once. That's ok. const ( typeCacheBuckets = 256 typeCacheAssoc = 4 ) const ( // addrBits is the number of bits needed to represent a virtual address. // // See heapAddrBits for a table of address space sizes on // various architectures. 48 bits is enough for all // architectures except s390x. // // On AMD64, virtual addresses are 48-bit (or 57-bit) numbers sign extended to 64. // We shift the address left 16 to eliminate the sign extended part and make // room in the bottom for the count. // // On s390x, virtual addresses are 64-bit. There's not much we // can do about this, so we just hope that the kernel doesn't // get to really high addresses and panic if it does. addrBits = 48 // In addition to the 16 bits taken from the top, we can take 3 from the // bottom, because node must be pointer-aligned, giving a total of 19 bits // of count. cntBits = 64 - addrBits + 3 // On AIX, 64-bit addresses are split into 36-bit segment number and 28-bit // offset in segment. Segment numbers in the range 0x0A0000000-0x0AFFFFFFF(LSA) // are available for mmap. // We assume all lfnode addresses are from memory allocated with mmap. // We use one bit to distinguish between the two ranges. 
aixAddrBits = 57 aixCntBits = 64 - aixAddrBits + 3 ) const ( mutex_unlocked = 0 mutex_locked = 1 mutex_sleeping = 2 active_spin = 4 active_spin_cnt = 30 passive_spin = 1 ) const ( debugMalloc = false maxTinySize = _TinySize tinySizeClass = _TinySizeClass maxSmallSize = _MaxSmallSize pageShift = _PageShift pageSize = _PageSize pageMask = _PageMask // By construction, single page spans of the smallest object class // have the most objects per span. maxObjsPerSpan = pageSize / 8 concurrentSweep = _ConcurrentSweep _PageSize = 1 << _PageShift _PageMask = _PageSize - 1 // _64bit = 1 on 64-bit systems, 0 on 32-bit systems _64bit = 1 << (^uintptr(0) >> 63) / 2 // Tiny allocator parameters, see "Tiny allocator" comment in malloc.go. _TinySize = 16 _TinySizeClass = int8(2) _FixAllocChunk = 16 << 10 // Chunk size for FixAlloc // Per-P, per order stack segment cache size. _StackCacheSize = 32 * 1024 // Number of orders that get caching. Order 0 is FixedStack // and each successive order is twice as large. // We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks // will be allocated directly. // Since FixedStack is different on different systems, we // must vary NumStackOrders to keep the same maximum cached size. // OS | FixedStack | NumStackOrders // -----------------+------------+--------------- // linux/darwin/bsd | 2KB | 4 // windows/32 | 4KB | 3 // windows/64 | 8KB | 2 // plan9 | 4KB | 3 _NumStackOrders = 4 - sys.PtrSize/4*sys.GoosWindows - 1*sys.GoosPlan9 // heapAddrBits is the number of bits in a heap address. On // amd64, addresses are sign-extended beyond heapAddrBits. On // other arches, they are zero-extended. // // On most 64-bit platforms, we limit this to 48 bits based on a // combination of hardware and OS limitations. // // amd64 hardware limits addresses to 48 bits, sign-extended // to 64 bits. Addresses where the top 16 bits are not either // all 0 or all 1 are "non-canonical" and invalid. 
Because of // these "negative" addresses, we offset addresses by 1<<47 // (arenaBaseOffset) on amd64 before computing indexes into // the heap arenas index. In 2017, amd64 hardware added // support for 57 bit addresses; however, currently only Linux // supports this extension and the kernel will never choose an // address above 1<<47 unless mmap is called with a hint // address above 1<<47 (which we never do). // // arm64 hardware (as of ARMv8) limits user addresses to 48 // bits, in the range [0, 1<<48). // // ppc64, mips64, and s390x support arbitrary 64 bit addresses // in hardware. On Linux, Go leans on stricter OS limits. Based // on Linux's processor.h, the user address space is limited as // follows on 64-bit architectures: // // Architecture Name Maximum Value (exclusive) // --------------------------------------------------------------------- // amd64 TASK_SIZE_MAX 0x007ffffffff000 (47 bit addresses) // arm64 TASK_SIZE_64 0x01000000000000 (48 bit addresses) // ppc64{,le} TASK_SIZE_USER64 0x00400000000000 (46 bit addresses) // mips64{,le} TASK_SIZE64 0x00010000000000 (40 bit addresses) // s390x TASK_SIZE 1<<64 (64 bit addresses) // // These limits may increase over time, but are currently at // most 48 bits except on s390x. On all architectures, Linux // starts placing mmap'd regions at addresses that are // significantly below 48 bits, so even if it's possible to // exceed Go's 48 bit limit, it's extremely unlikely in // practice. // // On aix/ppc64, the limits is increased to 1<<60 to accept addresses // returned by mmap syscall. These are in range: // 0x0a00000000000000 - 0x0afffffffffffff // // On 32-bit platforms, we accept the full 32-bit address // space because doing so is cheap. // mips32 only has access to the low 2GB of virtual memory, so // we further limit it to 31 bits. // // WebAssembly currently has a limit of 4GB linear memory. 
heapAddrBits = (_64bit*(1-sys.GoarchWasm)*(1-sys.GoosAix))*48 + (1-_64bit+sys.GoarchWasm)*(32-(sys.GoarchMips+sys.GoarchMipsle)) + 60*sys.GoosAix // maxAlloc is the maximum size of an allocation. On 64-bit, // it's theoretically possible to allocate 1<<heapAddrBits bytes. On // 32-bit, however, this is one less than 1<<32 because the // number of bytes in the address space doesn't actually fit // in a uintptr. maxAlloc = (1 << heapAddrBits) - (1-_64bit)*1 // heapArenaBytes is the size of a heap arena. The heap // consists of mappings of size heapArenaBytes, aligned to // heapArenaBytes. The initial heap mapping is one arena. // // This is currently 64MB on 64-bit non-Windows and 4MB on // 32-bit and on Windows. We use smaller arenas on Windows // because all committed memory is charged to the process, // even if it's not touched. Hence, for processes with small // heaps, the mapped arena space needs to be commensurate. // This is particularly important with the race detector, // since it significantly amplifies the cost of committed // memory. heapArenaBytes = 1 << logHeapArenaBytes // logHeapArenaBytes is log_2 of heapArenaBytes. For clarity, // prefer using heapArenaBytes where possible (we need the // constant to compute some other constants). logHeapArenaBytes = (6+20)*(_64bit*(1-sys.GoosWindows)*(1-sys.GoosAix)*(1-sys.GoarchWasm)) + (2+20)*(_64bit*sys.GoosWindows) + (2+20)*(1-_64bit) + (8+20)*sys.GoosAix + (2+20)*sys.GoarchWasm // heapArenaBitmapBytes is the size of each heap arena's bitmap. heapArenaBitmapBytes = heapArenaBytes / (sys.PtrSize * 8 / 2) pagesPerArena = heapArenaBytes / pageSize // arenaL1Bits is the number of bits of the arena number // covered by the first level arena map. // // This number should be small, since the first level arena // map requires PtrSize*(1<<arenaL1Bits) of space in the // binary's BSS. It can be zero, in which case the first level // index is effectively unused. 
There is a performance benefit // to this, since the generated code can be more efficient, // but comes at the cost of having a large L2 mapping. // // We use the L1 map on 64-bit Windows because the arena size // is small, but the address space is still 48 bits, and // there's a high cost to having a large L2. // // We use the L1 map on aix/ppc64 to keep the same L2 value // as on Linux. arenaL1Bits = 6*(_64bit*sys.GoosWindows) + 12*sys.GoosAix // arenaL2Bits is the number of bits of the arena number // covered by the second level arena index. // // The size of each arena map allocation is proportional to // 1<<arenaL2Bits, so it's important that this not be too // large. 48 bits leads to 32MB arena index allocations, which // is about the practical threshold. arenaL2Bits = heapAddrBits - logHeapArenaBytes - arenaL1Bits // arenaL1Shift is the number of bits to shift an arena frame // number by to compute an index into the first level arena map. arenaL1Shift = arenaL2Bits // arenaBits is the total bits in a combined arena map index. // This is split between the index into the L1 arena map and // the L2 arena map. arenaBits = arenaL1Bits + arenaL2Bits // arenaBaseOffset is the pointer value that corresponds to // index 0 in the heap arena map. // // On amd64, the address space is 48 bits, sign extended to 64 // bits. This offset lets us handle "negative" addresses (or // high addresses if viewed as unsigned). // // On other platforms, the user address space is contiguous // and starts at 0, so no offset is necessary. arenaBaseOffset uintptr = sys.GoarchAmd64 * (1 << 47) // Max number of threads to run garbage collection. // 2, 3, and 4 are all plausible maximums depending // on the hardware details of the machine. The garbage // collector scales well to 32 cpus. _MaxGcproc = 32 // minLegalPointer is the smallest possible legal pointer. // This is the smallest possible architectural page size, // since we assume that the first page is never mapped. 
// // This should agree with minZeroPage in the compiler. minLegalPointer uintptr = 4096 ) const ( // Maximum number of key/elem pairs a bucket can hold. bucketCntBits = 3 bucketCnt = 1 << bucketCntBits // Maximum average load of a bucket that triggers growth is 6.5. // Represent as loadFactorNum/loadFactDen, to allow integer math. loadFactorNum = 13 loadFactorDen = 2 // Maximum key or elem size to keep inline (instead of mallocing per element). // Must fit in a uint8. // Fast versions cannot handle big elems - the cutoff size for // fast versions in cmd/compile/internal/gc/walk.go must be at most this elem. maxKeySize = 128 maxElemSize = 128 // data offset should be the size of the bmap struct, but needs to be // aligned correctly. For amd64p32 this means 64-bit alignment // even though pointers are 32 bit. dataOffset = unsafe.Offsetof(struct { b bmap v int64 }{}.v) // Possible tophash values. We reserve a few possibilities for special marks. // Each bucket (including its overflow buckets, if any) will have either all or none of its // entries in the evacuated* states (except during the evacuate() method, which only happens // during map writes and thus no one else can observe the map during that time). emptyRest = 0 // this cell is empty, and there are no more non-empty cells at higher indexes or overflows. emptyOne = 1 // this cell is empty evacuatedX = 2 // key/elem is valid. Entry has been evacuated to first half of larger table. evacuatedY = 3 // same as above, but evacuated to second half of larger table. evacuatedEmpty = 4 // cell is empty, bucket is evacuated. minTopHash = 5 // minimum tophash for a normal filled cell. 
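The loadFactorNum/loadFactorDen pair above exists so the 6.5 growth threshold can be checked without floating point. A minimal sketch of that integer test, mirroring the runtime's overLoadFactor check with the same constant values:

```go
package main

import "fmt"

// Same values as the map constants above: 8-entry buckets, 6.5 = 13/2.
const (
	bucketCntBits = 3
	bucketCnt     = 1 << bucketCntBits
	loadFactorNum = 13
	loadFactorDen = 2
)

// overLoadFactor reports whether count items spread over 1<<B buckets
// exceed the 6.5 average-load threshold, using only integer math.
func overLoadFactor(count int, B uint8) bool {
	return count > bucketCnt && uint64(count) > loadFactorNum*(uint64(1)<<B)/loadFactorDen
}

func main() {
	// With B=3 (8 buckets) the threshold is 13*8/2 = 52 entries.
	fmt.Println(overLoadFactor(52, 3))
	fmt.Println(overLoadFactor(53, 3))
}
```

Eight buckets hold up to 52 entries before the map grows; the 53rd crosses the threshold.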
// flags iterator = 1 // there may be an iterator using buckets oldIterator = 2 // there may be an iterator using oldbuckets hashWriting = 4 // a goroutine is writing to the map sameSizeGrow = 8 // the current map growth is to a new map of the same size // sentinel bucket ID for iterator checks noCheck = 1<<(8*sys.PtrSize) - 1 ) const ( bitPointer = 1 << 0 bitScan = 1 << 4 heapBitsShift = 1 // shift offset between successive bitPointer or bitScan entries wordsPerBitmapByte = 8 / 2 // heap words described by one bitmap byte // all scan/pointer bits in a byte bitScanAll = bitScan | bitScan<<heapBitsShift | bitScan<<(2*heapBitsShift) | bitScan<<(3*heapBitsShift) bitPointerAll = bitPointer | bitPointer<<heapBitsShift | bitPointer<<(2*heapBitsShift) | bitPointer<<(3*heapBitsShift) ) const ( _EACCES = 13 _EINVAL = 22 ) const ( _DebugGC = 0 _ConcurrentSweep = true _FinBlockSize = 4 * 1024 // sweepMinHeapDistance is a lower bound on the heap distance // (in bytes) reserved for concurrent sweeping between GC // cycles. sweepMinHeapDistance = 1024 * 1024 ) const ( fixedRootFinalizers = iota fixedRootFreeGStacks fixedRootCount // rootBlockBytes is the number of bytes to scan per data or // BSS root. rootBlockBytes = 256 << 10 // rootBlockSpans is the number of spans to scan per span // root. rootBlockSpans = 8 * 1024 // 64MB worth of spans // maxObletBytes is the maximum bytes of an object to scan at // once. Larger objects will be split up into "oblets" of at // most this size. Since we can scan 1–2 MB/ms, 128 KB bounds // scan preemption at ~100 µs. // // This must be > _MaxSmallSize so that the object base is the // span base. maxObletBytes = 128 << 10 // drainCheckThreshold specifies how many units of work to do // between self-preemption checks in gcDrain. Assuming a scan // rate of 1 MB/ms, this is ~100 µs. Lower values have higher // overhead in the scan loop (the scheduler check may perform // a syscall, so its overhead is nontrivial).
Higher values // make the system less responsive to incoming work. drainCheckThreshold = 100000 ) const ( // The background scavenger is paced according to these parameters. // // scavengePercent represents the portion of mutator time we're willing // to spend on scavenging in percent. // // scavengePageLatency is a worst-case estimate (order-of-magnitude) of // the time it takes to scavenge one (regular-sized) page of memory. // scavengeHugePageLatency is the same but for huge pages. // // scavengePagePeriod is derived from scavengePercent and scavengePageLatency, // and represents the average time between scavenging one page that we're // aiming for. scavengeHugePagePeriod is the same but for huge pages. // These constants are core to the scavenge pacing algorithm. scavengePercent = 1 // 1% scavengePageLatency = 10e3 // 10µs scavengeHugePageLatency = 10e3 // 10µs scavengePagePeriod = scavengePageLatency / (scavengePercent / 100.0) scavengeHugePagePeriod = scavengeHugePageLatency / (scavengePercent / 100.0) ) const ( gcSweepBlockEntries = 512 // 4KB on 64-bit gcSweepBufInitSpineCap = 256 // Enough for 1GB heap on 64-bit ) const ( _WorkbufSize = 2048 // in bytes; larger values result in less contention // workbufAlloc is the number of bytes to allocate at a time // for new workbufs. This must be a multiple of pageSize and // should be a multiple of _WorkbufSize. // // Larger values reduce workbuf allocation overhead. Smaller // values reduce heap fragmentation. workbufAlloc = 32 << 10 ) const ( numSpanClasses = _NumSizeClasses << 1 tinySpanClass = spanClass(tinySizeClass<<1 | 1) ) const ( _KindSpecialFinalizer = 1 _KindSpecialProfile = 2 ) const ( // wbBufEntries is the number of write barriers between // flushes of the write barrier buffer. // // This trades latency for throughput amortization. Higher // values amortize flushing overhead more, but increase the // latency of flushing. Higher values also increase the cache // footprint of the buffer.
// // TODO: What is the latency cost of this? Tune this value. wbBufEntries = 256 // wbBufEntryPointers is the number of pointers added to the // buffer by each write barrier. wbBufEntryPointers = 2 ) pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer goroutines respectively. The semaphore can be in the following states: pdReady - io readiness notification is pending; a goroutine consumes the notification by changing the state to nil. pdWait - a goroutine prepares to park on the semaphore, but not yet parked; the goroutine commits to park by changing the state to G pointer, or, alternatively, concurrent io notification changes the state to READY, or, alternatively, concurrent timeout/close changes the state to nil. G pointer - the goroutine is blocked on the semaphore; io notification or timeout/close changes the state to READY or nil respectively and unparks the goroutine. nil - nothing of the above. const ( pdReady uintptr = 1 pdWait uintptr = 2 ) const ( _FUTEX_PRIVATE_FLAG = 128 _FUTEX_WAIT_PRIVATE = 0 | _FUTEX_PRIVATE_FLAG _FUTEX_WAKE_PRIVATE = 1 | _FUTEX_PRIVATE_FLAG ) Clone, the Linux rfork. 
const ( _CLONE_VM = 0x100 _CLONE_FS = 0x200 _CLONE_FILES = 0x400 _CLONE_SIGHAND = 0x800 _CLONE_PTRACE = 0x2000 _CLONE_VFORK = 0x4000 _CLONE_PARENT = 0x8000 _CLONE_THREAD = 0x10000 _CLONE_NEWNS = 0x20000 _CLONE_SYSVSEM = 0x40000 _CLONE_SETTLS = 0x80000 _CLONE_PARENT_SETTID = 0x100000 _CLONE_CHILD_CLEARTID = 0x200000 _CLONE_UNTRACED = 0x800000 _CLONE_CHILD_SETTID = 0x1000000 _CLONE_STOPPED = 0x2000000 _CLONE_NEWUTS = 0x4000000 _CLONE_NEWIPC = 0x8000000 cloneFlags = _CLONE_VM | _CLONE_FS | _CLONE_FILES | _CLONE_SIGHAND | _CLONE_SYSVSEM | _CLONE_THREAD /* revisit - okay for now */ ) const ( _AT_NULL = 0 // End of vector _AT_PAGESZ = 6 // System physical page size _AT_HWCAP = 16 // hardware capability bit vector _AT_RANDOM = 25 // introduced in 2.6.29 _AT_HWCAP2 = 26 // hardware capability bit vector 2 ) const ( _SS_DISABLE = 2 _NSIG = 65 _SI_USER = 0 _SIG_BLOCK = 0 _SIG_UNBLOCK = 1 _SIG_SETMASK = 2 ) const ( deferHeaderSize = unsafe.Sizeof(_defer{}) minDeferAlloc = (deferHeaderSize + 15) &^ 15 minDeferArgs = minDeferAlloc - deferHeaderSize ) Keep a cached value to make gotraceback fast, since we call it on every call to gentraceback. The cached value is a uint32 in which the low bits are the "crash" and "all" settings and the remaining bits are the traceback value (0 off, 1 on, 2 include system). const ( tracebackCrash = 1 << iota tracebackAll tracebackShift = iota ) defined constants const ( // _Gidle means this goroutine was just allocated and has not // yet been initialized. _Gidle = iota // 0 // _Grunnable means this goroutine is on a run queue. It is // not currently executing user code. The stack is not owned. _Grunnable // 1 // _Grunning means this goroutine may execute user code. The // stack is owned by this goroutine. It is not on a run queue. // It is assigned an M and a P. _Grunning // 2 // _Gsyscall means this goroutine is executing a system call. // It is not executing user code. The stack is owned by this // goroutine. It is not on a run queue. 
It is assigned an M. _Gsyscall // 3 // _Gwaiting means this goroutine is blocked in the runtime. // It is not executing user code. It is not on a run queue, // but should be recorded somewhere (e.g., a channel wait // queue) so it can be ready()d when necessary. The stack is // not owned *except* that a channel operation may read or // write parts of the stack under the appropriate channel // lock. Otherwise, it is not safe to access the stack after a // goroutine enters _Gwaiting (e.g., it may get moved). _Gwaiting // 4 // _Gmoribund_unused is currently unused, but hardcoded in gdb // scripts. _Gmoribund_unused // 5 // _Gdead means this goroutine is currently unused. It may be // just exited, on a free list, or just being initialized. It // is not executing user code. It may or may not have a stack // allocated. The G and its stack (if any) are owned by the M // that is exiting the G or that obtained the G from the free // list. _Gdead // 6 // _Genqueue_unused is currently unused. _Genqueue_unused // 7 // _Gcopystack means this goroutine's stack is being moved. It // is not executing user code and is not on a run queue. The // stack is owned by the goroutine that put it in _Gcopystack. _Gcopystack // 8 // _Gscan combined with one of the above states other than // _Grunning indicates that GC is scanning the stack. The // goroutine is not executing user code and the stack is owned // by the goroutine that set the _Gscan bit. // // _Gscanrunning is different: it is used to briefly block // state transitions while GC signals the G to scan its own // stack. This is otherwise like _Grunning. // // atomicstatus&~Gscan gives the state the goroutine will // return to when the scan completes. _Gscan = 0x1000 _Gscanrunnable = _Gscan + _Grunnable // 0x1001 _Gscanrunning = _Gscan + _Grunning // 0x1002 _Gscansyscall = _Gscan + _Gsyscall // 0x1003 _Gscanwaiting = _Gscan + _Gwaiting // 0x1004 ) const ( // _Pidle means a P is not being used to run user code or the // scheduler. 
Typically, it's on the idle P list and available // to the scheduler, but it may just be transitioning between // other states. // // The P is owned by the idle list or by whatever is // transitioning its state. Its run queue is empty. _Pidle = iota // _Prunning means a P is owned by an M and is being used to // run user code or the scheduler. Only the M that owns this P // is allowed to change the P's status from _Prunning. The M // may transition the P to _Pidle (if it has no more work to // do), _Psyscall (when entering a syscall), or _Pgcstop (to // halt for the GC). The M may also hand ownership of the P // off directly to another M (e.g., to schedule a locked G). _Prunning // _Psyscall means a P is not running user code. It has // affinity to an M in a syscall but is not owned by it and // may be stolen by another M. This is similar to _Pidle but // uses lightweight transitions and maintains M affinity. // // Leaving _Psyscall must be done with a CAS, either to steal // or retake the P. Note that there's an ABA hazard: even if // an M successfully CASes its original P back to _Prunning // after a syscall, it must understand the P may have been // used by another M in the interim. _Psyscall // _Pgcstop means a P is halted for STW and owned by the M // that stopped the world. The M that stopped the world // continues to use its P, even in _Pgcstop. Transitioning // from _Prunning to _Pgcstop causes an M to release its P and // park. // // The P retains its run queue and startTheWorld will restart // the scheduler on Ps with non-empty run queues. _Pgcstop // _Pdead means a P is no longer used (GOMAXPROCS shrank). We // reuse Ps if GOMAXPROCS increases. A dead P is mostly // stripped of its resources, though a few things remain // (e.g., trace buffers). _Pdead ) Values for the flags field of a sigTabT. 
const ( _SigNotify = 1 << iota // let signal.Notify have signal, even if from kernel _SigKill // if signal.Notify doesn't take it, exit quietly _SigThrow // if signal.Notify doesn't take it, exit loudly _SigPanic // if the signal is from the kernel, panic _SigDefault // if the signal isn't explicitly requested, don't monitor it _SigGoExit // cause all runtime procs to exit (only used on Plan 9). _SigSetStack // add SA_ONSTACK to libc handler _SigUnblock // always unblock; see blockableSig _SigIgn // _SIG_DFL action is to ignore the signal ) const ( _TraceRuntimeFrames = 1 << iota // include frames for internal runtime functions. _TraceTrap // the initial PC, SP are from a trap, not a return PC from a call _TraceJumpStack // if traceback is on a systemstack, resume trace at g that called into it ) scase.kind values. Known to compiler. Changes here must also be made in src/cmd/compile/internal/gc/select.go's walkselect. const ( caseNil = iota caseRecv caseSend caseDefault ) const ( _SIG_DFL uintptr = 0 _SIG_IGN uintptr = 1 ) const ( sigIdle = iota sigReceiving sigSending ) const ( _MaxSmallSize = 32768 smallSizeDiv = 8 smallSizeMax = 1024 largeSizeDiv = 128 _NumSizeClasses = 67 _PageShift = 13 ) const ( mantbits64 uint = 52 expbits64 uint = 11 bias64 = -1<<(expbits64-1) + 1 nan64 uint64 = (1<<expbits64-1)<<mantbits64 + 1 inf64 uint64 = (1<<expbits64 - 1) << mantbits64 neg64 uint64 = 1 << (expbits64 + mantbits64) mantbits32 uint = 23 expbits32 uint = 8 bias32 = -1<<(expbits32-1) + 1 nan32 uint32 = (1<<expbits32-1)<<mantbits32 + 1 inf32 uint32 = (1<<expbits32 - 1) << mantbits32 neg32 uint32 = 1 << (expbits32 + mantbits32) ) const ( // StackSystem is a number of additional bytes to add // to each stack below the usual guard area for OS-specific // purposes like signal handling. Used on Windows, Plan 9, // and iOS because they do not use a separate stack. 
_StackSystem = sys.GoosWindows*512*sys.PtrSize + sys.GoosPlan9*512 + sys.GoosDarwin*sys.GoarchArm*1024 + sys.GoosDarwin*sys.GoarchArm64*1024 // The minimum size of stack used by Go code _StackMin = 2048 // The minimum stack size to allocate. // The hackery here rounds FixedStack0 up to a power of 2. _FixedStack0 = _StackMin + _StackSystem _FixedStack1 = _FixedStack0 - 1 _FixedStack2 = _FixedStack1 | (_FixedStack1 >> 1) _FixedStack3 = _FixedStack2 | (_FixedStack2 >> 2) _FixedStack4 = _FixedStack3 | (_FixedStack3 >> 4) _FixedStack5 = _FixedStack4 | (_FixedStack4 >> 8) _FixedStack6 = _FixedStack5 | (_FixedStack5 >> 16) _FixedStack = _FixedStack6 + 1 // Functions that need frames bigger than this use an extra // instruction to do the stack split check, to avoid overflow // in case SP - framesize wraps below zero. // This value can be no bigger than the size of the unmapped // space at zero. _StackBig = 4096 // The stack guard is a pointer this many bytes above the // bottom of the stack. _StackGuard = 880*sys.StackGuardMultiplier + _StackSystem // After a stack split check the SP is allowed to be this // many bytes below the stack guard. This saves an instruction // in the checking sequence for tiny frames. _StackSmall = 128 // The maximum number of bytes that a chain of NOSPLIT // functions can use. _StackLimit = _StackGuard - _StackSystem - _StackSmall ) const ( // stackDebug == 0: no logging // == 1: logging of per-stack operations // == 2: logging of per-frame operations // == 3: logging of per-word updates // == 4: logging of per-word reads stackDebug = 0 stackFromSystem = 0 // allocate stacks from system memory instead of the heap stackFaultOnFree = 0 // old stacks are mapped noaccess to detect use after free stackPoisonCopy = 0 // fill stack that should not be accessed with garbage, to detect bad dereferences during copy stackNoCache = 0 // disable per-P small stack caches // check the BP links during traceback. 
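The _FixedStack0.._FixedStack6 chain above is a compile-time version of the classic round-up-to-the-next-power-of-two bit smear. The same trick as a runtime function, with an assumed 512-byte _StackSystem for the example:

```go
package main

import "fmt"

// roundUpPow2 rounds n up to the next power of two by smearing the
// highest set bit rightward with successive shift-ORs, exactly as the
// _FixedStack0.._FixedStack6 constants do at compile time.
func roundUpPow2(n uint32) uint32 {
	n--
	n |= n >> 1
	n |= n >> 2
	n |= n >> 4
	n |= n >> 8
	n |= n >> 16
	return n + 1
}

func main() {
	// _StackMin (2048) plus a hypothetical 512-byte _StackSystem is 2560,
	// which rounds up to 4096.
	fmt.Println(roundUpPow2(2048 + 512))
}
```

The subtraction up front means an exact power of two maps to itself rather than the next one up.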
debugCheckBP = false ) const ( uintptrMask = 1<<(8*sys.PtrSize) - 1 // Goroutine preemption request. // Stored into g->stackguard0 to cause split stack check failure. // Must be greater than any real sp. // 0xfffffade in hex. stackPreempt = uintptrMask & -1314 // Thread is forking. // Stored into g->stackguard0 to cause split stack check failure. // Must be greater than any real sp. stackFork = uintptrMask & -1234 ) const ( maxUint = ^uint(0) maxInt = int(maxUint >> 1) ) PCDATA and FUNCDATA table indexes. See funcdata.h and ../cmd/internal/objabi/funcdata.go. const ( _ArgsSizeUnknown = -0x80000000 ) Event types in the trace, args are given in square brackets. const ( traceEvNone = 0 // unused traceEvBatch = 1 // start of per-P batch of events [pid, timestamp] traceEvFrequency = 2 // contains tracer timer frequency [frequency (ticks per second)] traceEvStack = 3 // stack [stack id, number of PCs, array of {PC, func string ID, file string ID, line}] traceEvGomaxprocs = 4 // current value of GOMAXPROCS [timestamp, GOMAXPROCS, stack id] traceEvProcStart = 5 // start of P [timestamp, thread id] traceEvProcStop = 6 // stop of P [timestamp] traceEvGCStart = 7 // GC start [timestamp, seq, stack id] traceEvGCDone = 8 // GC done [timestamp] traceEvGCSTWStart = 9 // GC STW start [timestamp, kind] traceEvGCSTWDone = 10 // GC STW done [timestamp] traceEvGCSweepStart = 11 // GC sweep start [timestamp, stack id] traceEvGCSweepDone = 12 // GC sweep done [timestamp, swept, reclaimed] traceEvGoCreate = 13 // goroutine creation [timestamp, new goroutine id, new stack id, stack id] traceEvGoStart = 14 // goroutine starts running [timestamp, goroutine id, seq] traceEvGoEnd = 15 // goroutine ends [timestamp] traceEvGoStop = 16 // goroutine stops (like in select{}) [timestamp, stack] traceEvGoSched = 17 // goroutine calls Gosched [timestamp, stack] traceEvGoPreempt = 18 // goroutine is preempted [timestamp, stack] traceEvGoSleep = 19 // goroutine calls Sleep [timestamp, stack] traceEvGoBlock = 20 // goroutine
blocks [timestamp, stack] traceEvGoUnblock = 21 // goroutine is unblocked [timestamp, goroutine id, seq, stack] traceEvGoBlockSend = 22 // goroutine blocks on chan send [timestamp, stack] traceEvGoBlockRecv = 23 // goroutine blocks on chan recv [timestamp, stack] traceEvGoBlockSelect = 24 // goroutine blocks on select [timestamp, stack] traceEvGoBlockSync = 25 // goroutine blocks on Mutex/RWMutex [timestamp, stack] traceEvGoBlockCond = 26 // goroutine blocks on Cond [timestamp, stack] traceEvGoBlockNet = 27 // goroutine blocks on network [timestamp, stack] traceEvGoSysCall = 28 // syscall enter [timestamp, stack] traceEvGoSysExit = 29 // syscall exit [timestamp, goroutine id, seq, real timestamp] traceEvGoSysBlock = 30 // syscall blocks [timestamp] traceEvGoWaiting = 31 // denotes that goroutine is blocked when tracing starts [timestamp, goroutine id] traceEvGoInSyscall = 32 // denotes that goroutine is in syscall when tracing starts [timestamp, goroutine id] traceEvHeapAlloc = 33 // memstats.heap_live change [timestamp, heap_alloc] traceEvNextGC = 34 // memstats.next_gc change [timestamp, next_gc] traceEvTimerGoroutine = 35 // denotes timer goroutine [timer goroutine id] traceEvFutileWakeup = 36 // denotes that the previous wakeup of this goroutine was futile [timestamp] traceEvString = 37 // string dictionary entry [ID, length, string] traceEvGoStartLocal = 38 // goroutine starts running on the same P as the last event [timestamp, goroutine id] traceEvGoUnblockLocal = 39 // goroutine is unblocked on the same P as the last event [timestamp, goroutine id, stack] traceEvGoSysExitLocal = 40 // syscall exit on the same P as the last event [timestamp, goroutine id, real timestamp] traceEvGoStartLabel = 41 // goroutine starts running with label [timestamp, goroutine id, seq, label string id] traceEvGoBlockGC = 42 // goroutine blocks on GC assist [timestamp, stack] traceEvGCMarkAssistStart = 43 // GC mark assist start [timestamp, stack] traceEvGCMarkAssistDone = 44 // GC 
mark assist done [timestamp] traceEvUserTaskCreate = 45 // trace.NewContext [timestamp, internal task id, internal parent task id, stack, name string] traceEvUserTaskEnd = 46 // end of a task [timestamp, internal task id, stack] traceEvUserRegion = 47 // trace.WithRegion [timestamp, internal task id, mode(0:start, 1:end), stack, name string] traceEvUserLog = 48 // trace.Log [timestamp, internal task id, key string id, stack, value string] traceEvCount = 49 ) const ( // Timestamps in trace are cputicks/traceTickDiv. // This makes absolute values of timestamp diffs smaller, // and so they are encoded in less number of bytes. // 64 on x86 is somewhat arbitrary (one tick is ~20ns on a 3GHz machine). // The suggested increment frequency for PowerPC's time base register is // 512 MHz according to Power ISA v2.07 section 6.2, so we use 16 on ppc64 // and ppc64le. // Tracing won't work reliably for architectures where cputicks is emulated // by nanotime, so the value doesn't matter for those architectures. traceTickDiv = 16 + 48*(sys.Goarch386|sys.GoarchAmd64|sys.GoarchAmd64p32) // Maximum number of PCs in a single stack trace. // Since events contain only stack id rather than whole stack trace, // we can allow quite large values here. traceStackSize = 128 // Identifier of a fake P that is used when we trace without a real P. traceGlobProc = -1 // Maximum number of bytes to encode uint64 in base-128. traceBytesPerNumber = 10 // Shift of the number of arguments in the first event byte. traceArgCountShift = 6 // Flag passed to traceGoPark to denote that the previous wakeup of this // goroutine was futile. For example, a goroutine was unblocked on a mutex, // but another goroutine got ahead and acquired the mutex before the first // goroutine is scheduled, so the first goroutine has to block again. // Such wakeups happen on buffered channels and sync.Mutex, // but are generally not interesting for end user. traceFutileWakeup byte = 128 ) Numbers fundamental to the encoding.
const ( runeError = '\uFFFD' // the "error" Rune or "Unicode replacement character" runeSelf = 0x80 // characters below Runeself are represented as themselves in a single byte. maxRune = '\U0010FFFF' // Maximum valid Unicode code point. ) Code points in the surrogate range are not valid for UTF-8. const ( surrogateMin = 0xD800 surrogateMax = 0xDFFF ) const ( t1 = 0x00 // 0000 0000 tx = 0x80 // 1000 0000 t2 = 0xC0 // 1100 0000 t3 = 0xE0 // 1110 0000 t4 = 0xF0 // 1111 0000 t5 = 0xF8 // 1111 1000 maskx = 0x3F // 0011 1111 mask2 = 0x1F // 0001 1111 mask3 = 0x0F // 0000 1111 mask4 = 0x07 // 0000 0111 rune1Max = 1<<7 - 1 rune2Max = 1<<11 - 1 rune3Max = 1<<16 - 1 // The default lowest and highest continuation byte. locb = 0x80 // 1000 0000 hicb = 0xBF // 1011 1111 ) const ( _AT_SYSINFO_EHDR = 33 _PT_LOAD = 1 /* Loadable program segment */ _PT_DYNAMIC = 2 /* Dynamic linking information */ _DT_NULL = 0 /* Marks end of dynamic section */ _DT_HASH = 4 /* Dynamic symbol hash table */ _DT_STRTAB = 5 /* Address of string table */ _DT_SYMTAB = 6 /* Address of symbol table */ _DT_GNU_HASH = 0x6ffffef5 /* GNU-style dynamic symbol hash table */ _DT_VERSYM = 0x6ffffff0 _DT_VERDEF = 0x6ffffffc _VER_FLG_BASE = 0x1 /* Version definition of file itself */ _SHN_UNDEF = 0 /* Undefined section */ _SHT_DYNSYM = 11 /* Dynamic linker symbol table */ _STT_FUNC = 2 /* Symbol is a code object */ _STT_NOTYPE = 0 /* Symbol type is not specified */ _STB_GLOBAL = 1 /* Global symbol */ _STB_WEAK = 2 /* Weak symbol */ _EI_NIDENT = 16 // Maximum indices for the array types used when traversing the vDSO ELF structures. 
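The leading-byte and continuation masks above are all a decoder needs for the fixed-length cases. A minimal sketch that hand-decodes a two-byte sequence using the t2-style masks (assuming valid input; no range or surrogate checks):

```go
package main

import "fmt"

// decode2 hand-decodes a two-byte UTF-8 sequence using the payload masks
// from the constants above: mask2 (0001 1111) extracts the 5 payload bits
// of a 2-byte leader, maskx (0011 1111) the 6 bits of a continuation byte.
func decode2(b0, b1 byte) rune {
	const (
		maskx = 0x3F // 0011 1111
		mask2 = 0x1F // 0001 1111
	)
	return rune(b0&mask2)<<6 | rune(b1&maskx)
}

func main() {
	// "é" (U+00E9) is encoded as 0xC3 0xA9.
	fmt.Printf("%c\n", decode2(0xC3, 0xA9))
}
```

The real decoder additionally rejects values below rune2Max's lower bound and continuation bytes outside [locb, hicb].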
// Computed from architecture-specific max provided by vdso_linux_*.go vdsoSymTabSize = vdsoArrayMax / unsafe.Sizeof(elfSym{}) vdsoDynSize = vdsoArrayMax / unsafe.Sizeof(elfDyn{}) vdsoSymStringsSize = vdsoArrayMax // byte vdsoVerSymSize = vdsoArrayMax / 2 // uint16 vdsoHashSize = vdsoArrayMax / 4 // uint32 // vdsoBloomSizeScale is a scaling factor for gnuhash tables which are uint32 indexed, // but contain uintptrs vdsoBloomSizeScale = unsafe.Sizeof(uintptr(0)) / 4 // uint32 ) GOARCH is the running program's architecture target: one of 386, amd64, arm, s390x, and so on. const GOARCH string = sys.GOARCH GOOS is the running program's operating system target: one of darwin, freebsd, linux, and so on. To view possible combinations of GOOS and GOARCH, run "go tool dist list". const GOOS string = sys.GOOS const ( // Number of goroutine ids to grab from sched.goidgen to local per-P cache at once. // 16 seems to provide enough amortization, but other than that it's mostly arbitrary number. _GoidCacheBatch = 16 ) The maximum number of frames we print for a traceback const _TracebackMaxFrames = 100 buffer of pending write data const ( bufSize = 4096 ) const cgoCheckPointerFail = "cgo argument has Go pointer to Go pointer" const cgoResultFail = "cgo result has Go pointer" const cgoWriteBarrierFail = "Go pointer stored into non-Go memory" debugCachedWork enables extra checks for debugging premature mark termination. For debugging issue #27993. const debugCachedWork = false debugLogBytes is the size of each per-M ring buffer. This is allocated off-heap to avoid blowing up the M and hence the GC'd heap size. const debugLogBytes = 16 << 10 debugLogStringLimit is the maximum number of bytes in a string. Above this, the string will be truncated with "..(n more bytes).." const debugLogStringLimit = debugLogBytes / 8 const debugPcln = false const debugSelect = false defaultHeapMinimum is the value of heapminimum for GOGC==100.
const defaultHeapMinimum = 4 << 20 const dlogEnabled = false const fastlogNumBits = 5 forcePreemptNS is the time slice given to a G before it is preempted. const forcePreemptNS = 10 * 1000 * 1000 // 10ms freezeStopWait is a large value that freezetheworld sets sched.stopwait to in order to request that all Gs permanently stop. const freezeStopWait = 0x7fffffff gcAssistTimeSlack is the nanoseconds of mutator assist time that can accumulate on a P before updating gcController.assistTime. const gcAssistTimeSlack = 5000 const gcBitsChunkBytes = uintptr(64 << 10) const gcBitsHeaderBytes = unsafe.Sizeof(gcBitsHeader{}) gcCreditSlack is the amount of scan work credit that can accumulate locally before updating gcController.scanWork. const gcCreditSlack = 2000 gcGoalUtilization is the goal CPU utilization for marking as a fraction of GOMAXPROCS. const gcGoalUtilization = 0.30 gcOverAssistWork determines how many extra units of scan work a GC assist does when an assist happens. This amortizes the cost of an assist by pre-paying for this many bytes of future allocations. const gcOverAssistWork = 64 << 10 const hashRandomBytes = sys.PtrSize / 4 * 64 const itabInitSize = 512 const mProfCycleWrap = uint32(len(memRecord{}.future)) * (2 << 24) const maxCPUProfStack = 64 const maxZero = 1024 // must match value in cmd/compile/internal/gc/walk.go minPhysPageSize is a lower-bound on the physical page size. The true physical page size may be larger than this. In contrast, sys.PhysPageSize is an upper-bound on the physical page size. const minPhysPageSize = 4096 const minfunc = 16 // minimum function size const msanenabled = false osRelaxMinNS is the number of nanoseconds of idleness to tolerate without performing an osRelax. Since osRelax may reduce the precision of timers, this should be enough larger than the relaxed timer precision to keep the timer error acceptable. const osRelaxMinNS = 0 const pcbucketsize = 256 * minfunc // size of bucket in the pc->func lookup table persistentChunkSize is the number of bytes we allocate when we grow a persistentAlloc.
const persistentChunkSize = 256 << 10 const pollBlockSize = 4 * 1024 const raceenabled = false To shake out latent assumptions about scheduling order, we introduce some randomness into scheduling decisions when running with the race detector. The need for this was made obvious by changing the (deterministic) scheduling order in Go 1.5 and breaking many poorly-written tests. With the randomness here, as long as the tests pass consistently with -race, they shouldn't have latent scheduling assumptions. const randomizeScheduler = raceenabled const rwmutexMaxReaders = 1 << 30 Prime to not correlate with any user patterns. const semTabSize = 251 const sizeofSkipFunction = 256 const stackTraceDebug = false testSmallBuf forces a small write barrier buffer to stress write barrier flushing. const testSmallBuf = false timersLen is the length of timers array. Ideally, this would be set to GOMAXPROCS, but that would require dynamic reallocation. The current value is a compromise between memory usage and performance that should cover the majority of GOMAXPROCS values used in the wild. const timersLen = 64 The constant is known to the compiler. There is no fundamental theory behind this number. const tmpStringBufSize = 32 treapFilterAll represents the filter which allows all spans. const treapFilterAll = ^treapIterFilter(0) const usesLR = sys.MinFrameSize > 0 const ( // vdsoArrayMax is the byte-size of a maximally sized array on this architecture. // See cmd/compile/internal/amd64/galign.go arch.MAXWIDTH initialization. vdsoArrayMax = 1<<50 - 1 ) var ( _cgo_init unsafe.Pointer _cgo_thread_start unsafe.Pointer _cgo_sys_thread_create unsafe.Pointer _cgo_notify_runtime_init_done unsafe.Pointer _cgo_callers unsafe.Pointer _cgo_set_context_function unsafe.Pointer _cgo_yield unsafe.Pointer ) var ( // Set in runtime.cpuinit. // TODO: deprecate these; use internal/cpu directly.
x86HasPOPCNT bool x86HasSSE41 bool arm64HasATOMICS bool ) var ( itabLock mutex // lock for accessing itab table itabTable = &itabTableInit // pointer to current table itabTableInit = itabTableType{size: itabInitSize} // starter table ) var ( uint16Eface interface{} = uint16InterfacePtr(0) uint32Eface interface{} = uint32InterfacePtr(0) uint64Eface interface{} = uint64InterfacePtr(0) stringEface interface{} = stringInterfacePtr("") sliceEface interface{} = sliceInterfacePtr(nil) uint16Type *_type = (*eface)(unsafe.Pointer(&uint16Eface))._type uint32Type *_type = (*eface)(unsafe.Pointer(&uint32Eface))._type uint64Type *_type = (*eface)(unsafe.Pointer(&uint64Eface))._type stringType *_type = (*eface)(unsafe.Pointer(&stringEface))._type sliceType *_type = (*eface)(unsafe.Pointer(&sliceEface))._type ) physHugePageSize is the size in bytes of the OS's default physical huge page size whose allocation is opaque to the application. It is assumed and verified to be a power of two. If set, this must be set by the OS init code (typically in osinit) before mallocinit. However, setting it at all is optional, and leaving the default value is always safe (though potentially less efficient). Since physHugePageSize is always assumed to be a power of two, physHugePageShift is defined as physHugePageSize == 1 << physHugePageShift. The purpose of physHugePageShift is to avoid doing divisions in performance critical functions. var ( physHugePageSize uintptr physHugePageShift uint ) var ( fingCreate uint32 fingRunning bool ) var ( mbuckets *bucket // memory profile buckets bbuckets *bucket // blocking profile buckets xbuckets *bucket // mutex profile buckets buckhash *[179999]*bucket bucketmem uintptr mProf struct { // cycle is the global heap profile cycle. This wraps // at mProfCycleWrap. cycle uint32 // flushed indicates that future[cycle] in all buckets // has been flushed to the active profile. 
flushed bool } ) var ( netpollInited uint32 pollcache pollCache netpollWaiters uint32 ) var ( // printBacklog is a circular buffer of messages written with the builtin // print* functions, for use in postmortem analysis of core dumps. printBacklog [512]byte printBacklogIndex int ) var ( m0 m g0 g raceprocctx0 uintptr ) var ( argc int32 argv **byte ) var ( allglen uintptr allm *m allp []*p // len(allp) == gomaxprocs; may change at safe points, otherwise immutable allpLock mutex // Protects P-less reads of allp and all writes gomaxprocs int32 ncpu int32 forcegc forcegcstate sched schedt newprocs int32 // Information about what cpu features are available. // Packages outside the runtime should not use these // as they are not an external api. // Set on startup in asm_{386,amd64,amd64p32}.s processorVersionInfo uint32 isIntel bool lfenceBeforeRdtsc bool goarm uint8 // set by cmd/link on arm systems framepointer_enabled bool // set by cmd/link ) Set by the linker so the runtime can determine the buildmode. var ( islibrary bool // -buildmode=c-shared isarchive bool // -buildmode=c-archive ) var ( chansendpc = funcPC(chansend) chanrecvpc = funcPC(chanrecv) ) channels for synchronizing signal mask updates with the signal mask thread var ( disableSigChan chan uint32 enableSigChan chan uint32 maskUpdatedChan chan struct{} ) initialize with vsyscall fallbacks var ( vdsoGettimeofdaySym uintptr = 0xffffffffff600000 vdsoClockgettimeSym uintptr = 0 ) Make the compiler check that heapBits.arena is large enough to hold the maximum arena frame number. var _ = heapBits{arena: (1<<heapAddrBits)/heapArenaBytes - 1} _cgo_mmap is filled in by runtime/cgo when it is linked into the program, so it is only non-nil when using cgo. go:linkname _cgo_mmap _cgo_mmap var _cgo_mmap unsafe.Pointer _cgo_munmap is filled in by runtime/cgo when it is linked into the program, so it is only non-nil when using cgo. 
go:linkname _cgo_munmap _cgo_munmap var _cgo_munmap unsafe.Pointer var _cgo_setenv unsafe.Pointer // pointer to C function _cgo_sigaction is filled in by runtime/cgo when it is linked into the program, so it is only non-nil when using cgo. go:linkname _cgo_sigaction _cgo_sigaction var _cgo_sigaction unsafe.Pointer var _cgo_unsetenv unsafe.Pointer // pointer to C function var addrspace_vec [1]byte var adviseUnused = uint32(_MADV_FREE) used in asm_{386,amd64,arm64}.s to seed the hash function var aeskeysched [hashRandomBytes]byte var algarray = [alg_max]typeAlg{ alg_NOEQ: {nil, nil}, alg_MEM0: {memhash0, memequal0}, alg_MEM8: {memhash8, memequal8}, alg_MEM16: {memhash16, memequal16}, alg_MEM32: {memhash32, memequal32}, alg_MEM64: {memhash64, memequal64}, alg_MEM128: {memhash128, memequal128}, alg_STRING: {strhash, strequal}, alg_INTER: {interhash, interequal}, alg_NILINTER: {nilinterhash, nilinterequal}, alg_FLOAT32: {f32hash, f32equal}, alg_FLOAT64: {f64hash, f64equal}, alg_CPLX64: {c64hash, c64equal}, alg_CPLX128: {c128hash, c128equal}, } var argslice []string var badmorestackg0Msg = "fatal: morestack on g0\n" var badmorestackgsignalMsg = "fatal: morestack on gsignal\n" var badsystemstackMsg = "fatal: systemstack called from unexpected goroutine" var blockprofilerate uint64 // in CPU ticks boundsErrorFmts provide error text for various out-of-bounds panics. Note: if you change these strings, you should adjust the size of the buffer in boundsError.Error below as well. 
var boundsErrorFmts = [...]string{ boundsIndex: "index out of range [%x] with length %y", boundsSliceAlen: "slice bounds out of range [:%x] with length %y", boundsSliceAcap: "slice bounds out of range [:%x] with capacity %y", boundsSliceB: "slice bounds out of range [%x:%y]", boundsSlice3Alen: "slice bounds out of range [::%x] with length %y", boundsSlice3Acap: "slice bounds out of range [::%x] with capacity %y", boundsSlice3B: "slice bounds out of range [:%x:%y]", boundsSlice3C: "slice bounds out of range [%x:%y:]", } boundsNegErrorFmts are overriding formats if x is negative. In this case there's no need to report y. var boundsNegErrorFmts = [...]string{ boundsIndex: "index out of range [%x]", boundsSliceAlen: "slice bounds out of range [:%x]", boundsSliceAcap: "slice bounds out of range [:%x]", boundsSliceB: "slice bounds out of range [%x:]", boundsSlice3Alen: "slice bounds out of range [::%x]", boundsSlice3Acap: "slice bounds out of range [::%x]", boundsSlice3B: "slice bounds out of range [:%x:]", boundsSlice3C: "slice bounds out of range [%x::]", } var buf [bufSize]byte var buildVersion = sys.TheVersion cgoAlwaysFalse is a boolean value that is always false. The cgo-generated code says if cgoAlwaysFalse { cgoUse(p) }. The compiler cannot see that cgoAlwaysFalse is always false, so it emits the test and keeps the call, giving the desired escape analysis result. The test is cheaper than the call. var cgoAlwaysFalse bool var cgoContext unsafe.Pointer cgoHasExtraM is set on startup when an extra M is created for cgo. The extra M must be created before any C/C++ code calls cgocallback. var cgoHasExtraM bool var cgoSymbolizer unsafe.Pointer When running with cgo, we call _cgo_thread_start to start threads for us so that we can play nicely with foreign code. 
var cgoThreadStart unsafe.Pointer var cgoTraceback unsafe.Pointer var cgo_yield = &_cgo_yield var class_to_allocnpages = [_NumSizeClasses]uint8{0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 3, 2, 3, 1, 3, 2, 3, 4, 5, 6, 1, 7, 6, 5, 4, 3, 5, 7, 2, 9, 7, 5, 8, 3, 10, 7, 4} var class_to_divmagic = [_NumSizeClasses]divMagic{{0, 0, 0, 0}, {3, 0, 1, 65528}, {4, 0, 1, 65520}, {5, 0, 1, 65504}, {4, 11, 683, 0}, {6, 0, 1, 65472}, {4, 10, 205, 0}, {5, 9, 171, 0}, {4, 11, 293, 0}, {7, 0, 1, 65408}, {4, 13, 911, 0}, {5, 10, 205, 0}, {4, 12, 373, 0}, {6, 9, 171, 0}, {4, 13, 631, 0}, {5, 11, 293, 0}, {4, 13, 547, 0}, {8, 0, 1, 65280}, {5, 9, 57, 0}, {6, 9, 103, 0}, {5, 12, 373, 0}, {7, 7, 43, 0}, {5, 10, 79, 0}, {6, 10, 147, 0}, {5, 11, 137, 0}, {9, 0, 1, 65024}, {6, 9, 57, 0}, {7, 9, 103, 0}, {6, 11, 187, 0}, {8, 7, 43, 0}, {7, 8, 37, 0}, {10, 0, 1, 64512}, {7, 9, 57, 0}, {8, 6, 13, 0}, {7, 11, 187, 0}, {9, 5, 11, 0}, {8, 8, 37, 0}, {11, 0, 1, 63488}, {8, 9, 57, 0}, {7, 10, 49, 0}, {10, 5, 11, 0}, {7, 10, 41, 0}, {7, 9, 19, 0}, {12, 0, 1, 61440}, {8, 9, 27, 0}, {8, 10, 49, 0}, {11, 5, 11, 0}, {7, 13, 161, 0}, {7, 13, 155, 0}, {8, 9, 19, 0}, {13, 0, 1, 57344}, {8, 12, 111, 0}, {9, 9, 27, 0}, {11, 6, 13, 0}, {7, 14, 193, 0}, {12, 3, 3, 0}, {8, 13, 155, 0}, {11, 8, 37, 0}, {14, 0, 1, 49152}, {11, 8, 29, 0}, {7, 13, 55, 0}, {12, 5, 7, 0}, {8, 14, 193, 0}, {13, 3, 3, 0}, {7, 14, 77, 0}, {12, 7, 19, 0}, {15, 0, 1, 32768}} var class_to_size = [_NumSizeClasses]uint16{0, 8, 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224, 240, 256, 288, 320, 352, 384, 416, 448, 480, 512, 576, 640, 704, 768, 896, 1024, 1152, 1280, 1408, 1536, 1792, 2048, 2304, 2688, 3072, 3200, 3456, 4096, 4864, 5376, 6144, 6528, 6784, 6912, 8192, 9472, 9728, 10240, 10880, 12288, 13568, 14336, 16384, 18432, 19072, 20480, 21760, 24576, 27264, 28672, 32768} crashing is the number of m's we have waited for when implementing 
GOTRACEBACK=crash when a signal is received. var crashing int32 var dbgvars = []dbgVar{ {"allocfreetrace", &debug.allocfreetrace}, {"clobberfree", &debug.clobberfree}, {"cgocheck", &debug.cgocheck}, {"efence", &debug.efence}, {"gccheckmark", &debug.gccheckmark}, {"gcpacertrace", &debug.gcpacertrace}, {"gcshrinkstackoff", &debug.gcshrinkstackoff}, {"gcstoptheworld", &debug.gcstoptheworld}, {"gctrace", &debug.gctrace}, {"invalidptr", &debug.invalidptr}, {"madvdontneed", &debug.madvdontneed}, {"sbrk", &debug.sbrk}, {"scavenge", &debug.scavenge}, {"scheddetail", &debug.scheddetail}, {"schedtrace", &debug.schedtrace}, {"tracebackancestors", &debug.tracebackancestors}, } Holds variables parsed from GODEBUG env var, except for "memprofilerate" since there is an existing int var for that value, which may already have an initial value. var debug struct { allocfreetrace int32 cgocheck int32 clobberfree int32 efence int32 gccheckmark int32 gcpacertrace int32 gcshrinkstackoff int32 gcstoptheworld int32 gctrace int32 invalidptr int32 madvdontneed int32 // for Linux; issue 28466 sbrk int32 scavenge int32 scheddetail int32 schedtrace int32 tracebackancestors int32 } var debugPtrmask struct { lock mutex data *byte } var didothers bool var divideError = error(errorString("integer divide by zero")) var dumpfd uintptr // fd to write the dump to. 
var dumphdr = []byte("go1.7 heap dump\n") var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n") var envs []string var ( epfd int32 = -1 // epoll descriptor ) var extraMCount uint32 // Protected by lockextra var extraMWaiters uint32 var extram uintptr var failallocatestack = []byte("runtime: failed to allocate stack for the new OS thread\n") var failthreadcreate = []byte("runtime: failed to create new OS thread\n") nacl fake time support - time in nanoseconds since 1970 var faketime int64 var fastlog2Table = [1<<fastlogNumBits + 1]float64{ 0, 0.0443941193584535, 0.08746284125033943, 0.12928301694496647, 0.16992500144231248, 0.2094533656289499, 0.24792751344358555, 0.28540221886224837, 0.3219280948873623, 0.3575520046180837, 0.39231742277876036, 0.4262647547020979, 0.4594316186372973, 0.4918530963296748, 0.5235619560570128, 0.5545888516776374, 0.5849625007211563, 0.6147098441152082, 0.6438561897747247, 0.6724253419714956, 0.7004397181410922, 0.7279204545631992, 0.7548875021634686, 0.7813597135246596, 0.8073549220576042, 0.8328900141647417, 0.8579809951275721, 0.8826430493618412, 0.9068905956085185, 0.9307373375628862, 0.9541963103868752, 0.9772799234999164, 1, } var finalizer1 = [...]byte{ 1<<0 | 1<<1 | 0<<2 | 1<<3 | 1<<4 | 1<<5 | 1<<6 | 0<<7, 1<<0 | 1<<1 | 1<<2 | 1<<3 | 0<<4 | 1<<5 | 1<<6 | 1<<7, 1<<0 | 0<<1 | 1<<2 | 1<<3 | 1<<4 | 1<<5 | 0<<6 | 1<<7, 1<<0 | 1<<1 | 1<<2 | 0<<3 | 1<<4 | 1<<5 | 1<<6 | 1<<7, 0<<0 | 1<<1 | 1<<2 | 1<<3 | 1<<4 | 0<<5 | 1<<6 | 1<<7, } var fingwait bool var fingwake bool var finptrmask [_FinBlockSize / sys.PtrSize / 8]byte var floatError = error(errorString("floating point error")) forcegcperiod is the maximum time in nanoseconds between garbage collections. If we go this long without a garbage collection, one is forced to run. This is a variable for testing purposes. It normally doesn't change. var forcegcperiod int64 = 2 * 60 * 1e9 Bit vector of free marks. 
Needs to be as big as the largest number of objects per span. var freemark [_PageSize / 8]bool freezing is set to non-zero if the runtime is trying to freeze the world. var freezing uint32 Stores the signal handlers registered before Go installed its own. These signal handlers will be invoked in cases where Go doesn't want to handle a particular signal (e.g., signal occurred on a non-Go thread). See sigfwdgo for more information on when the signals are forwarded. This is read by the signal handler; accesses should use atomic.Loaduintptr and atomic.Storeuintptr. var fwdSig [_NSIG]uintptr var gStatusStrings = [...]string{ _Gidle: "idle", _Grunnable: "runnable", _Grunning: "running", _Gsyscall: "syscall", _Gwaiting: "waiting", _Gdead: "dead", _Gcopystack: "copystack", } var gcBitsArenas struct { lock mutex free *gcBitsArena next *gcBitsArena // Read atomically. Write atomically under lock. current *gcBitsArena previous *gcBitsArena } gcBlackenEnabled is 1 if mutator assists and background mark workers are allowed to blacken objects. This must only be set when gcphase == _GCmark. var gcBlackenEnabled uint gcMarkWorkerModeStrings are the strings labels of gcMarkWorkerModes to use in execution traces. var gcMarkWorkerModeStrings = [...]string{ "GC (dedicated)", "GC (fractional)", "GC (idle)", } gcWorkPauseGen is for debugging the mark completion algorithm. gcWork put operations spin while gcWork.pauseGen == gcWorkPauseGen. Only used if debugCachedWork is true. var gcWorkPauseGen uint32 = 1 Initialized from $GOGC. GOGC=off means no GC. var gcpercent int32 Garbage collector phase. Indicates to write barrier and synchronization task to perform. var gcphase uint32 var globalAlloc struct { mutex persistentAlloc } handlingSig is indexed by signal number and is non-zero if we are currently handling the signal. Or, to put it another way, whether the signal handler is currently set to the Go signal handler or not. 
This is uint32 rather than bool so that we can use atomic instructions. var handlingSig [_NSIG]uint32 exported value for testing var hashLoad = float32(loadFactorNum) / float32(loadFactorDen) used in hash{32,64}.go to seed the hash function var hashkey [4]uintptr inForkedChild is true while manipulating signals in the child process. This is used to avoid calling libc functions in case we are using vfork. var inForkedChild bool var inf = float64frombits(0x7FF0000000000000) iscgo is set to true by the runtime/cgo package var iscgo bool var labelSync uintptr mSpanStateNames are the names of the span states, indexed by mSpanState. var mSpanStateNames = []string{ "mSpanDead", "mSpanInUse", "mSpanManual", "mSpanFree", } mainStarted indicates that the main M has started. var mainStarted bool main_init_done is a signal used by cgocallbackg that initialization has been completed. It is made before _cgo_notify_runtime_init_done, so all cgo calls can rely on it existing. When main_init is complete, it is closed, meaning cgocallbackg can reliably receive from it. var main_init_done chan bool var maxstacksize uintptr = 1 << 20 // enough until runtime.main sets it for real var memoryError = error(errorString("invalid memory address or nil pointer dereference")) set using cmd/go/internal/modload.ModInfoProg var modinfo string var modulesSlice *[]*moduledata // see activeModules var mutexprofilerate uint64 // fraction sampled var nbuf uintptr newmHandoff contains a list of m structures that need new OS threads. This is used by newm in situations where newm itself can't safely start an OS thread. var newmHandoff struct { lock mutex // newm points to a list of M structures that need new OS // threads. The list is linked through m.schedlink. newm muintptr // waiting indicates that wake needs to be notified when an m // is put on the list. waiting bool wake note // haveTemplateThread indicates that the templateThread has // been started. This is not protected by lock. 
Use cas to set to 1. haveTemplateThread uint32 } oneBitCount is indexed by byte and produces the number of 1 bits in that byte. For example 128 has 1 bit set and oneBitCount[128] will hold 1. var oneBitCount = [256]uint8{...} ptrmask for an allocation containing a single pointer. var oneptrmask = [...]uint8{1} var overflowError = error(errorString("integer overflow")) var overflowTag [1]unsafe.Pointer // always nil panicking is non-zero when crashing the program for an unrecovered panic. panicking is incremented and decremented atomically. var panicking uint32 physPageSize is the size in bytes of the OS's physical pages. Mapping and unmapping operations must be done at multiples of physPageSize. This must be set by the OS init code (typically in osinit) before mallocinit. var physPageSize uintptr pinnedTypemaps are the map[typeOff]*_type from the moduledata objects. These typemap objects are allocated at run time on the heap, but the only direct reference to them is in the moduledata, created by the linker and marked SNOPTRDATA so it is ignored by the GC. To make sure the map isn't collected, we keep a second reference here. var pinnedTypemaps []map[typeOff]*_type var poolcleanup func() var procAuxv = []byte("/proc/self/auxv\x00") var prof struct { signalLock uint32 hz int32 } var ptrnames = []string{ 0: "scalar", 1: "ptr", } var racecgosync uint64 // represents possible synchronization in C code reflectOffs holds type offsets defined at run time by the reflect package. When a type is defined at run time, its *rtype data lives on the heap. There are a wide range of possible addresses the heap may use, that may not be representable as a 32-bit offset. Moreover the GC may one day start moving heap memory, in which case there is no stable offset that can be defined. To provide stable offsets, we add pin *rtype objects in a global map and treat the offset as an identifier. We use negative offsets that do not overlap with any compile-time module offsets.
Entries are created by reflect.addReflectOff. var reflectOffs struct { lock mutex next int32 m map[int32]unsafe.Pointer minv map[unsafe.Pointer]int32 } runningPanicDefers is non-zero while running deferred functions for panic. runningPanicDefers is incremented and decremented atomically. This is used to try hard to get a panic stack trace out when exiting. var runningPanicDefers uint32 runtimeInitTime is the nanotime() at which the runtime started. var runtimeInitTime int64 Sleep/wait state of the background scavenger. var scavenge struct { lock mutex g *g parked bool timer *timer // Generation counter. // // It represents the last generation count (as defined by // mheap_.scavengeGen) checked by the scavenger and is updated // each time the scavenger checks whether it is on-pace. // // Skew between this field and mheap_.scavengeGen is used to // determine whether a new update is available. // // Protected by mheap_.lock. gen uint64 } var semtable [semTabSize]struct { root semaRoot pad [cpu.CacheLinePadSize - unsafe.Sizeof(semaRoot{})]byte } var shiftError = error(errorString("negative shift amount")) } var signalsOK bool var sigprofCallersUse uint32 var sigset_all = sigset{^uint32(0), ^uint32(0)} var sigtable = [...]sigTabT{ {0, "SIGNONE: no trap"}, {_SigNotify + _SigKill, "SIGHUP: terminal line hangup"}, {_SigNotify + _SigKill, "SIGINT: interrupt"}, {_SigNotify + _SigThrow, "SIGQUIT: quit"}, {_SigThrow + _SigUnblock, "SIGILL: illegal instruction"}, {_SigThrow + _SigUnblock, "SIGTRAP: trace trap"}, {_SigNotify + _SigThrow, "SIGABRT: abort"}, {_SigPanic + _SigUnblock, "SIGBUS: bus error"}, {_SigPanic + _SigUnblock, "SIGFPE: floating-point exception"}, {0, "SIGKILL: kill"}, {_SigNotify, "SIGUSR1: user-defined signal 1"}, {_SigPanic + _SigUnblock, "SIGSEGV: segmentation violation"}, {_SigNotify, "SIGUSR2: user-defined signal 2"}, {_SigNotify, "SIGPIPE: write to broken pipe"}, {_SigNotify, "SIGALRM: alarm clock"}, {_SigNotify + _SigKill, "SIGTERM: termination"}, 
{_SigThrow + _SigUnblock, "SIGSTKFLT: stack fault"}, {_SigNotify + _SigUnblock + _SigIgn, "SIGCHLD: child status has changed"}, {_SigNotify + _SigDefault + _SigIgn, "SIGCONT: continue"}, {0, "SIGSTOP: stop, unblockable"}, {_SigNotify + _SigDefault + _SigIgn, "SIGTSTP: keyboard stop"}, {_SigNotify + _SigDefault + _SigIgn, "SIGTTIN: background read from tty"}, {_SigNotify + _SigDefault + _SigIgn, "SIGTTOU: background write to tty"}, {_SigNotify + _SigIgn, "SIGURG: urgent condition on socket"}, {_SigNotify, "SIGXCPU: cpu limit exceeded"}, {_SigNotify, "SIGXFSZ: file size limit exceeded"}, {_SigNotify, "SIGVTALRM: virtual alarm clock"}, {_SigNotify + _SigUnblock, "SIGPROF: profiling alarm clock"}, {_SigNotify + _SigIgn, "SIGWINCH: window size change"}, {_SigNotify, "SIGIO: i/o now possible"}, {_SigNotify, "SIGPWR: power failure restart"}, {_SigThrow, "SIGSYS: bad system call"}, {_SigSetStack + _SigUnblock, "signal 32"}, {_SigSetStack + _SigUnblock, "signal 33"}, {_SigNotify, "signal 34"}, {_SigNotify, "signal 35"}, {_SigNotify, "signal 36"}, {_SigNotify, "signal 37"}, {_SigNotify, "signal 38"}, {_SigNotify, "signal 39"}, {_SigNotify, "signal 40"}, {_SigNotify, "signal 41"}, {_SigNotify, "signal 42"}, {_SigNotify, "signal 43"}, {_SigNotify, "signal 44"}, {_SigNotify, "signal 45"}, {_SigNotify, "signal 46"}, {_SigNotify, "signal 47"}, {_SigNotify, "signal 48"}, {_SigNotify, "signal 49"}, {_SigNotify, "signal 50"}, {_SigNotify, "signal 51"}, {_SigNotify, "signal 52"}, {_SigNotify, "signal 53"}, {_SigNotify, "signal 54"}, {_SigNotify, "signal 55"}, {_SigNotify, "signal 56"}, {_SigNotify, "signal 57"}, {_SigNotify, "signal 58"}, {_SigNotify, "signal 59"}, {_SigNotify, "signal 60"}, {_SigNotify, "signal 61"}, {_SigNotify, "signal 62"}, {_SigNotify, "signal 63"}, {_SigNotify, "signal 64"}, } var size_to_class128 = [(_MaxSmallSize-smallSizeMax)/largeSizeDiv + 1]uint8{31, 32, 33, 34, 35, 36, 36, 37, 37, 38, 38, 39, 39, 39, 40, 40, 40, 41, 42, 42, 43, 43, 43, 43, 43, 44, 44, 44, 
44, 44, 44, 45, 45, 45, 45, 46, 46, 46, 46, 46, 46, 47, 47, 47, 48, 48, 49, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 52, 52, 53, 53, 53, 53, 54, 54, 54, 54, 54, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 57, 57, 57, 57, 57, 57, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 60, 60, 60, 60, 60, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66} var size_to_class8 = [smallSizeMax/smallSizeDiv + 1]uint8{0, 1, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 18, 18, 19, 19, 19, 19, 20, 20, 20, 20, 21, 21, 21, 21, 22, 22, 22, 22, 23, 23, 23, 23, 24, 24, 24, 24, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 29, 29, 29, 29, 29, 29, 29, 29, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31} var skipPC uintptr Global pool of large stack spans. var stackLarge struct { lock mutex free [heapAddrBits - pageShift]mSpanList // free lists by log_2(s.npages) } Global pool of spans that have free stacks. Stacks are assigned an order according to size. order = log_2(size/FixedStack) There is a free list for each order. TODO: one lock per order? var stackpool [_NumStackOrders]mSpanList var starttime int64 startup_random_data holds random bytes initialized at startup.
These come from the ELF AT_RANDOM auxiliary vector (vdso_linux_amd64.go or os_linux_386.go). var startupRandomData []byte staticbytes is used to avoid convT2E for byte-sized values. var staticbytes = [...]byte{...} var sysTHPSizePath = []byte("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size\x00") testSigtrap is used by the runtime tests. If non-nil, it is called on SIGTRAP. If it returns true, the normal behavior on SIGTRAP is suppressed. var testSigtrap func(info *siginfo, ctxt *sigctxt, gp *g) bool TODO: These should be locals in testAtomic64, but we don't 8-byte align stack variables on 386. var test_z64, test_x64 uint64 throwOnGCWork causes any operations that add pointers to a gcWork buffer to throw. TODO(austin): This is a temporary debugging measure for issue #27993. To be removed before release. var throwOnGCWork bool var ticks struct { lock mutex pad uint32 // ensure 8-byte alignment of val on 386 val uint64 } timers contains "per-P" timer heaps. Timers are queued into timersBucket associated with the current P, so each P may work with its own timers independently of other P instances. Each timersBucket may be associated with multiple P if GOMAXPROCS > timersLen. var timers [timersLen]struct { timersBucket // The padding should eliminate false sharing // between timersBucket values. pad [cpu.CacheLinePadSize - unsafe.Sizeof(timersBucket{})%cpu.CacheLinePadSize]byte } var tmpbuf []byte trace is global tracing context.
var trace struct { lock mutex // protects the following members lockOwner *g // to avoid deadlocks during recursive lock locks enabled bool // when set runtime traces events shutdown bool // set when we are waiting for trace reader to finish after setting enabled to false headerWritten bool // whether ReadTrace has emitted trace header footerWritten bool // whether ReadTrace has emitted trace footer shutdownSema uint32 // used to wait for ReadTrace completion seqStart uint64 // sequence number when tracing was started ticksStart int64 // cputicks when tracing was started ticksEnd int64 // cputicks when tracing was stopped timeStart int64 // nanotime when tracing was started timeEnd int64 // nanotime when tracing was stopped seqGC uint64 // GC start/done sequencer reading traceBufPtr // buffer currently handed off to user empty traceBufPtr // stack of empty buffers fullHead traceBufPtr // queue of full buffers fullTail traceBufPtr reader guintptr // goroutine that called ReadTrace, or nil stackTab traceStackTable // maps stack traces to unique ids // Dictionary for traceEvString. // // TODO: central lock to access the map is not ideal. // option: pre-assign ids to all user annotation region names and tags // option: per-P cache // option: sync.Map like data structure stringsLock mutex strings map[string]uint64 stringSeq uint64 // markWorkerLabels maps gcMarkWorkerMode to string ID. markWorkerLabels [len(gcMarkWorkerModeStrings)]uint64 bufLock mutex // protects buf buf traceBufPtr // global trace buffer, used when running without a p } var traceback_cache uint32 = 2 << tracebackShift var traceback_env uint32 var typecache [typeCacheBuckets]typeCacheBucket var urandom_dev = []byte("/dev/urandom\x00") var useAVXmemmove bool var useAeshash bool If useCheckmark is true, marking of an object uses the checkmark bits (encoding above) instead of the standard mark bits. 
var useCheckmark = false var vdsoLinuxVersion = vdsoVersionKey{"LINUX_2.6", 0x3ae75f6} var vdsoSymbolKeys = []vdsoSymbolKey{ {"__vdso_gettimeofday", 0x315ca59, 0xb01bca00, &vdsoGettimeofdaySym}, {"__vdso_clock_gettime", 0xd35ec75, 0x6e43a318, &vdsoClockgettimeSym}, } var waitReasonStrings = [...]string{ waitReasonZero: "", waitReasonGCAssistMarking: "GC assist marking", waitReasonIOWait: "IO wait", waitReasonChanReceiveNilChan: "chan receive (nil chan)", waitReasonChanSendNilChan: "chan send (nil chan)", waitReasonDumpingHeap: "dumping heap", waitReasonGarbageCollection: "garbage collection", waitReasonGarbageCollectionScan: "garbage collection scan", waitReasonPanicWait: "panicwait", waitReasonSelect: "select", waitReasonSelectNoCases: "select (no cases)", waitReasonGCAssistWait: "GC assist wait", waitReasonGCSweepWait: "GC sweep wait", waitReasonGCScavengeWait: "GC scavenge wait", waitReasonChanReceive: "chan receive", waitReasonChanSend: "chan send", waitReasonFinalizerWait: "finalizer wait", waitReasonForceGGIdle: "force gc (idle)", waitReasonSemacquire: "semacquire", waitReasonSleep: "sleep", waitReasonSyncCondWait: "sync.Cond.Wait", waitReasonTimerGoroutineIdle: "timer goroutine (idle)", waitReasonTraceReaderBlocked: "trace reader (blocked)", waitReasonWaitForGCCycle: "wait for GC cycle", waitReasonGCWorkerIdle: "GC worker (idle)", } var work struct { full lfstack // lock-free list of full blocks workbuf empty lfstack // lock-free list of empty blocks workbuf pad0 cpu.CacheLinePad // // Number of roots of various root types. Set by gcMarkRootPrepare. nFlushCacheRoots int nDataRoots, nBSSRoots, nSpanRoots, nStackRoots int // to mark termination. markDoneSema uint32 bgMarkReady note // signal background mark worker has started bgMarkDone uint32 // cas to 1 when at a background mark completion point // q gQueue } // sweepWaiters is a list of blocked goroutines to wake when // we transition from mark termination to sweep. 
sweepWaiters struct { lock mutex list gList } Holding worldsema grants an M the right to try to stop the world and prevents gomaxprocs from changing concurrently. var worldsema uint32 = 1 var zeroVal [maxZero]byte base address for all 0-byte allocations var zerobase uintptr func Breakpoint() Breakpoint executes a breakpoint trap. func CPUProfile() []byte CPUProfile panics. It formerly provided raw access to chunks of a pprof-format profile generated by the runtime. The details of generating that format have changed, so this functionality has been removed. Deprecated: Use the runtime/pprof package, or the handlers in the net/http/pprof package, or the testing package's -test.cpuprofile flag instead. func Callers(skip int, pc []uintptr) int To translate these PCs into symbolic information such as function names and line numbers, use CallersFrames. CallersFrames accounts for inlined functions and adjusts the return program counters into call program counters. Iterating over the returned slice of PCs directly is discouraged, as is using FuncForPC on any of the returned PCs, since these cannot account for inlining or return program counter adjustment. go:noinline func GC() GC runs a garbage collection and blocks the caller until the garbage collection is complete. It may also block the entire program. func GOROOT() string GOROOT returns the root of the Go tree. It uses the GOROOT environment variable, if set at process start, or else the root used during the Go build. func Goexit() Goexit terminates the goroutine that calls it. No other goroutine is affected. Goexit runs all deferred calls before terminating the goroutine. Because Goexit is not a panic, any recover calls in those deferred functions will return nil. func Gosched() Gosched yields the processor, allowing other goroutines to run. It does not suspend the current goroutine, so execution resumes automatically. func KeepAlive(x interface{}) func LockOSThread() LockOSThread wires the calling goroutine to its current operating system thread. The calling goroutine will always execute in that thread, and no other goroutine will execute in it, until the calling goroutine has made as many calls to UnlockOSThread as to LockOSThread. If the calling goroutine exits without unlocking the thread, the thread will be terminated. All init functions are run on the startup thread. Calling LockOSThread from an init function will cause the main function to be invoked on that thread. A goroutine should call LockOSThread before calling OS services or non-Go library functions that depend on per-thread state. func NumCgoCall() int64 NumCgoCall returns the number of cgo calls made by the current process. func NumGoroutine() int NumGoroutine returns the number of goroutines that currently exist. func SetCgoTraceback(version int, traceback, context, symbolizer unsafe.Pointer) On all platforms, the traceback function is invoked when a call from Go to C to Go requests a stack trace. On linux/amd64, linux/ppc64le, and freebsd/amd64, the traceback function is also invoked when a signal is received by a thread that is executing a cgo call. The traceback function should not make assumptions about when it is called, as future versions of Go may make additional calls. func SetFinalizer(obj, finalizer interface{}) The finalizer is scheduled to run at some arbitrary time after the program can no longer reach the object to which obj points. func SetMutexProfileFraction(rate int) int To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.) func StopTrace() StopTrace stops tracing, if it was previously enabled. StopTrace only returns after all the reads for the trace have completed. func UnlockOSThread() UnlockOSThread undoes an earlier call to LockOSThread. If this drops the number of active LockOSThread calls on the calling goroutine to zero, it unwires the calling goroutine from its fixed operating system thread. If there are no active LockOSThread calls, this is a no-op. Before calling UnlockOSThread, the caller must ensure that the OS thread is suitable for running other goroutines.
If the caller made any permanent changes to the state of the thread that would affect other goroutines, it should not call this function and thus leave the goroutine locked to the OS thread until the goroutine (and hence the thread) exits. func Version() string Version returns the Go tree's version string. It is either the commit hash and date at the time of the build or, when possible, a release tag like "go1.3". func _ELF_ST_BIND(val byte) byte How to extract and insert information held in the st_info field. func _ELF_ST_TYPE(val byte) byte func _ExternalCode() func _GC() func _LostExternalCode() func _LostSIGPROFDuringAtomic64() func _System() func _VDSO() func _cgo_panic_internal(p *byte) func abort() abort crashes the runtime in situations where even throw might not work. In general it should do something a debugger will recognize (e.g., an INT3 on x86). A crash in abort is recognized by the signal handler, which will attempt to tear down the runtime immediately. func abs(x float64) float64 Abs returns the absolute value of x. Special cases are: Abs(±Inf) = +Inf Abs(NaN) = NaN func access(name *byte, mode int32) int32 Called from write_err_android.go only, but defined in sys_linux_*.s; declared here (instead of in write_err_android.go) for go vet on non-android builds. The return value is the raw syscall result, which may encode an error number. go:noescape func acquirep(_p_ *p) Associate p and the current m. This function is allowed to have write barriers even if the caller isn't because it immediately acquires _p_. go:yeswritebarrierrec func add(p unsafe.Pointer, x uintptr) unsafe.Pointer Should be a built-in for unsafe.Pointer? go:nosplit func add1(p *byte) *byte add1 returns the byte pointer p+1. go:nowritebarrier go:nosplit func addb(p *byte, n uintptr) *byte addb returns the byte pointer p+n. go:nowritebarrier go:nosplit func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool Adds a finalizer to the object p. 
Returns true if it succeeded. func addmoduledata() Called from linker-generated .initarray; declared for go vet; do NOT call from Go. func addspecial(p unsafe.Pointer, s *special) bool Adds the special record s to the list of special records for the object p. All fields of s should be filled in except for offset & next, which this routine will fill in. Returns true if the special was successfully added, false otherwise. (The add will fail only if a record with the same p and s->kind already exists.) func addtimer(t *timer) func adjustctxt(gp *g, adjinfo *adjustinfo) func adjustdefers(gp *g, adjinfo *adjustinfo) func adjustframe(frame *stkframe, arg unsafe.Pointer) bool Note: the argument/return area is adjusted by the callee. func adjustpanics(gp *g, adjinfo *adjustinfo) func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer) Adjustpointer checks whether *vpp is in the old stack described by adjinfo. If so, it rewrites *vpp to point into the new stack. func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo) bv describes the memory starting at address scanp. Adjust any pointers contained therein. func adjustsudogs(gp *g, adjinfo *adjustinfo) func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr) func aeshash(p unsafe.Pointer, h, s uintptr) uintptr in asm_*.s func aeshash32(p unsafe.Pointer, h uintptr) uintptr func aeshash64(p unsafe.Pointer, h uintptr) uintptr func aeshashstr(p unsafe.Pointer, h uintptr) uintptr func afterfork() func alginit() func allgadd(gp *g) func appendIntStr(b []byte, v int64, signed bool) []byte func archauxv(tag, val uintptr) func arenaBase(i arenaIdx) uintptr arenaBase returns the low address of the region covered by heap arena i. 
func args(c int32, v **byte) func argv_index(argv **byte, i int32) *byte nosplit for use in linux startup sysargs go:nosplit func asmcgocall(fn, arg unsafe.Pointer) int32 go:noescape func asminit() func assertE2I2(inter *interfacetype, e eface) (r iface, b bool) func assertI2I2(inter *interfacetype, i iface) (r iface, b bool) func atoi(s string) (int, bool) atoi parses an int from a string s. The bool result reports whether s is a number representable by a value of type int. func atoi32(s string) (int32, bool) atoi32 is like atoi but for integers that fit into an int32. func atomicstorep(ptr unsafe.Pointer, new unsafe.Pointer) atomicstorep performs *ptr = new atomically and invokes a write barrier. go:nosplit func atomicwb(ptr *unsafe.Pointer, new unsafe.Pointer) atomicwb performs a write barrier before an atomic pointer write. The caller should guard the call with "if writeBarrier.enabled". func badTimer() badTimer is called if the timer data structures have been corrupted, presumably due to racy use by the program. We panic here rather than panicking due to invalid slice access while holding locks. See issue #25686. func badcgocallback() called from assembly func badctxt() func badmcall(fn func(*g)) func badmcall2(fn func(*g)) func badmorestackg0() go:nosplit go:nowritebarrierrec func badmorestackgsignal() func badreflectcall() func badsignal(sig uintptr, c *sigctxt) This runs on a foreign stack, without an m or a g. No stack split. go:nosplit go:norace go:nowritebarrierrec func badsystemstack() func badunlockosthread() func beforeIdle() bool func beforefork() func bgscavenge(c chan int) Background scavenger. The background scavenger maintains the RSS of the application below the line described by the proportional scavenging statistics in the mheap struct.
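The atoi contract above (parse a decimal int, with the bool reporting representability) can be sketched at user level like this. This is a sketch of the documented behavior, not the runtime's code; the overflow check is deliberately omitted.

```go
package main

import "fmt"

// atoi sketches the documented contract: parse a decimal int from s,
// reporting ok == false on empty input or stray characters.
// (The runtime version additionally rejects values that overflow int;
// that check is omitted here for brevity.)
func atoi(s string) (int, bool) {
	if s == "" {
		return 0, false
	}
	neg := false
	if s[0] == '-' {
		neg = true
		s = s[1:]
		if s == "" {
			return 0, false
		}
	}
	n := 0
	for i := 0; i < len(s); i++ {
		c := s[i]
		if c < '0' || c > '9' {
			return 0, false
		}
		n = n*10 + int(c-'0')
	}
	if neg {
		n = -n
	}
	return n, true
}

func main() {
	n, ok := atoi("-128")
	fmt.Println(n, ok) // prints "-128 true"
}
```

atoi32 would follow the same shape with an added range check against int32.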
func bgsweep(c chan int) func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int) Build a binary search tree with the n objects in the list x.obj[idx], x.obj[idx+1], ..., x.next.obj[0], ... Returns the root of that tree, and the buf+idx of the nth object after x.obj[idx]. (The first object that was not included in the binary search tree.) If n == 0, returns nil, x. func block() func blockableSig(sig uint32) bool blockableSig reports whether sig may be blocked by the signal mask. We never want to block the signals marked _SigUnblock; these are the synchronous signals that turn into a Go panic. In a Go program--not a c-archive/c-shared--we never want to block the signals marked _SigKill or _SigThrow, as otherwise it's possible for all running threads to block them and delay their delivery until we start a new thread. When linked into a C program we let the C code decide on the disposition of those signals. func blockevent(cycles int64, skip int) func blocksampled(cycles int64) bool func bool2int(x bool) int bool2int returns 0 if x is false or 1 if x is true. func breakpoint() func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool func bucketMask(b uint8) uintptr bucketMask returns 1<<b - 1, optimized for code generation. func bucketShift(b uint8) uintptr bucketShift returns 1<<b, optimized for code generation. func bulkBarrierBitmap(dst, src, size, maskOffset uintptr, bits *uint8) bulkBarrierBitmap executes write barriers for copying from [src, src+size) to [dst, dst+size) using a 1-bit pointer bitmap. src is assumed to start maskOffset bytes into the data covered by the bitmap in bits (which may not be a multiple of 8). This is used by bulkBarrierPreWrite for writes to data and BSS. func bulkBarrierPreWrite(dst, src, size uintptr) bulkBarrierPreWrite executes a write barrier for every pointer slot in the memory range [src, src+size), using pointer/scalar information from [dst, dst+size). 
This executes the write barriers necessary before a memmove. src, dst, and size must be pointer-aligned. The range [dst, dst+size) must lie within a single object. It does not perform the actual writes. As a special case, src == 0 indicates that this is being used for a memclr. bulkBarrierPreWrite will pass 0 for the src of each write barrier. Callers should call bulkBarrierPreWrite immediately before calling memmove(dst, src, size). This function is marked nosplit to avoid being preempted; the GC must not stop the goroutine between the memmove and the execution of the barriers. The caller is also responsible for cgo pointer checks if this may be writing Go pointers into non-Go memory. The pointer bitmap is not maintained for allocations containing no pointers at all; any caller of bulkBarrierPreWrite must first make sure the underlying allocation contains pointers, usually by checking typ.ptrdata. Callers must perform cgo checks if writeBarrier.cgo. func bulkBarrierPreWriteSrcOnly(dst, src, size uintptr) bulkBarrierPreWriteSrcOnly is like bulkBarrierPreWrite but does not execute write barriers for [dst, dst+size). In addition to the requirements of bulkBarrierPreWrite callers need to ensure [dst, dst+size) is zeroed. This is used for special cases where e.g. dst was just created and zeroed with malloc. go:nosplit func bytes(s string) (ret []byte) func bytesHash(b []byte, seed uintptr) uintptr func c128equal(p, q unsafe.Pointer) bool func c128hash(p unsafe.Pointer, h uintptr) uintptr func c64equal(p, q unsafe.Pointer) bool func c64hash(p unsafe.Pointer, h uintptr) uintptr func cachestats() cachestats flushes all mcache stats. The world must be stopped. 
go:nowritebarrier func call1024(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call1048576(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call1073741824(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call128(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call131072(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call134217728(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call16384(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call16777216(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call2048(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call2097152(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call256(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call262144(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call268435456(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call32(typ, fn, arg unsafe.Pointer, n, retoffset uint32) in asm_*.s not called directly; definitions here supply type information for traceback. 
func call32768(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call33554432(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call4096(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call4194304(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call512(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call524288(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call536870912(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call64(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call65536(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call67108864(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call8192(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func call8388608(typ, fn, arg unsafe.Pointer, n, retoffset uint32) func callCgoMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) uintptr callCgoMmap calls the mmap function in the runtime/cgo package using the GCC calling convention. It is implemented in assembly. func callCgoMunmap(addr unsafe.Pointer, n uintptr) callCgoMunmap calls the munmap function in the runtime/cgo package using the GCC calling convention. It is implemented in assembly. func callCgoSigaction(sig uintptr, new, old *sigactiont) int32 callCgoSigaction calls the sigaction function in the runtime/cgo package using the GCC calling convention. It is implemented in assembly. go:noescape func callCgoSymbolizer(arg *cgoSymbolizerArg) callCgoSymbolizer calls the cgoSymbolizer function. func callers(skip int, pcbuf []uintptr) int func canpanic(gp *g) bool canpanic returns false if a signal should throw instead of panicking. func cansemacquire(addr *uint32) bool func casfrom_Gscanstatus(gp *g, oldval, newval uint32) The Gscanstatuses are acting like locks and this releases them. If it proves to be a performance hit we should be able to make these simple atomic stores but for now we are going to throw if we see an inconsistent state. 
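The description above says the Gscan statuses act like locks: a compare-and-swap from the plain status to the scan-tagged status acquires the "lock", and failure means someone else changed the status first. A minimal user-level sketch with sync/atomic, using hypothetical status constants (the real runtime values differ):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Hypothetical status values standing in for the runtime's
// _Grunnable / _Gscan constants; the real values differ.
const (
	statusRunnable uint32 = 1
	statusScanBit  uint32 = 0x1000
)

// casToScan sketches castogscanstatus: atomically move the status
// from oldval to oldval with the scan bit set, returning false if
// the current value was not oldval (i.e. the CAS failed).
func casToScan(status *uint32, oldval uint32) bool {
	return atomic.CompareAndSwapUint32(status, oldval, oldval|statusScanBit)
}

func main() {
	s := statusRunnable
	fmt.Println(casToScan(&s, statusRunnable)) // acquires the "scan lock": true
	fmt.Println(casToScan(&s, statusRunnable)) // status already changed: false
}
```

Releasing the "lock" (casfrom_Gscanstatus) is the mirror image: CAS from the scan-tagged value back to the plain one.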
func casgcopystack(gp *g) uint32 casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable. Returns old status. Cannot call casgstatus directly, because we are racing with an async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus, it might have become Grunnable by the time we get to the cas. If we called casgstatus, it would loop waiting for the status to go back to Gwaiting, which it never will. go:nosplit func casgstatus(gp *g, oldval, newval uint32) If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus and casfrom_Gscanstatus instead. casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that put it in the Gscan state is finished. go:nosplit func castogscanstatus(gp *g, oldval, newval uint32) bool This will return false if the gp is not in the expected status and the cas fails. This acts like a lock acquire while the casfromgstatus acts like a lock release. func cfuncname(f funcInfo) *byte func cgoCheckArg(t *_type, p unsafe.Pointer, indir, top bool, msg string) cgoCheckArg is the real work of cgoCheckPointer. The argument p is either a pointer to the value (of type t), or the value itself, depending on indir. The top parameter is whether we are at the top level, where Go pointers are allowed. func cgoCheckBits(src unsafe.Pointer, gcbits *byte, off, size uintptr) cgoCheckBits checks the block of memory at src, for up to size bytes, and throws if it finds a Go pointer. The gcbits mark each pointer value. The src pointer is off bytes into the gcbits. go:nosplit go:nowritebarrier func cgoCheckMemmove(typ *_type, dst, src unsafe.Pointer, off, size uintptr) cgoCheckMemmove is called when moving a block of memory. dst and src point off bytes into the value to copy. size is the number of bytes to copy. It throws if the program is copying a block that contains a Go pointer into non-Go memory. 
go:nosplit go:nowritebarrier func cgoCheckPointer(ptr interface{}, args ...interface{}) cgoCheckPointer checks if the argument contains a Go pointer that points to a Go pointer, and panics if it does. func cgoCheckResult(val interface{}) cgoCheckResult is called to check the result parameter of an exported Go function. It panics if the result is or contains a Go pointer. func cgoCheckSliceCopy(typ *_type, dst, src slice, n int) cgoCheckSliceCopy is called when copying n elements of a slice from src to dst. typ is the element type of the slice. It throws if the program is copying slice elements that contain Go pointers into non-Go memory. go:nosplit go:nowritebarrier func cgoCheckTypedBlock(typ *_type, src unsafe.Pointer, off, size uintptr) cgoCheckTypedBlock checks the block of memory at src, for up to size bytes, and throws if it finds a Go pointer. The type of the memory is typ, and src is off bytes into that type. go:nosplit go:nowritebarrier func cgoCheckUnknownPointer(p unsafe.Pointer, msg string) (base, i uintptr) cgoCheckUnknownPointer is called for an arbitrary pointer into Go memory. It checks whether that Go memory contains any other pointer into Go memory. If it does, we panic. The return values are unused but useful to see in panic tracebacks. func cgoCheckUsingType(typ *_type, src unsafe.Pointer, off, size uintptr) cgoCheckUsingType is like cgoCheckTypedBlock, but is a last ditch fall back to look for pointers in src using the type information. We only use this when looking at a value on the stack when the type uses a GC program, because otherwise it's more efficient to use the GC bits. This is called on the system stack. go:nowritebarrier go:systemstack func cgoCheckWriteBarrier(dst *uintptr, src uintptr) cgoCheckWriteBarrier is called whenever a pointer is stored into memory. It throws if the program is storing a Go pointer into non-Go memory. This is called from the write barrier, so its entire call tree must be nosplit. 
go:nosplit go:nowritebarrier func cgoContextPCs(ctxt uintptr, buf []uintptr) cgoContextPCs gets the PC values from a cgo traceback. func cgoInRange(p unsafe.Pointer, start, end uintptr) bool cgoInRange reports whether p is between start and end. go:nosplit go:nowritebarrierrec func cgoIsGoPointer(p unsafe.Pointer) bool cgoIsGoPointer reports whether the pointer is a Go pointer--a pointer to Go memory. We only care about Go memory that might contain pointers. go:nosplit go:nowritebarrierrec func cgoSigtramp() func cgoUse(interface{}) cgoUse is called by cgo-generated code (using go:linkname to get at an unexported name). The calls serve two purposes: 1) they are opaque to escape analysis, so the argument is considered to escape to the heap. 2) they keep the argument alive until the call site; the call is emitted after the end of the (presumed) use of the argument by C. cgoUse should not actually be called (see cgoAlwaysFalse). func cgocall(fn, arg unsafe.Pointer) int32 Call from Go to C. go:nosplit func cgocallback(fn, frame unsafe.Pointer, framesize, ctxt uintptr) func cgocallback_gofunc(fv, frame, framesize, ctxt uintptr) Not all cgocallback_gofunc frames are actually cgocallback_gofunc, so not all have these arguments. Mark them uintptr so that the GC does not misinterpret memory when the arguments are not present. cgocallback_gofunc is not called from go, only from cgocallback, so the arguments will be found via cgocallback's pointer-declared arguments. See the assembly implementations for more details. func cgocallbackg(ctxt uintptr) Call from C back to Go. go:nosplit func cgocallbackg1(ctxt uintptr) func cgounimpl() called from (incomplete) assembly func chanbuf(c *hchan, i uint) unsafe.Pointer chanbuf(c, i) is pointer to the i'th slot in the buffer. func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool) chanrecv receives on channel c and writes the received data to ep. ep may be nil, in which case received data is ignored. 
If block == false and no elements are available, returns (false, false). Otherwise, if c is closed, zeros *ep and returns (true, false). Otherwise, fills in *ep with an element and returns (true, true). A non-nil ep must point to the heap or the caller's stack. func chanrecv1(c *hchan, elem unsafe.Pointer) entry points for <- c from compiled code go:nosplit func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool) func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool Generic single channel send/recv. If block == false, the protocol will not sleep but will return if it could not complete. Sleep can wake up with g.param == nil when a channel involved in the sleep has been closed. It is easiest to loop and re-run the operation; we'll see that it's now closed. func chansend1(c *hchan, elem unsafe.Pointer) entry point for c <- x from compiled code go:nosplit func check() func checkASM() bool checkASM reports whether assembly runtime checks have passed. func checkTimeouts() func checkTreapNode(t *treapNode) checkTreapNode, when used in conjunction with walkTreap, can usually detect a poorly formed treap. func checkdead() Check for deadlock situation. The check is based on the number of running M's; if that number is 0, it is a deadlock. sched.lock must be held. func checkmcount() func clearCheckmarks() func clearSignalHandlers() clearSignalHandlers clears all signal handlers that are not ignored back to the default. This is called by the child after a fork, so that we can enable the signal mask for the exec without worrying about running a signal handler in the child. go:nosplit go:nowritebarrierrec func clearpools() func clobberfree(x unsafe.Pointer, size uintptr) clobberfree sets the memory content at x to bad content, for debugging purposes.
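The (selected, received) contract documented for chanrecv with block == false maps directly onto a non-blocking select at user level. This sketch demonstrates the three documented outcomes; it is an analogue of the contract, not the runtime implementation.

```go
package main

import "fmt"

// tryRecv mirrors chanrecv's documented results for block == false:
// selected reports whether the receive proceeded at all, and received
// reports whether a real element (rather than the zero value from a
// closed channel) was obtained.
func tryRecv(c chan int) (v int, selected, received bool) {
	select {
	case v, received = <-c:
		return v, true, received
	default:
		// No element available and channel not closed: (false, false).
		return 0, false, false
	}
}

func main() {
	c := make(chan int, 1)

	_, sel, _ := tryRecv(c) // empty channel: selected == false
	fmt.Println(sel)

	c <- 7
	v, _, recv := tryRecv(c) // element available: (true, true)
	fmt.Println(v, recv)

	close(c)
	_, sel2, recv2 := tryRecv(c) // closed: zero value, (true, false)
	fmt.Println(sel2, recv2)
}
```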
func clone(flags int32, stk, mp, gp, fn unsafe.Pointer) int32 func closechan(c *hchan) func closefd(fd int32) int32 func closeonexec(fd int32) func complex128div(n complex128, m complex128) complex128 func concatstring2(buf *tmpBuf, a [2]string) string func concatstring3(buf *tmpBuf, a [3]string) string func concatstring4(buf *tmpBuf, a [4]string) string func concatstring5(buf *tmpBuf, a [5]string) string func concatstrings(buf *tmpBuf, a []string) string concatstrings implements a Go string concatenation x+y+z+... The operands are passed in the slice a. If buf != nil, the compiler has determined that the result does not escape the calling function, so the string data can be stored in buf if small enough. func connect(fd int32, addr unsafe.Pointer, len int32) int32 func contains(s, t string) bool func convT16(val uint16) (x unsafe.Pointer) func convT32(val uint32) (x unsafe.Pointer) func convT64(val uint64) (x unsafe.Pointer) func convTslice(val []byte) (x unsafe.Pointer) func convTstring(val string) (x unsafe.Pointer) func copysign(x, y float64) float64 copysign returns a value with the magnitude of x and the sign of y. func copystack(gp *g, newsize uintptr, sync bool) Copies gp's stack to a new stack of a different size. Caller must have changed gp status to Gcopystack. If sync is true, this is a self-triggered stack growth and, in particular, no other G may be writing to gp's stack (e.g., via a channel operation). If sync is false, copystack protects against concurrent channel operations. func countSub(x, y uint32) int countSub subtracts two counts obtained from profIndex.dataCount or profIndex.tagCount, assuming that they are no more than 2^29 apart (guaranteed since they are never more than len(data) or len(tags) apart, respectively). tagCount wraps at 2^30, while dataCount wraps at 2^32. This function works for both. func countrunes(s string) int countrunes returns the number of runes in s. 
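countSub's documented trick is worth seeing concretely: when two counters wrap at 2^30 or 2^32 but are guaranteed to be within 2^29 of each other, subtracting in uint32 and then sign-extending from bit 29 recovers the true signed difference for both wrap widths. A sketch of that arithmetic (assumed from the doc comment, not copied from the runtime):

```go
package main

import "fmt"

// countSub subtracts two counters that may have wrapped at 2^30 or
// 2^32, assuming they are no more than 2^29 apart. Shifting left by
// two and back arithmetically sign-extends bit 29, so the same
// expression is correct for both wrap widths.
func countSub(x, y uint32) int {
	return int(int32(x-y) << 2 >> 2)
}

func main() {
	const wrap30 = 1 << 30
	// x has wrapped past 2^30 while y has not: the true distance is 8.
	fmt.Println(countSub(5, wrap30-3)) // prints 8
	// y ahead of x gives a negative difference.
	fmt.Println(countSub(3, 10)) // prints -7
}
```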
func cpuinit() cpuinit extracts the environment variable GODEBUG from the environment on Unix-like operating systems and calls internal/cpu.Initialize. func cputicks() int64 careful: cputicks is not guaranteed to be monotonic! In particular, we have noticed drift between cpus on certain os/arch combinations. See issue 8976. func crash() func createfing() func cstring(s string) unsafe.Pointer func debugCallCheck(pc uintptr) string debugCallCheck checks whether it is safe to inject a debugger function call with return PC pc. If not, it returns a string explaining why. func debugCallPanicked(val interface{}) func debugCallV1() func debugCallWrap(dispatch uintptr) debugCallWrap pushes a defer to recover from panics in debug calls and then calls the dispatching function at PC dispatch. func debug_modinfo() string go:linkname debug_modinfo runtime/debug.modinfo func decoderune(s string, k int) (r rune, pos int) decoderune returns the non-ASCII rune at the start of s[k:] and the index after the rune in s. decoderune assumes that caller has checked that the to be decoded rune is a non-ASCII rune. If the string appears to be incomplete or decoding problems are encountered (runeerror, k + 1) is returned to ensure progress when decoderune is used to iterate over a string. func deductSweepCredit(spanBytes uintptr, callerSweepPages uintptr) deductSweepCredit deducts sweep credit for allocating a span of size spanBytes. This must be performed *before* the span is allocated to ensure the system has enough credit. If necessary, it performs sweeping to prevent going in to debt. If the caller will also sweep pages (e.g., for a large allocation), it can pass a non-zero callerSweepPages to leave that many pages unswept. deductSweepCredit makes a worst-case assumption that all spanBytes bytes of the ultimately allocated span will be available for object allocation. deductSweepCredit is the core of the "proportional sweep" system. 
It uses statistics gathered by the garbage collector to perform enough sweeping so that all pages are swept during the concurrent sweep phase between GC cycles. mheap_ must NOT be locked. func deferArgs(d *_defer) unsafe.Pointer The arguments associated with a deferred call are stored immediately after the _defer header in memory. go:nosplit func deferclass(siz uintptr) uintptr defer size class for arg size sz go:nosplit func deferproc(siz int32, fn *funcval) Create a new deferred function fn with siz bytes of arguments. The compiler turns a defer statement into a call to this. go:nosplit func deferprocStack(d *_defer) deferprocStack queues a new deferred function with a defer record on the stack. The defer record must have its siz and fn fields initialized. All other fields can contain junk. The defer record must be immediately followed in memory by the arguments of the defer. Nosplit because the arguments on the stack won't be scanned until the defer record is spliced into the gp._defer list. go:nosplit func deferreturn(arg0 uintptr) The single argument isn't actually used - it just has its address taken so it can be matched against pending defers. go:nosplit func deltimer(t *timer) bool Delete timer t from the heap. Do not need to update the timerproc: if it wakes up early, no big deal. func dematerializeGCProg(s *mspan) func dieFromSignal(sig uint32) dieFromSignal kills the program with a signal. This provides the expected exit status for the shell. This is only called with fatal signals expected to kill the process. go:nosplit go:nowritebarrierrec func divlu(u1, u0, v uint64) (q, r uint64) 128/64 -> 64 quotient, 64 remainder. adapted from hacker's delight func doInit(t *initTask) func dolockOSThread() dolockOSThread is called by LockOSThread and lockOSThread below after they modify m.locked. Do not allow preemption during this call, or else the m might be different in this function than in the caller. 
go:nosplit func dopanic_m(gp *g, pc, sp uintptr) bool func dounlockOSThread() dounlockOSThread is called by UnlockOSThread and unlockOSThread below after they update m->locked. Do not allow preemption during this call, or else the m might be different in this function than in the caller. go:nosplit func dropg() dropg removes the association between m and the current goroutine m->curg (gp for short). Typically a caller sets gp's status away from Grunning and then immediately calls dropg to finish the job. The caller is also responsible for arranging that gp will be restarted using ready at an appropriate time. After calling dropg and arranging for gp to be readied later, the caller can do other work but eventually should call schedule to restart the scheduling of goroutines on this m. func dropm() dropm is called when a cgo callback has called needm but is now done with the callback and returning back into the non-Go thread. It puts the current m back onto the extra list. The main expense here is the call to signalstack to release the m's signal stack, and then the call to needm on the next callback from this thread. It is tempting to try to save the m for next time, which would eliminate both these costs, but there might not be a next time: the current thread (which Go does not control) might exit. If we saved the m for that thread, there would be an m leak each time such a thread exited. Instead, we acquire and release an m on each call. These should typically not be scheduling operations, just a few atomics, so the cost should be small. TODO(rsc): An alternative would be to allocate a dummy pthread per-thread variable using pthread_key_create. Unlike the pthread keys we already use on OS X, this dummy key would never be read by Go code. It would exist only so that we could register a thread-exit-time destructor. That destructor would put the m back onto the extra list. This is purely a performance optimization.
The current version, in which dropm happens on each cgo call, is still correct too. We may have to keep the current version on systems with cgo but without pthreads, like Windows. func duffcopy() func duffzero() func dumpGCProg(p *byte) func dumpbool(b bool) func dumpbv(cbv *bitvector, offset uintptr) dump kinds & offsets of interesting fields in bv func dumpfields(bv bitvector) dumpint() the kind & offset of each field in an object. func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype) func dumpframe(s *stkframe, arg unsafe.Pointer) bool func dumpgoroutine(gp *g) func dumpgs() func dumpgstatus(gp *g) func dumpint(v uint64) dump a uint64 in a varint format parseable by encoding/binary func dumpitabs() func dumpmemprof() func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr) func dumpmemrange(data unsafe.Pointer, len uintptr) dump varint uint64 length followed by memory contents func dumpmemstats() func dumpms() func dumpobj(obj unsafe.Pointer, size uintptr, bv bitvector) dump an object func dumpobjs() func dumpotherroot(description string, to unsafe.Pointer) func dumpparams() func dumpregs(c *sigctxt) func dumproots() func dumpslice(b []byte) func dumpstr(s string) func dumptype(t *_type) dump information for a type func dwrite(data unsafe.Pointer, len uintptr) func dwritebyte(b byte) func efaceHash(i interface{}, seed uintptr) uintptr func efaceeq(t *_type, x, y unsafe.Pointer) bool func elideWrapperCalling(id funcID) bool elideWrapperCalling reports whether a wrapper function that called function id should be elided from stack traces. func encoderune(p []byte, r rune) int encoderune writes into p (which must be large enough) the UTF-8 encoding of the rune. It returns the number of bytes written. func ensureSigM() ensureSigM starts one global, sleeping thread to make sure at least one thread is available to catch signals enabled for os/signal. 
func entersyscall() Standard syscall entry used by the go syscall library and normal cgo calls. This is exported via linkname to assembly in the syscall package. go:nosplit go:linkname entersyscall func entersyscall_gcwait() func entersyscall_sysmon() func entersyscallblock() The same as entersyscall(), but with a hint that the syscall is blocking. go:nosplit func entersyscallblock_handoff() func envKeyEqual(a, b string) bool envKeyEqual reports whether a == b, with ASCII-only case insensitivity on Windows. The two strings must have the same length. func environ() []string func epollcreate(size int32) int32 func epollcreate1(flags int32) int32 func epollctl(epfd, op, fd int32, ev *epollevent) int32 func epollwait(epfd int32, ev *epollevent, nev, timeout int32) int32 func eqslice(x, y []uintptr) bool func evacuate(t *maptype, h *hmap, oldbucket uintptr) func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr) func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr) func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr) func evacuated(b *bmap) bool func execute(gp *g, inheritTime bool) Schedules gp to run on the current M. If inheritTime is true, gp inherits the remaining time in the current time slice. Otherwise, it starts a new time slice. Never returns. Write barriers are allowed because this is called immediately after acquiring a P in several places. func exit(code int32) func exitThread(wait *uint32) exitThread terminates the current thread, writing *wait = 0 when the stack is safe to reclaim. func exitsyscall() The goroutine g exited its system call. Arrange for it to run on a cpu again. This is called only from the go syscall library, not from the low-level system calls used by the runtime. Write barriers are not allowed because our P may have been stolen. go:nosplit go:nowritebarrierrec go:linkname exitsyscall func exitsyscall0(gp *g) exitsyscall slow path on g0. Failed to acquire P, enqueue gp as runnable. 
go:nowritebarrierrec func exitsyscallfast(oldp *p) bool func exitsyscallfast_pidle() bool func exitsyscallfast_reacquired() exitsyscallfast_reacquired is the exitsyscall path on which this G has successfully reacquired the P it was running on before the syscall. func extendRandom(r []byte, n int) extendRandom extends the random numbers in r[:n] to the whole slice r. Treats n<0 as n==0. func f32equal(p, q unsafe.Pointer) bool func f32hash(p unsafe.Pointer, h uintptr) uintptr func f32to64(f uint32) uint64 func f32toint32(x uint32) int32 func f32toint64(x uint32) int64 func f32touint64(x float32) uint64 func f64equal(p, q unsafe.Pointer) bool func f64hash(p unsafe.Pointer, h uintptr) uintptr func f64to32(f uint64) uint32 func f64toint(f uint64) (val int64, ok bool) func f64toint32(x uint64) int32 func f64toint64(x uint64) int64 func f64touint64(x float64) uint64 func fadd32(x, y uint32) uint32 func fadd64(f, g uint64) uint64 func fastexprand(mean int) int32 fastexprand returns a random number from an exponential distribution with the specified mean. func fastlog2(x float64) float64 fastlog2 implements a fast approximation to the base 2 log of a float64. This is used to compute a geometric distribution for heap sampling, without introducing dependencies into package math. This uses a very rough approximation using the float64 exponent and the first 25 bits of the mantissa. The top 5 bits of the mantissa are used to load limits from a table of constants and the rest are used to scale linearly between them. func fastrand() uint32 func fastrandn(n uint32) uint32 func fatalpanic(msgs *_panic) fatalpanic implements an unrecoverable panic. It is like fatalthrow, except that if msgs != nil, fatalpanic also prints panic messages and decrements runningPanicDefers once main is blocked from exiting. func fatalthrow() fatalthrow implements an unrecoverable runtime throw. It freezes the system, prints stack traces starting from its caller, and terminates the process. 
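fastlog2's description above rests on a property of IEEE 754: for a normal positive float64, the unbiased exponent field is exactly floor(log2(x)), and the mantissa only refines the fractional part. This sketch extracts just the exponent term; the runtime additionally interpolates over the top mantissa bits via a small constant table, which is omitted here.

```go
package main

import (
	"fmt"
	"math"
)

// roughLog2 sketches the first step of fastlog2: pull the unbiased
// exponent out of the IEEE 754 bit pattern, which equals
// floor(log2(x)) for normal positive x and is exact for powers of two.
func roughLog2(x float64) int {
	bits := math.Float64bits(x)
	return int(bits>>52&0x7FF) - 1023 // shift out mantissa, unbias exponent
}

func main() {
	fmt.Println(roughLog2(8))   // prints 3 (exact: 8 = 2^3)
	fmt.Println(roughLog2(100)) // prints 6 (floor of log2(100) ≈ 6.64)
}
```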
func fcmp64(f, g uint64) (cmp int32, isnan bool) func fdiv32(x, y uint32) uint32 func fdiv64(f, g uint64) uint64 func feq32(x, y uint32) bool func feq64(x, y uint64) bool func fge32(x, y uint32) bool func fge64(x, y uint64) bool func fgt32(x, y uint32) bool func fgt64(x, y uint64) bool func fillstack(stk stack, b byte) func findObject(p, refBase, refOff uintptr) (base uintptr, s *mspan, objIndex uintptr) findObject returns the base address for the heap object containing the address p, the object's span, and the index of the object in s. If p does not point into a heap object, it returns base == 0. If p is an invalid heap pointer and debug.invalidptr != 0, findObject panics. refBase and refOff optionally give the base address of the object in which the pointer p was found and the byte offset at which it was found. These are used for error reporting. func findnull(s *byte) int func findnullw(s *uint16) int func findrunnable() (gp *g, inheritTime bool) Finds a runnable goroutine to execute. Tries to steal from other P's, get g from global queue, poll network. func findsghi(gp *g, stk stack) uintptr func finishsweep_m() finishsweep_m ensures that all spans are swept. The world must be stopped. This ensures there are no sweeps in progress. func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype) func fint32to32(x int32) uint32 func fint32to64(x int32) uint64 func fint64to32(x int64) uint32 func fint64to64(x int64) uint64 func fintto64(val int64) (f uint64) func float64bits(f float64) uint64 Float64bits returns the IEEE 754 binary representation of f. func float64frombits(b uint64) float64 Float64frombits returns the floating point number corresponding to the IEEE 754 binary representation b. func flush() func flushallmcaches() flushallmcaches flushes the mcaches of all Ps. func flushmcache(i int) flushmcache flushes the mcache of allp[i].
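findnull above walks memory from a *byte looking for the NUL terminator, i.e. a C-style strlen. Expressed safely over a byte slice rather than a raw pointer, its contract looks like this (a user-level analogue, not the runtime's pointer-walking code):

```go
package main

import "fmt"

// findNull returns the index of the first NUL byte in b, or len(b)
// if none is present: the length of a C-style string. The runtime's
// findnull does the same walk from a *byte; findnullw is the uint16
// (wide-character) variant.
func findNull(b []byte) int {
	for i, c := range b {
		if c == 0 {
			return i
		}
	}
	return len(b)
}

func main() {
	cstr := []byte{'h', 'i', 0, 'x'} // bytes past the NUL are ignored
	fmt.Println(findNull(cstr))      // prints 2
}
```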
func fmtNSAsMS(buf []byte, ns uint64) []byte fmtNSAsMS nicely formats ns nanoseconds as milliseconds. func fmul32(x, y uint32) uint32 func fmul64(f, g uint64) uint64 func fneg64(f uint64) uint64 func forEachP(fn func(*p)) forEachP calls fn(p) for every P p when p reaches a GC safe point. If a P is currently executing code, this will bring the P to a GC safe point and execute fn on that P. If the P is not executing code (it is idle or in a syscall), this will call fn(p) directly while preventing the P from exiting its state. This does not ensure that fn will run on every CPU executing Go code, but it acts as a global memory barrier. GC uses this as a "ragged barrier." The caller must hold worldsema. go:systemstack func forcegchelper() func fpack32(sign, mant uint32, exp int, trunc uint32) uint32 func fpack64(sign, mant uint64, exp int, trunc uint64) uint64 func freeSomeWbufs(preemptible bool) bool freeSomeWbufs frees some workbufs back to the heap and returns true if it should be called again to free more. func freeStackSpans() freeStackSpans frees unused stack spans at the end of GC. func freedefer(d *_defer) Free the given defer. The defer cannot be used after this call. This must not grow the stack because there may be a frame without a stack map when this is called. func freedeferfn() func freedeferpanic() Separate function so that it can split stack. Windows otherwise runs out of stack space. func freemcache(c *mcache) func freespecial(s *special, p unsafe.Pointer, size uintptr) Do whatever cleanup needs to be done to deallocate s. It has already been unlinked from the mspan specials list. func freezetheworld() Similar to stopTheWorld but best-effort and can be called several times. There is no reverse operation, used during crashing. This function must not lock any mutexes. func fsub64(f, g uint64) uint64 func fuint64to32(x uint64) float32 func fuint64to64(x uint64) float64 func funcPC(f interface{}) uintptr funcPC returns the entry PC of the function f. 
It assumes that f is a func value. Otherwise the behavior is undefined. CAREFUL: In programs with plugins, funcPC can return different values for the same function (because there are actually multiple copies of the same function in the address space). To be safe, don't use the results of this function in any == expression. It is only safe to use the result as an address at which to start executing code. go:nosplit func funcdata(f funcInfo, i uint8) unsafe.Pointer func funcfile(f funcInfo, fileno int32) string func funcline(f funcInfo, targetpc uintptr) (file string, line int32) func funcline1(f funcInfo, targetpc uintptr, strict bool) (file string, line int32) func funcname(f funcInfo) string func funcnameFromNameoff(f funcInfo, nameoff int32) string func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32 func funpack32(f uint32) (sign, mant uint32, exp int, inf, nan bool) func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool) func futex(addr unsafe.Pointer, op int32, val uint32, ts, addr2 unsafe.Pointer, val3 uint32) int32 func futexsleep(addr *uint32, val uint32, ns int64) Atomically, if(*addr == val) sleep. Might be woken up spuriously; that's allowed. Don't sleep longer than ns; ns < 0 means forever. go:nosplit func futexwakeup(addr *uint32, cnt uint32) If any procs are sleeping on addr, wake up at most cnt. go:nosplit func gcAssistAlloc(gp *g) gcAssistAlloc performs GC work to make gp's assist debt positive. gp must be the calling user goroutine. This must be called with preemption enabled. func gcAssistAlloc1(gp *g, scanWork int64) gcAssistAlloc1 is the part of gcAssistAlloc that runs on the system stack. This is a separate function to make it easier to see that we're not capturing anything from the user stack, since the user stack may move while we're in this function. gcAssistAlloc1 indicates whether this assist completed the mark phase by setting gp.param to non-nil. This can't be communicated on the stack since it may move.
func gcBgMarkPrepare() gcBgMarkPrepare sets up state for background marking. Mutator assists must not yet be enabled. func gcBgMarkStartWorkers() func gcBgMarkWorker(_p_ *p) func gcDrain(gcw *gcWork, flags gcDrainFlags) gcDrain scans roots and objects in work buffers, blackening grey objects until it is unable to get more work. It may return before GC is done; it's the caller's responsibility to balance work from other Ps. If flags&gcDrainUntilPreempt != 0, gcDrain returns when g.preempt is set. If flags&gcDrainIdle != 0, gcDrain returns when there is other work to do. If flags&gcDrainFractional != 0, gcDrain self-preempts when pollFractionalWorkerExit() returns true. This implies gcDrainNoBlock. If flags&gcDrainFlushBgCredit != 0, gcDrain flushes scan work credit to gcController.bgScanCredit every gcCreditSlack units of scan work. func gcDrainN(gcw *gcWork, scanWork int64) int64 gcDrainN blackens grey objects until it has performed roughly scanWork units of scan work or the G is preempted. This is best-effort, so it may perform less work if it fails to get a work buffer. Otherwise, it will perform at least n units of work, but may perform more because scanning is always done in whole object increments. It returns the amount of scan work performed. The caller goroutine must be in a preemptible state (e.g., _Gwaiting) to prevent deadlocks during stack scanning. As a consequence, this must be called on the system stack. go:nowritebarrier go:systemstack func gcDumpObject(label string, obj, off uintptr) gcDumpObject dumps the contents of obj for debugging and marks the field at byte offset off in obj. func gcEffectiveGrowthRatio() float64 func gcFlushBgCredit(scanWork int64) gcFlushBgCredit flushes scanWork units of background scan work credit. This first satisfies blocked assists on the work.assistQueue and then flushes any remaining credit to gcController.bgScanCredit.
Write barriers are disallowed because this is used by gcDrain after it has ensured that all work is drained and this must preserve that condition. func gcMark(start_time int64) gcMark runs the mark (or, for concurrent GC, mark termination). All gcWork caches must be empty. STW is in effect at this point. func gcMarkDone() func gcMarkRootCheck() gcMarkRootCheck checks that all roots have been scanned. It is purely for debugging. func gcMarkRootPrepare() gcMarkRootPrepare queues root scanning jobs (stacks, globals, and some miscellany) and initializes scanning-related state. The caller must have called gcCopySpans(). func gcMarkTermination(nextTriggerRatio float64) func gcMarkTinyAllocs() gcMarkTinyAllocs greys all active tiny alloc blocks. func gcMarkWorkAvailable(p *p) bool gcMarkWorkAvailable reports whether executing a mark worker on p is potentially useful. p may be nil, in which case it only checks the global sources of work. func gcPaceScavenger() gcPaceScavenger updates the scavenger's pacing, particularly its rate and RSS goal. The RSS goal is based on the current heap goal with a small overhead to accommodate non-determinism in the allocator. The pacing is based on scavengePageRate, which applies to both regular and huge pages. See that constant for more information. func gcParkAssist() bool gcParkAssist puts the current goroutine on the assist queue and parks. gcParkAssist reports whether the assist is now satisfied. If it returns false, the caller must retry the assist. func gcResetMarkState() gcResetMarkState must be called on the system stack because it acquires the heap lock. See mheap for details. func gcSetTriggerRatio(triggerRatio float64) func gcStart(trigger gcTrigger) func gcSweep(mode gcMode) gcSweep must be called on the system stack because it acquires the heap lock. See mheap for details. go:systemstack func gcWaitOnMark(n uint32) gcWaitOnMark blocks until GC finishes the Nth mark phase. If GC has already completed this mark phase, it returns immediately.
func gcWakeAllAssists() gcWakeAllAssists wakes all currently blocked assists. This is used at the end of a GC cycle. gcBlackenEnabled must be false to prevent new assists from going to sleep after this point. func gcWriteBarrier() Called from compiled code; declared for vet; do NOT call from Go. func gcallers(gp *g, skip int, pcbuf []uintptr) int func gcd(a, b uint32) uint32 func gcenable() gcenable is called after the bulk of the runtime initialization, just before we're about to start letting user code run. It kicks off the background sweeper goroutine, the background scavenger goroutine, and enables GC. func gcinit() func gcmarknewobject(obj, size, scanSize uintptr) gcmarknewobject marks a newly allocated object black. obj must not contain any non-nil pointers. This is nosplit so it can manipulate a gcWork without preemption. go:nowritebarrier go:nosplit func gcount() int32 func gcstopm() Stops the current m for stopTheWorld. Returns when the world is restarted. func gentraceback(pc0, sp0, lr0 uintptr, gp *g, skip int, pcbuf *uintptr, max int, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer, flags uint) int Generic traceback. Handles runtime stack prints (pcbuf == nil), the runtime.Callers function (pcbuf != nil), as well as the garbage collector (callback != nil). A little clunky to merge these, but avoids duplicating the code and all its subtlety. The skip argument is only valid with pcbuf != nil and counts the number of logical frames to skip rather than physical frames (with inlining, a PC in pcbuf can represent multiple calls). If a PC is partially skipped and max > 1, pcbuf[1] will be runtime.skipPleaseUseCallersFrames+N where N indicates the number of logical frames to skip in pcbuf[0]. func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector) getArgInfo returns the argument frame information for a call to f with call frame frame. 
This is used for both actual calls with active stack frames and for deferred calls or goroutines that are not yet executing. If this is an actual call, ctxt must be nil (getArgInfo will retrieve what it needs from the active stack frame). If this is a deferred call or unstarted goroutine, ctxt must be the function object that was deferred or go'd. func getArgInfoFast(f funcInfo, needArgMap bool) (arglen uintptr, argmap *bitvector, ok bool) getArgInfoFast returns the argument frame information for a call to f. It is short and inlineable. However, it does not handle all functions. If ok reports false, you must call getArgInfo instead. TODO(josharian): once we do mid-stack inlining, call getArgInfo directly from getArgInfoFast and stop returning an ok bool. func getHugePageSize() uintptr func getRandomData(r []byte) func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord) getStackMap returns the locals and arguments live pointer maps, and stack object list for frame. func getargp(x int) uintptr getargp returns the location where the caller writes outgoing function call arguments. go:nosplit go:noinline func getcallerpc() uintptr func getcallersp() uintptr func getclosureptr() uintptr getclosureptr returns the pointer to the current closure. getclosureptr can only be used in an assignment statement at the entry of a function. Moreover, go:nosplit directive must be specified at the declaration of caller function, so that the function prolog does not clobber the closure register. for example: //go:nosplit func f(arg1, arg2, arg3 int) { dx := getclosureptr() } The compiler rewrites calls to this function into instructions that fetch the pointer from a well-known register (DX on x86 architecture, etc.) directly. func getgcmask(ep interface{}) (mask []byte) Returns GC type info for the pointer stored in ep for testing. If ep points to the stack, only static live information will be returned (i.e. 
not for objects which are only dynamically live stack objects). func getgcmaskcb(frame *stkframe, ctxt unsafe.Pointer) bool func getm() uintptr A helper function for EnsureDropM. func getproccount() int32 func getsig(i uint32) uintptr func gettid() uint32 func gfpurge(_p_ *p) Purge all cached G's from gfree list to the global list. func gfput(_p_ *p, gp *g) Put on gfree list. If local list is too long, transfer a batch to the global list. func globrunqput(gp *g) Put gp on the global runnable queue. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec func globrunqputbatch(batch *gQueue, n int32) Put a batch of runnable goroutines on the global runnable queue. This clears *batch. Sched must be locked. func globrunqputhead(gp *g) Put gp at the head of the global runnable queue. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec func goPanicIndex(x int, y int) failures in the comparisons for s[x], 0 <= x < y (y == len(s)) func goPanicIndexU(x uint, y int) func goPanicSlice3Acap(x int, y int) func goPanicSlice3AcapU(x uint, y int) func goPanicSlice3Alen(x int, y int) failures in the comparisons for s[::x], 0 <= x <= y (y == len(s) or cap(s)) func goPanicSlice3AlenU(x uint, y int) func goPanicSlice3B(x int, y int) failures in the comparisons for s[:x:y], 0 <= x <= y func goPanicSlice3BU(x uint, y int) func goPanicSlice3C(x int, y int) failures in the comparisons for s[x:y:], 0 <= x <= y func goPanicSlice3CU(x uint, y int) func goPanicSliceAcap(x int, y int) func goPanicSliceAcapU(x uint, y int) func goPanicSliceAlen(x int, y int) failures in the comparisons for s[:x], 0 <= x <= y (y == len(s) or cap(s)) func goPanicSliceAlenU(x uint, y int) func goPanicSliceB(x int, y int) failures in the comparisons for s[x:y], 0 <= x <= y func goPanicSliceBU(x uint, y int) func goargs() func gobytes(p *byte, n int) (b []byte) used by cmd/cgo func goenvs() func goenvs_unix() func 
goexit(neverCallThisFunction) goexit is the return stub at the top of every goroutine call stack. Each goroutine stack is constructed as if goexit called the goroutine's entry point function, so that when the entry point function returns, it will return to goexit, which will call goexit1 to perform the actual exit. This function must never be called directly. Call goexit1 instead. gentraceback assumes that goexit terminates the stack. A direct call on the stack will cause gentraceback to stop walking the stack prematurely and if there is leftover state it may panic. func goexit0(gp *g) goexit continuation on g0. func goexit1() Finishes execution of the current goroutine. func gogetenv(key string) string func gogo(buf *gobuf) func gopanic(e interface{}) The implementation of the predeclared function panic. func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int) func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int) Puts the current goroutine into a waiting state and unlocks the lock. The goroutine can be made runnable again by calling goready(gp). func gopreempt_m(gp *g) func goready(gp *g, traceskip int) func gorecover(argp uintptr) interface{} The implementation of the predeclared function recover. Cannot split the stack because it needs to reliably find the stack segment of its caller. TODO(rsc): Once we commit to CopyStackAlways, this doesn't need to be nosplit. go:nosplit func goroutineReady(arg interface{}, seq uintptr) Ready the goroutine arg. func goroutineheader(gp *g) func gosave(buf *gobuf) func goschedImpl(gp *g) func gosched_m(gp *g) Gosched continuation on g0. func goschedguarded() goschedguarded yields the processor like gosched, but also checks for forbidden states and opts out of the yield in those cases.
go:nosplit func goschedguarded_m(gp *g) goschedguarded is a forbidden-states-avoided version of gosched_m func gostartcall(buf *gobuf, fn, ctxt unsafe.Pointer) adjust Gobuf as if it executed a call to fn with context ctxt and then did an immediate gosave. func gostartcallfn(gobuf *gobuf, fv *funcval) adjust Gobuf as if it executed a call to fn and then did an immediate gosave. func gostring(p *byte) string This is exported via linkname to assembly in syscall (for Plan9). go:linkname gostring func gostringn(p *byte, l int) string func gostringnocopy(str *byte) string func gostringw(strw *uint16) string func gotraceback() (level int32, all, crash bool) gotraceback returns the current traceback settings. If level is 0, suppress all tracebacks. If level is 1, show tracebacks, but exclude runtime frames. If level is 2, show tracebacks including runtime frames. If all is set, print all goroutine stacks. Otherwise, print just the current goroutine. If crash is set, crash (core dump, etc) after tracebacking. func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr) obj is the start of an object with mark mbits. If it isn't already marked, mark it and enqueue into gcw. base and off are for debugging only and could be removed. See also wbBufFlush1, which partially duplicates this logic. func growWork(t *maptype, h *hmap, bucket uintptr) func growWork_fast32(t *maptype, h *hmap, bucket uintptr) func growWork_fast64(t *maptype, h *hmap, bucket uintptr) func growWork_faststr(t *maptype, h *hmap, bucket uintptr) func gwrite(b []byte) write to goroutine-local buffer if diverting output, or else standard error. func handoffp(_p_ *p) Hands off P from syscall or locked M. Always runs without a P, so write barriers are not allowed. 
go:nowritebarrierrec func hasPrefix(s, prefix string) bool func hashGrow(t *maptype, h *hmap) func haveexperiment(name string) bool func heapBitsSetType(x, size, dataSize uintptr, typ *_type) heapBitsSetType records that the new allocation [x, x+size) holds in [x, x+dataSize) one or more values of type typ. (The number of values is given by dataSize / typ.size.) If dataSize < size, the fragment [x+dataSize, x+size) is recorded as non-pointer data. It is known that the type has pointers somewhere; malloc does not call heapBitsSetType when there are no pointers, because all free objects are marked as noscan during heapBitsSweepSpan. There can only be one allocation from a given span active at a time, and the bitmap for a span always falls on byte boundaries, so there are no write-write races for access to the heap bitmap. Hence, heapBitsSetType can access the bitmap without atomics. There can be read-write races between heapBitsSetType and things that read the heap bitmap like scanobject. However, since heapBitsSetType is only used for objects that have not yet been made reachable, readers will ignore bits being modified by this function. This does mean this function cannot transiently modify bits that belong to neighboring objects. Also, on weakly-ordered machines, callers must execute a store/store (publication) barrier between calling this function and making the object reachable. func heapBitsSetTypeGCProg(h heapBits, progSize, elemSize, dataSize, allocSize uintptr, prog *byte) heapBitsSetTypeGCProg implements heapBitsSetType using a GC program. progSize is the size of the memory described by the program. elemSize is the size of the element that the GC program describes (a prefix of). dataSize is the total size of the intended data, a multiple of elemSize. allocSize is the total size of the allocated memory. GC programs are only used for large allocations. 
heapBitsSetType requires that allocSize is a multiple of 4 words, so that the relevant bitmap bytes are not shared with surrounding objects. func heapRetained() uint64 heapRetained returns an estimate of the current heap RSS. func hexdumpWords(p, end uintptr, mark func(uintptr) byte) hexdumpWords prints a word-oriented hex dump of [p, end). If mark != nil, it will be called with each printed word's address and should return a character mark to appear just before that word's value. It can return 0 to indicate no mark. func ifaceHash(i interface { F() }, seed uintptr) uintptr func ifaceeq(tab *itab, x, y unsafe.Pointer) bool func inHeapOrStack(b uintptr) bool inHeapOrStack is a variant of inheap that returns true for pointers into any allocated heap span. func inPersistentAlloc(p uintptr) bool inPersistentAlloc reports whether p points to memory allocated by persistentalloc. This must be nosplit because it is called by the cgo checker code, which is called by the write barrier code. go:nosplit func inRange(r0, r1, v0, v1 uintptr) bool inRange reports whether v0 or v1 are in the range [r0, r1]. func inVDSOPage(pc uintptr) bool inVDSOPage reports whether PC is on the VDSO page. func incidlelocked(v int32) func index(s, t string) int func inf2one(f float64) float64 inf2one returns a signed 1 if f is an infinity and a signed 0 otherwise. The sign of the result is the sign of f. func inheap(b uintptr) bool inheap reports whether b is a pointer into a (potentially dead) heap object. It returns false for pointers into mSpanManual spans. Non-preemptible because it is used by write barriers. go:nowritebarrier go:nosplit func init() start forcegc helper goroutine func initAlgAES() func initCheckmarks() func initsig(preinit bool) Initialize signals. Called by libpreinit so runtime may not be initialized. go:nosplit go:nowritebarrierrec func injectglist(glist *gList) Injects the list of runnable G's into the scheduler and clears glist. Can run concurrently with GC.
func int32Hash(i uint32, seed uintptr) uintptr func int64Hash(i uint64, seed uintptr) uintptr func interequal(p, q unsafe.Pointer) bool func interhash(p unsafe.Pointer, h uintptr) uintptr func intstring(buf *[4]byte, v int64) (s string) func isAbortPC(pc uintptr) bool isAbortPC reports whether pc is the program counter at which runtime.abort raises a signal. It is nosplit because it's part of the isgoexception implementation. func isDirectIface(t *_type) bool isDirectIface reports whether t is stored directly in an interface value. func isEmpty(x uint8) bool isEmpty reports whether the given tophash array entry represents an empty bucket entry. func isExportedRuntime(name string) bool isExportedRuntime reports whether name is an exported runtime function. It is only for runtime functions, so ASCII A-Z is fine. func isFinite(f float64) bool isFinite reports whether f is neither NaN nor an infinity. func isInf(f float64) bool isInf reports whether f is an infinity. func isNaN(f float64) (is bool) isNaN reports whether f is an IEEE 754 “not-a-number” value. func isPowerOfTwo(x uintptr) bool func isSweepDone() bool isSweepDone reports whether all spans are swept or currently being swept. Note that this condition may transition from false to true at any time as the sweeper runs. It may transition from true to false if a GC runs; to prevent that the caller must be non-preemptible or must somehow block GC progress. func isSystemGoroutine(gp *g, fixed bool) bool isSystemGoroutine reports whether the goroutine g must be omitted in stack dumps and deadlock detector. This is any goroutine that starts at a runtime.* entry point, except for runtime.main and sometimes runtime.runfinq. If fixed is true, any goroutine that can vary between user and system (that is, the finalizer goroutine) is considered a user goroutine. func ismapkey(t *_type) bool func itabAdd(m *itab) itabAdd adds the given itab to the itab hash table. itabLock must be held. 
func itabHashFunc(inter *interfacetype, typ *_type) uintptr func itab_callback(tab *itab) func itabsinit() func iterate_finq(callback func(*funcval, unsafe.Pointer, uintptr, *_type, *ptrtype)) func iterate_itabs(fn func(*itab)) func iterate_memprof(fn func(*bucket, uintptr, *uintptr, uintptr, uintptr, uintptr)) func itoa(buf []byte, val uint64) []byte go:nosplit itoa converts val to a decimal representation. The result is written somewhere within buf and the location of the result is returned. buf must be at least 20 bytes. func itoaDiv(buf []byte, val uint64, dec int) []byte itoaDiv formats val/(10**dec) into buf. func jmpdefer(fv *funcval, argp uintptr) func key32(p *uintptr) *uint32 We use the uintptr mutex.key and note.key as a uint32. go:nosplit func less(a, b uint32) bool less checks if a < b, considering a & b running counts that may overflow the 32-bit range, and that their "unwrapped" difference is always less than 2^31. func lfnodeValidate(node *lfnode) lfnodeValidate panics if node is not a valid address for use with lfstack.push. This only needs to be called when node is allocated. func lfstackPack(node *lfnode, cnt uintptr) uint64 func libpreinit() Called to do synchronous initialization of Go code built with -buildmode=c-archive or -buildmode=c-shared. None of the Go runtime is initialized. go:nosplit go:nowritebarrierrec func lock(l *mutex) func lockOSThread() func lockedOSThread() bool func lowerASCII(c byte) byte func mProf_Flush() mProf_Flush flushes the events from the current heap profiling cycle into the active profile. After this it is safe to start a new heap profiling cycle with mProf_NextCycle. This is called by GC after mark termination starts the world. In contrast with mProf_NextCycle, this is somewhat expensive, but safe to do concurrently. func mProf_FlushLocked() func mProf_Free(b *bucket, size uintptr) Called when freeing a profiled block. func mProf_Malloc(p unsafe.Pointer, size uintptr) Called by malloc to record a profiled block. 
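The less helper above compares wrapping 32-bit counters under the assumption that their unwrapped difference stays below 2^31. A user-level sketch of the same idea (this is an equivalent formulation, not necessarily the runtime's exact expression):

```go
package main

import "fmt"

// less reports whether counter a precedes counter b, treating both as
// running counts that may wrap around uint32, assuming their true
// difference is always less than 2^31. The signed reinterpretation of
// the unsigned difference makes the comparison wrap-safe.
func less(a, b uint32) bool {
	return int32(b-a) > 0
}

func main() {
	fmt.Println(less(1, 2)) // true
	fmt.Println(less(2, 1)) // false
	// Wraparound: 0xfffffffe still precedes 1 after overflow.
	fmt.Println(less(0xfffffffe, 1)) // true
}
```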
func mProf_NextCycle() mProf_NextCycle publishes the next heap profile cycle and creates a fresh heap profile cycle. This operation is fast and can be done during STW. The caller must call mProf_Flush before calling mProf_NextCycle again. This is called by mark termination during STW so allocations and frees after the world is started again count towards a new heap profiling cycle. func mProf_PostSweep() mProf_PostSweep records that all sweep frees for this GC cycle have completed. This has the effect of publishing the heap profile snapshot as of the last mark termination without advancing the heap profile cycle. func mSysStatDec(sysStat *uint64, n uintptr) Atomically decreases a given *system* memory stat. Same comments as mSysStatInc apply. go:nosplit func mSysStatInc(sysStat *uint64, n uintptr) func madvise(addr unsafe.Pointer, n uintptr, flags int32) int32 return value is only set on linux to be used in osinit() func main() The main goroutine. func main_main() go:linkname main_main main.main func makeslice(et *_type, len, cap int) unsafe.Pointer func makeslice64(et *_type, len64, cap64 int64) unsafe.Pointer func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer Allocate an object of size bytes. Small objects are allocated from the per-P cache's free lists. Large objects (> 32 kB) are allocated straight from the heap. func mallocinit() func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer mapaccess1 returns a pointer to h[key]. Never returns nil, instead it will return a reference to the zero object for the elem type if the key is not in the map. NOTE: The returned pointer may keep the whole map live, so don't hold onto it for very long.
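The "zero object" behavior of mapaccess1 is exactly what the one-result map index expression shows at user level, while mapaccess2 backs the comma-ok form. A minimal sketch:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1}

	// mapaccess1 backs the one-result form: a missing key yields the
	// zero value of the element type rather than a nil dereference.
	fmt.Println(m["missing"]) // 0

	// mapaccess2 backs the comma-ok form, which distinguishes
	// "absent" from "present with a zero value".
	v, ok := m["missing"]
	fmt.Println(v, ok) // 0 false

	v, ok = m["a"]
	fmt.Println(v, ok) // 1 true
}
```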
func mapaccess1_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer func mapaccess1_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer func mapaccess1_faststr(t *maptype, h *hmap, ky string) unsafe.Pointer func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool) func mapaccess2_fast32(t *maptype, h *hmap, key uint32) (unsafe.Pointer, bool) func mapaccess2_fast64(t *maptype, h *hmap, key uint64) (unsafe.Pointer, bool) func mapaccess2_faststr(t *maptype, h *hmap, ky string) (unsafe.Pointer, bool) func mapaccess2_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) (unsafe.Pointer, bool) func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer) returns both key and elem. Used by map iterator func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer Like mapaccess, but allocates a slot for the key if it is not present in the map. func mapassign_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer func mapassign_fast32ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer func mapassign_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer func mapassign_fast64ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer func mapassign_faststr(t *maptype, h *hmap, s string) unsafe.Pointer func mapclear(t *maptype, h *hmap) mapclear deletes all keys from a map. func mapdelete(t *maptype, h *hmap, key unsafe.Pointer) func mapdelete_fast32(t *maptype, h *hmap, key uint32) func mapdelete_fast64(t *maptype, h *hmap, key uint64) func mapdelete_faststr(t *maptype, h *hmap, ky string) func mapiterinit(t *maptype, h *hmap, it *hiter) mapiterinit initializes the hiter struct used for ranging over maps. The hiter struct pointed to by 'it' is allocated on the stack by the compiler's order pass or on the heap by reflect_mapiterinit. Both need to have zeroed hiter since the struct contains pointers.
func mapiternext(it *hiter) func markroot(gcw *gcWork, i uint32) markroot scans the i'th root. Preemption must be disabled (because this uses a gcWork). nowritebarrier is only advisory here. func markrootBlock(b0, n0 uintptr, ptrmask0 *uint8, gcw *gcWork, shard int) markrootBlock scans the shard'th shard of the block of memory [b0, b0+n0), with the given pointer mask. func markrootFreeGStacks() markrootFreeGStacks frees stacks of dead Gs. This does not free stacks of dead Gs cached on Ps, but having a few cached stacks around isn't a problem. func markrootSpans(gcw *gcWork, shard int) markrootSpans marks roots for one shard of work.spans. func mcall(fn func(*g)) mcall switches from the g to the g0 stack and invokes fn(g), where g is the goroutine that made the call. mcall saves g's current PC/SP in g->sched so that it can be restored later. It is up to fn to arrange for that later execution, typically by recording g in a data structure, causing something to call ready(g) later. mcall returns to the original goroutine g later, when g has been rescheduled. fn must not return at all; typically it ends by calling schedule, to let the m run other goroutines. mcall can only be called from g stacks (not g0, not gsignal). This must NOT be go:noescape: if fn is a stack-allocated closure, fn puts g on a run queue, and g executes before fn returns, the closure will be invalidated while it is still executing. func mcommoninit(mp *m) func mcount() int32 func mdump() func memclrHasPointers(ptr unsafe.Pointer, n uintptr) memclrHasPointers clears n bytes of typed memory starting at ptr. The caller must ensure that the type of the object at ptr has pointers, usually by checking typ.ptrdata. However, ptr does not have to point to the start of the allocation. func memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr) memclrNoHeapPointers clears n bytes starting at ptr. Usually you should use typedmemclr. 
memclrNoHeapPointers should be used only when the caller knows that *ptr contains no heap pointers because either: *ptr is initialized memory and its type is pointer-free, or *ptr is uninitialized memory (e.g., memory that's being reused for a new allocation) and hence contains only "junk". The (CPU-specific) implementations of this function are in memclr_*.s. go:noescape func memequal(a, b unsafe.Pointer, size uintptr) bool in asm_*.s go:noescape func memequal0(p, q unsafe.Pointer) bool func memequal128(p, q unsafe.Pointer) bool func memequal16(p, q unsafe.Pointer) bool func memequal32(p, q unsafe.Pointer) bool func memequal64(p, q unsafe.Pointer) bool func memequal8(p, q unsafe.Pointer) bool func memequal_varlen(a, b unsafe.Pointer) bool func memhash(p unsafe.Pointer, seed, s uintptr) uintptr func memhash0(p unsafe.Pointer, h uintptr) uintptr func memhash128(p unsafe.Pointer, h uintptr) uintptr func memhash16(p unsafe.Pointer, h uintptr) uintptr func memhash32(p unsafe.Pointer, seed uintptr) uintptr func memhash64(p unsafe.Pointer, seed uintptr) uintptr func memhash8(p unsafe.Pointer, h uintptr) uintptr func memhash_varlen(p unsafe.Pointer, h uintptr) uintptr func memmove(to, from unsafe.Pointer, n uintptr) memmove copies n bytes from "from" to "to". in memmove_*.s go:noescape func mexit(osStack bool) mexit tears down and exits the current thread. Don't call this directly to exit the thread, since it must run at the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to unwind the stack to the point that exits the thread. It is entered with m.p != nil, so write barriers are allowed. It will release the P before exiting. func mincore(addr unsafe.Pointer, n uintptr, dst *byte) int32 func minit() Called to initialize a new m (including the bootstrap m). Called on the new thread, cannot allocate memory. func minitSignalMask() minitSignalMask is called when initializing a new m to set the thread's signal mask. 
When this is called all signals have been blocked for the thread. This starts with m.sigmask, which was set either from initSigmask for a newly created thread or by calling msigsave if this is a non-Go thread calling a Go function. It removes all essential signals from the mask, thus causing those signals to not be blocked. Then it sets the thread's signal mask. After this is called the thread can receive signals. func minitSignalStack() minitSignalStack is called when initializing a new m to set the alternate signal stack. If the alternate signal stack is not set for the thread (the normal case) then set the alternate signal stack to the gsignal stack. If the alternate signal stack is set for the thread (the case when a non-Go thread sets the alternate signal stack and then calls a Go function) then set the gsignal stack to the alternate signal stack. Record which choice was made in newSigstack, so that it can be undone in unminit. func minitSignals() minitSignals is called when initializing a new m to set the thread's alternate signal stack and signal mask. func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (unsafe.Pointer, int) func modtimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr) func moduledataverify() func moduledataverify1(datap *moduledata) func modulesinit() modulesinit creates the active modules slice out of all loaded modules. When a module is first loaded by the dynamic linker, an .init_array function (written by cmd/link) is invoked to call addmoduledata, appending the module to the linked list that starts with firstmoduledata. There are two times this can happen in the lifecycle of a Go program. First, if compiled with -linkshared, a number of modules built with -buildmode=shared can be loaded at program initialization. Second, a Go program can load a module while running that was built with -buildmode=plugin.
After loading, this function is called which initializes the moduledata so it is usable by the GC and creates a new activeModules list. Only one goroutine may call modulesinit at a time. func morestack() func morestack_noctxt() func morestackc() This is exported as ABI0 via linkname so obj can call it. go:nosplit go:linkname morestackc func mpreinit(mp *m) Called to initialize a new m (including the bootstrap m). Called on the parent thread (main thread in case of bootstrap), can allocate memory. func mput(mp *m) Put mp on midle list. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec func msanfree(addr unsafe.Pointer, sz uintptr) func msanmalloc(addr unsafe.Pointer, sz uintptr) func msanread(addr unsafe.Pointer, sz uintptr) func msanwrite(addr unsafe.Pointer, sz uintptr) func msigrestore(sigmask sigset) msigrestore sets the current thread's signal mask to sigmask. This is used to restore the non-Go signal mask when a non-Go thread calls a Go function. This is nosplit and nowritebarrierrec because it is called by dropm after g has been cleared. go:nosplit go:nowritebarrierrec func msigsave(mp *m) msigsave saves the current thread's signal mask into mp.sigmask. This is used to preserve the non-Go signal mask when a non-Go thread calls a Go function. This is nosplit and nowritebarrierrec because it is called by needm which may be called on a non-Go thread with no g available. go:nosplit go:nowritebarrierrec func mspinning() func mstart() mstart is the entry-point for new Ms. This must not split the stack because we may not even have stack bounds set up yet. May run during STW (because it doesn't have a P yet), so write barriers are not allowed. func mstart1() func mstartm0() mstartm0 implements part of mstart1 that only runs on the m0. Write barriers are allowed here because we know the GC can't be running yet, so they'll be no-ops. func mullu(u, v uint64) (lo, hi uint64) 64x64 -> 128 multiply. 
adapted from hacker's delight. func munmap(addr unsafe.Pointer, n uintptr) func mutexevent(cycles int64, skip int) go:linkname mutexevent sync.event func nanotime() int64 func needm(x byte) needm is called when a cgo callback happens on a thread without an m (a thread not created by Go). In this case, needm is expected to find an m to use and return with m, g initialized correctly. Since m and g are not set now (likely nil, but see below) needm is limited in what routines it can call. In particular it can only call nosplit functions (textflag 7) and cannot do any scheduling that requires an m. In order to avoid needing heavy lifting here, we adopt the following strategy: there is a stack of available m's that can be stolen. Using compare-and-swap to pop from the stack has ABA races, so we simulate a lock by doing an exchange (via Casuintptr) to steal the stack head and replace the top pointer with MLOCKED (1). This serves as a simple spin lock that we can use even without an m. The thread that locks the stack in this way unlocks the stack by storing a valid stack head pointer. In order to make sure that there is always an m structure available to be stolen, we maintain the invariant that there is always one more than needed. At the beginning of the program (if cgo is in use) the list is seeded with a single m. If needm finds that it has taken the last m off the list, its job is - once it has installed its own m so that it can do things like allocate memory - to create a spare m and put it on the list. Each of these extra m's also has a g0 and a curg that are pressed into service as the scheduling stack and current goroutine for the duration of the cgo callback. When the callback is done with the m, it calls dropm to put the m back on the list. 
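The full 64x64 -> 128 multiply that mullu provides internally is available to ordinary Go code as math/bits.Mul64. A small sketch (mul128 is our name, chosen to match mullu's (lo, hi) result order):

```go
package main

import (
	"fmt"
	"math/bits"
)

// mul128 mirrors the shape of the runtime's mullu: a full
// 64x64 -> 128-bit product, returned as (lo, hi).
func mul128(u, v uint64) (lo, hi uint64) {
	hi, lo = bits.Mul64(u, v) // note: bits.Mul64 returns (hi, lo)
	return lo, hi
}

func main() {
	lo, hi := mul128(1<<32, 1<<32) // 2^32 * 2^32 = 2^64
	fmt.Println(lo, hi)            // 0 1
}
```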
go:nosplit func netpollDeadline(arg interface{}, seq uintptr) func netpollReadDeadline(arg interface{}, seq uintptr) func netpollWriteDeadline(arg interface{}, seq uintptr) func netpollarm(pd *pollDesc, mode int) func netpollblock(pd *pollDesc, mode int32, waitio bool) bool returns true if IO is ready, or false if timed out or closed waitio - wait only for completed IO, ignore errors func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool func netpollcheckerr(pd *pollDesc, mode int32) int func netpollclose(fd uintptr) int32 func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool) func netpolldescriptor() uintptr func netpollgoready(gp *g, traceskip int) func netpollinit() func netpollinited() bool func netpollopen(fd uintptr, pd *pollDesc) int32 func netpollready(toRun *gList, pd *pollDesc, mode int32) make pd ready, newly runnable goroutines (if any) are added to toRun. May run during STW, so write barriers are not allowed. go:nowritebarrier func newarray(typ *_type, n int) unsafe.Pointer newarray allocates an array of n elements of type typ. func newextram() newextram allocates m's and puts them on the extra list. It is called with a working local m, so that it can do things like call schedlock and allocate. func newm(fn func(), _p_ *p) Create a new m. It will start off with a call to fn, or else the scheduler. fn needs to be static and not a heap allocated closure. May run with m.p==nil, so write barriers are not allowed. go:nowritebarrierrec func newm1(mp *m) func newobject(typ *_type) unsafe.Pointer implementation of the new builtin. The compiler (both frontend and SSA backend) knows the signature of this function. func newosproc(mp *m) May run with m.p==nil, so write barriers are not allowed. go:nowritebarrier func newosproc0(stacksize uintptr, fn unsafe.Pointer) Version of newosproc that doesn't require a valid G.
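newobject is the runtime entry point behind the new builtin, and newarray sits behind make for a slice's backing array. A trivial illustration (the point type is ours):

```go
package main

import "fmt"

type point struct{ x, y int } // example type, not from the runtime

func main() {
	// new(T) compiles down to a call to runtime.newobject(typ),
	// which returns zeroed memory for one value of type T.
	p := new(point)
	fmt.Println(p.x, p.y) // 0 0

	// make([]T, n) similarly reaches runtime.newarray (via makeslice)
	// for the zeroed backing store.
	s := make([]point, 3)
	fmt.Println(len(s), s[0].x) // 3 0
}
```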
go:nosplit func newproc(siz int32, fn *funcval) func newproc1(fn *funcval, argp *uint8, narg int32, callergp *g, callerpc uintptr) Create a new g running fn with narg bytes of arguments starting at argp. callerpc is the address of the go statement that created this. The new g is put on the queue of g's waiting to run. func newstack() Called from runtime·morestack when more stack is needed. Allocate larger stack and relocate to new stack. Stack growth is multiplicative, for constant amortized cost. g->atomicstatus will be Grunning or Gscanrunning upon entry. If the GC is trying to stop this g then it will set preemptscan to true. This must be nowritebarrierrec because it can be called as part of stack growth from other nowritebarrierrec functions, but the compiler doesn't check this. func nextMarkBitArenaEpoch() nextMarkBitArenaEpoch establishes a new epoch for the arenas holding the mark bits. The arenas are named relative to the current GC cycle which is demarcated by the call to finishweep_m. All current spans have been swept. During that sweep each span allocated room for its gcmarkBits in gcBitsArenas.next block. gcBitsArenas.next becomes the gcBitsArenas.current where the GC will mark objects and after each span is swept these bits will be used to allocate objects. gcBitsArenas.current becomes gcBitsArenas.previous where the span's gcAllocBits live until all the spans have been swept during this GC cycle. The span's sweep extinguishes all the references to gcBitsArenas.previous by pointing gcAllocBits into the gcBitsArenas.current. The gcBitsArenas.previous is released to the gcBitsArenas.free list. func nextSample() uintptr nextSample returns the next sampling point for heap profiling. The goal is to sample allocations on average every MemProfileRate bytes, but with a completely random distribution over the allocation timeline; this corresponds to a Poisson process with parameter MemProfileRate. 
In Poisson processes, the distance between two samples follows the exponential distribution (exp(MemProfileRate)), so the best return value is a random number taken from an exponential distribution whose mean is MemProfileRate. func nextSampleNoFP() uintptr nextSampleNoFP is similar to nextSample, but uses older, simpler code to avoid floating point. func nilfunc() func nilinterequal(p, q unsafe.Pointer) bool func nilinterhash(p unsafe.Pointer, h uintptr) uintptr func noSignalStack(sig uint32) This is called when we receive a signal when there is no signal stack. This can only happen if non-Go code calls sigaltstack to disable the signal stack. func noescape(p unsafe.Pointer) unsafe.Pointer noescape hides a pointer from escape analysis. noescape is the identity function but escape analysis doesn't think the output depends on the input. noescape is inlined and currently compiles down to zero instructions. USE CAREFULLY! go:nosplit func noteclear(n *note) One-time notifications. func notesleep(n *note) func notetsleep(n *note, ns int64) bool func notetsleep_internal(n *note, ns int64) bool May run with m.p==nil if called from notetsleep, so write barriers are not allowed. func notetsleepg(n *note, ns int64) bool same as runtime·notetsleep, but called on user g (not g0) calls only nosplit functions between entersyscallblock/exitsyscall func notewakeup(n *note) func notifyListAdd(l *notifyList) uint32 notifyListAdd adds the caller to a notify list such that it can receive notifications. The caller must eventually call notifyListWait to wait for such a notification, passing the returned ticket number. go:linkname notifyListAdd sync.runtime_notifyListAdd func notifyListCheck(sz uintptr) go:linkname notifyListCheck sync.runtime_notifyListCheck func notifyListNotifyAll(l *notifyList) notifyListNotifyAll notifies all entries in the list. 
go:linkname notifyListNotifyAll sync.runtime_notifyListNotifyAll func notifyListNotifyOne(l *notifyList) notifyListNotifyOne notifies one entry in the list. go:linkname notifyListNotifyOne sync.runtime_notifyListNotifyOne func notifyListWait(l *notifyList, t uint32) notifyListWait waits for a notification. If one has been sent since notifyListAdd was called, it returns immediately. Otherwise, it blocks. go:linkname notifyListWait sync.runtime_notifyListWait func oneNewExtraM() oneNewExtraM allocates an m and puts it on the extra list. func open(name *byte, mode, perm int32) int32 func osRelax(relax bool) osRelax is called by the scheduler when transitioning to and from all Ps being idle. func osStackAlloc(s *mspan) osStackAlloc performs OS-specific initialization before s is used as stack memory. func osStackFree(s *mspan) osStackFree undoes the effect of osStackAlloc before s is returned to the heap. func os_beforeExit() os_beforeExit is called from os.Exit(0). go:linkname os_beforeExit os.runtime_beforeExit func os_runtime_args() []string go:linkname os_runtime_args os.runtime_args func os_sigpipe() go:linkname os_sigpipe os.sigpipe func osinit() func osyield() func overLoadFactor(count int, B uint8) bool overLoadFactor reports whether count items placed in 1<<B buckets is over loadFactor. func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8) pageIndexOf returns the arena, page index, and page mask for pointer p. The caller must ensure p is in the heap. func panicCheck1(pc uintptr, msg string) Check to make sure we can really generate a panic. If the panic was generated from the runtime, or from inside malloc, then convert to a throw of msg. pc should be the program counter of the compiler-generated code that triggered this panic. func panicCheck2(err string) Same as above, but calling from the runtime is allowed. 
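The overLoadFactor check described above corresponds to the map's average load factor of 6.5 entries per bucket. A simplified sketch, assuming the constants from Go's map implementation (bucketCnt = 8, load factor expressed as 13/2):

```go
package main

import "fmt"

const (
	bucketCnt     = 8  // entries per bucket in Go's map
	loadFactorNum = 13 // load factor 6.5, written as 13/2
	loadFactorDen = 2
)

// overLoadFactor reports whether count items placed in 1<<B buckets
// exceed the load factor, which triggers map growth (sketch only).
func overLoadFactor(count int, B uint8) bool {
	return count > bucketCnt &&
		uintptr(count) > loadFactorNum*((uintptr(1)<<B)/loadFactorDen)
}

func main() {
	// With B=3 there are 8 buckets; the threshold is 13*(8/2) = 52.
	fmt.Println(overLoadFactor(100, 3)) // true
	fmt.Println(overLoadFactor(40, 3))  // false
}
```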
Using this function is necessary for any panic that may be generated by runtime.sigpanic, since those are always called by the runtime. func panicIndex(x int, y int) Implemented in assembly, as they take arguments in registers. Declared here to mark them as ABIInternal. func panicIndexU(x uint, y int) func panicSlice3Acap(x int, y int) func panicSlice3AcapU(x uint, y int) func panicSlice3Alen(x int, y int) func panicSlice3AlenU(x uint, y int) func panicSlice3B(x int, y int) func panicSlice3BU(x uint, y int) func panicSlice3C(x int, y int) func panicSlice3CU(x uint, y int) func panicSliceAcap(x int, y int) func panicSliceAcapU(x uint, y int) func panicSliceAlen(x int, y int) func panicSliceAlenU(x uint, y int) func panicSliceB(x int, y int) func panicSliceBU(x uint, y int) func panicdivide() func panicdottypeE(have, want, iface *_type) panicdottypeE is called when doing an e.(T) conversion and the conversion fails. have = the dynamic type we have. want = the static type we're trying to convert to. iface = the static type we're converting from. func panicdottypeI(have *itab, want, iface *_type) panicdottypeI is called when doing an i.(T) conversion and the conversion fails. Same args as panicdottypeE, but "have" is the dynamic itab we have. func panicfloat() func panicmakeslicecap() func panicmakeslicelen() func panicmem() func panicnildottype(want *_type) panicnildottype is called when doing an i.(T) conversion and the interface i is nil. want = the static type we're trying to convert to. func panicoverflow() func panicshift() func panicwrap() panicwrap generates a panic for a call to a wrapped value method with a nil pointer receiver. It is called from the generated wrapper code. func park_m(gp *g) park continuation on g0.
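The panicdottypeE/panicdottypeI paths fire on failed interface conversions; from user code, the comma-ok assertion form avoids them entirely:

```go
package main

import "fmt"

func main() {
	var e interface{} = 42

	// Comma-ok assertion: no panic, just a report.
	s, ok := e.(string)
	fmt.Println(s, ok) // "" false

	// Single-result assertion: a failure reaches the runtime's
	// panicdottypeE, which we can observe with recover.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	_ = e.(string) // panics: interface conversion
}
```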
func parkunlock_c(gp *g, lock unsafe.Pointer) bool func parsedebugvars() func pcdatastart(f funcInfo, table int32) int32 func pcdatavalue(f funcInfo, table int32, targetpc uintptr, cache *pcvalueCache) int32 func pcdatavalue1(f funcInfo, table int32, targetpc uintptr, cache *pcvalueCache, strict bool) int32 func pcvalue(f funcInfo, off int32, targetpc uintptr, cache *pcvalueCache, strict bool) int32 func pcvalueCacheKey(targetpc uintptr) uintptr pcvalueCacheKey returns the outermost index in a pcvalueCache to use for targetpc. It must be very cheap to calculate. For now, align to sys.PtrSize and reduce mod the number of entries. In practice, this appears to be fairly randomly and evenly distributed. func persistentalloc(size, align uintptr, sysStat *uint64) unsafe.Pointer Wrapper around sysAlloc that can allocate small chunks. There is no associated free operation. Intended for things like function/type/debug-related persistent data. If align is 0, uses default align (currently 8). The returned memory will be zeroed. Consider marking persistentalloc'd types go:notinheap. func pidleput(_p_ *p) Put _p_ on the _Pidle list. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec func plugin_lastmoduleinit() (path string, syms map[string]interface{}, errstr string) go:linkname plugin_lastmoduleinit plugin.lastmoduleinit func pluginftabverify(md *moduledata) func pollFractionalWorkerExit() bool pollFractionalWorkerExit reports whether a fractional mark worker should self-preempt. It assumes it is called from the fractional worker. func pollWork() bool pollWork reports whether there is non-background work this P could be doing. This is a fairly lightweight check to be used for background work loops, like idle GC. It checks a subset of the conditions checked by the actual scheduler.
func poll_runtime_Semacquire(addr *uint32) go:linkname poll_runtime_Semacquire internal/poll.runtime_Semacquire func poll_runtime_Semrelease(addr *uint32) go:linkname poll_runtime_Semrelease internal/poll.runtime_Semrelease func poll_runtime_isPollServerDescriptor(fd uintptr) bool poll_runtime_isPollServerDescriptor reports whether fd is a descriptor being used by netpoll. func poll_runtime_pollClose(pd *pollDesc) go:linkname poll_runtime_pollClose internal/poll.runtime_pollClose func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int) go:linkname poll_runtime_pollOpen internal/poll.runtime_pollOpen func poll_runtime_pollReset(pd *pollDesc, mode int) int go:linkname poll_runtime_pollReset internal/poll.runtime_pollReset func poll_runtime_pollServerInit() go:linkname poll_runtime_pollServerInit internal/poll.runtime_pollServerInit func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int) go:linkname poll_runtime_pollSetDeadline internal/poll.runtime_pollSetDeadline func poll_runtime_pollUnblock(pd *pollDesc) go:linkname poll_runtime_pollUnblock internal/poll.runtime_pollUnblock func poll_runtime_pollWait(pd *pollDesc, mode int) int go:linkname poll_runtime_pollWait internal/poll.runtime_pollWait func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int) go:linkname poll_runtime_pollWaitCanceled internal/poll.runtime_pollWaitCanceled func preemptall() bool Tell all goroutines that they have been preempted and they should stop. This function is purely best-effort. It can fail to inform a goroutine if a processor just started running it. No locks need to be held. Returns true if preemption request was issued to at least one goroutine. func preemptone(_p_ *p) bool Tell the goroutine running on processor P to stop. This function is purely best-effort. It can incorrectly fail to inform the goroutine. It can inform the wrong goroutine. Even if it informs the correct goroutine, that goroutine might ignore the request if it is simultaneously executing newstack.
No lock needs to be held. Returns true if preemption request was issued. The actual preemption will happen at some point in the future and will be indicated by the gp->status no longer being Grunning. func prepGoExitFrame(sp uintptr) func prepareFreeWorkbufs() prepareFreeWorkbufs moves busy workbuf spans to free list so they can be freed to the heap. This must only be called when all workbufs are on the empty list. func preprintpanics(p *_panic) Call all Error and String methods before freezing the world. Used when crashing with panicking. func printAncestorTraceback(ancestor ancestorInfo) printAncestorTraceback prints the traceback of the given ancestor. TODO: Unify this with gentraceback and CallersFrames. func printAncestorTracebackFuncInfo(f funcInfo, pc uintptr) printAncestorTracebackFuncInfo prints the given function info at a given pc within an ancestor traceback. The precision of this info is reduced due to only having access to the pcs at the time of the caller goroutine being created. func printCgoTraceback(callers *cgoCallers) printCgoTraceback prints a traceback of callers. func printDebugLog() printDebugLog prints the debug log. func printDebugLogPC(pc uintptr) func printOneCgoTraceback(pc uintptr, max int, arg *cgoSymbolizerArg) int printOneCgoTraceback prints the traceback of a single cgo caller. This can print more than one line because of inlining. Returns the number of frames printed. func printany(i interface{}) printany prints an argument passed to panic. If panic is called with a value that has a String or Error method, it has already been converted into a string by preprintpanics. func printbool(v bool) func printcomplex(c complex128) func printcreatedby(gp *g) func printcreatedby1(f funcInfo, pc uintptr) func printeface(e eface) func printfloat(v float64) func printhex(v uint64) func printiface(i iface) func printint(v int64) func printlock() func printnl() func printpanics(p *_panic) Print all currently active panics. Used when crashing.
Should only be called after preprintpanics. func printpointer(p unsafe.Pointer) func printslice(s []byte) func printsp() func printstring(s string) func printuint(v uint64) func printunlock() func procPin() int func procUnpin() func procyield(cycles uint32) func profilealloc(mp *m, x unsafe.Pointer, size uintptr) func publicationBarrier() publicationBarrier performs a store/store barrier (a "publication" or "export" barrier). Some form of synchronization is required between initializing an object and making that object accessible to another processor. Without synchronization, the initialization writes and the "publication" write may be reordered, allowing the other processor to follow the pointer and observe an uninitialized object. In general, higher-level synchronization should be used, such as locking or an atomic pointer write. publicationBarrier is for when those aren't an option, such as in the implementation of the memory manager. There's no corresponding barrier for the read side because the read side naturally has a data dependency order. All architectures that Go supports or seems likely to ever support automatically enforce data dependency ordering. func purgecachedstats(c *mcache) func putCachedDlogger(l *dlogger) bool func putempty(b *workbuf) putempty puts a workbuf onto the work.empty list. Upon entry this goroutine owns b. The lfstack.push relinquishes ownership. go:nowritebarrier func putfull(b *workbuf) putfull puts the workbuf on the work.full list for the GC. putfull accepts partially full buffers so the GC can avoid competing with the mutators for ownership of partially full buffers.
go:nowritebarrier func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype) func raceReadObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr) func raceWriteObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr) func raceacquire(addr unsafe.Pointer) func raceacquireg(gp *g, addr unsafe.Pointer) func racefingo() func racefini() func racefree(p unsafe.Pointer, sz uintptr) func racegoend() func racegostart(pc uintptr) uintptr func raceinit() (uintptr, uintptr) func racemalloc(p unsafe.Pointer, sz uintptr) func racemapshadow(addr unsafe.Pointer, size uintptr) func raceproccreate() uintptr func raceprocdestroy(ctx uintptr) func racereadpc(addr unsafe.Pointer, callerpc, pc uintptr) func racereadrangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr) func racerelease(addr unsafe.Pointer) func racereleaseg(gp *g, addr unsafe.Pointer) func racereleasemerge(addr unsafe.Pointer) func racereleasemergeg(gp *g, addr unsafe.Pointer) func racesync(c *hchan, sg *sudog) func racewritepc(addr unsafe.Pointer, callerpc, pc uintptr) func racewriterangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr) func raise(sig uint32) func raisebadsignal(sig uint32, c *sigctxt) raisebadsignal is called when a signal is received on a non-Go thread, and the Go program does not want to handle it (that is, the program has not called os/signal.Notify for the signal). func raiseproc(sig uint32) func rawbyteslice(size int) (b []byte) rawbyteslice allocates a new byte slice. The byte slice is not zeroed. func rawruneslice(size int) (b []rune) rawruneslice allocates a new rune slice. The rune slice is not zeroed. func rawstring(size int) (s string, b []byte) rawstring allocates storage for a new string. The returned string and byte slice both refer to the same storage. The storage is not zeroed. Callers should use b to set the string contents and then drop b. 
func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte) func read(fd int32, p unsafe.Pointer, n int32) int32 func readGCStats(pauses *[]uint64) go:linkname readGCStats runtime/debug.readGCStats func readGCStats_m(pauses *[]uint64) readGCStats_m must be called on the system stack because it acquires the heap lock. See mheap for details. go:systemstack func readUnaligned32(p unsafe.Pointer) uint32 Note: These routines perform the read with native endianness. func readUnaligned64(p unsafe.Pointer) uint64 func readgogc() int32 func readgstatus(gp *g) uint32 All reads and writes of g's status go through readgstatus, casgstatus, castogscanstatus, casfrom_Gscanstatus. go:nosplit func readmemstats_m(stats *MemStats) func readvarint(p []byte) (read uint32, val uint32) readvarint reads a varint from p. func ready(gp *g, traceskip int, next bool) Mark gp ready to run. func readyWithTime(s *sudog, traceskip int) func record(r *MemProfileRecord, b *bucket) Write b's data to r. func recordForPanic(b []byte) recordForPanic maintains a circular buffer of messages written by the runtime leading up to a process crash, allowing the messages to be extracted from a core dump. The text written during a process crash (following "panic" or "fatal error") is not saved, since the goroutine stacks will generally be readable from the runtime data structures in the core file. func recordspan(vh unsafe.Pointer, p unsafe.Pointer) recordspan adds a newly allocated span to h.allspans. This only happens the first time a span is allocated from mheap.spanalloc (it is not called when a span is reused). Write barriers are disallowed here because it can be called from gcWork when allocating new workbufs. However, because it's an indirect call from the fixalloc initializer, the compiler can't see this. func recovery(gp *g) Unwind the stack after a deferred function calls recover after a panic. Then arrange to continue running as though the caller of the deferred function returned normally.
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) recv processes a receive operation on a full channel c. There are 2 parts: 1) The value sent by the sender sg is put into the channel and the sender is woken up to go on its merry way. 2) The value received by the receiver (the current G) is written to ep. For synchronous channels, both values are the same. For asynchronous channels, the receiver gets its data from the channel buffer and the sender's data is put in the channel buffer. Channel c must be full and locked. recv unlocks c with unlockf. sg must already be dequeued from c. A non-nil ep must point to the heap or the caller's stack. func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer) func reentersyscall(pc, sp uintptr) The goroutine g is about to enter a system call. Record that it's not using the cpu anymore. This is called only from the go syscall library and cgocall, not from the low-level system calls used by the runtime. Entersyscall cannot split the stack: the gosave must make g->sched refer to the caller's stack segment, because entersyscall is going to return immediately after. Nothing entersyscall calls can split the stack either. We cannot safely move the stack during an active call to syscall, because we do not know which of the uintptr arguments are really pointers (back into the stack). In practice, this means that we make the fast path run through entersyscall doing no-split things, and the slow path has to use systemstack to run bigger things on the system stack. reentersyscall is the entry point used by cgo callbacks, where explicitly saved SP and PC are restored. This is needed when exitsyscall will be called from a function further up in the call stack than the parent, as g->syscallsp must always point to a valid stack frame. entersyscall below is the normal entry point for syscalls, which obtains the SP and PC from the caller. 
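The two-part recv behavior described above for a full buffered channel — the receiver takes the value at the head of the buffer while the blocked sender's value goes into the freed slot — is observable from ordinary Go as strict FIFO ordering:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 1 // fill the buffer

	done := make(chan struct{})
	go func() {
		ch <- 2 // blocks until the receiver frees a slot
		close(done)
	}()

	// Receiving on the full channel yields the buffered 1; the blocked
	// sender's 2 lands in the freed buffer slot, preserving FIFO order.
	fmt.Println(<-ch) // 1
	fmt.Println(<-ch) // 2
	<-done
}
```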
Syscall tracing: At the start of a syscall we emit traceGoSysCall to capture the stack trace. If the syscall does not block, that is it, we do not emit any other events. If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock; when syscall returns we emit traceGoSysExit and when the goroutine starts running (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart. To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock, we remember current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick), whoever emits traceGoSysBlock increments p.syscalltick afterwards; and we wait for the increment before emitting traceGoSysExit. Note that the increment is done even if tracing is not enabled, because tracing can be enabled in the middle of syscall. We don't want the wait to hang. func reflectOffsLock() func reflectOffsUnlock() func reflect_addReflectOff(ptr unsafe.Pointer) int32 reflect_addReflectOff adds a pointer to the reflection offset lookup map. go:linkname reflect_addReflectOff reflect.addReflectOff func reflect_chancap(c *hchan) int go:linkname reflect_chancap reflect.chancap func reflect_chanclose(c *hchan) go:linkname reflect_chanclose reflect.chanclose func reflect_chanlen(c *hchan) int go:linkname reflect_chanlen reflect.chanlen func reflect_chanrecv(c *hchan, nb bool, elem unsafe.Pointer) (selected bool, received bool) go:linkname reflect_chanrecv reflect.chanrecv func reflect_chansend(c *hchan, elem unsafe.Pointer, nb bool) (selected bool) go:linkname reflect_chansend reflect.chansend func reflect_gcbits(x interface{}) []byte gcbits returns the GC type info for x, for testing. The result is the bitmap entries (0 or 1), one entry per byte. 
go:linkname reflect_gcbits reflect.gcbits func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface) go:linkname reflect_ifaceE2I reflect.ifaceE2I func reflect_ismapkey(t *_type) bool go:linkname reflect_ismapkey reflect.ismapkey func reflect_mapaccess(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer go:linkname reflect_mapaccess reflect.mapaccess func reflect_mapassign(t *maptype, h *hmap, key unsafe.Pointer, elem unsafe.Pointer) go:linkname reflect_mapassign reflect.mapassign func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer) go:linkname reflect_mapdelete reflect.mapdelete func reflect_mapiterelem(it *hiter) unsafe.Pointer go:linkname reflect_mapiterelem reflect.mapiterelem func reflect_mapiterkey(it *hiter) unsafe.Pointer go:linkname reflect_mapiterkey reflect.mapiterkey func reflect_mapiternext(it *hiter) go:linkname reflect_mapiternext reflect.mapiternext func reflect_maplen(h *hmap) int go:linkname reflect_maplen reflect.maplen func reflect_memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr) go:linkname reflect_memclrNoHeapPointers reflect.memclrNoHeapPointers func reflect_memmove(to, from unsafe.Pointer, n uintptr) go:linkname reflect_memmove reflect.memmove func reflect_resolveNameOff(ptrInModule unsafe.Pointer, off int32) unsafe.Pointer reflect_resolveNameOff resolves a name offset from a base pointer. go:linkname reflect_resolveNameOff reflect.resolveNameOff func reflect_resolveTextOff(rtype unsafe.Pointer, off int32) unsafe.Pointer reflect_resolveTextOff resolves a function pointer offset from a base type. go:linkname reflect_resolveTextOff reflect.resolveTextOff func reflect_resolveTypeOff(rtype unsafe.Pointer, off int32) unsafe.Pointer reflect_resolveTypeOff resolves an *rtype offset from a base type.
go:linkname reflect_resolveTypeOff reflect.resolveTypeOff func reflect_rselect(cases []runtimeSelect) (int, bool) go:linkname reflect_rselect reflect.rselect func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer) go:linkname reflect_typedmemclr reflect.typedmemclr func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr) go:linkname reflect_typedmemclrpartial reflect.typedmemclrpartial func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer) go:linkname reflect_typedmemmove reflect.typedmemmove func reflect_typedmemmovepartial(typ *_type, dst, src unsafe.Pointer, off, size uintptr) typedmemmovepartial is like typedmemmove but assumes that dst and src point off bytes into the value and only copies size bytes. go:linkname reflect_typedmemmovepartial reflect.typedmemmovepartial func reflect_typedslicecopy(elemType *_type, dst, src slice) int go:linkname reflect_typedslicecopy reflect.typedslicecopy func reflect_typelinks() ([]unsafe.Pointer, [][]int32) go:linkname reflect_typelinks reflect.typelinks func reflect_unsafe_New(typ *_type) unsafe.Pointer go:linkname reflect_unsafe_New reflect.unsafe_New func reflect_unsafe_NewArray(typ *_type, n int) unsafe.Pointer go:linkname reflect_unsafe_NewArray reflect.unsafe_NewArray func reflectcall(argtype *_type, fn, arg unsafe.Pointer, argsize uint32, retoffset uint32) reflectcall calls fn with a copy of the n argument bytes pointed at by arg. After fn returns, reflectcall copies n-retoffset result bytes back into arg+retoffset before returning. If copying result bytes back, the caller should pass the argument frame type as argtype, so that call can execute appropriate write barriers during the copy. Package reflect passes a frame type. In package runtime, there is only one call that copies results back, in cgocallbackg1, and it does NOT pass a frame type, meaning there are no write barriers invoked. See that call site for justification. Package reflect accesses this symbol through a linkname. 
func reflectcallmove(typ *_type, dst, src unsafe.Pointer, size uintptr) reflectcallmove is invoked by reflectcall to copy the return values out of the stack and into the heap, invoking the necessary write barriers. dst, src, and size describe the return value area to copy. typ describes the entire frame (not just the return values). typ may be nil, which indicates write barriers are not needed. It must be nosplit and must only call nosplit functions because the stack map of reflectcall is wrong. func reflectlite_chanlen(c *hchan) int go:linkname reflectlite_chanlen internal/reflectlite.chanlen func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface) go:linkname reflectlite_ifaceE2I internal/reflectlite.ifaceE2I func reflectlite_maplen(h *hmap) int go:linkname reflectlite_maplen internal/reflectlite.maplen func reflectlite_resolveNameOff(ptrInModule unsafe.Pointer, off int32) unsafe.Pointer reflectlite_resolveNameOff resolves a name offset from a base pointer. go:linkname reflectlite_resolveNameOff internal/reflectlite.resolveNameOff func reflectlite_resolveTypeOff(rtype unsafe.Pointer, off int32) unsafe.Pointer reflectlite_resolveTypeOff resolves an *rtype offset from a base type. go:linkname reflectlite_resolveTypeOff internal/reflectlite.resolveTypeOff func reflectlite_typedmemmove(typ *_type, dst, src unsafe.Pointer) go:linkname reflectlite_typedmemmove internal/reflectlite.typedmemmove func reflectlite_unsafe_New(typ *_type) unsafe.Pointer go:linkname reflectlite_unsafe_New internal/reflectlite.unsafe_New func releaseSudog(s *sudog) func releasem(mp *m) func removefinalizer(p unsafe.Pointer) Removes the finalizer (if any) from the object p. func resetspinning() func restartg(gp *g) The GC requests that this routine be moved from a scanmumble state to a mumble state. func restoreGsignalStack(st *gsignalStack) restoreGsignalStack restores the gsignal stack to the value it had before entering the signal handler. 
go:nosplit go:nowritebarrierrec func retake(now int64) uint32 func return0() return0 is a stub used to return 0 from deferproc. It is called at the very end of deferproc to signal the calling Go function that it should not jump to deferreturn. in asm_*.s func rotl_31(x uint64) uint64 Note: in order to get the compiler to issue rotl instructions, we need to constant fold the shift amount by hand. TODO: convince the compiler to issue rotl instructions after inlining. func round(n, a uintptr) uintptr round n up to a multiple of a. a must be a power of 2. func round2(x int32) int32 round x up to a power of 2. func roundupsize(size uintptr) uintptr Returns size of the memory block that mallocgc will allocate if you ask for the size. func rt0_go() func rt_sigaction(sig uintptr, new, old *sigactiont, size uintptr) int32 rt_sigaction is implemented in assembly. go:noescape func rtsigprocmask(how int32, new, old *sigset, size int32) func runGCProg(prog, trailer, dst *byte, size int) uintptr runGCProg executes the GC program prog, and then trailer if non-nil, writing to dst with entries of the given size. If size == 1, dst is a 1-bit pointer mask laid out moving forward from dst. If size == 2, dst is the 2-bit heap bitmap, and writes move backward starting at dst (because the heap bitmap does). In this case, the caller guarantees that only whole bytes in dst need to be written. runGCProg returns the number of 1- or 2-bit entries written to memory. func runSafePointFn() runSafePointFn runs the safe point function, if any, for this P. This should be called like if getg().m.p.runSafePointFn != 0 { runSafePointFn() } runSafePointFn must be checked on any transition in to _Pidle or _Psyscall to avoid a race where forEachP sees that the P is running just before the P goes into _Pidle/_Psyscall and neither forEachP nor the P run the safe-point function. 
func runfinq() This is the goroutine that runs all of the finalizers. func runqempty(_p_ *p) bool runqempty reports whether _p_ has no Gs on its local run queue. It never returns true spuriously. func runqget(_p_ *p) (gp *g, inheritTime bool) Get g from local runnable queue. If inheritTime is true, gp should inherit the remaining time in the current time slice. Otherwise, it should start a new time slice. Executed only by the owner P. func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32 Grabs a batch of goroutines from _p_'s runnable queue into batch. Batch is a ring buffer starting at batchHead. Returns number of grabbed goroutines. Can be executed by any P. func runqput(_p_ *p, gp *g, next bool) runqput tries to put g on the local runnable queue. If next is false, runqput adds g to the tail of the runnable queue. If next is true, runqput puts g in the _p_.runnext slot. If the run queue is full, runqput puts g on the global queue. Executed only by the owner P. func runqputslow(_p_ *p, gp *g, h, t uint32) bool Put g and a batch of work from local runnable queue on global queue. Executed only by the owner P. func runtime_debug_WriteHeapDump(fd uintptr) go:linkname runtime_debug_WriteHeapDump runtime/debug.WriteHeapDump func runtime_debug_freeOSMemory() go:linkname runtime_debug_freeOSMemory runtime/debug.freeOSMemory func runtime_getProfLabel() unsafe.Pointer go:linkname runtime_getProfLabel runtime/pprof.runtime_getProfLabel func runtime_pprof_readProfile() ([]uint64, []unsafe.Pointer, bool) readProfile, provided to runtime/pprof, returns the next chunk of binary CPU profiling stack trace data, blocking until data is available. If profiling is turned off and all the profile data accumulated while it was on has been returned, readProfile returns eof=true. The caller must save the returned data and tags before calling readProfile again.
go:linkname runtime_pprof_readProfile runtime/pprof.readProfile func runtime_pprof_runtime_cyclesPerSecond() int64 go:linkname runtime_pprof_runtime_cyclesPerSecond runtime/pprof.runtime_cyclesPerSecond func runtime_setProfLabel(labels unsafe.Pointer) go:linkname runtime_setProfLabel runtime/pprof.runtime_setProfLabel func save(pc, sp uintptr) save updates getg().sched to refer to pc and sp so that a following gogo will restore pc and sp. save must not have write barriers because invoking a write barrier can clobber getg().sched. func saveAncestors(callergp *g) *[]ancestorInfo saveAncestors copies previous ancestors of the given caller g and includes info for the current caller into a new set of tracebacks for a g being created. func saveblockevent(cycles int64, skip int, which bucketType) func saveg(pc, sp uintptr, gp *g, r *StackRecord) func sbrk0() uintptr func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState) scanblock scans b as scanobject would, but using an explicit pointer bitmap instead of the heap bitmap. This is used to scan non-heap roots, so it does not update gcw.bytesMarked or gcw.scanWork. If stk != nil, possible stack pointers are also reported to stk.putPtr. go:nowritebarrier func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork) Scan a stack frame: local variables and function arguments/results. go:nowritebarrier func scang(gp *g, gcw *gcWork) scang blocks until gp's stack has been scanned. It might be scanned by scang or it might be scanned by the goroutine itself. Either way, the stack scan has completed when scang returns. func scanobject(b uintptr, gcw *gcWork) scanobject scans the object starting at b, adding pointers to gcw. b must point to the beginning of a heap object or an oblet. scanobject consults the GC bitmap for the pointer mask and the spans for the size of the object. func scanstack(gp *g, gcw *gcWork) scanstack scans gp's stack, greying all pointers found on the stack.
scanstack is marked go:systemstack because it must not be preempted while using a workbuf. func scavengeSleep(ns int64) bool Returns false if awoken early (i.e. true means a complete sleep). func schedEnableUser(enable bool) schedEnableUser enables or disables the scheduling of user goroutines. This does not stop already running user goroutines, so the caller should first stop the world when disabling user goroutines. func schedEnabled(gp *g) bool schedEnabled reports whether gp should be scheduled. It returns false if scheduling of gp is disabled. func sched_getaffinity(pid, len uintptr, buf *byte) int32 func schedinit() The bootstrap sequence is: call osinit call schedinit make & queue new G call runtime·mstart The new G calls runtime·main. func schedtrace(detailed bool) func schedule() One round of scheduler: find a runnable goroutine and execute it. Never returns. func selectgo(cas0 *scase, order0 *uint16, ncases int) (int, bool) selectgo implements the select statement. cas0 points to an array of type [ncases]scase, and order0 points to an array of type [2*ncases]uint16. Both reside on the goroutine's stack (regardless of any escaping in selectgo). selectgo returns the index of the chosen scase, which matches the ordinal position of its respective select{recv,send,default} call. Also, if the chosen scase was a receive operation, it reports whether a value was received. func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected bool) The compiler implements select { case v = <-c: ... foo default: ... bar } as: if selectnbrecv(&v, c) { ... foo } else { ... bar } func selectnbrecv2(elem unsafe.Pointer, received *bool, c *hchan) (selected bool) The compiler implements select { case v, ok = <-c: ... foo default: ... bar } as: if c != nil && selectnbrecv2(&v, &ok, c) { ... foo } else { ... bar } func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool) The compiler implements select { case c <- v: ... foo default: ... bar } as: if selectnbsend(c, v) { ... foo } else { ...
bar } func selectsetpc(cas *scase) func sellock(scases []scase, lockorder []uint16) func selparkcommit(gp *g, _ unsafe.Pointer) bool func selunlock(scases []scase, lockorder []uint16) func semacquire(addr *uint32) Called from runtime. func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int) func semrelease(addr *uint32) func semrelease1(addr *uint32, handoff bool, skipframes int) func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) send processes a send operation on an empty channel c. The value ep sent by the sender is copied to the receiver sg. The receiver is then woken up to go on its merry way. Channel c must be empty and locked. send unlocks c with unlockf. sg must already be dequeued from c. ep must be non-nil and point to the heap or the caller's stack. func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) func setGCPercent(in int32) (out int32) go:linkname setGCPercent runtime/debug.setGCPercent func setGCPhase(x uint32) func setGNoWB(gp **g, new *g) setGNoWB performs *gp = new without a write barrier. For times when it's impractical to use a guintptr. go:nosplit go:nowritebarrier func setGsignalStack(st *stackt, old *gsignalStack) setGsignalStack sets the gsignal stack of the current m to an alternate signal stack returned from the sigaltstack system call. It saves the old values in *old for use by restoreGsignalStack. This is used when handling a signal if non-Go code has set the alternate signal stack. go:nosplit go:nowritebarrierrec func setMNoWB(mp **m, new *m) setMNoWB performs *mp = new without a write barrier. For times when it's impractical to use an muintptr. 
go:nosplit go:nowritebarrier func setMaxStack(in int) (out int) go:linkname setMaxStack runtime/debug.setMaxStack func setMaxThreads(in int) (out int) go:linkname setMaxThreads runtime/debug.setMaxThreads func setPanicOnFault(new bool) (old bool) go:linkname setPanicOnFault runtime/debug.setPanicOnFault func setProcessCPUProfiler(hz int32) setProcessCPUProfiler is called when the profiling timer changes. It is called with prof.lock held. hz is the new timer, and is 0 if profiling is being disabled. Enable or disable the signal as required for -buildmode=c-archive. func setSignalstackSP(s *stackt, sp uintptr) setSignalstackSP sets the ss_sp field of a stackt. go:nosplit func setThreadCPUProfiler(hz int32) setThreadCPUProfiler makes any thread-specific changes required to implement profiling at a rate of hz. func setTraceback(level string) go:linkname setTraceback runtime/debug.SetTraceback func setcpuprofilerate(hz int32) setcpuprofilerate sets the CPU profiling rate to hz times per second. If hz <= 0, setcpuprofilerate turns off CPU profiling. func setg(gg *g) func setitimer(mode int32, new, old *itimerval) func setprofilebucket(p unsafe.Pointer, b *bucket) Set the heap profile bucket associated with addr to b. func setsSP(pc uintptr) bool Reports whether a function will set the SP to an absolute value. Important that we don't traceback when these are at the bottom of the stack since we can't be sure that we will find the caller. If the function is not on the bottom of the stack we assume that it will have set it up so that traceback will be consistent, either by being a traceback terminating function or putting one on the stack at the right offset. func setsig(i uint32, fn uintptr) func setsigsegv(pc uintptr) setsigsegv is used on darwin/arm{,64} to fake a segmentation fault. This is exported via linkname to assembly in runtime/cgo. go:nosplit go:linkname setsigsegv func setsigstack(i uint32) func settls() Called from assembly only; declared for go vet.
func shade(b uintptr) Shade the object if it isn't already. The object is not nil and known to be in the heap. Preemption must be disabled. go:nowritebarrier func shouldPushSigpanic(gp *g, pc, lr uintptr) bool shouldPushSigpanic reports whether pc should be used as sigpanic's return PC (pushing a frame for the call). Otherwise, it should be left alone so that LR is used as sigpanic's return PC, effectively replacing the top-most frame with sigpanic. This is used by preparePanic. func showframe(f funcInfo, gp *g, firstFrame bool, funcID, childID funcID) bool showframe reports whether the frame with the given characteristics should be printed during a traceback. func showfuncinfo(f funcInfo, firstFrame bool, funcID, childID funcID) bool showfuncinfo reports whether a function with the given characteristics should be printed during a traceback. func shrinkstack(gp *g) Maybe shrink the stack being used by gp. Called at garbage collection time. gp must be stopped, but the world need not be. func siftdownTimer(t []*timer, i int) bool func siftupTimer(t []*timer, i int) bool func sigInitIgnored(s uint32) sigInitIgnored marks the signal as already ignored. This is called at program start by initsig. In a shared library initsig is called by libpreinit, so the runtime may not be initialized yet. go:nosplit func sigInstallGoHandler(sig uint32) bool func sigNotOnStack(sig uint32) This is called if we receive a signal when there is a signal stack but we are not on it. This can only happen if non-Go code called sigaction without setting the SS_ONSTACK flag. func sigNoteSetup(*note) func sigNoteSleep(*note) func sigNoteWakeup(*note) func sigaction(sig uint32, new, old *sigactiont) func sigaddset(mask *sigset, i int) func sigaltstack(new, old *stackt) func sigblock() sigblock blocks all signals in the current thread's signal mask. This is used to block signals while setting up and tearing down g when a non-Go thread calls a Go function. 
The OS-specific code is expected to define sigset_all. This is nosplit and nowritebarrierrec because it is called by needm which may be called on a non-Go thread with no g available. go:nosplit go:nowritebarrierrec func sigdelset(mask *sigset, i int) func sigdisable(sig uint32) sigdisable disables the Go signal handler for the signal sig. It is only called while holding the os/signal.handlers lock, via os/signal.disableSignal and signal_disable. func sigenable(sig uint32) sigenable enables the Go signal handler to catch the signal sig. It is only called while holding the os/signal.handlers lock, via os/signal.enableSignal and signal_enable. func sigfillset(mask *uint64) func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer) func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool Determines if the signal should be handled by Go and if not, forwards the signal to the handler that was installed before Go's. Returns whether the signal was forwarded. This is called by the signal handler, and the world may be stopped. go:nosplit go:nowritebarrierrec func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g) sighandler is invoked when a signal occurs. The global g will be set to a gsignal goroutine and we will be running on the alternate signal stack. The parameter g will be the value of the global g when the signal occurred. The sig, info, and ctxt parameters are from the system signal handler: they are the parameters passed when the SA is passed to the sigaction system call. The garbage collector may have stopped the world, so write barriers are not allowed. func sigignore(sig uint32) sigignore ignores the signal sig. It is only called while holding the os/signal.handlers lock, via os/signal.ignoreSignal and signal_ignore. func signalDuringFork(sig uint32) signalDuringFork is called if we receive a signal while doing a fork. 
We do not want signals at that time, as a signal sent to the process group may be delivered to the child process, causing confusion. This should never be called, because we block signals across the fork; this function is just a safety check. See issue 18600 for background. func signalWaitUntilIdle() func signal_disable(s uint32) Must only be called from a single goroutine at a time. go:linkname signal_disable os/signal.signal_disable func signal_enable(s uint32) Must only be called from a single goroutine at a time. go:linkname signal_enable os/signal.signal_enable func signal_ignore(s uint32) Must only be called from a single goroutine at a time. go:linkname signal_ignore os/signal.signal_ignore func signal_ignored(s uint32) bool Checked by signal handlers. go:linkname signal_ignored os/signal.signal_ignored func signal_recv() uint32 Called to receive the next queued signal. Must only be called from a single goroutine at a time. go:linkname signal_recv os/signal.signal_recv func signalstack(s *stack) signalstack sets the current thread's alternate signal stack to s. go:nosplit func signame(sig uint32) string func sigpanic() sigpanic turns a synchronous signal into a run-time panic. If the signal handler sees a synchronous panic, it arranges the stack to look like the function where the signal occurred called sigpanic, sets the signal's PC value to sigpanic, and returns from the signal handler. The effect is that the program will act as though the function that got the signal simply called sigpanic instead. This must NOT be nosplit because the linker doesn't know where sigpanic calls can be injected. The signal handler must not inject a call to sigpanic if getg().throwsplit, since sigpanic may need to grow the stack. This is exported via linkname to assembly in runtime/cgo. go:linkname sigpanic func sigpipe() func sigprocmask(how int32, new, old *sigset) func sigprof(pc, sp, lr uintptr, gp *g, mp *m) Called if we receive a SIGPROF signal.
Called by the signal handler, may run during STW. go:nowritebarrierrec func sigprofNonGo() sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread, and the signal handler collected a stack trace in sigprofCallers. When this is called, sigprofCallersUse will be non-zero. g is nil, and what we can do is very limited. go:nosplit go:nowritebarrierrec func sigprofNonGoPC(pc uintptr) sigprofNonGoPC is called when a profiling signal arrived on a non-Go thread and we have a single PC value, not a stack trace. g is nil, and what we can do is very limited. go:nosplit go:nowritebarrierrec func sigreturn() func sigsend(s uint32) bool sigsend delivers a signal from sighandler to the internal signal delivery queue. It reports whether the signal was sent. If not, the caller typically crashes the program. It runs from the signal handler, so it's limited in what it can do. func sigtramp(sig uint32, info *siginfo, ctx unsafe.Pointer) func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer) sigtrampgo is called from the signal handler function, sigtramp, written in assembly code. This is called by the signal handler, and the world may be stopped. It must be nosplit because getg() is still the G that was running (if any) when the signal was delivered, but it's (usually) called on the gsignal stack. Until this switches the G to gsignal, the stack bounds check won't work. func skipPleaseUseCallersFrames() This function is defined in asm.s to be sizeofSkipFunction bytes long. func slicebytetostring(buf *tmpBuf, b []byte) (str string) Buf is a fixed-size buffer for the result, it is not nil if the result does not escape. func slicebytetostringtmp(b []byte) string slicebytetostringtmp returns a "string" referring to the actual []byte bytes. Callers need to ensure that the returned string will not be used after the calling goroutine modifies the original slice or synchronizes with another goroutine. 
The function is only called when instrumenting and otherwise intrinsified by the compiler. Some internal compiler optimizations use this function. - Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)] where k is []byte, T1 to Tn is a nesting of struct and array literals. - Used for "<"+string(b)+">" concatenation where b is []byte. - Used for string(b)=="foo" comparison where b is []byte. func slicecopy(to, fm slice, width uintptr) int func slicerunetostring(buf *tmpBuf, a []rune) string func slicestringcopy(to []byte, fm string) int func socket(domain int32, typ int32, prot int32) int32 func stackcache_clear(c *mcache) func stackcacherefill(c *mcache, order uint8) stackcacherefill/stackcacherelease implement a global pool of stack segments. The pool is required to prevent unlimited growth of per-thread caches. func stackcacherelease(c *mcache, order uint8) func stackcheck() stackcheck checks that SP is in range [g->stack.lo, g->stack.hi). func stackfree(stk stack) stackfree frees an n byte stack allocation at stk. stackfree must run on the system stack because it uses per-P resources and must not split the stack. func stackinit() func stacklog2(n uintptr) int stacklog2 returns ⌊log_2(n)⌋. func stackpoolfree(x gclinkptr, order uint8) Adds stack x to the free pool. Must be called with stackpoolmu held. func startTemplateThread() startTemplateThread starts the template thread if it is not already running. The calling thread must itself be in a known-good state. func startTheWorld() startTheWorld undoes the effects of stopTheWorld. func startTheWorldWithSema(emitTraceEvent bool) int64 func startTimer(t *timer) startTimer adds t to the timer heap. go:linkname startTimer time.startTimer func startlockedm(gp *g) Schedules the locked m to run the locked gp. May run during STW, so write barriers are not allowed. go:nowritebarrierrec func startm(_p_ *p, spinning bool) Schedules some M to run the p (creates an M if necessary). 
If p==nil, startm tries to get an idle P; if there are no idle P's it does nothing. May run with m.p==nil, so write barriers are not allowed. If spinning is set, the caller has incremented nmspinning and startm will either decrement nmspinning or set m.spinning in the newly started M. go:nowritebarrierrec func startpanic_m() bool startpanic_m prepares for an unrecoverable panic. It returns true if panic messages should be printed, or false if the runtime is in bad shape and should just print stacks. It must not have write barriers even though the write barrier explicitly ignores writes once dying > 0. Write barriers still assume that g.m.p != nil, and this function may not have P in some contexts (e.g. a panic in a signal handler for a signal sent to an M with no P). func step(p []byte, pc *uintptr, val *int32, first bool) (newp []byte, ok bool) step advances to the next pc, value pair in the encoded table. func stopTheWorld(reason string) stopTheWorld stops all P's from executing goroutines, interrupting all goroutines at GC safe points and records reason as the reason for the stop. On return, only the current goroutine's P is running. stopTheWorld must not be called from a system stack and the caller must not hold worldsema. The caller must call startTheWorld when other P's should resume execution. stopTheWorld is safe for multiple goroutines to call at the same time. Each will execute its own stop, and the stops will be serialized. This is also used by routines that do stack dumps. If the system is in panic or being exited, this may not reliably stop all goroutines. func stopTheWorldWithSema() stopTheWorldWithSema is the core implementation of stopTheWorld.
The caller is responsible for acquiring worldsema and disabling preemption first and then should call stopTheWorldWithSema on the system stack: semacquire(&worldsema, 0) m.preemptoff = "reason" systemstack(stopTheWorldWithSema) When finished, the caller must either call startTheWorld or undo these three operations separately: m.preemptoff = "" systemstack(startTheWorldWithSema) semrelease(&worldsema) It is allowed to acquire worldsema once and then execute multiple startTheWorldWithSema/stopTheWorldWithSema pairs. Other P's are able to execute between successive calls to startTheWorldWithSema and stopTheWorldWithSema. Holding worldsema causes any other goroutines invoking stopTheWorld to block. func stopTimer(t *timer) bool stopTimer removes t from the timer heap if it is there. It returns true if t was removed, false if t wasn't even there. go:linkname stopTimer time.stopTimer func stoplockedm() Stops execution of the current m that is locked to a g until the g is runnable again. Returns with acquired P. func stopm() Stops execution of the current m until new work is available. Returns with acquired P. func strequal(p, q unsafe.Pointer) bool func strhash(a unsafe.Pointer, h uintptr) uintptr func stringDataOnStack(s string) bool stringDataOnStack reports whether the string's data is stored on the current goroutine's stack. func stringHash(s string, seed uintptr) uintptr Testing adapters for hash quality tests (see hash_test.go) func stringtoslicebyte(buf *tmpBuf, s string) []byte func stringtoslicerune(buf *[tmpStringBufSize]rune, s string) []rune func subtract1(p *byte) *byte subtract1 returns the byte pointer p-1. go:nowritebarrier nosplit because it is used during write barriers and must not be preempted. go:nosplit func subtractb(p *byte, n uintptr) *byte subtractb returns the byte pointer p-n.
go:nowritebarrier go:nosplit func sweepone() uintptr sweepone sweeps some unswept heap span and returns the number of pages returned to the heap, or ^uintptr(0) if there was nothing to sweep. func sync_atomic_CompareAndSwapPointer(ptr *unsafe.Pointer, old, new unsafe.Pointer) bool go:linkname sync_atomic_CompareAndSwapPointer sync/atomic.CompareAndSwapPointer go:nosplit func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool go:linkname sync_atomic_CompareAndSwapUintptr sync/atomic.CompareAndSwapUintptr func sync_atomic_StorePointer(ptr *unsafe.Pointer, new unsafe.Pointer) go:linkname sync_atomic_StorePointer sync/atomic.StorePointer go:nosplit func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr) go:linkname sync_atomic_StoreUintptr sync/atomic.StoreUintptr func sync_atomic_SwapPointer(ptr *unsafe.Pointer, new unsafe.Pointer) unsafe.Pointer go:linkname sync_atomic_SwapPointer sync/atomic.SwapPointer go:nosplit func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr go:linkname sync_atomic_SwapUintptr sync/atomic.SwapUintptr func sync_atomic_runtime_procPin() int go:linkname sync_atomic_runtime_procPin sync/atomic.runtime_procPin go:nosplit func sync_atomic_runtime_procUnpin() go:linkname sync_atomic_runtime_procUnpin sync/atomic.runtime_procUnpin go:nosplit func sync_fastrand() uint32 go:linkname sync_fastrand sync.fastrand func sync_nanotime() int64 go:linkname sync_nanotime sync.runtime_nanotime func sync_runtime_Semacquire(addr *uint32) go:linkname sync_runtime_Semacquire sync.runtime_Semacquire func sync_runtime_SemacquireMutex(addr *uint32, lifo bool, skipframes int) go:linkname sync_runtime_SemacquireMutex sync.runtime_SemacquireMutex func sync_runtime_Semrelease(addr *uint32, handoff bool, skipframes int) go:linkname sync_runtime_Semrelease sync.runtime_Semrelease func sync_runtime_canSpin(i int) bool Active spinning for sync.Mutex. 
go:linkname sync_runtime_canSpin sync.runtime_canSpin go:nosplit func sync_runtime_doSpin() go:linkname sync_runtime_doSpin sync.runtime_doSpin go:nosplit func sync_runtime_procPin() int go:linkname sync_runtime_procPin sync.runtime_procPin go:nosplit func sync_runtime_procUnpin() go:linkname sync_runtime_procUnpin sync.runtime_procUnpin go:nosplit func sync_runtime_registerPoolCleanup(f func()) go:linkname sync_runtime_registerPoolCleanup sync.runtime_registerPoolCleanup func sync_throw(s string) go:linkname sync_throw sync.throw func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr syncadjustsudogs adjusts gp's sudogs and copies the part of gp's stack they refer to while synchronizing with concurrent channel operations. It returns the number of bytes of stack copied. func sysAlloc(n uintptr, sysStat *uint64) unsafe.Pointer Don't split the stack as this method may be invoked without a valid G, which prevents us from allocating more stack. go:nosplit func sysFault(v unsafe.Pointer, n uintptr) func sysFree(v unsafe.Pointer, n uintptr, sysStat *uint64) Don't split the stack as this function may be invoked without a valid G, which prevents us from allocating more stack. go:nosplit func sysHugePage(v unsafe.Pointer, n uintptr) func sysMap(v unsafe.Pointer, n uintptr, sysStat *uint64) func sysMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (p unsafe.Pointer, err int) sysMmap calls the mmap system call. It is implemented in assembly. func sysMunmap(addr unsafe.Pointer, n uintptr) sysMunmap calls the munmap system call. It is implemented in assembly. func sysReserve(v unsafe.Pointer, n uintptr) unsafe.Pointer func sysReserveAligned(v unsafe.Pointer, size, align uintptr) (unsafe.Pointer, uintptr) sysReserveAligned is like sysReserve, but the returned pointer is aligned to align bytes. It may reserve either n or n+align bytes, so it returns the size that was reserved. 
func sysSigaction(sig uint32, new, old *sigactiont) sysSigaction calls the rt_sigaction system call. go:nosplit func sysUnused(v unsafe.Pointer, n uintptr) func sysUsed(v unsafe.Pointer, n uintptr) func sysargs(argc int32, argv **byte) func sysauxv(auxv []uintptr) int func syscall_Exit(code int) go:linkname syscall_Exit syscall.Exit go:nosplit func syscall_Getpagesize() int go:linkname syscall_Getpagesize syscall.Getpagesize func syscall_runtime_AfterExec() Called from syscall package after Exec. go:linkname syscall_runtime_AfterExec syscall.runtime_AfterExec func syscall_runtime_AfterFork() Called from syscall package after fork in parent. go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork go:nosplit func syscall_runtime_AfterForkInChild() Called from syscall package after fork in child. It resets non-sigignored signals to the default handler, and restores the signal mask in preparation for the exec. Because this might be called during a vfork, and therefore may be temporarily sharing address space with the parent process, this must not change any global variables or call into C code that may do so. go:linkname syscall_runtime_AfterForkInChild syscall.runtime_AfterForkInChild go:nosplit go:nowritebarrierrec func syscall_runtime_BeforeExec() Called from syscall package before Exec. go:linkname syscall_runtime_BeforeExec syscall.runtime_BeforeExec func syscall_runtime_BeforeFork() Called from syscall package before fork. go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork go:nosplit func syscall_runtime_envs() []string go:linkname syscall_runtime_envs syscall.runtime_envs func syscall_setenv_c(k string, v string) Update the C environment if cgo is loaded. Called from syscall.Setenv. go:linkname syscall_setenv_c syscall.setenv_c func syscall_unsetenv_c(k string) Update the C environment if cgo is loaded. Called from syscall.Unsetenv.
go:linkname syscall_unsetenv_c syscall.unsetenv_c func sysmon() Always runs without a P, so write barriers are not allowed. func systemstack(fn func()) systemstack runs fn on a system stack. If systemstack is called from the per-OS-thread (g0) stack, or if systemstack is called from the signal handling (gsignal) stack, systemstack calls fn directly and returns. Otherwise, systemstack is being called from the limited stack of an ordinary goroutine. In this case, systemstack switches to the per-OS-thread stack, calls fn, and switches back. It is common to use a func literal as the argument, in order to share inputs and outputs with the code around the call to system stack: ... set up y ... systemstack(func() { x = bigcall(y) }) ... use x ... func systemstack_switch() func templateThread() templateThread is a thread in a known-good state that exists solely to start new threads in known-good states when the calling thread may not be in a good state. Many programs never need this, so templateThread is started lazily when we first enter a state that might lead to running on a thread in an unknown state. templateThread runs on an M without a P, so it must not have write barriers. func testAtomic64() func testdefersizes() Ensure that defer arg sizes that map to the same defer size class also map to the same malloc size class. func throw(s string) func tickspersecond() int64 Note: Called by runtime/pprof in addition to runtime code. func timeSleep(ns int64) timeSleep puts the current goroutine to sleep for at least ns nanoseconds. go:linkname timeSleep time.Sleep func timeSleepUntil() int64 func time_now() (sec int64, nsec int32, mono int64) go:linkname time_now time.now func timediv(v int64, div int32, rem *int32) int32 Poor man's 64-bit division. This is a very special function, do not use it if you are not sure what you are doing. int64 division is lowered into _divv() call on 386, which does not fit into nosplit functions. Handles overflow in a time-specific manner.
This keeps us within no-split stack limits on 32-bit processors. go:nosplit func timerproc(tb *timersBucket) Timerproc runs the time-driven events. It sleeps until the next event in the tb heap. If addtimer inserts a new earlier event, it wakes timerproc early. func tooManyOverflowBuckets(noverflow uint16, B uint8) bool tooManyOverflowBuckets reports whether noverflow buckets is too many for a map with 1<<B buckets. Note that most of these overflow buckets must be in sparse use; if use was dense, then we'd have already triggered regular map growth. func tophash(hash uintptr) uint8 tophash calculates the tophash value for hash. func topofstack(f funcInfo, g0 bool) bool Does f mark the top of a goroutine stack? func totaldefersize(siz uintptr) uintptr total size of memory block for defer with arg size sz func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr) traceAcquireBuffer returns trace buffer to use and, if necessary, locks it. func traceAppend(buf []byte, v uint64) []byte traceAppend appends v to buf in little-endian-base-128 encoding. func traceEvent(ev byte, skip int, args ...uint64) traceEvent writes a single event to trace buffer, flushing the buffer if necessary. ev is event type. If skip > 0, write current stack id as the last argument (skipping skip top frames). If skip = 0, this event type should contain a stack, but we don't want to collect and remember it for this particular call. func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, skip int, args ...uint64) func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr) traceFrameForPC records the frame information. It may allocate memory. func traceFullQueue(buf traceBufPtr) traceFullQueue queues buf into queue of full buffers. 
func traceGCDone() func traceGCMarkAssistDone() func traceGCMarkAssistStart() func traceGCSTWDone() func traceGCSTWStart(kind int) func traceGCStart() func traceGCSweepDone() func traceGCSweepSpan(bytesSwept uintptr) traceGCSweepSpan traces the sweep of a single page. This may be called outside a traceGCSweepStart/traceGCSweepDone pair; however, it will not emit any trace events in this case. func traceGCSweepStart() traceGCSweepStart prepares to trace a sweep loop. This does not emit any events until traceGCSweepSpan is called. traceGCSweepStart must be paired with traceGCSweepDone and there must be no preemption points between these two calls. func traceGoCreate(newg *g, pc uintptr) func traceGoEnd() func traceGoPark(traceEv byte, skip int) func traceGoPreempt() func traceGoSched() func traceGoStart() func traceGoSysBlock(pp *p) func traceGoSysCall() func traceGoSysExit(ts int64) func traceGoUnpark(gp *g, skip int) func traceGomaxprocs(procs int32) func traceHeapAlloc() func traceNextGC() func traceProcFree(pp *p) traceProcFree frees trace buffer associated with pp. func traceProcStart() func traceProcStop(pp *p) func traceReleaseBuffer(pid int32) traceReleaseBuffer releases a buffer previously acquired with traceAcquireBuffer. func traceStackID(mp *m, buf []uintptr, skip int) uint64 func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr) traceString adds a string to the trace.strings and returns the id. 
func trace_userLog(id uint64, category, message string) go:linkname trace_userLog runtime/trace.userLog func trace_userRegion(id, mode uint64, name string) go:linkname trace_userRegion runtime/trace.userRegion func trace_userTaskCreate(id, parentID uint64, taskType string) go:linkname trace_userTaskCreate runtime/trace.userTaskCreate func trace_userTaskEnd(id uint64) go:linkname trace_userTaskEnd runtime/trace.userTaskEnd func tracealloc(p unsafe.Pointer, size uintptr, typ *_type) func traceback(pc, sp, lr uintptr, gp *g) func traceback1(pc, sp, lr uintptr, gp *g, flags uint) func tracebackCgoContext(pcbuf *uintptr, printing bool, ctxt uintptr, n, max int) int tracebackCgoContext handles tracing back a cgo context value, from the context argument to setCgoTraceback, for the gentraceback function. It returns the new value of n. func tracebackHexdump(stk stack, frame *stkframe, bad uintptr) tracebackHexdump hexdumps part of stk around frame.sp and frame.fp for debugging purposes. If the address bad is included in the hexdumped range, it will mark it as well. func tracebackdefers(gp *g, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer) Traceback over the deferred function calls. Report them like calls that have been invoked but not started executing yet. func tracebackinit() func tracebackothers(me *g) func tracebacktrap(pc, sp, lr uintptr, gp *g) tracebacktrap is like traceback but expects that the PC and SP were obtained from a trap, not from gp->sched or gp->syscallpc/gp->syscallsp or getcallerpc/getcallersp. Because they are from a trap instead of from a saved pair, the initial PC must not be rewound to the previous instruction. (All the saved pairs record a PC that is a return address, so we rewind it into the CALL instruction.) If gp.m.libcall{g,pc,sp} information is available, it uses that information in preference to the pc/sp/lr passed in. 
func tracefree(p unsafe.Pointer, size uintptr) func tracegc() func typeBitsBulkBarrier(typ *_type, dst, src, size uintptr) typeBitsBulkBarrier executes a write barrier for every pointer that would be copied from [src, src+size) to [dst, dst+size) by a memmove using the type bitmap to locate those pointer slots. The type typ must correspond exactly to [src, src+size) and [dst, dst+size). dst, src, and size must be pointer-aligned. The type typ must have a plain bitmap, not a GC program. The only use of this function is in channel sends, and the 64 kB channel element limit takes care of this for us. Must not be preempted because it typically runs right before memmove, and the GC must observe them as an atomic action. func typedmemclr(typ *_type, ptr unsafe.Pointer) typedmemclr clears the typed memory at ptr with type typ. The memory at ptr must already be initialized (and hence in type-safe state). If the memory is being initialized for the first time, see memclrNoHeapPointers. If the caller knows that typ has pointers, it can alternatively call memclrHasPointers. func typedmemmove(typ *_type, dst, src unsafe.Pointer) typedmemmove copies a value of type t to dst from src. Must be nosplit, see #16026. TODO: Perfect for go:nosplitrec since we can't have a safe point anywhere in the bulk barrier or memmove. func typedslicecopy(typ *_type, dst, src slice) int func typelinksinit() typelinksinit scans the types from extra modules and builds the moduledata typemap used to de-duplicate type pointers. func typesEqual(t, v *_type, seen map[_typePair]struct{}) bool typesEqual reports whether two types are equal. Everywhere in the runtime and reflect packages, it is assumed that there is exactly one *_type per Go type, so that pointer equality can be used to test if types are equal. There is one place that breaks this assumption: buildmode=shared. In this case a type can appear as two different pieces of memory. 
This is hidden from the runtime and reflect package by the per-module typemap built in typelinksinit. It uses typesEqual to map types from later modules back into earlier ones. Only typelinksinit needs this function. func typestring(x interface{}) string func unblocksig(sig uint32) unblocksig removes sig from the current thread's signal mask. This is nosplit and nowritebarrierrec because it is called from dieFromSignal, which can be called by sigfwdgo while running in the signal handler, on the signal stack, with no g available. go:nosplit go:nowritebarrierrec func unlock(l *mutex) func unlockOSThread() func unlockextra(mp *m) func unminit() Called from dropm to undo the effect of an minit. go:nosplit func unminitSignals() unminitSignals is called from dropm, via unminit, to undo the effect of calling minit on a non-Go thread. go:nosplit func unwindm(restore *bool) func updatememstats() func usleep(usec uint32) func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32 func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr) func vdsoParseSymbols(info *vdsoInfo, version int32) func vdsoauxv(tag, val uintptr) func wakeScavenger() wakeScavenger unparks the scavenger if necessary. It must be called after any pacing update. mheap_.lock and scavenge.lock must not be held. func wakep() Tries to add one more P to execute G's. Called when a G is made runnable (newproc, ready). func walltime() (sec int64, nsec int32) func wbBufFlush(dst *uintptr, src uintptr) wbBufFlush flushes the current P's write barrier buffer to the GC workbufs. It is passed the slot and value of the write barrier that caused the flush so that it can implement cgocheck. This must not have write barriers because it is part of the write barrier implementation. This and everything it calls must be nosplit because 1) the stack contains untyped slots from gcWriteBarrier and 2) there must not be a GC safe point between the write barrier test in the caller and flushing the buffer. 
TODO: A "go:nosplitrec" annotation would be perfect for this. go:nowritebarrierrec go:nosplit func wbBufFlush1(_p_ *p) wbBufFlush1 flushes p's write barrier buffer to the GC work queue. This must not have write barriers because it is part of the write barrier implementation, so this may lead to infinite loops or buffer corruption. This must be non-preemptible because it uses the P's workbuf. go:nowritebarrierrec go:systemstack func wbBufFlush1Debug(old, buf1, buf2 uintptr, start *uintptr, next uintptr) wbBufFlush1Debug is a temporary function for debugging issue #27993. It exists solely to add some context to the traceback. go:nowritebarrierrec go:systemstack go:noinline func wirep(_p_ *p) wirep is the first step of acquirep, which actually associates the current M to _p_. This is broken out so we can disallow write barriers for this part, since we don't yet have a P. func write(fd uintptr, p unsafe.Pointer, n int32) int32 func writeErr(b []byte) func writeheapdump_m(fd uintptr) BlockProfileRecord describes blocking events originated at a particular call sequence (stack trace). type BlockProfileRecord struct { Count int64 Cycles int64 StackRecord } The Error interface identifies a run time error. type Error interface { error // RuntimeError is a no-op function but // serves to distinguish types that are run time // errors from ordinary errors: a type is a // run time error if it has a RuntimeError method. RuntimeError() } Frame is the information returned by Frames for each call frame. type Frame struct { // PC is the program counter for the location in this frame. // For a frame that calls another frame, this will be the // program counter of a call instruction. Because of inlining, // multiple frames may have the same PC value, but different // symbolic information. PC uintptr // Func is the Func value of this call frame. This may be nil // for non-Go code or fully inlined functions. 
Func *Func // Function is the package path-qualified function name of // this call frame. If non-empty, this string uniquely // identifies a single function in the program. // This may be the empty string if not known. // If Func is not nil then Function == Func.Name(). Function string // File and Line are the file name and line number of the // location in this frame. For non-leaf frames, this will be // the location of a call. These may be the empty string and // zero, respectively, if not known. File string Line int // Entry point program counter for the function; may be zero // if not known. If Func is not nil then Entry == // Func.Entry(). Entry uintptr // The runtime's internal view of the function. This field // is set (funcInfo.valid() returns true) only for Go functions, // not for C functions. funcInfo funcInfo } func allFrames(pcs []uintptr) []Frame allFrames returns all of the Frames corresponding to pcs. func expandCgoFrames(pc uintptr) []Frame expandCgoFrames expands frame information for pc, known to be a non-Go function, using the cgoSymbolizer hook. expandCgoFrames returns nil if pc could not be expanded. Frames may be used to get function/file/line information for a slice of PC values returned by Callers. type Frames struct { // callers is a slice of PCs that have not yet been expanded to frames. callers []uintptr // frames is a slice of Frames that have yet to be returned. frames []Frame frameStore [2]Frame } func CallersFrames(callers []uintptr) *Frames CallersFrames takes a slice of PC values returned by Callers and prepares to return function/file/line information. Do not change the slice until you are done with the Frames.
func (ci *Frames) Next() (frame Frame, more bool) Next returns frame information for the next caller. If more is false, there are no more callers (the Frame value is valid). A Func represents a Go function in the running binary. type Func struct { opaque struct{} // unexported field to disallow conversions } func FuncForPC(pc uintptr) *Func FuncForPC returns a *Func describing the function that contains the given program counter address, or else nil. If pc represents multiple functions because of inlining, it returns a *Func describing the innermost function, but with an entry of the outermost function. func (f *Func) funcInfo() funcInfo func (f *Func) raw() *_func A MemStats records statistics about the memory allocator. type MemStats struct { // (listing truncated in extraction) // OtherSys is bytes of memory in miscellaneous off-heap // runtime allocations. OtherSys uint64 // Go 1.2 // NumGC is the number of completed GC cycles. NumGC uint32 // NumForcedGC is the number of GC cycles that were forced by // the application calling the GC function. NumForcedGC uint32 // Go 1.8 } A StackRecord describes a single execution stack. type StackRecord struct { Stack0 [32]uintptr // stack trace for this record; ends at first 0 entry } func (r *StackRecord) Stack() []uintptr A TypeAssertionError explains a failed type assertion. type TypeAssertionError struct { _interface *_type concrete *_type asserted *_type missingMethod string // one method needed by Interface, missing from Concrete } func (e *TypeAssertionError) Error() string func (*TypeAssertionError) RuntimeError() A _defer holds an entry on the list of deferred calls. If you add a field here, add code to clear it in freedefer. type _defer struct { siz int32 started bool sp uintptr // sp at time of defer pc uintptr fn *funcval _panic *_panic // panic that is running defer link *_defer } func newdefer(siz int32) *_defer Allocate a Defer, usually using per-P pool. Each defer must be released with freedefer. This must not grow the stack because there may be a frame without stack map information when this is called.
Layout of in-memory per-function information prepared by linker. See https://golang.org/s/go12symtab. type _func struct { entry uintptr // start pc nameoff int32 // function name args int32 // in/out args size deferreturn uint32 // offset of a deferreturn block from entry, if any. pcsp int32 pcfile int32 pcln int32 npcdata int32 funcID funcID // set for certain special runtime functions _ [2]int8 // unused nfuncdata uint8 // must be last } A _panic holds information about an active panic. This is marked go:notinheap because _panic values must only ever live on the stack. The argp and link fields are stack pointers, but don't need special handling during stack growth: because they are pointer-typed and _panic values only live on the stack, regular stack pointer adjustment takes care of them. go:notinheap type _panic struct { argp unsafe.Pointer // pointer to arguments of deferred call run during panic; cannot move - known to liblink arg interface{} // argument to panic link *_panic // link to earlier panic recovered bool // whether this panic is over aborted bool // the panic was aborted } Needs to be in sync with ../cmd/link/internal/ld/decodesym.go:/^func.commonsize, ../cmd/compile/internal/gc/reflect.go:/^func.dcommontype and ../reflect/type.go:/^type.rtype. } var deferType *_type // type of _defer struct func resolveTypeOff(ptrInModule unsafe.Pointer, off typeOff) *_type func (t *_type) name() string func (t *_type) nameOff(off nameOff) name func (t *_type) pkgpath() string pkgpath returns the path of the package where t was defined, if available. This is not the same as the reflect package's PkgPath method, in that it returns the package path for struct and interface types, not just named types. func (t *_type) string() string func (t *_type) textOff(off textOff) unsafe.Pointer func (t *_type) typeOff(off typeOff) *_type func (t *_type) uncommon() *uncommontype type _typePair struct { t1 *_type t2 *_type } type adjustinfo struct { old stack delta uintptr // ptr distance from old to new stack (newbase - oldbase) cache pcvalueCache // sghi is the highest sudog.elem on the stack. sghi uintptr } ancestorInfo records details of where a goroutine was started.
type ancestorInfo struct { pcs []uintptr // pcs from the stack of this goroutine goid int64 // goroutine id of this goroutine; original goroutine possibly dead gopc uintptr // pc of go statement that created this goroutine } arenaHint is a hint for where to grow the heap arenas. See mheap_.arenaHints. type arenaHint struct { addr uintptr down bool next *arenaHint } type arenaIdx uint func arenaIndex(p uintptr) arenaIdx arenaIndex returns the index into mheap_.arenas of the arena containing metadata for p. This index combines an index into the L1 map with an index into the L2 map and should be used as mheap_.arenas[ai.l1()][ai.l2()]. If p is outside the range of valid heap addresses, either l1() or l2() will be out of bounds. It is nosplit because it's called by spanOf and several other nosplit functions. func (i arenaIdx) l1() uint func (i arenaIdx) l2() uint type arraytype struct { typ _type elem *_type slice *_type len uintptr } Information from the compiler about the layout of stack frames. type bitvector struct { n int32 // # of bits bytedata *uint8 } func makeheapobjbv(p uintptr, size uintptr) bitvector func progToPointerMask(prog *byte, size uintptr) bitvector progToPointerMask returns the 1-bit pointer mask output by the GC program prog. size is the size of the region described by prog, in bytes. The resulting bitvector will have no more than size/sys.PtrSize bits. func stackmapdata(stkmap *stackmap, n int32) bitvector func (bv *bitvector) ptrbit(i uintptr) uint8 ptrbit returns the i'th bit in bv. ptrbit is less efficient than iterating directly over bitvector bits, and should only be used in non-performance-critical code. See adjustpointers for an example of a high-efficiency walk of a bitvector. A blockRecord is the bucket data for a bucket of type blockProfile, which is used in blocking and mutex profiles. type blockRecord struct { count int64 cycles int64 } A bucket for a Go map.
type bmap struct { // tophash generally contains the top byte of the hash value // for each key in this bucket. If tophash[0] < minTopHash, // tophash[0] is a bucket evacuation state instead. tophash [bucketCnt]uint8 } func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap) makeBucketArray initializes a backing array for map buckets. 1<<b is the minimum number of buckets to allocate. dirtyalloc should either be nil or a bucket array previously allocated by makeBucketArray with the same t and b parameters. If dirtyalloc is nil a new backing array will be allocated; otherwise dirtyalloc will be cleared and reused as the backing array. func (b *bmap) keys() unsafe.Pointer func (b *bmap) overflow(t *maptype) *bmap func (b *bmap) setoverflow(t *maptype, ovf *bmap) A boundsError represents an indexing or slicing operation gone wrong. type boundsError struct { x int64 y int // Values in an index or slice expression can be signed or unsigned. // That means we'd need 65 bits to encode all possible indexes, from -2^63 to 2^64-1. // Instead, we keep track of whether x should be interpreted as signed or unsigned. // y is known to be nonnegative and to fit in an int. signed bool code boundsErrorCode } func (e boundsError) Error() string func (e boundsError) RuntimeError() type boundsErrorCode uint8 const ( boundsIndex boundsErrorCode = iota // s[x], 0 <= x < len(s) failed boundsSliceAlen // s[?:x], 0 <= x <= len(s) failed boundsSliceAcap // s[?:x], 0 <= x <= cap(s) failed boundsSliceB // s[x:y], 0 <= x <= y failed (but boundsSliceA didn't happen) boundsSlice3Alen // s[?:?:x], 0 <= x <= len(s) failed boundsSlice3Acap // s[?:?:x], 0 <= x <= cap(s) failed boundsSlice3B // s[?:x:y], 0 <= x <= y failed (but boundsSlice3A didn't happen) boundsSlice3C // s[x:y:?], 0 <= x <= y failed (but boundsSlice3A/B didn't happen) ) A bucket holds per-call-stack profiling information. The representation is a bit sleazy, inherited from C.
This struct defines the bucket header. It is followed in memory by the stack words and then the actual record data, either a memRecord or a blockRecord. Per-call-stack profiling information. Lookup by hashing call stack into a linked-list hash table. No heap pointers. type bucket struct { next *bucket allnext *bucket typ bucketType // memBucket or blockBucket (includes mutexProfile) hash uintptr size uintptr nstk uintptr } func newBucket(typ bucketType, nstk int) *bucket newBucket allocates a bucket with the given type and number of stack entries. func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket Return the bucket for stk[0:nstk], allocating new bucket if needed. func (b *bucket) bp() *blockRecord bp returns the blockRecord associated with the blockProfile bucket b. func (b *bucket) mp() *memRecord mp returns the memRecord associated with the memProfile bucket b. func (b *bucket) stk() []uintptr stk returns the slice in b holding the stack. type bucketType int const ( // profile types memProfile bucketType = 1 + iota blockProfile mutexProfile // size of bucket hash table buckHashSize = 179999 // max depth of stack to record in bucket maxStack = 32 ) Addresses collected in a cgo backtrace when crashing. Length must match arg.Max in x_cgo_callers in runtime/cgo/gcc_traceback.c. type cgoCallers [32]uintptr If the signal handler receives a SIGPROF signal on a non-Go thread, it tries to collect a traceback into sigprofCallers. sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback. var sigprofCallers cgoCallers cgoContextArg is the type passed to the context function. type cgoContextArg struct { context uintptr } cgoSymbolizerArg is the type passed to cgoSymbolizer. type cgoSymbolizerArg struct { pc uintptr file *byte lineno uintptr funcName *byte entry uintptr more uintptr data uintptr } cgoTracebackArg is the type passed to cgoTraceback. 
type cgoTracebackArg struct { context uintptr sigContext uintptr buf *uintptr max uintptr } type cgothreadstart struct { g guintptr tls *uint64 fn unsafe.Pointer } type chantype struct { typ _type elem *_type dir uintptr } type childInfo struct { // Information passed up from the callee frame about // the layout of the outargs region. argoff uintptr // where the arguments start in the frame arglen uintptr // size of args region args bitvector // if args.n >= 0, pointer map of args region sp *uint8 // callee sp depth uintptr // depth in call stack (0 == most recent) } type cpuProfile struct { lock mutex on bool // profiling is on log *profBuf // profile events written here // extra holds extra stacks accumulated in addNonGo // corresponding to profiling signals arriving on // non-Go-created threads. Those stacks are written // to log the next time a normal Go thread gets the // signal handler. // Assuming the stacks are 2 words each (we don't get // a full traceback from those threads), plus one word // size for framing, 100 Hz profiling would generate // 300 words per second. // Hopefully a normal Go thread will get the profiling // signal at least once every few seconds. extra [1000]uintptr numExtra int lostExtra uint64 // count of frames lost because extra is full lostAtomic uint64 // count of frames lost because of being in atomic64 on mips/arm; updated racily } var cpuprof cpuProfile func (p *cpuProfile) add(gp *g, stk []uintptr) add adds the stack trace to the profile. It is called from signal handlers and other limited environments and cannot allocate memory or acquire locks that might be held at the time of the signal, nor can it use substantial amounts of stack. go:nowritebarrierrec func (p *cpuProfile) addExtra() addExtra adds the "extra" profiling events, queued by addNonGo, to the profile log. addExtra is called either from a signal handler on a Go thread or from an ordinary goroutine; either way it can use stack and has a g. 
The world may be stopped, though. func (p *cpuProfile) addNonGo(stk []uintptr) addNonGo adds the non-Go stack trace to the profile. It is called from a non-Go thread, so we cannot use much stack at all, nor do anything that needs a g or an m. In particular, we can't call cpuprof.log.write. Instead, we copy the stack into cpuprof.extra, which will be drained the next time a Go thread gets the signal handling event. go:nosplit go:nowritebarrierrec type dbgVar struct { name string value *int32 } type debugLogBuf [debugLogBytes]byte type debugLogReader struct { data *debugLogBuf // begin and end are the positions in the log of the beginning // and end of the log data, modulo len(data). begin, end uint64 // tick and nano are the current time base at begin. tick, nano uint64 } func (r *debugLogReader) header() (end, tick, nano uint64, p int) func (r *debugLogReader) peek() (tick uint64) func (r *debugLogReader) printVal() bool func (r *debugLogReader) readUint16LEAt(pos uint64) uint16 func (r *debugLogReader) readUint64LEAt(pos uint64) uint64 func (r *debugLogReader) skip() uint64 func (r *debugLogReader) uvarint() uint64 func (r *debugLogReader) varint() int64 A debugLogWriter is a ring buffer of binary debug log records. A log record consists of a 2-byte framing header and a sequence of fields. The framing header gives the size of the record as a little endian 16-bit value. Each field starts with a byte indicating its type, followed by type-specific data. If the size in the framing header is 0, it's a sync record consisting of two little endian 64-bit values giving a new time base. Because this is a ring buffer, new records will eventually overwrite old records. Hence, it maintains a reader that consumes the log as it gets overwritten. That reader state is where an actual log reader would start. type debugLogWriter struct { write uint64 data debugLogBuf // tick and nano are the time bases from the most recently // written sync record. 
tick, nano uint64 // r is a reader that consumes records as they get overwritten // by the writer. It also acts as the initial reader state // when printing the log. r debugLogReader // buf is a scratch buffer for encoding. This is here to // reduce stack usage. buf [10]byte } func (l *debugLogWriter) byte(x byte) func (l *debugLogWriter) bytes(x []byte) func (l *debugLogWriter) ensure(n uint64) func (l *debugLogWriter) uvarint(u uint64) func (l *debugLogWriter) varint(x int64) func (l *debugLogWriter) writeFrameAt(pos, size uint64) bool func (l *debugLogWriter) writeSync(tick, nano uint64) func (l *debugLogWriter) writeUint64LE(x uint64) type divMagic struct { shift uint8 shift2 uint8 mul uint16 baseMask uint16 } type dlogPerM struct{} A dlogger writes to the debug log. To obtain a dlogger, call dlog(). When done with the dlogger, call end(). type dlogger struct { w debugLogWriter // allLink is the next dlogger in the allDloggers list. allLink *dlogger // owned indicates that this dlogger is owned by an M. This is // accessed atomically. owned uint32 } allDloggers is a list of all dloggers, linked through dlogger.allLink. This is accessed atomically. This is prepend only, so it doesn't need to protect against ABA races. var allDloggers *dlogger func dlog() *dlogger dlog returns a debug logger. The caller can use methods on the returned logger to add values, which will be space-separated in the final output, much like println. The caller must call end() to finish the message. dlog can be used from highly-constrained corners of the runtime: it is safe to use in the signal handler, from within the write barrier, from within the stack implementation, and in places that must be recursively nosplit. This will be compiled away if built without the debuglog build tag. However, argument construction may not be. If any of the arguments are not literals or trivial expressions, consider protecting the call with "if dlogEnabled". 
func getCachedDlogger() *dlogger func (l *dlogger) b(x bool) *dlogger func (l *dlogger) end() func (l *dlogger) hex(x uint64) *dlogger func (l *dlogger) i(x int) *dlogger func (l *dlogger) i16(x int16) *dlogger func (l *dlogger) i32(x int32) *dlogger func (l *dlogger) i64(x int64) *dlogger func (l *dlogger) i8(x int8) *dlogger func (l *dlogger) p(x interface{}) *dlogger func (l *dlogger) pc(x uintptr) *dlogger func (l *dlogger) s(x string) *dlogger func (l *dlogger) traceback(x []uintptr) *dlogger func (l *dlogger) u(x uint) *dlogger func (l *dlogger) u16(x uint16) *dlogger func (l *dlogger) u32(x uint32) *dlogger func (l *dlogger) u64(x uint64) *dlogger func (l *dlogger) u8(x uint8) *dlogger func (l *dlogger) uptr(x uintptr) *dlogger type eface struct { _type *_type data unsafe.Pointer } func convT2E(t *_type, elem unsafe.Pointer) (e eface) func convT2Enoptr(t *_type, elem unsafe.Pointer) (e eface) func efaceOf(ep *interface{}) *eface type elfDyn struct { d_tag int64 /* Dynamic entry type */ d_val uint64 /* Integer value */ } type elfEhdr struct { e_ident [_EI_NIDENT]byte /* Magic number and other info */ e_type uint16 /* Object file type */ e_machine uint16 /* Architecture */ e_version uint32 /* Object file version */ e_entry uint64 /* Entry point virtual address */ e_phoff uint64 /* Program header table file offset */ e_shoff uint64 /* Section header table file offset */ e_flags uint32 /* Processor-specific flags */ e_ehsize uint16 /* ELF header size in bytes */ e_phentsize uint16 /* Program header table entry size */ e_phnum uint16 /* Program header table entry count */ e_shentsize uint16 /* Section header table entry size */ e_shnum uint16 /* Section header table entry count */ e_shstrndx uint16 /* Section header string table index */ } type elfPhdr struct { p_type uint32 /* Segment type */ p_flags uint32 /* Segment flags */ p_offset uint64 /* Segment file offset */ p_vaddr uint64 /* Segment virtual address */ p_paddr uint64 /* Segment physical address */ 
p_filesz uint64 /* Segment size in file */ p_memsz uint64 /* Segment size in memory */ p_align uint64 /* Segment alignment */ } type elfShdr struct { sh_name uint32 /* Section name (string tbl index) */ sh_type uint32 /* Section type */ sh_flags uint64 /* Section flags */ sh_addr uint64 /* Section virtual addr at execution */ sh_offset uint64 /* Section file offset */ sh_size uint64 /* Section size in bytes */ sh_link uint32 /* Link to another section */ sh_info uint32 /* Additional section information */ sh_addralign uint64 /* Section alignment */ sh_entsize uint64 /* Entry size if section holds table */ } type elfSym struct { st_name uint32 st_info byte st_other byte st_shndx uint16 st_value uint64 st_size uint64 } type elfVerdaux struct { vda_name uint32 /* Version or dependency names */ vda_next uint32 /* Offset in bytes to next verdaux entry */ } type elfVerdef struct { vd_version uint16 /* Version revision */ vd_flags uint16 /* Version information */ vd_ndx uint16 /* Version Index */ vd_cnt uint16 /* Number of associated aux entries */ vd_hash uint32 /* Version name hash value */ vd_aux uint32 /* Offset in bytes to verdaux array */ vd_next uint32 /* Offset in bytes to next verdef entry */ } type epollevent struct { events uint32 data [8]byte // unaligned uintptr } An errorString represents a runtime error described by a single string. type errorString string func (e errorString) Error() string func (e errorString) RuntimeError() evacDst is an evacuation destination. type evacDst struct { b *bmap // current destination bucket i int // key/elem index into b k unsafe.Pointer // pointer to current key storage e unsafe.Pointer // pointer to current elem storage } NOTE: Layout known to queuefinalizer. 
type finalizer struct { fn *funcval // function to call (may be a heap pointer) arg unsafe.Pointer // ptr to object (may be a heap pointer) nret uintptr // bytes of return values from fn fint *_type // type of first argument of fn ot *ptrtype // type of ptr to object (may be a heap pointer) } finblock is an array of finalizers to be executed. finblocks are arranged in a linked list for the finalizer queue. finblock is allocated from non-GC'd memory, so any heap pointers must be specially handled. GC currently assumes that the finalizer queue does not grow during marking (but it can shrink). type finblock struct { alllink *finblock next *finblock cnt uint32 _ int32 fin [(_FinBlockSize - 2*sys.PtrSize - 2*4) / unsafe.Sizeof(finalizer{})]finalizer } var allfin *finblock // list of all blocks var finc *finblock // cache of free blocks var finq *finblock // list of finalizers that are to be executed findfunctab is an array of these structures. Each bucket represents 4096 bytes of the text segment. Each subbucket represents 256 bytes of the text segment. To find a function given a pc, locate the bucket and subbucket for that pc. Add together the idx and subbucket value to obtain a function index. Then scan the functab array starting at that index to find the target function. This table uses 20 bytes for every 4096 bytes of code, or ~0.5% overhead. type findfuncbucket struct { idx uint32 subbuckets [16]byte } FixAlloc is a simple free-list allocator for fixed size objects. Malloc uses a FixAlloc wrapped around sysAlloc to manage its mcache and mspan objects. Memory returned by fixalloc.alloc is zeroed by default, but the caller may take responsibility for zeroing allocations by setting the zero flag to false. This is only safe if the memory never contains heap pointers. The caller is responsible for locking around FixAlloc calls. Callers can keep state in the object but the first word is smashed by freeing and reallocating. Consider marking fixalloc'd types go:notinheap. 
type fixalloc struct {
	size   uintptr
	first  func(arg, p unsafe.Pointer) // called first time p is returned
	arg    unsafe.Pointer
	list   *mlink
	chunk  uintptr // use uintptr instead of unsafe.Pointer to avoid write barriers
	nchunk uint32
	inuse  uintptr // in-use bytes now
	stat   *uint64
	zero   bool // zero allocations
}

func (f *fixalloc) alloc() unsafe.Pointer
func (f *fixalloc) free(p unsafe.Pointer)
func (f *fixalloc) init(size uintptr, first func(arg, p unsafe.Pointer), arg unsafe.Pointer, stat *uint64)
	Initialize f to allocate objects of the given size, using the allocator to obtain chunks of memory.

type forcegcstate struct {
	lock mutex
	g    *g
	idle uint32
}

type fpreg1 struct {
	significand [4]uint16
	exponent    uint16
}

type fpstate struct {
	cwd       uint16
	swd       uint16
	ftw       uint16
	fop       uint16
	rip       uint64
	rdp       uint64
	mxcsr     uint32
	mxcr_mask uint32
	_st       [8]fpxreg
	_xmm      [16]xmmreg
	padding   [24]uint32
}

type fpstate1 struct {
	cwd       uint16
	swd       uint16
	ftw       uint16
	fop       uint16
	rip       uint64
	rdp       uint64
	mxcsr     uint32
	mxcr_mask uint32
	_st       [8]fpxreg1
	_xmm      [16]xmmreg1
	padding   [24]uint32
}

type fpxreg struct {
	significand [4]uint16
	exponent    uint16
	padding     [3]uint16
}

type fpxreg1 struct {
	significand [4]uint16
	exponent    uint16
	padding     [3]uint16
}

A FuncID identifies particular functions that need to be treated specially by the runtime. Note that in some situations involving plugins, there may be multiple copies of a particular special runtime function. Note: this list must match the list in cmd/internal/objabi/funcid.go.

type funcID uint8

type funcInfo struct {
	*_func
	datap *moduledata
}

func findfunc(pc uintptr) funcInfo
func (f funcInfo) _Func() *Func
func (f funcInfo) valid() bool

type functab struct {
	entry   uintptr
	funcoff uintptr
}

type functype struct {
	typ      _type
	inCount  uint16
	outCount uint16
}

func (t *functype) dotdotdot() bool
func (t *functype) in() []*_type
func (t *functype) out() []*_type

type funcval struct {
	fn uintptr
}

type g struct {
	// Stack parameters.
// stack describes the actual stack memory: [stack.lo, stack.hi). // stackguard0 is the stack pointer compared in the Go stack growth prologue. // It is stack.lo+StackGuard normally, but can be StackPreempt to trigger a preemption. // stackguard1 is the stack pointer compared in the C stack growth prologue. // It is stack.lo+StackGuard on g0 and gsignal stacks. // It is ~0 on other goroutine stacks, to trigger a call to morestackc (and crash). stackLock uint32 // sigprof/scang lock; TODO: fold in to atomicstatus goid int64 schedlink guintptr waitsince int64 // approx time when the g become blocked waitreason waitReason // if status==Gwaiting preempt bool // preemption signal, duplicates stackguard0 = stackpreempt paniconfault bool // panic (instead of crash) on unexpected fault address preemptscan bool // preempted g does scan for gc gcscandone bool // g has scanned stack; protected by _Gscan bit in status gcscanvalid bool // false at start of gc cycle, true if G has not run since last scan; TODO: remove? 
throwsplit bool // must not split stack raceignore int8 // ignore race detection events sysblocktraced bool // StartTrace has emitted EvGoInSyscall about this goroutine sysexitticks int64 // cputicks when syscall has returned (for tracing) traceseq uint64 // trace event sequencer tracelastp puintptr // last P emitted an event for this goroutine lockedm muintptr sig uint32 writebuf []byte sigcode0 uintptr sigcode1 uintptr sigpc uintptr gopc uintptr // pc of go statement that created this goroutine ancestors *[]ancestorInfo // ancestor information goroutine(s) that created this goroutine (only used if debug.tracebackancestors) startpc uintptr // pc of goroutine function racectx uintptr waiting *sudog // sudog structures this g is waiting on (that have a valid elem ptr); in lock order cgoCtxt []uintptr // cgo traceback context labels unsafe.Pointer // profiler labels timer *timer // cached timer for time.Sleep selectDone uint32 // are we participating in a select and did someone win the race? // gcAssistBytes is this G's GC assist credit in terms of // bytes allocated. If this is positive, then the G has credit // to allocate gcAssistBytes bytes without assisting. If this // is negative, then the G must correct this by performing // scan work. We track this in bytes to make it fast to update // and check for debt in the malloc hot path. The assist ratio // determines how this corresponds to scan work debt. gcAssistBytes int64 } var fing *g // goroutine that runs finalizers func getg() *g getg returns the pointer to the current g. The compiler rewrites calls to this function into instructions that fetch the g directly (from TLS or from the dedicated register). func gfget(_p_ *p) *g Get from gfree list. If local list is empty, grab a batch from global list. func globrunqget(_p_ *p, max int32) *g Try get a batch of G's from the global runnable queue. Sched must be locked. func malg(stacksize int32) *g Allocate a new g, with a stack big enough for stacksize bytes. 
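The g struct's schedlink field is how the scheduler threads goroutines through run queues and free lists without allocating any list nodes: each g carries its own link. A minimal sketch of such an intrusive FIFO list (the names are mine, not the runtime's):

```go
package main

import "fmt"

// node stands in for a g: it carries its own link field, so a node can
// be pushed onto a list without allocating a separate list cell.
type node struct {
	schedlink *node // next node in whatever list this node is on
	id        int
}

// queue mirrors the shape of the runtime's run queues: head and tail of
// a FIFO list threaded through schedlink.
type queue struct {
	head, tail *node
}

func (q *queue) pushBack(n *node) {
	n.schedlink = nil
	if q.tail != nil {
		q.tail.schedlink = n
	} else {
		q.head = n
	}
	q.tail = n
}

func (q *queue) pop() *node {
	n := q.head
	if n != nil {
		q.head = n.schedlink
		if q.head == nil {
			q.tail = nil
		}
	}
	return n
}

func main() {
	q := &queue{}
	for i := 1; i <= 3; i++ {
		q.pushBack(&node{id: i})
	}
	for n := q.pop(); n != nil; n = q.pop() {
		fmt.Println(n.id) // FIFO order: 1, 2, 3
	}
}
```

The trade-off, as the gList/gQueue docs below note, is that a node can be on only one such list at a time.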
func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g func runqsteal(_p_, p2 *p, stealRunNextG bool) *g Steal half of elements from local runnable queue of p2 and put onto local runnable queue of p. Returns one of the stolen elements (or nil if failed). func timejump() *g func timejumpLocked() *g func traceReader() *g traceReader returns the trace reader that should be woken up, if any. func wakefing() *g A gList is a list of Gs linked through g.schedlink. A G can only be on one gQueue or gList at a time. type gList struct { head guintptr } func netpoll(block bool) gList polls for ready network connections returns list of goroutines that become runnable func (l *gList) empty() bool empty reports whether l is empty. func (l *gList) pop() *g pop removes and returns the head of l. If l is empty, it returns nil. func (l *gList) push(gp *g) push adds gp to the head of l. func (l *gList) pushAll(q gQueue) pushAll prepends all Gs in q to l. A gQueue is a dequeue of Gs linked through g.schedlink. A G can only be on one gQueue or gList at a time. type gQueue struct { head guintptr tail guintptr } func (q *gQueue) empty() bool empty reports whether q is empty. func (q *gQueue) pop() *g pop removes and returns the head of queue q. It returns nil if q is empty. func (q *gQueue) popList() gList popList takes all Gs in q and returns them as a gList. func (q *gQueue) push(gp *g) push adds gp to the head of q. func (q *gQueue) pushBack(gp *g) pushBack adds gp to the tail of q. func (q *gQueue) pushBackAll(q2 gQueue) pushBackAll adds all Gs in l2 to the tail of q. After this q2 must not be used. gcBits is an alloc/mark bitmap. This is always used as *gcBits. type gcBits uint8 func newAllocBits(nelems uintptr) *gcBits newAllocBits returns a pointer to 8 byte aligned bytes to be used for this span's alloc bits. newAllocBits is used to provide newly initialized spans allocation bits. 
For spans not being initialized the mark bits are repurposed as allocation bits when the span is swept.

func newMarkBits(nelems uintptr) *gcBits
	newMarkBits returns a pointer to 8 byte aligned bytes to be used for a span's mark bits.

func (b *gcBits) bitp(n uintptr) (bytep *uint8, mask uint8)
	bitp returns a pointer to the byte containing bit n and a mask for selecting that bit from *bytep.

func (b *gcBits) bytep(n uintptr) *uint8
	bytep returns a pointer to the n'th byte of b.

type gcBitsArena struct {
	// gcBitsHeader // side step recursive type bug (issue 14620) by including fields by hand.
	free uintptr // free is the index into bits of the next free byte; read/write atomically
	next *gcBitsArena
	bits [gcBitsChunkBytes - gcBitsHeaderBytes]gcBits
}

func newArenaMayUnlock() *gcBitsArena
	newArenaMayUnlock allocates and zeroes a gcBits arena. The caller must hold gcBitsArena.lock. This may temporarily release it.

func (b *gcBitsArena) tryAlloc(bytes uintptr) *gcBits
	tryAlloc allocates from b or returns nil if b does not have enough room. This is safe to call concurrently.

type gcBitsHeader struct {
	free uintptr // free is the index into bits of the next free byte.
	next uintptr // *gcBits triggers recursive type bug. (issue 14620)
}

func (c *gcControllerState) endCycle() float64
	endCycle computes the trigger ratio for the next cycle.

func (c *gcControllerState) enlistWorker()
	enlistWorker encourages another dedicated mark worker to start on another P if there are spare worker slots. It is used by putfull when more work is made available.

func (c *gcControllerState) findRunnableGCWorker(_p_ *p) *g
	findRunnableGCWorker returns the background mark worker for _p_ if it should be run. This must only be called when gcBlackenEnabled != 0.

func (c *gcControllerState) revise()

func (c *gcControllerState) startCycle()
	startCycle resets the GC controller's state and computes estimates for a new GC cycle. The caller must hold worldsema.
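The bitp/bytep accessors above reduce to simple index arithmetic: bit n of a mark bitmap lives in byte n/8, selected by mask 1<<(n%8). A small standalone illustration (not the runtime's code):

```go
package main

import "fmt"

// bitp mirrors the gcBits accessor's arithmetic: it maps bit n of a
// bitmap to the byte holding it and a mask selecting it within that byte.
func bitp(n uintptr) (byteIdx uintptr, mask uint8) {
	return n / 8, 1 << (n % 8)
}

func main() {
	bits := make([]uint8, 4) // a 32-object mark bitmap
	for _, n := range []uintptr{0, 3, 10, 31} {
		i, m := bitp(n)
		bits[i] |= m // mark object n
	}
	fmt.Printf("%08b %08b %08b %08b\n", bits[0], bits[1], bits[2], bits[3])
	// → 00001001 00000100 00000000 10000000
}
```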
type gcDrainFlags int

const (
	gcDrainUntilPreempt gcDrainFlags = 1 << iota
	gcDrainFlushBgCredit
	gcDrainIdle
	gcDrainFractional
)

gcBackgroundUtilization is not an integer. The fractional worker should run until it is preempted and will be scheduled to pick up the fractional part of GOMAXPROCS*gcBackgroundUtilization.

type gcSweepBlock struct {
	spans [gcSweepBlockEntries]*mspan
}

A gcSweepBuf is a set of *mspans. gcSweepBuf is safe for concurrent push operations *or* concurrent pop operations, but not both simultaneously.

type gcSweepBuf struct { }

func (b *gcSweepBuf) block(i int) []*mspan
	block returns the spans in the i'th block of buffer b. block is safe to call concurrently with push.

func (b *gcSweepBuf) numBlocks() int

func (b *gcSweepBuf) pop() *mspan
	pop removes and returns a span from buffer b, or nil if b is empty. pop is safe to call concurrently with other pop operations, but NOT to call concurrently with push.

func (b *gcSweepBuf) push(s *mspan)
	push adds span s to buffer b. push is safe to call concurrently with other push operations, but NOT to call concurrently with pop.

func (t gcTrigger) test() bool
	test reports whether the trigger condition is satisfied, meaning that the exit condition for the _GCoff phase has been met. The exit condition should be tested when allocating.

type gcTriggerKind int

const (
	// gcTriggerHeap indicates that a cycle should be started when
	// the heap size reaches the trigger heap size computed by the
	// controller.
	gcTriggerHeap gcTriggerKind = iota
)

A gcWork provides the interface to produce and consume work for the garbage collector.

A gcWork can be used on the stack as follows: (preemption must be disabled)

	gcw := &getg().m.p.ptr().gcw
	.. call gcw.put() to produce and gcw.tryGet() to consume ..

It's important that any use of gcWork during the mark phase prevent the garbage collector from transitioning to mark termination since gcWork may locally hold GC work buffers. This can be done by disabling preemption (systemstack or acquirem).
type gcWork struct { // wbuf1 and wbuf2 are the primary and secondary work buffers. // // This can be thought of as a stack of both work buffers' // pointers concatenated. When we pop the last pointer, we // shift the stack up by one work buffer by bringing in a new // full buffer and discarding an empty one. When we fill both // buffers, we shift the stack down by one work buffer by // bringing in a new empty buffer and discarding a full one. // This way we have one buffer's worth of hysteresis, which // amortizes the cost of getting or putting a work buffer over // at least one buffer of work and reduces contention on the // global work lists. // // wbuf1 is always the buffer we're currently pushing to and // popping from and wbuf2 is the buffer that will be discarded // // Invariant: Both wbuf1 and wbuf2 are nil or neither are. wbuf1, wbuf2 *workbuf // Bytes marked (blackened) on this gcWork. This is aggregated // into work.bytesMarked by dispose. bytesMarked uint64 // Scan work performed on this gcWork. This is aggregated into // gcController by dispose and may also be flushed by callers. scanWork int64 // flushedWork indicates that a non-empty work buffer was // flushed to the global work list since the last gcMarkDone // termination check. Specifically, this indicates that this // gcWork may have communicated work to another gcWork. flushedWork bool // pauseGen causes put operations to spin while pauseGen == // gcWorkPauseGen if debugCachedWork is true. pauseGen uint32 // putGen is the pauseGen of the last putGen. putGen uint32 // pauseStack is the stack at which this P was paused if // debugCachedWork is true. pauseStack [16]uintptr } func (w *gcWork) balance() balance moves some work that's cached in this gcWork back on the global queue. go:nowritebarrierrec func (w *gcWork) checkPut(ptr uintptr, ptrs []uintptr) func (w *gcWork) dispose() dispose returns any cached pointers to the global queue. 
The buffers are being put on the full queue so that the write barriers will not simply reacquire them before the GC can inspect them. This helps reduce the mutator's ability to hide pointers during the concurrent mark phase. func (w *gcWork) empty() bool empty reports whether w has no mark work available. go:nowritebarrierrec func (w *gcWork) init() func (w *gcWork) put(obj uintptr) put enqueues a pointer for the garbage collector to trace. obj must point to the beginning of a heap object or an oblet. go:nowritebarrierrec func (w *gcWork) putBatch(obj []uintptr) putBatch performs a put on every pointer in obj. See put for constraints on these pointers. func (w *gcWork) putFast(obj uintptr) bool putFast does a put and reports whether it can be done quickly otherwise it returns false and the caller needs to call put. go:nowritebarrierrec func (w *gcWork) tryGet() uintptr tryGet dequeues a pointer for the garbage collector to trace. If there are no pointers remaining in this gcWork or in the global queue, tryGet returns 0. Note that there may still be pointers in other gcWork instances or other caches. go:nowritebarrierrec func (w *gcWork) tryGetFast() uintptr tryGetFast dequeues a pointer for the garbage collector to trace if one is readily available. Otherwise it returns 0 and the caller is expected to call tryGet(). go:nowritebarrierrec A gclink is a node in a linked list of blocks, like mlink, but it is opaque to the garbage collector. The GC does not trace the pointers during collection, and the compiler does not emit write barriers for assignments of gclinkptr values. Code should store references to gclinks as gclinkptr, not as *gclink. type gclink struct { next gclinkptr } A gclinkptr is a pointer to a gclink, but it is opaque to the garbage collector. type gclinkptr uintptr func nextFreeFast(s *mspan) gclinkptr nextFreeFast returns the next free object if one is quickly available. Otherwise it returns 0. 
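The wbuf1/wbuf2 comment above describes one buffer's worth of hysteresis between a worker's local cache and the global work list. A toy model of that put/tryGet discipline (tiny buffer size, plain slices instead of the runtime's lock-free workbufs; names and sizes are mine):

```go
package main

import "fmt"

const bufCap = 4 // deliberately tiny; the real workbufs hold far more

type workbuf struct{ obj []uintptr }

// gcWorkToy keeps two local buffers. Swapping them on fill/drain gives
// one buffer of hysteresis before touching the (here simulated) global list.
type gcWorkToy struct {
	wbuf1, wbuf2 *workbuf
	global       []*workbuf // stand-in for the global full-buffer list
}

func (w *gcWorkToy) put(p uintptr) {
	if w.wbuf1 == nil {
		w.wbuf1, w.wbuf2 = &workbuf{}, &workbuf{}
	}
	if len(w.wbuf1.obj) == bufCap {
		w.wbuf1, w.wbuf2 = w.wbuf2, w.wbuf1 // one buffer of hysteresis
		if len(w.wbuf1.obj) == bufCap {
			w.global = append(w.global, w.wbuf1) // flush a full buffer
			w.wbuf1 = &workbuf{}
		}
	}
	w.wbuf1.obj = append(w.wbuf1.obj, p)
}

func (w *gcWorkToy) tryGet() uintptr {
	if w.wbuf1 == nil || len(w.wbuf1.obj) == 0 {
		if w.wbuf2 != nil && len(w.wbuf2.obj) > 0 {
			w.wbuf1, w.wbuf2 = w.wbuf2, w.wbuf1
		} else if n := len(w.global); n > 0 {
			w.wbuf1, w.global = w.global[n-1], w.global[:n-1]
		} else {
			return 0 // no work anywhere we can see
		}
	}
	n := len(w.wbuf1.obj)
	p := w.wbuf1.obj[n-1]
	w.wbuf1.obj = w.wbuf1.obj[:n-1]
	return p
}

func main() {
	w := &gcWorkToy{}
	sum := uintptr(0)
	for i := uintptr(1); i <= 10; i++ {
		w.put(i)
	}
	for p := w.tryGet(); p != 0; p = w.tryGet() {
		sum += p
	}
	fmt.Println(sum, len(w.global)) // all work drained, global list empty
}
```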
func stackpoolalloc(order uint8) gclinkptr
	Allocates a stack from the free pool. Must be called with stackpoolmu held.

func (p gclinkptr) ptr() *gclink
	ptr returns the *gclink form of p. The result should be used for accessing fields, not stored in other data structures.

gsignalStack saves the fields of the gsignal stack changed by setGsignalStack.

type gsignalStack struct {
	stack       stack
	stackguard0 uintptr
	stackguard1 uintptr
	stktopsp    uintptr
}

A guintptr holds a goroutine pointer, but typed as a uintptr to bypass write barriers. It is used in the Gobuf goroutine state and in scheduling lists that are manipulated without a P.

The Gobuf.g goroutine pointer is almost always updated by assembly code. In one of the few places it is updated by Go code - func save - it must be treated as a uintptr to avoid a write barrier being emitted at a bad time. Instead of figuring out how to emit the write barriers missing in the assembly manipulation, we change the type of the field to uintptr, so that it does not require write barriers at all.

Goroutine structs are published in the allg list and never freed. That will keep the goroutine structs from being collected. There is never a time that Gobuf.g's contain the only references to a goroutine: the publishing of the goroutine in allg comes first. Goroutine pointers are also kept in non-GC-visible places like TLS, so I can't see them ever moving. If we did want to start moving data in the GC, we'd need to allocate the goroutine structs from an alternate arena. Using guintptr doesn't make that problem any worse.
type guintptr uintptr

func (gp *guintptr) cas(old, new guintptr) bool
func (gp guintptr) ptr() *g
func (gp *guintptr) set(g *g)

func makechan(t *chantype, size int) *hchan
func makechan64(t *chantype, size int64) *hchan
func reflect_makechan(t *chantype, size int) *hchan
	go:linkname reflect_makechan reflect.makechan
func (c *hchan) raceaddr() unsafe.Pointer
func (c *hchan) sortkey() uintptr

A heapArena stores metadata for a heap arena. heapArenas are stored outside of the Go heap and accessed via the mheap_.arenas index.

This gets allocated directly from the OS, so ideally it should be a multiple of the system page size. For example, avoid adding small fields.

type heapArena struct {
	// bitmap stores the pointer/scalar bitmap for the words in
	// this arena. See mbitmap.go for a description. Use the
	// heapBits type to access this.
	bitmap [heapArenaBitmapBytes]byte

	// spans maps from virtual address page ID within this arena to *mspan.
	// For allocated spans, their pages map to the span itself.
	// For free spans, only the lowest and highest pages map to the span itself.
	// Internal pages map to an arbitrary span.
	// For pages that have never been allocated, spans entries are nil.
	//
	// Modifications are protected by mheap.lock. Reads can be
	// performed without locking, but ONLY from indexes that are
	// known to contain in-use or stack spans. This means there
	// must not be a safe-point between establishing that an
	// address is live and looking it up in the spans array.
	spans [pagesPerArena]*mspan

	// pageInUse is a bitmap that indicates which spans are in
	// state mSpanInUse. This bitmap is indexed by page number,
	// but only the bit corresponding to the first page in each
	// span is used.
	//
	// Writes are protected by mheap_.lock.
	pageInUse [pagesPerArena / 8]uint8

	// pageMarks is a bitmap that indicates which spans have any
	// marked objects on them. Like pageInUse, only the bit
	// corresponding to the first page in each span is used.
	// Writes are done atomically during marking. Reads are
	// non-atomic and lock-free since they only occur during
	// sweeping (and hence never race with writes).
	//
	// This is used to quickly find whole spans that can be freed.
	//
	// TODO(austin): It would be nice if this was uint64 for
	// faster scanning, but we don't have 64-bit atomic bit
	// operations.
	pageMarks [pagesPerArena / 8]uint8
}

heapBits provides access to the bitmap bits for a single heap word. The methods on heapBits take value receivers so that the compiler can more easily inline calls to those methods and registerize the struct fields independently.

type heapBits struct {
	bitp  *uint8
	shift uint32
	arena uint32 // Index of heap arena containing bitp
	last  *uint8 // Last byte arena's bitmap
}

func heapBitsForAddr(addr uintptr) (h heapBits)
	heapBitsForAddr returns the heapBits for the address addr. The caller must ensure addr is in an allocated span. In particular, be careful not to point past the end of an object.

func (h heapBits) bits() uint32
	bits returns the heap bits for the current word. The caller can test morePointers and isPointer by &-ing with bitScan and bitPointer. The result includes in its higher bits the bits for subsequent words described by the same bitmap byte.

func (h heapBits) clearCheckmarkSpan(size, n, total uintptr)
	clearCheckmarkSpan undoes all the checkmarking in a span. The actual checkmark bits are ignored, so the only work to do is to fix the pointer bits. (Pointer bits are ignored by scanobject but consulted by typedmemmove.)

func (h heapBits) forward(n uintptr) heapBits
	forward returns the heapBits describing n pointer-sized words ahead of h in memory. That is, if h describes address p, h.forward(n) describes p+n*ptrSize. h.forward(1) is equivalent to h.next(), just slower. Note that forward does not modify h. The caller must record the result.
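heapBitsForAddr boils down to locating a byte and a shift within the arena's bitmap. Assuming 8-byte words and one bitmap byte per four heap words (two bits, pointer and scan, per word, which is what the bitmap field above stores), the index arithmetic looks roughly like this (names and the arenaStart parameter are mine, for illustration only):

```go
package main

import "fmt"

// bitmapPos sketches the heapBitsForAddr-style arithmetic: which bitmap
// byte describes addr, and which of that byte's four word-slots it is.
// Assumes 8-byte heap words and 1 bitmap byte per 4 words.
func bitmapPos(addr, arenaStart uintptr) (byteIdx uintptr, shift uint32) {
	word := (addr - arenaStart) / 8 // which pointer-sized word of the arena
	return word / 4, uint32(word % 4)
}

func main() {
	const arenaStart = 0xc000000000
	for _, addr := range []uintptr{arenaStart, arenaStart + 8, arenaStart + 32} {
		b, s := bitmapPos(addr, arenaStart)
		fmt.Println(b, s)
	}
}
```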
go:nosplit func (h heapBits) forwardOrBoundary(n uintptr) (heapBits, uintptr) forwardOrBoundary is like forward, but stops at boundaries between contiguous sections of the bitmap. It returns the number of words advanced over, which will be <= n. func (h heapBits) initCheckmarkSpan(size, n, total uintptr) initCheckmarkSpan initializes a span for being checkmarked. It clears the checkmark bits, which are set to 1 in normal operation. func (h heapBits) initSpan(s *mspan) initSpan initializes the heap bitmap for a span. It clears all checkmark bits. If this is a span of pointer-sized objects, it initializes all words to pointer/scan. Otherwise, it initializes all words to scalar/dead. func (h heapBits) isCheckmarked(size uintptr) bool isCheckmarked reports whether the heap bits have the checkmarked bit set. It must be told how large the object at h is, because the encoding of the checkmark bit varies by size. h must describe the initial word of the object. func (h heapBits) isPointer() bool isPointer reports whether the heap bits describe a pointer word. func (h heapBits) morePointers() bool morePointers reports whether this word and all remaining words in this object are scalars. h must not describe the second word of the object. func (h heapBits) next() heapBits next returns the heapBits describing the next pointer-sized word in memory. That is, if h describes address p, h.next() describes p+ptrSize. Note that next does not modify h. The caller must record the result. func (h heapBits) nextArena() heapBits nextArena advances h to the beginning of the next heap arena. This is a slow-path helper to next. gc's inliner knows that heapBits.next can be inlined even though it calls this. This is marked noinline so it doesn't get inlined into next and cause next to be too big to inline. go:nosplit go:noinline func (h heapBits) setCheckmarked(size uintptr) setCheckmarked sets the checkmarked bit. 
It must be told how large the object at h is, because the encoding of the checkmark bit varies by size. h must describe the initial word of the object. The compiler knows that a print of a value of this type should use printhex instead of printuint (decimal). type hex uint64 A hash iteration structure. If you modify hiter, also change cmd/compile/internal/gc/reflect.go to indicate the layout of this structure. type hiter struct { key unsafe.Pointer // Must be in first position. Write nil to indicate iteration end (see cmd/internal/gc/range.go). elem unsafe.Pointer // Must be in second position (see cmd/internal/gc/range.go). t *maptype h *hmap buckets unsafe.Pointer // bucket ptr at hash_iter initialization time bptr *bmap // current bucket overflow *[]*bmap // keeps overflow buckets of hmap.buckets alive oldoverflow *[]*bmap // keeps overflow buckets of hmap.oldbuckets alive startBucket uintptr // bucket iteration started at offset uint8 // intra-bucket offset to start from during iteration (should be big enough to hold bucketCnt-1) wrapped bool // already wrapped around from end of bucket array to beginning B uint8 i uint8 bucket uintptr checkBucket uintptr } func reflect_mapiterinit(t *maptype, h *hmap) *hiter go:linkname reflect_mapiterinit reflect.mapiterinit A header for a Go map. type hmap struct { // Note: the format of the hmap is also encoded in cmd/compile/internal/gc/reflect.go. // Make sure this stays in sync with the compiler's definition. count int // # live cells == size of map. Must be first (used by len() builtin) flags uint8 B uint8 // log_2 of # of buckets (can hold up to loadFactor * 2^B items) noverflow uint16 // approximate number of overflow buckets; see incrnoverflow for details hash0 uint32 // hash seed buckets unsafe.Pointer // array of 2^B Buckets. may be nil if count==0. 
oldbuckets unsafe.Pointer // previous bucket array of half the size, non-nil only when growing nevacuate uintptr // progress counter for evacuation (buckets less than this have been evacuated) extra *mapextra // optional fields } func makemap(t *maptype, hint int, h *hmap) *hmap makemap implements Go map creation for make(map[k]v, hint). If the compiler has determined that the map or the first bucket can be created on the stack, h and/or bucket may be non-nil. If h != nil, the map can be created directly in h. If h.buckets != nil, bucket pointed to can be used as the first bucket. func makemap64(t *maptype, hint int64, h *hmap) *hmap func makemap_small() *hmap makemap_small implements Go map creation for make(map[k]v) and make(map[k]v, hint) when hint is known to be at most bucketCnt at compile time and the map needs to be allocated on the heap. func reflect_makemap(t *maptype, cap int) *hmap go:linkname reflect_makemap reflect.makemap func (h *hmap) createOverflow() func (h *hmap) growing() bool growing reports whether h is growing. The growth may be to the same size or bigger. func (h *hmap) incrnoverflow() incrnoverflow increments h.noverflow. noverflow counts the number of overflow buckets. This is used to trigger same-size map growth. See also tooManyOverflowBuckets. To keep hmap small, noverflow is a uint16. When there are few buckets, noverflow is an exact count. When there are many buckets, noverflow is an approximate count. func (h *hmap) newoverflow(t *maptype, b *bmap) *bmap func (h *hmap) noldbuckets() uintptr noldbuckets calculates the number of buckets prior to the current map growth. func (h *hmap) oldbucketmask() uintptr oldbucketmask provides a mask that can be applied to calculate n % noldbuckets(). func (h *hmap) sameSizeGrow() bool sameSizeGrow reports whether the current growth is to a map of the same size. 
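noldbuckets and oldbucketmask exploit the power-of-two bucket count: during an incremental (doubling) grow there are 1<<(B-1) old buckets, so reducing a hash modulo that count is a single mask. A standalone sketch of the arithmetic:

```go
package main

import "fmt"

// With B = log2 of the current bucket count, the previous (pre-grow)
// table had half as many buckets.
func noldbuckets(B uint8) uintptr { return uintptr(1) << (B - 1) }

// Because that count is a power of two, "hash % noldbuckets" is a mask.
func oldbucketmask(B uint8) uintptr { return noldbuckets(B) - 1 }

func main() {
	const B = 5 // 32 buckets, grown from 16
	hash := uintptr(0x9e3779b97f4a7c15)
	fmt.Println(noldbuckets(B), hash&oldbucketmask(B)) // old bucket for this hash
}
```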
type iface struct { tab *itab data unsafe.Pointer } func assertE2I(inter *interfacetype, e eface) (r iface) func assertI2I(inter *interfacetype, i iface) (r iface) func convI2I(inter *interfacetype, i iface) (r iface) func convT2I(tab *itab, elem unsafe.Pointer) (i iface) func convT2Inoptr(tab *itab, elem unsafe.Pointer) (i iface) type imethod struct { name nameOff ityp typeOff } An initTask represents the set of initializations that need to be done for a package. type initTask struct { // TODO: pack the first 3 fields more tightly? state uintptr // 0 = uninitialized, 1 = in progress, 2 = done ndeps uintptr nfns uintptr } go:linkname main_inittask main..inittask var main_inittask initTask go:linkname runtime_inittask runtime..inittask var runtime_inittask initTask inlinedCall is the encoding of entries in the FUNCDATA_InlTree table. type inlinedCall struct { parent int16 // index of parent in the inltree, or < 0 funcID funcID // type of the called function _ byte file int32 // fileno index into filetab line int32 // line number of the call site func_ int32 // offset into pclntab for name of called function parentPc int32 // position of an instruction whose source position is the call site (offset from entry) } type interfacetype struct { typ _type pkgpath name mhdr []imethod } layout of Itab known to compilers allocated in non-garbage-collected memory Needs to be in sync with ../cmd/compile/internal/gc/reflect.go:/^func.dumptypestructs. type itab struct { inter *interfacetype _type *_type hash uint32 // copy of _type.hash. Used for type switches. _ [4]byte fun [1]uintptr // variable sized. fun[0]==0 means _type does not implement inter. } func getitab(inter *interfacetype, typ *_type, canfail bool) *itab func (m *itab) init() string init fills in the m.fun array with all the code pointers for the m.inter/m._type pair. If the type does not implement the interface, it sets m.fun[0] to 0 and returns the name of an interface function that is missing. 
It is ok to call this multiple times on the same m, even concurrently. Note: change the formula in the mallocgc call in itabAdd if you change these fields. type itabTableType struct { size uintptr // length of entries array. Always a power of 2. count uintptr // current number of filled entries. entries [itabInitSize]*itab // really [size] large } func (t *itabTableType) add(m *itab) add adds the given itab to itab table t. itabLock must be held. func (t *itabTableType) find(inter *interfacetype, typ *_type) *itab find finds the given interface/type pair in t. Returns nil if the given interface/type pair isn't present. type itimerval struct { it_interval timeval it_value timeval } Lock-free stack node. Also known to export_test.go. type lfnode struct { next uint64 pushcnt uintptr } func lfstackUnpack(val uint64) *lfnode lfstack is the head of a lock-free stack. The zero value of lfstack is an empty list. This stack is intrusive. Nodes must embed lfnode as the first field. The stack does not keep GC-visible pointers to nodes, so the caller is responsible for ensuring the nodes are not garbage collected (typically by allocating them from manually-managed memory). type lfstack uint64 func (head *lfstack) empty() bool func (head *lfstack) pop() unsafe.Pointer func (head *lfstack) push(node *lfnode)
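To make pop safe against the ABA problem, lfstack packs a node pointer and a counter into the single uint64 head that it compare-and-swaps. The exact bit split is platform-dependent; here is a hedged sketch assuming 48 significant address bits and a 16-bit counter (not the runtime's actual packing):

```go
package main

import "fmt"

// pack combines a node address and an ABA counter into one uint64 so a
// single CAS can update both. Assumes the address fits in 48 bits.
func pack(addr uint64, cnt uint16) uint64 {
	return addr<<16 | uint64(cnt)
}

// unpack splits the packed head back into address and counter.
func unpack(v uint64) (addr uint64, cnt uint16) {
	return v >> 16, uint16(v)
}

func main() {
	v := pack(0xc000001234, 41)
	a, c := unpack(v)
	fmt.Printf("%#x %d\n", a, c+1) // the counter is bumped on each pop
}
```

Bumping the counter on every pop means a head value can never be mistakenly reused even if the same node address reappears at the top of the stack.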
https://golang.org/pkg/runtime/?m=all
I'm currently making an IDE application called Moonlite. When I started coding it, I looked for good, existing code for creating IDEs and found nothing. So, when I was almost done with it (I'm not done with it yet, because this framework takes all my time), I found it rather unfair that everyone should go through the same thing I did to make such an application. (It took me 8 months of hard work - about 7 - 10 hours a day - to create this. Not because the main coding would've taken that long, but because I had to figure out how to do it and what the most efficient solution was.) So I started Storm, and now that it is finished, I'm going to present it to you! :)

Please note that I did this of my own free will and in my own free time. I would be happy if you could respect me and my work and leave me a comment on how to make it better, bug reports, and so on. Thank you for your time :)

Using the code is a simple task: simply drag-drop from the toolbox once you have referenced the controls you want, and that should be it. However, for those who want a more in-depth tutorial, go to the folder "doc" in the package and open "index.htm". In this chapter, I will mostly cover docking, plug-ins, and the TextEditor, since they are the most advanced parts. I will not cover Win32 and TabControl.

CodeCompletion relies on the TextEditor, and it really isn't as advanced as some may think. It's just a control containing a generic ListBox that can draw icons; the ListBox's items are managed by the CodeCompletion itself. CodeCompletion handles the TextEditor's KeyUp event, and in that handler it displays the members of the ListBox depending on what the user has typed in the TextEditor.

Update: CodeCompletion is now contained in the TextEditor library!

Every time it registers a key press, it updates a string containing the currently typed word by calling a method GetLastWord(), which returns the word that the user is currently on.
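The GetLastWord() idea is just a backward scan from the caret to the nearest separator. The article's implementation is C#; here is a compact sketch of the same logic in Go, for consistency with the rest of this page (the separator set is my assumption - the TextEditor lets you define your own):

```go
package main

import (
	"fmt"
	"strings"
)

// A plausible separator set; the real one is user-configurable.
const separators = " \t\n.,;:()[]{}<>+-*/=\"'"

// getLastWord walks backwards from the caret until it hits a separator
// and returns the word in between. (Byte-wise scan, so ASCII-only here.)
func getLastWord(text string, caret int) string {
	start := caret
	for start > 0 && !strings.ContainsRune(separators, rune(text[start-1])) {
		start--
	}
	return text[start:caret]
}

func main() {
	src := "editor.GetLastWo"
	fmt.Println(getLastWord(src, len(src))) // → GetLastWo
}
```

The completion popup then filters its member list against whatever this returns.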
How a string is split up in words is defined in the TextEditor as 'separators'. Every time GetLastWord() is called, CodeCompletion calls the native Win32 function 'LockWindowUpdate' along with the parent TextEditor's handle to prevent flickering as the OS renders the TextEditor/CodeCompletion. Actually, CodeCompletion does this when it auto completes a selected item in the child GListBox, too.

Every time CodeCompletion registers a key that it doesn't recognize as a 'valid' character (any non-letter/digit character that isn't _), it calls the method SelectItem() along with a specific CompleteType.

Now, what is a CompleteType? You see, CompleteType defines how the SelectItem() will act when auto completing a selected item in the GListBox. There are two modes - Normal and Parenthesis. When Normal is used, the SelectItem() method removes the whole currently typed word; Parenthesis, however, removes the whole currently typed word except the first letter. This might seem strange, but it is necessary when, for example, the user has typed a starting parenthesis. You might find yourself having a wrong auto completed word sometimes, too - this is where you should use Parenthesis instead of Normal as the CompleteType. (You are able to define a custom CompleteType when you add a member item to CodeCompletion.)

Since the users define the tooltips of member items themselves, it is rather easy to display the description of items. When a new item is selected in the GListBox, a method updates the currently displayed ToolTip to match the selected item's description/declaration fields.

Since a normal TreeNode/ListBoxItem wouldn't be able to have multiple Tags, I created the GListBoxItem, which also contains an ImageIndex for the parent GListBox' ImageList. The GListBoxItem contains a lot of values that are set by the user, either on initialization or through properties.
Each time the control itself or its tooltip is displayed, their positions are updated. The formula for the tooltip's Y is: Y = CaretPosition.Y + FontHeight * CaretIndex + Math.Ceiling(FontHeight + 2). The setting of X is simply X = CaretPosition.X + 100 + CodeCompletion.Width + 2. The formula for CodeCompletion's Y is the same as for the tooltip; however, X is different: X = CaretPosition.X + 100.

First, I will start out with a Class Diagram to help me out: As you can see, there are a lot of classes. A DockPane can contain DockPanels, and DockPanels are the panels that are docked inside the DockPane. A DockPanel contains a Form, DockCaption, and DockTab. When a DockPanel's Form property is set, the DockPanel updates the Form to match the settings needed for it to act as a docked form.

A DockCaption is a custom drawn panel. It contains two Glyphs - OptionsGlyph and CloseGlyph - both inheriting the Glyph class, which contains the rendering logic for a general Glyph. The OptionsGlyph and CloseGlyph contain images that are supposed to have a transparent background. A lot of people use very complex solutions for this; however, I found a very, very simple and short solution:

/// <summary>
/// Represents an image with a transparent background.
/// </summary>
[ToolboxItem(false)]
public class TransImage : Panel
{
    #region Properties

    /// <summary>
    /// Gets or sets the image of the TransImage.
    /// </summary>
    public Image Image
    {
        get { return this.BackgroundImage; }
        set
        {
            if (value != null)
            {
                Bitmap bitmap = new Bitmap(value);
                bitmap.MakeTransparent();
                this.BackgroundImage = bitmap;
                Size = bitmap.Size;
            }
        }
    }

    #endregion

    /// <summary>
    /// Initializes a new instance of TransImage.
    /// </summary>
    /// <param name="image">Image that should
    /// have a transparent background.</param>
    public TransImage(Image image)
    {
        // Set styles to enable transparent background
        this.SetStyle(ControlStyles.Selectable, false);
        this.SetStyle(ControlStyles.SupportsTransparentBackColor, true);
        this.BackColor = Color.Transparent;
        this.Image = image;
    }
}

As simple as that. A basic panel with a transparent background, and of course, an Image property - Bitmap.MakeTransparent() does the rest. Panel is indeed a lovable control. As we proceed in this article, you'll find that I base most of my controls on Panel.

Well, the DockCaption handles the undocking of the DockPanel and the moving of the DockForm. Yeah, DockForm. When a DockPanel is undocked from its DockPane container, a DockForm is created, and the DockPanel is added to it. The DockForm is a custom drawn form which can be resized and moved, and looks much like the Visual Studio 2010 Docking Form.

Since the caption bar has been removed from the DockForm, the DockCaption takes care of the moving. This is where Win32 gets in our way - SendMessage and ReleaseCapture are used to do this.

When a DockPanel is added to a DockPane, and there's already a DockPanel docked to the side that the user wants to dock the new DockPanel to, the DockPane uses the already docked DockPanel's DockTab to add the new DockPanel as a TabPage. The user can then switch between DockPanels.

The DockTab inherits the normal TabControl, and overrides its drawing methods. This means that it is completely customizable for the user and very flexible for us to use.

The plug-ins library is one of the shorter ones; however, it is probably the most complex too.
Since the PluginManager class has to locate dynamic link libraries, we should check whether they are actual plug-ins, check if they use the optional Plugin attribute, and if they do, store the found information in an IPlugin, and add the found plug-in to the form given by the user if it's a UserControl.

So basically, most of these processes happen in the LoadPlugins method. However, the LoadPlugins method is just a wrapper that calls LoadPluginsInDirectory with the PluginsPath set by the user. Now, the LoadPluginsInDirectory method loops through all the files in the specific folder, checks whether their file extension is ".dll" (which indicates that the file is a code library), and then starts the whole "check if library contains plug-ins and check if the plug-ins have any attributes" process.

This is done with the Assembly class, located in the System.Reflection namespace:

Assembly a = Assembly.LoadFile(file);

Then, an array of System.Type is declared, which is set to a.GetTypes(). This gives us an array of all types (classes, enums, interfaces, etc.) in the assembly. We can then loop through each Type in the Type array and check whether it is an actual plug-in, by using this little trick:

(t.IsSubclassOf(typeof(IPlugin)) == true || t.GetInterfaces().Contains(typeof(IPlugin)) == true)

Yeah - simple - this simply can't go wrong. Well, we all know that interfaces can't get initialized like normal classes. So, instead, we use the System.Activator class' CreateInstance method:

IPlugin currentPlugin = (IPlugin)Activator.CreateInstance(t);

Boom. We just initialized an interface like we would with a normal class. Neat, huh? Now, we just need to set up the initialized interface's properties to match the options of the PluginManager and the current environment.
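The scan-and-instantiate idea is not specific to .NET reflection. As a rough, hypothetical sketch (none of these names exist in Storm), the same check-subclass-then-instantiate loop looks like this in Python:

```python
import inspect

class IPlugin:
    """Marker base class standing in for the article's IPlugin interface."""
    name = "unnamed"
    def enable(self):
        pass

def find_plugins(namespace):
    # Scan a name -> object mapping (e.g. vars(some_module)) for concrete
    # IPlugin subclasses and instantiate each one, mirroring the
    # IsSubclassOf/GetInterfaces check followed by Activator.CreateInstance.
    plugins = []
    for obj in namespace.values():
        if inspect.isclass(obj) and issubclass(obj, IPlugin) and obj is not IPlugin:
            plugins.append(obj())
    return plugins

# Two classes standing in for the contents of a scanned library:
class Hello(IPlugin):
    name = "hello"

class NotAPlugin:
    pass

loaded = find_plugins({"Hello": Hello, "NotAPlugin": NotAPlugin})
print([p.name for p in loaded])  # ['hello']
```

The point of the sketch is only the filter step: anything that is a class, derives from the marker type, and is not the marker type itself gets instantiated; everything else is skipped.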
This can be used by the creator of the plug-ins to create more interactive plug-ins. When we've done this, we simply add the IPlugin to the list of loaded plug-ins in the PluginManager.

However, the plug-ins loaded by the PluginManager aren't enabled by default. This is where the user has to take some action. The user has to loop through all the IPlugins in the PluginManager.LoadedPlugins list, and call the PluginManager.EnablePlugin(plugin) method on each.

Now, if you have, for example, a plug-in managing form in your application, like Firefox, for example, you can use the PluginManager.GetPluginAttribute method to get an attribute containing information about the plug-in, if provided by the creator of the plug-in.

The way this works is by creating an object array and setting it to the result of the System.Type method GetCustomAttributes(). The variable "type" is set to be the plug-in's Type property, which is set in the loading of a plug-in:

object[] pAttributes = type.GetCustomAttributes(typeof(Plugin), false);

Add it to the list of plug-ins:

attributes.Add(pAttributes[0] as Plugin);

And, when we're done looping, we'll finally return the list of found attributes.

Since I love my TextEditor, I will give you a little preview of what it's capable of. And it's not a little ;) As you might have pictured already, this library has incredibly many classes/enums/interfaces/namespaces. Actually, there are so many that I won't put up a class diagram or explain the links between all the classes.

The TextEditor is basically just a container of the class TextEditorBase; it is actually TextEditorBase that contains all the logic for doing whatever you do in the TextEditor. The TextEditor only manages its four TextEditorBases along with splitters when you've split up the TextEditor in two or more split views.
However, the TextEditorBase doesn't take care of the drawing; it simply contains a DefaultPainter field which contains the logic for rendering all the different stuff. Whenever drawing is needed, the TextEditorBase calls the appropriate rendering methods in the DefaultPainter. The DefaultPainter also contains a method named RenderAll which, as you might've thought already, renders all the things that are supposed to be rendered in the TextEditor.

Since the different highlighting modes are defined in XML sheets, an XML sheet reader is required. The LanguageReader parses a given XML sheet and tells the parser how to parse each token it finds in the typed text in the TextEditor. A user does not use the LanguageReader directly; the user can either use the SetHighlighting method of a TextEditor, which is a wrapper, or use the TextEditorSyntaxLoader.SetSyntax method.

Unfortunately, I can't take credit for it all. I based it on DotNetFireball's CodeEditor; however, the code was so ugly, inefficient, and unstructured that it would probably have taken me less time to remake it from scratch than to fix all these things. The code still isn't really that nice; however, it is certainly better than before.

Update: Since I have now gone through all the source code and documented and updated it to fit my standards, I claim this TextEditor as my own work. However, the way I do things is still the same as the original, therefore I credit the original creators. I should probably mention that DotNetFireball did not create the CodeEditor. They simply took another component, the SyntaxBox, and changed its name. Just for your information.

So, as you can see (or, I certainly hope you can), it is a gigantic project, which is hard to manage, and I have one piece of advice for you: don't do this at home.
It has taken so much of my time, not that I regret it, but really, if such a framework already exists, why not use it? Making your own would be lame. I'm not saying this for my own gain, so I can get more users; I'm saying this because I don't want you to go through the same things I did for such 'basic' things. (Not really basic, but stuff that modern users require applications to have.)

So yeah, I suppose that this is it. The place where you say 'enjoy' and leave the last notes, etc. Yeah, enjoy it, and make good use of it - and let me see some awesome applications made with this, please :)

This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)
http://www.codeproject.com/Articles/42799/Storm-the-world-s-best-IDE-framework-for-NET?fid=1550096&df=90&mpp=10&sort=Position&spc=None&tid=4324846
25 June 2008 22:31 [Source: ICIS news]

WASHINGTON - The High Court ruled that punitive damages of $2.5bn (€1.6bn) assessed against ExxonMobil by a lower federal court were excessive and said that the punitive or punishment amount should instead reflect the $507.5m assessed by the federal jury for compensatory damages.

The Supreme Court sent the 19-year-old case back to the appellate court level with the instruction to reconsider the jury's $2.5bn punitive award.

ExxonMobil chief executive Rex Tillerson recognized the ruling with a subdued statement that lacked any celebratory terms, saying that "The Valdez spill was a tragic accident and one which the corporation deeply regrets". Tillerson said that ExxonMobil took immediate responsibility for the 24 March 1989 tanker spill at Prince William Sound, Alaska.

A federal court jury had initially assessed $5bn punitive damages against ExxonMobil in addition to the $507.5m compensatory assessment, but a federal appeals court cut that punitive penalty in half. The High Court's decision on Wednesday means that the initial $5bn punitive damages award will likely be reduced by about 90% to $507m.

The ruling was welcomed by the broader US business community. Manufacturers in general were worried that, if allowed to stand, the $2.5bn punitive award against the energy company would serve as a precedent that could influence a broad range of future product liability rulings.

($1 = €
http://www.icis.com/Articles/2008/06/25/9135494/court-gives-big-win-to-exxonmobil-and-us-business.html
This is your resource to discuss support topics with your peers, and learn from each other.

08-12-2008 05:43 AM

Hi Community. I have been looking for the best practice to convert "tim" (Java statement below), in milliseconds since 1970 midnight, to a human readable Date & Time. I know roughly about SimpleDateFormat() but am not sure how to implement it. If anyone knows or can provide sample code that would be great.

****
String tim = Long.toString(l.getTimestamp()); // timestamp
****

Thanks in advance

Solved! Go to Solution.

08-12-2008 08:49 AM

08-12-2008 09:43 PM

Many thanks for your reply. I tried to use the above lines of code as is and got a few errors.

java:49: cannot find symbol
symbol : class SimpleDateFormat
location : class ................
.....................................

I also used import net.rim.device.api.i18n.Format.*; But I am still getting an error. Pardon me for my inexpertise with Java. Thanks again

08-12-2008 10:49 PM - edited 08-13-2008 12:27 PM

Let me ask seriously: Do you program w/o using the Javadocs? I can't even tie my shoes without looking at the Javadocs first - they're indispensable. If you open the Javadocs and browse through the alphabetical list of classes you'll find SimpleDateFormat in net.rim.device.api.i18n (Format isn't a package, it's a class so you shouldn't be trying to do a wildcard import with it).

import net.rim.device.api.i18n.SimpleDateFormat;

Even better, if you use Eclipse you can put your cursor on the SimpleDateFormat word and hit ctrl-shift-m and it will add the import for you. In NetBeans do ctrl-shift-i.

08-12-2008 11:28 PM

Sorry Richard, for not doing my homework. I did have a look at the Javadocs and with my limited knowledge of Java I did have import net.rim.device.api.i18n.SimpleDateFormat.*; But after looking at your reply I removed the (.*) at the end of my import. Since all the imports in my code had .* at the end I assumed that was the syntax, which wasn't true. My sincere thanks!

Regards
Adi

08-13-2008 03:15 AM

if you use Eclipse you can put your cursor on the SimpleDateFormat word and hit ctrl-shift-m and it will add the import for you.

08-13-2008 04:59 AM

Thanks Simon. I am using the JDE and would probably like to use Eclipse down the track; as you described, there are benefits in doing that.

08-13-2008 05:15 AM
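Setting the BlackBerry APIs aside, the conversion this thread asks about (milliseconds since the 1970 epoch to a readable date and time) works the same way in any language. Here is an illustrative Python round trip; the timestamp value is made up for the example, standing in for whatever l.getTimestamp() would return:

```python
from datetime import datetime, timezone

# A made-up epoch timestamp in milliseconds (like l.getTimestamp())
millis = int(datetime(2008, 8, 12, 5, 43, tzinfo=timezone.utc).timestamp() * 1000)

# Convert back to a human-readable date and time (the SimpleDateFormat step)
dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M"))  # 2008-08-12 05:43
```

The two gotchas carry over to Java as well: the epoch value is in milliseconds, not seconds, and the formatted result depends on which time zone you convert into.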
https://supportforums.blackberry.com/t5/Java-Development/HOW-TO-Timestamp-gt-simpledateformat/m-p/30209/highlight/true
Gauge Charts

Gauge charts combine a pie chart and a doughnut chart to create a "gauge". The first chart is a doughnut chart with four slices. The first three slices correspond to the colours of the gauge; the fourth slice, which is half of the doughnut, is made invisible. A pie chart containing three slices is added. The first and third slice are invisible so that the second slice can act as the needle on the gauge. The effects are done using the graphical properties of individual data points in a data series.

from openpyxl import Workbook
from openpyxl.chart import PieChart, DoughnutChart, Series, Reference
from openpyxl.chart.series import DataPoint

data = [
    ["Donut", "Pie"],
    [25, 75],
    [50, 1],
    [25, 124],
    [100],
]

# based on
wb = Workbook()
ws = wb.active
for row in data:
    ws.append(row)

# First chart is a doughnut chart
c1 = DoughnutChart(firstSliceAng=270, holeSize=50)
c1.title = "Code coverage"
c1.legend = None

ref = Reference(ws, min_col=1, min_row=2, max_row=5)
s1 = Series(ref, title_from_data=False)

slices = [DataPoint(idx=i) for i in range(4)]
slices[0].graphicalProperties.solidFill = "FF3300"  # red
slices[1].graphicalProperties.solidFill = "FCF305"  # yellow
slices[2].graphicalProperties.solidFill = "1FB714"  # green
slices[3].graphicalProperties.noFill = True  # invisible
s1.data_points = slices
c1.series = [s1]

# Second chart is a pie chart
c2 = PieChart(firstSliceAng=270)
c2.legend = None

ref = Reference(ws, min_col=2, min_row=2, max_col=2, max_row=4)
s2 = Series(ref, title_from_data=False)

slices = [DataPoint(idx=i) for i in range(3)]
slices[0].graphicalProperties.noFill = True  # invisible
slices[1].graphicalProperties.solidFill = "000000"  # black needle
slices[2].graphicalProperties.noFill = True  # invisible
s2.data_points = slices
c2.series = [s2]

c1 += c2  # combine charts

ws.add_chart(c1, "D1")

wb.save("gauge.xlsx")
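Because the needle's position is determined entirely by the relative sizes of the pie's three slices, a value-to-slices helper can be handy. The function below is not part of openpyxl; it is a sketch that assumes the pie's grand total is chosen to equal the doughnut's (here 200) so that the two invisible half-circles line up:

```python
def needle_slices(value, vmin=0.0, vmax=100.0, total=200.0, needle_width=1.0):
    """Return [lead, needle, tail] pie values placing the needle at `value`.

    The visible gauge is half the circle (total/2 units); the invisible
    lead slice positions the thin needle slice within that half.
    """
    frac = (value - vmin) / (vmax - vmin)                    # 0.0 .. 1.0 across the gauge
    lead = max(0.0, frac * (total / 2) - needle_width / 2)   # clamp to avoid a negative slice
    return [lead, needle_width, total - lead - needle_width]

# A mid-scale reading puts the needle exactly at the top of the gauge:
print(needle_slices(50))  # [49.5, 1.0, 149.5]
```

Note that the worked example above uses a pie total of 150 rather than 200; this helper instead standardises on the doughnut's total so the half-circle alignment is exact.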
http://openpyxl.readthedocs.io/en/latest/charts/gauge.html
F# is similar to the Caml programming language family, and was partially inspired by OCaml. However, the F# compiler implementation is a new re-implementation of a Caml-style language, and some relatively minor differences in the implemented languages exist, much as relatively minor differences exist between "Caml Light" and "OCaml". One of the explicit aims of F# is to make it feasible to cross-compile F# code and other ML code such as OCaml code, at least for the core computational components of an application. Cross-compilation between OCaml and F# has been applied to very substantial code bases, including the F# compiler itself, the Abstract IL library and the Static Driver Verifier product shipped by Microsoft. However, F# and OCaml are different languages, so programmers need to be aware of the differences in order to develop code that can be compiled under both systems.

The OCaml language is documented at the OCaml website. This section documents the places where F# differs from the OCaml language, i.e., where OCaml code will either not compile or will execute with a different semantics when compiled with F#. This list was compiled with reference to OCaml 3.06, so later updates to OCaml may not be mentioned. In addition, the list does not mention differences in the libraries provided. If any further differences are detected please contact the F# team and we will attempt to either fix them or document them.

In summary:

- Functors, OCaml-style objects, labels and optional arguments are not supported. Some common functors like Set.Make and Hashtbl.Make are simulated by returning records of functions.
- Some identifiers are now keywords.
- Top-level initialisation effects occur when a .dll is loaded.
- Some minor parsing differences.
- Strings are Unicode and immutable.
- The "include" declaration is not supported.
- Two top-level definitions with the same name are not allowed within a module or a module type.
- Type equations may not be abstracted.
- Constraining a module by a signature may not make the values in the module less polymorphic.
- The C stub mechanisms to call C from OCaml are not supported. (F# has good alternatives that involve no coding in C at all.)
- Parsers generated with fsyacc have local parser state, while ocamlyacc has a single global parser state.
- Module abbreviations are just local aliases.
- Some additional restrictions are placed on immediately-recursive data.
- The in_channel and out_channel abstractions do not apply LF/CRLF translation on Windows platforms.

Functors, OCaml-style objects, labels and optional arguments are not supported. The design of F# aims for a degree of compatibility with the core of the OCaml language and omits these features. The Set, Map and Hashtbl modules in the library all support pseudo-functors that accept comparison/hash/equality functions as input and return records of functions.

Some identifiers are now keywords. For example, the identifiers null and inline are keywords in F#. See the informal language specification for full details. Some identifiers are not keywords but are reserved for future use.

Some operator names are used for quotations. All operators beginning with <@ or ending with @> are reserved for use as quotation-processing operators.

Top-level initialisation effects occur when a .dll is loaded or a top-level module is first referenced. In OCaml the top-level value bindings are executed first by module, then by definition order within a module. In F#, the initialisation sequence within a module is still the definition order; however, module initialisation may occur at any point prior to the first use of any item within a top-level module. This need not be when the application starts. An application itself initializes by eagerly executing the bindings in the last module specified on the command line when the application was compiled.

There are some minor parsing differences. The syntax !x.y.z parses as !(x.y.z) rather than (!x).y.z.
OCaml supports the original parsing partly by making case distinctions in the language grammar (uppercase identifiers are module names, lower case identifiers are values, fields and types). However, F# does not lexically distinguish between module names and value names, e.g., upper case identifiers can be used as value names. However, in order to maintain compatibility with OCaml one would then still have to modify the parsing of long identifiers based on case. Although we would prefer to follow the original OCaml syntax, we have decided to depart from OCaml at this point.

For some reason, OCaml allows constructor application without parentheses, e.g., given the type t = A of int option, OCaml accepts A Some i. F# rejects this syntax.

OCaml gives type annotations on patterns a precedence lower than that of tuple patterns. This means that in OCaml (x : int, y : int) is not a legal pattern, and (x, y : int) will give a type error since it is trying to assert that x,y has type int. In contrast, F# binds type annotations with lower precedence, so (x : int, y : int) is legal and annotates each parameter. The OCaml approach is OK except when you have to start writing more type annotations, which is more common in F# code.

Other minor parsing differences may also be present in any particular release, since the F# parser is a complete from-scratch re-implementation of an ML-like language.

Strings are Unicode and immutable. This has a number of follow-on effects. For example, some of the library signatures differ, e.g., the IO function input accepts a mutable byte[] buffer rather than a string. Chars are "wide characters", giving Unicode support at the expense of breaking the equivalence between characters and bytes.
To convert between byte arrays and strings you must call library functions such as the following, defined in the Bytearray module in F#'s mllib.dll:

let ascii_to_string (b:byte[]) = System.Text.Encoding.ASCII.GetString(b)
let string_to_ascii (s:string) = System.Text.Encoding.ASCII.GetBytes(s)

Module abbreviations are just abbreviations. It is common for ML programmers to use abbreviations such as module M = Matrix in their module signatures and module implementations. These are just abbreviations: they do not define a new module, nor does other code referring to the given module see names such as M within the module namespace. This choice stems from the fact that the facility is nearly always used for local abbreviations, and many programmers are surprised to find that the abbreviations become part of the published interface of their module, and sometimes even stop using abbreviations as a result. This decision is reviewed from time to time and at some point we may support an --ml-compatibility option that supports the alternative treatment.

Two top-level definitions with the same name are not allowed within a module or a module type. For example,

let x = 1
let x = 3

will give a compilation error. Duplicates are allowed in modules constrained by signatures.

Type equations may not be abstracted. Type equations (as opposed to new type definitions) may not be hidden by a signature. For example, the type abbreviation

type x = int

constrained by the signature

type x

will give an error. But the following constructed type (here X is a data constructor),

type x = X of int

constrained by the same signature, will not, and will compile.

Constraining a module by a signature may not make the values in the module less polymorphic.
That is, values in modules may not be more polymorphic than the types given in the corresponding signature. For example, a compile-time error will occur if a module declares

let f x = x

and the signature declares

val f : int -> int

The value in the module must be constrained explicitly to be less polymorphic:

let f (x:int) = x

This can be annoying because extra type annotations are needed, but it greatly simplifies compilation. In addition, the code produced turns out to be more efficient.

Some additional restrictions are placed on immediately-recursive data. OCaml supports "recursion through data types using 'let rec'" to create "infinite" (i.e., self-referential) data structures. F# both extends this feature (see the advanced section of the manual) and places some additional restrictions. In particular, you can't use recursive 'let rec' bindings through immutable fields except in the assembly where the type is declared. This means

let rec x = 1 :: x

is not permitted. This restriction is required to make sure the head/tail fields of lists may be made immutable in the underlying assembly, which is ultimately more important than supporting all variations on this rarely-used feature. However, note that

type node = { x: int; y: node }
let rec myInfiniteNode = { x=1; y=myInfiniteNode }

is still supported since the "let rec" occurs in the same assembly as the type definition, and

type node = node ref
let rec myInfiniteNode = { contents = myInfiniteNode }

is supported since "contents" is a mutable field of the type "ref".

(When compiling for .NET Versions 1.0 and 1.1 only.) There are two kinds of array types. The first is the truly polymorphic set of F# array types, i.e., 'a array. These are correctly polymorphic in the sense that you may write new polymorphic code that manipulates these values. However, because of the lack of support for generics in the CLR these array types are always compiled to the .NET type object[].
A rich set of polymorphic operations over these array types is provided in the Array module.

.NET array types are also provided, e.g., int[] or string[]. These are not truly polymorphic in the sense that the F# compiler must be able to infer the exact .NET array types manipulated by any code you write. If you want to write new polymorphic operations over these types then you must duplicate your code for each new array type you wish to manipulate. (This also means you can't use these types as building blocks for new F# data structures such as hash tables – use the polymorphic array types above instead. This is what the built-in Hashtable module does.) A rich set of pseudo-polymorphic operations over these array types is provided in the Microsoft.FSharp.Compatibility.CompatArray module. These are pseudo-polymorphic because the code will be duplicated and type-specialized at each call site.

The C stub mechanisms to call C from OCaml are not supported. Instead, [<DllImport(...)>] attributes can be used to declare stubs directly in F#. Pinning and allocation can also be done directly from F#. The F# Wiki has extended notes on using C from F#.

Parsers generated with fsyacc have local parser state, while ocamlyacc has a single global parser state. Parsers generated by fsyacc.exe provide location information within parser actions. However, that information is not available globally, but rather is accessed via the functions available on the following local variable, which is available in all parser actions:

parseState : 'a Microsoft.FSharp.Primitives.ParserState.Provider

However, this is not compatible with the parser specifications used with OCamlYacc and similar tools, which make a single parser state available globally. If you wish to use a global parser state (e.g., so your code will cross-compile with OCaml) then you can use the functions in this file.
You will need to either generate the parser with the '--ml-compatibility' option or add the code Parsing.set_parse_state parseState; at the start of each action of your grammar. The functions below simply report the results of corresponding calls to the latest object specified by a call to set_parse_state. Note that there could be unprotected multi-threaded concurrent access to the parser information, so you should not in general use these functions if there may be more than one parser active, and should instead use the functions directly available from the parseState object.

Missing compatibility modules. OCaml programmers will notice that compatibility wrappers for OCaml libraries are not always available with the distribution. The F# community and Wiki may provide pointers or samples that provide these wrappers. It is also sometimes simpler to access the .NET Framework Class Library directly, e.g., it may be easier to use System.Text.RegularExpressions than to use the OCaml Regexp package, or System.Net.Sockets for the socket portion of the OCaml Unix library, and System.Windows.Forms as a basic windowing package.

On Windows, OCaml's open_in abstraction opens text files in a mode where output_string "\n" (i.e. LF or line feed) outputs the two characters "\r\n", and this mapping is reversed on input. This translation is not applied by F#: output_string "\n" will write a single LF character, and output_string "\r\n" must be used explicitly to get the two characters. The same applies to uses of printf, fprintf, eprintf and related functions.
http://web.archive.org/web/20080410181630/http:/research.microsoft.com/fsharp/manual/ml-compat.aspx
I found it impossible to reference a generic class defined from inside the shell. This issue is reproducible on both Debian and OS X. For example:

```
Mono C# Shell, type "help;" for help

Enter statements below.
csharp> public class Foo {
> 
> }
csharp> typeof(Foo)
Foo
csharp> public class Bar<T> {
> 
> }
csharp> typeof(Bar<int>)
(1,2): error CS0584: Internal compiler error: Object reference not set to an instance of an object
```

Oddly enough, the only way I was able to work around this was by giving a one-liner definition:

```
csharp> class Bar<T> { }
csharp> typeof(Bar<int>)
Bar`1[System.Int32]
```

Fixed in master
https://xamarin.github.io/bugzilla-archives/22/22393/bug.html
In this example, we are given a noisy series of data points which we want to fit to an ellipse. The equation for an ellipse may be written as a nonlinear function of angle, $\theta$ ($0 \le \theta < 2\pi$), which depends on the parameters $a$ (the semi-major axis) and $e$ (the eccentricity): $$ r(\theta; a,e) = \frac{a(1-e^2)}{1-e\cos\theta}. $$ To fit a sequence of data points $(\theta, r)$ to this function, we first code it as a Python function taking two arguments: the independent variable, theta, and a tuple of the parameters, p = (a, e). The function we wish to minimise is the difference between this model function and the data, r, defined as the function residuals:

def f(theta, p):
    a, e = p
    return a * (1 - e**2)/(1 - e*np.cos(theta))

def residuals(p, r, theta):
    return r - f(theta, p)

We also need to give leastsq an initial guess for the fit parameters, say p0 = (1, 0.5). The simplest call to fit the function would then pass to leastsq the objects residuals, p0 and args=(r, theta) (the additional arguments needed by the residuals function):

plsq = leastsq(residuals, p0, args=(r, theta))

If at all possible, however, it is better to also provide the Jacobian (the first derivative of the fit function with respect to the parameters to be fitted). Expressions for these are straightforward to calculate and implement: \begin{align*} \frac{\partial f}{\partial a} &= \frac{(1-e^2)}{1-e\cos\theta},\\ \frac{\partial f}{\partial e} &= \frac{a(1-e^2)\cos\theta - 2ae(1-e\cos\theta)}{(1-e\cos\theta)^2}. \end{align*} However, the function we wish to minimise is the residuals function, $r - f$, so we need the negatives of these derivatives. Here is the working code and the fit result.
import numpy as np
from scipy import optimize
import pylab

def f(theta, p):
    a, e = p
    return a * (1 - e**2)/(1 - e*np.cos(theta))

# The data to fit
theta = np.array([0.0000, 0.4488, 0.8976, 1.3464, 1.7952, 2.2440, 2.6928,
                  3.1416, 3.5904, 4.0392, 4.4880, 4.9368, 5.3856, 5.8344, 6.2832])
r = np.array([4.6073, 2.8383, 1.0795, 0.8545, 0.5177, 0.3130, 0.0945,
              0.4303, 0.3165, 0.4654, 0.5159, 0.7807, 1.2683, 2.5384, 4.7271])

def residuals(p, r, theta):
    """ Return the observed - calculated residuals using f(theta, p). """
    return r - f(theta, p)

def jac(p, r, theta):
    """ Calculate and return the Jacobian of residuals. """
    a, e = p
    da = (1 - e**2)/(1 - e*np.cos(theta))
    de = (-2*a*e*(1-e*np.cos(theta)) + a*(1-e**2)*np.cos(theta))/(1 - e*np.cos(theta))**2
    # With col_deriv=True the Jacobian is returned with one row per
    # parameter, i.e. shape (n_params, n_points); the signs are flipped
    # because the residuals are r - f.
    return np.array((-da, -de))

# Initial guesses for a, e
p0 = (1, 0.5)
plsq = optimize.leastsq(residuals, p0, Dfun=jac, args=(r, theta), col_deriv=True)
print(plsq)

pylab.polar(theta, r, 'x')
theta_grid = np.linspace(0, 2*np.pi, 200)
pylab.polar(theta_grid, f(theta_grid, plsq[0]), lw=2)
pylab.show()
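The col_deriv convention is easy to get wrong, so here is a minimal, self-contained sketch, separate from the ellipse example, showing the Jacobian shape leastsq expects when col_deriv=True. The straight-line model and its data are purely illustrative:

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic, noise-free straight-line data (purely illustrative).
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0

def residuals(p, y, x):
    m, c = p
    return y - (m * x + c)

def jac(p, y, x):
    # residual = y - (m*x + c): d(res)/dm = -x, d(res)/dc = -1.
    # With col_deriv=True, leastsq expects one row per parameter,
    # i.e. an array of shape (n_params, n_points).
    return np.array((-x, -np.ones_like(x)))

p, ier = leastsq(residuals, (0.0, 0.0), Dfun=jac, args=(y, x), col_deriv=True)
print(p)  # approximately [2. 1.]
```

Without col_deriv=True, the same Jacobian would have to be transposed to shape (n_points, n_params).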
https://scipython.com/book/chapter-8-scipy/examples/non-linear-fitting-to-an-ellipse/
On Sat, Nov 06, 2004 at 05:26:16PM -0500, Karl Berry wrote:
> (2) missing #ifdef HAVE_LC_MESSAGES ?
> gcc ... -c makeinfo.c
> makeinfo.c: In function `main':
> makeinfo.c:530: `LC_MESSAGES' undeclared (first use in this function)
>
> If Ultrix does not even have LC_MESSAGES, you might as well configure
> with --disable-nls. I haven't worked on an Ultrix system in 10 years,
> so if something better is needed, you'll have to send me patches.

Bernhard had written some longer (German) text to me from which I conclude that it is sufficient to protect the use of LC_MESSAGES:

#ifdef HAVE_LC_MESSAGES
  setlocale (LC_MESSAGES, "");
#endif

Thomas
http://lists.gnu.org/archive/html/bug-texinfo/2004-11/msg00017.html
Hello Guys, I have a project on my hands. I want to ensure that it goes smoothly. I am tasked with upgrading a 2003 environment to Server 2012 R2. I have my plans and research, but as always, it's wise to check with the community to get a solid plan forward. I am very grateful for your input! Here's the scenario. There are 2 servers. Server 1 is our file server. It also functions as our Domain Controller and our DHCP server. It runs Small Business Server 2003 (32-bit edition). Server 2 is our Terminal Server. It runs Server 2003 R2 Standard. We have a plan to upgrade this environment via a phased approach, as IT is critical to this organization's function. We're buying new servers in about a year and a half to two years' time; our hardware has some life in it. So it is the OS and environment we're talking about.

Step 1: Back up both servers fully.

Step 2: Upgrade Server 1 to Server 2012 R2 Standard so that it preserves its roles and function as a DC, DHCP server and file server. After the upgrade, we would like our Terminal Server running Server 2003 R2 Standard to function for a week with Server 1 serving as a 2012 DC, DHCP and file server.

Step 3: One week later, upgrade Server 2 to Server 2012 R2 Standard and move from Terminal Services to a Remote Desktop Services environment, preserving profile data/user data etc. We also want to configure RemoteApp.

Step 4: Tie up any loose configurations on the server end and client end.

***End of Plan***

Question 1: Do you see holes in my plan?

Question 2: I am unaware if there is a direct pathway/established procedure to upgrade Small Business Server 2003 (32-bit) to Server 2012 R2 Standard (64-bit). (We're good hardware-wise.) I see posted on this forum how to migrate Active Directory from Server 2003 to Server 2012 R2(...), but I do not know if it will be the same for SBS 2003 (32-bit).
Question 3: Can the existing Server 2003 Terminal Services environment work where Server 2 is a member server and Server 1 is a 2012 R2 Domain Controller, DHCP and file server?

Question 4: Is there a smooth pathway from Server 2003 Terminal Server to Server 2012 RDS to Server 2012 RemoteApp? (I think there should be.)

Any best-practice advice will be welcome. Additional info: we're talking 25 users via RDS. We want to add 2 to 5 more at most. Here are the specs and shipping dates from the manufacturer of each server from the Dell website via Service Tag: ***note - BOTH SERVERS WILL BE UPGRADED TO HAVE 24GB of RAM EACH*** Terminal Server: File Server: I humbly thank the community for your support and contribution. Kind Regards, Marlon R

Edited May 6, 2016 at 00:13 UTC

4 Replies

1. Install 2012R2 on your new server.
2. Create a 2012R2 virtual machine and promote it to a DC. Let it replicate, then transfer all FSMO roles to it.
3. Create a second 2012R2 virtual machine and make it your new file server. Install DFS. Create a DFS namespace and point it at your 2003 file share. Change all your workstations to reference the DFS share. Duplicate the files to the 2012R2 file server. Add another DFS reference to the 2012R2 share, but leave it disabled. When ready, perform a final sync, turn on the 2012R2 link and turn off the 2003 link. Then, migrate your TS to a new 2012R2 VM.

Thank you for clarifying that - good to know. I would think, starting from scratch, you could configure the new DC/DHCP/DNS server and the new file server, get all the files transferred and the FSMO roles transferred in about 4 hours (assuming about 350GB of files). That's assuming that you already have the DFS namespace defined.
https://community.spiceworks.com/topic/1595810-upgrade-2-servers-from-server-2003-product-family-to-2-server-2012-r2-standard?page=1
Local privilege escalation via snapd socket Bug Description NOTE: Hello, snap team! The below is my full technical write up. My apologies if this is too much info or the wrong format, I figured I would include the entire thing as is. I hope this is helpful. I would like to disclose publicly after you have patched. Thanks for your hard work!!! ======= # (https:/ Management of locally installed snaps and communication with this online store are partially handled by a systemd service called "snapd" (https:/ # Vulnerability Overview ## Interesting Linux OS Information The snapd service is described in a systemd service unit file located at /lib/systemd/ Here are the first few lines: ``` [Unit] Description=Snappy daemon Requires= ``` This leads us to a systemd socket unit file, located at /lib/systemd/ The following lines provide some interesting information: ``` [Socket] ListenStream= ListenStream=. ## Vulnerable Code Being an open-source project, we can now move on to static analysis via source code. The developers have put together excellent documentation on this REST API available here: https:/ The API function that stands out as highly desirable for exploitation is "POST /v2/create-user", which is described simply as "Create a local user". The documentation tells us that this call requires root level access to execute. Reviewing the trail of code brings us to this file: https:/ Let's look at this line: ``` ucred, err := getUcred( ``` This is calling one of golang's standard libraries to analyze the ancillary data passed via the kernel's socket operations. This is a fairly rock solid way of determining the permissions of the process accessing the API. Using a golang debugger called delve, we can see exactly what this returns while executing the "nc" command from above. Here is the delve details when we break at the function above: ``` > github. ... 
109: ucred, err := getUc: ``` func (wc *ucrednetConn) RemoteAddr() net.Addr { return &ucrednetAddr{ } ``` ...and then a bit more in this one: ``` func (wa *ucrednetAddr) String() string { return fmt.Sprintf( } ``` ..and is finally parse by this function: ``` func ucrednetGet( ... for _, token := range strings. var v uint64 ... } else if strings. if v, err = strconv.. ... => 41: for _, token := range strings. ... (dlv) print remoteAddr "pid=5127; ``` If we imagine the function splitting this string up by ";" and iterating through, we see that there are two sections that could potentially overwrite the first "uid=", if only we could influence them. The first ("socket=) ... => 210: func (c *conn) LocalAddr() Addr { ... (dlv) print c.fd ... laddr: net.Addr( Name: "/run/snapd. Net: "unix",}, raddr: net.Addr( ``` There is no remote address name, as we are simply using netcat to make the connection. The "@" symbol could be related to the concept of an abstract namespace for Unix domain sockets, as these start with that character. The whole concept of ancillary socket data that snapd's permission model relies on can be abused here. We can certainly influence that remote address, because that remote address is US!. client_ # Connect to the snap daemon client_ ``` Now watch what happens in the debugger when we look at the remoteAddr variable again: ``` > github. ... => 41: for _, token := range strings. ... (dlv) print remoteAddr "pid=5275; ```. ... => 65: return pid, uid, socket, err ... (dlv) print uid 0 ``` # Weaponizing With the knowledge to bypass the user access checks, developing a working exploit is possible. We will target the "create-user" function, as it eventually uses this file to run Linux shell commands to create a new user, complete with sudo rights and all: https:/ *NOTE: This module looks suspiciously vulnerable to OS command injection, as well. 
As the function itself creates users with root privileges, I did not bother to exploit those, but this should be thoroughly reviewed as well. Please sanitise all inputs to OS commands!!!* This function can create a local Linux user based on a username and an SSH key that are registered in Ubuntu's snap developer portal. Anyone can create an account here, so this is not a limiting factor. To exploit using this POC, simply create an account at https:/ **IMPORTANT: YOU MUST CREATE AN UBUNTU ACCOUNT ONLINE AND UPLOAD AN SSH KEY TO EXPLOIT** dirty_sock.py -u "<email address hidden>" -k "id_rsa" The exploit will do the following:
- Generate a random name for the sock file, including the "dirty sock" of ";uid=0;"
- Create a socket bound to this file
- Initiate a connection from the dirty sock to the snapd API
- POST the correct specs to /v2/create-user
- snapd queries the Ubuntu developer portal to gather the username and SSH public key associated with the account you provide
- snapd creates a local Linux user based on those details
- Verify a successful response
- SSH to localhost using the new account and your SSH private key

From this new account, you can execute "sudo" commands with no password.

# Protection / Remediation

As far as fixing this, the snap folks will know better than I what the best route is. I would advise sticking with the variables output from the golang standard library - get rid of all that concatenating and re-parsing stuff in ucrednet.go. Thanks for reading!!! Interestingly this doesn't affect my openSUSE system since we've used "adduser" which is not used in this family of systems. I second Seth's comment, this is some fantastic work Chris. I've attached a quick fix for this issue and confirmed it prevents the attack. Discussing this with Zyga right now. We might do something different, and drop the current approach entirely. This is what we discussed. +1 on John's patch. Wow, this was a fast response.
Definitely the most pleasant disclosure experience I have had. Great work! This does appear to fix the issue. I know very little about golang myself, though, and I am still curious as to why this line is necessary: ``` return fmt.Sprintf( ``` The pid, uid, and socket variables are already set nicely by the standard library. Is there a reason to concatenate them into this string and then pull them apart again later? Would it not be easier and safer to simply pass the object as is and continue to reference them individually? I'm sure there is probably some other requirement that I just don't see. Anyway, again great work and thank you for being so kind and addressing this so quickly. Have a great weekend! - Chris Hey Chris! We are very grateful for such a fantastic and responsible disclosure. As for your question, AFAIR the problem was encapsulation. In golang, everything that is capitalised is a public interface and can be accessed from other packages (roughly directories translate to packages). Anything that is not capitalised is private and can be only accessed from the package it belongs to. The standard golang abstraction around UNIX sockets simply doesn't expose the peer credentials directly so we had to hack around in a way that would still be compatible with the rest of the standard library. Hi Zygmunt, Thank you for the fast and detailed response. That makes perfect sense. Keep up the great work! FYI, we are still discussing disclosure of this bug and the timing of fixes, etc so please consider it embargoed (private) for the time being. Thanks for the report and the response of the snapd team. Chris, thanks again for your thorough report. We are now working through disclosure with the other distros, obtaining a CVE assignment, etc and will of course give full attribution to you on the coordinated release date (still TBD). Hi Jamie, Thank you for the follow-up! I will wait to hear back from you and the team. Have a great day!
- Chris Attaching regression testing patch FYI, I'm in the process of requesting a CVE and sent an email to the affected distributions for a CRD. The issue can be considered semi-public since a public commit refactored the offending code and fixed the issue along the way: https:/ Since the public commits do not reference this vulnerability, keeping this bug private until the agreed upon CRD (which is tentatively set for 2019-02-06 16:00 UTC. If that changes, I will update the bug). I will also make the bug public at the appropriate time. Hi Jamie, Thanks for the detailed update. Is the CRD the date when you are comfortable with me discussing the bug publicly, or is that a defined time after the CRD? Chris, the CRD is the date that the issue is considered public and once public, feel free to discuss publicly. An easy way for you to keep track of this is simply watching for when I mark this bug as Public Security. Thanks again for the report and responsibly disclosing the issue. As others have said, thanks for the careful disclosure Chris! For the record, this bug exists only between versions 2.28 and 2.37.0 of snapd. FYI, Fedora has not yet updated EPEL to 2.37.1, but I'm told it will happen today. The CRD is still set for 2019-02-06 16:00 UTC, but I may delay if Fedora is not updated yet. Please watch for when I mark this bug Public before disclosing the information. Thanks! Thanks Jamie! As of now, the upload for Fedora is prepared and uploaded but due to the mechanics of the Fedora archive, I'm told the updated package won't be available to Fedora users until late Thursday. Ubuntu has a policy of not releasing updates on Friday when possible, so we are delaying the CRD until Tuesday February 13th at 16:00 UTC. As always, please watch for when I mark this bug Public before disclosing the information. Chris, thanks again for your patience in this matter; it's really helping ensure users. Thanks! 
> so we are delaying the CRD until Tuesday February 13th at 16:00 UTC Whoops, I meant Tuesday February 12th at 16:00 UTC. OK, sounds good. Thanks Jamie! This bug was fixed in the package snapd - 2.35.5+18.10.1 --------------- snapd (2.35.5+18.10.1) cosmic:39:19 +0000 This bug was fixed in the package snapd - 2.34.2ubuntu0.1 --------------- snapd (2.34.2ubuntu0.1) xenial:54:00 +0000 This bug was fixed in the package snapd - 2.34.2+18.04.1 --------------- snapd (2.34.2+18.04.1) bionic:50:52 +0000 This bug was fixed in the package snapd - 2.34.2~14.04.1 --------------- snapd (2.34.2~14.04.1) trusty:55:31 +0000 FYI, the CVE assignment came later. This is CVE-2019-7304. Ubuntu 19.04 has 2.37.2, which is not affected. snapd upstream was fixed in 2.37.1. This is now public: - https:/ - https:/ Thanks again to everyone for your hard work, timely updates, and overall providing such a great disclosure experience. See you next time! - Chris Chris, I've just read your blog post at: https:/ There you install a snap in devmode, which does a bunch of things to demonstrate that the snap can access system resources via the vulnerability in <2.37. Just for the record, it's slightly undue to claim that the snap is exploiting the system in that scenario, because a snap in devmode already has full access to the system anyway. No need for any exploits. If you install a snap in devmode, you gave root to the snap: --devmode Put snap in development mode and disable security confinement If the snap was installed without devmode, it would not have access to the socket. Again, thanks for the report. Just wanted to clarify this point.
Some of the tech journalists covering this incorrectly claimed that my exploit would be bundled inside malicious snaps. This is where there is a bit of confusion, as you're 100% right - that snap would not have access to the socket, so that is not realistic. I've tried to correct folks where I can, but I think my blog posting is still correctly describing things. If you see something specific in the blog posting that should be corrected, please let me know. Thanks! ^ Sorry, just to add clarity: I am not demonstrating the exploit working from within a devmode snap. I am demonstrating a devmode snap packaged inside the exploit. Thanks for the clarification, Chris. We're in complete agreement. Hello Chris, thank you for contacting us. This is absolutely beautiful work, well done. I'll get the snapd team working on this. Thanks
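For readers who want to see the core of the bug in isolation, here is a hypothetical Python rendering of the vulnerable ';'-separated credential parsing. This is a simplification for illustration only, not the actual Go code from ucrednet.go:

```python
def ucrednet_get(remote_addr):
    """Parse a 'pid=...;uid=...;socket=...' credential string the way the
    vulnerable code did: iterate over ';'-separated tokens and let any
    later token silently overwrite an earlier one."""
    pid = None
    uid = None
    socket_path = ""
    for token in remote_addr.split(";"):
        if token.startswith("pid="):
            pid = int(token[len("pid="):])
        elif token.startswith("uid="):
            uid = int(token[len("uid="):])
        elif token.startswith("socket="):
            socket_path = token[len("socket="):]
    return pid, uid, socket_path

# Normal client: the kernel-supplied uid is the only uid token.
print(ucrednet_get("pid=5127;uid=1000;socket=/run/snapd.socket;@"))
# -> (5127, 1000, '/run/snapd.socket')

# Malicious client: binding the client socket to a file whose name
# contains ';uid=0;' injects a second uid token that wins.
print(ucrednet_get("pid=5275;uid=1000;socket=/run/snapd.socket;@/tmp/sock;uid=0;"))
# -> (5275, 0, '/run/snapd.socket')
```

The fix discussed above, passing the kernel-supplied credentials around as structured values instead of concatenating and re-parsing a string, removes the overwrite path entirely.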
https://bugs.launchpad.net/snapd/+bug/1813365
Benchmarking TensorFlow Lite on the New Raspberry Pi 4, Model B

When the Raspberry Pi 4 was launched, I sat down to update the benchmarks I've been putting together for the new generation of accelerator hardware intended for machine learning at the edge. Unfortunately, while there was a version of the official TensorFlow wheel ready for the launch of the Raspberry Pi 4, there were still problems with the community build of TensorFlow Lite. That just changed, so here goes…

Headline Results From Benchmarking

Using TensorFlow Lite we see a considerable speed increase when compared with the original results from our previous benchmarks using full TensorFlow.

Part I: Benchmarking

A More Detailed Analysis of the Results

⚠️ Warning: As per our previous results with the Raspberry Pi 4, the addition of a small fan, driven from the Raspberry Pi's own GPIO headers, was needed to keep the CPU temperature stable and prevent thermal throttling of the CPU.

These results can now be compared to our previously obtained benchmark results on the following platforms: the Coral Dev Board, the NVIDIA Jetson Nano, the Coral USB Accelerator with a Raspberry Pi, the original Movidius Neural Compute Stick with a Raspberry Pi, and the second-generation Intel Neural Compute Stick 2, again with a Raspberry Pi. Comparison was also made with the Xnor.ai AI2GO platform using their proprietary binary convolution network.

ℹ️ Information: The Raspberry Pi 3, Model B+, has no USB 3 support, so no results are available for the Coral USB Accelerator using USB on the Raspberry Pi 3. Results for both generations of the Movidius-based Compute Stick, the Movidius Neural Compute Stick and the Intel Neural Compute Stick 2, are not available on the Raspberry Pi 4, Model B, as the Intel OpenVINO framework does not yet work with Python 3.7.
You should not expect official support for the Intel Neural Compute Stick on the Raspberry Pi 4 in the near term. Our initial TensorFlow results on the new Raspberry Pi 4 showed a ×2 increase in performance. This is roughly in line with expectations, as with twice the NEON capacity of the Raspberry Pi 3 we would expect this order of speedup in performance for well-written NEON kernels. However, we see a significantly larger speed increase with TensorFlow Lite, with a ×3 to ×4 increase in inferencing speeds between our TensorFlow benchmark and the new results using TensorFlow Lite. This result is much larger than we saw when a similar comparison was made with the Raspberry Pi 3, where we saw only a ×2 increase in performance between the two packages. We are therefore seeing almost double the expected speed gain by using TensorFlow Lite over TensorFlow on the Raspberry Pi 4. This decrease in inferencing time brings the Raspberry Pi 4 directly into competition with both the NVIDIA Jetson Nano and the Movidius-based hardware from Intel.

⚠️ Warning: It is probable that the Movidius Neural Compute Stick and the Intel Neural Compute Stick 2 will show better performance when connected to the Raspberry Pi 4 using USB 3 rather than USB 2. However, until the OpenVINO framework supports Python 3.7 it is impossible to know for certain. Right now the Movidius-based hardware from Intel is not usable with the Raspberry Pi 4.

If you were looking at purchasing the NVIDIA Jetson Nano to use for machine learning, there now seems no reason to do so, as the Raspberry Pi 4 performs at a similar level but for half the cost.

Summary

Priced at $35 for the 1GB version, and $55 for the 4GB version, the new Raspberry Pi 4 is significantly cheaper than both the NVIDIA Jetson Nano and the Intel Neural Compute Stick 2, both of which cost $99.
Especially considering that, for the Compute Stick, this cost is in addition to the cost of the Raspberry Pi itself, which therefore comes to a total of $134. While the Coral Dev Board from Google is still the 'best in class' board, the addition of USB 3 to the Raspberry Pi 4 means that it is now also price competitive with the Dev Board.

Installing TensorFlow Lite

Before going ahead and installing TensorFlow Lite, update the system and install the build prerequisites.

$ sudo apt-get update
$ sudo apt-get install build-essential
$ sudo apt-get install git

ℹ️ Information: If you're working on an existing installation, and you already have the official version of TensorFlow installed, you should make sure you have uninstalled it first, by doing sudo pip3 uninstall tensorflow.

While there isn't yet a build of TensorFlow Lite specifically for Python 3.7, we can make use of one of the Python 3.5 builds. However, you'll need to make some tweaks before installation.

$ sudo apt-get install libatlas-base-dev
$ sudo apt-get install python3-pip
$ git clone
$ cd Tensorflow-bin
$ mv tensorflow-1.14.0-cp35-cp35m-linux_armv7l.whl tensorflow-1.14.0-cp37-cp37m-linux_armv7l.whl
$ pip3 install --upgrade setuptools
$ pip3 install tensorflow-1.14.0-cp37-cp37m-linux_armv7l.whl

⚠️ Warning: You will receive 'Runtime Warnings' when you import tensorflow. These aren't a concern and just indicate that the wheel was built under Python 3.5 and you're using it with Python 3.7. You can safely ignore the warnings.

Now that TensorFlow has been successfully installed, we need to install OpenCV, the Pillow fork of the Python Imaging Library (PIL), and the NumPy library.

$ sudo apt-get install python3-opencv
$ pip3 install Pillow
$ pip3 install numpy

We should now be ready to run our benchmarking scripts…
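The benchmarking scripts themselves boil down to timing repeated inference calls. Here is a minimal sketch of such a harness; the lambda stands in for interpreter.invoke() on a real TensorFlow Lite model, and this is not the actual benchmark code used for the article:

```python
import time

def benchmark(fn, warmup=5, runs=30):
    """Return mean wall-clock milliseconds per call of fn().

    Warm-up iterations are discarded so one-off costs (interpreter
    start-up, cache warming) do not skew the mean, which mirrors how
    inferencing benchmarks are normally run."""
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - t0) * 1000.0 / runs

# Stand-in workload; on a real board this would be interpreter.invoke()
# on a loaded TensorFlow Lite model.
ms = benchmark(lambda: sum(i * i for i in range(10000)))
print("%.2f ms per inference" % ms)
```

Reporting a mean over many runs, rather than a single timed call, is what makes comparisons like the ×3 to ×4 figures above meaningful.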
https://www.hackster.io/news/benchmarking-tensorflow-lite-on-the-new-raspberry-pi-4-model-b-3fd859d05b98
Implement Ajax call in Grails web-flow

In one of my recent projects, I wanted to use Grails web-flow with an Ajax call. It's very easy to implement web-flow with an Ajax call. Grails web-flow always tracks actions on the basis of the event id and the flow execution key. So, to implement an Ajax call in web-flow, we have to pass the event id and the flow execution key.

1. Let us assume we have the following web-flow code in Grails.

def stepFlow = {
    first {
        action {
            // code...
        }
        on("success") {
            // code...
        }.to("second")
    }
    second {
        on("third") {
        }.to("fourth")
    }
    fourth()
}

In this case, step is the web-flow name; for each call we have to pass step as the request URI. First, second and third are events; events are always tracked through the flow execution key.

2. We want to call the event third through an Ajax call. For this we have to write the following code.

function callAjaxFunctionInWebFlow(eventType, flowExecutionKey) {
    $.ajax({
        url: "project-name/step",
        type: 'POST',
        data: '_eventId=' + eventType + '&execution=' + flowExecutionKey,
        success: function (result) {
        },
        error: function (jqXHR, textStatus, errorThrown) {
        }
    })
}

3. To call this function from GSP we have to write the following code.

<a href='javascript:void(0)' onclick="callAjaxFunctionInWebFlow('third','${request.flowExecutionKey}')">Click</a>

In this call, 'third' is the event id that we want to call, and request.flowExecutionKey gives the current execution key that needs to be passed with each Ajax call. Hope it will help.

Thanks & Regards,
Mohit Garg
mohit@intelligrape.com
@gargmohit143

One question I have: so the Ajax call needs data to be returned from the webflow.
How can we return data from the webflow? I added some code in the third step:

def stepFlow = {
    first {
        action {
            // code...
        }
        on("success") {
            // code...
        }.to("second")
    }
    second {
        on("third") {
            def map = [:]
            map.id = '1'
            return map as json
            // render map as json
        }.to("fourth")
    }
    fourth()
}

Good example. There also now seems to be a plugin that helps with ajaxifying webflow.
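The data string passed to $.ajax above can also be built without jQuery; here is a small sketch in plain JavaScript. The function name is hypothetical, and URLSearchParams takes care of the form encoding:

```javascript
// Build the form-encoded body a Grails web-flow endpoint expects:
// the event id plus the current flow execution key.
function webflowBody(eventId, flowExecutionKey) {
  const params = new URLSearchParams();
  params.set("_eventId", eventId);
  params.set("execution", flowExecutionKey);
  return params.toString();
}

console.log(webflowBody("third", "e1s2"));
// -> _eventId=third&execution=e1s2
```

This produces exactly the '_eventId=...&execution=...' pair the post above hand-concatenates, with the added benefit that special characters in the execution key are percent-encoded automatically.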
http://www.tothenew.com/blog/implement-ajax-call-in-grails-web-flow/?replytocom=65172
Files:

In this first chapter we will see how to write a simple unit test for a class, and how to execute it.

Let's assume you want to test the behavior of our QString class. First, you need a class that contains your test functions. This class has to inherit from QObject:

#include <QtTest/QtTest>

class TestQString: public QObject
{
    Q_OBJECT
private slots:
    void toUpper();
};

Note that if both the declaration and the implementation of our test class are in a .cpp file, we also need to include the generated moc file to make Qt's introspection work.

Now that we finished writing our test, we want to execute it. Assuming that our test was saved as testqstring.cpp in an empty directory, we build the test using qmake to create a project and generate a makefile.

/myTestDirectory$ qmake -project

[Qt 4.5-snapshot-20100716]
http://doc.qt.nokia.com/4.5-snapshot/qtestlib-tutorial1.html
I've been trying to get a fix and can't find why the error keeps appearing. Pmin, Pmax, w, fi1 and fi2 have all been assigned finite values:

guess = Pmin + (Pmax-Pmin)*((1-w**2)*fi1 + (w**2)*fi2)

When I remove this line from the code, the same error appears at the next line of code, again for no reason I can think of.

Edit: Here is the chunk of code I was referring to:

def Psat(self, T):

You're missing a close paren in this line:

fi2 = 0.460*scipy.sqrt(1-(Tr-0.566)**2/(0.434**2)+0.494

There are three ( and only two ). I hope this will help you.
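For reference, here is one plausible way to balance the parentheses. The placement of the missing close paren is an assumption (it could equally belong elsewhere in the expression), and the value of Tr is made up purely so the line runs:

```python
import math

# Hypothetical reduced temperature, chosen only so the line evaluates.
Tr = 0.7

# Broken original (three '(' but only two ')'):
#   fi2 = 0.460*scipy.sqrt(1-(Tr-0.566)**2/(0.434**2)+0.494
# One plausible fix closes the sqrt() argument before adding 0.494:
fi2 = 0.460 * math.sqrt(1 - (Tr - 0.566)**2 / (0.434**2)) + 0.494
print(fi2)  # approximately 0.93 for Tr = 0.7
```

Note that Python often reports an unbalanced paren as a SyntaxError on the *next* line, which is exactly the "error moves when I delete the line" behaviour described in the question.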
https://www.edureka.co/community/14564/syntax-error-invalid-syntax-for-no-apparent-reason?show=14571
Hi, I want to remove one element pointed to by an iterator from a list. I read that when I use erase() on such a list with a particular iterator, it returns the iterator for the element after the one I removed. But now my question: is it correct not to capture what erase returns? Example:

#include <cstdio>
#include <list>
using namespace std;

int main()
{
    list<int> L;
    for(int i = 0; i < 5; i++)
        L.push_back(i+1);

    for(list<int>::iterator i = L.begin(); i != L.end(); i++)
        printf("%d\n", (*i));
    printf(".\n");

    for(list<int>::iterator i = L.begin(); i != L.end(); i++)
    {
        if((*i)==3)
            printf("Hola!\n");
        if((*i)==2)
        {
            L.erase(i); // or i=L.erase(i);
            printf("%d\n", (*i));
        }
    }

    for(list<int>::iterator i = L.begin(); i != L.end(); i++)
        printf("%d\n", (*i));
    return 0;
}

Because when there is L.erase(i); the printf() prints "2", but this "2" is no longer on the list... so something here is weird? But when I have i=L.erase(i); the printf() prints "3" (the next element), which is correct, but then I have to do i-- so as not to skip one position in the loop, right? (If I don't do i--, then at the next i++ the element with "3" will be skipped in the loop, and there is no "Hola!" in the output.) Can anyone explain this? Regards.
https://cboard.cprogramming.com/cplusplus-programming/71646-question-about-using-erase-lists-stl.html
DBMSs define long data as any character or binary data over a certain size, such as 255 characters. This data may be small enough to be stored in a single buffer, such as a part description of several thousand characters. However, it might be too long to store in memory, such as long text documents or bitmaps. Because such data cannot be stored in a single buffer, it is retrieved from the driver in parts with SQLGetData after the other data in the row has been fetched. An application can actually retrieve any type of data with SQLGetData, not just long data, although only character and binary data can be retrieved in parts. However, if the data is small enough to fit in a single buffer, there is generally no reason to use SQLGetData. It is much easier to bind a buffer to the column and let the driver return the data in the buffer. To retrieve long data from a column, an application first calls SQLFetchScroll or SQLFetch to move to a row and fetch the data for bound columns. The application then calls SQLGetData. SQLGetData has the same arguments as SQLBindCol: a statement handle; a column number; the C data type, address, and byte length of an application variable; and the address of a length/indicator buffer. Both functions have the same arguments because they perform essentially the same task: They both describe an application variable to the driver and specify that the data for a particular column should be returned in that variable. The major differences are that SQLGetData is called after a row is fetched (and is sometimes referred to as late binding for this reason) and that the binding specified by SQLGetData lasts only for the duration of the call. Regarding a single column, SQLGetData behaves like SQLFetch: It retrieves the data for the column, converts it to the type of the application variable, and returns it in that variable. It also returns the byte length of the data in the length/indicator buffer. 
For more information about how SQLFetch returns data, see Fetching a Row of Data.

SQLGetData differs from SQLFetch in one important respect. If it is called more than once in succession for the same column, each call returns a successive part of the data. Each call except the last returns SQL_SUCCESS_WITH_INFO and SQLSTATE 01004 (String data, right truncated); the last call returns SQL_SUCCESS. This is how SQLGetData is used to retrieve long data in parts. When there is no more data to return, SQLGetData returns SQL_NO_DATA. The application is responsible for putting the long data together, which might mean concatenating the parts of the data. Each part is null-terminated; the application must remove the null-termination character if concatenating the parts. Retrieving data in parts can be done for variable-length bookmarks as well as for other long data. The value returned in the length/indicator buffer decreases in each call by the number of bytes returned in the previous call, although it is common for the driver to be unable to discover the amount of available data and return a byte length of SQL_NO_TOTAL. For example:

// Declare a binary buffer to retrieve 5000 bytes of data at a time.
SQLCHAR       BinaryPtr[5000];
SQLUINTEGER   PartID;
SQLINTEGER    PartIDInd, BinaryLenOrInd, NumBytes;
SQLRETURN     rc;
SQLHSTMT      hstmt;

// Create a result set containing the ID and picture of each part.
SQLExecDirect(hstmt, "SELECT PartID, Picture FROM Pictures", SQL_NTS);

// Bind PartID to the PartID column.
SQLBindCol(hstmt, 1, SQL_C_ULONG, &PartID, 0, &PartIDInd);

// Retrieve and display each row of data.
while ((rc = SQLFetch(hstmt)) != SQL_NO_DATA) {
   // Display the part ID and initialize the picture.
   DisplayID(PartID, PartIDInd);
   InitPicture();

   // Retrieve the picture data in parts. Send each part and the number
   // of bytes in each part to a function that displays it. The number
   // of bytes is always 5000 if there were more than 5000 bytes
   // available to return (cbBinaryBuffer > 5000). Code to check if
   // rc equals SQL_ERROR or SQL_SUCCESS_WITH_INFO not shown.
   while ((rc = SQLGetData(hstmt, 2, SQL_C_BINARY, BinaryPtr, sizeof(BinaryPtr),
                           &BinaryLenOrInd)) != SQL_NO_DATA) {
      NumBytes = (BinaryLenOrInd > 5000) || (BinaryLenOrInd == SQL_NO_TOTAL) ?
                 5000 : BinaryLenOrInd;
      DisplayNextPictPart(BinaryPtr, NumBytes);
   }
}

// Close the cursor.
SQLCloseCursor(hstmt);

There are several restrictions on using SQLGetData. Generally, columns accessed with SQLGetData:

- Must be accessed in order of increasing column number (because of the way the columns of a result set are read from the data source). For example, it is an error to call SQLGetData for column 5 and then call it for column 4.
- Cannot be bound.
- Must have a higher column number than the last bound column. For example, if the last bound column is column 3, it is an error to call SQLGetData for column 2. For this reason, applications should make sure to place long data columns at the end of the select list.
- Cannot be used if SQLFetch or SQLFetchScroll was called to retrieve more than one row. For more information, see Using Block Cursors.

Some drivers do not enforce these restrictions. Interoperable applications should either assume they exist or determine which restrictions are not enforced by calling SQLGetInfo with the SQL_GETDATA_EXTENSIONS option.

If the application does not need all the data in a character or binary data column, it can reduce network traffic in DBMS-based drivers by setting the SQL_ATTR_MAX_LENGTH statement attribute before executing the statement. This restricts the number of bytes of data that will be returned for any character or binary column. For example, suppose a column contains long text documents. An application that browses the table containing this column might have to display only the first page of each document.
In this case, it can set SQL_ATTR_MAX_LENGTH to roughly the number of bytes in a page, and the driver will not transfer the rest of each document. Although this statement attribute can be simulated in the driver, there is no reason to do this. In particular, if an application wants to truncate character or binary data, it should bind a small buffer to the column with SQLBindCol and let the driver truncate the data.
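The part-by-part contract (successive calls returning SQL_SUCCESS_WITH_INFO until the final SQL_SUCCESS, each part null-terminated, the length/indicator value shrinking call by call) can be exercised without a driver. The sketch below is a hypothetical mock, not the ODBC API: mockGetData imitates the driver side of the contract, and fetchLongData plays the application's role of concatenating the parts while dropping each part's null terminator.

```cpp
#include <cstddef>
#include <cstring>
#include <string>

// Hypothetical stand-ins for the ODBC return codes discussed in the text.
constexpr int SQL_SUCCESS           = 0;
constexpr int SQL_SUCCESS_WITH_INFO = 1;
constexpr int SQL_NO_DATA           = 100;

// Mimics the SQLGetData contract: each call copies the next part of `src`
// into `buf` and null-terminates it, reports the number of bytes that were
// still left to return in `*lenOrInd`, and returns SQL_SUCCESS_WITH_INFO
// while the part was truncated, SQL_SUCCESS for the final part, and
// SQL_NO_DATA once everything has been handed out.
int mockGetData(const std::string& src, std::size_t& offset,
                char* buf, std::size_t buflen, long* lenOrInd)
{
    if (offset >= src.size())
        return SQL_NO_DATA;
    std::size_t remaining = src.size() - offset;
    *lenOrInd = static_cast<long>(remaining);   // decreases on every call
    std::size_t ncopy = remaining < buflen - 1 ? remaining : buflen - 1;
    std::memcpy(buf, src.data() + offset, ncopy);
    buf[ncopy] = '\0';                          // each part is null-terminated
    offset += ncopy;
    return ncopy < remaining ? SQL_SUCCESS_WITH_INFO : SQL_SUCCESS;
}

// The application side: keep calling until SQL_NO_DATA and concatenate the
// parts; appending the C string stops at '\0', which removes the terminator.
std::string fetchLongData(const std::string& src, std::size_t buflen)
{
    char buf[64];
    if (buflen > sizeof(buf))
        buflen = sizeof(buf);
    std::string result;
    std::size_t offset = 0;
    long lenOrInd = 0;
    while (mockGetData(src, offset, buf, buflen, &lenOrInd) != SQL_NO_DATA)
        result += buf;
    return result;
}
```

With an 8-byte buffer, fetchLongData reassembles the input exactly; the loop shape is the same one an application uses against the real SQLGetData, where SQLSTATE 01004 accompanies each SQL_SUCCESS_WITH_INFO return.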
http://msdn.microsoft.com/en-us/library/ms712426(VS.85).aspx
A menu presented in a popup window.

#include <Wt/WPopupMenu>

The menu implements a typical context menu, with support for submenus. It is a specialized WMenu from which it inherits most of the API.

When initially created, the menu is invisible, until popup() or exec() is called. Then, the menu will remain visible until an item is selected, or the user cancels the menu (by hitting Escape or clicking elsewhere). The implementation assumes availability of JavaScript to position the menu at the current mouse position and provide feedback on the currently selected item.

As with WDialog, there are two ways of using the menu. The simplest way is to use one of the synchronous exec() methods, which starts a reentrant event loop and waits until the user cancelled the popup menu (by hitting Escape or clicking elsewhere) or selected an item. Alternatively, you can use one of the popup() methods to show the menu and listen to the triggered() signal where you read the result(), or associate the menu with a button using WPushButton::setMenu(). You have several options to react to the selection of an item.

Usage example: A snapshot of the WPopupMenu:

Signal emitted when the popup is hidden. Unlike the itemSelected() signal, aboutToHide() is only emitted by the toplevel popup menu (and not by submenus), and is also emitted when no item was selected. You can use result() to get the selected item, which may be 0.

Executes the popup at the location of a mouse event. This is a convenience method for exec(const WPoint& p) that uses the event's document coordinates.

Executes the popup besides a widget.

Shows the popup at a position. Displays the popup at a point with document coordinates point. The position is chosen intelligently: one of the four menu corners will correspond to this point so that the popup menu is completely visible within the window.

Shows the popup at the location of a mouse event. This is a convenience method for popup(const WPoint&) that uses the event's document coordinates.

Shows the popup besides a widget.

Returns the last triggered menu item. The result is 0 when the user cancelled the popup menu.

Configure auto-hide when the mouse leaves the menu. If enabled, the popup menu will be hidden when the mouse leaves the menu for longer than autoHideDelay (milliseconds). The popup menu result will be 0, as if the user cancelled. By default, this option is disabled.

Sets a maximum size. Specifies a maximum size for this widget, setting CSS max-width and max-height properties. By default, the maximum width and height are WLength::Auto, indicating no maximum size. A WLength::Percentage size should not be used, as this is (in virtually all cases) undefined behaviour. When the widget is inserted in a layout manager, then the minimum size will be taken into account. Reimplemented from Wt::WCompositeWidget.

Signal emitted when an item is selected. Unlike the itemSelected() signal, triggered() is only emitted by the toplevel popup menu (and not by submenus).
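The usage example referenced on this page did not survive extraction. Below is a minimal sketch against the Wt 3.x API members named above (addItem(), triggered(), WPushButton::setMenu()); the item labels, the setupMenu helper, and the handler body are illustrative assumptions, not part of the original example.

```cpp
#include <Wt/WContainerWidget>
#include <Wt/WMenuItem>
#include <Wt/WPopupMenu>
#include <Wt/WPushButton>

// Hypothetical helper: build a popup menu and attach it to a button.
void setupMenu(Wt::WContainerWidget *parent)
{
    // The menu stays invisible until popup() or exec() is called,
    // or until the associated button opens it.
    Wt::WPopupMenu *menu = new Wt::WPopupMenu();
    menu->addItem("Copy");
    menu->addItem("Paste");
    menu->addSeparator();
    menu->addItem("Delete");

    // Asynchronous style: listen to triggered() and inspect the item.
    menu->triggered().connect([=](Wt::WMenuItem *item) {
        if (item) {
            // ... act on item->text() ...
        }
    });

    // Associate the menu with a button using WPushButton::setMenu().
    Wt::WPushButton *button = new Wt::WPushButton("Actions");
    button->setMenu(menu);
    parent->addWidget(button);
}
```

The synchronous alternative described above would instead call one of the exec() overloads and then read result(), which is 0 when the user cancelled the menu.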
http://www.webtoolkit.eu/wt/wt-3.3.8/doc/reference/html/classWt_1_1WPopupMenu.html
F2F10 Minutes From RIF
DRAFT -- Currently Under Review

Present: Michael Kifer, Harold Boley, Adrian Paschke, Igor Mozetic, Sandro Hawke, Gary Hallmark, Andreas Harth, Jos de Bruijn, John Hall, Christian de Sainte Marie, Chris Welty, Axel Polleres
Remote: Dave Reynolds, Mike Dean, Hassan Aït-Kaci
Chair: Chris Welty and Christian de Sainte-Marie
Scribes (day 1): Michael Kifer, Harold Boley, Adrian Paschke, Igor Mozetic, Gary Hallmark

Day 1

(No activity for 10 minutes)
(Scribe changed to Michael Kifer)
Sandro Hawke: last call means we are done
Sandro Hawke: after DTB is at last call, we can go for CR (candidate implementation). After that, we are not supposed to make any substantive changes to BLD. At this point, we call for implementation
Christian de Sainte Marie: "Substantive" changes are not allowed to be made after a call for implementations.
Sandro/Jos/csma: Example: changing the presentation syntax is a substantive change.
(No activity for 7 minutes)
ACTION: jdebruij2 add explanatory text to SWC and reply to asking if it explains the matter sufficiently. due Wednesday
discussion of Dan C's comments about use/mention of IRIs in RIF. Jos will respond to him.
discussion of Dan C's comment on the fact that BLD does not allow the same symbol to be used in different contexts. This will be a problem for merging of rules (one rule set may have a predicate as a 2-ary thing and another as a 3-ary thing, for example).
Decided: to keep discussing this issue.
(No activity for 9 minutes)
(No activity for 10 minutes)
ACTION: jdebruij2 to add section to SWC which guides people familiar with W3C Semantic Web specs about the surprising differences, eg rif:iri - due Wednesday
Discussion of Jeremy's comments about rif:text, rif:iri.
Decided: will add clarifications.
(No activity for 13 minutes)
(No activity for 9 minutes)
discussion of Dave R's comments. Most are editorial, which weren't discussed. Others will be discussed elsewhere (eg, in the FLD/BLD parts of the agenda).
(No activity for 31 minutes)
(No activity for 6 minutes)
(Scribe changed to Harold Boley)

Implementations

Gary Hallmark: ORACLE UCR Use Case 1. Conclusion of rule uses a frame.
Christian de Sainte Marie: Modeled as a frame only because a little easier?
Gary Hallmark: OBR's best mapping of BLD goes to a frame because of its bean notion.
Harold Boley: Use a frame as a conjunction?
Gary Hallmark: Can also use Group to attach conclusion results.
... splitting the And in the then part into two rules.
Jos de Bruijn: For only constants as slot fillers, frames could be used.
Christian de Sainte Marie: Why nesting of <And> <formula> <And>?
Gary Hallmark: Because of the principles of our general XML generator. Could be optimized away for the special case of BLD.
Sandro Hawke: What about the other direction?
Gary Hallmark: Yes. But will need to be worked out.
... We avoid showing durations to end users.
... For ex., days-between in OBR is not exposed to users.
Christian de Sainte Marie: Implementations should have access to such built-ins.
Michael Kifer: rif:new should really be rif:local (because rif:new is not unique).
... If it's a generator, it should be local.
... We need a standard way for handling skolems (via rif:local).
... Problem: The same symbol cannot be used in different contexts.
... Could change the definition of well-formedness. Not allow these skolems in equalities.
Gary Hallmark: Round-tripping with OBR would be a problem.
Sandro Hawke: Sent email about this last week.
Christian de Sainte Marie: "Hidden conjunction" forbidden?
... multiple slots of a frame in the head.
... normally there is no conjunction in the head.
Michael Kifer: Right, maybe we should allow conjunction in the head, like disjunction in the body.
Christian de Sainte Marie: Can we change the document?
Michael Kifer: Cannot guarantee change will not introduce errors.
PROPOSED: BLD will include Conjunction in the rule head
PROPOSED: BLD will include Conjunction in the rule head (the "then" part)
0
RESOLVED: BLD will include Conjunction in the rule head (the "then" part)
Harold Boley: May increase the burden on BLD implementers to claim they have a direct (without rewriting) implementation of BLD.

DTB review & publication

Axel Polleres: Symbol Spaces
Sandro Hawke: Maybe first motivate them before going into the math.
Sandro Hawke: The doc now doesn't use Curies?
Axel Polleres: It does (now).
Sandro Hawke: Keep Fcts and Preds in the same namespace?
Axel Polleres: builtin-functions vs. builtin-predicates.
... different semantics.
Jos de Bruijn: But user does not see this.
PROPOSED: func: and pred: collapse to one, http:/
Axel Polleres: Right.
Christian de Sainte Marie: What's the drawback of having two namespaces?
Jos de Bruijn: It's just more (unnecessary) namespaces.
Sandro Hawke: You have to remember both.
Axel Polleres: If we added a type Boolean, then we could consider collapsing the Fct and Pred namespaces.
Jos de Bruijn: Did we not decide that we will not have two versions (Fct and Pred) of the same builtin?
PROPOSED: keep the two namespaces, pred: and func: ....
Igor Mozetic: Whenever we tried to 'simplify' things in this way, it turned out there were later some complications. So we should keep the separation.
PROPOSED: remove language about all the subtypes of xsf:string
PROPOSED: remove language about all the subtypes of xsd:string being required
RESOLVED: remove language about all the subtypes of xsd:string being required
PROPOSED: instead of all-subtypes-of-decimal, have just Decimal, Integer, Long, .....
(No activity for 20 minutes)
(No activity for 33 minutes)
(Scribe changed to Adrian Paschke)

Continue DTB discussion

Axel Polleres: Goal for outsourcing into a separate document to define the supported datatypes
Axel Polleres: Current DTB says it defines the supported BLD datatypes
Chris Welty: My understanding was that it is common to all RIF dialects
Sandro Hawke: Each dialect will have a subset of the DTB datatypes and builtins
Michael Kifer: or a superset
Sandro Hawke: DTB has to grow if new datatypes are needed in standardized dialects
Axel Polleres: it was moved / copied from FLD
Michael Kifer: yes, it has been copied from FLD to DTB
Chris Welty: It is a maintenance problem to add all new standard datatypes and builtins to DTB?
Sandro Hawke: only datatypes and builtins from standardized RIF dialects
Chris Welty: we agreed if a standard dialect needs a datatype or builtin function it needs to be in DTB
Michael Kifer: dialects may support additional datatypes
Jos de Bruijn: we only talk about standardized dialects
Michael Kifer: we would have mandatory and optional datatypes
Chris Welty: put it in core
Harold Boley: DTB is in some sense parallel to FLD
Michael Kifer: Organize built-ins in groups in DTB
Michael Kifer: let's organize them in groups and link to them from BLD
Sandro Hawke: organize them in levels, e.g. numeric built-ins level 1, level 2
PROPOSED: DTB will provide the menu of datatypes and builtins which diallect dialections can use, by reference, when they state which datatypes and builtins must be supported by implementations.
Michael Kifer: change to ...
... built-in predicates required by the RIF BLD
PROPOSED: DTB will provide the menu of datatypes and builtins which diallects can use, by reference, when they state which datatypes and builtins must be supported by implementations.
PROPOSED: DTB will provide the menu of datatypes and builtins which dialects can use, by reference, when they state which datatypes and builtins must be supported by implementations.
+1
RESOLVED: DTB will provide the menu of datatypes and builtins which dialects can use, by reference, when they state which datatypes and builtins must be supported by implementations.
ACTION: axel to edit DTB to reflect its changed role with regard to DTB (eg in the Abstract)
(No activity for 6 minutes)
Adrian Paschke: We need durations to implement some of the use cases
PROPOSED: add year-month and day-time duration, but NOT duration, to BLD.
PROPOSED: add year-month and day-time duration, but NOT duration, to BLD, as in
PROPOSED: add year-month and day-time duration, but NOT duration, to BLD (and of course DTB), as in
PROPOSED: add xs:dayTimeDuration and xs:yearMonthDuration, but NOT duration, to BLD (and of course DTB), as in
PROPOSED: add xs:dayTimeDuration and xs:yearMonthDuration, but NOT duration, to those required in BLD (and of course DTB), as in
RESOLVED: add xs:dayTimeDuration and xs:yearMonthDuration, but NOT duration, to those required in BLD (and of course DTB), as in
ACTION: axel to add the new duration subtypes to DTB
ACTION: kifer to make sure BLD includes the appropriate normative reference to DTB to reflect the inclusion of the duration subtypes
Chris Welty: Gary's numeric remark
Gary Hallmark: less_than_or_equal ...
Axel Polleres: you typically have general builtins such as > < <= ...
Gary Hallmark: I only need numeric_greater_than_or_equal ... less_than_or_equal
Chris Welty: shortcuts such as >= <= !=
PROPOSED: add builtin predicates to BLD and DTB: <= >= and != for numeric values (they amount to shortcuts, to avoid disjunction).
+1
PROPOSED: add builtin predicates to BLD and DTB: pred:numeric-less-or-equal, pred:numeric-greater-or-equal, pred:numeric-not-equal (they amount to shortcuts, to avoid disjunction).
+1
RESOLVED: add builtin predicates to BLD and DTB: pred:numeric-less-or-equal, pred:numeric-greater-or-equal, pred:numeric-not-equal (they amount to shortcuts, to avoid disjunction).
Sandro Hawke: have an editor from OWL and RIF discuss rif:text or using an OWL datatype
Christian de Sainte Marie: BLD requires rif:text, so we have a dependency on a document which does not exist if we move it to another document
Chris Welty: keep in DTB
Harold Boley: add an editor's note about it
ACTION: Harold to add AT RISK editor's note in BLD explaining that the IRI identifying rif:text might change.
Axel Polleres: Casts from rif:iri as defined in XML Schema
ACTION: Chris to open issue on casts to/from rif:iri
Jos de Bruijn: cast functions are not defined, yet
Axel Polleres: see section 4.3
ACTION: Axel to change editor's note on casting rif:iri to normal open-issue style, link to new issue on it.
ACTION: Axel to convert text about concat2, etc, into an editor's note about how the handling of arities is a strawman proposal not yet agreed upon by WG.
ACTION: axel comment out DTB 4.7 or otherwise make sure it doesn't end up in BLD
(No activity for 5 minutes)
PROPOSED: Publish DTB as a FPWD once changes decided so far today are made (and reviewed by ...someone...)
PROPOSED: Publish DTB as a FPWD once changes decided so far today are made (and reviewed by Chris)
+1 (REWERSE)
RESOLVED: Publish DTB as a FPWD once changes decided so far today are made (and reviewed by Chris)
(No activity for 10 minutes)
(Scribe changed to Igor Mozetic)

striping, typed-tagged XML aka Rigid RDF

we have nearly full-striped syntax, why not move further?
pro: small step, gives RDF compatibility
pro: fallback mechanism is RIF
pro: more implementers can support
con: even more verbose
Christian de Sainte Marie: main objective is not fully-striped, but RDF compatible syntax
con: "marketing" issue (some don't want RDF)
pro: "marketing" issue (if there is no dependency, then RDF compatibility is a plus)
Christian de Sainte Marie: for Alex - changing rigid RDF to typed-tagged-XML would solve his opposition
dependency on RDF namespace is _not_ RDF dependency
Sandro pro: it is self describing
Sandro Hawke: self-describing = deserialization back into "objects"
proposed changes by Sandro:
... add rdf:parseType=Collection to args
Christian de Sainte Marie: this proposal breaks full striping but we need RDF compatibility
... 2) add name under Var
... change from not-striped to striped
... 3) add value role under Const
Michael Kifer: issue is with symbol spaces that are not datatypes
Michael Kifer: RDF friendliness brings semantics into syntax
Jos de Bruijn: we need to specify order in the syntax anyway
... 4) rif:iri inside Const is serialized differently
... add 4) uses native RDF support
... add 4) rif:iri and rif:text would disappear from XML serialization
Sandro Hawke: the only change is in the serialization, not in PS
Christian de Sainte Marie: an RDF parser will get RDF triples, not rules
Michael Kifer: is afraid that RDF semantics will be assumed
Christian de Sainte Marie: if this change is zero-cost it is advantageous for the community
... since people will get RDF triples for free and do whatever they want
... Sandro: 5) role tags may have to be in alphabetical order
Sandro Hawke: one can load RIF doc into triple store and extract rules by querying it
changes would require the following in BLD document: translation tables, examples, XML schema
(Scribe changed to Gary Hallmark)

Conformance

Michael Kifer: must preserve entailments
Christian de Sainte Marie: but editors don't entail anything
Chris Welty: semantic v.
syntax
Michael Kifer: editor doesn't actually DO anything
Sandro Hawke: e.g. biz rule editor imports rif into GUI
Jos de Bruijn: syntax checkers need to validate syntax
Michael Kifer: bijective mapping between 2 languages
... entailments include datatype conformance
Sandro Hawke: what if we encounter a datatype not in BLD?
... that would be an extension of BLD
... syntax check may pass
Jos de Bruijn: rule processor has set of datatypes, and can talk about differences w.r.t. DTB
Sandro Hawke: how to include xs:int in a rif document
... should it convert to xs:decimal, or ignore, or warn?
... should warn or reject
Jos de Bruijn: if conversion is lossless, then it is supported
Michael Kifer: must be in DTB to have a semantics
(in the above, "datatype" also includes associated builtins)
Michael Kifer: could separate syntactic and semantic conformance
... syntactic subset can be larger
Sandro Hawke: hopes entire languages are translated to RIF, implying many extensions
... allows roundtripping
Christian de Sainte Marie: must be able to add (non-std) extensions to make RIF usable
Michael Kifer: don't require all datatypes are listed in DTB to have a valid RIF document
Christian de Sainte Marie: any other notions of conformance?
Jos de Bruijn: basic notion should be entailment
... first, a RIF consistency check (has a model)
Christian de Sainte Marie: but I can have an inconsistent RIF document
Jos de Bruijn: datatype conformance should be like OWL. Must support DTB, reject others
... for semantic conformance
Michael Kifer: distinguish producers from consumers
Christian de Sainte Marie: given agreement on datatypes (even if not in DTB), entailments are preserved
Michael Kifer: what does it mean to agree?
Jos de Bruijn: must decide what to do about datatypes not in DTB
Chris Welty: conformance defined for a single RIF processor, not for a communicating pair
Michael Kifer: for producer, must be mapping into RIF, and have right entailments
... for consumer, must reject datatypes you do not understand (which must not be in DTB)
(No activity for 8 minutes)
much discussion formalizing the conformance statement...
Dave Reynolds: must a consumer be a complete BLD implementation?
Sandro Hawke: yes
Dave Reynolds: concerned about equality
Christian de Sainte Marie: equality in head may be "at risk"
Dave Reynolds: used to have a notion of BLD being a superset
... a conformant impl could be a subset
Christian de Sainte Marie: is this the notion of profiles?
Christian de Sainte Marie: maybe we'll need profiles since we no longer have CORE
Christian de Sainte Marie: maybe solution is to remove equality from BLD
Sandro Hawke: if we need a dialect between CORE and BLD, we'll create one (but I hope not)
... e.g. mini-BLD
Christian de Sainte Marie: if we don't get implementations, we'll have to revisit
Christian de Sainte Marie: does marking some hard things as "at risk" satisfy Dave?
Dave Reynolds: yes
Chris Welty: maybe the conformance statement is "at risk"
Michael Kifer: could have levels of conformance, and lower the bar as needed
Sandro Hawke: let's talk about syntax extensibility
... must reject syntax you don't understand
... is somewhat counter to the XML philosophy
... but can't ignore NOT, e.g.
... and we don't require "fallback" processing

Naming

Sandro Hawke: doesn't like upper/lower
Christian de Sainte Marie: in PRD, object/class and subclass/class
Christian de Sainte Marie: wants different tags for different content
... makes PS->XML mapping easier
... because it is not context dependent
Sandro Hawke: sub/super for SubClass
Harold Boley: instance/class for Member
Hassan, I thought you don't like abbrevs
ACTION: Harold update BLD to change lower/upper to instance/class and sub/super.
Chris Welty: doesn't like '->' in frames
... too much like implication
... also -> used for named args
Christian de Sainte Marie: but PS is tomorrow
Christian de Sainte Marie: prefer attribute instead of key
... for the frame case
Harold Boley: principle of context-independent role names not maintainable in FLD
Hassan Aït-Kaci: XML tags should be independent of BNF
... distinguish grammar from language
... former is not unique
... in prototype, had to modify EBNF for jacc, and this ripples through the XML mapping if they are linked
Harold Boley: many XML tags have a different PS token, e.g. ?/Var, ^^/Const

Day 2

Scribes: Andreas Harth, John Hall, Axel Polleres, Adrian Paschke

(Scribe changed to Andreas Harth)

Agenda review

this session: presentation syntax, especially shortcuts
Christian de Sainte Marie: Issue 56
Jos de Bruijn: which shortcuts to define
Axel Polleres: sent out proposal yesterday about grammar

Presentation syntax

Axel Polleres: do we want to have string lang tag? his proposal is to use grammar from sparql spec
Jos de Bruijn: just use absolute IRIs not relative IRIRefs
Jos de Bruijn: in the definition, all w3c standards have full IRIs in their spec
Chris Welty: rif doesn't need to know about CURIEs
... do preprocessing to expand iris
Axel Polleres: basically just added the lang tag, ok to use iri, ok to use prefix def in presentation syntax
... still open are string and escaping, we should use the sparql syntax here as well
Axel Polleres: syntax allows to only write the prefix, which is just resolved to the iri of the prefix
Christian de Sainte Marie: decision to make: does it make sense, whether to use iri or iriref, how to address escaping
... any objections?
Michael Kifer: not use angle brackets
Axel Polleres: we addressed that in the last telecon, just added last line
Michael Kifer: problem is in one place it's really an iri, in other place it's just a marker
Mike Dean: should have ways to use CURIEs and iris
Christian de Sainte Marie: issue is string^^<IRI>
... for const
Michael Kifer: here, it's just a symbol that looks like an iri but it's just a constant
Christian de Sainte Marie: it's mentioning the iri but not using it
Michael Kifer: symbol space is not identified by iri, but just a symbol
Christian de Sainte Marie: you want a different syntax for the different roles of an iri?
Axel Polleres: do we open a can of worms here? for sake of readability, i'd buy the conceptual ambiguity
Axel Polleres: with current proposal we're compatible with n3 syntax
Michael Kifer: the aliases are not required to be iris
Christian de Sainte Marie: need to change it either in bld or here
Christian de Sainte Marie: must the aliases be iris or not?
Michael Kifer: CURIEs could be still too long
PROPOSED: remove aliases for datatypes in BLD
PROPOSED: remove aliases for symbol space identifiers in BLD
PROPOSED: remove aliases for symbol space identifiers in RIF
Sandro Hawke: does that leave us with this syntax? still possible that say two datatype iris are equal?
Christian de Sainte Marie: any more questions?
RESOLVED: remove aliases for symbol space identifiers in RIF
Axel Polleres: next: string with lang tag
Christian de Sainte Marie: Const ::= .... | STRING LANGTAG
Christian de Sainte Marie: lang tag in rif:text is mandatory
Chris Welty: no shortcut for non-lang-tagged strings?
Sandro Hawke: there should be
PROPOSED: add Const ::= STRING LANGTAG (allowing "chat"@en as short for "chat@en"^^rif:text) and "Const ::= STRING" (allowing "chat" as short for "chat"^^xs:string).
Michael Kifer: when we later on introduce modules @ is the right symbol for it
Michael Kifer: how often we use that bit in the presentation syntax?
Jos de Bruijn: for the examples in the doc
Gary Hallmark: we have shortcuts for obscure features, but not for strings and integers
PROPOSED: modify Presentation Syntax to incliude "Const ::= STRING" (allowing "chat" as short for "chat"^^xs:string).
PROPOSED: modify Presentation Syntax to include "Const ::= STRING" (allowing "chat" as short for "chat"^^xs:string).
RESOLVED: modify Presentation Syntax to include "Const ::= STRING" (allowing "chat" as short for "chat"^^xs:string).
Christian de Sainte Marie: any objection?
topic now: abridged presentation syntax
Christian de Sainte Marie: in minutes there's a table for abridged syntax
Sandro Hawke: how to distinguish between integer and long?
Axel Polleres: for numerical literals the sparql spec could serve as example
Axel Polleres: double not in symbol spaces
PROPOSED: add xsd:double as a required symbol space
Jos de Bruijn: why add double?
Gary Hallmark: important for a lot of engineering applications
RESOLVED: add xsd:double as a required symbol space
PROPOSED: add shortcut e notation for double
discussion about grammar
Sandro Hawke: why do we need long in presentation syntax and datatypes
Chris Welty: get positive negative integer and decimals
Axel Polleres: we can add hooks to link into the sparql grammar
Michael Kifer: preferred to be self-contained, what if sparql changes?
PROPOSED: import NumericLiteral from SPARQL giving us INTEGER, DECIMAL, and DOUBLE.
PROPOSED: import NumericLiteral from SPARQL giving us INTEGER, DECIMAL, and DOUBLE to the Presentation Syntax
PROPOSED: reuse NumericLiteral from SPARQL giving us INTEGER, DECIMAL, and DOUBLE to the Presentation Syntax
RESOLVED: reuse NumericLiteral from SPARQL giving us INTEGER, DECIMAL, and DOUBLE to the Presentation Syntax
Christian de Sainte Marie: open is rif:local by default?
Michael Kifer: starts with letter or underscore
PROPOSED: modify presentation syntax so that identifiers (as in C or Java - starting with letter or underscore, allowing digits later), are shortcut for rif:local
Michael Kifer: followed by alphanumeric
Sandro Hawke: most languages set aside a set of keywords for this
Gary Hallmark: is not code, but presentation syntax
Michael Kifer: keywords start with dollar sign?
Sandro Hawke: leave out rif:local shortcuts?
PROPOSED: modify presentation syntax so that alphanumeric identifiers starting with "_" are shortcut for rif:local (so _foo is short for "foo"^^rif:local)
Harold Boley: other character than underscore, use dot like in linux
Sandro Hawke: code convention in java and c for local variables
Michael Kifer: appreciate the point that "_" prefix represents local things, but in programming languages it's part of the name
... maybe single quotes?
RESOLVED: modify presentation syntax so that alphanumeric identifiers starting with "_" are shortcut for rif:local (so _foo is short for "foo"^^rif:local)
Christian de Sainte Marie: next: lang tag and string
... proposal is to have lang tag separated from string
PROPOSED: modify Presentation Syntax adding Const ::= STRING LANGTAG (allowing "chat"@en as short for "chat@en"^^rif:text)
Michael Kifer: concern that @ symbol could be used for modules
Hassan Aït-Kaci: sometimes we use rif:id, sometimes &rif;id
Christian de Sainte Marie: in this session we talk about presentation syntax
Jos de Bruijn: problem here is we would define iris for languages which is in an rfc and can change
Sandro Hawke: ok, we could define it openly as a pattern
Christian de Sainte Marie: rif:text might disappear into a document common for rif and owl
Christian de Sainte Marie: we don't discuss that further now
Christian de Sainte Marie: next topic irirefs vs iris
... irirefs or absolute iris
Jos de Bruijn: issue with relative iris: you need a base iri to resolve relative iris to absolute iris
need mechanism to specify base IRIs if we stay with IRI refs ... benefit of IRI refs is that ... they're shorter
Axel Polleres: relatively simple, could have a base IRI in the preamble
PROPOSED: IRIs in the XML syntax can be relative IRIs (they need not be absolute)
Dave Reynolds: where exactly use relative IRIs in the XML?
Sandro Hawke: thought about Const but maybe there's other places too
Christian de Sainte Marie: table that question for XML syntax, but focus on presentation syntax
PROPOSED: In Presentation Syntax, the IRIs in rif:iri Consts can be relative. A "base" directive will be added to the preamble.
Dave Reynolds: why does that matter when describing examples ... N3 does not have base directive?
Axel Polleres: propose to adopt syntax proposal from SPARQL - use backslash to escape quotes inside strings
PROPOSED: Adopt SPARQL convention for using backslash to allow quotes within quoted strings.
PROPOSED: Adopt SPARQL convention for using backslash to allow quotes (and cr, lf, tab, etc) within quoted strings (in Presentation Syntax).
RESOLVED: Adopt SPARQL convention for using backslash to allow quotes (and cr, lf, tab, etc) within quoted strings (in Presentation Syntax).
Michael Kifer: where should we elaborate on the escaping?
(No activity for 14 minutes)
(Scribe changed to John Hall)
PROPOSED: Close Issue 56 as addressed by the resolutions this morning.
Axel Polleres: sent out email summarizing presentation syntax
Chris Welty: proposal - move on
Christian de Sainte Marie: did not close resolution
RESOLVED: Close Issue 56 as addressed by the resolutions this morning.
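Under the SPARQL-style escaping convention just resolved, quoted strings in the presentation syntax would look roughly like this (a sketch; the exact escape set is whatever the editors import from SPARQL):

```
"He said \"hello\""        a double quote inside a STRING
"line one\nline two"       \n, \r, \t for LF, CR, and tab
"a backslash: \\"          the backslash itself must be escaped
```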
Chris Welty: closed Issue 56
Christian de Sainte Marie: issue - Jeremy C - not consider RDF entailment
Christian de Sainte Marie: comment 5 in JC email 2
Christian de Sainte Marie: no discussion today - move on, no change
Christian de Sainte Marie: JC email 1
Christian de Sainte Marie: comment 15
Chris Welty: point of confusion - is IP a subset of IR?
Jos de Bruijn: making it a little easier does not justify changing
Chris Welty: just changing our view
Chris Welty: leave it
Jos de Bruijn: leave things as they are - Jos already has an action
Christian de Sainte Marie: JC email 2, comment 19
Christian de Sainte Marie: DL safeness at risk
Jos de Bruijn: JC was referring to something else ... is OK ... not sure that DL safeness restrictions is what people need
Chris Welty: not what this comment is about
Jos de Bruijn: what we discussed yesterday
Chris Welty: ref to Jos comments in document
Chris Welty: csma: mark DL safeness at risk?
Chris Welty: action Jos to mark 3.1.1
ACTION: Jos to mark 3.1.1 at risk
Christian de Sainte Marie: drop section 6.2 ... from Dave
ACTION: jdebruij2 to mark SWC section 3.1.1 as "AT RISK", with explanation.
Jos de Bruijn: not strongly in favour of keeping it ... or against
Christian de Sainte Marie: what is status of DLP in OWL-R?
Sandro Hawke: pretty good
Christian de Sainte Marie: mark it as at risk?
Jos de Bruijn: can't do that
Christian de Sainte Marie: could remove it later
Jos de Bruijn: if OWL-R is well-designed, could be embedded in RIF
Sandro Hawke: not sure ... keep 6.2
Chris Welty: no change
Christian de Sainte Marie: Dave 7.1, profiles for imports
Jos de Bruijn: section 4.2 ... more general issue is that you can specify different profiles but have to pick one ... we now pick the highest ... could have one for the entire document ...
if profiles for RDFS and OWL-FULL, OWL-FULL takes precedence, RDFS is not valid
Dave Reynolds: if importing under two profiles, find the lowest one that is higher than both ... if not, abort
Jos de Bruijn: agree ... updated to adopt proposal - see Wiki version
Dave Reynolds: OK with rewording
Igor Mozetic: some built-in predicates
Jos de Bruijn: need to be updated ... and need to write down the proofs
Chris Welty: what built-in predicates?
Jos de Bruijn: detecting ill-typed literals
Christian de Sainte Marie: is informative - can be changed after last call
Christian de Sainte Marie: show stopper for last call?
Jos de Bruijn: no
Chris Welty: editor's note 2.1.2
Jos de Bruijn: resolved
Chris Welty: end of 3.1.1 - remove and mark as 'at risk'
Jos de Bruijn: 6.1.7 - section to be removed
Jos de Bruijn: suggest removing note in 6.2.3.1 ... 6.2.3.2 remove note? ... will add text
Christian de Sainte Marie: any other features at risk?
Jos de Bruijn: OWL DL annotation entailment 3.2.2.3
Chris Welty: what is problem? ... keep it
Christian de Sainte Marie: marking at risk means we can remove it ... if there is a risk that some implementer will complain, mark as at risk
Jos de Bruijn: leave it - we don't require them to implement
ACTION: csma to review changes
PROPOSED: Publish SWC as LAST CALL Working Draft, after changes agreed upon this session are made (and checked by CSMA)
PROPOSED: Publish SWC as LAST CALL Working Draft, after changes agreed upon this session and yesterday are made (and checked by CSMA)
Jos de Bruijn: also changes from yesterday
+1
RESOLVED: Publish SWC as LAST CALL Working Draft, after changes agreed upon this session and yesterday are made (and checked by CSMA)
NAMING CONVENTIONS
BLD document
Chris Welty: agreed only to change upper and lower ...
discussing for named arguments and frame slots - different content
Harold Boley: leave as is
Chris Welty: easy to handle in XSD
Sandro Hawke: declare - has variable ... quantified variable?
Harold Boley: has class name inside
Chris Welty: declares?
Sandro Hawke: never mind
Gary Hallmark: not happy with Expr
Gary Hallmark: Function for Expr, Predicate for Atom ... Atom is jargon
Sandro Hawke: Equal roles should be left and right
Christian de Sainte Marie: is symmetric
Harold Boley: prefer not to go back to left and right
Sandro Hawke: do not want to get your rules back from RIF with equalities flipped
Chris Welty: discussion was that equality is symmetric, and we didn't want to force people to choose left and right
PROPOSED: shall we switch from Equal/side/side to Equal/left/right?
leave as is
Hassan Aït-Kaci: email on XML tagging
Sandro Hawke: not about naming
Chris Welty: we just have to be sure that we get it right - we use it when it will work
Harold Boley: have to declare twice - as namespace and entity
Chris Welty: not relevant to this session's topic
Christian de Sainte Marie: also need to look at section 4.2
Christian de Sainte Marie: change name of 'implies' for a less-overloaded name ... is not an implication (in the logical sense) in some roles ... something like 'conditional'?
Gary Hallmark: could they all be nouns?
Christian de Sainte Marie: change 'manner' to 'profile' ... change 'implies' to 'conditional'
Sandro Hawke: 'payload' to 'content'
'manner' to 'profile': unanimous
ACTION: Harold to change "manner" to "profile"
'implies' to 'rule': not decided
'payload' to 'content': majority against
'address' to 'location': majority for
ACTION: Harold to change "address" to "location" for Imports, in BLD.
Christian de Sainte Marie: small objection to 'rule' for 'implies' - if rule has name, it will be far away from tag 'rule'
Christian de Sainte Marie: did we address all parts of Issue 49?
PROPOSED: Close Issue 49 with decisions made so far today
RESOLVED: Close Issue 49 with decisions made so far today
PROPOSED: Close Issue 54 with at-risk label as decided this morning.
RESOLVED: Close Issue 54 with at-risk label as decided this morning.
PROPOSED: Close Issue 60 as decided this morning -- if they are incomparable it's an error
RESOLVED: Close Issue 60 as decided this morning -- if they are incomparable it's an error
PROPOSED: Go to lunch
(No activity for 63 minutes)
(No activity for 11 minutes)
BLD review
(Scribe changed to Axel Polleres)
Dan's comments
Christian de Sainte Marie: What about Dan's comment on arity of predicates?
Sandro Hawke: Problematic on merging rulesets where one uses p with arity n and the other uses p with arity m.
Christian de Sainte Marie: if it is an IRI it should have the same arity; if it is a local name, then it is in fact a different name. ... answer to Dan: this is not a problem, i.e. conflicts on using the same IRI with different arities are intended.
Michael Kifer: not sure. ... for example in PROLOG it is quite common to use the same predicates.
Chris Welty: yes, but we disallow that.
Christian de Sainte Marie: What do we do on rif:locals on merging? General problem.
Jos de Bruijn: This is - for the import mechanism - well-defined.
Michael Kifer: We defined import, but not merging.
Chris Welty: who is responding to Dan? ... I will start the wiki page for the response right now.
Adrian Paschke: propose to wait until tomorrow and will respond then, together with UCR responses.
ACTION: apaschke to respond to Dan2 (about well-formedness)
Christian de Sainte Marie: 3 comments from Jeremy on rif:iri. OWL is unconvinced by rif:iri and rif:text.
Sandro Hawke: I think this is satisfied by our presentation syntax resolutions. ... but we need to respond.
Jos de Bruijn: I will write these responses.
Christian de Sainte Marie: next, Dave has a comment on equality terms appearing in Externals. ... the answer to the question is yes: it is deliberate and legal.
Dave Reynolds: but why then disallow External in the head?
Christian de Sainte Marie: So, shall we allow any External in the head or disallow any Externals in the head?
Michael/josb: that would be a void restriction, because it can be emulated.
Christian de Sainte Marie: Dave's comment on BLD XML. ... this is about the XSD version. ... so, only about datatypes.
Dave Reynolds: my comment is about the *XML* version.
Christian de Sainte Marie: objections against saying that we refer to XML 1.0?
PROPOSED: We'll use XML 1.0 (not XML 1.1)
Jos de Bruijn: isn't there a possibility to allow people to use their preferred XML version?
Chris Welty: We are not going to try to solve that problem; if people can make it work with XML 1.1, then it is fine.
PROPOSED: We'll use XML 1.0 (not XML 1.1) for the XML syntax for BLD.
Jos de Bruijn: what is the difference?
Sandro Hawke: fixed reference to Unicode in XML 1.0. ... 1.1 more open to speak "different languages".
Gary Hallmark: there's a recommendation to use 1.0 unless features of 1.1 are really needed.
Sandro Hawke: let's get back to that later; I will try to get an answer within the hour.
Christian de Sainte Marie: comment from Dave on compact IRIs in the XML syntax. ... compact IRIs are not appropriate in the XML, because there they are real QNames (?)
Harold Boley: Once we have presentation syntax with prefixes, and entities in the XML, that should be fine.
Michael Kifer: prefix definition will be in BLD, Consts will be in DTB. ... (the pres. syntax)
ACTION: Harold to update all examples for Presentation Syntax and XML syntax for curies and entities. Also add Prefix to presentation syntax.
Chris Welty: The XML syntax should be valid XML... full stop.
Christian de Sainte Marie: Dave wants a full XML document as an example.
Harold Boley: that will be a byproduct of my action. ... I always use the official W3C validators for the XML in the examples.
Christian de Sainte Marie: more comments from Dave on the schema. ... 1) rif:type should be used rather than just type. ... 2) rif:type should be restricted to anyURI rather than xsd:string.
Christian de Sainte Marie: Any drawback in qualifying "type"?
Chris Welty: is this the only attribute?
Christian de Sainte Marie: Who's in favor of qualifying attributes?
Sandro Hawke: makes XML more readable... attributes don't need a default namespace.
who in favor of qualifying? 0 yes, 4 against, 7 undecided.
PROPOSED: in the RIF XML syntax (as long as we stick with this non-RDF style), attributes will have no namespace (be unqualified) (so that we can avoid "rif:" in documents)
RESOLVED: in the RIF XML syntax (as long as we stick with this non-RDF style), attributes will have no namespace (be unqualified) (so that we can avoid "rif:" in documents)
Andreas Harth: RDF and XSLT use that differently... there seems not to be an agreed treatment.
Christian de Sainte Marie: next one. Dave suggests 2) rif:type should be restricted to anyURI rather than xsd:string. ... content of the type cannot be a number, must be an IRI.
Sandro Hawke: slight hesitation for anyURI vs. IRI. ... but that could just be a bugfix.
ACTION: Harold to change type of "type" attribute to xs:anyURI (from xs:string)
Jos de Bruijn: in XML Schema datatypes 1.1 anyURI is also used for IRIs.
Christian de Sainte Marie: my own comments on BLD. ... Equal, Member, Subclass should not be allowed to be External. ... also Frame. Suggestion: limit External to ATOMIC. ... discussing External terms in property position in Frames.
Michael Kifer: We can disallow some things in External.
Discussion about the respective parts of the BLD grammar.
Christian de Sainte Marie: currently Externals allowed in slotname position in Frames in BLD.
Chris Welty: Should this now be restricted?
Christian de Sainte Marie: Still unsure about External(Frame)
Michael Kifer: The semantics is precisely defined.
Christian and Michael trying to clarify what an External Frame actually means. ... we agreed to disallow external Equal, Member, Subclass.
PROPOSED: Replace External(ATOMIC) with External(ATOM_BASE or FRAME) ... ? ... External Frame still under discussion.
Sandro Hawke: doubts about External used as extension mechanism
Axel Polleres: I thought the set of external schemas is FIXED per dialect.
Michael Kifer: no. ... that is an extension mechanism.
Axel Polleres: RIF FLD says "RIF dialects are always associated with sets of coherent signatures"... I am confused now.
Discussion is whether defining one's own external schema is a new dialect, i.e. an extension, or not.
Igor Mozetic: External could be a SPARQL query, yes?
Michael Kifer: If you add some datatypes or externals, then you have a bigger dialect than BLD.
Sandro Hawke: Right.
Michael Kifer: It is like a blackbox. ... external Equal would be possible to define, but wouldn't make sense to use, actually.
PROPOSED: Change External(ATOMIC) to External(Atom) or External(Frame).
Chris Welty: broken link in BLD to "coherent set of such schemas associated[...]"
0.0 0.0 (not sure why external frames needed and not just External(Atom))
PROPOSED: Change External(ATOMIC) to External(Atom) or External(Frame) and add text explaining how External frames are supported by the semantics.
+1
RESOLVED: Change External(ATOMIC) to External(Atom) or External(Frame) and add text explaining how External frames are supported by the semantics.
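As a hedged illustration of the External(Atom)-or-External(Frame) resolution just passed (the predicate, frame, and property names here are invented; only the External(...) wrapping and the disallowed cases reflect the discussion):

```
External( pred:isPrime(?x) )           an externally defined atom
External( ?o[geo:distance -> ?d] )     an external frame, per the explanatory
                                       text to be added to the semantics
Forall ?x ( ok(?x) :- External( pred:isPrime(?x) ) )

External( ?x = ?y )                    not allowed: external Equal
                                       (likewise Member and Subclass)
```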
ACTION: Kifer to add text explaining external frames
Christian de Sainte Marie: Next. Christian's comment on the named argument uniterms limitation clarified.
Christian de Sainte Marie: How can we create new symbols when inferring a new frame? ... proposals: rif:new, skolem terms, existentials in the head.
Sandro Hawke: writing a translator from N3 to RIF needs existentials in the head.
Christian de Sainte Marie: What was your concrete example?
Sandro Hawke: Skolemizing would be a fallback.
Adrian Paschke: not the same.
Igor Mozetic: the only problem with skolemization is round-tripping.
Christian de Sainte Marie: What about having that in a builtin?
Michael Kifer: not possible as a builtin. ... rif:new should be a symbol, not a constant (not an external), that each time you use it is interpreted differently.
Discussion skolem function vs. new constant ongoing.
Chris Welty: What about making skolem funcs a special datatype...?
Michael Kifer: It could be a subsymbolspace of rif:local
Discussion of whether something like gensym is possible.
+1 to +1 of jos. Shall we go on? Don't see this being resolved soon. We had more promising discussions being cut off today already
Christian de Sainte Marie: We are starting to run in circles. ... if we decide we want this, it will delay. Is it worth it?
many no's.
break now. 20min until 4.
(No activity for 15 minutes)
scribe??? still me? :-)
(Scribe changed to Adrian Paschke)
BLD open issues
Chris Welty: compliance definition for BLD
Michael Kifer: separate document for compliance
Michael Kifer: put it in the overview
Chris Welty: last call document needs conformance statement
Sandro Hawke: agree conceptually it could go into another document
Sandro Hawke: but for now it might be in BLD
Harold Boley: what about FLD
Chris Welty: put it in BLD for now
PROPOSED: add the text on (more or less) to BLD, probably near the next....
Sandro Hawke: statement about syntactic RIF consumer, RIF producer compliance
Dave Reynolds: question: is schema validation actually enough to validate conformance?
Michael Kifer: well-formed vs. semantically correct use
Sandro Hawke: RIF consumers must reject a BLD document if .... constraints are not met
Sandro Hawke: for rule engines as consumers we should say something about when a BLD document needs to be rejected
Sandro Hawke: for example a BLD document which uses e.g. a new construct ActionRule; a consumer must throw an error
Sandro Hawke: it cannot silently ignore it
Chris Welty: you want this strict dialect conformance?
Sandro Hawke: right
Chris Welty: we could label it strict conformant and conformant
Chris Welty: strict conformance is exclusive; conformance is inclusive
Sandro Hawke: we need strict conformance, otherwise people will abuse BLD
Michael Kifer: conformance and loose conformance
(No activity for 7 minutes)
PROPOSED: accept the conformance statement on for BLD, up to the separator line.
0
RESOLVED: accept the conformance statement on for BLD, up to the separator line.
RDF discussion
PROPOSED: The normative exchange syntax for RIF will be glass etchings.
XML Syntax -- type tagging and RDF/Compatibility
Chris Welty: type-tagging syntax
Chris Welty: XSLT transformation from current XML syntax to rigid RDF syntax
Sandro Hawke: some people are allergic to using the RDF namespace
Chris Welty: current syntax is ordered
Chris Welty: so we would need to add parsetype collections to get rid of the order for RDF
Chris Welty: if you translate into RDF and back the order is lost
Sandro Hawke: sure.
if you use a triple store the order is lost
Christian de Sainte Marie: but this is your (user) problem
Christian de Sainte Marie: you are not forced to use it
Michael Kifer: What is the problem with the XSLT solution
Christian de Sainte Marie: RDF tools cannot directly parse a BLD document
Christian de Sainte Marie: so you lose a little bit of openness
Sandro Hawke: you still need to add parsetype collection
Michael Kifer: XSLT will do that
Christian de Sainte Marie: Would you need a different XSLT if you have user-defined functions etc.
Harold Boley: No. You covered BLD and it can be mirrored by XSLT
Sandro Hawke: it does not provide anything. It does not scale with dialects
Michael Kifer: We give an example and they can modify the XSLT example
Sandro Hawke: How to find this XSLT?
Michael Kifer: they publish it
Sandro Hawke: does not solve anything
yes, GRDDL could be a dialect and maybe module solution
Sandro Hawke: I want to use frame rules
Harold Boley: it is like meta programming
Chris Welty: only positional arguments cannot be transformed into frames
Gary Hallmark: so let's get rid of positional arguments and name them
Harold Boley: we had a breakout session about this and slides exist
Sandro Hawke: I want frame rules in RIF
Chris Welty: the only problem is the order
Sandro Hawke: the two options are to use numbers on the arguments or have some ordered flag
Michael Kifer: numbers are a general solution
Igor Mozetic: follow the principle of object-oriented XML
Gary Hallmark: isn't it possible to have a flag which says if it is ordered or not
Sandro Hawke: two questions: how to implement the ordering, and do we use the rdf namespace to implement a solution
Gary Hallmark: people really don't want the rdf namespace
Gary Hallmark: so use a flag attribute
Sandro Hawke: yes it solves the frame rule problem and makes me happy
Michael Kifer: solves the parsetype problem
Sandro Hawke: we still have the problem with RDF datatypes
Igor Mozetic: let's handle these two issues separately
Igor Mozetic: if we talk about OO XML, XSLT can make it RDF readable
Christian de Sainte Marie: requiring XSLT to make it RDF parsable is exactly the same as if we have no RDF compatibility
Harold Boley: we already have Const and Var
Michael Kifer: It is not clear why we need this rdf:value
PROPOSED: we'll have an "object-oriented" / "type-tagged" / "self-describing" XML, so that frame-rules can operate on RIF documents. Requires something like numbering arguments or rdf:parsetype="collection" or ordered="yes".
like ordered="yes" +1
PROPOSED: we'll have an XML such that frame-rules can operate on RIF documents. Requires something like numbering arguments or rdf:parsetype="collection" or ordered="yes".
I understand it like <Atom ordered="yes">...</Atom>?
Harold Boley: why do we need the oid of a frame? I think named arguments would do it
PROPOSED: we'll have an XML such that RIF can operate on RIF documents at a RIF-syntactic level instead of a DOM level. Requires something like numbering arguments or rdf:parsetype="collection" or ordered="yes".
+1 for ordered="yes"
RESOLVED: we'll have an XML such that RIF can operate on RIF documents at a RIF-syntactic level instead of a DOM level. Requires something like numbering arguments or rdf:parsetype="collection" or ordered="yes".
PROPOSED: use an RDF/XML-compatible syntax for RIF (more-or-less following the suggestions of)
-1
PROPOSED: use an RDF/XML-compatible syntax for RIF (more-or-less following the suggestions of) provided it does not make RIF implementations need to know anything about RDF.
Harold Boley: You could easily transform the stripped version into a version with stripe skipping, e.g.
XSLT would remove the slots
Adrian Paschke: you then would have a much more compact representation
(No activity for 8 minutes)
(No activity for 5 minutes)
PROPOSED: we will NOT use an RDF/XML-compatible syntax for RIF
Christian de Sainte Marie: it is not fully stripped
Christian de Sainte Marie: no, I retract what I said
Harold Boley: We could also put it on <Atom>
Chris Welty: put ordered attribute on Atom
Harold Boley: we could have a convention that arguments and members of lists are ordered by default
Sandro Hawke: it is not simpler
PROPOSED: use an XML attribute rif:ordered="yes" (as exemplified above), or an equivalent unique method to specify order, which works like rdf:parseType="Collection" (and the rif:type attribute gets qualified again.)
RESOLVED: use an XML attribute rif:ordered="yes" (as exemplified above), or an equivalent unique method to specify order, which works like rdf:parseType="Collection" (and the rif:type attribute gets qualified again.)
Metadata
Harold Boley: explains proposal for metadata
Jos de Bruijn: Why isn't identifier simply an IRI?
Michael Kifer: yes, it can be an IRI
Sandro Hawke: Curies?
Jos de Bruijn: Link between metadata and identifier?
Harold Boley: now it is totally decoupled
Jos de Bruijn: what is the advantage?
Michael Kifer: it gives you more freedom, refer to other pieces of metadata
Sandro Hawke: in the example, is pd the identifier for the group?
Jos de Bruijn: it is the identifier of the frame, not of the group
Michael Kifer: there is no formal relation
Christian de Sainte Marie: I would like to say that a certain rule is called "cmp" in a group of rules containing only one rule
Harold Boley: we allow cross-references between metadata
Jos de Bruijn: I disagree with the snapshot proposal
Jos de Bruijn: with the new proposal we can identify rules, so it overcomes my issue
Harold Boley: it is open how deep it will go; could be on Var
Sandro Hawke: id and meta roles; optional
Harold Boley: XML syntax is given at the end of the document
Christian de Sainte Marie: compatibility with PRD - a rule set will have parameters; how do I distinguish a group with and without parameters
Sandro Hawke: you make different groups
Michael Kifer: it has nothing to do with metadata, currently group has no parameters
Christian de Sainte Marie: Group can be used in other dialects, PRD
Christian de Sainte Marie: Currently in PRD you have ruleset, we could use the same syntax
Sandro Hawke: it is orthogonal to metadata
Gary Hallmark: Why is it a formula?
Sandro Hawke: Metadata could be in a separate document
(No activity for 89 minutes)
(No activity for 88 minutes)
(No activity for 6 minutes)
(No activity for 11 minutes)
Day 3 - Scribes: Igor Mozetic, Gary Hallmark, Andreas Harth, Axel Polleres
(Scribe changed to Igor Mozetic)
PROPOSED: Close Issue 34 as addressed by text currently in BLD at
+1
RESOLVED: Close Issue 34 as addressed by text currently in BLD at
BLD left-overs from yesterday
How to order args in XML syntax (ordered="yes")
Harold Boley: multiple children under a role should be ordered by convention
Why use convention instead of being explicit?
Sandro Hawke: is the above ordered or not?
Sandro Hawke: his proposal
Sandro Hawke: one needs a flag to indicate order
show of hands: 1 prefers the second option, the rest the first
Gary Hallmark: explicit ordered="yes" affects semantics
Igor Mozetic: what if we have explicit ordered="no" and the rest is assumed ordered?
Sandro Hawke: not sure
PROPOSED: RIF adopts an Object Oriented XML with the following ordering criteria: the child elements (roles) of class tags are unordered; the child elements of roles are ordered. The only roles where this matters are the non-unary ones: args and slot. Editors' Note: This is at risk and can be replaced by an ordered="yes" attribute.
PROPOSED: RIF adopts an Object Oriented XML with the following ordering criteria: the child elements (roles) of class tags are unordered; the child elements of roles are ordered. An XML attribute ttxml:collection="yes" is used for emphasis (optional if there are two or more child elements)
straw poll: 1 prefers implicit convention, majority explicit order
Gary Hallmark: keep it simple!
PROPOSED: we use "ordered=yes" to indicate ordered arguments in XML
straw poll: who prefers rif namespace: 4 yes vs. 4 no
PROPOSED: RIF will use rif:ordered="yes". This item will be marked "at risk", saying the name and XML details on this bit may change.
no objections
+1
RESOLVED: RIF will use rif:ordered="yes". This item will be marked "at risk", saying the name and XML details on this bit may change.
Christian de Sainte Marie: concern about datatype extensibility
Michael Kifer: vars can have symbol spaces in future dialects
Christian de Sainte Marie: agrees with Dave
Axel Polleres: we could have shortcuts in XML
straw poll: 5 vs 5
Christian de Sainte Marie: stick with current
+1 for Harold
PROPOSED: Close Issue 55, with decisions made so far this meeting.
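A rough sketch of what the rif:ordered="yes" resolution could look like in the XML syntax (the Atom/op/args/Const element names come from the discussion; the attribute placement and the example constants are assumptions, and per the resolution the details are marked at risk):

```xml
<Atom>
  <op><Const type="&rif;iri">http://example.org/buys</Const></op>
  <!-- the args role is ordered, analogous to rdf:parseType="Collection" -->
  <args rif:ordered="yes">
    <Const type="&xs;string">buyer</Const>
    <Const type="&xs;string">item</Const>
  </args>
</Atom>
```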
tabled for after break
Metadata
meta as frame conjunction, not arbitrary formula
otherwise the same proposal as by Harold
the short form is an alternative proposal
the short form covers the comments
Dave, between the two options above
Sandro Hawke: extensions are just extensions of Classes, not roles
straw poll: 7 for option 1, 2 for option 2, nobody objects to either
PROPOSED: Adopt the XML syntax for metadata in, using conjunction-of-frames instead of all formulas.
Christian de Sainte Marie: return to the AT RISK features
(Scribe changed to Gary Hallmark)
Dave Reynolds: problem with shortcut is metadata mixed with rule markup
Christian de Sainte Marie: also, shortcut limits metadata to be about container only
Jos de Bruijn: also, how to do structured metadata
no supporters for Dave's suggestion
PROPOSED: Adopt the XML syntax for metadata in and given as the first example on, using conjunction-of-frames instead of all formulas.
RESOLVED: Adopt the XML syntax for metadata in and given as the first example on, using conjunction-of-frames instead of all formulas.
Christian de Sainte Marie: any objection to putting metadata on all Class elements?
Jos de Bruijn: so you can put IDs on Const? ... it may be an IRI already
Sandro Hawke: Const could be Planck's constant (worthy of a comment)
PROPOSED: the <id> and <meta> elements can occur under any Class element (this matter is underspecified in 0036, and previous resolution).
Jos de Bruijn: oid of meta frames can be anything
RESOLVED: the <id> and <meta> elements can occur under any Class element (this matter is underspecified in 0036, and previous resolution).
PROPOSED: Close Issue 51 (metadata syntax and rule identification) given the decisions made so far this meeting.
Dave Reynolds: IDs can be any Const, maybe should limit to IRI
Sandro Hawke: nice to have locals ...
also nice to have existentials to avoid having to invent locals to nest frames
Dave Reynolds: doesn't like having a number as ID of an element
Andreas Harth: can I refer to rules from another doc? answer: yes
RESOLVED: Close Issue 51 (metadata syntax and rule identification) given the decisions made so far this meeting.
Christian de Sainte Marie: what about comments?
Sandro Hawke: rdfs:comment ... don't use XML comments, they are ignored
Michael Kifer: should list recommended metadata property names ... and include one for comment
PROPOSED: Close Issue 58 (Comments) by suggesting people use Dublin Core for metadata () e.g. dc:comment
Sandro Hawke: point people at Dublin Core (also has comment)
(No activity for 6 minutes)
(No activity for 5 minutes)
Chris Welty: OWL points people to a list of other properties for metadata (annotations)
PROPOSED: Close Issue 58 (Comments) by suggesting people use Dublin Core, RDFS, and OWL for metadata, along the lines of ... rdfs:comment, dc:creator, dc:description, dc:date, etc.
PROPOSED: Close Issue 58 (Comments) by suggesting people use Dublin Core, RDFS, and OWL for metadata, along the lines of -- specifically owl:versionInfo, rdfs:label, rdfs:comment, rdfs:seeAlso, rdfs:isDefinedBy, dc:creator, dc:description (when creator is an object, not a string).
Andreas Harth: foaf:maker is better than dc:creator because range is IRI, not just string
Chris Welty: these are just suggestions; users can also invent new ones if they like ... the metadata section of BLD.
Michael Kifer: where does this info go? we have no metadata section.
ACTION: Harold to add metadata syntax and commentary to BLD.
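Combining the conjunction-of-frames resolution with the vocabularies suggested for Issue 58, metadata on a group could be sketched like this in the presentation syntax (the (* *) annotation bracket appears later in these minutes; the _pd identifier, property values, and rule body are invented for illustration):

```
(* _pd[dc:creator -> "John Doe"
       dc:date -> "2008-06-12"
       rdfs:comment -> "an example rule group"] *)
Group (
  Forall ?x ( cpt:discount(?x) :- cpt:gold(?x) )
)
```

The meta part is a conjunction of frames rather than an arbitrary formula, per the resolution above.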
Michael Kifer: above are alternatives for presentation syntax
Sandro Hawke: prefers round or curly braces to pointy ones
ACTION: kifer to add metadata to the mathematical description
PROPOSED: close Issue 55 (striping and XML syntax and RDF/XML syntax compatibility) addressed by decisions made so far this meeting
RESOLVED: close Issue 55 (striping and XML syntax and RDF/XML syntax compatibility) addressed by decisions made so far this meeting
Jos de Bruijn: metadata target is ambiguous in presentation syntax, e.g. (* comment *) ?o[s->v] could apply to frame or oid
Harold Boley: could enclose target in the (* *)
Michael Kifer: could add regular parens to disambiguate
Jos de Bruijn: could define precedence for comment "operator"
Sandro Hawke: comment has lowest precedence
Sandro Hawke: don't have to formalize precedence until/unless we formalize rest of PS
Michael Kifer: issue with Harold's proposal is most of the time the *) is far away from the (*
PROPOSED: we don't need to settle things like precedence in the PS for now. we're fine.
PROPOSED: move Issue 57 (xml syntax extensibility) out of critical path
RESOLVED: move Issue 57 (xml syntax extensibility) out of critical path
Metadata survivability
Sandro Hawke: metadata in RIF should be put in comments in target language
Christian de Sainte Marie: comments could be less survivable
PROPOSED: We say metadata (including comments) SHOULD survive the translation from-and-back-to RIF
Chris Welty: but we don't distinguish comments from other metadata
Christian de Sainte Marie: don't care about XML comments
PROPOSED: We say metadata SHOULD survive the translation from-and-back-to RIF
RESOLVED: We say metadata SHOULD survive the translation from-and-back-to RIF
Christian de Sainte Marie: should preserve metadata even if you don't understand it
ACTION: Harold to put this in Conformance section of BLD
PROPOSED: close Issue 59 as discussed in this meeting
RESOLVED: close Issue 59 as discussed in this meeting
Relative IRIs
Harold Boley: related to Prefix
Sandro Hawke: base and prefix are separate issues
... base overrides location of document
Chris Welty: base is for unprefixed IRIs
Axel Polleres: base is not required, but prefix is required
Sandro Hawke: relative IRIs work well for imports
Michael Kifer: don't need both base and prefix
... relative IRIs most useful when no base is specified
Michael Kifer: need to define semantics
Chris Welty: it's a preprocessing step
Jos de Bruijn: shares Michael's concern
Dave Reynolds: presentation syntax should use absolute IRIs with curies. relative IRIs are in XML only
Jos de Bruijn: how to map a relative IRI to domain of interpretation
we seem to agree that relative to absolute expansion is a preprocessing step
Sandro Hawke: wants relative IRIs in PS because of PS->XML mapping
... i.e. roundtrip between XML and PS
PROPOSED: In Presentation Syntax, the IRIs in rif:iri Consts can be relative. A "base" directive will be added to the preamble.
PROPOSED: In Presentation Syntax, the IRIs in rif:iri Consts can be relative.
RESOLVED: In Presentation Syntax, the IRIs in rif:iri Consts can be relative.
ACTION: kifer to put relative IRI handling, with base directive, in BLD
ACTION: Harold to update BNF with base directive for relative IRI handling in PS
XML 1.0 or 1.1
Chris Welty: xml 1.1 may have issues with normalizing strings
PROPOSED: We'll use XML 1.0 (not XML 1.1) for the XML syntax for BLD.
Sandro Hawke: advice from experts is to use 1.0
... use 1.0 "as amended" to pick up new unicode chars
PROPOSED: We'll use XML 1.0 as amended (not XML 1.1) for the XML syntax for BLD.
RESOLVED: We'll use XML 1.0 as amended (not XML 1.1) for the XML syntax for BLD.
Michael Kifer: back to metadata disambiguation - propose to attach metadata no lower than FORMULA
ACTION: Harold to add XML 1.0 statement to BLD
At risk
ACTION: Sandro come up with style for "At Risk" comments in document.
Christian de Sainte Marie: rif:text, rif:ordered at risk
Chris Welty: equality?
Christian de Sainte Marie: at risk in head at least
Michael Kifer: in body it's just identity (easy)
PROPOSED: make equality-in-the-head (that is, equality that is more than syntactic sugar) a feature-at-risk.
PROPOSED: make equality-in-the-head a feature-at-risk.
RESOLVED: make equality-in-the-head a feature-at-risk.
Christian de Sainte Marie: conformance clause at risk?
Christian de Sainte Marie: at risk because of "strict mode"
PROPOSED: Mark "at risk" the strictness part of the conformance clause
RESOLVED: Mark "at risk" the strictness part of the conformance clause
Chris Welty: reminder that we can't make substantive changes to non-at-risk parts of the doc after last call
Sandro Hawke: a redo of last call costs at least 4 weeks extra review time
PROPOSED: Advance BLD to Last Call, pending satisfactory completion of the edits decided at this meeting.
are we ready for Last Call?
PROPOSED: Advance BLD to Last Call, pending satisfactory completion of the edits decided at this meeting.
we scroll through the BLD doc, making sure all editors' notes are taken care of
ACTION: kifer to make editorial changes associated with DTB links in BLD and remove editor's notes
Christian de Sainte Marie: the xml syntax translation table needs editorial work, but not a problem for last call
PROPOSED: Advance BLD to Last Call, pending satisfactory completion of the edits decided at this meeting.
Christian de Sainte Marie: who is the reviewer
Sandro Hawke: we all can be
RESOLVED: Advance BLD to Last Call, pending satisfactory completion of the edits decided at this meeting.
(Scribe changed to Andreas Harth)
agenda planning
PRD
Christian de Sainte Marie: changes since last time
... tightened up overview, removed diagrams in syntax section
... added running example
Sandro Hawke: i'd benefit from a five-ten minute summary of what PRD is
Christian de Sainte Marie: not an attempt to improve the PR language, but on common serialisation format
... tried to use the same syntax as BLD, but not the case everywhere (some could be expressed in BLD, some cannot)
... operational semantics is being described in terms of states and transitions
... proposed table to convert xml syntax to presentation syntax
... i think we can do without presentation syntax in PRD
... semantics specified using Plotkin-style system
... next are rule instantiation, conflict resolution, and halting test
... useful to describe the semantics in three-step system rather than just one function
... method to resolve conflicts differs from system to system, other methods are shared between systems
... conflict resolution requires some discussion
... halting test also differs across systems
Harold Boley: question about three-step approach - is it applied to single rule or the entire process?
Christian de Sainte Marie: describing the transition from one step to another step describes the semantics
Harold Boley: could finite state machines or finite automata also be used?
Christian de Sainte Marie: number of states is not necessarily finite
Harold Boley: datalog's not necessarily finite if your alphabet is infinite
Gary Hallmark: here we can define new things
Chris Welty: let's look at the reviews, start with Gary's
Michael Kifer: question regarding halting test
... assume you have core, have a test to stop after five applications, you may end up without a model?
Christian de Sainte Marie: yes
... if you have only assert in conclusion and no negation in condition, per default it should halt when you have minimal model
Michael Kifer: is there some generally accepted way to explain that?
Christian de Sainte Marie: given that description of semantics you cannot use fixpoint as a test, halt is when you have no transition any more
Harold Boley: result is a set of facts? you consider only the final configuration but not the actions?
Christian de Sainte Marie: same as when you're looking at a BRD ruleset, you're only interested in the model
... actions not covered in the semantics yet, we have to discuss that
... i wrote down the common points of PR systems, left out other things
Harold Boley: in the current version only working memory can be changed
Christian de Sainte Marie: nice property of semantics is that it's very compact
... execute might be difficult (e.g. what transition is a print?)
Gary Hallmark: if x != 0 inc(x) and start with x=1 the system will never terminate, if true inc(x) will only fire once in my system
Christian de Sainte Marie: discussing Gary's comment
Chris Welty: what about syntax?
Christian de Sainte Marie: derive presentation syntax from xml (BLD is organised the reverse way)
Gary Hallmark: should the organisation of PRD follow the BLD document?
Christian de Sainte Marie: not necessarily, easier to go from xml to presentation syntax
Chris Welty: issue is presentation not how syntax was derived
... do you use the presentation syntax at all?
Gary Hallmark: 1.3.2 example has different presentation syntax than defined
Christian de Sainte Marie: syntax is not yet defined in the beginning, just use a rule syntax
Chris Welty: let's first identify and collect issues, then resolve
Christian de Sainte Marie: if we use imply in BLD we'll use it in PRD as well
... forall can be nested in PRD
... constraints on variable can be imposed in the forall
Chris Welty: is that necessary?
Christian de Sainte Marie: not really, in PR system the evaluation of clauses must be ordered, but they really have different scope
... could flatten everything and then reconstruct
... but representation of rules should preserve optimisations that use forall ordering
... but community undecided on that
Chris Welty: any points or issues that should be resolved before 1st wd
Gary Hallmark: need some vision of a core which can span all rule languages
... currently BLD and PRD do not have any intersection
... syntax side-by-side comparison would be a first step towards common core
Christian de Sainte Marie: two goals: i) get PR community interested (those people do not care about BLD)
... ii) but also interoperate with other kind of rules
... PR community will not read BLD
... proposal is to have table with similarities and overlaps in separate section
Gary Hallmark: would be nice to have the core more explicit
Christian de Sainte Marie: will align syntax
Adrian Paschke: core with both syntactic and semantic overlap is desirable
...
syntax should be aligned
Christian de Sainte Marie: forall is the only point where there's considerable differences, will update tagnames to current ones
Gary Hallmark: write pattern and tests as conjunction into one formula
Christian de Sainte Marie: i want to target only the PR community
Gary Hallmark: i'd like to use that standard to bridge logic and production rule communities inside oracle
Christian de Sainte Marie: need that for if-then-else
Gary Hallmark: it's a trivial rewrite using a negation
Christian de Sainte Marie: there's no negation in BLD
Chris Welty: would an editor's note on nested foralls suffice for now?
Christian de Sainte Marie: there should even be a specific request for comments from reviewers
Chris Welty: if we align syntax, table with overlap between PRD and BLD, and add editor's note to ask community for comments, can we go to first wd?
Gary Hallmark: re order, you could have a policy that rules can only fire once
Christian de Sainte Marie: order related to age of rule could be another policy
ACTION: csma align syntax, table with overlap between prd and bld, and add editor's note to ask community for comments
ACTION: csma to work out policies for pick (refraction, recency, priority, sequential)
Gary Hallmark: it's also common to give a limit on cycles
ACTION: gary and adrian to review draft working draft
Adrian Paschke: we have not discussed update and execute
Christian de Sainte Marie: propose to table update
... say we have execute but semantics not defined yet
Gary Hallmark: could even remove update
Christian de Sainte Marie: would be nice if we have assign
...
with assign you can remove old value and use a new one
Gary Hallmark: assign used to synchronise working memory with external storage
Gary Hallmark: only have assert and remove for now, execute would need to be integrated with external functions
ACTION: csma to remove UPDATE, EXECUTE and ASSIGN from PRD
PROPOSED: Publish PRD as a FPWD, given the editorial changes decided so far this meeting (after confirmation of edits by Gary and Adrian).
Christian de Sainte Marie: the PRR group uses remove
ACTION: csma to change REMOVE to RETRACT
PROPOSED: Publish PRD as a FPWD, given the editorial changes decided so far this meeting (after confirmation of edits by Gary and Adrian).
RESOLVED: Publish PRD as a FPWD, given the editorial changes decided so far this meeting (after confirmation of edits by Gary and Adrian).
WG Futures
administration, future of working group
Sandro Hawke: future meetings, extension request, publication details
Chris Welty: schedule for DTB?
Axel Polleres: end of the week
Jos de Bruijn: SWC by monday
... June 2
Christian de Sainte Marie: SWC review by June 6
ACTION: chris review Axel's changes to DTB
Michael Kifer: BLD until June 16
Chris Welty: working group review of BLD by June 23
Michael Kifer: review DTB by June 16
Christian de Sainte Marie: changes to PRD by June 2
Gary Hallmark: review by June 4
Adrian Paschke: review by June 6
Sandro Hawke: do you make changes to FLD as well?
Michael Kifer: yes
Michael Kifer: FLD by June 16
Chris Welty: reviewing FLD by June 23
Sandro Hawke: publication date of June 23rd-ish?
future of WG
Chris Welty: need to ask for another extension - how long before everything's moved to rec?
Sandro Hawke: jos, when are you changing level of participation?
Jos de Bruijn: want to bring BLD, SWC, DTB to rec, not so much involved with future dialects
Sandro Hawke: need to get BLD and SWC to recommendation, get implementations and test suite
...
want to get FLD and DTB to rec
Jos de Bruijn: dependencies between BLD and DTB
Sandro Hawke: certainly need to be stable
Jos de Bruijn: DTB is essentially part of BLD
Sandro Hawke: want to consider getting them all to rec
Chris Welty: no document on fallback mechanism yet, can we take that to rec realistically?
Sandro Hawke: we should try
... UCR could be a note
Adrian Paschke: interested in events, reaction dialect
Christian de Sainte Marie: must make sure the PRD can be extended to events
Adrian Paschke: some reaction rules build on logical formalism, others on PR
Chris Welty: i'm nervous about PRD - a chair being an editor
... need more participation from the PR community
Chris Welty: discussion about more PR involvement during next telecon
Christian de Sainte Marie: want to reinforce: wrote and rewrote the draft three times because nobody else wanted to do it
Chris Welty: who's doing the work on core?
Christian de Sainte Marie: we should make clear that core specifies the fragment that is implementable in both logical and production systems
Michael Kifer: can't that be an appendix on BLD, we've had that before?
Chris Welty: core should be core document
Axel Polleres: why not just a document to restrict BLD?
... many things were outsourced to DTB
... what's the use of BLD then?
Michael Kifer: takes a long time to specify a dialect, way to go is to restrict one dialect
Sandro Hawke: why not have BLD and PRD editors write two pages to restrict their dialect to core?
Sandro Hawke: i'm tempted to go for 18 months instead of one year
Chris Welty: would prefer to scope work to fit into one year
Chris Welty: why not move core and fallback to wd?
Sandro Hawke: but there's no unity to rif without those
Christian de Sainte Marie: why not commit somebody to combine the specialisations into core? adrian would know both camps
Harold Boley: could we move the name fallback mechanism to interoperation mechanism?
Harold Boley: would not mind downgrading core and fallback mechanism to wd
Michael Kifer: other ways to achieve unity via a framework
Chris Welty: how about merging the two into core and interoperability
Gary Hallmark: how about test cases?
Sandro Hawke: implied in the documents
Sandro Hawke: merged core and fallback mechanism documents in wiki
Michael Kifer: we'd need lpd in
Chris Welty: 18 months is a long commitment for a chair
Christian de Sainte Marie: same as
Michael Kifer: what's the process (e.g. for adding events) if the group is finished
Christian de Sainte Marie: at some point need to terminate group to re-charter it
Sandro Hawke: there's wg's like css that run for twelve years now, others like owl finished, paused, and were re-established
(Scribe changed to Axel Polleres)
Chris Welty: I would like to opt for one year.
PROPOSED: The WG requests a 1-year extension, with the work plan/description
Chris Welty: What we have looks like a realistic plan.
Harold Boley: with which events should we align?
... RR, business rules, etc.
PROPOSED: The WG requests a 1-year extension, with the work plan/description
+1 (DERI)
RESOLVED: The WG requests a 1-year extension, with the work plan/description
UCR
Chris Welty: igor, you reviewed it?
Sandro Hawke: we also need to talk about f2f schedule then.
Adrian Paschke: completely restructured the draft according to our discussions.
... compacted CSFs removing discussions, pictures, etc. that was section 2, now Section 3.
Section three outlines the documents.
Sandro Hawke: Isn't this kind of like the reader's guide we need anyway?
Michael Kifer: Is it still "Use Cases and Requirements"?
Jos de Bruijn: this section is neither UC nor R.
Chris Welty: could go into a separate overview/guide document.
Adrian Paschke: Section 4
Axel Polleres: abridged syntax now in DTB
... should be pointed at.
Discussion about syntax in the examples.
... Especially about :- vs. <-
Chris Welty: This needs to be changed to RIF PS.
Christian de Sainte Marie: ad overfull boxes on the drafts, you should use colons for indentation, then it works.
Adrian Paschke: Use cases will be updated.
next section Requirements.
... distinction between phase 1 and phase 2 changed to "general requirements" and "objectives" which we could address but we do not promise.
ChrisW/Sandro: Objectives should rather be "future requirements" or "desiderata" or "wishes".
Chris Welty: you think June 16 is a realistic deadline?
Adrian Paschke: yes.
Chris Welty: We need UCR on the teleconf agenda in two weeks.
ACTION: csma put UCR (esp. requirements) on agenda
Sandro Hawke: requirements should not have been changed.
Adrian Paschke: I only restructured.
Igor Mozetic: We should have an example with blank nodes in the head/skolemization, to indicate limitations.
Adrian Paschke: this is basically UC1, could be another example there.
Sandro Hawke: we should have a one-paragraph version of each UC set aside visually.
Christian de Sainte Marie: check whether this is already there, we had that request already.
Harold Boley: Maybe move things to an appendix?
Adrian's gonna look at making the UCs shorter along these lines.
(No activity for 12 minutes)
(No activity for 12 minutes)
future F2Fs
Chris Welty: We have 4 possible dates in October. TPAC, ISWC, RuleML, RR, Orlando or Karlsruhe or Mandelieu.
Christian de Sainte Marie: we have September difficulties typically with people on teaching assignments.
Jos de Bruijn: I am travelling second half of Sept.
Chris Welty: at least one person opposing TPAC, Oct 23-24
Christian de Sainte Marie: We might consider having one more in between.
... last week of August.
Sandro Hawke: I can host any of the August meetings.
Gary Hallmark: I could try in Oregon.
Suggested dates 28-29 August.
... for both those options. Portland pro: 4, Boston pro: 5.
Sandro Hawke: I have to back off 29th, that is a bad time for MIT.
... will check for alternatives.
Sandro Hawke: back to previous state: 28-29 IS ok for MIT.
ACTION: Gary confirm hosting offer for Aug 28-29
ACTION: Sandro confirm hosting offer for Aug 28-29
FLD
Discussion (guest): FLD should not *require* other dialects to be derived from FLD by specialization, but that should be weakened to "expected" or alike.
Christian de Sainte Marie: it is ambiguous whether the PS is normative or not.
I don't know whether this is also in BLD or FLD, but I removed this in DTB: "The compact URI notation is not part of the RIF-BLD syntax." Likewise, I think all things which say that the PS is not normative should be removed. (that was not a scribe comment but a personal one)
Discussion on that the EBNF does not represent the Presentation Syntax, since it misses some constraints of it.
Michael Kifer: we will not change this, but add a clarifying text.
Christian de Sainte Marie: before going to WD, do we need more reviews?
Chris Welty: I think we need to have it reviewed once again.
Christian de Sainte Marie: let's ask for reviews and then decide for publication.
Chris Welty: we have only June 16-23 for the reviews.
Jos and ChrisW will review FLD
ACTION: chris to review FLD [june 23]
ACTION: jos to review FLD [june 23]
ACTION: jdebruij2 to review FLD [june 23]
PROPOSED: conditional on reviews by Jos and Chris, publish FLD as 2nd WD
+1 (DERI)
RESOLVED: conditional on reviews by Jos and Chris, publish FLD as 2nd WD
PROPOSED: BEER!
+0.5 (Bulmers)
RESOLVED: BEER!
http://www.w3.org/2005/rules/wiki/F2F10_Minutes
crawl-002
refinedweb
13,053
58.82
Yes, but else it makes the code look complex; that's why I didn't want to use it. It looks a lot neater with a continue, and easier to read I think. Do you mean that it remembers the length? How would it know x had not changed?
wow, I had no idea this post was still ongoing lol. Here is my code, yours is definitely better esbo.
Code:
#include <iostream>
#include <string>
#include <vector>
#include <conio.h>

using std::cout;
using std::cin;
using std::endl;
using std::vector;
using std::string;

int main(){
    //asks for list of words
    cout << "Please give me a list of words:" << endl;
    string x;
    vector<string> words;
    //adds the input to the vector
    while (cin >> x){
        words.push_back(x);
    }
    //checks to see if user inputted anything
    if (words.size() == 0){
        cout << "I didn't receive any input." << endl;
        cout << "Press any key to continue...";
        _getch();
        return 1;
    }
    //the containers
    string longest = words[0];
    string shortest = words[0];
    //checks for the shortest and longest string
    for(int i = 1; i != words.size(); i++){
        if(words[i].length() < shortest.length()){
            shortest = words[i];
        }
        if(words[i].length() > longest.length()){
            longest = words[i];
        }
    }
    //the output
    cout << "Shortest: " << shortest << endl;
    cout << "Longest: " << longest << endl;
    cout << "Press enter to continue...";
    _getch();
    return 0;
}
I got my 'big' C program working in C++. It seems the 'bug' already existed (I had done a little processing when I should have checked for end of file first); however, it never caused any harm to the C program. But as I had rearranged the position of some functions to get it to compile, I think that may have made a harmless action dangerous. And by converting it to compile with C++ I uncovered 2 hidden bugs, which can't be a bad thing. I might write the output section in C++ (cout) to make it easier to modify, but I am not familiar with the print format of cout yet; if it's the same as printf I probably won't bother.
hey guys i have a question, if we also want to output the length of the longest/shortest input, is it just cout << longest/shortest.length? sorry, it's probably a noob question..
longest.length() will give you the length of the string, yes. -- Mats
Yes, it is shortest/longest.length().
Quote: I might write the output section in C++ (cout) to make it easier to modify but I am not familiar with the print format of cout yet, if it's the same as printf I probably won't bother.
I/O stream format flags. You may want to start a separate thread on this. In any case formatting with streams is completely different. It is awkward to a C programmer, but it is type-safe. E.g. you simply can't make the following mistake with std::cout:
Code:
int n = 10;
printf("%s", n);
If the awkwardness bothers you (formatting output with format strings is just too convenient) you can download boost and use Boost.Format, which gets you the best of both worlds: you can use format strings but in a type-safe way.
thanks.. also i was wondering what does this part do: _getch()
I used _getch() because my compiler closed my cmd window every time i ran my programs. You can remove that and the cout above it. Don't use it if possible... I read it's not exactly industry standard, and affects performance... but atm I'm only running small programs so whatever keeps my cmd window open! =]
It's nice to know you're learning c++ too jewelz. Maybe we can learn from each other!
Basically it reads a key from the keyboard, without echoing it to the screen, so it is effectively "press any key to continue". It is unbuffered so you don't have to press carriage return. When your program exits the window will close, so you need something to stop it closing.
I usually run mine from a batch file with a pause in it to keep the window open. I don't think _getch will affect performance at all as it is just waiting for a signal that a key has been pressed.
hey guys, just wanna say thanks for the help. i really appreciate it =)
ok this is probably the most retarded post ever, but how do i make the output come out within quotation marks? for example if input was: Hello I am nice.
output:
Longest: Hello
Shortest: I
but i need it to be:
Longest: "Hello"
Shortest: "I"
Code:
cout << "Shortest: " << "\"" << shortest << "\"" << endl;
cout << "Longest: " << "\"" << longest << "\"" << endl;
http://cboard.cprogramming.com/cplusplus-programming/111070-longest-shortest-string-2-print.html
CC-MAIN-2014-52
refinedweb
810
72.46
The Q3ToolBar class provides a movable panel containing widgets such as tool buttons. More...
#include <Q3ToolBar>
This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.
Inherits Q3DockWindow.
Toolbars can be docked in Q3DockAreas or float as top-level windows. Q3MainWindow provides four dock areas.
See also Q3MainWindow.
This property holds the toolbar's label.
If the toolbar is floated the label becomes the toolbar window's caption. There is no default label text.
Access functions:
Destructor.
Adds a separator to the right/bottom of the toolbar.
Deletes all the toolbar's child widgets.
Returns a pointer to the Q3MainWindow which manages this toolbar.
http://doc.trolltech.com/4.0/q3toolbar.html
crawl-001
refinedweb
123
62.34
Adding more ES6 features
Let's add some more ES6 language features to our index.js and see how well the most current web browsers, mentioned in the previous post, have implemented them:

import greeting from './utils'

let myGreeting = greeting('John', 'Smith')
console.log(myGreeting)

//Testing the map array helper
console.log('-----------------')
console.log('Map array helper:')
console.log('-----------------')
const numbers = [1, 2, 3, 4, 5, 6]
const squares = numbers.map(function(number) {
  return number * number
});
console.log(`Squares with map function: ${squares}`)

//Testing an ES6 class
console.log('--------')
console.log('Classes:')
console.log('--------')
class Animal {
  constructor(options) {
    this.name = options.name
    this.age = options.age
  }
  makeSound() {
    return 'WHOOAAA'
  }
  move() {
    console.log('I\'m moving forward')
  }
}
let myAnimal = new Animal({name: 'Simba', age: 10})
let sound = myAnimal.makeSound()
myAnimal.move()
myAnimal.makeSound()

//Destructuring
console.log('--------------')
console.log('Destructuring:')
console.log('--------------')
let customer = {
  name: 'Great Customer Inc',
  address: 'Los Angeles'
}
let {name, address} = customer
console.log(name)
console.log(address)

//Rest and spread operator
console.log('-------------------------')
console.log('Rest and spread operator:')
console.log('-------------------------')
let strictOopLanguages = ['C#', 'Java', 'C++']
let looseOopLanguages = ['Python', 'Ruby']
let funcLanguages = ['F#', 'Scala', 'Erlang']
let frontEndLanguages = ['JavaScript', 'HTML', 'CSS']
let allLanguages = [...strictOopLanguages, ...looseOopLanguages, ...funcLanguages, ...frontEndLanguages]
for (let lang of allLanguages) {
  console.log(lang)
}

//Generator functions
console.log('--------------------')
console.log('Generator functions:')
console.log('--------------------')
function* programmingLanguages() {
  yield 'C#'
  yield 'Java'
  yield 'JavaScript'
  yield 'F#'
}
for (let lang of programmingLanguages()) {
  console.log(lang)
}
//Promises
console.log('---------')
console.log('Promises:')
console.log('---------')
function executePromise() {
  let promise = new Promise((resolve, reject) => {
    setTimeout(() => {
      let success = {'message': 'all good'}
      resolve(success)
    }, 5000)
  })
  promise.then(response => console.log(response.message))
    .catch(e => console.log(e.exception))
}
executePromise()

That's a collection of ES6 features including the following:

- The map array helper
- Classes
- Destructuring
- Rest and spread operator
- Generator functions
- Promises

If you don't know these features then consult this page where you can find links to examples and explanations. Then open a command prompt, navigate to the project root and run the following command to generate the bundle.js file:

npm run build

…and then open index.html in a web browser with Development Tools open (F12). If you have the latest versions of Chrome, Firefox and Edge then you'll see that all of them managed to run our code. Here's the console output in Chrome 60:

That's very positive, but we cannot assume that all clients have updated their browsers. Also, later on when ES7, ES8 etc. features come along, browser manufacturers will always require some time to implement them in their products. Finally, JS developers want to be able to use new features of the language without worrying much about browser support. This is where Babel.js enters the scene.

Configuring Babel

Babel.js is a JS compiler that basically transforms "new" JS into "old" JS. Babel has a number of NPM packages to make it work with webpack. We'll need to install 3 of them to be exact. Run the following NPM command in the command prompt:

npm install --save-dev babel-loader babel-core babel-preset-env

The loader package ensures that Babel can work with webpack. It is a so-called module loader in webpack. A module loader is a pre-processor that is applied on all or some subset of the source files in the webpack project. Pre-processors are executed before webpack creates the bundle.
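To make concrete what a transpiler does, here is a hand-written sketch (not actual Babel output, and the names are made up for illustration) that puts an ES6 class next to the kind of ES5 prototype code it roughly compiles down to:

```javascript
// ES6 version: class syntax with a template literal
class Greeter {
  constructor(name) {
    this.name = name
  }
  greet() {
    return `Hello, ${this.name}`
  }
}

// Roughly equivalent ES5 version, similar in spirit to transpiler
// output: a constructor function plus a prototype method
function GreeterES5(name) {
  this.name = name
}
GreeterES5.prototype.greet = function () {
  return 'Hello, ' + this.name
}

console.log(new Greeter('John').greet())    // Hello, John
console.log(new GreeterES5('John').greet()) // Hello, John
```

Both versions behave the same at runtime, which is exactly the point: Babel rewrites the syntax, not the semantics.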
Make sure you understand the distinction between these pre-processing libraries and webpack itself. We can instruct webpack to execute a pre-processor, but it's not webpack's responsibility to make them work properly. It is up to the pre-processor to execute its work. When it's done, webpack takes over and finishes its own main task. Babel-core is a JS parser, and the preset package contains the rules for transforming ES6-specific features into equivalent ES5 code. The terminology is a bit confusing because a module loader is called a rule in webpack 2, and rules make up part of the webpack module system. We can also tell webpack which files to apply the rule on. This is called a test. E.g. we want the Babel rule to be applied on JS files only and not e.g. CSS files. That wouldn't make any sense. Rules, or module loaders, make webpack even more exciting than what it is. They show the pluggable nature of webpack where we can add any number of modules to perform some action on the source files. So by default webpack is "only" responsible for creating the JS bundle, but it can be extended with rules. Here's the extended webpack.config.js with the new section called "module":

const path = require('path')

const config = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: 'bundle.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: 'babel-loader'
      }
    ]
  }
}

module.exports = config

"module" stands for the module system of webpack mentioned above. Then comes the array of rules with a single entry with 3 properties. We declare the loader to be the Babel loader. We want to apply it to all files ending with .js, which is declared through a regular expression in the test property. Finally, we want the files in the node_modules folder to be excluded. We're not done yet. We have to tell the Babel loader to apply the babel-preset-env transpiler on each input file. We don't do that in the webpack config file though.
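The test property is nothing more than a regular expression that webpack matches against each file path. A quick sketch of how that regex behaves on its own (the file names here are made up):

```javascript
// The same regex as in the rule above
const jsRule = /\.js$/

// Paths the Babel loader would process...
console.log(jsRule.test('src/index.js'))  // true
console.log(jsRule.test('src/utils.js'))  // true

// ...and paths it would skip
console.log(jsRule.test('styles/main.css')) // false
console.log(jsRule.test('index.html'))      // false
```

The `$` anchor matters: it ensures only paths that end in .js match, so a file like app.js.map would not be handed to Babel either.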
Recall that it's not webpack that performs the JS code transformation but Babel. As soon as webpack calls upon the Babel rule we're on Babel's turf, not webpack's. At this moment the Babel loader will need to know what we want it to do, and we can achieve that through Babel's own configuration file called .babelrc. The Babel loader will be looking for this file. Add the following JSON object to .babelrc:

{
  "presets": ["env"]
}

We tell Babel to use the preset called "env" which stands for the babel-preset-env processor.

All right, let's see if this thing works. Run the usual npm command again…

npm run build

Run index.html in your web browser and… I don't know about you but I got an exception:

The exception is complaining about the following line of code:

var _marked = /*#__PURE__*/regeneratorRuntime.mark(programmingLanguages);

…where "regeneratorRuntime" is not defined. This is a case where webpack and the various loaders are sort of out of sync and we need to find a solution. This thread on StackOverflow gives the necessary hints. First we need to install the babel-polyfill processor as well:

npm install --save-dev babel-polyfill

…and add it to the first position of the entry point array in webpack.config.js:

const config = {
  entry: ['babel-polyfill', './src/index.js'],
  //rest of the document ignored
}

Build the project with npm run build again, test it in Chrome and it should work again. The code also works in IE11:

Babel stages

Another important component related to Babel is called a stage. This website comes with a good overview of Babel including stages:

"JavaScript has some proposals for new features that are not yet finalized. They are separated into 5 stages (0 to 4). As proposals gain more traction and are more likely to be accepted into the standard they proceed through the various stages, finally being accepted into the standard at stage 4.
There is no babel-preset-stage-4 as it's simply babel-preset-es2015."

These stages imply that if a new JS feature comes along that has not yet been fully accepted, developers can still use it and have Babel transpile it into ES5. The requirement is to install an extra package that corresponds to the stage where the feature is located. Here's how stage 2 would be installed:

npm install --save-dev babel-preset-stage-2

…and then we can tell Babel to apply the stage 2 level code additions as follows in .babelrc:

{
  "presets": ["env", "stage-2"]
}
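Finally, to get a feel for what babel-preset-env actually does, here is a rough, hand-written sketch of an ES6 function next to an ES5-style equivalent of the kind Babel emits. Note that Babel's real output also injects helper functions, so this is an illustration rather than exact compiler output; both versions behave the same:

```javascript
// ES6 input: const, arrow function, template literal, default parameter
const greetES6 = (name = 'stranger') => `Hello, ${name}!`

// Roughly what an ES5 transform produces: var, function expression,
// explicit default handling, and string concatenation
var greetES5 = function (name) {
  if (name === undefined) { name = 'stranger' }
  return 'Hello, ' + name + '!'
}

console.log(greetES6('webpack'))
console.log(greetES5('webpack'))
```

Running both through the bundle should print identical greetings, which is the whole point of the transform: same behavior, older syntax.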
https://dotnetcodr.com/2017/08/31/using-webpack-2-and-npm-to-bundle-various-resources-of-a-web-application-part-5-adding-babel-js-for-es5-transformation/
Web frameworks for Swift started popping up almost as soon as Swift went open source at the end of 2015. Vapor quickly became one of the most used libraries for Swift on the web. In this quick tutorial you'll learn the basics of using Vapor to build web applications using Swift.

What You'll Need

Before we take a look at how Vapor works we need to get a few tools. The first thing we need is Swift 3 with Swift Package Manager. Follow this guide to get Swift running on your system. If you're on Linux you may find this guide easier to digest than Apple's site. You can verify that you have Swift 3.0 or greater by running this command:

swift --version

We also need Vapor. Vapor comes with a command-line interface that simplifies many of the tasks associated with building, running and deploying Swift web apps. Install Vapor by running the following in the terminal:

curl -sL toolbox.vapor.sh | bash

Verify it is working by running:

vapor --help

Is This Thing On?

Let's turn on the fog machine and make sure Vapor works on our machine. We'll also browse around the sample project it generates. Create a Vapor project by using the Vapor CLI's vapor new command:

vapor new VaporSample

Make sure everything is working correctly by building and running the sample project:

cd VaporSample
vapor build
vapor run serve

Visit the running app in your browser and you should see that "it works":

Now that we know everything is working, let's dig into what makes Vapor tick.

How Did It Do That?

A web app in Vapor is called a Droplet. Open VaporSample/Sources/App/main.swift to see where our Droplet is defined. It should look like this:

import Vapor

let drop = Droplet()

drop.get { req in
    return try drop.view.make("welcome", [
        "message": drop.localization[req.lang, "welcome", "title"]
    ])
}

drop.resource("posts", PostController())

drop.run()

Vapor uses the Droplet to create routes using Vapor's Router. For example, a GET request to /hello would look like this:

drop.get("hello") { request in
    return "Hello, world!"
}

In main.swift, the default route on lines 5-9 makes use of the welcome.leaf view that is located at VaporSample/Resources/Views/welcome.leaf. Since it doesn't contain a route name it operates at the base URL. This route uses a templating language built into Vapor called Leaf. We will use Leaf when we build our first sample.

Line 11 defines a RESTful resource named posts to represent blog posts. It creates several routes in VaporSample/Sources/App/Controllers/PostController.swift. This sample uses features that are outside the scope of this post but you can read this for more information on Vapor Controllers.

The drop.run() at line 13 starts the server.

Now that we know the basics of how Vapor works, we can use the router and the Leaf templating language to render a Hello World example that greets the visitor by name.

Hello Vapor

The first thing we need to do is define a route for a GET request to /hello. We'll also remove the existing examples so that they don't distract us as we learn. Replace the contents of main.swift with the following code:

import Vapor

let drop = Droplet()

drop.get("hello") { request in

}

drop.run()

We want to greet the visitor by their name so we'll have them pass it in as a query parameter. Let's update the hello route to access the name query parameter passed into the request:

drop.get("hello") { request in
    let name = request.data["name"]?.string ?? "stranger"
}

In the highlighted line, we check if there's a name value in the request.data dictionary provided by Vapor's Request object. If there is, we store a string representation of it in a local name constant. If not, we store "stranger" as their name.

Next we'll use a Vapor View to render an <h1> tag that says hello to our visitor. Start by creating a hello.leaf file in VaporSample/Resources/Views:

touch Resources/Views/hello.leaf

We'll set up our hello.leaf markup to extend the base.leaf layout provided in the sample.
It has head and body placeholder regions defined using #import that we can put our view content into. Here's what base.leaf should look like:

<!DOCTYPE html>
<html>
  <head>
    #import("head")
  </head>
  <body>
    #import("body")
  </body>
</html>

Add the following code to hello.leaf:

#extend("base")

#export("head") {
    <title>Hello from VaporSample</title>
}

#export("body") {
    <h1>Hello #(name)!</h1>
}

On line 1 we state that we're using the base.leaf layout. We then use #export to declare markup to be placed into the #import placeholders in the base layout. Line 8 uses a name property that we'll pass into the view from our route.

Head back to main.swift and add the highlighted code to the hello route to create and return the hello view:

drop.get("hello") { request in
    let name = request.data["name"]?.string ?? "stranger"

    return try drop.view.make("hello", [
        "name": name
    ])
}

We use drop.view.make to create a view named hello that passes in a name parameter to be used by the view. That's everything we need to render this web page passing in a name, so let's test it. Head back to the terminal and build the app using the Vapor CLI:

vapor build

Once successful, use the Vapor CLI to fire up the Vapor server:

vapor run serve

Head to the hello route in your browser to see the greeting from Vapor. Now try it with your own name.

Using JSON in Vapor

What if you wanted to build an API for your mobile app using Vapor? Chances are you'll want to work with JSON. Thankfully JSON support is built-in. Make a route for a POST request to /person that takes an x-www-form-urlencoded body containing name and city values.
We're going to need to work with Vapor's HTTP module so add the following import statement to the top of main.swift:

import HTTP

Add the highlighted code directly after the closing brace of the hello route in main.swift:

drop.get("hello") { request in
    // contents omitted for brevity
}

drop.post("person") { request in

}

The first thing we need to do is make sure that both name and city are passed in since they're required for our route. Use Swift's guard statement to abort and return a Bad Request if we're missing either of these parameters, while also conveniently setting them to local constants:

drop.post("person") { request in
    guard let name = request.data["name"]?.string,
          let city = request.data["city"]?.string else {
        throw Abort.badRequest
    }
}

Note we use the same request.data dictionary we used for query parameters to access x-www-form-urlencoded values. Now that we have the request values, let's generate a response. Since we want to simulate creating a person in our backend we'll need to send a 201 Created status code in our response. We'll also send back a JSON version of the created object. Add the highlighted code to generate the response:

drop.post("person") { request in
    guard let name = request.data["name"]?.string,
          let city = request.data["city"]?.string else {
        throw Abort.badRequest
    }

    return try Response(status: .created, json: JSON(node: [
        "name": name,
        "city": city
    ]))
}

Build and run the application:

vapor build
vapor run serve

Send a POST request using cURL or Postman containing name and city and you should get back JSON with a 201 response. Here's the cURL request if you prefer the terminal:

curl -v --data "name=Brent&city=Philadelphia"

You should see this at the end of the response:

< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Mon, 31 Oct 2016 14:23:00 GMT
< Content-Length: 38
<
* Connection #0 to host localhost left intact
{"city":"Philadelphia","name":"Brent"}

It's super simple to work with JSON in Vapor.
What's really clever is that the JSON data passed into a route is accessible the same way as query parameters and x-www-form-urlencoded body values. For more information on this, check out Vapor's docs. If you want to learn about deploying your app to Heroku, follow this tutorial. If you're more of a visual learner, here's a video tutorial that covers most of what's covered in this blog post:

That's a Wrap

Being able to use Swift on the web is super exciting. Vapor's built-in JSON support, ease of use and painless deployment make it an excellent candidate for building APIs and web applications using Swift. Here are some things to try next:

- Look into Router Grouping for creating different versions of an API.
- Add custom commands to the Vapor CLI.
- Check out some of the other Swift web frameworks such as Kitura or Perfect.

I'm excited to see what you build with Swift on the web. You can find me on Twitter @brentschooley or send me an email at brent@twilio.com.
https://www.twilio.com/blog/2016/10/getting-started-with-the-vapor-swift-web-framework.html
In the iOS-versus-Android developer war I'm a long-time member of team iOS. I proudly charge my Apple Mouse like a psychopath, use the word "courage" unironically, and wear the most ridiculous dongles in public. You don't choose the dongle life. The dongle life chooses you.

I do all of this because iOS provides marginally better software than Android (in my opinion), and as a software developer, I've been trained to take life-or-death stances over the most trivial of details. It's what defines us. But there's one part of iOS that even I cannot defend, and that's its bizarre keyboard behavior. That's because iOS, a platform with over 2 million deployed applications, is surprisingly awful at providing a decent default keyboard for developers.

If you've worked in NativeScript apps for more than a few hours, you've probably hit the situation in the gif below at least once. (Notice how I move focus to the second form field, and then I can't see what I'm typing because the keyboard is covering the input.)

iOS, the operating system that lets you authenticate with your face, has trouble letting users see what they're typing.

NOTE: For you Android lovers out there, yes, Android doesn't have this problem. You've got this one, but if you bring it up in an argument with me prepare to answer for this hot mess.

Luckily, there's a library you can install quickly to help. The iOS IQKeyboardManager library is a drop-in solution that will likely solve all of your iOS keyboard issues out of the box. To try it, install the NativeScript wrapper of the library using the tns plugin add command.

tns plugin add nativescript-iqkeyboardmanager

And... that's all you need to do, as IQKeyboardManager works without you needing to write any code.
For example, if I just rerun the same app I showed earlier, not only does the keyboard no longer block my input, I also get a few helpful keyboard controls and labels for free. IQKeyboardManager handles the keyboard like iOS should by default.

TIP: You can try this example out for yourself on NativeScript Playground.

Although IQKeyboardManager solves most of the common issues you'll have with the iOS keyboard by default, there are some additional customization options you might want to try out. Let's look at how they work.

Note in the gif above how IQKeyboardManager provides helpful arrow buttons to navigate between textfields. IQKeyboardManager is only able to provide those arrows if your textfields are sibling elements, aka if your markup looks something like this.

<StackLayout>
    <TextField hint="Email"></TextField>
    <TextField hint="Password"></TextField>
</StackLayout>

The problem is oftentimes in real-world forms you need additional markup to make your form look good. In those cases you'll want to use the IQKeyboardManager plugin's <PreviousNextView> element. The usage of the element varies depending on whether you're using NativeScript Core or NativeScript with Angular.

In NativeScript Core apps you need to bring in IQKeyboardManager as an XML namespace, which looks a little something like this.

<Page xmlns:
    ...
    <!-- This is the wrapper that enables the previous/next buttons -->
    <IQKeyboardManager:PreviousNextView>
        <!-- This single StackLayout child of the PreviousNextView is also necessary. -->
        <StackLayout>
            <!-- Your textfields go here. -->
            <StackLayout>
                <TextField hint="Email"/>
            </StackLayout>
            <StackLayout>
                <TextField hint="Password"/>
            </StackLayout>
        </StackLayout>
    </IQKeyboardManager:PreviousNextView>
    ...
</Page>

If you're using Angular, you need to register the <PreviousNextView> element with the following two lines of code in your TypeScript component.
import { registerElement } from "nativescript-angular";
registerElement("PreviousNextView", () => require("nativescript-iqkeyboardmanager").PreviousNextView);

And then you use the element directly in your markup.

<PreviousNextView>
    <StackLayout>
        <!-- Your textfields go here. -->
        <StackLayout>
            <TextField hint="Email"/>
        </StackLayout>
        <StackLayout>
            <TextField hint="Password"/>
        </StackLayout>
    </StackLayout>
</PreviousNextView>

NOTE: Vue usage of the plugin is possible too. Check out the IQKeyboardManager plugin's docs for the syntax you need.

Although IQKeyboardManager does provide a nice experience by default, it also gives you a number of customization options to meet the needs of your apps. Here's a quick run down of the things you can do (and refer to the plugin's documentation for the exact syntax you'll need).

iOS provides both a light and dark keyboard, and the IQKeyboardManager plugin makes it easy to toggle between the two.

By default iOS does not hide the keyboard when the user taps outside of the control. If you want to enable this behavior, aka you want to close the keyboard when the user taps outside the textfield, the plugin provides an option for that.

Overall, the IQKeyboardManager plugin provides functionality that should arguably be a part of iOS itself. (If you know of a good reason the OS should allow you to type in a textfield you can't see, let me know in the comments.) Luckily NativeScript makes it easy to include native libraries such as IQKeyboardManager, so including this functionality in your NativeScript app is as easy as a quick install. So if you're running into problems with the keyboard in your iOS app, give the plugin a shot. And refer to this article's example code if you want to see the plugin in action.
https://www.nativescript.org/blog/customizing-the-ios-keyboard-in-your-nativescript-apps
Just some time ago I did a challenge on Codewars. I was successful with my attempt, but I was struck when I saw how clever and simple one solution was. Unfortunately, I do not understand one part of that code, which is the regex. Below you can see the whole solution:

function duplicateCount(text){
  return (text.toLowerCase().split('').sort().join('').match(/([^])\1+/g) || []).length;
}

This was the description:

Count the number of Duplicates.

Example
"abcde" → 0 # no character repeats more than once
"aabbcde" → 2 # 'a' and 'b'
"aabBcde" → 2 # 'a' occurs twice and 'b' twice (b and B)

If you have some time, would you mind explaining to me what this part of the solution, (/([^])\1+/g) || []), is about? I do know some regex and I am also doing the JS course on freeCodeCamp, so I have done something with it, but this is beyond what I know so far and I did not grasp the concept of this part of the solution. So far I have encountered "^" only in combination with some characters in brackets. Also I do not get this part: || []. Why is it not inside the forward slashes?
https://forum.freecodecamp.org/t/regex-in-js-code/475572
Python provides two very important features to handle any unexpected error in your Python programs and to add debugging capabilities to them −

Exception Handling − This is covered in this tutorial. Here is a list of the standard exceptions available in Python: Standard Exceptions.

Assertions − This is covered in the Assertions in Python tutorial.

An assertion is a sanity-check that you can turn on or turn off when you are done with your testing of the program. The easiest way to think of an assertion is to liken it to a raise-if statement (or to be more accurate, a raise-if-not statement). An expression is tested, and if the result comes up false, an exception is raised.

Assertions are carried out by the assert statement, the newest keyword to Python, introduced in version 1.5.

Programmers often place assertions at the start of a function to check for valid input, and after a function call to check for valid output.

When it encounters an assert statement, Python evaluates the accompanying expression, which is hopefully true. If the expression is false, Python raises an AssertionError exception. The syntax for assert is −

assert Expression[, Arguments]

If the assertion fails, Python uses ArgumentExpression as the argument for the AssertionError. AssertionError exceptions can be caught and handled like any other exception using the try-except statement, but if not handled, they will terminate the program and produce a traceback.

Here is a function that converts a temperature from degrees Kelvin to degrees Fahrenheit. Since zero degrees Kelvin is as cold as it gets, the function bails out if it sees a negative temperature −

#!/usr/bin/python

def KelvinToFahrenheit(Temperature):
   assert (Temperature >= 0),"Colder than absolute zero!"
   return ((Temperature-273)*1.8)+32

print KelvinToFahrenheit(273)
print int(KelvinToFahrenheit(505.78))
print KelvinToFahrenheit(-5)

When the above code is executed, it produces the following result −

32.0
451
Traceback (most recent call last):
  File "test.py", line 9, in <module>
    print KelvinToFahrenheit(-5)
  File "test.py", line 4, in KelvinToFahrenheit
    assert (Temperature >= 0),"Colder than absolute zero!"
AssertionError: Colder than absolute zero!
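As noted above, AssertionError can be caught and handled like any other exception. Here is a minimal runnable sketch of that, written in Python 3 syntax (this tutorial's examples otherwise use Python 2):

```python
def kelvin_to_fahrenheit(temperature):
    # Same check as the tutorial's example: reject impossible temperatures.
    assert temperature >= 0, "Colder than absolute zero!"
    return (temperature - 273) * 1.8 + 32

try:
    kelvin_to_fahrenheit(-5)
except AssertionError as error:
    # The assert's message arrives as the exception's argument.
    message = str(error)

print(message)
```

Because the AssertionError is trapped, the program continues running instead of terminating with a traceback.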
The else-block is a good place for code that does not need the try: block's protection. This example opens a file, writes content in the, file and comes out gracefully because there is no problem at all − #!/usr/bin/python try: fh = open("testfile", "w") fh.write("This is my test file for exception handling!!") except IOError: print "Error: can\'t find file or read data" else: print "Written content in the file successfully" fh.close() This produces the following result − Written content in the file successfully This example tries to open a file where you do not have write permission, so it raises an exception − #!/usr/bin/python try: fh = open("testfile", "r") fh.write("This is my test file for exception handling!!") except IOError: print "Error: can\'t find file or read data" else: print "Written content in the file successfully" This produces the following result − Error: can't find file or read data You can also use the except statement with no exceptions defined as follows − try: You do your operations here; ...................... except: If there is any exception, then execute this block. ...................... else: If there is no exception then execute this block. This kind of a try-except statement catches all the exceptions that occur. Using this kind of try-except statement is not considered a good programming practice though, because it catches all exceptions but does not make the programmer identify the root cause of the problem that may occur. You can also use the same except statement to handle multiple exceptions as follows − try: You do your operations here; ...................... except(Exception1[, Exception2[,...ExceptionN]]]): If there is any exception from the given exception list, then execute this block. ...................... else: If there is no exception then execute this block. You can use a finally: block along with a try: block. 
The finally block is a place to put any code that must execute, whether the try-block raised an exception or not. The syntax of the try-finally statement is this − try: You do your operations here; ...................... Due to any exception, this may be skipped. finally: This would always be executed. ...................... You cannot use else clause as well along with a finally clause. #!/usr/bin/python try: fh = open("testfile", "w") fh.write("This is my test file for exception handling!!") finally: print "Error: can\'t find file or read data" If you do not have permission to open the file in writing mode, then this will produce the following result − Error: can't find file or read data Same example can be written more cleanly as follows − #!/usr/bin/python try: fh = open("testfile", "w") try: fh.write("This is my test file for exception handling!!") finally: print "Going to close the file" fh.close() except IOError: print "Error: can\'t find file or read data" When an exception is thrown in the try block, the execution immediately passes to the finally block. After all the statements in the finally block are executed, the exception is raised again and is handled in the except statements if present in the next higher layer of the try-except statement. An exception can have an argument, which is a value that gives additional information about the problem. The contents of the argument vary by exception. You capture an exception's argument by supplying a variable in the except clause as follows − try: You do your operations here; ...................... except ExceptionType, Argument: You can print value of Argument here... If you write the code to handle a single exception, you can have a variable follow the name of the exception in the except statement. If you are trapping multiple exceptions, you can have a variable follow the tuple of the exception. This variable receives the value of the exception mostly containing the cause of the exception. 
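Before moving on to finally, here is a runnable sketch of the multiple-exception form shown above, where a single except clause traps a tuple of exception types (Python 3 syntax; the tuple works the same way in Python 2):

```python
def safe_divide(a, b):
    try:
        return a / b
    except (TypeError, ZeroDivisionError):
        # Either a bad operand type or division by zero lands here.
        return None

result_ok = safe_divide(10, 4)    # no exception, the try block's value is returned
result_bad = safe_divide(10, 0)   # ZeroDivisionError is trapped, None is returned
```

Any exception type not listed in the tuple would still propagate out of the function as usual.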
The variable can receive a single value or multiple values in the form of a tuple. This tuple usually contains the error string, the error number, and an error location. Following is an example for a single exception − #!/usr/bin/python # Define a function here. def temp_convert(var): try: return int(var) except ValueError, Argument: print "The argument does not contain numbers\n", Argument # Call above function here. temp_convert("xyz"); This produces the following result − The argument does not contain numbers invalid literal for int() with base 10: 'xyz' You can raise exceptions in several ways by using the raise statement. The general syntax for the raise statement is as follows. raise [Exception [, args [, traceback]]] Here, Exception is the type of exception (for example, NameError) and argument is a value for the exception argument. The argument is optional; if not supplied, the exception argument is None. The final argument, traceback, is also optional (and rarely used in practice), and if present, is the traceback object used for the exception. An exception can be a string, a class or an object. Most of the exceptions that the Python core raises are classes, with an argument that is an instance of the class. Defining new exceptions is quite easy and can be done as follows − def functionName( level ): if level < 1: raise "Invalid level!", level # The code below to this would not be executed # if we raise the exception Note: In order to catch an exception, an "except" clause must refer to the same exception thrown either class object or simple string. For example, to capture above exception, we must write the except clause as follows − try: Business Logic here... except "Invalid level!": Exception handling here... else: Rest of the code here... Python also allows you to create your own exceptions by deriving classes from the standard built-in exceptions. Here is an example related to RuntimeError. 
Here, a class is created that is subclassed from RuntimeError. This is useful when you need to display more specific information when an exception is caught. In the try block, the user-defined exception is raised and caught in the except block. The variable e is used to create an instance of the class Networkerror. class Networkerror(RuntimeError): def __init__(self, arg): self.args = arg So once you defined above class, you can raise the exception as follows − try: raise Networkerror("Bad hostname") except Networkerror,e: print e.args 187 Lectures 17.5 hours 55 Lectures 8 hours 136 Lectures 11 hours 75 Lectures 13 hours Eduonix Learning Solutions 70 Lectures 8.5 hours 63 Lectures 6 hours
https://www.tutorialspoint.com/python/python_exceptions.htm
FYI, I've just merged a few changes from coreutils. The only one that may be problematic is to xmalloc.c:

	Merge in changes from the coreutils.

	* mountlist.c: #undef MNT_IGNORE before defining it, to avoid
	warning on FreeBSD.
	* makepath.c (make_path): Restore umask *before* creating the
	final component.
	(make_path): Minor reformatting.
	* xmalloc.c: Adjust to work with new autoconf macros,
	AC_FUNC_MALLOC and AC_FUNC_REALLOC: test #ifndef
	HAVE_MALLOC/HAVE_REALLOC.
	* mountlist.h (ME_DUMMY): Don't count entries of type `auto' as
	dummy ones.  At least on GNU/Linux systems, `auto' means
	something else.  From Michael Stone.

-----------------

The xmalloc.c change reflects a change in autoconf. If you're using a version of autoconf prior to 2.54, you'll get a compile-time failure from that code. I didn't think about that dependency before checking it in. But to be honest, I think we shouldn't be catering too much to older versions of tools like autoconf. If it causes a problem for anyone, please let me know.

Any suggestions on how to record such dependencies? Ideally, we'd be able to say that xmalloc.c requires autoconf-2.54 or newer, and a tool would find the max version of all *.[ch] files used by a given package and AC_REQUIRE that.
FYI, here's the diff from coreutils:

Index: xmalloc.c
===================================================================
RCS file: /fetish/cu/lib/xmalloc.c,v
retrieving revision 1.20
retrieving revision 1.21
diff -u -p -ICopyright -r1.20 -r1.21
--- xmalloc.c	26 Jan 2001 11:13:28 -0000	1.20
+++ xmalloc.c	20 Jul 2002 07:07:48 -0000	1.21
@@ -46,12 +46,12 @@ void free ();
 # define EXIT_FAILURE 1
 #endif

-#ifndef HAVE_DONE_WORKING_MALLOC_CHECK
-"you must run the autoconf test for a properly working malloc -- see malloc.m4"
+#ifndef HAVE_MALLOC
+"you must run the autoconf test for a properly working malloc"
 #endif

-#ifndef HAVE_DONE_WORKING_REALLOC_CHECK
-"you must run the autoconf test for a properly working realloc -- see realloc.m4"
+#ifndef HAVE_REALLOC
+"you must run the autoconf test for a properly working realloc"
 #endif

 /* Exit value when the requested amount of memory is not available.
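On the question of recording the dependency: autoconf itself already offers AC_PREREQ, which aborts if configure.ac is processed by an older autoconf. It only expresses the requirement per package rather than per source file as suggested above, but a sketch of a configure.ac fragment for this case (the version number is the one discussed in this thread) would be:

```m4
# configure.ac fragment (sketch): fail early if autoconf < 2.54,
# which is what the AC_FUNC_MALLOC / AC_FUNC_REALLOC tests used by
# xmalloc.c require.
AC_PREREQ(2.54)
AC_FUNC_MALLOC
AC_FUNC_REALLOC
```

A per-file tool could in principle scan the *.[ch] files for such annotations and emit the maximum AC_PREREQ, but no such tool is assumed to exist here.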
http://lists.gnu.org/archive/html/bug-gnulib/2002-11/msg00032.html
#include "angband.h"
#include "ui-keymap.h"
#include "ui-term.h"

Keymap.

Add a keymap to the mappings table. Given a keymap mode, a trigger, and an action, store it in the keymap list.
References keymap::actions, keymap::key, keymap_make(), KEYMAP_MODE_MAX, keymap_remove(), mem_zalloc(), keymap::next, and keymap::user.
Referenced by parse_prefs_keymap_input(), and ui_keymap_create().

Append active keymaps to a given file. Save keymaps to the specified file.
References keymap::actions, buf, FALSE, file_putf(), keymap::key, KEYMAP_MODE_ORIG, KEYMAP_MODE_ROGUE, keypress_to_text(), keymap::next, OPT, TRUE, and keymap::user.
Referenced by ui_keymap_pref_append().

Find a keymap, given a keypress. Given a keymap mode and a keypress, return any attached action.
References keymap::actions, keypress::code, keymap::key, KEYMAP_MODE_MAX, keypress::mods, and keymap::next.
Referenced by target_dir_allow(), textui_get_command(), and ui_keymap_query().

Forget and free all keymaps. Free all keymaps.
References keymap::actions, i, mem_free(), N_ELEMENTS, and keymap::next.
Referenced by textui_cleanup().

Duplicate a given keypress string and return the duplicate.
References EVT_NONE, mem_zalloc(), and type.
Referenced by keymap_add().

Remove a keymap. Given a keypress, remove any keymap that would trigger on that key. Return TRUE if one was removed.
References keymap::actions, keypress::code, FALSE, keymap::key, KEYMAP_MODE_MAX, mem_free(), keypress::mods, keymap::next, prev, and TRUE.
Referenced by keymap_add(), and ui_keymap_remove().

List of keymaps.
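The functions documented above all operate on a singly linked list of trigger/action pairs. Here is a minimal self-contained sketch of that pattern — hypothetical, simplified types and names, not the angband implementation, which additionally tracks keymap modes, modifier bits, and user-versus-pref origin:

```c
#include <stdlib.h>
#include <string.h>

/* One keymap entry: a trigger key mapped to an action string. */
struct keymap {
    int key;               /* trigger keypress, simplified to an int */
    char *actions;         /* action text to replay when triggered */
    struct keymap *next;
};

static struct keymap *keymaps = NULL;

/* Remove any keymap triggered by `key`; return 1 if one was removed. */
int keymap_remove(int key)
{
    struct keymap **link;
    for (link = &keymaps; *link; link = &(*link)->next) {
        if ((*link)->key == key) {
            struct keymap *dead = *link;
            *link = dead->next;   /* unlink from the list */
            free(dead->actions);
            free(dead);
            return 1;
        }
    }
    return 0;
}

/* Add a keymap, replacing any existing mapping for the same trigger. */
void keymap_add(int key, const char *actions)
{
    struct keymap *k;
    keymap_remove(key);           /* mirror the remove-before-add behavior */
    k = malloc(sizeof *k);
    k->key = key;
    k->actions = malloc(strlen(actions) + 1);
    strcpy(k->actions, actions);
    k->next = keymaps;            /* push onto the front of the list */
    keymaps = k;
}

/* Return the action attached to `key`, or NULL if none. */
const char *keymap_find(int key)
{
    const struct keymap *k;
    for (k = keymaps; k; k = k->next)
        if (k->key == key)
            return k->actions;
    return NULL;
}
```

The remove-before-add step is what makes re-binding a trigger replace the old mapping instead of shadowing it, matching the add/remove interplay visible in the cross-references above.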
http://buildbot.rephial.org/builds/restruct/doc/ui-keymap_8c.html
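The operations documented above (add a mapping, look one up by keypress, remove by trigger) can be sketched as a minimal singly linked list. This is an illustrative sketch with simplified stand-in types; it is not Angband's actual keypress/keymap structs, allocator, or mode-indexed keymap tables:

```c
#include <stdlib.h>

/* Simplified stand-ins for Angband's keypress/keymap structs. */
struct kp { unsigned char code; unsigned char mods; };

struct km {
    struct kp key;          /* trigger keypress */
    const char *actions;    /* attached action string */
    struct km *next;
};

static struct km *keymaps = NULL;   /* list of keymaps */

/* Like keymap_add: unlink any mapping on the same trigger,
   then prepend the new one. */
static void km_add(struct kp key, const char *actions)
{
    struct km **p = &keymaps;
    while (*p) {
        if ((*p)->key.code == key.code && (*p)->key.mods == key.mods) {
            struct km *dead = *p;   /* same trigger: remove old node */
            *p = dead->next;
            free(dead);
        } else {
            p = &(*p)->next;
        }
    }
    struct km *k = calloc(1, sizeof *k);
    k->key = key;
    k->actions = actions;
    k->next = keymaps;
    keymaps = k;
}

/* Like the documented lookup: return the action attached to a
   keypress, or NULL if no keymap triggers on it. */
static const char *km_find(struct kp key)
{
    for (struct km *k = keymaps; k; k = k->next)
        if (k->key.code == key.code && k->key.mods == key.mods)
            return k->actions;
    return NULL;
}
```

Note how the add path first unlinks any mapping with the same trigger, mirroring keymap_add()'s documented call to keymap_remove() before storing the new entry.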
JOIN MELBOURNE CITY NEWS ON FACEBOOK! FREE: PLEASE DO NOT LITTER, PLEASE RECYCLE.

MCN Melbourne City Newspaper
JUNE 2011 • VOL 2, ISSUE 4

Festival and Fashion giveaways: six passes to Melbourne International Animation Festival (page 17); six passes to Music on Film Festival (page 6); four passes to the International Magic Show (page 6); 30% off hand-made apparel from Do-It Baby (page 18).

Step right up! There's no better place to be this winter than huddled around a fire, sharing stories, cups of hot chocolate and tuning in to the vibrant sounds of the city. With a wide range of news and entertainment on offer you certainly won't be left out in the cold!

Photo: Fred Kroh

What matters? ... Tax matters.

LOCAL NEWS

New charity keen to make a difference
By Renée Purdie
Photo: Laura McNulty

MCN Melbourne City Newspaper. APPROX: 65,000 COPIES MONTHLY. Results of CAB Audit April-September 2010.
Editor-in-Chief: Paul McLane. Marketing & Media Manager: Dione Joseph. Designer: Matt Hocking. Marketing: Pummi Sooden, Kasia Todisco, Clarice Lau. Photographer: AP Guru. Production Manager: Lisa Stathakis. Publisher: Paras Australia Pty Ltd. Distributor: Arrow Distribution and Private Distribution.
CONTACT Toll free: 1300 80 40 33. Postal Address: PO Box 582 Collins St West, VIC 8007. Address: 416-420 Basement Collins St, Melbourne CBD 3000. Next Issue on: 20 June.

Najib Warsame, 18, cuts out unusual shapes for the stop motion animation activity that day.

Youth inspire at Urban Mesh
By Laura McNulty

The third Urban Mesh Workhouse is an inspiring government-funded initiative that places the focus directly on Melbourne's young and budding creatives. The outer appearance of the venue is somewhat akin to a "lone ranger": a building with purpose, yet different. Nestled at the end of Banana Alley, it's odd, yet remarkable.
An artist's messy workbenches as well as state-of-the-art film/media technology are all on offer, allowing young kids to gain first-hand experience with industry-based artists and emerging media. Part of Melbourne's funky youth-based art hub, known as Signal, the centre welcomes all youth passionate about the arts between the ages of 13 and 20 to come along and be immersed in something special. Regular three-day hands-on workshops allow young people to work alongside Melbourne's freshest and most passionate industry-based artists. Some of Urban Mesh's working artists include the talented Ben Cittadini (Performance), Isobel Knowles (Animation), Marion Singer (Film) and Eugene Ball (Jazz/Music). Downstairs, local animation artist Isobel works closely with the students on a stop-motion animation project. Upstairs, Eugene plays around with the tech-savvy program Logic and the kids learn to record and edit various sounds. The opportunities are endless, and young people are offered a chance to experiment with a wide range of media. Ben Cittadini, an industry-based artist, believes the workshop is crucial for the development of ideas and conversation between artists and young people. "It's more than just a place to come and make stuff; it's a place where young people can listen and discuss their ideas freely." While previous classes have remained relatively small, Signal's Supervisor, Christine Grant, believes the art space has room to grow and extends its arms to all youth with a passion for the arts. "A place like this needs young people to grow.
We would love young people to come along to our 'Signal's Curator' meetings to collaborate and discuss ideas about what young kids really want in these artist-based workshops." Youth are encouraged to think alternatively and express themselves through different art forms and textiles whilst keeping in mind the "Urban Forest Strategy" theme. Christine Grant says that you don't have to come here with a crazy artistic talent: "you just have to want to be involved". Najib Warsame, 18, found out about Signal's workshop through "Youth Hub" and was interested in progressing his drawing and art skills: "It's good just to be involved in something; teens have a lot of time on their hands but they often don't know where to start." Today's art and media industry places too much pressure on youth to replicate and produce work to a certain societal standard, says Cittadini, and Signal can offer a space that is different: "Kids can become involved in work without actually having the industry pressure to 'produce' something." Signal's creative art space welcomes new, young and emerging artists to come along and be involved in this creative arts initiative. The next "Urban Mesh" will be held July 1-3, so make sure you jump online at signal/pages/signal.aspx and become involved in this fantastic opportunity to express your inner creative.

Despite improvements in Aboriginal and Torres Strait Islander education over recent years, a large gap remains between Indigenous and non-Indigenous populations, particularly in the realm of higher education. According to the 2008 Australian Bureau of Statistics survey, Education and Indigenous Wellbeing (4102.0, Australian Social Trends, Mar 2011), non-Indigenous adults were over four times as likely to have attained a Bachelor degree or higher (24% compared with 5%). To tackle this problem, Yolley Thomas-Kalos has started the organisation Feeding Young Minds.
Yolley's vision for Feeding Young Minds is to be a vessel that will shape and add meaning to young people's lives. Yolley shared that "to feed one's mind is to nourish one's intellect by giving all the tools necessary to grow and think in such a manner that is positive, productive and beneficial to oneself and to society at large. I'd really like to make a difference in the lives of those who are truly disadvantaged, whereby they become productive and contributing members of society and pass it forward to others as mentors helping the next generation." Yolley was born in Brooklyn, New York, to Haitian parents. She lived in Haiti between the ages of three and 13 before visiting the United States on holiday. She was not allowed to return to her country for political reasons: a coup d'état in which President Jean-Claude Duvalier was ousted from the country. She was left with no outward relics of her past, no family pictures and no favourite teddy bear, but she had already been imparted with her father's legacy: the importance of education and also a deep awareness of what poverty really means. Yolley credits her passion for education to her father. "He told me education would be the key to my future. So, he insisted that I pick a substantial career so that he could rest easy when he's gone. He was right. Without an education, I wouldn't accomplish anything in my life. So, I want to pass on my father's remedy to others." In Haiti, Yolley saw great poverty and "talented, smart kids not having the opportunity to further themselves because they have no means to do so … no money, no food, no water, no shelter, no electricity, no clothing". Having seen the ramifications, Yolley is committed to ensuring that children in Australia have the opportunities that so many of us take for granted. Once funding for Feeding Young Minds is obtained, the idea is to provide grants between $2,500 and $5,000.
The program will be available to students who are referred by their peers or teachers, and there will be a mentorship component available. There will be a massive campaign to promote the launch of Feeding Young Minds in the next couple of months. Email Yolley on ybdforyou@gmail.com and she will add you to the mailing list. In the meantime, donations can be mailed to: Feeding Young Minds, PO Box 2327, Oakleigh, VIC 3166.

Feeding Young Minds founder, Yolley Thomas-Kalos

NEWS IN BRIEF

Campfire program sheds new light
By Dione Joseph
Photos: Fed Square (Indigenous artist performs traditional dance; The home and the hearth; Vicki Couzens storytelling)

For community and traditional owners, Couzens explains the need for such recognition: "It is the Aboriginal law of the land to welcome – it is a very important tradition and embedded in our community. You could not travel from here to Geelong without getting permission."

Victorian couple say you're "Not Alone"

Not Alone, established by Victorian couple Peter MacDiarmid and Sheila MacDiarmid, follows the disappearance and suspected homicide of their daughter Sarah MacDiarmid on Wednesday July 11, 1990, from Kananook railway station near Frankston, Victoria. The unresolved case has left the couple, like many others whose missing loved ones are suspected victims of homicide, in limbo. However, that has not stopped the MacDiarmids from establishing an online platform to help curb an issue yet to be addressed in Australia. "We've been told there are many websites for those known to be murdered but very few attempting to focus on suspected homicide cases of missing persons. Not knowing one thing or another is not easy," Peter MacDiarmid said. It is only in hindsight that the couple realised the kind of help they would have benefited from, possibly preventing some of the problems they encountered.
The MacDiarmids regret several of their decisions in coping with their missing daughter’s suspected homicide. “We took many wrong steps. Very much personal things. We wanted to leave Frankston area. We moved to Queensland. We now realise to have been in Kilmore, where we are now, would have been the best option. I would have stayed in my old job which would have seen us better off financially and work wise,” he said. “There was lots of angst and there was no one to help give us any guidance. We appreciated the emotional support but sometimes practical assistance such as financial and real estate help can make the journey a lot easier.” Having gone through 20 odd years of uncertainty with their missing daughter’s suspected homicide case, they hope to share their personal experience of such a journey and make available information regarding referral to appropriate Government agencies and local support services across Australia through “Not Alone”. “Through ‘Not Alone’, we also hope to provide links to a wide range of professionals assisting with legal help, counseling and eventually making connections with missing persons units in various states. And probably, in the next couple of years, GPs will be involved,” MacDiarmid said. “It will be a healing site not just for Victoria. We’re starting off in Victoria but we hope to go national, completely national. We would like to think we’ll get some assistance from various state governments and be promoted by various homicide departments since that’s where it all sets off.” In fact, the comprehensive online platform does not neglect the impact on siblings of missing persons suspected of homicide. Alisdair MacDiarmid was just 21 and in his third year at Melbourne University when his sister Sarah went missing. “Like the rest of us, his sister’s disappearance has an ongoing effect on him. What’s happening with the sibling is often unknown. 
We were amazed when he finished off his Honours degree in Electrical Engineering, but even now he breaks out into random cold sweats while talking to people. There was one spell following his sister's disappearance where he went to the UK to meet his grandparents before they died," Peter MacDiarmid said.

Photo courtesy of Peter MacDiarmid: Sarah MacDiarmid, still missing
By Devi Rajaram

Through "Not Alone", Alisdair MacDiarmid, now married with a daughter, is hopeful he will be able to give some insight and help to other siblings in a similar position. Peter MacDiarmid said siblings interact at a different level compared with their parents. "We feel it'll be helpful to allow them to discuss their thoughts and feelings at that level. Alisdair will be able to share with them what worked and what did not work." Indeed, "Not Alone", currently run with pro bono help, functions to serve the greater good of the wider community. It is hoped the website will benefit from a trust account in the future. "We're looking forward to proper regulated footing soon for General Practitioner seminars and so on. But it will take time to build, it will take time," MacDiarmid said.

Local Debate: Climate change or human-made change?

Rising oceans. Melting icebergs. Hotter summers. Colder winters. Increasing temperatures. More extreme natural disasters. What is the world coming to? It's hard to imagine a subject more talked about. Politicians, campaigners, fundraisers, television, radio and every form of print and online media are discussing and debating the concepts of 'global warming' and 'climate change'. But what do these terms really mean, and is it truly more than just a big fat carbon tax? The opinions are endless and there are plenty of stats to support both sides. Melbourne City News spoke to Barrie Hunt, honorary researcher at CSIRO Marine and Atmospheric Research, to get his opinion.
Firstly, the confusion between "climate change" and "global warming": the fact that the two terms are related does not mean that they are identical. Overwhelming exposure to the two terms has gradually merged their true meanings, so that today they are often used interchangeably, but there is a difference. Global warming may encompass a general rise in surface temperatures due to the rising emissions of greenhouse gases such as carbon dioxide, but climate change looks at the globe as a whole: a longer-term effect and change to the world's climate, which will include phenomena such as the melting icecaps in the Arctic region, and which may therefore be the reason why we are currently experiencing harsh cold winters in Melbourne this year. Hunt further elucidated the difference: global warming is only one component of climate change. However, it is not the only thing that is going on right now. Besides global warming, there are also concerns with rising sea levels and melting glaciers. Climate change comprises all these factors and covers more precisely what is actively taking place in our environment that will have adverse effects on the planet we are living on.

Photo: Stock. A receding glacier in the Canadian Rockies.
By Clarice Lau

Hunt, well versed in the climate change debate, explains that even though many people seem to be aware of the increase in carbon dioxide and that such gases are no doubt harmful to the environment, they remain unconvinced of their role in causing climate change. This can be a result of them not fully understanding climatic systems, and therefore there is a lot of skepticism about the truth of climatic change. "Recent wet conditions in eastern Australia mainly reflect short-term climate variability and weather events, not longer-term climate change trends. Conclusions that climate is not changing are based on misunderstanding of the roles of climatic change caused by increasing greenhouse gases and climatic variability due to natural processes in the climatic system," says Hunt. Hunt also explained that increasing greenhouse gases and climatic variability work together to either exacerbate or moderate climate extremes. In other words: "purely judging based on individuals' perceptions of climate patterns does not indicate that climate change theories have been overthrown, and that the world is going to be fine after all." Despite scientific research showing evidence that the climate has indeed warmed in the last 100 years, and that the glaciers are melting, people are generally not convinced as they cannot see a significant change. The media does not help, often using fear tactics instead of offering informative analysis. The reality of the situation is that statistics are often used to prove just about anything, and this goes for both sides of the argument. Interested in the debate? Send your thoughts, questions and comments to the Editor and you could have them printed in the next edition of Melbourne City News. Keep up-to-date with all the latest local news and make sure you get your say. Email: comments@mc-news.com.au

LOCAL PROFILE

Dumbing down debate: the current political mess
By Mitchell Shepherd

Mainstream political journalism has reached a difficult juncture. Do the media exist to entertain or inform, sell or tell? Our constant craving for drama and sensationalism has now infiltrated political reporting to the point where politicians act in a defensive, almost robotic manner, while journalists wait to pounce on the slightest of errors or slips of the tongue.
Former Labor Federal MP Lindsay Tanner discusses in his latest book, Sideshow: Dumbing Down Democracy, how politics and mainstream media now find themselves in an awkward relationship in which games, stunts and drama are favoured over serious political content. "Mainstream media have become heavily focused on manipulating and modifying political content to make it more appealing," says Tanner. "If people want comedy, they'll go and see a comedian. Political issues shouldn't be treated like a game." However Tanner – who retired before the 2010 federal election after holding the seat of Melbourne for 17 years – does acknowledge the current struggle the media find themselves in. They are constantly being torn in two directions, and this creates an environment in which editorial responsibility and commercial pressure are engaged in a relentless tug-of-war. "I don't blame the media for responding to commercial pressure and it isn't a problem solely of the media – it's a problem of the whole community. Media is a central component of democracy and plays a fundamental role, however it is also driven by profit – this creates an unusual tension," says Tanner. Throughout the book Tanner is critical of the "infotainment" style and emotional stirring of the media. He also draws attention to the ever-increasing celebrity fascination gripping the general public and supplying incentive to the mainstream media to over-indulge in sex and scandal. Tanner's point about our celebrity culture isn't any great revelation, although it is interesting in regards to the fuss surrounding recent carbon tax advertisements starring Cate Blanchett and Michael Caton. Is there a better way to capture people's attention and whip the media into a frenzy?

Lindsay Tanner's new book, Sideshow: Dumbing Down Democracy
Tanner not only condemns the media's dramatic approach to political news, but also the politicians who fuel the circus with ridiculous stunts that draw attention and media coverage. "My biggest complaint is that the more effort politicians drive into appearing on programs like Kerri-Anne, or Are You Smarter than a 5th Grader, the less time they spend dealing with the problems that people face in this country," explains Tanner. "Let's focus less on dancing around in funny hats on television and spend more time on the serious issues." It's quite easy to point the finger and blame others for the current fracas between the media and our political leaders, but it is more complex to allocate responsibility for overcoming this dysfunctional reality. And finding a more adequate working relationship is vital. As Tanner writes, "academic research indicates that the primary influence of the media is not in telling people what to think, but in telling them what to think about." Although only a minority of the population hold a strong interest in politics, a large proportion are slightly interested. Communicating political news to the array of people, personalities and preferences is where the challenge lies. "I do believe both politicians and the media need to find more effective ways of communicating with the public. I think some brave political soul will eventually try new strategies to engage the public and will reap the rewards. Although the political risks involved make it unlikely in this current climate," says Tanner. Tanner outlines a previous conversation with the Nine Network's head of news and current affairs in which he was told that when a politician graces the television screen "a hundred thousand viewers change the channel." This could easily be attributed to the typical boring and uninspiring politician we've come to expect, but perhaps it points to a stale and out-dated presentation of political content by the media. "The mainstream media – as well as politicians – should definitely consider alternative methods of political communication," explains Tanner. "Though in this current atmosphere of 'gotcha' style reporting we are seeing politicians become very defensive and protective, which ultimately leads to a boring product." Interest in politics is low. Determining whether that's because of android politicians running from ravenous journalists, or the media's skewed and lacklustre presentation of serious political issues, isn't the problem. Locating forward-thinking politicians and a healthy media certainly is. Tanner's book Sideshow: Dumbing Down Democracy is available now in bookstores.

Author Lindsay Tanner

MCN EVENTS: Events Calendar

For Kids

The Light in Winter
Various times, Thu June 2 to Sun July 3, Federation Square
This free event, perfect for the kids, is Federation Square's annual celebration of light and entertainment, happening at various times from June 2 to July 3. This year the symbolism of fire shines bright in the program, with various local and international artists coming together to highlight a series of light-based artworks and community events for the public. Directed by Robyn Archer, this year's Light in Winter will feature themes of mythology, ritual and the art of ceremony.

Puppets at Fed Square
July 4 to 10, 10am–3:30pm, Federation Square
Puppets at Fed Square is the perfect place to take the kids these school holidays. The program features a free week-long puppet program showcasing the world premiere of giant puppets, workshops and films. Renowned performance company Polyglot Theatre will be present, actively engaging with kids on an imaginary journey into the world of puppets. Snuff Puppets' Human Body Parts will also be in attendance, including an ensemble of enormous life-like body parts which will be floating around Fed Square during the day's activities.

Music on Film Festival Screening: Disney's Fantasia
Sat July 9, Palais Theatre. Adults $20, Kids $10
Fantasia is a major achievement of cinema and one of the best examples of music on film ever. This 1940 Walt Disney masterpiece has recently been restored, and whether watching it for the first time or revisiting it, the beauty and mastery of its animations, celebrating some of the greatest music ever written, is a joy to behold. Screened, as it was intended, with a twenty-minute interval, so bring your friends, your family, and any kids you know and treat it as the special event it is.
It has already grown into the largest festival of its kind in the Southern Hemisphere. This year’s highlights include internationally renowned illusionists Ellis & Webster and the world’s greatest close up magician: the astonishing Boris Wild, direct from Paris. Other events include classes in sleight of hands for adults, magic for school kids, and the famous trick or treat magic shop on location for the entire festival. Ten great shows especially designed to appeal to kids under the age of ten make it the perfect school holiday treat! 6pm, July 1 to 10 Melbourne Convention and Exhibition Centre Visit We are offering our readers the chance to win one of four passes to the Melbourne Magic Festival. To enter, simply email: win@mc-news.com.au with MMF in the subject line. Please include your full name address. Disney on Ice presents Worlds of Fantasy July 6 to 11 Rod Laver Arena $24.50 – $54.50 Take the kids to go see their favourite Disney characters on ice! Worlds of Fantasy has Lightning McQueen from Cars racing across the ice, under the sea characters from the Little Mermaid and animals from The Lion King. It’s also the premiere of Tinker Bell and all her fairy friends from Pixie Hollow on ice. From wheels to waves, Pride Lands to pixie dust, your family favourite Disney moments come to life with dazzling skating and special effects certain to create some warm winter memories. The Australian International Motor Show is the recognised industry showcase event through which vehicle manufacturers choose to display the latest innovations and advances in the industry. This includes safety, environmental and vehicle purpose-built designs for Australian consumers and motoring enthusiasts alike. A major feature of the Australian International Motor Show will be more than 40 new models that will be revealed for the first time in Australia. 
Music on Film Festival
July 6 to July 10, Palais Theatre. Single ticket: $20, Festival pass: $160, Scorsese Sunday pass: $60
Go and support the first Music on Film Festival, which organisers hope will become an annual celebration of music in filmmaking. The Music on Film Festival presents the greatest music performed on film in Melbourne's iconic Palais Theatre, from Sigur Ros to The Who and many more in between. A whole day has also been set aside to celebrate the great Martin Scorsese, with four films to take you on a music and film-making journey that stretches from Africa to New York, from 1945 to today. If you love music, film and the Palais Theatre then this event is not to be missed.
We are offering our readers the chance to win one of six passes to the Music on Film Festival. To enter, simply email: win@mc-news.com.au with MOFF in the subject line. Please include your full name and address.

General interest

Run Melbourne Fun Run
July 16 to 17, Federation Square
Run Melbourne Fun Run is the community fitness event for everyone. With the right training, anyone, at any age and any fitness level, can participate. Participants can choose from a half-marathon, a 10km run and a 5km run or walk, starting and finishing at Federation Square. There is also a 3km Kids Run on Saturday July 16. It's also an opportunity to raise money for the charity you care about most. So get training for a good cause; see you at the start line.

Live performance: David Choi

Sri Chinmoy Como Landing Half Marathon
Sun July 3, route via South Yarra and Richmond. Information: Melbourne@srichinmoyraces.org, (03) 98534731
The half marathon will begin at the Como Landing, corner Williams Road and Alexandra Avenue, South Yarra. The event is conducted on a 7km circuit along the picturesque Yarra River. The 7km and 14km races complete one and two laps of the course respectively.
The half marathon runners will complete three laps of the course, with a short extension at the start. Competitors will have two hours and 45 minutes to complete the course. All competitors will be given a full course briefing at 7:55am on race day.

The Australian Ballet presents Elegy
June 9 to 18, The Arts Centre, State Theatre. $28 – $128
The Australian Ballet's Resident Choreographer, Stephen Baynes, draws inspiration from musical masterpieces in Requiem and Beyond Bach in this unforgettable double bill. Exploring the difficult concept of life after death through dance, Elegy is a must-see for those that appreciate the fine art of ballet. See our "On Stage" page for an interview with the talented principal dancer Amber Scott.

David Choi: Fri July 1, Melbourne City Conference Centre. $39 – $79

King of Bangor
Bella Union, Trades Hall, June 29 to July 9 2011, 1.00pm and 8.00pm
King of Bangor is a new one-act play by renowned Australian playwright Lee Gambin: a spine-chilling glimpse into the world of the horror genre's most prolific writer, as well as one of its most reclusive. This is a venture into the world of Stephen King. From the lonely cry of the outsider in Carrie, to the complexities of obsession and ownership in Christine, to domestic unrest in The Shining, King of Bangor offers its audience insight into the dilemmas one must face when creativity proves dangerous.

Coda/NICA
June 22 to July 1, NICA's National Circus Centre in Prahran. Tickets are now on sale.
Coda will have a strictly limited season at NICA in Prahran and it is essential tickets are pre-purchased before the day. Coda is NICA's latest offering of stunning contemporary circus, which features a cast of 23 second-year students mid-way through their Bachelor of Circus Arts Degree.
Directed and choreographed by renowned actor, teacher and director, Megan Jones, who has been the head of performance studies at NICA since 2008 and has directed various other NICA works, including the sold-out season of Ariel’s Dream, Circus Showcase 2009 and Circus Showcase 2010. Talented young circus artists offer a stunning contemporary interpretation of memory, reality and fantasy whilst performing high energy dance and spectacular circus feats. Singer David Choi from Los Angeles has risen to YouTube fame with over 90,800,000 total video views, making him the 6th most subscribed musician on YouTube. This is your opportunity to see him live and in concert in Melbourne. Dedicated fans might even get to have their moment on stage with David Choi himself. All you need to do is submit a video dancing the David Choi dance and the winner will be able to perform on stage with Choi himself! More details and ticket information can be found at: 6th Melbourne International Chamber Music Competition July 9 to 17 Tickets start from $30 Visit Sixteen international music ensembles will be arriving in Melbourne to compete for one of the world’s most prestigious Chamber Music Competitions. Piano Trios and String Quartets from the United States, Europe, Australia and the United Kingdom will be giving world-class performances with finals being held on July 16 and 17 at the Melbourne Recital Centre. The entire competition will be broadcast live on ABC Classic FM and in addition will offer significant cash awards for prize winners. Boom Crash Opera Sat July 9, Palms at Crown Doors open 7:15. Adult $46.50 Visit Boom Crash Opera would like to invite you to join them for a night to remember and help celebrate 25 years of Boom Crash Opera history by taking a look back at their remarkable career. The band openly acknowledges their influences especially with former front man Sean Kelly whom they have attempted to clone many times. 
A night not to be missed, Boom Crash Opera showcases 25 years of the band’s music with frontman Sean Kelly performing live.

RED CARPET MCN JUNE 2011 • VOL 2, ISSUE 4

Cirque Du Soleil’s Saltimbanco Opening
The stars were out to experience Cirque Du Soleil’s Saltimbanco, which first premiered in 1992 and continues to captivate audiences around the world. Guests included Melbourne’s TV, radio and music personalities, all out to see the world-renowned acrobatic and production skills of Cirque Du Soleil. The creative energy and expertise displayed in the choreography, costumes, lighting and musical score (just to mention a few) had jaws dropping around Rod Laver Arena.
Sophie Harley getting into the spirit
Christine Ahern and Martine Alpins

Big Fair Trade Morning Break
On Friday May 13, as part of Fairtrade Fortnight, Moral Fairground’s Big Fair Trade Morning Break brightened up the Docklands Harbour Esplanade with a truly memorable morning tea. Morning commuters were served by Global Café Direct’s fair-trade coffee experts and entertained by Mexican dancers as part of the cultural showcase. All in an effort to educate people about ethical trade and produce and inspire them to make a stand towards a fair and sustainable future.
Photos courtesy of Coffex

St Kilda Film Festival Opening Night Gala
The St Kilda Film Festival was launched in style on Tuesday May 24, with filmmakers and actors from the Top 100 films in attendance to watch the screening of Natasha Gadd and Rhys Graham’s Old Fitzroy at the glorious Palais Theatre. Shane Jacobson hosted the night and had the crowd in stitches. Most people partied on late into the night at the St Kilda Town Hall.
Photos by Daniel Gregoric
Conrad Taylor and Jo Hall
Cameron Barnet and Nadine Garner
Adam Elliot hams it up for the cameras
Melbourne’s very own Cairo Club Orchestra
Melbourne International Jazz Festival Opening
This free open-air concert at Federation Square kicked off the Melbourne International Jazz Festival and got everyone up and dancing. The Cairo Club Orchestra serenaded in the evening’s headline artists: Chiri, featuring internationally renowned Korean pansori singer Bae Il Dong, and legendary “tone scientists” the Sun Ra Arkestra. Thousands of kids and families were treated to a journey through jazz full of games, laughter, and one very big riff.
Sun Ra Arkestra bandleader Marshall Allen playing an EVI (Electronic Valve Instrument)
Knoel Scott performing with the Sun Ra Arkestra
Photos by Nick Pitsas
Gyton Grantley and Alexandra Schepisi
Reg Gorman finds a familiar face
Photos by Jim Lee

OUT & ABOUT

Fast Ed cooks up a storm
By Renée Purdie
Well-known Aussie chef Ed Halmagyi is in Melbourne this week and provides us with a plethora of insight into his style, technique and personality. Ed, formerly referred to as “Fast Ed”, is best known for his commercial TV work on Better Homes & Gardens as well as making special guest appearances on the radio. Ed’s passion for food remains simple: keep time to a minimum and keep the flavour turned up. Ed’s most recent book work includes Nove Cicina, Dinner in 10 and An Hour’s the Limit, featuring a diverse and delicious range of culinary dinner ideas. If you just can’t get enough of Ed, he also runs his own food-inspired magazine, Better Basics, which showcases more fruitful recipes for food lovers all over.

What is your favourite meal (both to eat and cook)?
To cook? Definitely bread. You see, anyone can take a piece of great beef and make a meal. But to combine flour, water, salt and yeast (four ingredients that are not especially valuable on their own) and still create art is a measure of the chef. I live that challenge. To eat? Pho, Vietnamese noodle soup. Fresh, light, noodley and delicious.

I’ve always wanted to create a cookbook.
What are the best and worst things about the process?
Creating a book takes so much longer than any reader ever understands. At least a year and a half. You have to find an idea that is relevant and sufficiently different to be interesting. Then you need to make sure that every recipe is perfect—a testing process that takes months and costs tens of thousands of dollars. But in the end you get to make something beautiful, and hopefully have a meaningful impact on the lives of others. That’s what motivates me.

If you weren’t a chef—and please never take that option—what would you be doing?
If I wasn’t cooking, I’d be writing. I’m sure I’m not alone in having an inner author buried deep within, but mine keeps trying to burst through to the surface. Watch out, you never know what may be in bookstores next year!!!

If you could dine with anyone in the world (past or present), who would it be and where would you go?
My late grandmother, my wife, my best friend Shaun (who lives in London) and Antonin Careme, the 19th century master pastry chef. I’d probably eat at home and ask Careme to make dessert!

What’s the best advice on cooking you can give to budding chefs and also people who simply want to create healthy, flavourful meals for their family?
Let the ingredients do the talking. When you do less, the ingredients can do more. I’ve seen more meals ruined by overexertion than by sloth. A great tomato is perfect as it is, so don’t try to change that. Cookery is the art of teasing out those natural tastes.

I read in a MasterFoods interview that ground nutmeg is your all-time favourite spice and that thyme is your favourite herb. Can you please share some more tips about spices and herbs that’ll enliven old favourites?
Use spices carefully and without going overboard. They are concentrated pockets of flavour that can easily overpower. The idea is restraint.
I love nutmeg for its layering and versatility— sweet or savoury—but remember that when it comes to herbs and spices they’re all so flexible. Also, dried herbs are neither better nor worse than fresh, just different, so use them in their own ways.

Who or what inspires you both in the kitchen and out?
I’m inspired by those who do things I can’t. Whether it’s talented chefs creating dishes I want to try, Lewis Roberts-Thomson kicking goals for the Swans, or the amazing nurses at the children’s hospice Bear Cottage in Sydney who provide amazing respite and palliative care to some very sick children. What they do is both inspiring and humbling.

Since I have a sweet tooth (a few of them actually), I was especially excited to discover your extensive experience as a pastry chef. What’s the secret to making delicious scones?
A light hand is the most important thing. Over-mixing toughens the flour, making them less light. Also, replace 10% of the flour with cornflour for an even more delicate scone.

You’re a celebrity chef, an author, a husband, a father and a philanthropist through your work with Anglicare and Community Greening … What’s next?
It’s not always about starting something new; sometimes it’s about getting better at the things we already do. I want to do better in raising money for charity; I want to make better books, better mags, better TV. But most of all I want to find more time for my kids—they’re the best thing that ever happened to me and I want to make sure I don’t miss out on them just because life is hectic.

And to top it all off, what’s something we don’t know about you?
I collect specimen beetles.

Fast Ed and a fan

Steampunk rises to fame
Circus Oz is doing a technical run of their new show’s grand finale. As ringmistress Sarah Ward sings the final song, she is to be hoisted high above the crowd, trailing a vast skirt of parachute silk. Ward is multi-talented – cabaret chanteuse, rapper, composer, clown and poet – but she isn’t an acrobat.
The lift requires careful rehearsal. It’s a jerky stop-start process, frequently interrupted by artistic director Mike Finch as he discusses the mechanics with Ward and the riggers keeping her safe. It’s a small insight into just how much work goes into staging a show like Steampowered, a steampunk circus extravaganza.

Steampunk is a genre celebrating a Victorian era that never was, a past with clunky machines festooned with clockwork gears and powered by superheated water. Already beloved by science fiction writers, the aesthetic has gradually made its way into the mainstream. When the set designer suggested Circus Oz take on steampunk, Finch says, “It was a real forehead-slapping moment.”

As Finch points out, although the classical circus tradition evolved in the real Victorian era, that circus was also never quite what people think it was. “Steampunk’s got that aesthetic unity, but it’s also history that never existed. And contemporary circus already plays with that – it’s in a constant state of reference to an imagined history of circus.”

Steampowered has inspirations as diverse as Mad Max, the works of Jules Verne, and the French circus troupe Archaos. But Finch promises that Steampowered is “more punk than steam. The punk side is all about breaking the rules.”

Known for its live music, raw acrobatics and cheeky flair, Circus Oz also has a reputation as a company interested in the ethics of performance. Aware of criticisms of steampunk as a genre nostalgic for a racist, misogynist, colonialist past, Finch “set a challenge” for designers. He wanted to explore the circus of an imagined past that had gone a better way: “What if Captain Phillip had integrated with Indigenous Australians? What if the Red Coats had gone bush? What would be a real Australian steampunk?”

The result, he says, is a show with a “Victorian aesthetic sensibility – but the politics are a complete reversal.” Especially, it seems, as regards gender roles.
“Our female characters are more practical action heroes than romantic heroines.”

Circus Oz’s commitment to social justice isn’t only reflected in their performances. For many years, the company has raised money for various causes, including Plan Australia, and the Asylum Seekers Resource Centre. Circus Oz gives out hundreds of tickets to people who might not be financially able to attend, including gifts to women’s shelters, youth groups, and Indigenous communities. Supported by the Philanthropic Trust, they are about to begin a three-year program developing Indigenous circus performers. “We don’t always want to make a big deal about it,” Finch says. “But we think, who is the most disadvantaged, and how can we support them?”

So Steampowered has ethics and aesthetics; but is it fun? Finch grins: “On the way out [the audience] feels good about it.”

Sarah Ward rehearses the lift again, this time with the other performers around her. The acrobats are dressed in jeans and tracky daks, miming their stunts. Ward isn’t singing in full voice. But the mistress of the ring raises one arm, grins, and disappears into the billowing silk. For that moment, the magic of a history that never was comes to life. The circus is in town.

Circus Oz is staging Steampowered in Melbourne under the Big Top, from June 22 to July 17. Auslan performances July 3.
By Karen Healey
Photo: Rob Blackburn
Mason Rolabola

MUSIC

We’re a radio community
By Zorana Dodos

Three different radio stations with three different specialties. With different channels and different themes, they’re all trying to win over you, the listener, and sell something with advertisements and music, right? Wrong. Melbourne’s community radio stations often work together to promote a sense of unity in the media, unlike today’s mass media.

You wouldn’t guess it by the hip and crazy mural painted by local artists on the side of its home on Smith St, Collingwood, but 3CR is one of Melbourne’s oldest running community radio stations. The station was established in 1976 to provide a voice for those marginalised in mainstream media. Groups like the working class, women, Indigenous people and the many community groups and community issues discriminated against, in and by the mass media, were finally given a chance to speak out in a profit-obsessed corporate society.

Station manager Loretta O’Brien says the thing most people enjoy about the station is the diversity of shows and people, and the ever-vibrant environment. “People on our shows are people who are directly talking about their own experiences rather than, for example, a media representative talking about a celebrity’s experience. We have real stories and real issues. Our volunteers walk straight off the street, get trained up in our training program and go ahead and talk about their real lives on air.”

And with over 300 volunteers presenting 120 programs each week, they must be doing something right for our community. “These are voices and stories that you won’t hear anywhere else on any other kind of media, whether it’s a young Arabic male talking about his experience with society in Australia or an individual suffering from a mental health disorder speaking about their experience with the health system in Victoria. We are representing the community at grassroots level. This is a bias that we know about and is something we are very proud of.”

James Williams from 3KND says that relationships between community radio stations are stronger than ever, and echoes the vibrancy and diversity of our beautiful city: “We’re not competing. We’re peers and we support each other.”

3KND is an Aboriginal community radio station known for being the voice of Australia’s traditional land owners. It is Melbourne’s first community radio station managed and owned by Indigenous Australians and has been airing programs and current affairs since 2003. “We give Indigenous Australians someone to speak to, and also when the wider community tunes in they too learn about Aboriginal issues.”

3KND offers training and support for young Indigenous Australians wanting to get out there and make it in broadcast media. “We have a lot of pride in how we empower young Aboriginal people. We have a lot of youth here that are studying and assistants in training. We’re really about getting the young ones through training and giving them the education they deserve. We talk about issues that they’re scared to. We discuss issues that there’s no money in but that people want to hear.”

3KND has around 30 volunteers participating in daily programs, weekly shows, and most popularly, their “deadly” current affairs program. “The only notable figure we haven’t had on the program is the prime minister. We’ve had pretty much everyone else – all kinds of ministers and politicians talking about issues important to Aborigines and then the wider community.”

PBS, “Home of little heard music”, is another community radio station known for its edge – or as station manager Adrian Basso puts it: “more music, less talk.” “What makes PBS different is that we solely focus on music. We celebrate music in all its diversity, a cornucopia of all different styles run by all our 100 volunteer announcers. All the mainstream stations out there ignore the little gems that we find. Our announcers go and search out these rare and wonderful pieces of art. We are dedicated and deliver quality programs week in and week out. We have over 80 programs each week – all music programs – that focus on all different kinds of music.

“I like to describe us being like a nice deli where there’s lots of things you can try you may not have tried before. If you transfer that idea to your ears and to sounds, you should give us a go. I recommend people jump on our website and check out what we have to offer. Depending on what time you tune in, you’re bound to find something you’re into.”

Over 20,000 Australians volunteer regularly at their local community radio stations each year. This makes up over $145 million and seven million hours of unpaid work per annum. Community radio stations have been around Melbourne for over thirty years, providing an outlet for music and issues that are often missed by commercial and public broadcasters. “A lot of our needs are overlooked in conventional media. It’s all driven by profit and money,” adds Williams.

Community stations usually avoid content found on commercial radio, like repetitive Top 40 hits and unfunny comedians on your drive home. Commercial stations don’t play music from emerging artists unless they’ve been paid to do so. They also wouldn’t interview your local café owner holding a charity event raising funds to support the local autistic school. Community radio acts as a means for people to tell their own stories, promote good for our community and contribute to the world of broadcast media.

“Community radio is a part of Melbourne life. Melburnians are participants and enthusiasts of community radio; they’re also very supportive of their environment. It’s just like our café culture, our football culture; it’s the same thing with our radio culture. There is no limit with Melbourne community radio. There’s so much on offer out there, the connection with the community is quite strong,” explains Basso.
In the booth at 3CR
Smith St community radio station 3CR
Photos: Stefan Elias
“Over the thirty-five plus years community radio has been around, we’ve grown our own patch, developed our own area. We’ve all established our own niches. People get into what we do, they support us, and this really shines back into the community.”

ADVERTORIAL
Michael Jackson lives on
To mark the 3rd anniversary of Michael Jackson’s passing, Australia’s No.1 Michael Jackson tribute artist, TJ, will perform a special dance tribute concert at Spencers Live, Melbourne. TJ rose to fame when he was declared winner of a competition run by Channel Nine’s The Today Show to find the best Jackson dancer in the country. TJ has just returned from an amazing tour of Indonesia, performing as the headlining act at The Hard Rock Hotel Bali, and is now back home to perform his new world-class show, packed with all the big hits: Billie Jean, Thriller, Beat It, Smooth Criminal and many more. Finally, Australia can experience a Michael Jackson concert – featuring the best of the 1987 Bad Tour, 1992 Dangerous Tour, 1996 History Tour and 2009 This Is It Tour. “This is the most authentic Michael Jackson tribute show you will see; it will be a memory of what MJ brought to the stage during his various world tours – not just a list of songs.” Costumes and choreography are exactly the way Michael presented them on stage. TJ is backed by a team of Melbourne’s most sought-after professional dancers, and the act features an incredible state-of-the-art light show. More special effects, more dance and purely the passion that only the brilliant TJ can bring.

FOOD & WINE ADVERTORIAL

Hot Chokolait... an experience to remember
Winter is officially here: time for all you chocolate lovers out there to enjoy a hot chocolate with friends to dispel those winter blues. But before you settle for just any hot chocolate, consider taking your winter love affair to a more... intimate, discerning and exciting level. Yes, just as the coffee world is concerned with roasts and blends, and the wine experts indulge in debates about terroir, the hot chocolate world has its own aficionados of technique and cocoa regions.

Tucked away in the Hub Arcade on Little Collins Street, husband and wife team Ross and Marianna run Chokolait: a chocolate salon dedicated to providing premium hot chocolate for ardent and aspiring connoisseurs. I followed the sweet aroma down the arcade and sat down at a cosy corner table with Ross, chocolatier and chef, to find out what it really takes to make a serious, quality hot chocolate.

Ross tells me that, just like fine cuisine, the most important thing is quality ingredients. If you have the best ingredients you shouldn’t have to add anything; the real flavour of the chocolate should be allowed to shine. “Strictly no powders, artificial fillers, flavours or cream”, emphasises Ross, who explains that “all too often these are used to camouflage inferior chocolate while pure melted genuine Belgian couverture chocolate is simply in a class of its own.” He emphasises that he uses the word “genuine” deliberately to describe his Belgian chocolate, much like the French region defends its exclusive rights to the term Champagne.

Exuding not only a culinary but also a sensory passion, Ross talks about hot chocolate as an experience, saying “If you are a serious hot chocolate consumer, having the same hot chocolate can get boring. Unlike coffee, you drink it for the sensory experience, not just to get you up and going. We’re here to provide that experience tailored to the individual.” Marianna, always ready with a welcoming smile, froths milk to the perfect consistency and temperature, seamlessly combining the pure melted chocolate through the milk with deceptively little effort.

It may sound obvious, but hot chocolate must be served hot, because your experience should be savoured, not rushed. You can choose from dark, milk or white hot chocolate, and add chilli, cinnamon or coffee, made fresh in any combination. Chokolait also has a range of genuine country-of-origin chocolate buttons from around the world, ready to be melted down and crafted into a more exotic hot chocolate experience.

The variety of single origin hot chocolates on offer is almost as diverse as Chokolait’s clientele. Ross has sourced the range from Uganda, Ecuador, Costa Rica, Papua New Guinea, Peru and Venezuela, each of which has been made using cocoa sourced from Rainforest Alliance Certified farms or has Fairtrade approval. Ross explains that it is only fairly recently that country-of-origin chocolate could actually be obtained in the chocolate world: “Conglomerate chocolate companies catering for mass consumption, perhaps because of the extra work and logistics involved, prefer to throw chocolate from different regions into the same vat, losing the distinctive flavour of each individual country’s produce.” However, despite these inconsistencies that filter through the chocolate world, Ross continues with a smile, optimistically explaining that now “single origin chocolate is steadily becoming more widely available, which is in large part thanks to people becoming more sophisticated and discerning in their consumption of chocolate and seeking new sensations and innovations.”

Each single origin chocolate has its own distinct flavour because each country’s local climate, soil, plants and agricultural practices affect the flavour of the final product. The best examples of this are the decisive differences between the chocolate originating in Papua New Guinea, Peru and Costa Rica. They each have the same cocoa content of 64%, but because of their different places of origin they each have their own individual flavours and intensity. The Papua New Guinean variety has a very distinctive, smoky flavour with a hint of whisky, whereas the chocolate from Peru has surprisingly fresh fruity accents on a rich cocoa foundation. In contrast, the strong, smooth, dark chocolate flavours of Costa Rican origin chocolate have made it a favourite among Chokolait’s regulars.

Whatever hot chocolate you decide to try, Chokolait ensures that the natural flavours of each individual type of chocolate are its focus, for pure and serious hot chocolate decadence. Not sure if you’ll enjoy the smoky 64% Papua New Guinean, the 71% Ecuadorian or the bold and robust 80% from Uganda? Just ask if you can sample a chocolate button to see if it suits your taste. There’s plenty to choose from, and Ross and Marianna are always happy to talk and help you select your new hot chocolate experience, but be warned: their passion for hot chocolate is infectious.

Melbourne may be considered the coffee capital of Australia, but the country’s cultural capital most certainly has a hot chocolate heart – beating defiantly and waiting to be experienced in all its delicate richness at Chokolait.

A short history of chocolate
Chocolate is made from the seeds of the cocoa tree; its scientific name, Theobroma cacao, translates to “food of the gods”. The word chocolate comes from the Aztec word xocolatl, meaning “bitter water”. Hernan Cortes brought the valuable secret of xocolatl back to Spain, improving the recipe with the addition of sugar from the East Indies and vanilla procured from Mexico. When the Mayans first turned cocoa seeds into a drink, it was drunk cold. Cocoa beans were used as a form of currency in the Aztec Empire; it was only after the beans had been worn down that they were used in drinks. When the cocoa pod ripens it turns from yellow to orange.

Chokolait proprietor Ross
Husband and wife team Marianna and Ross
A decadent hot chocolate from Chokolait
A ripened cocoa pod
Photos: Chokolait

TECHNOLOGY

Australian Internet celebrates a landmark
The Internet – “internetworking” – had been in operation as we know it since 1982 and had existed in one form or another for more than 20 years before that. But the decision of the University of Southern California’s … most secure and trusted namespaces, according to AusRegistry CEO Adrian Kinderis.

Australia’s Internet integrity also benefits the country by attracting commerce, tourism and investment. “Just as Australia has a proud identity that it projects to the world, the .au space does the same for our online presence,” said auDA CEO Chris Disspain. “The .au domain name is Australia’s home on the Internet. It’s a safe, reputable environment. That’s why more and more people and companies are gravitating towards .au names that allow us to shop, live, work and play online with confidence.”

Not every view of the Internet is as positive or altruistic as Mr Disspain’s. The Australian novelist Peter Goldsworthy recognised some of its foibles, describing it as a place where “our deepest, darkest and most thrilling secret impulses can be found and allowed out on parole”. … “dried tree flakes encased in dead cow”. When electronic communication became widely available, the cynics bemoaned the supposed deleterious effects in ways …

Affordable video service for independent artists and advertisers
By Dean Watson
Brett Ludeman, founder of Storybottle, has been overwhelmed with the interest in his video service since its launch, a little over a year ago. “There’s a lot of advertising out there that is really just explaining the product, rather than selling it,” he explained. “My ads are aimed more at creating a feeling; creating a curiosity and a buzz around the product.”

Storybottle is a new creative venture that provides independent performing artists and advertisers with affordable video for online content and advertising. “Up until now, video wasn’t very affordable for the industry, especially for independent performing artists. It wasn’t an affordable direction for advertising to go in. Photography and word of mouth was the go, but now video is becoming more and more accessible.”

With a background in acting and directing, Ludeman decided his videos were best put to use in the performing arts. “I saw a lot of videographers selling into the more ‘financially loaded’ industries. I felt drawn to go back to my roots, to the industry that really deserved it.” His hunch is looking more and more like an inspired decision. “I have a small team of photographers and videographers working with me. We’re starting to get more and more people on board.”

With clients from WA, SA and NSW, Ludeman is hoping to expand as soon as possible. “I see Storybottle going international and bringing artists together through online video. I want performing artists to think about how we communicate between theatre groups and theatre companies and how we communicate globally as a theatre network.” For more on the video services provided by Storybottle visit:
Storybottle founder, Brett Ludeman. Photo: Teagan Crowley

EDUCATION

Education: Making change easier
By Candice de Chalain
People say “an education opens doors” but many people fear it will also close their bank account. However, learning new skills, or polishing your existing ones, doesn’t have to come with a big price tag. New skills can be life changing, whether you are young, old, a student, job-seeker or employee who wants to excel or change your career. Acquiring new skills places a basket of opportunities at your feet, filled with job choice, self-confidence and the chance to interact with others like yourself. There are thousands of affordable courses in Victoria, ranging from on-the-job training, apprenticeships and adult community education, to TAFE and university degrees.
There are also plenty of grants, loans and payments available to help pay for your course – all you need to do is reach out and grab them. An easy and affordable way to sharpen your skills is to do it locally. Virginia Lowe is one of the 110, 000 Victorians who learn locally each year. As a single mother of three, Virginia had been swept off her feet for 11 years caring for her children. But when her youngest child “Go for it. There are so many different things you can do and it’s more affordable than you think. It gives you the confidence to achieve and once you start you never know where you’re going to finish.” Virginia Lowe went to school she wanted to re-enter the workforce. Virginia used to work as a machinist, but after having children she no longer wanted to work in a factory. “I wanted something that was more flexible because I had young children,” she says. However Virginia wasn’t sure she had the skills to enter a new profession. That’s when she discovered Learn Local. Learn Local offers a range of adult community education and training programs that help people start work, return to work, change jobs or remain employed. Courses range from basic computer skills to certificates and diplomas in business, information and technology, and community services, such as childcare, aged care and hospitality. Virginia completed Certificate II in Information and Technology, before deciding to enrol at TAFE to complete Certificate III in Business Administration. Virginia found her courses to be very flexible and she was able to customise her workload to suit her lifestyle. “They knew I was a single parent and structured my course so I could still be home for my kids,” Virginia says. Virginia’s new skills placed her in the perfect position to apply for a job at her children’s primary school. And guess what? She got it. 
Virginia is now working as an Administration Assistant while completing Certificate IV in Business Administration under the Victorian Training Guarantee. The Victorian Training Guarantee subsidises courses for people over 20 years old if the qualification is higher than their other qualifications. It also subsidises courses at any level for Victorians under 20 years old. "Fees for the course were actually quite reasonable," Virginia says. There is a range of grants and payments available. Virginia qualified for the Pensioner Education Supplement, which is a fortnightly payment to help pay for study-related expenses. Virginia says she is "very happy" in her new abilities and job but admits she wouldn't be where she is if she hadn't enrolled in training courses. "There are more opportunities available to me now. I feel much more confident in my ability and I know that if I want to do something else I can," she says. "It's made me realise that you're not stuck where you started. It's a journey and you keep building on what you learn." With so many opportunities literally just around the corner, Virginia says she would recommend enrolling in a course to anyone. "Go for it," she says. "There are so many different things you can do and it's more affordable than you think. It gives you the confidence to achieve and once you start you never know where you're going to finish." To find a course in your area, visit

Sparks fly at 370°

The 370° skills centre provides quality Electrotechnology training in pre-vocational, apprenticeship and trade-based courses. Unique and industry-driven, the 370° skills centre is the largest private RTO trainer of electrical apprentices in Victoria. Known for its highly experienced, industry-specific staff, the centre holds a strong partnership with the National Electrical and Communications Association (NECA). The skills centre prides itself on being the benchmark for flexible and dynamic Electrotechnology training to meet all the demands of the industry. The new e-technologies training facility is a showcase for the electrical industry and has 'state of the art' workshops and classrooms. The 370° skills centre offers cutting-edge training and technology in the way of:
• Interactive blended learning environments assisting with student participation, retention and completion rates.
• Classrooms fitted with interactive whiteboards, computers and wireless internet to enhance engagement between the teacher and student.
• A combination of online and face-to-face teaching with added practical application.
The 370° skills centre has two campuses operating in Victoria, in Carlton North and Brunswick. Call 9388 0566 for more information.

Pre-apprenticeship (Electrical): Are you interested in gaining an Electrical Apprenticeship? Improve your chances by enrolling in our industry-endorsed pre-apprenticeship course commencing Monday 18th July 2011 at the 370° skills centre in Carlton North. To enrol please call us on 9388 0566 (places are limited). This training is delivered with Victorian Government funding (subject to eligibility criteria).

Job market showing resilience

Jobs Minister Chris Evans believes the latest employment data highlights the resilience of the labour market, with the jobless rate holding at 4.9 per cent. The Australian Bureau of Statistics (ABS) said just 7800 jobs were created in May, well shy of the 25,000 expected by economists, but after a surprise fall in April. This kept the unemployment rate steady for a third straight month. "The latest ABS figures show the resilience of Australia's labour market," Senator Evans said. "The latest figures show that in the past month we have seen a return to job creation." Australia remained in an enviable position compared to its international counterparts, with Europe and the US continuing to battle unemployment rates in excess of nine per cent.
"With the unemployment rate expected to fall to 4.5 per cent by mid-2013, we're focused on building a bigger and better workforce that our economy needs," Senator Evans said. "We want to ensure Australians receive the training and education they need to enable them to access the job opportunities which will arise from our economic strength." AAP

CAE College mid-year enrolments: If you or someone you know is thinking about changing schools or even dropping out, consider a transfer to CAE instead. Whether it's VCE, continuing into tertiary education or increasing employment prospects, we can help. To find out how, call 9652 0713 or visit. Enrolment dates: Thursday 23 June, 1-6pm; Tuesday 5 July, 1-6pm.

Going back to study doesn't have to be scary... help is available!

The last few years have been hard for Melbourne's economy, families, young students and individuals returning to the workforce. But if you have lost your job or have been retrenched, and are looking for an alternative career to suit the new demands of employers, then Skill Up may be for you. This is a free Victorian Government program designed to allow you to upgrade your skills so you can re-enter the workforce. The program entitles people to receive up to 80 hours of training at a TAFE or private registered training organisation free of charge and leads to an Award or Statement of Attainment. There is also course guidance and counselling available to help you work out what is the best course for you. Here are a few options specifically designed for those looking to further their education:

If you have been out of the workforce while caring for your children: You may be eligible for the Victorian Government's Victoria Works grant. Grants can be up to $1000 and can be used to cover the costs associated with training, such as books, course fees and childcare.

If you are a studying parent: Centrelink's Jobs, Education and Training Child Care Fee Assistance can provide extra help with the cost of childcare for parents who are studying.

If you are an Indigenous student: Indigenous students and full-time apprentices are eligible for a fortnightly allowance, called ABSTUDY, to help with the cost of studying. The Indigenous Completions Initiative allows Indigenous students to pay only the minimum tuition fee for enrolment in government funded training, including diplomas.

If you are over 25 and studying or doing an apprenticeship full time: You are probably eligible for financial assistance from Centrelink via Austudy.

If you are a mature-aged worker: Over the next three years, workers aged 50 years and over with trade-relevant skills but no formal qualifications will have the opportunity to have their skills assessed and formally recognised. As part of the new Federal budget, training to address skills gaps will also be funded. Victoria has a shortage of skilled workers across a range of industries, meaning there are plenty of opportunities for you to up-skill and meet this increasing demand for higher skill levels and qualifications. Employers and small business owners can apply for Experience Training grants. This $4,950 grant provides training at the Certificate III level or above for workers aged 50 years or over so that they can mentor and supervise apprentices and trainees. For more information and details, visit:

If you are a pensioner: Centrelink offers a Pensioner Education Supplement, in the form of a fortnightly payment to help cover the costs of full-time or part-time study.

If all this just sounds too expensive... The Australian Government provides FEE-HELP loans for full fee and government subsidised diplomas and advanced diplomas in vocational education and training. This means that you do not need to start paying the fees of your course until you are employed and earning a certain amount of income. Centrelink will pay you Youth Allowance if you are between 16–24 years old and are studying or apprenticed full time. If you turn 25, you can keep getting Youth Allowance until you finish your course or apprenticeship. Most TAFEs, universities and educational institutions provide a range of scholarships to help pay for tuition fees, books and other training expenses. Check your preferred learning institution's website for details.

Helpful resources:
• .au – tips, traps and triumphs of studying as an adult.
• To fund a subject or course that suits your needs, visit
• The Australian Apprenticeships Access Program helps disadvantaged jobseekers find training from training organisations that are accredited and recognised by employers. See www.centrelink.gov.au for more information.
• The Employment Pathway Fund provides funds to purchase a broad range of assistance to help you get the right training and other support to help you find and keep a job. See Job Services Australia.

And don't forget the eLearning Grant: eLearning is important in increasing participation as it provides the flexibility of choice over time and location of training. Through Skills Victoria, the Victorian Government funds the eLearning Grant, an initiative that provides annual funds to all TAFE institutions to increase the uptake and integration of eLearning in their organisation. The grant funds projects that focus on staff development, research or developing teaching resources to support flexible delivery. Trainees and apprentices are even paid wages while they learn.

Slow progress in bridging the gap

The Council of Australian Governments (COAG) Reform Council has expressed concerns about the number of indigenous children not sitting literacy and numeracy tests. The federal government's National Assessment Program Literacy and Numeracy (NAPLAN) tests quiz years 3, 5, 7 and 9 students every year on their reading, writing and arithmetic. Test results feed into the My School website, and concerns have been raised that some teachers are asking poorly performing students to stay home in order to boost their school's scores. Head of the council secretariat Mary Ann O'Loughlin said she had asked test administrator the Australian Curriculum, Assessment and Reporting Authority to provide better breakdowns of NAPLAN absences in future, showing whether students were sick, genuinely exempt or perhaps being asked to stay away. Council chairman Paul McClintock admitted it was possible some institutions were trying to skew the results. "Gaming is not impossible, even if it's not at state level ... a local level," he told reporters in Canberra. The council's 2010 report on national indigenous reform agreements between the states, territories and commonwealth revealed the proportion of indigenous Year 3 students taking NAPLAN reading tests was 78.7 per cent in the Northern Territory, and 95.6 per cent for their non-indigenous peers. Large participation gaps were also recorded in Victoria, South Australia, Western Australia and the ACT, and they widened the older students got. "There is always a concern that people give up on Year 9 and say, 'Well, it's all too late, we'll concentrate on Year 3,'" Mr McClintock said. "It's really early days ... but if you start to get behind on a trajectory, particularly if you are going down, then your chances of getting back on it can be quite hard." The states and territories have agreed to work towards halving the gap between the number of indigenous and non-indigenous students at or above the national minimum reading, writing and numeracy standards by 2018. There was no major improvement in numeracy anywhere, in any age group, when it came to the proportion of indigenous students achieving at or above the national minimum standard between 2008 and 2010. Mr McClintock said progress was slow in some areas. For instance, in Year 9 reading, NSW, Queensland, WA, Tasmania and the NT did not meet their progress points. In years 3, 7 and 9 numeracy, and Year 9 writing, NSW and the NT did not meet their progress points. AAP
(Indigenous children share their musical talents)

FITZROY LOCALITY FEATURE (MCN FEBRUARY 2011 • VOL 1, ISSUE 12)

The best of the old and the new in Melbourne's oldest suburb

Fitzroy was Melbourne's first official 'suburb' and it exemplifies that fantastic marriage of the old and the new that makes this city such an exciting place to live. Brunswick Street and Gertrude Street are the dual hearts of Fitzroy, and they pulsate with possibilities for anyone interested in shopping, eating or entertainment. Besides all the great pubs, shops, and cafes, this suburb, sitting on the traditional lands of the Woiworung tribe, still has some of the most beautiful bluestone colonial architecture to be found in Melbourne. And from the small commercial-art galleries, artist-run spaces and artist studios to the thriving street-art community, Fitzroy is also home to some of the most dynamic art in a city of artists. Here's a tiny snapshot of what the suburb has to offer this month.

Rose Street Artists' Market
Every Saturday and Sunday, over 140 of Melbourne's best emerging artists and designers showcase their work at one of the coolest markets around – the Rose Street Artists' Market. Saturdays are for established artists and Sundays for the up and coming newcomers. This is where you can find artworks, jewellery, fashion and home wares that you can't find anywhere else. You can find it between 9.00am and 5.00pm, at 60 Rose Street, Fitzroy. (Photo: Rose St Artists' Market – artworks, jewellery, fashion and home wares that you can't find anywhere else)

SoundWaves
SoundWaves is back again until the end of February at Fitzroy Swimming Pool. Every Sunday afternoon from 1.00pm, poolside DJs provide cool music to help take on the summer heat. The pool itself is open every day of the week, including most public holidays. With its crystal clear waters, indoor cycle training, yoga school, spa, sauna and steam room, Fitzroy Pool has something for everyone at any time of year.

Eco-House Open Day at Holden Street, North Fitzroy
The Enviro Shop, community stalls, free bike checks, children's activities and lots more. Saturday February 26, 11.00am to 3.00pm at 128 Holden Street, North Fitzroy. Or call Rachel Oliphant, Sustainability Officer, City of Yarra, on 9205 5769, Rachel.Oliphant@yarracity.vic.gov.au (Learning and conserving: Holden Street Neighbourhood House. Photo: City of Yarra)

Bimbo Deluxe
... lounges or in the rooftop courtyard. The warmer weather will see live bands take to the outdoor stage beneath Moroccan lights. While downstairs on any given night you can find some of Melbourne's most talented DJs mixing an array of music styles from Hip Hop to Disco, House and Techno, Rock, Soul and Funk. Within its walls has been forged an atmosphere of comfort matched with down-to-earth service and an 'anything goes' attitude. Make Bimbo Deluxe your inner city entertaining lounge room.

Postal Address: PO Box 582, Collins Street West, Vic 8007. E-mail: info@mc-news.com.au
The frequency of strokes, sudden death and heart attacks peak between 6 am to noon, with the highest incidence between 7.00 am and 10.00 am. What mechanism within the body can account for this significant jump or be the reason why people never wake up? Platelets – no not mini-sized dinner plates, but those tiny white blood cells that keep us from bleeding to death when we Photo: Stock down of the system. A build-up of plaque slowly attacks the immune system and leads to a variety of diseases including heart failure, cancer and possibly even mental illness. Our stomach and internal temperatures are warm, so drinking hot water helps our body flush out plaque and keeps the wall of the intestine and villi (the little “hairs” in the intestine) healthy and clean, allowing maximum absorption of vitamins and minerals into the twelve organs in our bodies. My Grandma’s advice was never to skip breakfast and now I know the physiological reasons behind it. One of the main causes of high blood pressure is the chronic deficiency of essential nutrients. Millions and even trillions of artery wall cells are responsible for the availability of relaxing factors (nitric oxide) which decrease vascular wall tension and keep your blood pressure in the normal range. Eating a healthy breakfast decreases your likelihood of developing high blood pressure, diabetes and other diseases. In simple words, keep get a cut—clump together inside our arteries due to cholesterol/ plaque build up in the arterial lining wall. During the gap between falling sleep and waking up, platelets become the most activated and usually form internal blood clots and poor circulation at the greatest frequency. During that gap, you have not fed the twelve systems of the body. You then expect to get up, race around and get everything done. It’s no small wonder why you’re feeling like you’re always running the Monday marathon! 
Fortunately, even a very light meal will minimise the morning platelet activation that is associated with high blood pressure, strokes and heart attacks. Studies performed in 1991 at the Memorial University of Traditional Chinese Medicine: flourishes in Melbourne By Louise Collins I Photo: Stock n the mid-nineteenth century, long before the first Chinese restaurants popped up in Australian suburbia offering an exotic alternative to fish and chips on a Saturday night, Traditional Chinese Medicine (TCM) was practiced by Chinese migrants in the goldfields of Victoria. The practice grew slowly over time as the result of increased migration of practitioners to Australia. By the 1970s, with a population open and accepting of alternative medicines such as TCM, naturopathy and homeopathy, the Australian government-sponsored health insurance for full acupuncture cover for patients of registered Western medical practitioners. Since the 1990s, the Australian government has accredited university degrees in Traditional Chinese Medicine and in May 2000, the Chinese Medicine Registration Act was passed by the Victorian Parliament. TCM practitioner, Dunnielle Mina, sees the popularity of TCM in Australia as being more than the result of industry standards or Freshly ground herbs are more potent than dried public acceptance of alternative therapies. “The fact that the industry is now registered and health funds are offering rebates for a lot of our services is evidence of the rise; the cause of the rise is people’s disenchantment with the reductionist medical system”. Whereas Western medicine is seen to provide treatment for a precise illness, TCM addresses how illness is revealed in their patient and then treats that patient, not just the disease. Taking a holistic and preventative approach to health has become common practice. People have become proactive in their approach to their well being, and less inclined to take their antibiotics and sleep it off. 
Mina explains: “… you’ve got a system that’s looking at disease first and the person second and a TCM practitioner that’s looking at the person first and the disease second, so we’re looking at what’s going on in this person’s life: their lifestyle, the way they think, the way they feel, what they eat and all of these different aspects and we treat that instead of treating the disease.” “I think one way to summarise would be to say that we’re working from a foundation belief that the body is an inherently intelligent system, it’s about working with the body to get rid of the disharmony…the human body is quite miraculous when you look at it so it’s hard to separate the metaphysical from the physical because it’s a body of knowledge that sees them as interwoven, codependent.” TCM practitioner, Dunnielle Mina, believes conventional Western medicine and TCM are beneficial as complementary medicines and the rise and rise of popularity of TCM within mainstream Australia is bringing the two closer to an integrated approach to health. “I think these days more and more orthodox medical practitioners are willing to work with TCM practitioners … there’s now a lot more Western-style scientific research done in to Chinese herbs and acupunc- Newfoundland discovered that eating a light, low fat breakfast was very critical in modifying the morning platelet activation in the body. Recommended breakfast components included protein, carbohydrates, fibre, fruit, yogurt, milk or soy. My recommendation is to continue that meal combination throughout the day, with each meal containing protein, carbohydrates and fibre. Some suggestions are porridge with fruit, honey and/or nuts or eggs on toast with grilled tomato and sautéed mushrooms. The choices are endless so be creative! It’s said that it takes seven days to form a habit. 
If you haven’t had a healthy breakfast today, and you’re not drinking warm to hot water on a regular basis throughout the day, NOW is the best time to start forming habits that could literally save your life. Violeta Edge About Violeta Violeta Edge is a qualified, experienced holistic therapist, consultant and practitioner who specialises in improving the quality of lives and lifestyles by using natural remedies and therapies. Contact Violeta by calling 0403 287 702, emailing: violeta@veunique.com or visiting her website: Photo: Stock M y journey as a wellness and lifestyle coach started many years ago in the Philippines. When I was growing up, my Grandma (Irene) always drank two to three mugs of warm to hot water—sometimes flavoured with ginger, lemon juice and honey—first thing in the morning before anything to eat. Then after ten to fifteen minutes, we would have breakfast. The human body can be likened to a car. Trying to operate your body without water is like driving your car without petrol. Sooner rather than later, it’s going to stop operating properly and leave you stranded. The key is to drink warm to hot water. If hot water is not available, drink room temperature water, but never cold water, especially while eating. Why? Firstly, it upsets the digestive system by diluting the digestive juices. That, in turn, can cause indigestion. Also, cold water solidifies oily and greasy food and slows down digestion. This leads to increased stomach acid, blood clots, high cholesterol and a general slow- Organic, safe and effective ture, it’s growing and I think as it grows our esteem with orthodox practitioners rises, but there’s a lot of catching up to do in terms of that evidence-based research.” 16 MCN ON STAGE JUNE 2011 • VOL 2, ISSUE 4 Principle Dancer, Amber Scott shares her background as a ballet dancer and her experiences with Elegy. 
Elegy brings sensory experience to Melbourne
By Clarissa Dimitroff

From a very young age Amber was told that she had potential as a dancer, and, pirouetting her way to success, she was promoted to Principal Dancer this year. Amber has been with the Australian Ballet Company since 2001. Beginning her journey at the Australian Ballet School at the age of 11, Amber has never hesitated in her career choice. With four years spent overseas with the Royal Danish Ballet, as well as performances in many classical ballets nationally and internationally, her repertoire has increased rapidly. Despite her extensive experience, her favourite role is dancing as Odette in Graeme Murphy's Swan Lake: "I think to date it's still the hardest role I do and have done. That's been a milestone role that's helped me progress, I've grown up with it."

Playing such an important role in a production is never easy, so the dancers are required to rehearse about seven hours a day, six days a week. They have a week off in June and three weeks at Christmas for much needed rest time and to see family and friends. Although it can be a very difficult career, it is also very rewarding. "I couldn't have wished for a better career. It's hard, so you've got to be prepared to do the work for it."

On stage, Amber says she feels totally liberated: "It's a live performance. Every time you go on stage, it's a little bit different. You're working with a pas de deux partner usually," she explains, "which makes the experience challenging but so rewarding." In such a style the two dancers must be completely in sync with each other, and together they bring out the subtleties of the story and the characters' emotions.

Elegy addresses an issue that appears to perennially plague the minds of mankind. Life after death is a subject which calls for reflection and contemplation, and Requiem in particular touches closely upon this topic, examining how life is a spiritual journey with death as its inevitability: "Working with music and hearing pieces like the Requiem by Fauré and the Bach pieces, you can't help but be moved on quite a deep level," says Amber, "and I believe there's an underlying sense of spirituality that permeates this work." If that wasn't enough, the role is certainly Amber's dream come true: "I'm a big fan of Stephen Baynes' work and I've always wanted to dance in Beyond Bach." Hearing the master's music, you are reminded that the pieces were composed in a different era: "There's that beautiful feeling of history being passed aurally onto the next generation."

The Victorian Opera adds another exceptional dimension to Requiem. Amber says, "Having the singers surround the dancers is just going to be the most beautiful, and to borrow some of David [McAllister]'s words, it promises to be a sensory experience." With amazing choreography from Stephen Baynes, the talented dancers of the Australian Ballet and a first-time accompanying performance by the Victorian Opera, Elegy is not to be missed.

(Amber Scott with Andrew Killian in Beyond Bach, part of the Elegy program. Photo: Jeff Busby)

Peter Berzanskis is Stephen King
By Tilly Lunken

Peter Berzanskis sits casually, chatting about juggling casual work and travel with his burgeoning acting career, and it seems a familiar story that belies his late entry into the performance industry. Berzanskis, however, seems content with his career change from working in community broadcasting into the less definable life on the stage. His commitment to acting is paying off, with his resume expanding with numerous short films, feature films and now the stage.
Now, as the lead in Lee Gambin's new play King of Bangor, he is bringing to life the publicly well loved (and privately tormented) horror novelist Stephen King. The cast and crew are now embarking on an intense rehearsal period before the play opens on June 29 for its Melbourne season at the Old Council Chambers at Bella Union. It is this process that Berzanskis has come to relish: "I like theatre because of the rehearsal period and the development," and, having "experienced a lot of things including the shit that happens to everyday people," Berzanskis is enjoying the process of working with the other talented members of the cast under the direction of Dione Joseph.

The synopsis of King of Bangor promises a world of oppressive darkness and an insight into the spine-chilling creations of Stephen King. "In the opening scene," Berzanskis says, "Stephen is writing and it's terrible." From here the fears that he has previously purged onto the page come back to haunt him, and the blurring of the boundaries of reality unleashes new horrors. In a one-act play there will be no opportunity for the character (or the audience) to escape.

When asked if portraying such a public figure is a challenge, Berzanskis agrees: "Yes, because people who particularly like him have an idea of him already, [he] already exists as someone people already know." However this seasoned actor isn't daunted, and it is clear that he is eager to delve into a character with an expansive and complicated psyche. You might think you know Stephen King, but King of Bangor will offer an opportunity for a new perspective. As Berzanskis says, "a painting of a flower might not be a 'true' representation but it makes us see the flower in a different way." And would he describe himself as a fan of Stephen King and his work? Yes indeed, but as an actor he still has no reservations about getting inside his head!

Finally, Berzanskis warns us: "The voyage is more than just a creative exploration of Stephen King, it explores the struggles of a writer confronted with his greater fear – that of not being able to put words to paper". For both Stephen King fans and theatre goers alike, the play combines dramatic appeal with iconic references. Book tickets by visiting: .wordpress.com

YouTube artist comes to town

If you haven't heard of David Choi from YouTube then you've certainly been missing out! The hottest new artist, with over 800,000 subscribers and over 90.8 million total video views, is a local boy from Los Angeles, USA, and he's coming to Melbourne to get the party started! He is the first YouTube artist to embark on an international tour in the Asia Pacific region, and after performing concerts in Hong Kong, Malaysia, Singapore, the Philippines and Indonesia he will arrive in Australia to bring the best of his music to his Aussie fans. With his opening concert to be held on July 1 at the Melbourne City Conference Centre, Choi will then proceed to Sydney and Brisbane. Having won the grand prize in David Bowie's Mash-up contest, Choi was also awarded winner of USA Weekend Magazine's John Lennon Songwriting Contest and appeared in USA Weekend Magazine with recording artist Usher. In 2008, David released his self-produced album Only You worldwide, followed by his sophomore album By My Side in 2010. Dedicated fans will have their chance to be on stage with David Choi himself by submitting a video of themselves dancing the "David Choi" dance! Presented by Sydney and Singapore-based entertainment company Monsoon Productions, tickets are available from .com. For more information and to keep up with the latest Choi Tour news, you can "like" Monsoon Productions on Facebook and also be updated on Twitter: @monsoonteam.com

MCN ON SCREEN, JUNE 2011 • VOL 2, ISSUE 4

By Dean Watson

Oscar-winning writer, director and animator Adam Elliot hesitates to call himself a writer.
"I feel like a fraud when I say I'm a writer. I'm happy to say I'm a storyteller. Maybe when I'm in my 60s, I'll call myself a writer. I never know what to have on my business card." I'm speaking to Elliot before an audience of writers in one of RMIT's lecture theatres. Like most of the arts venues across Melbourne, he mentions there is a high probability he has given a talk in this particular theatre at some point in the past. For someone involved in the highly reclusive field of making claymation motion pictures, and despite having spent the last 20 months working seven days a week completing the script for his new feature film, his public profile is strong. On a cold Monday night, there is a crowd of around 80 in the lecture theatre. He attributes the weather as part of the reason animation is so strong in this city. "Everyone's inside, which is very conducive to creativity. It's no coincidence that a lot of these [Academy Award] nominees come from Melbourne. Melbourne is an arts hub. I read in The Age a few weeks ago that there are 16 sword swallowers in the world and Melbourne has six of them. It's a great place to be an animator." Elliot is part of the movement away from the overly colourful Disney and Pixar films. Both his Oscar-winning short Harvie Krumpet (2003) and his feature-length Mary and Max (2009) are defined by their lack of colour, layered with a rich tapestry of deeply flawed but human characters. "My films are biographical. They're comedy/tragedies. There's no talking animals. Even though they're blobs of plasticine, I'm trying to create very real, authentic and endearing characters. I've thought about other types of writing, but I just love observing people. I'm a human sponge, like most writers." He speaks fondly of the time-consuming process of clay animation. "Five seconds a day is our rate. It's meditative. It's zen. I'm only making one film every five years. So I've probably got another four in me and then I'm dead!
The reason I choose plasticine over computer-generated imagery is really simple – I like to get my hands dirty. There’s something magical about clay animation. They don’t date.” For a filmmaker known primarily for his directing, Elliot stresses the importance of a good script. “When I was at film school, Fred Schepisi came and spoke to us. He said, ‘what are the three most important ingredients of a good film?’ and we all put up our hands and said ‘directing, script and casting,’ and he said ‘no, no, you’re all wrong. It’s script, script, script.’” “I realised I was never going to get the budgets I wanted for my films because of the nature of them – they’re dark and not Disney films – so I quickly realised I had to have a good story well told.” [Photo: Daniel Gregoric – Oscar-winning animator Adam Elliot] He cites Robert McKee’s “Story” as one of the few screenwriting books he’s read. “Like every good screenwriter should,” he adds. Details surrounding his new animated feature film are top secret, but he reveals it’s a romantic comedy. “The actors we want to get are ridiculous – Angelina Jolie, Jackie Chan and Gérard Depardieu.” On the legacy he would like to leave, Elliot is reflective. “I always pretend that the audience, when they come in to see one of my films, are sitting in individual compost bins, absorbing as much as they can and being nourished, so by the time they leave the cinema, they don’t feel like I’ve wasted an hour and a half of their life.” Judging from the number of audience members who waited patiently to talk to Adam after our interview, a cold Monday night in a compost bin, being nourished by Adam Elliot, is a place you want to be.

Hayes’ circus magic

This year’s Melbourne International Animation Festival (MIAF) is screening over 350 of the world’s best indie animations, ranging from scribbles in pen and pencil and rough cardboard cut-outs, to elegant paint and sand compositions.
But it’s MIAF’s new program Australian Showcase that’s oozing with local talent and getting people excited. Animator and artist Rebecca Hayes is one of Melbourne’s gems featured in Australian Showcase. Four years ago, Hayes was in high school. Now, the 21-year-old is an internationally accredited animator whose work has screened in Poland, the Melbourne Museum and even floated on a screen in Sydney’s Darling Harbour. Hayes’ next stop is MIAF, where she will introduce her latest animation The Show (2010). The Show catapults you into the midst of a quirky travelling carnival and offers a glimpse into the backstage world of performers. Approximately 20 circus personalities bring the screen to life by performing what is, for them, everyday routines in preparation for a show. But they are not your average characters. A bearded lady, Siamese twins, an armless contortionist and two fat ladies are just a few of the unusual personalities who bring a mix of melancholia and comedy to the big screen. Hayes thinks of The Show as “a whimsical visual treat” that captures the excitement felt before a performance. “I hope the imagery will take people away on a journey. I wanted it to be quite lighthearted but there is a sense of nostalgia in it,” Hayes admits. Without dialogue, The Show relies on Hayes’ hand-drawn imagery, and sound (by Angela Grant), to transport the audience into the carnival world. Hayes used a combination of traditional and digital processes to create The Show, scanning her hand-drawn illustrations and watercolour paintings onto a computer and then using a tablet to draw directly onto the computer screen. “I like the traditional process of creating things with my hands, but combining digital techniques saves time and means less frustration in the end,” Hayes explains. Hayes was propelled into a career in animation almost by surprise after creating her first animation Bus Stop (2007) during high school.
Bus Stop was screened in Top Screen during the VCE Season of Excellence, allowing Hayes to watch her work as part of an audience. “After being around the audience reaction to my work I was hooked,” Hayes says. Now having graduated with a Bachelor of Arts in Animation and Interactive Media from RMIT, Hayes is receiving glowing reviews and has been awarded the ArtStart grant from the Australia Council for the Arts to help establish her career. [A screenshot from Rebecca Hayes’ The Show] But Hayes isn’t alone in her talent. “Melbourne is buzzing with animators and creative types,” Hayes says. “It’s really exciting.” You can see Hayes’ The Show and films by other talented animators at the Melbourne International Animation Festival (June 19 to 26) at the Australian Centre for the Moving Image, Federation Square. We are offering our readers the chance to win one of six passes to the Melbourne International Animation Festival. To enter, simply email: win@mc-news.com.au with MIAF in the subject line. Please include your full name and address.

Retrofusion
By Karen Healey

Colin Sheppard has a message for Australian clothes shoppers: “Stop purchasing so much made-in-China rubbish.” While sympathetic to the financial limitations and lack of options that keep many buyers limited to mass-produced clothing (“It’s a vicious cycle”), Sheppard is definite about the need to support ethical clothing practices. He’s put his money where his mouth is. With partner Brett, he began designing and creating handmade clothing line Do-It Baby, which is now sold exclusively from their Chapel Street shop. “We cut it all out, we print it all, sew it all – it’s all from scratch,” he says. Even the labels are handwritten. And whereas inks in mass-produced printed clothing can often be produced from toxic materials, Sheppard works exclusively with one hundred percent non-toxic inks on every printed piece. It helps that the clothes are gorgeous. They’re wry and funky, with a sense of humour that doesn’t obscure the superb craftwork. There are hoodie cardigans with wooden buttons, dresses emblazoned with printed birds, and a top made out of patchwork pieces sewn together that takes up to five hours of personal labour to get right. One particular top features a skull made up of a collage of body pieces refashioned into a grinning face. [Photo: Gerard O’Connor – A sample of Do-It Baby’s collection] When Sheppard cites Vivienne Westwood as a major influence, it’s easy to see her work as inspiring his own. Andy Warhol is another inspiration, along with Australian designer Peter York. But the greatest inspiration for Do-It Baby is the fine work of times gone by. “50s frocks are by far the most incredible,” Sheppard enthuses. “From the 50s to the 80s, that’s what inspires us. We’re trying to do a 90s retrospective and I just can’t figure out where to go.” The spirit of retro fusion lives in Do-It Baby, where a demure grey dress with a 50s-style full skirt and cowl neck can hang beside a singlet top in a bright print that wouldn’t shame 80s designers. Even the shop’s décor is a stylistic mashup, with a vintage icebox serving as a dresser, old ships’ lights repurposed as store lighting, and colourful plastic robot statues placed to greet customers. They all belong to Sheppard: “I’ve always collected vintage pieces; vintage clothes, vintage toys,” he says. With the mix of props and racks of clothing, the shop feels like a backstage space. Perhaps that’s not surprising from someone who’s done so much work for the Melbourne Theatre Company as a scenic artist and costume designer. Do-It Baby came from a fortunate fusion of Sheppard with his partner Brett: “He did his fine arts degree in printmaking, and I’ve always been a graphic artist, signwriter and designer. It was magical, those worlds coming together.” Inspired to create, it was clear the medium would be fabric. Sheppard originally envisaged Do-It Baby attracting women from 20 to 35, but soon discovered the range had broader appeal and “now ladies up to 65” frequent the increasingly popular Chapel Street store. [Photo: Gerard O’Connor – A sample of Do-It Baby’s collection] Unlike many high-fashion designers, who seem to regard the bodies wearing their clothes as occasionally inconvenient coat hangers displaying their art, Sheppard accounts for the person within the dress, and likes the idea of “the healthy, curved woman.” Do-It Baby goes from size 8 to 16, and “we’re going to go up to 18 soon.” And for handmade clothing, the prices are remarkably low, ranging from $85 to $285. “When we were wholesaling, the shirts would go for $145,” he explains. “Now that we’ve got our own store we can bring prices down.” And yes, before you ask, Do-It Baby take commissions: “We’ve just been commissioned for a wedding dress,” Sheppard enthuses. “It’s full of amazing colour.” Do-It Baby is offering 30% off to all MCN readers who bring in a copy of this article. Do-It Baby is located at 15 Chapel Street, Windsor.

Finding the perfect winter hat

With the recent cold snap in southeast Australia, it’s time to put on a hat. The body loses between 70 and 80% of its heat through the head. And during a winter hike, camping trip or just walking to the train station to get to work, maintaining body heat is crucial. So a good hat is essential. And not just any hat will do on a winter outdoors excursion. You’ll need something that not only traps heat efficiently, but can also cope with sweat.
Here are some important issues to keep in mind when selecting a hat for a winter activity, and a few headwear options:
• Side-to-side protection: Use a winter hat or cap that covers not only the head, but the ears as well. Ears are particularly susceptible to frostbite.
• Material: This is a key factor in dealing with perspiration. Skip cotton if possible – cotton absorbs moisture easily. A wool hat will keep your head warm and copes with sweat better. Caps made of synthetic materials, such as polypropylene or fleece, hold heat in well and do not absorb water.
• Hat system: Some hats offer an adjustable function. A hat system consists of a lightweight polypropylene liner and a nylon shell to alter in changing cold temperatures.
• Hat alternatives: A balaclava offers probably the most complete protection for the head. It pulls down over the head and neck, and has an opening for the face. It can also be rolled up and used as a hat. A headband made of fleece or other synthetic materials provides another option. The headband can be pulled down to cover the ears and perhaps a bit of the cheeks, but it leaves the top of the head exposed. AAP
[A warm hat is a winter essential]

Bringing the bonsai to the city
By Zorana Dodos

We’re all familiar with the usual reasons to live in an inner-city apartment: instant access to art and cultural precincts, fantastic dining, glorious shopping, public transport – and the list goes on. But once the noise and frenzy have reached their peak, is there any chance for some inner-city peace and quiet? Forget that concrete jungle, because while some CBD residents may drive for hours to find a bit of serenity, you could be closer than you think. Bonsai trees in an inner-city apartment aren’t exactly hip and ultra-cool. In fact you’re probably thinking, “doesn’t my grandma meddle with bonsai?” Actually, probably not. Bonsai plants and trees are rapidly growing in popularity, and inner-city Melbourne apartments are increasingly realising their benefits. The Japanese term “bonsai” literally means plant in a tray. Through careful pruning and by wiring the branches, the trees develop the classic dwarf characteristics that distinguish these plants. “What’s different about a bonsai tree from other plants is the whole intention is to grow smaller and not bigger. It has a different sensibility from other plants,” explains Lindsay Farr, bonsai enthusiast and owner of Bonsai Farm in Hawthorn. “In an inner-city residence you can personally have many views of the world; there is no right or wrong way of living. Whatever you want in life and with a bonsai is a valid and creative pursuit.” For a tree to be considered a bonsai, the growing miniature tree must be a fusion of horticulture, nature and creative expression. Bonsai can be a philosophical experience, in a way a means for feeling the spiritual harmony between man and nature. “The greatest thing about these trees is the way they can quiet the mind. We live in a city so cluttered with noise. It’s great when you can go to a bonsai tree and just tend to it and be at peace. I think it’s the pursuit,” muses Farr. [Photo: Zorana Dodos – Lindsay Farr’s Bonsai Farm in Hawthorn; Could this be your favourite bonsai?] “What would suit an inner-city garden is the aesthetic side of a bonsai tree. It’s like an object of beauty that goes with the changes. The green is soothing, and the line of a rugged trunk in a modern habitat is a way of reminding you of nature in a busy world filled with noise.” “Everything we do, all the actions we take have a measurable outcome, but with a bonsai tree it never stops, because the bonsai never stops growing,” describes Farr. And it’s no secret that indoor plants of any kind quite literally liven up any dull indoor space, suit any décor and add an element of texture. There’s even university research showing that having any plant indoors increases productivity by 12%, reduces stress levels, cleans the air and helps in creating a healthier environment. Like making use of the useless, as Farr puts it, what quietens the mind from all the stress and clutter are the simple things in life. In this case, tending to a bonsai tree. And Farr should know a thing or two about bonsais, being an enthusiast for over sixty years. “I started growing bonsais when I was five years old. I came back to it full-time in 1978 and I guess that was six decades ago now. “What I love about them is their energy, and their capacity to endure hardship. Bonsai trees were inspired by mountain trees, the ones that had been blown about in the wind on mountains and still stood strong. And what’s also different about them is what they bring to the individual.” Farr clarified that the most common misconception about the bonsai tree is that you have to cut the roots all the time and shape it and style it. “The most important thing is to be a diligent waterer. It is more of an artistic pursuit, contrary to popular belief. If you make a mistake you have fairly secure knowledge that nothing’s locked in stone and you can fix it up and try again.” [Photo: Zorana Dodos – Bring a little version of Melbourne’s tree-lined streets into your home] Because these trees are so small and picturesque, another common fallacy about them is that all bonsais are actual indoor plants. When purchasing a bonsai for an indoor space, great care must be taken in choosing a bonsai tree suitable for indoors and for outdoor balconies that are prone to reaching high temperatures. Although indoor bonsais require a bit more attention than outdoor plants, they do brighten and let light into any small, cold apartment. Next time you feel stressed, tired and longing to feel free, the cure might be as easy as investing in a bonsai tree.

Kaffir lime leaves provide citrus kick to the winter blues

Double-lobed kaffir lime leaves add an unmistakable citrus aroma and flavour to soups, stews and curries. The leaf, which is actually two leaves together, is added to simmering foods to provide flavour, but is generally not eaten. There are exceptions – when it is finely shredded, it can be eaten – for example, added to fish cakes or soups. Pat Tanumihardja, author of The Asian Grandmothers Cookbook, said that when she moved to California many years ago, the first thing she did was buy a kaffir lime tree, so all she would have to do was go out on the deck and pluck some leaves. But if you don’t have a tree, you can find them at many Asian grocery stores, or you can ask your favourite Thai restaurant to give or sell you some, and most supermarkets do stock them these days. “I would avoid the dried ones as they lack flavour and scent,” Tanumihardja says. “If you won’t be using them all at once, seal them in a zip-top bag and freeze for up to three months. Rinse under cold running water before using.” Also before using, crumple them in your hands to release the essential oils. This allows the scent and flavour to fully permeate the dish. Use the whole leaf in simmering coconut-based sauces, stock-based soups, stir-fries and noodle dishes. If you want to use them in savoury pancakes, shred them finely before using.
Winter Warmers: Vietnamese Chicken Curry
Makes 6 servings as part of a multicourse family-style meal.
• A 1.3–1.8kg chicken, cut into 8 serving pieces, or 1.3kg bone-in chicken parts of your choice (thighs, drumsticks, wings, breasts, etc.)
• Salt
• 1 tablespoon vegetable oil
• 1 large yellow onion, chopped (1-1/2 cups)
• 6 kaffir lime leaves, crumpled
• 2 tablespoons Vietnamese or Madras curry powder
• 2-1/3 cups unsweetened coconut milk (about 1-1/2 cans)
• 1 cup water, plus more if needed
• 1.1kg sweet potatoes and/or russet potatoes, peeled and cut into 5cm chunks

In a large pot, heat the oil over medium heat until it becomes runny and starts to shimmer. Add the onion and lime leaves, and stir and cook until the onion is slightly softened, about 2 minutes. Add the curry powder and 1/4 teaspoon salt and stir until fragrant, about 15 seconds. Add the chicken and brown for 3 to 4 minutes on each side. Don’t worry about completely cooking the chicken at this point; you just want to sear the meat so that it retains its juices and doesn’t fall apart during cooking. Add the coconut milk and 1 cup water, followed by the potatoes. Make sure the chicken pieces and potatoes are completely submerged in the liquid. If necessary, add more water. Raise the heat to high and bring to a boil. Reduce the heat to medium-low and cover. Simmer for at least 1 hour, preferably 2 hours. When the dish is done, the chicken will be fall-apart tender and the gravy will be thick from the starch of the potatoes. Add 2 teaspoons salt (or to taste). Remove the kaffir lime leaves before serving hot with freshly steamed rice or French bread.

Chasing the American dream
By Anastassia Irina and Muhammad Din Fikri

Imagine that you’re driving down the highway (one that’s probably been used in endless film sets), your favourite song is playing in the background, and a fresh sea breeze is blowing as you drive off into a glorious sunset. Sound a little too good to be true? Maybe, but the reality of the situation today is that a road trip through the USA doesn’t have to be a pipedream. Whether you’re a young professional looking for a break, a fresh graduate, or even a young family getting ready for their first holiday overseas, an American road trip would be a great way to open yourself up to the world. Australians and Americans have a diverse range of cultures and landscapes, but an appreciation for travel and adventure ignites a passion that sees many making the trip overseas. The USA has a multitude of breathtaking natural views and sights. Whether it is taking in the beauty of glaciers or working on your tan in the sunny tropical weather, make sure you experience the range of natural and man-made wonders on offer. Whether you’re interested in hiking through the deserts of Joshua Tree National Park in California, drinking in a local bar in the French Quarter of New Orleans, or soaking up the sunset at Key Largo in Florida, there is definitely something for everyone. And don’t just stick to the traditional tourist hot spots (though Disneyland is a must for the young and young at heart) but step outside your comfort zone and hang out with the locals. Some options could include taking a bike ride across the Golden Gate Bridge, experiencing some true blues and ribs in Memphis, or, if you would like to splurge, taking an aeroplane ride down to the bottom of the Grand Canyon. [Photo: Trav Media – America’s largest national historic landmark, Virginia City in Nevada] In the big cities there are always plenty of tourist options as well, so regardless of what gets you excited, anything is a possibility when you’re in the USA. At the moment it is also a great opportunity to make the most of your Australian dollar. With it being very strong against many currencies at the moment, this ensures that you get your money’s worth if you do end up going overseas. How strong exactly? Well, strong enough to shop to your heart’s content. The interest rate differential (in real terms) drives movements in the exchange rate and, looking at the current state, it is quite likely that the AUD will remain strong until the US, Japanese and European economies begin to grow. Gas (yes, it’s petrol) prices are low as well, so why not get your roadmap out and start planning the route along the 101? An extra bonus is that an Australian driving licence is valid in all the states in America, and as most rental agencies have a plethora of cars available for all budgets you will be spoilt for choice. Quick tip: book online, as it’s always cheaper than over the counter and much more economical than at the airport. Also remember that it’s cheaper to hire several different cars in different states, as there are inter-state taxes. Mix up the journey with some planes, trains and buses and things will never get dull. If you’ve never been to the USA before, don’t let the notion of endless roads connecting forty-eight states overwhelm you. The best option is to remember that, when trying to plan the trip of a lifetime, less is more, and ultimately more enjoyable. [Photo: Trav Media – The natural wonder of Hawaii] Mark Sheehan, Media Chair for Trav Media, commented: “The Masai people have a saying, ‘The best way to eat an elephant is one small bite at a time’. America is an elephant”. In other words, you should savour the experience of the road trip and treasure the memories instead of trying to do everything in whatever time you have. Remember that research is a crucial key to planning your trip. There is nothing worse than going on a holiday and realising that you have to look around for a place to stay for the night. The Internet is your friend, and wherever you go there are countless wireless hotspots but very few internet cafés, so be warned. Ultimately, your trip to the USA is yours, so make it special. Include the places and activities you’ve always dreamed of, but be flexible and spontaneous. The key to enjoying your travel is to have fun, so plan and book, but leave plenty of room for spontaneity. Special thanks: Mark Sheehan, Media Chair of Trav Media; Sally Branson and Beverly E. Mather-Marcus of the U.S. Consulate General Melbourne; and Prof. Olan Henry from the University of Melbourne.

Have you done your homework?
Visa: There are different types of visas available for Australian citizens and there’s even a Visa Waiver Program in place. For more info, check out: canberra.usembassy.gov/
Places to visit in the US: Every state and most big cities have their own tourism board that would be more than happy to assist you with your planning and make suggestions. A good place to start is: [Photo: Trav Media – Las Vegas has everything, even an Eiffel Tower]
The vehicle of choice: As with many things, it is a matter of preference. Recreational vehicles (RVs) are great for those who are on a tight budget and like having the comforts of home, or you could take to the road in style with a Ferrari. Be aware that when taking vehicles across inter-state borders there is often an extra tax.
Accommodation: Be it bed & breakfasts (B&Bs) or fancy hotels, it’s all a matter of budget and personal choice, but don’t forget online options like Craigslist (the equivalent of the Aussie Gumtree).
Travel insurance: Very important and handy to have. DON’T go without it. This is where a travel agent would be helpful, but if you’re savvy you can do this online as well.
When it comes down to it, as long as you plan ahead but still leave some room for spontaneity, you’re bound to have a great escapade.

Storm develops next generation
By Stuart Harrison
Twitter: @sportsjournostu

Melbourne Storm have faced many challenges since they entered the National Rugby League competition in 1998, but none come close to the challenge of building a team of local players.
For Victorian Rugby League chairman and Melbourne Storm Development General Manager Greg Brentnall, it is a massive task that must be done in order to gain respect for the southern team in a decidedly northern game. Brentnall believes rugby league provides an important alternative for Victoria’s athletes who might not be naturally suited to Aussie rules. He said this element of choice was an important part of his own development while growing up in country NSW. “I put it back to how when I grew up in Wagga [Wagga], I was given the same opportunities to play both games and I chose to play league when I finished school for reasons that I liked running with the footy. What you’re trying to do is give kids the opportunity to do that.” Brentnall believes this challenge is a generational one that starts from when people grow up listening to the adults around them and consuming the media. “We have to try and change the thinking of a lot of the young players. The issue that we have had in trying to get more players playing our game is we haven’t had free-to-air TV in Melbourne and we’re still not in a position to do that either,” he said. “At the moment we’ve got a new generation coming through who don’t see rugby league unless their parents have got Fox Sports. That has been a hold back for us”. Over the past four years, the game has been able to modestly grow itself around the state. [Photo: Courtesy of Melbourne Storm – Storm star Gareth Widdop teaches the next generation] This change was helped by the establishment of Melbourne Storm Development, an initiative of the Australian Rugby League with the Melbourne Storm. Additional government grants allowed the body to employ more staff to give their programs, and the game as a whole, a wider audience. This year, they will run programs involving over 40,000 school children.
“Since we’ve been able to get the development staff into the schools over the past three, four years we’ve had a doubling of our numbers, our participation numbers, which is a huge boost for us,” Brentnall said. “We had around 750 playing the game when we instituted Storm Development. We’re now up to – this year we’ll be well over 2,000 playing the game,” he said. “On the scale of things it’s not a huge number, but from where we’ve come from it’s been a great advancement and a really positive step for us.” This growth has had positive repercussions on the quality of the amateur VRL competition, with local players being able to make the leap into the Storm’s under 20s team. Brentnall said Storm Development are committed to not only growing the grassroots but also providing pathways for players through lower-grade national competitions into the National Rugby League. “We won the under 20s competition in 2009 and we’ve now got nine of those players that have come through to the NRL squad which, when you look at the numbers coming through the under 20 level nationally throughout the game, is a huge number to have come through to the NRL level out of our own program,” Brentnall said. “We’d ideally like to have a pathway here in Victoria where we can bring in our local players from the club system, or pick them up in the school system and introduce them into the club system, then ideally have them come through to play at the NRL level,” he said. “We’ve had a great success this year with Gareth Widdop – not a born and bred Victorian, but he came into our system six years ago and this year he’s stepped up as the five-eighth in the NRL squad and one of our key players in that area. So he’s been a success story of what we’re trying to create for our players, the ability to be able to come through to NRL level”.
Melbourne Ice fights to retain title
By Stuart Harrison
Twitter: @sportsjournostu

The Melbourne Ice captain hopes his team’s history of fighting adversity can help the team defend its Australian Ice Hockey League title. “It’s only a matter of time before we’re challenging more and get back where we were last year,” he said. “We’re almost an identical team. The only problem has been our build-up”. In a team fighting for consistency after the late arrival of import players, the Ice have been relying on their old guard to bring the team together. Players like Hughes, Webster and Armstrong have proved indispensable in attempts to gel the mix of older, younger and import players. “Our veterans really are our key. They seem to be able to stand up and bring the team together. They have been the players that have really been able to give us direction”. The need for a solid team has become even more important for the reigning champions with the admission of a second Melbourne team, the Mustangs. He believes Newcastle and West Sydney will be their biggest contenders for the title, especially in the wake of West Sydney’s last-place finish in 2010. As for the Mustangs, he believes the young team is already turning heads despite a rollercoaster start to the season. “The Mustangs have been very unpredictable. They have a young team and nothing to lose,” he said. But despite this early prediction of a possible playoff appearance for the Mustangs in their first season, there has been no love lost in the rivalry that has already proved fiery in their opening two encounters – with one win apiece. [Photo: Creative Commons – Melbourne Ice are hoping for more glory this season] “Melbourne Ice have always been a family organisation and having the Mustangs in the comp, it’s kinda like they’re stepping on our toes. A lot of people are going to see it as a negative thing and that’s going to support our rivalry. The rivalry is definitely there.
We hate each other, that’s for sure. But after the game we’re all great mates,” he said. Jones said part of the ill-will towards the new club will always be based on the fight for recognition that the Melbourne Ice had to endure in the quagmire of under-represented sports playing at Melbourne’s other ice rink at Oakleigh. He believes the club’s home for the last two seasons at the Docklands Icehouse has proved to be a massive success for the sport, and the club has been reinforced by their championship win last year. “We’re in a position where we can market our sport and show everyone why we love it. This is a huge difference from where we were in Oakleigh. We were always proud of what we had. But other teams looked at us like we were worthless, like we had no money. But we pushed through that. We had to push so hard to get where we are today”.

Leveling out the playing field
By Stuart Harrison
Twitter: @sportsjournostu

For thousands of people who identify as gay, lesbian, bisexual or transsexual (GLBT), the sporting field can be an imposing place, a Victorian university study said last year. It is with this in mind that the Victorian Equal Opportunity and Human Rights Commission, the Australian Sports Commission and Hockey Victoria launched the Fair Go, Sport initiative. The Come Out to Play report was released last year and was the first comprehensive research into inequalities faced by GLBT people on Victoria’s sporting fields. It found that a majority of GLBT competitors decided not to disclose their sexuality, while many more were dissuaded from playing sport altogether. Equal Opportunity and Human Rights Commissioner Helen Szoke said the initiative had the potential to promote equality not only in the sport of field hockey but also in the wider community. “Sport has an elevated place in our culture with involvement from all areas of our community.
We look to our sports men and women as role models, and we recognise the sports environment can teach us many lessons in human rights because sport has at its core the values of freedom, respect, equality and dignity," she said. The program also hopes to break down the negative stereotypes and realities of "club culture" and breed a wider acceptance of difference within sport.
Leveling out the playing field
[Photo: Courtesy of VEOHRC - Hockey Victoria contingent at St Kilda's Pride March earlier this year]
Hockey Victoria has four clubs currently acting as pilots for the program: Camberwell, Old Carey, Werribee and Baw Baw. Each of the four clubs has also set about organising activities that can further promote the goal of a more inclusive sport for all, and create a program design that can be implemented in other sports. The body has so far overseen the nomination of key national hockey stars as ambassadors to the program, had a contingent attend the Pride March, held an International Day Against Homophobia event and created a Fair Go, Sport Cup to promote the cause to the widest possible audience. Hockey Victoria CEO Ben Hartung said he worked with the pilot clubs to develop initiatives and resources to engage and educate all hockey stakeholders, from players to administrators, in strategies to fight homophobia. "This project is about real clubs and real people, doing real things to make their particular clubs more inclusive," Mr Hartung said. In October, the Australian Research Centre in Sex, Health and Society at La Trobe University will assess the initiative and look at how it can be expanded more broadly across Australia's sporting landscape.
A new title explores Essendon's past successes
By Stuart Harrison Twitter: @sportsjournostu
Many Essendon fans started the year with great hopes for their club.
The return of two of the club's greats, James Hird and "Bomber" Thompson, to the Windy Hill team has restored hope that the club could once again become one of the league's premier teams. As their history shows, they have never been far from glory, sharing the record for the most Victorian Football League/Australian Football League premierships with Carlton, at 16 each. For fans envisaging how their club can restore itself among the league's elite, a look to the past is instructive, particularly when that past has been filled with success. Essendon were first nicknamed the "same olds", as in the "same old" team that always wins, during their first string of premierships from 1891 to 1894. It was a nickname that stuck with the team until the run-up to the Second World War, when, with the club playing in the shadow of the nearby Essendon Aerodrome, it renamed itself the Bombers. The signing of former Richmond defender Kevin Sheedy as coach in 1981 proved to be one of the club's greatest moves. Four premierships and countless champion players were born of his programs at Windy Hill. Simon Madden, Terry Daniher, Mark Thompson, Paul Salmon, Tim Watson, Mark Harvey, Scott Lucas, Dustin Fletcher, Gavin Wanganeen and James Hird were just some of the best players to play under Sheedy. The club moved from Windy Hill, its home ground since 1922, at the end of 1991 – though few knew it at the time. The club decided over the summer that the ground could no longer hold its growing fan base, and a move was made to the MCG and finally to Etihad Stadium. It was a sad day for suburban footy, with only Carlton and Collingwood continuing to play home games at their suburban Melbourne bases. But it also signalled an explosion in support, shown by their ANZAC Day games against Collingwood from 1995, their premiership win in 2000 and their "blockbuster" club status.
The Sheedy years had brought Essendon a generation of fans and a backlog of coaching possibilities among their retired stars. The club has already shown it will not accept a second-rate team; Matthew Knights' short-lived coaching tenure reflected this. The post-Sheedy era has been a glum one for many Bombers fans, and the newfound bond between the players and their former stars in the coaching box will only be maintained if the club can continue to prove itself worthy of being a top club. Flying High – The Story of Essendon's 16 Premierships, published by Slattery Media Group, is out now.
Celtic stars come to Melbourne
By Stuart Harrison – Twitter: @sportsjournostu
[Photo: FLC/Flickr - Glasgow Celtic will return to Australia next month]
Recently crowned Scottish soccer champions Glasgow Celtic will return to Australia next month to take on the best the A-League has to offer in a series of friendly matches. Celtic returns to Australia after touring the country in 1977 and 2009. This will be their first time playing in Melbourne, with a game against the Victory on July 13 at AAMI Park. Tour promoter Tribal Sports Management has said it aimed to keep ticket prices low in order to build the biggest possible crowds for the games in Melbourne, Sydney and Perth. A "Jungle End" will be set up for travelling Celtic supporters, while home teams will retain a home end for their clubs' passionate supporter groups. Victory assistant coach Kevin Muscat, a former player for Celtic's cross-town rival Rangers, said he looked forward to his side taking on the 42-time Scottish champions. "The rivalry with Celtic that I experienced during my time in Scotland was phenomenal," he said. "They are a massive club, with some of the best supporters in football. To get them out here to Australia again is not only great for the game, but more importantly, great for the fans. Their record speaks for itself, so I've got no doubt there'll be a big crowd on hand at AAMI Park to watch us take on one of Europe's biggest clubs," Muscat added.
Hi, Today I received several questions about debugging CQWP XSLT. Although I love writing them in Notepad and doing the final transformation in IE ;), it would be great to use the features of Visual Studio 2005 (I will do another post for Visual Studio 2008) in order to debug our files for MOSS, without MOSS. As you can read in the referenced links, there are, OOB, 3 XSL files for the CQWP in MOSS. So in order to start, you should take a copy of those files, and take a sample XML for your query. The sample XML can be taken from the CQWP itself, by configuring the query in a test page and adding the following element in the first template of ContentQueryMain (lines 2-4):
<xmp>
  <xsl:copy-of select="." />
</xmp>
That will allow you to copy&paste the sample XML for your query. You should add the correct encoding for your data. Mine is: <?xml version="1.0" encoding="iso-8859-1"?> as the first line in the sample XML. Then you can just open Visual Studio (you will have IntelliSense for the XSL schema, validation, etc.), open the XSL files, and in the Input property of the XSL file select the sample XML. Then in the XML toolbar, you will need to click Debug XSLT (5th button), and you are on your way! In order to debug the XSLTs we will use, we need to include the import element at the top of the Main XSL file, linking the item XSLT (line 8):
<xsl:stylesheet version="1.0" exclude-result-prefixes="x xsl cmswrt cbq" xmlns:x="" xmlns:xsl="" xmlns:cmswrt="" xmlns:cbq="">
But I couldn't get rid of the custom functions in the namespace cmswrt, so you will need to change those (such as cmswrt:EnsureIsAllowedProtocol); and you will get the result in the result window. Cheers!
During a project developing a light JPEG library, one small enough to run on a mobile device without compromising graphics quality, I worked out a number of ways in which a given computer program can be made to run faster. In this article, I have gathered all the experience and information that can be applied to make C code optimized for speed as well as memory. Often, speed can be bought only at the cost of a program's complexity and readability, or of extra memory. That trade-off is not acceptable if you are programming for a small device like a mobile phone or PDA, which has strict memory restrictions. So, during optimization, our goal should be to write the code in such a way that both memory use and speed are optimized. During my project I used the tips from an ARM optimization guide, because my project was on the ARM platform, but I have also used many other articles from the Internet. Not all tips in every article work well, so I have collected together only those that are very useful and efficient. I have also modified some of them so that they are applicable to almost all environments, not just ARM. What I did is make a collection of information from various sites, but mostly from the PDF file mentioned above. I never claimed that these are my own discoveries. I have mentioned all information sources in the References section at the end of this article. Without this point, no discussion can be started. The first and most important part of optimizing a computer program is to find out where to optimize: which portion or module of the program is running slowly or using huge amounts of memory. If each part is optimized separately, the whole program automatically becomes faster. The optimizations should be done on those parts of the program that run the most, especially the methods called repeatedly by the program's inner loops.
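Before reaching for a full profiler, a crude first pass at confirming a suspected hot spot can be done with the standard clock() function. In this sketch the routine being timed, its name, and its workload are purely illustrative:

```c
#include <time.h>

/* Stand-in for a suspected hot spot; the name and workload are
 * illustrative, not taken from any real project. */
static long suspected_hot_spot(int n)
{
    long acc = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < 1000; j++)
            acc += (long)i * j % 7;
    return acc;
}

/* Returns the CPU seconds spent in one call of the routine; good
 * enough to compare candidate hot spots against each other. */
static double time_hot_spot(int n)
{
    clock_t start = clock();
    long sink = suspected_hot_spot(n);
    clock_t end = clock();

    (void)sink;   /* keep the result from being optimized away */
    return (double)(end - start) / CLOCKS_PER_SEC;
}
```

Timing sections this way only narrows the search; a real profiler still gives per-function call counts and a proper breakdown.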
For an experienced programmer, it is usually quite easy to find the portions of a program that require the most optimization attention. But there are also plenty of tools for detecting those parts. I have used the Visual C++ IDE's built-in profiler to find out where the program spends most of its clock ticks. Another tool I have used is Intel VTune, which is a very good profiler for detecting the slowest parts of a program. In my experience, the main culprit for slowing a program down is usually a particular inner or nested loop, or a call to some third-party library method. We should use unsigned int instead of int if we know the value will never be negative. Some processors can handle unsigned integer arithmetic considerably faster than signed (this is also good practice, and helps make for self-documenting code). So, the best declaration for an int variable in a tight loop would be: register unsigned int variable_name; although it is not guaranteed that the compiler will take any notice of register, and unsigned may make no difference to the processor. This may not apply to all compilers. Remember, integer arithmetic is much faster than floating-point arithmetic, as it can usually be done directly by the processor rather than relying on external FPUs or floating-point math libraries. If we only need to be accurate to two decimal places (e.g. in a simple accounting package), scale everything up by 100, work in integers, and convert back to floating point as late as possible. In standard processors, depending on the numerator and denominator, a 32-bit division takes 20-140 cycles to execute. The division function takes a constant time plus a time for each bit to divide: Time (numerator / denominator) = C0 + C1 * log2 (numerator / denominator) = C0 + C1 * (log2 (numerator) - log2 (denominator)). The current version takes about 20 + 4.3N cycles on an ARM processor. As an expensive operation, division is desirable to avoid where possible.
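The scale-by-100 idea above can be sketched as follows; the price values and function names are illustrative:

```c
/* Hold money as integer cents so that all the arithmetic in the
 * hot path is integer arithmetic. */
static long add_prices_cents(const long *cents, int count)
{
    long total = 0;
    for (int i = 0; i < count; i++)
        total += cents[i];
    return total;
}

/* Convert back to a floating-point dollar value as late as
 * possible, e.g. only when printing. */
static double cents_to_dollars(long cents)
{
    return (double)cents / 100.0;
}
```

The summation loop never touches floating point, so it compiles to plain integer adds even on a target with no FPU.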
Sometimes such expressions can be rewritten by replacing the division with a multiplication. For example, (a / b) > c can be rewritten as a > (c * b) if it is known that b is positive and b * c fits in an integer. It is better to use unsigned division, by ensuring that one of the operands is unsigned, as this is faster than signed division. Sometimes both the quotient ( x / y) and the remainder ( x % y) are needed. In such cases, the compiler can combine both into a single call to the division function, because it always returns both quotient and remainder. If both are needed, we can write them together like this:
int func_div_and_mod (int a, int b) { return (a / b) + (a % b); }
Division is much cheaper if the divisor is a power of two, because the compiler can use a shift to perform it. Therefore, we should always arrange, where possible, for scaling factors to be powers of two (for example, 64 rather than 66). And if the operand is unsigned, it will be faster still than the signed division.
typedef unsigned int uint; uint div32u (uint a) { return a / 32; } int div32s (int a){ return a / 32; }
Both divisions avoid calling the division function, and the unsigned division takes fewer instructions than the signed one. The signed division takes longer to execute because it rounds towards zero, while a shift rounds towards minus infinity. We use the remainder operator to provide modulo arithmetic, but it is sometimes possible to rewrite the code using an if statement check. Consider the following two examples:
uint modulo_func1 (uint count) { return (++count % 60); }
uint modulo_func2 (uint count) { if (++count >= 60) count = 0; return (count); }
The use of the if statement, rather than the remainder operator, is preferable, as it produces much faster code. Note that the new version only works if it is known that the range of count on input is 0-59.
If you wished to set a variable to a particular character depending upon the value of something, you might do this:
if (queue == 0) letter = 'W'; else if (queue == 1) letter = 'S'; else letter = 'U';
A neater (and quicker) way is simply to use the value as an index into a character array, e.g.:
static char *classes="WSU"; letter = classes[queue];
Global variables are never allocated to registers. Global variables can be changed by assigning them indirectly using a pointer, or by a function call. Hence, the compiler cannot cache the value of a global variable in a register, resulting in extra (often unnecessary) loads and stores when globals are used. We should therefore not use global variables inside critical loops. If a function uses global variables heavily, it is beneficial to copy those global variables into local variables so that they can be assigned to registers. This is possible only if those global variables are not used by any of the functions which are called. For example:
int f(void); int g(void); int errs; void test1(void) { errs += f(); errs += g(); } void test2(void) { int localerrs = errs; localerrs += f(); localerrs += g(); errs = localerrs; }
Note that test1 must load and store the global errs value each time it is incremented, whereas test2 stores localerrs in a register, so each increment needs only a single instruction. Consider the following example:
void func1( int *data ) { int i; for(i=0; i<10; i++) { anyfunc( *data, i); } }
Even though *data may never change, the compiler does not know that anyfunc() did not alter it, so the program must read it from memory each time it is used - it may be an alias for some other variable that is altered elsewhere. If we know it won't be altered, we could code it like this instead:
void func1( int *data ) { int i; int localdata; localdata = *data; for(i=0; i<10; i++) { anyfunc( localdata, i); } }
This gives the compiler a better opportunity for optimization. As any processor has a fixed set of registers, there is a limit to the number of variables that can be kept in registers at any one point in the program.
Some compilers support live-range splitting, where a variable can be allocated to different registers as well as to memory in different parts of the function. The live-range of a variable is defined as all statements between the last assignment to the variable and the last use of the variable before the next assignment. In this range the value of the variable is valid: it is alive. Between live ranges the value of a variable is not needed: it is dead, so its register can be used for other variables, allowing the compiler to allocate more variables to registers. The number of registers needed for register-allocatable variables is at least the number of overlapping live-ranges at each point in a function. If this exceeds the number of registers available, some variables must be stored to memory temporarily. This process is called spilling. The compiler spills the least frequently used variables first, so as to minimize the cost of spilling. Spilling can be reduced by keeping the number of simultaneously live variables small, for example by declaring variables close to where they are used and keeping expressions simple. C compilers support the basic types char, short, int and long (signed and unsigned), float and double. Using the most appropriate type for variables is very important, as it can reduce code and data size and increase performance considerably. For local variables, int is usually the best choice: with char or short, the compiler must often insert extra instructions to truncate or sign-extend the result to the narrower width after each operation. Consider the following three example functions:
int wordinc (int a) { return a + 1; }
short shortinc (short a) { return a + 1; }
char charinc (char a) { return a + 1; }
The results will be identical, but the first code segment will run faster than the others. If possible, we should pass structures by reference, that is, pass a pointer to the structure; otherwise the whole thing will be copied onto the stack and passed, which slows things down. I've seen programs that pass structures several kilobytes in size by value, when a simple pointer will do the same thing. Functions receiving pointers to structures as arguments should declare them as pointers to constant if the function is not going to alter the contents of the structure.
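The pass-by-reference advice can be sketched like this; the Buffer structure and its size are illustrative:

```c
typedef struct {
    int samples[256];   /* over 1 KB of data on most targets */
    int count;
} Buffer;

/* Pass by value: the entire structure is copied onto the stack
 * at every call. */
static int sum_by_value(Buffer b)
{
    int total = 0;
    for (int i = 0; i < b.count; i++)
        total += b.samples[i];
    return total;
}

/* Pass by const pointer: only one pointer is copied, and the
 * callee cannot accidentally modify the caller's data. */
static int sum_by_pointer(const Buffer *b)
{
    int total = 0;
    for (int i = 0; i < b->count; i++)
        total += b->samples[i];
    return total;
}
```

Both versions compute the same result, but the pointer version avoids copying the whole array on every call.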
As an example:
void print_data_of_a_structure ( const Thestruct *data_pointer) { ...printf contents of the structure... }
This informs the compiler that the function does not alter the contents of the external structure (as it is using a pointer to a constant structure), so the compiler does not need to keep re-reading the contents each time they are accessed. It also ensures that the compiler will trap any accidental attempt by your code to write to the read-only structure, giving additional protection to its contents. Pointer chains are frequently used to access information in structures. For example, a common code sequence is:
typedef struct { int x, y, z; } Point3; typedef struct { Point3 *pos, *direction; } Object; void InitPos1(Object *p) { p->pos->x = 0; p->pos->y = 0; p->pos->z = 0; }
However, this code must reload p->pos for each assignment, because the compiler does not know that p->pos->x is not an alias for p->pos. A better version caches p->pos in a local variable:
void InitPos2(Object *p) { Point3 *pos = p->pos; pos->x = 0; pos->y = 0; pos->z = 0; }
Another possibility is to include the Point3 structure in the Object structure, thereby avoiding pointers completely. Conditional execution is applied mostly in the body of if statements, but it is also used while evaluating complex expressions with relational ( <, ==, > and so on) or boolean operators ( &&, !, and so on). Conditional execution is disabled for code sequences which contain function calls, as the flags are destroyed on function return. It is therefore beneficial to keep the bodies of if and else statements as simple as possible, so that they can be conditionalized. Relational expressions should be grouped into blocks of similar conditions.
The following example shows how the compiler uses conditional execution:
int g(int a, int b, int c, int d) { if (a > 0 && b > 0 && c < 0 && d < 0) /* grouped conditions tied up together */ return a + b + c + d; return -1; }
Because the conditions were grouped, the compiler was able to conditionalize them. A common boolean expression checks whether a variable lies within a certain range, for example, whether a graphics co-ordinate lies within a window:
bool PointInRectangleArea (Point p, Rectangle *r) { return (p.x >= r->xmin && p.x < r->xmax && p.y >= r->ymin && p.y < r->ymax); }
There is a faster way to implement this: (x >= min && x < max) can be transformed into (unsigned)(x-min) < (max-min). This is especially beneficial if min is zero. The same example after this optimization:
bool PointInRectangleArea (Point p, Rectangle *r) { return ((unsigned) (p.x - r->xmin) < (unsigned) (r->xmax - r->xmin) && (unsigned) (p.y - r->ymin) < (unsigned) (r->ymax - r->ymin)); }
The processor flags are set after a compare (CMP) instruction. The flags can also be set by other operations, such as MOV, ADD, AND, MUL, which are the basic arithmetic and logical instructions (the data processing instructions). If a data processing instruction sets the flags, the N and Z flags are set the same way as if the result were compared with zero. The N flag indicates whether the result is negative; the Z flag indicates that the result is zero. In C, the N and Z flags on the processor correspond to the signed relational operators x < 0, x >= 0, x == 0, x != 0, and the unsigned operators x == 0, x != 0 (or x > 0). Each time a relational operator is used in C, the compiler emits a compare instruction. If the operator is one of the above, the compiler can remove the compare when a data processing operation precedes it. For example:
int aFunction(int x, int y) { if (x + y < 0) return 1; else return 0; }
If possible, arrange for critical routines to test the above conditions.
This often allows you to save compares in critical loops, leading to reduced code size and increased performance. The C language has no concept of a carry flag or overflow flag, so it is not possible to test the C or V flag bits directly without using inline assembler. However, the compiler does support the carry flag (unsigned overflow). For example:
int sum(int x, int y) { int res; res = x + y; if ((unsigned) res < (unsigned) x) /* carry set? */ res++; return res; }
In a test like if (a > 10 && b == 4), make sure that the first part of the AND expression is the most likely to give a false answer (or the easiest/quickest to calculate); the second part will then be less likely to be executed. For large decision chains of if... else if... else if..., break things down in a binary fashion. E.g. do not have a list of:
if(a==1) { } else if(a==2) { } else if(a==3) { } else if(a==4) { } else if(a==5) { } else if(a==6) { } else if(a==7) { } else if(a==8) { }
Have instead:
if(a<=4) { if(a==1) { } else if(a==2) { } else if(a==3) { } else if(a==4) { } } else { if(a==5) { } else if(a==6) { } else if(a==7) { } else if(a==8) { } }
Or even:
if(a<=4) { if(a<=2) { if(a==1) { /* a is 1 */ } else { /* a must be 2 */ } } else { if(a==3) { /* a is 3 */ } else { /* a must be 4 */ } } } else { if(a<=6) { if(a==5) { /* a is 5 */ } else { /* a must be 6 */ } } else { if(a==7) { /* a is 7 */ } else { /* a must be 8 */ } } }
The switch statement is typically used for one of the following reasons: to call one of several functions, to set a variable or return a value, or to execute one of several fragments of code. If the case labels are dense, in the first two uses, switch statements could be implemented more efficiently using a lookup table.
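The && ordering advice above can be illustrated with a small instrumented sketch; the "expensive" check here is just a stand-in for a costly computation:

```c
static int calls_to_expensive;   /* instrumentation for the sketch */

/* Stands in for a costly computation. */
static int expensive_check(int b)
{
    calls_to_expensive++;
    return b == 4;
}

/* The cheap, likely-false test comes first, so && short-circuits
 * and the expensive check is usually skipped entirely. */
static int combined_test(int a, int b)
{
    return a > 10 && expensive_check(b);
}
```

When a > 10 fails, expensive_check is never called, which is exactly the saving the ordering rule is after.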
For example, two implementations of a routine that disassembles condition codes to strings:
char * Condition_String1(int condition) { switch(condition) { case 0: return "EQ"; case 1: return "NE"; case 2: return "CS"; case 3: return "CC"; case 4: return "MI"; case 5: return "PL"; case 6: return "VS"; case 7: return "VC"; case 8: return "HI"; case 9: return "LS"; case 10: return "GE"; case 11: return "LT"; case 12: return "GT"; case 13: return "LE"; case 14: return ""; default: return 0; } }
char * Condition_String2(int condition) { if ((unsigned) condition >= 15) return 0; return "EQ\0NE\0CS\0CC\0MI\0PL\0VS\0VC\0HI\0LS\0GE\0LT\0GT\0LE\0\0" + 3 * condition; }
The first routine needs a total of 240 bytes, the second only 72. Loops are a common construct in most programs, and a significant amount of the execution time is often spent in them. It is therefore worthwhile to pay attention to time-critical loops. The loop termination condition can cause significant overhead if written without caution. We should always write count-down-to-zero loops and use simple termination conditions; execution takes less time when the termination condition is simple. Take the following two sample routines, which calculate n!. The first implementation uses an incrementing loop, the second a decrementing loop.
int fact1_func (int n) { int i, fact = 1; for (i = 1; i <= n; i++) fact *= i; return (fact); }
int fact2_func(int n) { int i, fact = 1; for (i = n; i != 0; i--) fact *= i; return (fact); }
As a result, the second routine, fact2_func, will be faster than the first. It is a simple concept but effective. Ordinarily, we would code a simple for() loop like this:
for( i=0; i<10; i++){ ... }
[ i loops through the values 0,1,2,3,4,5,6,7,8,9 ]
If we don't care about the order of the loop counter, we can do this instead:
for( i=10; i--; ) { ... }
Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the loop should be faster.
This works because it is quicker to process i-- as the test condition, which says "Is i non-zero? If so, decrement it and continue". For the original code, the processor has to calculate "Subtract i from 10. Is the result non-zero? If so, increment i and continue." In tight loops, this makes a considerable difference. What we have to be careful of is that the loop stops at 0 (so if we needed to loop from 50 to 80, this wouldn't work), and that the loop counter runs backwards. It's easy to get caught out if your code relies on an ascending loop counter. Counting down to zero also lets the compiler allocate registers more efficiently elsewhere in the function. This technique of initializing the loop counter to the number of iterations required and then decrementing down to zero also applies to while and do statements. Never use two loops where one will suffice. But if you do a lot of work in the loop, it might not fit into your processor's instruction cache. In this case, two separate loops may actually be faster, as each one can run completely in the cache. Functions always have a certain performance overhead when they are called. Not only does the program pointer have to change, but in-use variables have to be pushed onto a stack and new variables allocated. Much can therefore be done to the structure of a program's functions in order to improve performance. Care must be taken, though, to maintain the readability of the program while keeping its size manageable. If a function is often called from within a loop, it may be possible to put that loop inside the function to cut down the overhead of calling the function repeatedly, e.g.:
for(i=0 ; i<100 ; i++) { func(t,i); } ... void func(int w, int d) { /* lots of stuff */ }
Could become:
func(t); ... void func(int w) { for(i=0 ; i<100 ; i++) { /* lots of stuff */ } }
This can make a big difference. It is also well known that unrolling loops can produce considerable savings.
The following code (Example 1) is obviously much larger than a simple loop, but is much more efficient. The block-size of 8 was chosen just for demo purposes, as any suitable size will do - we just have to repeat the "loop-contents" the same number of times. In this example, the loop condition is tested once every 8 iterations, instead of on each one. If we know that we will be working with arrays of a certain size, we could make the block size the same size as (or divisible into the size of) the array. But this block size depends on the size of the machine's cache.
//Example 1
#include <stdio.h>
#define BLOCKSIZE (8)
int main(void)
{
    int i = 0;
    int limit = 33; /* could be anything */
    int blocklimit;
    /* The limit may not be divisible by BLOCKSIZE,
     * go as near as we can first, then tidy up. */
    blocklimit = (limit / BLOCKSIZE) * BLOCKSIZE;
    /* unroll the loop in blocks of 8 */
    while( i < blocklimit )
    {
        printf("process(%d)\n", i);
        printf("process(%d)\n", i+1);
        printf("process(%d)\n", i+2);
        printf("process(%d)\n", i+3);
        printf("process(%d)\n", i+4);
        printf("process(%d)\n", i+5);
        printf("process(%d)\n", i+6);
        printf("process(%d)\n", i+7);
        /* update the counter */
        i += 8;
    }
    /*
     * There may be some left to do.
     * This could be done as a simple for() loop,
     * but a switch is faster (and more interesting)
     */
    if( i < limit )
    {
        /* Jump into the case at the place that will allow
         * us to finish off the appropriate number of items. */
        switch( limit - i )
        {
            case 7 : printf("process(%d)\n", i); i++;
            case 6 : printf("process(%d)\n", i); i++;
            case 5 : printf("process(%d)\n", i); i++;
            case 4 : printf("process(%d)\n", i); i++;
            case 3 : printf("process(%d)\n", i); i++;
            case 2 : printf("process(%d)\n", i); i++;
            case 1 : printf("process(%d)\n", i); i++;
        }
    }
    return 0;
}
The next pair of routines counts the set bits in a word. Example 1 tests and counts the lowest bit, then shifts it out, one bit per iteration. Example 2 was unrolled four times, after which an optimization could be applied by combining the four shifts of n into one. Unrolling frequently provides new opportunities for optimization.
//Example - 1
int countbit1(uint n) { int bits = 0; while (n != 0) { if (n & 1) bits++; n >>= 1; } return bits; }
//Example - 2
int countbit2(uint n) { int bits = 0; while (n != 0) { if (n & 1) bits++; if (n & 2) bits++; if (n & 4) bits++; if (n & 8) bits++; n >>= 4; } return bits; }
It is often not necessary to process the entirety of a loop. For example, if we are searching an array for a particular item, break out of the loop as soon as we have got what we need: a loop searching a list of 10000 numbers for a -99 should stop at the first match.
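The early-exit search described above can be sketched as follows (the list contents are illustrative):

```c
/* Returns the index of the first occurrence of target in list,
 * or -1 if it is absent. The break leaves the loop at the first
 * match instead of scanning all n entries every time. */
static int find_first(const int *list, int n, int target)
{
    int found = -1;
    for (int i = 0; i < n; i++) {
        if (list[i] == target) {
            found = i;
            break;    /* no need to look at the rest */
        }
    }
    return found;
}
```

On average this halves the work of a successful search, and the saving grows the earlier the item tends to appear.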
It is a good idea to keep functions small and simple. This enables the compiler to perform other optimizations, such as register allocation, more efficiently. Function call overhead on the processor is small, and is often small in proportion to the work performed by the called function. There is a limit to the number of argument words that can be passed to a function in registers. These arguments can be integer-compatible ( char, short, int and float all take one word), or structures of up to four words (including the 2-word doubles and long longs). If the argument limit is 4, then the fifth and subsequent words are passed on the stack. This increases the cost of storing these words in the calling function and reloading them in the called function. In the following sample code:
int f1(int a, int b, int c, int d) { return a + b + c + d; } int g1(void) { return f1(1, 2, 3, 4); }
int f2(int a, int b, int c, int d, int e, int f) { return a + b + c + d + e + f; }
int g2(void) { return f2(1, 2, 3, 4, 5, 6); }
the fifth and sixth parameters are stored on the stack in g2, and reloaded in f2, costing two memory accesses per parameter. To minimize the overhead of passing parameters to functions, try to keep functions down to four parameters or fewer, and minimize the use of long parameters, as these take two argument words. This also applies to doubles if software floating-point is enabled. A function which does not call any other functions is known as a leaf function. In many applications, about half of all function calls made are to leaf functions. Leaf functions are compiled very efficiently on every platform, as they often do not need to perform the usual saving and restoring of registers. The cost of pushing some registers on entry and popping them on exit is very small compared to the cost of the useful work done by a leaf function that is complicated enough to need more than four or five registers. If possible, we should try to arrange for frequently-called functions to be leaf functions.
The number of times a function is called can be determined by using the profiling facility. There are several ways to ensure that a function is compiled as a leaf function: avoid calling other functions from it, and use __inline for the small functions it does call (inline functions are discussed next). Note that function inlining is disabled for all debugging options. Declaring a function with the keyword __inline results in each call to it being substituted by its body, instead of a normal call. This results in faster code, but it adversely affects code size, particularly if the inline function is large and used often.
__inline int square(int x) { return x * x; }
#include <math.h>
double length(int x, int y){ return sqrt(square(x) + square(y)); }
There are several advantages to using inline functions. As the code is substituted directly, there is no call overhead such as saving and restoring registers. The overhead of parameter passing is generally lower, since it is not necessary to copy variables. And if some of the parameters are constants, the compiler can optimize the resulting code even further. The big disadvantage of inline functions is that the code size increases if the function is used in many places. This can vary significantly depending on the size of the function and the number of places where it is used. It is wise to inline only a few critical functions. Note that, when done wisely, inlining may even decrease the size of the code: a call usually takes a few instructions, but the optimized version of the inlined code might translate to even fewer instructions. A function can often be approximated using a lookup table, which increases performance significantly. A table lookup is usually less accurate than calculating the value properly, but for many applications this does not matter. Many signal processing applications (for example, modem demodulator software) make heavy use of sin and cos functions, which are computationally expensive to calculate.
For real-time systems where accuracy is not very important, sin/cos lookup tables might be essential. When using lookup tables, try to combine as many adjacent operations as possible into a single lookup table. This is faster and uses less space than multiple lookup tables. Although floating-point operations are time-consuming on any kind of processor, sometimes we need to use them, for example when implementing signal processing applications. When writing floating-point code, keep the following things in mind:

Division is typically twice as slow as addition or multiplication. Rewrite divisions by a constant as a multiplication by the inverse (for example, x = x / 3.0 becomes x = x * (1.0/3.0); the constant is calculated during compilation).

Use floats instead of doubles. Float variables consume less memory and fewer registers, and are more efficient because of their lower precision. Use floats whenever their precision is good enough.

Transcendental functions, like sin, exp and log, are implemented using series of multiplications and additions (using extended precision). As a result, these operations are at least ten times slower than a normal multiply.

The compiler cannot apply many of the optimizations which are performed on integers to floating-point values. For example, 3 * (x / 3) cannot be optimized to x, since floating-point operations generally lead to loss of precision. Even the order of evaluation is important: (a + b) + c is not the same as a + (b + c). Therefore, it is beneficial to perform floating-point optimizations manually if it is known they are correct.

However, it is still possible that the floating-point performance will not reach the required level for a particular application. In such a case, the best approach may be to change from floating-point to fixed-point arithmetic. When the range of values needed is sufficiently small, fixed-point arithmetic is more accurate and much faster than floating-point arithmetic.
In general, savings can be made by trading off memory for speed. If you can cache any often-used data rather than recalculating or reloading it, it will help. Examples of this would be sine/cosine tables, or tables of pseudo-random numbers (calculate 1000 once at the start, and just reuse them if you don't need truly random numbers).

Avoid ++ and -- etc. within loop expressions, e.g. while(n--){}, as this can sometimes be harder to optimize.

Minimize the use of non-int types (char, short, double, bit fields etc.) where a plain int would do, as the compiler may need extra instructions to convert them.

Avoid the sqrt() square root function in loops - calculating square roots is very CPU intensive.

Use val * 0.5 instead of val / 2.0 - multiplication is usually cheaper than division.

Use val + val + val instead of val * 3 on processors where addition is cheaper than multiplication.

puts() is quicker than printf(), although less flexible.

Use #defined macros instead of commonly used tiny functions - sometimes the bulk of CPU usage can be tracked down to a small external function being called thousands of times in a tight loop. Replacing it with a macro to perform the same job will remove the overhead of all those function calls, and allow the compiler to be more aggressive in its optimization.

If your platform provides the mallopt() function (for controlling malloc), use it. The MAXFAST setting can make significant improvements to code that does a lot of malloc work. If a particular structure is created/destroyed many times a second, try setting the mallopt options to work best with that size.

Last, but definitely not least - turn compiler optimization on! Seems obvious, but is often forgotten in that last-minute rush to get the product out on time. The compiler will be able to optimize at a much lower level than can be done in the source code, and perform optimizations specific to the target processor.

[Thanks to Craig Burley for the excellent comments. Thanks to Timothy Prince for the note on architectures with Instruction Level Parallelism].
http://www.codeproject.com/KB/cpp/C___Code_Optimization.aspx
Chronograf release notes

v1.10 [2022-08-19]

Features
- Update the admin UI to provide database-level role-based access control (RBAC). Also, made the following improvements to the UI:
  - Show permissions mappings more intuitively
  - Improve the process of creating users and roles
  - Improve visualization of effective permissions on Users and Roles pages
- Add reader role for users that require read-only access to dashboards.
- Add convenient redirection back to an original URL after OAuth authentication.
- Allow InfluxDB connection with context path.
- Ability to customize annotation color.

Bug fixes
- Repair table visualization of string values.
- Improve InfluxDB Enterprise server type detection.
- Avoid stale reads in communication with InfluxDB Enterprise meta nodes.
- Properly detect unsupported values in Alert Rule builder.
- Markdown cell content can be selected and copied.

Error messaging
- Improve InfluxDB Enterprise user creation process so “user not found” error no longer occurs, even when user is successfully created.
- Enhance error message notifying user to use InfluxDB v2 administration.

Maintenance updates

v1.9.4 [2022-03-28]

Features

This release renames the Flux Query Builder to the Flux Script Builder (and adds improvements), and improves on Kapacitor integration.

Flux Builder improvements
- Rename the Flux Query Builder to the Flux Script Builder, and add new functionality including:
  - Ability to load truncated tags and keys into the Flux Script Builder when connected to InfluxDB Cloud.
  - Script Builder tag keys and tag values depend on a selected time range.
  - Make aggregation function selection optional.
- Autocomplete builtin v object in Flux editor.
- Add a warning before overriding the existing Flux Editor script.

Kapacitor integration improvements

Improved pagination and performance of the UI when you have large numbers of TICKscripts and Flux tasks.
- Move Flux Tasks to a separate page under Alerting menu.
- Add TICKscripts page under Alerting menu.
- Optimize Alert Rules API.
- Open Alert Rule Builder from the TICKscripts page.
- Remove Manage Tasks page, add Alert Rules page.
- Add alert rule options to not send alert on state recovery and send regardless of state change.

Bug fixes
- Respect BASE_PATH when serving API docs.
- Propagate InfluxQL errors to UI.
- Rename Flux Query to Flux Script.
- Repair time zone selector on Host page.
- Report correct Chronograf version.
- Show failure reason on Queries page.
- Reorder Alerting side menu.

v1.9.3 [2022-02-02]

NOTE: We did not release version 1.9.2 due to a bug that impacted communication between the browser’s main thread and background workers. This bug has been fixed in the 1.9.3 release.

Features
- Add ability to rename TICKscripts.
- Add the following enhancements to the InfluxDB Admin Queries tab:
  - CSV download button.
  - Rename Running column to Duration.
  - Add Status column. When hovering over the Duration column, status shows Kill confirmation button.
  - Modify the CSV export to include the Status column.
- Upgrade to use new google.golang protobuf library.

Bug Fixes
- Ability to log the InfluxDB instance URL when a ping fails, making connection issues easier to identify.
- Repair enforcement of one organization between multiple tabs.
- Configure HTTP proxy from environment variables in HTTP clients. Improvements were made to:
  - Token command within chronoctl
  - OAuth client
  - Kapacitor client
  - Flux client

Security
- Upgrade github.com/microcosm-cc/bluemonday to resolve CVE-2021-42576.

v1.9.1 [2021-10-08]

Features
- Distinguish tasks created from templates by appending “created from template” on the Manage Tasks page.
- Upgrade Golang to 1.17.1.

Bug Fixes

Flux fixes
- Update time range of Flux queries when zooming in on dashboard.
- Repair calculation of Flux query range duration.
Kapacitor integration
- When using a name task variable, the TICKscript name that appears in the Alert portion of Chronograf now reflects that variable. Previously, name variables were ignored and this led to confusion.
- TICKscripts created from templates are now visible in a read-only mode from within Chronograf. In addition, TICKscripts created from templates will not appear in the Alert Rule section of the UI. This requires Kapacitor 1.6.2, which now provides information about the template used to create the underlying TICKscript.
- Pagination of more than 500 Flux tasks was broken. This has now been addressed.

Browser support
- Safari only: Fix issue displaying Single Stat cells in dashboard.
- Avoid extraneous browser history change.
- Resolve issues that occurred attempting to open multiple organizations across tabs in one browser. Now, the session enforces one organization across browser tabs.
- Support Firefox private mode.

Visualization fixes
- Repair time rendering in horizontal table.
- Skip missing values in line chart instead of returning zeros.
- Fix calculations to use the appropriate v.windowPeriod value. Previously, v.windowPeriod was stuck at 3ms.

Package fixes
- Rename ARM RPMs with yum-compatible names.

Security
- Upgrade github.com/microcosm-cc/bluemonday to resolve CVE-2021-29272.
- Upgrade github.com/golang-jwt/jwt to resolve CVE-2020-26160.

v1.9.0 [2021-06-25]

Breaking Changes

OAuth PKCE

Add OAuth PKCE (RFC 7636) to OAuth integrations in Chronograf. PKCE mitigates the threat of the authorization code being intercepted during the OAuth token exchange. Google, Azure, Okta, Auth0, GitLab, and others already support OAuth PKCE. Enabling PKCE should have no effect on integrations with services that don’t support it yet (such as GitHub or Bitbucket). To disable PKCE, set the OAUTH_NO_PKCE environment variable to true, or include the --oauth-no-pkce flag when starting chronograf.

Features
- Support data migrations to ETCD over HTTPS.
- Set trusted CA certificates for ETCD connections.
- Configure new Kapacitor alert endpoints in the UI.
- Remove HipChat alert endpoints.
- Show or hide the log status histogram in the Log Viewer.
- Add the following meta query templates in the Data Explorer:
  - SHOW FIELD KEYS
  - SHOW SUBSCRIPTIONS
  - SHOW QUERIES
  - SHOW GRANTS
  - SHOW SHARDS
  - SHOW SHARD GROUPS
  - EXPLAIN
  - EXPLAIN ANALYZE
- Flux improvements and additional functionality:
  - Add predefined and custom dashboard template variables to Flux query execution. Flux queries include a v record with a key-value pair for each variable.
  - Define template variables with Flux.
  - Add Kapacitor Flux tasks on the Manage Tasks page (read only).
  - Provide documentation link when Flux is not enabled in InfluxDB 1.8+.
  - Write to buckets when Flux mode is selected.
  - Filter fields in the Query Builder.
- Select write precision when writing data through the Chronograf UI.
- Support GitHub Enterprise in the existing GitHub OAuth integration.
- Update the Queries page in the InfluxDB Admin section of the UI to include the following:
  - By default, sort queries by execution time in descending order.
  - Sort queries by execution time or database.
  - Include database count in the Queries page title.
  - Select the refresh interval of the Queries page.
- Set up InfluxDB Cloud and InfluxDB OSS 2.x connections with the chronograf CLI.
- Add custom auto-refresh intervals.
- Send multiple queries to a dashboard.
- Add macOS arm64 builds.

Bug Fixes
- Open alert handler configuration pages with URL hash.
- Omit errors during line visualizations of meta query results.
- Delete log stream when TICKscript editor page is closed.
- Generate correct Flux property expressions.
- Repair stale database list in Log Viewer.
- Exclude _start and _stop columns in Flux query results.
- Improve server type detection in the Connection Wizard.
- Add error handling to Alert History page.
- Don’t fetch tag values when a measurement doesn’t contain tags.
- Filter out roles with unknown organization references.
- Detect Flux support in the Flux proxy.
- Manage execution status per individual query.
- Parse exported dashboards in a resources directory.
- Enforce unique dashboard template variable names.
- Don’t modify queries passed to a Dashboard page using a query URL parameter.
- Fix unsafe React lifecycle functions.
- Improve communication with InfluxDB Enterprise.

Other
- Upgrade UI to TypeScript 4.2.2.
- Upgrade dependencies and use ESLint for TypeScript.
- Update dependency licenses.
- Upgrade Markdown renderer.
- Upgrade Go to 1.16.
- Upgrade build process to Python 3.

v1.8.10 [2020-02-08]

Features
- Add the ability to set the active InfluxDB database and retention policy for InfluxQL commands. Now, in Chronograf Data Explorer, if you select a metaquery template (InfluxQL command) that requires you to specify an active database, such as DROP MEASUREMENT, DROP SERIES FROM, and DELETE FROM, the USE command is prepended to your InfluxQL command as follows:

    USE "db_name"; DROP MEASUREMENT "measurement_name"
    USE "db_name"; DROP SERIES FROM "measurement_name" WHERE "tag" = 'value'
    USE "db_name"; DELETE FROM "measurement_name" WHERE "tag" = 'value' AND time < '2020-01-01'

- Add support for Bitbucket emails endpoint with generic OAuth. For more information, see Bitbucket documentation and how to configure Chronograf to authenticate with OAuth 2.0.

Bug Fixes
- Repair ARMv5 build.
- Upgrade to Axios 0.21.1.
- Stop async executions on unmounted LogsPage.
- Repair dashboard import to remap sources in variables.
- UI updates:
  - Ignore databases that cannot be read. Now, the Admin page correctly displays all databases that the user has permissions to.
  - Improve the Send to Dashboard feedback on the Data Explorer page.
- Log Viewer updates:
  - Avoid endless networking loop.
  - Show timestamp with full nanosecond precision.

v1.8.9.1 [2020-12-10]

Features
- Configure etcd with client TLS certificate.
- Support Flux in InfluxDB Cloud and InfluxDB OSS 2.x sources.
- Support Flux Schema Explorer in InfluxDB Cloud and InfluxDB OSS 2.x sources.
- Let users specify InfluxDB v2 authentication.
- Validate credentials before creating or updating InfluxDB sources.
- Use fully qualified bucket names when using Flux in the Data Explorer.
- Upgrade Go to 1.15.5.
- Upgrade Node.js to 14 LTS.

Bug Fixes
- Prevent briefly displaying “No Results” in dashboard cells upon refresh.
- Warn about unsupported queries when creating or editing alert rules.
- Use the AND logical operator with not-equal (!=) tag comparisons in generated TICKscript where filters.
- Disable InfluxDB admin page when administration is not possible (while using InfluxDB Cloud or InfluxDB OSS 2.x sources).
- Use token authentication against InfluxDB Cloud and InfluxDB OSS 2.x sources.
- Avoid blank screen on Windows.
- Repair visual comparison with time variables (:upperDashboardTime: and :dashboardTime:).
- Repair possible millisecond differences in duration computation.
- Remove deprecated React SFC type.

v1.8.8 [2020-11-04]

Features

Bug Fixes
- Ensure the alert rule name is correctly displayed in the Alert Rules and TICKscript lists.
- Resolve the issue that caused a truncated dashboard name.
- Ensure the TICKscript editor is scrollable in Firefox.
- Apply default timeouts in server connections to ensure a shared HTTP transport connection is used between Chronograf and InfluxDB or Kapacitor.
- Retain the selected time zone (local or UTC) in the range picker.
- Export CSV with a time column formatted according to the selected time zone (local or UTC).

v1.8.7 [2020-10-06]

This release includes breaking changes: TLS 1.2 is now the default minimum required TLS version. If you have clients that require older TLS versions, use one of the following when starting Chronograf:
- The --tls-min-version=1.1 option
- The TLS_MIN_VERSION=1.1 environment variable

Features
- Allow to configure HTTP basic access authentication.
- Allow setting token-prefix in Alerta configuration.
- Make session inactivity duration configurable.
- Allow configuration of TLS ciphers and versions.

Bug Fixes
- Disable default dashboard auto-refresh.
- Fix to user migration.
- Add isPresent filter to rule TICKscript.
- Make vertical scrollbar visible when rows overflow in TableGraph.
- Upgrade papaparse to 5.3.0.
- Require well-formatted commit messages in pull request.
- Upgrade node to v12.

v1.8.6 [2020-08-27]

Features
- Upgrade Dockerfile to use Alpine 3.12.

Bug Fixes
- Escape tag values in Query Builder.
- Sort namespaces by database and retention policy.
- Make MySQL protoboard more useful by using derivatives for counter values.
- Add HTTP security headers.
- Resolve an issue that caused existing data to be overwritten when there were multiple results for a specific time. Now, all query results are successfully shown in the Table visualization.
- Resolve an issue that prevented boolean field and tag values from being displayed. Now, field and tag values are printed in TICKscript logs.

v1.8.5 [2020-07-08]

Bug Fixes
- Fix public-url generic OAuth configuration issue.
- Fix crash when starting Chronograf built by Go 1.14 on Windows.
- Keep dashboard’s table sorting stable on data refresh.
- Repair TICKscript editor scrolling on Firefox.
- Better parse Flux CSV results.
- Support .Time.Unix in alert message validation.
- Fix error when viewing Flux raw data after edit.
- Repair management of Kapacitor rules and TICKscripts.
- Avoid undefined error when dashboard is not ready yet.
- Fall back to point timestamp in log viewer.
- Add global functions and string trimming to alert message validation.
- Merge query results with unique column names.
- Avoid exiting presentation mode when zooming out.
- Avoid duplication of csv.from in functions list.

v1.8.4 [2020-05-01]

Bug Fixes
- Fix misaligned tables when scrolling.

v1.8.3 [2020-04-23]

Bug Fixes
- Fixed missing token subcommand.
- Incomplete OAuth configurations now throw errors listing missing components.
- Extend OAuth JWT timeout to match cookie lifespan.

Features
- Added ability to ignore or verify custom OAuth certs.

v1.8.2 [2020-04-13]

Features
- Update to Flux v0.65.0.

Bug Fixes
- Fix table rendering bug introduced in 1.8.1.

v1.8.1 [2020-04-06]

Warning: A critical bug that impacted table rendering was introduced in 1.8.1. Do not install this release; install v1.8.2, which includes the features and bug fixes below.

Features
- Add ability to directly authenticate single SuperAdmin user against the API.

Bug Fixes
- Update table results to output formatted strings rather than as single-line values.
- Handle change to newsfeed data structure.
https://docs.influxdata.com/chronograf/v1.10/about_the_project/release-notes-changelog/
TIFF and LibTiff Mailing List Archive, May 2010
This list is run by Frank Warmerdam. Archive maintained by AWare Systems.

The important line would be "ld: warning: in /usr/local/lib/libtiff.dylib, file is not of required architecture". You have a libtiff that isn't compiled for your architecture (probably 32-bit when you need 64, or vice versa).

Chris

________________________________________
From: tiff-bounces@lists.maptools.org [tiff-bounces@lists.maptools.org] On Behalf Of Dave Sun [hotdog.sun@gmail.com]
Sent: Saturday, May 01, 2010 6:54 PM
To: tiff@lists.maptools.org
Subject: [Tiff] qmake (Qt) and libtiff "undefined symbols" problem on mac (snow leopard).

Dear folks,

I have problems using qmake to compile my simplest libtiff program. Here is the information about my system.

computer: Darwin, Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386
Qt version: Qt 4.6.2
libtiff version: 3.9.2
operating system: Snow Leopard

Simplest program:

    #include <QtCore/QCoreApplication>
    #include "tiff.h"
    #include "tiffio.h"
    #include <iostream>

    int main(int argc, char *argv[])
    {
        //QCoreApplication a(argc, argv);
        TIFF* tif = TIFFOpen("foo.tif", "r");
        if (tif == NULL)
            std::cout << "could not open the file";
        TIFFClose(tif);
        //return a.exec();
        std::cout << "hello";
        return 0;
    }

The qmake project file:

    QT -= gui
    TARGET = libtiffSimpleTest
    CONFIG += console
    CONFIG -= app_bundle
    TEMPLATE = app
    LIBS += -ltiff
    SOURCES += main.cpp

I used Qt Creator as my IDE. When I compile the program I get the following error:

    ld: warning: in /usr/local/lib/libtiff.dylib, file is not of required architecture
    Undefined symbols:
      "_TIFFOpen", referenced from:
          _main in main.o
    ld: symbol(s) not found
    collect2: ld returned 1 exit status
    make: *** [libtiffSimpleTest] Error 1

However, when I just use the following:

    g++ main.cpp -ltiff

It works well. What is the problem here?
Thank you. Dave
http://www.asmail.be/msg0055108035.html
Variables

A free video tutorial from Arkadiusz Włodarczyk
Excellent teacher, Expert in Programming
4.5 instructor rating • 18 courses • 267,075 students

Lecture description

We are exploring the concepts of "variable", "variable type", "declaration", and "definition", and the naming rules for variables in the C++ language.

Let's note that down in a comment — slash slash. And now: Variables. OK, let's start this topic not from what a variable is, but from why we need variables. For example, I want to send something to the console output. Let's write: cout, two angle brackets, four, semicolon. So it means that I want to send the number four to the console output. But hey, what if I wanted to store this number somewhere, because I would like to make some operations on that number, or even change it a bit? Look, right now we have the number four here, right? And after that instruction is invoked, we can't do anything with that number anymore. So this number disappears; we don't have any reference to that number. That's why we need variables. A variable, as the name suggests, is something that is able to vary. It means that it is changeable; it can be changed any time you want. So variables are some kind of containers that can store values, like numbers. Imagine that these bins/containers here are our variables. Each of them has a different label name, right? A, B and C. As you can see, each of those containers is also a bit different; they have different shapes. It means that containers (variables) can store different values — different types of values. Now we have discovered a new thing: the type of a variable. The type describes what we can put into our containers (variables). OK, let's create a variable in our program. Let's name it 'a'. We have got only the label now, the name of our container (variable). We also have to describe what we can put into our container (variable), so we have to declare the type of the variable.
Let's start from int, which stands for integer. Integers are numbers that have no fractional part, and they can be negative. So now we can assign to our variable only integers. By "assign" I mean putting something into that variable. But let's stop for a second and think about memory in our PC. Is it unlimited? No, it is limited by our hardware, so we can only assign a subset of the integer numbers. In our situation we are allowed to assign to our variable named 'a' numbers that range from about minus 2 billion to about plus 2 billion. Let's do that: a equals, for example, 4. The equals sign allows us to assign values to our variables. The process of assigning a value to a variable is called initialization. OK, I can also do that process of assigning in one line; I could write something like that. I will comment that thing out here. OK, but let's assume I want to assign to variable a something like... for example... I can't do it, because this value is greater than two billion. OK, and it's actually a good thing that we can't do it, because we would take too much memory in our PC; we don't need numbers that big in most situations. Right. OK. Now I will say something very important. When you look at any book, what is the easiest way to find something inside? If I give you, for example, one section of a book, or even only one word, and tell you to find that section inside a book that has 1000 pages, I'm sure it will take a very long time to complete the task. Right? What about the same situation, but now I'm giving you the number of the page? You will find it almost instantly, right? In the programming world the situation is almost the same. The section word in our program is called the label, or name, of the variable. The page number is called the address. OK. So by using the variable's address we can get to values faster. But this is a bit of an advanced topic, and it is connected with pointers; we'll explain it later very precisely. But for now I want you to know how to get the address of a variable.
In order to do it, we have to write an ampersand — it's an ampersand — and then the name of the variable. OK, let's send those things to the console output. Let's do something like that now: cout a, and cout ampersand a. Now let's build it. As you can see, we have everything in one line, so we will add here something like endl; endl stands for "end of line", so that thing here indicates the end of the line — it's adding the enter at the end, right? So now I build it, and as you can see we have two lines: in the first line there is four, right? And in the second line we have the address of that four — the address of the variable. When we write something like 'a', we are declaring a variable. Declaring means that we are informing the compiler that there will be, somewhere in our program, something named — in our situation — 'a'. When you write something like int a in the C++ language, we are also defining the variable. Defining means declaring, and in addition to this we are also reserving (allocating) space in memory — in our situation, allocating space for a variable that is an integer, right? int allocates 4 bytes of memory. One byte is 8 bits, so we are allocating 32 bits in our PC's memory. Bits are zero-or-one signs. So we are allocating, in one sequence somewhere in our memory, thirty-two places that can be filled with zeros or ones. As I said in the first lesson, that sequence of zeros and ones can be translated to our language, and our language can be translated to that sequence, right? And as I said earlier, a variable's value can be changed at any time. In the second lesson I was talking a lot about how the compiler reads everything from the top to the bottom, right? So if I write something like that, and we copy that thing... what will our compiler do now? Let's start from the beginning, because our compiler is doing it that way.
So our compiler is including the iostream library, it will use the std namespace, and then it will invoke the main function. In the line int a = 4, our compiler will be informed about the existence of a variable named a. It will also allocate four bytes of memory somewhere, and because of the initialization — that equals (=) sign, right? — it will set a sequence of zeros and ones to represent the number four in our PC's language. After that, the compiler will print to the console output the value and address of our variable. Here, where a equals 10, our compiler will change the value of the variable named a. So we will change that sequence of zeros and ones a bit — but it won't change the other variables, of course, right? Let's compile it. As we can see, the number has changed, but the address is still the same. OK, let's start with how we should name variables now. Let's note it in a comment. The first thing is that we can't have two variables that have the same name — variables can't have the same name. So if I want to declare two variables, they have to have different names, right? That way we can run our program; but we can't do something like that: Error! re-declaration of int 'a'. We can't do something like that. We can't start the name of a variable with a number, but we can use numbers in other places — so variables can't start with a number. Let's check it out. I will try to create an int, and now I'll do something like, for example, that, right? I'm trying to build it; I can't do it. But I can do it this way — easy. We can't use spaces, but we can use underscores — so: we can't use spaces. I can't do something like int prime number — I can't do something like that. But you can use, of course, the underscores, so I can do it this way: prime_number. That's all right; we can execute our program. When we declare variables, we should try to make them self-descriptive. Self-descriptive means that we should be able to guess, by looking at the name of the variable, what this variable was created for.
For example, we can write int prime_number, or maybe primeNr, or I can do it also this way: iPrimeNr. The i prefix here lets us guess, just by looking at the name, that the type of the variable is an integer, right? Because: int. This notation — that one here — is called Hungarian notation. You can type it into Google and read about other types of notation. OK, so let's note that our variables should be self-descriptive, and that we can't use special characters or keywords. Let's see what we can do. For example, I can't do something like int using, because using is a keyword — it's not working. I can't use a special character like # either — it's not working. OK, so let's note that variable names can't be constructed of special characters or keywords. OK. Variables should be nouns; they shouldn't be adjectives or other things like that — they should be just nouns. OK, that's all. In the next lesson we'll discover more types of variables. Thank you everyone.
https://www.udemy.com/tutorial/video-course-c-from-beginner-to-expert/variables/
Draft SelectPlane

Description

The Draft Workbench features a working plane system. A plane in the 3D view indicates where a Draft shape will be built. There are several methods to define the working plane:
- From a selected face.
- From three selected vertices.
- From the current view.
- From a preset: top, front, or side.
- None, in which case the working plane adapts automatically to the current view when you start a command, or to a face if you start drawing on an existing face.

Different working planes can be set on which to draw shapes.

How to use

The SelectPlane button is present in the Draft Tray toolbar, which only appears in the Draft and Arch workbenches.

Without element selected
- Press the SelectPlane button.
- Select the offset, the grid spacing, and the main lines.
- Select one of the presets: XY (top), XZ (front), YZ (side), View, or Auto.

Once the plane is set, the button changes to indicate the active plane: Top, Front, Side, Auto, or d(0.0,-1.0,0.0). You can show and hide the grid with the shortcut G R.

With element selected
- Select a face of an existing object in the 3D view, or hold Ctrl and select three vertices of any object. (available in version 0.17)
- Press the SelectPlane button, or right-click and select Utilities → SelectPlane.

The plane will be created aligned to the face of the object, or to the plane defined by the three vertices.

Options
- Press the XY (top) button to set the working plane on the XY plane. To easily draw on this plane, you should set the view to the top or bottom (the normal is in the positive or negative Z direction). Press 2 or 5 to quickly switch to these views.
- Press the XZ (front) button to set the working plane on the XZ plane. To easily draw on this plane, you should set the view to the front or rear (the normal is in the negative or positive Y direction). Press 1 or 4 to quickly switch to these views.
- Press the YZ (side) button to set the working plane on the YZ plane. To easily draw on this plane, you should set the view to the left or right side (the normal is in the positive or negative X direction). Press 3 or 6 to quickly switch to these views.
- Press the View button to set the working plane to the current 3D view, perpendicular to the camera axis and passing through the origin (0,0,0).
- Press the Auto button to unset any current working plane, and automatically set a working plane when a tool is used. When a drawing tool is selected the grid is automatically updated to the current view; then, if the view is rotated and another tool is selected, the grid redraws in the new view. This is equivalent to pressing View automatically before using a tool.
- Set the "Offset" value to set the working plane at a certain perpendicular distance from the plane you selected.
- Set the "Grid spacing" value to define the space between each line in the grid.
- Set the "Main line every" value to draw a slightly thicker line in the grid at the set interval. For example, if the grid spacing is 0.5 m and there is a main line every 20 lines, there will be a slightly thicker line every 10 m.
- Click the "Center plane on view" checkbox to draw the plane and grid closer to the camera view in the 3D view.
- Press Esc or the Close button to abort the current command.

Scripting

See also: Draft API and FreeCAD Scripting Basics. See the WorkingPlane API.

Working plane objects can easily be created and manipulated in macros and from the Python console. You can access the current Draft working plane, and apply transformations to it:

import FreeCAD
Workplane = FreeCAD.DraftWorkingPlane
v1 = FreeCAD.Vector(0, 0, 0)
v2 = FreeCAD.Vector(1, 1, 1).normalize()
Workplane.alignToPointAndAxis(v1, v2, 17)

A Draft command must be issued after changing the working plane to update the visible grid.

You can create your own planes, and use them independently of the current working plane:

import WorkingPlane
Plane = WorkingPlane.plane()
https://www.freecadweb.org/wiki/index.php?title=Draft_SelectPlane
26 U.S. Code § 6039I - Returns and records with respect to employer-owned life insurance contracts

(a) In general
Every applicable policyholder owning 1 or more employer-owned life insurance contracts issued after the date of the enactment of this section shall file a return (at such time and in such manner as the Secretary shall by regulations prescribe) showing for each year such contracts are owned—
(4) the name, address, and taxpayer identification number of the applicable policyholder and the type of business in which the policyholder is engaged, and

(b) Recordkeeping requirement

Source: (Added Pub. L. 109–280, title VIII, § 863(b), Aug. 17, 2006, 120 Stat. 1023.)

References in Text: The date of the enactment of this section, referred to in subsec. (a), is the date of enactment of Pub. L. 109–280, which was approved Aug. 17, 2006.

Effective Date: Pub. L. 109–280, set out as an Effective Date of 2006 Amendment note under section 101 of this title.
https://www.law.cornell.edu/uscode/text/26/6039I?quicktabs_8=2
Argument Definitions The arguments in the prototype allow convenient command-line parsing of arguments and, optionally, access to environment variables. The argument definitions are as follows: - argc An integer that contains the count of arguments that follow in argv. The argc parameter is always greater than or equal to 1. - argv An array of null-terminated strings representing command-line arguments entered by the user of the program. By convention, argv[0] is the command with which the program is invoked, argv[1] is the first command-line argument, and so on, until argv[argc], which is always NULL. See Customizing Command Line Processing for information on suppressing command-line processing. The first command-line argument is always argv[1] and the last one is argv[argc – 1]. - envp The envp array, which is a common extension in many UNIX® systems, is used in Microsoft C++. It is an array of strings representing the variables set in the user's environment. This array is terminated by a NULL entry. It can be declared as an array of pointers to char (char *envp[ ]) or as a pointer to pointers to char (char **envp). If your program uses wmain instead of main, use the wchar_t data type instead of char. The environment block passed to main and wmain is a "frozen" copy of the current environment. If you subsequently change the environment via a call to putenv or _wputenv, the current environment (as returned by getenv/_wgetenv and the _environ/ _wenviron variable) will change, but the block pointed to by envp will not change. See Customizing Command Line Processing for information on suppressing environment processing. This argument is ANSI compatible in C, but not in C++. The following example shows how to use the argc, argv, and envp arguments to main: // argument_definitions.cpp // compile with: /EHsc #include <iostream> #include <string.h> using namespace std; int main( int argc, char *argv[], char *envp[] ) { int iNumberLines = 0; // Default is no line numbers. 
// If /n is passed to the .exe, display numbered listing // of environment variables. if ( (argc == 2) && _stricmp( argv[1], "/n" ) == 0 ) iNumberLines = 1; // Walk through list of strings until a NULL is encountered. for( int i = 0; envp[i] != NULL; ++i ) { if( iNumberLines ) cout << i << ": " << envp[i] << "\n"; } }
https://msdn.microsoft.com/en-us/library/88w63h9k(v=vs.85).aspx
BasicObject

Designates, via code block, code to be executed unconditionally before sequential execution of the program begins. Sometimes used to simulate forward references to methods.

puts times_3(gets.to_i)

BEGIN {
  def times_3(n)
    n * 3
  end
}

# File keywords.rb, line 83
def BEGIN
end

Designates, via code block, code to be executed just prior to program termination.

END { puts "Bye!" }

# File keywords.rb, line 91
def END
end

The current default encoding, as an Encoding instance.

# File keywords.rb, line 31
def __ENCODING__
end

Denotes the end of the regular source code section of a program file. Lines below __END__ are not executed; they are available to the program through the DATA filehandle.

# File keywords.rb, line 59
def __END__
end

The name of the file currently being executed.

# File keywords.rb, line 68
def __FILE__
end

The line number, in the current source file, of the current line.

# File keywords.rb, line 36
def __LINE__
end

Creates an alias or duplicate name for a given method, so that the method can be called under more than one name; for example, a deprecated accessor name can be kept alongside its replacement full_name=:

p.name = "Joe"       # Please use full_name=
p.full_name = "Joe"  # Naming your person Joe!

# File keywords.rb, line 121
def alias
end

Boolean and operator. Differs from && in having lower precedence.

# File keywords.rb, line 142
def and
end

Designates the beginning of a code block, terminated by end. The block may include an ensure clause, one or more rescue clauses, and possibly an else clause, if there's also a rescue clause, in which case the else clause is executed when no exception is raised.

# File keywords.rb, line 161
def begin
end

Breaks out of the enclosing loop or iterator immediately.

# File keywords.rb, line 176
def break
end

Introduces a case statement, which compares a value against a series of when clauses; terminated by end.

# File keywords.rb, line 216
def case
end

Opens a class definition block, terminated by end. Inside the body it's possible to write class methods (i.e., singleton methods on class objects) by referring to self:

class Person
  def self.species
    "Homo sapiens"
  end
end

# File keywords.rb, line 260
def class
end

Defines a method, in one of two forms:

def method_name
def object.singleton_method_name

The parameter list comes after the method name, and can (and usually is) wrapped in parentheses.

# File keywords.rb, line 276
def def
end

Returns a string describing its argument if the argument is defined (e.g. "method", "local-variable"), or nil otherwise.

# File keywords.rb, line 307
def defined?
end

Paired with end, delimits a code block passed to a method. Note that a do/end block binds to the outermost method call, so in puts [1,2,3].map do |x| x end the block is offered to puts; since puts doesn't take a block, the block is ignored and the statement prints the value of the blockless [1,2,3].map (which returns an Enumerator). do can also (optionally) appear at the end of a for/ in statement. (See for for an example.)

# File keywords.rb, line 341
def do
end

Provides a default branch for if, unless, and case statements, and for a begin/end block with a rescue clause.

# File keywords.rb, line 352
def else
end

Introduces a branch in a conditional ( if or unless) statement. Such a statement can contain any number of elsif branches, including zero. See if for examples.

# File keywords.rb, line 361
def elsif
end

Marks the end of a while, until, begin, if, def, class, or other keyword-based, block-based construct.

# File keywords.rb, line 368
def end
end

Marks the final, optional clause of a begin/end block; the code in an ensure clause is executed whether or not an exception is raised.

# File keywords.rb, line 391
def ensure
end

false denotes a special object, the sole instance of FalseClass. false and nil are the only objects that evaluate to Boolean falsehood in Ruby (informally, that cause an if condition to fail.)

# File keywords.rb, line 399
def false
end

A loop constructor: for x in collection ... end iterates over collection, binding each element in turn to x.

# File keywords.rb, line 431
def for
end

Ruby's basic conditional statement: executes its body if the condition is true; may include elsif and else branches; terminated by end.

# File keywords.rb, line 460
def if
end

Opens a module definition block, terminated by end.

# File keywords.rb, line 476
def module
end

Bumps an iterator or a while or until loop to the next iteration, skipping any remaining code in the current pass.

# File keywords.rb, line 514
def next
end

A special object, the sole instance of NilClass; denotes the absence of a value and evaluates to Boolean falsehood (see false.)

# File keywords.rb, line 526
def nil
end

Boolean negation; like ! but with lower precedence.

# File keywords.rb, line 543
def not
end

Boolean or operator. Differs from || in having lower precedence.

# File keywords.rb, line 564
def or
end

Causes unconditional re-execution of a code block, with the same parameter bindings as the current execution.

# File keywords.rb, line 571
def redo
end

Designates an exception-handling clause. In a method definition, a rescue clause catches exceptions raised while the method's body is being executed:

def file_reverser(file)
  File.open(file) {|fh| puts fh.readlines.reverse }
rescue Errno::ENOENT
  log "Tried to open non-existent file #{file}"
  raise
end

In a begin/ end block, a rescue clause catches exceptions raised between begin and rescue.

# File keywords.rb, line 621
def rescue
end

Inside a rescue clause, retry causes Ruby to return to the top of the enclosing code (the begin keyword, or top of method or block) and try executing the code again.

a = 0
begin
  1/a
rescue ZeroDivisionError => e
  puts e.message
  puts "Let's try that again..."
  a = 1
  retry
end
puts "That's better!"

# File keywords.rb, line 639
def retry
end

Inside a method definition, returns control to the context of the method call, optionally with a return value. Inside a block created with Proc.new, return attempts to return from the method in which the block was created; if that method has already returned, it's an error.

ruby -e 'Proc.new {return}.call'
=> -e:1:in `block in <main>': unexpected return (LocalJumpError)

A lambda, by contrast, returns from the lambda itself:

ruby19 -e 'p lambda {return 3}.call'
=> 3

# File keywords.rb, line 670
def return
end

self is the "current object" and the default receiver of messages (method calls) for which no explicit receiver is specified. Which object plays the role of self depends on the context.

In a method, the object on which the method was called is self.

In a class or module definition (but outside of any method definition contained therein), self is the class or module object being defined.

In a code block associated with a call to class_eval (aka module_eval), self is the class (or module) on which the method was called.

In a block associated with a call to instance_eval or instance_exec, self is the object on which the method was called.

self automatically receives messages that don't have an explicit receiver:

class String
  def upcase_and_reverse
    upcase.reverse
  end
end

In this method definition, the message upcase goes to self, which is whatever string calls the method.

# File keywords.rb, line 694
def self
end

Called from a method, searches along the method lookup path (the classes and modules available to the current object) for the next method of the same name as the one being executed. Such method, if present, may be defined in the superclass of the object's class, but may also be defined in the superclass's superclass or anywhere further up the lookup path.

# File keywords.rb, line 729
def super
end

Optional component of conditional statements ( if, unless, when). Never mandatory, but allows for one-line conditionals without semi-colons. The following two statements are equivalent:

if a > b; puts "a wins!" end
if a > b then puts "a wins!" end

See if for more examples.

# File keywords.rb, line 742
def then
end

true denotes a special object, the sole instance of TrueClass. Conditional expressions do not have to evaluate to the object true itself to succeed (any value other than false and nil will do), but when writing a method that answers a yes/no question it's good practice to do so, as it makes the intention clear.

# File keywords.rb, line 755
def true
end

Undefines a given method, for the class or module in which it's called. If the method is defined higher up the lookup path (for example, by a superclass), it can still be called from elsewhere.

# File keywords.rb, line 783
def undef
end

The negative equivalent of if.

unless y.score > 10
  puts "Sorry; you needed 10 points to win."
end

See if.

# File keywords.rb, line 794
def unless
end

The inverse of while: executes code until a given condition is true, i.e., while it is not true. The semantics are the same as those of while; see while.

# File keywords.rb, line 801
def until
end

Takes a condition and executes the code that follows (up to a matching end) while the condition is true.

# File keywords.rb, line 842
def while
end

Called from inside a method body, yields control to the code block (if any) supplied as part of the method call. If no code block has been supplied, calling yield raises an exception. yield can take an argument; any values thus yielded are bound to the block's parameters. The value of a call to yield is the value of the executed code block.

# File keywords.rb, line 854
def yield
end
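The yield entry above can be illustrated with a short, self-contained example (the method name apply_to_each is made up for this sketch):

```ruby
# `yield` passes control (and arguments) to the block supplied at the
# call site; its value is the value of the executed block.
def apply_to_each(values)
  values.map { |v| yield(v) }  # forwards each element to the caller's block
end

result = apply_to_each([1, 2, 3]) { |n| n * 2 }
puts result.inspect  # [2, 4, 6]
```

Calling apply_to_each without a block would raise a LocalJumpError at the first yield.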
http://www.ruby-doc.org/docs/keywords/1.9/Object.html
New to using Dynamic Data Structures, Vector classes, and ArrayLists. -I have a read in file named Student.dat - I have a class called Student -I have a class with the main method called VectorTest I am wondering if I declared and initialized my new vector called list correctly. (1A) I also am wondering if created a student using data from the line read corrected( almost know this is wrong it also saying I need to add it to the list). I also am have trouble REPLACING the first student on the list with "me" which is a new object of student. ( I think i need to use the "set" method, but I'm not quite sure how to do it). Any help with this would be appreciated. Here is my code. VectorTest: import java.io.*; import java.util.*; /** *To use a vector as a data structure to store a phone book *Written by Diane Christie for lab use *March 9, 2002 */ public class VectorTest { public static void main(String [] args)throws IOException { // DECLARE AND INSTANTIATE A VECTOR CALLED LIST OF SIZE 5 // TASK 1A Vector list = new Vector(5); String filename = "student.dat"; int comma; Student currentStudent; int count = 0; BufferedReader instream = new BufferedReader(new FileReader(filename)); String line = instream.readLine(); while (line != null) { //String processing, pull apart the line containing // lastName, firstName, phoneNumber // trim() will remove leading spaces from strings comma = line.indexOf(','); String last = line.substring(0, comma).trim(); line = line.substring(comma + 1, line.length()); comma = line.indexOf(','); String first = line.substring(0, comma).trim(); String phone = line.substring(comma + 1, line.length()).trim(); //CREATE A STUDENT USING DATA FROM THE LINE READ IN //ADD IT TO THE LIST //TASK 1B Student s1 = new Student(last,first,phone); line = instream.readLine(); } instream.close(); //CHANGE INFO IN STUDENT CONSTRUCTOR TO HAVE YOUR INFORMATION //TASK 1C Student me = new Student("Wollan", "Jake", "703-6419"); //Task 1D REPLACE THE FIRST STUDENT ON THE 
LIST WITH me //Task 1E Remove the fourth student on the list. //Task 1F Add Charlie Brown 123-4567 to be the sixth student on the list. System.out.println("Output for the Vector class"); //WRITE A FOR LOOP TO PRINT OUT THE LIST TO THE CONSOLE } } Student.Java public class Student { private String lastName; private String firstName; private String phoneNumber; /** * Creates a student using a last name, first name, and phone number * @param last student's last name * @param first student's first name * @param phone student's phone number */ public Student(String last, String first, String phone) { lastName = last; firstName = first; phoneNumber = phone; } /** * Returns student's last name * @return student's last name */ public String getLastName() { return lastName; } /** * Returns student's first name * @return student's first name */ public String getFirstName() { return firstName; } /** * Returns student's phone number * @return student's phone number */ public String getPhoneNumber() { return phoneNumber; } /** * Returns a string of the student's entry for the phone book * @return the student's full name and phone number */ public String toString() { return phoneNumber + " " + firstName + " " + lastName; } } This the orginal Student.dat (list of students in a note pad file) Smith, John, 232-1234 Doe, Jane, 232-2345 Hall, Monty, 232-0987 Ketchum, Hank, 232-6789 Jones, Ed, 232-1458 Johnson, Tony, 232-0567 Jones, Amanda, 232-4563 Enockson, Liz, 232-1238 --- Update --- is this close? list.add(0,me); ?
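For the tasks in the comments, Vector's set, remove, and add methods do the replacing, deleting, and appending. A minimal sketch (with made-up, abbreviated data standing in for the student.dat contents, and a hypothetical class name) might look like this:

```java
import java.util.Vector;

public class VectorOpsSketch {
    // Builds a small list and performs the three tasks:
    // replace the first entry, remove the fourth, append another.
    public static Vector<String> demo() {
        Vector<String> list = new Vector<String>(5);
        list.add("Smith, John");
        list.add("Doe, Jane");
        list.add("Hall, Monty");
        list.add("Ketchum, Hank");
        list.add("Jones, Ed");

        list.set(0, "Wollan, Jake");   // Task 1D: replace the first student
        list.remove(3);                // Task 1E: remove the fourth student (index 3)
        list.add("Brown, Charlie");    // Task 1F: append to the end of the list

        return list;
    }

    public static void main(String[] args) {
        Vector<String> list = demo();
        for (int i = 0; i < list.size(); i++) {
            System.out.println(list.get(i));
        }
    }
}
```

Note that list.add(0, me) from the update would insert me before the first element, growing the list; Vector.set(0, me) is what replaces the first element in place.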
http://www.javaprogrammingforums.com/collections-generics/25015-new-help-using-vector-class.html
Greetings, folks. I am a Matplotlib novice. I'm trying to produce a 2D plot of data points where the X-axis represents a sequence of named events. A grid and xticklabels have been enabled for clarity. For a sample size of 11, all the points fall on the grid. For 12, they do not. Beyond that seems to be on a case-by-case basis. I'm sure I must be abusing the relationship between ticks and data, but the proper solution eludes me. Below is a distilled version of the script. This was run on Matplotlib 0.65 and Numarray 1.1.1. We have similar problems with Matplotlib 0.80 and Numeric 23.8. What am I doing wrong? --Rick Kwan ---- novice script starts here ---- from pylab import * # top = 11 # 11 produces point on grid top = 12 # 12 produces skewed data points iter = [i for i in range(top)] ax = subplot(111) plot(iter, iter, 'gd-') grid(True) ax.xaxis.set_major_locator(LinearLocator(top)) xlabs = ax.set_xticklabels(['evt%d'%i for i in range(top)]) show()
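One way to line the labels up (a sketch against a modern Matplotlib; the event names follow the script above) is to skip LinearLocator entirely and pin one tick to each data position. LinearLocator(n) spreads n ticks evenly over the view limits, which need not coincide with the integer data positions, so the labels drift off the grid:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, for illustration
import matplotlib.pyplot as plt

top = 12
seq = list(range(top))

fig, ax = plt.subplots()
ax.plot(seq, seq, "gd-")
ax.grid(True)

# Fix the ticks to the data positions, then label them; this keeps the
# grid lines on the data points regardless of the axis view limits.
ax.set_xticks(seq)
ax.set_xticklabels(["evt%d" % i for i in seq])

fig.savefig("events.png")
```

With the ticks fixed to the data, the grid lines pass through every point for any sample size.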
https://discourse.matplotlib.org/t/line-up-data-points-and-xticklabels/3122
State QML Type

Defines configurations of objects and properties. More...

Detailed Description

A state is a set of batched changes from the default configuration.

All root items have a default state that defines the default configuration of objects and property values. New states can be defined by adding State items to the states property to allow items to switch between different configurations. These configurations can, for example, be used to apply different sets of property values or execute different scripts.

The following example displays a single Rectangle. In the default state, the rectangle is colored black. In the "clicked" state, a PropertyChanges object changes the rectangle's color to red. Clicking within the MouseArea toggles the rectangle's state between the default state and the "clicked" state, thus toggling the color of the rectangle between black and red.

import QtQuick

Rectangle {
    id: myRect
    width: 100; height: 100
    color: "black"
    MouseArea {
        id: mouseArea
        anchors.fill: parent
        onClicked: myRect.state == 'clicked' ? myRect.state = "" : myRect.state = 'clicked';
    }
    states: [
        State {
            name: "clicked"
            PropertyChanges { target: myRect; color: "red" }
        }
    ]
}

Notice the default state is referred to using an empty string ("").

States are commonly used together with Transitions to provide animations when state changes occur.

Note: Setting the state of an object from within another state of the same object is not allowed.

See also Using States, Animation and Transitions, and Important Concepts in Qt Quick Ultralite - States, Transitions and Animations.

Property Documentation

changes : list

This property holds the changes to apply for this state.

By default these changes are applied against the default state. If the state extends another state, the changes are applied against the state being extended.

extend : string

This property holds the state that this state extends.

When a state extends another state, it inherits all the changes of that state. The state being extended is treated as the base state in regards to the changes specified by the extending state.

name : string

This property holds the name of the state.

Each state should have a unique name within its item.

when : bool

This property holds when the state should be applied.

This should be set to an expression that evaluates to true when you want the state to be applied. For example, the following Rectangle changes in and out of the "hidden" state when the MouseArea is pressed:

Rectangle {
    id: myRect
    width: 100; height: 100
    color: "red"
    MouseArea { id: mouseArea; anchors.fill: parent }
    states: State {
        name: "hidden"; when: mouseArea.pressed
        PropertyChanges { target: myRect; opacity: 0 }
    }
}

If multiple states in a group have a when clause that evaluates to true at the same time, only the first matching state is applied. For example, in the following snippet, state1 is always selected rather than state2 when sharedCondition becomes true.
https://doc.qt.io/archives/QtForMCUs-1.1/qml-qtquick-state.html
31 October 2008 17:10 [Source: ICIS news] By Joseph Chang NEW YORK (ICIS news)--You would think that Wall Street earnings estimates for next year would be sharply lower than for 2008 with the global financial crisis wreaking havoc and causing ripple effects upon world economies. Think again. While analysts have indeed taken the axe to 2009 earnings per share (EPS) estimates following this year's third-quarter results, the cuts have not been deep enough. The table below of profit estimates for 2008 and 2009 shows expected declines ranging from -5% for DuPont to -23% for NOVA Chemicals in the major and diversified group section. Most analysts do in fact expect lower earnings from the large chemical companies. Fair enough. Whether 2009 estimates should reflect greater declines can be argued. But a second look at the 2009 profit estimates of the specialty chemical universe shows they expect EPS gains across the board! Nalco is expected to post a 23% gain in 2009 EPS, to $1.41, while FMC is expected to increase EPS by 20%, to $5.24, and Arch Chemical by 19%, to $2.60. Wall Street forecasts double-digit EPS gains for most others. “To see 10-20% expected EPS gains for specialties is unbelievable,” said one source in the financial community. “The chemical industry is much more volatile than the earnings numbers are reflecting,” he added. Specialty chemicals are still viewed as less cyclical than commodity chemicals, even after the downturn in the late 1990s through the early 2000s saw stunning, unprecedented declines in specialty chemical earnings and stock prices. Specialty chemical to aerospace materials producer and Wall Street darling Cytec Industries saw its stock price fall from nearly $56 to less than $16 in 1998. From June 1999 to September 2001, Hercules’ stock price fell from more than $39, to just above $8.
The specialties as non-cyclical businesses myth had been exposed and stock market valuations between commodities and specialties converged in the following years. And while specialties may be somewhat less vulnerable to a global economic downturn than their commodity counterparts, they are nowhere close to immune. Investors aren’t buying the optimistic earnings estimates - for either group. Price/earnings (P/E) multiples based on 2009 EPS estimates are mostly in the single digits. Celanese trades at around 4.1 times estimated 2009 earnings, while Rockwood trades at 5.6 times and NOVA at 6.0 times. That compares with more than 10 times for the overall market as measured by the S&P 500. Because of the violent and indiscriminate downward moves in stock prices, analysts have been hesitant to revise 2009 numbers, according to one chemical analyst. “Sell-side analysts have not taken a good look at 2009 numbers yet. People are frozen because of the obscene and unpredictable stock market declines,” said the source. “Analyst estimates in all groups lag at major turning points. They also end up being too low initially when things recover,” said another source. On a fundamental basis, not only are world economies turning down, but the US dollar is rising against most currencies, making US chemical exports less competitive and overseas cashflows lower on translation impact. Lower feedstock and energy costs will help, but not much if demand declines rapidly. But companies have also been reluctant to guide 2009 earnings lower because they just don’t know by how much. “It doesn’t pay to be the hero and be the first to announce an expected 20% decline in 2009. The market will punish you - fairly or not,” said another source. Wall Street's analysts will have to take 2009 EPS estimates down - and by a lot - to reflect the new reality.
http://www.icis.com/Articles/2008/10/31/9168119/insight-deeper-cuts-needed-in-chems-estimates.html
By Michel Schinz and Philipp Haller

This document gives a quick introduction to the Scala language and compiler. It is intended for people who already have some programming experience and want an overview of what they can do with Scala. A basic knowledge of object-oriented programming, especially in Java, is assumed.

As a first example, we use the standard Hello world program. It is not very fascinating but makes it easy to demonstrate the use of the Scala tools without knowing too much about the language. Here is how it looks:

object HelloWorld {
  def main(args: Array[String]) {
    println("Hello, world!")
  }
}

The structure of this program should be familiar to Java programmers: it consists of one method called main which takes the command line arguments, an array of strings, as parameter; the body of this method consists of a single call to the predefined method println with the friendly greeting as argument.

What is less familiar to Java programmers is the object declaration containing the main method. Such a declaration introduces what is commonly known as a singleton object, that is a class with a single instance. The declaration above thus declares both a class called HelloWorld and an instance of that class, also called HelloWorld. This instance is created on demand, the first time it is used.

The astute reader might have noticed that the main method is not declared as static here. This is because static members (methods or fields) do not exist in Scala. Rather than defining static members, the Scala programmer declares these members in singleton objects.

To compile the example, we use scalac, the Scala compiler. scalac works like most compilers: it takes a source file as argument, maybe some options, and produces one or several object files. The object files it produces are standard Java class files. If we save the above program in a file called HelloWorld.scala, we can compile it by issuing the following command (the greater-than sign > represents the shell prompt and should not be typed):

> scalac HelloWorld.scala

This will generate a few class files in the current directory. One of them will be called HelloWorld.class, and contains a class which can be directly executed using the scala command, as the following section shows.

Once compiled, a Scala program can be run using the scala command. Its usage is very similar to the java command used to run Java programs, and accepts the same options. The above example can be executed using the following command, which produces the expected output:

> scala -classpath . HelloWorld
Hello, world!

One of Scala's strengths is that it makes it very easy to interact with Java code. All classes from the java.lang package are imported by default, while others need to be imported explicitly.
Let’s look at an example that demonstrates this. We want to obtain and format the current date according to the conventions used in a specific country, say France. (Other regions such as the French-speaking part of Switzerland use the same conventions.)

Java’s class libraries define powerful utility classes, such as Date and DateFormat. Since Scala interoperates seamlessly with Java, there is no need to implement equivalent classes in the Scala class library; we can simply import the classes of the corresponding Java packages:

import java.util.{Date, Locale}
import java.text.DateFormat
import java.text.DateFormat._

object FrenchDate {
  def main(args: Array[String]) {
    val now = new Date
    val df = getDateInstance(LONG, Locale.FRANCE)
    println(df format now)
  }
}

Scala’s import statement looks very similar to Java’s equivalent, however, it is more powerful. Multiple classes can be imported from the same package by enclosing them in curly braces as on the first line. Another difference is that when importing all the names of a package or class, one uses the underscore character ( _) instead of the asterisk ( *). That’s because the asterisk is a valid Scala identifier (e.g. method name), as we will see later.

The import statement on the third line therefore imports all members of the DateFormat class. This makes the static method getDateInstance and the static field LONG directly visible.

Inside the main method we first create an instance of Java’s Date class which by default contains the current date. Next, we define a date format using the static getDateInstance method that we imported previously. Finally, we print the current date formatted according to the localized DateFormat instance. This last line shows an interesting property of Scala’s syntax. Methods taking one argument can be used with an infix syntax.
That is, the expression

df format now

is just another, slightly less verbose way of writing the expression

df.format(now)

This might seem like a minor syntactic detail, but it has important consequences, one of which will be explored in the next section. To conclude this section about integration with Java, it should be noted that it is also possible to inherit from Java classes and implement Java interfaces directly in Scala.

Scala is a pure object-oriented language in the sense that everything is an object, including numbers or functions. It differs from Java in that respect, since Java distinguishes primitive types (such as boolean and int) from reference types, and does not enable one to manipulate functions as values.

Since numbers are objects, they also have methods. And in fact, an arithmetic expression like the following:

1 + 2 * 3 / x

consists exclusively of method calls, because it is equivalent to the following expression, as we saw in the previous section:

(1).+(((2).*(3))./(x))

This also means that +, *, etc. are valid identifiers in Scala.

The parentheses around the numbers in the second version are necessary because Scala’s lexer uses a longest match rule for tokens. Therefore, it would break the following expression:

1.+(2)

into the tokens 1., +, and 2. The reason that this tokenization is chosen is because 1. is a longer valid match than 1. The token 1. is interpreted as the literal 1.0, making it a Double rather than an Int. Writing the expression as:

(1).+(2)

prevents 1 from being interpreted as a Double.

Perhaps more surprising for the Java programmer, functions are also objects in Scala. It is therefore possible to pass functions as arguments, to store them in variables, and to return them from other functions. This ability to manipulate functions as values is one of the cornerstones of a very interesting programming paradigm called functional programming.
As a very simple example of why it can be useful to use functions as values, let’s consider a timer function whose aim is to perform some action every second. How do we pass it the action to perform? Quite logically, as a function. This very simple kind of function passing should be familiar to many programmers: it is often used in user-interface code, to register call-back functions which get called when some event occurs.

In the following program, the timer function is called oncePerSecond, and it gets a call-back function as argument. The type of this function is written () => Unit and is the type of all functions which take no arguments and return nothing. The main function of this program simply calls this timer function with a call-back which prints a sentence on the terminal. In other words, this program endlessly prints the sentence “time flies like an arrow” every second.

object Timer {
  def oncePerSecond(callback: () => Unit) {
    while (true) { callback(); Thread sleep 1000 }
  }
  def timeFlies() {
    println("time flies like an arrow...")
  }
  def main(args: Array[String]) {
    oncePerSecond(timeFlies)
  }
}

Note that in order to print the string, we used the predefined method println instead of using the one from System.out.

While this program is easy to understand, it can be refined a bit. First of all, notice that the function timeFlies is only defined in order to be passed later to the oncePerSecond function. Having to name that function, which is only used once, might seem unnecessary, and it would be nice to be able to construct this function just as it is passed to oncePerSecond. This is possible in Scala using anonymous functions, which are exactly that: functions without a name. The revised version of our timer program using an anonymous function instead of timeFlies looks like that:

object TimerAnonymous {
  def oncePerSecond(callback: () => Unit) {
    while (true) { callback(); Thread sleep 1000 }
  }
  def main(args: Array[String]) {
    oncePerSecond(() => println("time flies like an arrow..."))
  }
}

The presence of an anonymous function in this example is revealed by the right arrow => which separates the function’s argument list from its body. In this example, the argument list is empty, as witnessed by the empty pair of parentheses on the left of the arrow. The body of the function is the same as the one of timeFlies above.

As we have seen above, Scala is an object-oriented language, and as such it has a concept of class. (For the sake of completeness, it should be noted that some object-oriented languages do not have the concept of class, but Scala is not one of them.) Classes in Scala are declared using a syntax which is close to Java’s syntax. One important difference is that classes in Scala can have parameters.
This is illustrated in the following definition of complex numbers.

class Complex(real: Double, imaginary: Double) {
  def re() = real
  def im() = imaginary
}

This complex class takes two arguments, which are the real and imaginary part of the complex. These arguments must be passed when creating an instance of class Complex, as follows: new Complex(1.5, 2.3). The class contains two methods, called re and im, which give access to these two parts.

It should be noted that the return type of these two methods is not given explicitly. It will be inferred automatically by the compiler, which looks at the right-hand side of these methods and deduces that both return a value of type Double. The compiler is not always able to infer types like it does here, and there is unfortunately no simple rule to know exactly when it will be, and when not. In practice, this is usually not a problem since the compiler complains when it is not able to infer a type which was not given explicitly. As a simple rule, beginner Scala programmers should try to omit type declarations which seem to be easy to deduce from the context, and see if the compiler agrees. After some time, the programmer should get a good feeling about when to omit types, and when to specify them explicitly.

A small problem of the methods re and im as defined above is that, in order to call them, one has to put an empty pair of parentheses after their name. It would be nicer to be able to access the real and imaginary parts like if they were fields, without putting the empty pair of parentheses. This is perfectly doable in Scala, simply by defining them as methods without arguments. Such methods differ from methods with zero arguments in that they don’t have parentheses after their name, neither in their definition nor in their use. Our Complex class can be rewritten as follows:

class Complex(real: Double, imaginary: Double) {
  def re = real
  def im = imaginary
}

All classes in Scala inherit from a super-class. When no super-class is specified, as in the Complex example of previous section, scala.AnyRef is implicitly used.

It is possible to override methods inherited from a super-class in Scala. It is however mandatory to explicitly specify that a method overrides another one using the override modifier, in order to avoid accidental overriding. As an example, our Complex class can be augmented with a redefinition of the toString method inherited from Object.
class Complex(real: Double, imaginary: Double) {
  def re = real
  def im = imaginary
  override def toString() =
    "" + re + (if (im < 0) "" else "+") + im + "i"
}

A kind of data structure that often appears in programs is the tree. For example, interpreters and compilers usually represent programs internally as trees; XML documents are trees; and several kinds of containers are based on trees, like red-black trees. We will now examine how such trees are represented and manipulated in Scala through a small calculator program. The aim of this program is to manipulate very simple arithmetic expressions composed of sums, integer constants and variables. Two examples of such expressions are 1+2 and (x+x)+(7+y).

We first have to decide on a representation for such expressions. The most natural one is the tree, where nodes are operations (here, the addition) and leaves are values (here constants or variables). In Java, such a tree would be represented using an abstract super-class for the trees, and one concrete sub-class per node or leaf. In a functional programming language, one would use an algebraic data-type for the same purpose. Scala provides the concept of case classes which is somewhat in between the two. Here is how they can be used to define the type of the trees for our example:

abstract class Tree
case class Sum(l: Tree, r: Tree) extends Tree
case class Var(n: String) extends Tree
case class Const(v: Int) extends Tree

The fact that classes Sum, Var and Const are declared as case classes means that they differ from standard classes in several respects:

- the new keyword is not mandatory to create instances of these classes (i.e., one can write Const(5) instead of new Const(5)),
- getter functions are automatically defined for the constructor parameters (i.e., it is possible to get the value of the v constructor parameter of some instance c of class Const just by writing c.v),
- default definitions for methods equals and hashCode are provided, which work on the structure of the instances and not on their identity,
- a default definition for method toString is provided, and prints the value in a “source form” (e.g., the tree for expression x+1 prints as Sum(Var(x),Const(1))),
- instances of these classes can be decomposed through pattern matching, as we will see below.

Now that we have defined the data-type to represent our arithmetic expressions, we can start defining operations to manipulate them. We will start with a function to evaluate an expression in some environment. The aim of the environment is to give values to variables.
For example, the expression x+1 evaluated in an environment which associates the value 5 to variable x, written { x -> 5 }, gives 6 as result.

We therefore have to find a way to represent environments. We could of course use some associative data-structure, but we can also directly use functions: an environment is really nothing more than a function which associates a value to a (variable) name. The environment { x -> 5 } given above can simply be written as follows in Scala:

{ case "x" => 5 }

This notation defines a function which, when given the string "x" as argument, returns the integer 5, and fails with an exception otherwise.

Before writing the evaluation function, let us give a name to the type of the environments. We could of course always use the type String => Int for environments, but it simplifies the program if we introduce a name for this type, and makes future changes easier. This is accomplished in Scala with the following notation:

type Environment = String => Int

From then on, the type Environment can be used as an alias of the type of functions from String to Int.

We can now give the definition of the evaluation function. Conceptually, it is very simple: the value of a sum of two expressions is simply the sum of the value of these expressions; the value of a variable is obtained directly from the environment; and the value of a constant is the constant itself. Expressing this in Scala is not more difficult:

def eval(t: Tree, env: Environment): Int = t match {
  case Sum(l, r) => eval(l, env) + eval(r, env)
  case Var(n)    => env(n)
  case Const(v)  => v
}

This evaluation function works by performing pattern matching on the tree t.
Intuitively, the meaning of the above definition should be clear:

- it first checks if the tree t is a Sum, and if it is, it binds the left sub-tree to a new variable called l and the right sub-tree to a variable called r, and then proceeds with the evaluation of the expression following the arrow; this expression can (and does) make use of the variables bound by the pattern appearing on the left of the arrow, i.e., l and r,
- if the first check does not succeed, that is if the tree is not a Sum, it goes on and checks if t is a Var; if it is, it binds the name contained in the Var node to a variable n and proceeds with the right-hand expression,
- if the second check also fails, that is if t is neither a Sum nor a Var, it checks if it is a Const, and if it is, it binds the value contained in the Const node to a variable v and proceeds with the right-hand side,
- finally, if all checks fail, an exception is raised to signal the failure of the pattern matching expression; this could happen here only if more sub-classes of Tree were declared.

We see that the basic idea of pattern matching is to attempt to match a value to a series of patterns, and as soon as a pattern matches, extract and name various parts of the value, to finally evaluate some code which typically makes use of these named parts.

A seasoned object-oriented programmer might wonder why we did not define eval as a method of class Tree and its sub-classes. We could have done it actually, since Scala allows method definitions in case classes just like in normal classes. Deciding whether to use pattern matching or methods is therefore a matter of taste, but it also has important implications on extensibility:

- when using methods, it is easy to add a new kind of node, as this can be done just by defining a sub-class of Tree for it; on the other hand, adding a new operation to manipulate the tree is tedious, as it requires modifications to all sub-classes of Tree,
- when using pattern matching, the situation is reversed: adding a new kind of node requires the modification of all functions which do pattern matching on the tree, to take the new node into account; on the other hand, adding a new operation is easy, by just defining it as an independent function.

To explore pattern matching further, let us define another operation on arithmetic expressions: symbolic derivation.
The reader might remember the following rules regarding this operation:

- the derivative of a sum is the sum of the derivatives,
- the derivative of some variable v is one if v is the variable relative to which the derivation takes place, and zero otherwise,
- the derivative of a constant is zero.

These rules can be translated almost literally into Scala code, to obtain the following definition:

def derive(t: Tree, v: String): Tree = t match {
  case Sum(l, r) => Sum(derive(l, v), derive(r, v))
  case Var(n) if (v == n) => Const(1)
  case _ => Const(0)
}

This function introduces two new concepts related to pattern matching. First of all, the case expression for variables has a guard, an expression following the if keyword. This guard prevents pattern matching from succeeding unless its expression is true. Here it is used to make sure that we return the constant 1 only if the name of the variable being derived is the same as the derivation variable v. The second new feature of pattern matching used here is the wildcard, written _, which is a pattern matching any value, without giving it a name.

We did not explore the whole power of pattern matching yet, but we will stop here in order to keep this document short. We still want to see how the two functions above perform on a real example. For that purpose, let’s write a simple main function which performs several operations on the expression (x+x)+(7+y): it first computes its value in the environment { x -> 5, y -> 7 }, then computes its derivative relative to x and then y.

def main(args: Array[String]) {
  val exp: Tree = Sum(Sum(Var("x"), Var("x")), Sum(Const(7), Var("y")))
  val env: Environment = {
    case "x" => 5
    case "y" => 7
  }
  println("Expression: " + exp)
  println("Evaluation with x=5, y=7: " + eval(exp, env))
  println("Derivative relative to x:\n " + derive(exp, "x"))
  println("Derivative relative to y:\n " + derive(exp, "y"))
}

Executing this program, we get the expected output:

Expression: Sum(Sum(Var(x),Var(x)),Sum(Const(7),Var(y)))
Evaluation with x=5, y=7: 24
Derivative relative to x: Sum(Sum(Const(1),Const(1)),Sum(Const(0),Const(0)))
Derivative relative to y: Sum(Sum(Const(0),Const(0)),Sum(Const(0),Const(1)))

By examining the output, we see that the result of the derivative should be simplified before being presented to the user. Defining a basic simplification function using pattern matching is an interesting (but surprisingly tricky) problem, left as an exercise for the reader.

Apart from inheriting code from a super-class, a Scala class can also import code from one or several traits. Maybe the easiest way for a Java programmer to understand what traits are is to view them as interfaces which can also contain code.
In Scala, when a class inherits from a trait, it implements that trait’s interface, and inherits all the code contained in the trait.

To see the usefulness of traits, let’s look at a classical example: ordered objects. It is often useful to be able to compare objects of a given class among themselves, for example to sort them. In Java, objects which are comparable implement the Comparable interface. In Scala, we can do a bit better by defining our equivalent of Comparable as a trait, which we will call Ord.

When comparing objects, six different predicates can be useful: smaller, smaller or equal, equal, not equal, greater or equal, and greater. However, defining all of them is fastidious, especially since four out of these six can be expressed using the remaining two. In Scala, all these observations can be nicely captured by the following trait declaration:

trait Ord {
  def < (that: Any): Boolean
  def <=(that: Any): Boolean = (this < that) || (this == that)
  def > (that: Any): Boolean = !(this <= that)
  def >=(that: Any): Boolean = !(this < that)
}

This definition both creates a new type called Ord, which plays the same role as Java’s Comparable interface, and default implementations of three predicates in terms of a fourth, abstract one. The predicates for equality and inequality do not appear here since they are by default present in all objects. The type Any used above is a super-type of all other types in Scala; it can be seen as a more general version of Java’s Object type, since it is also a super-type of basic types like Int, Float, etc.

To make objects of a class comparable, it is therefore sufficient to define the predicates which test equality and inferiority, and mix in the Ord class above. As an example, let’s define a Date class representing dates in the Gregorian calendar. Such dates are composed of a day, a month and a year, which we will all represent as integers. We therefore start the definition of the Date class as follows:

class Date(y: Int, m: Int, d: Int) extends Ord {
  def year = y
  def month = m
  def day = d
  override def toString(): String = year + "-" + month + "-" + day
}

The important part here is the extends Ord declaration which follows the class name and parameters. It declares that the Date class inherits from the Ord trait.

Then, we redefine the equals method, inherited from Object, so that it correctly compares dates by comparing their individual fields. The default implementation of equals is not usable, because as in Java it compares objects physically. We arrive at the following definition:

override def equals(that: Any): Boolean =
  that.isInstanceOf[Date] && {
    val o = that.asInstanceOf[Date]
    o.day == day && o.month == month && o.year == year
  }

This method makes use of the predefined methods isInstanceOf and asInstanceOf. The first one, isInstanceOf, corresponds to Java’s instanceof operator, and returns true if and only if the object on which it is applied is an instance of the given type. The second one, asInstanceOf, corresponds to Java’s cast operator: if the object is an instance of the given type, it is viewed as such, otherwise a ClassCastException is thrown. Finally, the last method to define is the predicate which tests for inferiority, as follows.
It makes use of another predefined method, error, which throws an exception with the given error message.

def <(that: Any): Boolean = {
  if (!that.isInstanceOf[Date])
    error("cannot compare " + that + " and a Date")
  val o = that.asInstanceOf[Date]
  (year < o.year) ||
  (year == o.year && (month < o.month ||
                     (month == o.month && day < o.day)))
}

This completes the definition of the Date class. Instances of this class can be seen either as dates or as comparable objects. Moreover, they all define the six comparison predicates mentioned above: equals and < because they appear directly in the definition of the Date class, and the others because they are inherited from the Ord trait.

Traits are useful in other situations than the one shown here, of course, but discussing their applications at length is outside the scope of this document.

The last characteristic of Scala we will explore in this tutorial is genericity. Java programmers should be well aware of the problems posed by the lack of genericity in their language, a shortcoming which is addressed in Java 1.5. Genericity is the ability to write code parametrized by types. For example, a programmer writing a library for linked lists faces the problem of deciding which type to give to the elements of the list. Since this list is meant to be used in many different contexts, it is not possible to decide that the type of the elements has to be, say, Int. This would be completely arbitrary and overly restrictive.

Java programmers resort to using Object, which is the super-type of all objects. This solution is however far from being ideal, since it doesn’t work for basic types (int, long, float, etc.) and it implies that a lot of dynamic type casts have to be inserted by the programmer. Scala makes it possible to define generic classes (and methods) to solve this problem. We examine this with the simplest container class possible: a reference, which can be empty or point to an object of some type.

class Reference[T] {
  private var contents: T = _
  def set(value: T) { contents = value }
  def get: T = contents
}

The class Reference is parametrized by a type, called T, which is the type of its element.
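Python's standard library offers a loose analogue of the "define two predicates, get the rest for free" idea behind the Ord trait (a sketch for comparison, not part of the tutorial): functools.total_ordering derives <=, > and >= from __eq__ and __lt__.

```python
from functools import total_ordering

@total_ordering
class Date:
    """A Gregorian-calendar date; total_ordering fills in <=, > and >=."""
    def __init__(self, y, m, d):
        self.year, self.month, self.day = y, m, d

    def __eq__(self, other):
        if not isinstance(other, Date):
            return NotImplemented
        return (self.year, self.month, self.day) == \
               (other.year, other.month, other.day)

    def __lt__(self, other):
        if not isinstance(other, Date):
            raise TypeError("cannot compare " + repr(other) + " and a Date")
        return (self.year, self.month, self.day) < \
               (other.year, other.month, other.day)

print(Date(2014, 3, 1) < Date(2014, 12, 25))  # True
```

Just as with Ord, only equality and inferiority are written by hand; the remaining comparison predicates come from the mixin.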
This type is used in the body of the class as the type of the contents variable, the argument of the set method, and the return type of the get method. The above code sample introduces variables in Scala, which should not require further explanations. It is however interesting to see that the initial value given to that variable is _, which represents a default value. This default value is 0 for numeric types, false for the Boolean type, () for the Unit type and null for all object types.

To use this Reference class, one needs to specify which type to use for the type parameter T, that is the type of the element contained by the cell. For example, to create and use a cell holding an integer, one could write the following:

object IntegerReference {
  def main(args: Array[String]) {
    val cell = new Reference[Int]
    cell.set(13)
    println("Reference contains the half of " + (cell.get * 2))
  }
}

As can be seen in that example, it is not necessary to cast the value returned by the get method before using it as an integer.

This document gave a quick overview of the Scala language and presented some basic examples. The interested reader can go on, for example, by reading the document Scala By Example, which contains much more advanced examples, and consult the Scala Language Specification when needed.
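The generic Reference cell described above has a close counterpart in Python's typing module (a sketch, not from the tutorial); the type parameter is checked by static tools such as mypy rather than at runtime, and an empty cell is modelled here with None.

```python
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class Reference(Generic[T]):
    """A cell that can be empty or hold one value of type T."""
    def __init__(self) -> None:
        self._contents: Optional[T] = None

    def set(self, value: T) -> None:
        self._contents = value

    def get(self) -> T:
        assert self._contents is not None, "reference is empty"
        return self._contents

cell: Reference[int] = Reference()
cell.set(13)
print("Reference contains the half of", cell.get() * 2)
```

As in the Scala version, no cast is needed when using the value returned by get.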
http://docs.scala-lang.org/tutorials/scala-for-java-programmers.html
Minutes of 20 Feb 2002 and 13 March 2002 approved as posted.

-- On Email binding appendices: MarkB sees the current email binding (RFC 2822) as insufficient in our attempt to exercise our binding framework. Noah does not share MarkB's concerns; he thinks the current Email binding is a usable binding, and it shows a "second binding of SOAP". DavidF: we'll set up a conference call on this topic.
-- Primer: incorporated comments.
-- Spec: the editors have done a lot of work, expect to publish a snapshot on Friday and to clear the editorial to-do list plus most of the issues we might resolve today.
-- TBTF: nothing to report.
-- Conformance: Oisin sent the report in his regrets email, and DavidF read the report: there is no new version on the website; there will be one Friday morning EST. This version will contain additional assertions, and have removed obsolete ones. Trying to coordinate with IBM and Microsoft and the Soapbuilders list.
-- Usage Scenarios: waiting on example for S10.
-- Requirements doc: ok
-- Email binding: setting up the meeting to resolve MarkB's concerns.

In other words, there is an instantiation of option B ("Introduce today an abstract attachment 'binding feature', but defer 'implementation' of this feature to other specifications/notes, such as, for example SOAP+Attachment or DIME.") from the list of options proposed [13] for resolving issue 61, "external payload reference". At its meeting this week, the TBTF briefly discussed the attachment feature proposal and with a few reservations they agreed with it. The purpose of this agenda item is to decide whether the WG agrees to (i) adopt option B, and (ii) ask the TBTF to come up with a refined attachment feature proposal in the very near future.

DavidF: we have a proposal for introducing an abstract concept of an attachment and deferring the concrete implementations of this feature. There were some comments on the lists and the TBTF concall, mostly agreement.
PaulC: why do we do this?
SOAP with Attachments has proven it can be done without our explicit help.
DavidF: there has been a feeling that SOAP 1.2 should acknowledge attachments. The proposal gives us a hook so that later on we could do a concrete implementation. This is also a small piece of work.
PaulC: still, we are doing more than we must.
Noah: I've had a slightly different proposal on xml-dist-app.
[scribe disconnected, reconnected]
DavidF: we probably should now send this back to the TBTF and to the mailing list.
JohnI: we may task the TBTF to give us something next week.
Noah: the TBTF should copy the initial discussion email to the public list.
PaulC: Is it the opinion of the WG that we cannot go to Last Call without solving this?
Jean-Jacques: There is currently an issue closing issue 61. The proposal would enable us to close issue 61. The proposal involves only minor modifications to the HTTP binding. It provides a hook that specs like DIME or S+A could use to add proper support for attachments.
DavidF: We will move this issue back to the TBTF and public discussion for a week, then we'll revisit the topic.
PaulC: I have no objection to waiting a week.

7. Regarding the rewrite of Part 2 sections 2 and 3, is this the right direction?

Asir: this is not changing any functionality, so why do we have to do this for Last Call?
Noah: this is much crisper than the last version...
Jacek: I have sent some comments that I think should be replied to by the authors and probably also seen by the WG.
Gudge: I'll reply when I get to it, I've been busy.
DavidF: we've had external comments that liked the rewrite extremely much.
Asir: we should also have had an appendix with examples and it does not seem to be there.
Gudge: The appendix is in there in a rough form. It was meant to provide explanation on how XML Schema could be used with SOAP Encoding data; if you were expecting any examples on Encoding, that's not what was meant to go in the appendix.
Asir: why do we move from XML Schema?
Gudge: we get a better layering - moving typing to a higher layer, possibly with a different type system than that provided by XML Schema.
Noah: This rewrite clarifies much our relation to XML Schema, removing many loose ends, particularly with respect to validation.
Asir: if this is only about validation, can't we only add a comment saying validation is not required? Or just required for builtin simple types?
Noah: that might have been the other way, but SOAP 1.1 was never saying explicitly that it does require validation of simple types, which is the only real reference to XML Schema.
DavidF: we don't want to reopen the issue. Asir, are you suggesting we go back to the old text?
Asir: I was only asking for clarification. I'm not saying we should go back; maybe we don't need to do that before Last Call. No objections to incorporating it now if we have time to review it before LC.
DavidF: there will be time to review the rewrite before submitting the specs to Last Call.

9. Issues having proposals that impact spec and other texts

-- Issue 41
DavidF: we have two proposed solutions, solution 2 being commented on as the way to go.
Amr: we also have an amended proposal in
Henrik: it is fine except it shouldn't suggest we plan to add any such extension ourselves.
Noah: I'm mostly OK with it, but it suggests that it's true that the target URI should be in the envelope. We should be explicit that in some cases it may be needed there, in other cases not.
Chris: we have raised this issue with the WS Architecture group.
DavidF: but we aren't in the position to wait for them to tell us whether or not they are going to handle this.
DavidF: proposal: resolve issue 41 by accepting the amended text (pointed to by Amr) without the square-bracketed text, which would be mentioned in the closing text.
HFN: suggestion that we just send the whole text to xmlp-comments as the closing text, and change nothing in the spec.
DavidF: revised proposal: no change to the spec, amended proposal text going into xmlp-comments as closing text. No objections raised.
DavidF: issue 41 is closed with the revised proposal. Henrik will submit xmlp-comment text.

-- Issue 189
Noah: for SOAP in general there is no issue; the HTTP binding should be clear on the XML version it uses.
Henrik: The current spec (part 1) says that while the examples use XML 1.0, it's not really mandatory as we're based on infoset.
DavidF: any objection to closing 189 by referencing this text?
Noah: I have some editorial comments which I will give to the editors. No objections raised.
DavidF: Issue 189 is closed with the proposed text. JeanJacquesM will send xmlp-comment text.

-- Issue 186
DavidF: the proposal is that we provide explicit text specifying uniqueness constraints on the attributes ref and id. Is there any objection to closing the issue with this proposal?
HFN: aren't we duplicating some external work?
Noah: XML Schema doesn't apply, DTDs we don't want; any other external work? We could say "these attributes have the semantics as if a DTD was there."
Jacek: this might interfere with other application attributes named id and ref, wouldn't it?
Gudge: no, not really, we'd say this only for SOAP Encoding id and ref attributes.
DavidF: again, any objection to the proposed text? No objections raised.
DavidF: issue 186 closed with the original proposed text. MartinG will send text to xmlp-comment.

-- Issue 176
Proposal: we'll clarify the rules on how some particular "important" information items can or can not be changed. This has changed slightly with the resolution to 137. Specifically, it clarifies what a SOAP receiver and sender MUST and MUST NOT do with respect to some information items in some places.
Noah: it might need to be changed with some cleanups (DTD related) we make.
DavidF: any objections? No objections raised.
DavidF: issue 176 is closed with the proposal. Henrik to send text to xmlp-comments.
-- Issue 187
Noah: the bindings should describe some kind of a binding failure, and a badly formed SOAP message would be such (others being transmission link breakage etc.). We'll have to introduce a broad model for binding failures. It should affect the binding framework and the state machines. The TBTF should come up with a good answer to the broader issue and we'll resolve 187 as a special case of that.
DavidF: any objections to moving this to the TBTF? No objections raised.
DavidF: we will send this issue back to the TBTF to make a proposal on failure description.

-- Issue 167
DavidF: it is proposed that we issue a health warning.
Gudge: this issue should be covered by the rewrite of Part 2, sections 2 and 3.
No objections raised to this resolution, with the understanding that Gudge will check 167 is indeed covered by the rewrite.

-- Inconsistencies in versioning model (from the agenda addendum)
Henrik: VersionMismatch is on the namespace URI; the Upgrade header has the whole root element QName. Proposal: we'll check everything on QNames. This carries a lot of editorial work. We'll also mandate sending the Upgrade header.
DavidF: can we agree on this? No objections were raised.
DavidF: we accept the proposal.

10. Assign people to create proposals
194: Encoding style in SOAP Header/Body: Henrik will create a proposal.
195: Why mandate RPC return value QName? No volunteer.
163: Jacek volunteered to come up with a proposal.
191: Noah volunteered to come up with a proposal.
192: Chris volunteered to summarise the positions in the discussion and create a proposal.
193: The issue list already contains a proposal. We'll put it on the agenda for next week's telcon.

DavidF thanks the editors for their hard work, and reminds the WG that we shall have a new version of the spec to read on Friday.
End of call.
http://www.w3.org/2000/xp/Group/2/03/20-pminutes.html
I've successfully processed the image frames using a shader. Now I need to encode the processed frames to video. The GLSurfaceView.Renderer provides the onDrawFrame(GL10 ..) method. It's in this method that I'm attempting to read the image frames using just glReadPixels() and then place the frames on a queue for encoding to video. On its own, glReadPixels() is much too slow; my frame rate is in the single digits. I'm attempting to speed this up using Pixel Buffer Objects. This is not working. After plugging in the PBO, the frame rate is unchanged. This is my first time using OpenGL and I do not know where to begin looking for the problem. Am I doing this right? Can anyone give me some direction? Thanks in advance.

public class MainRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
    .
    .
    public void onDrawFrame(GL10 gl10) {
        // Create a buffer to hold the image frame
        ByteBuffer byte_buffer = ByteBuffer.allocateDirect(this.width * this.height * 4);
        byte_buffer.order(ByteOrder.nativeOrder());

        // Generate a pointer to the frame buffers
        IntBuffer image_buffers = IntBuffer.allocate(1);
        GLES20.glGenBuffers(1, image_buffers);

        // Create the buffer
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, image_buffers.get(0));
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, byte_buffer.limit(), byte_buffer, GLES20.GL_STATIC_DRAW);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, image_buffers.get(0));

        // Read the pixel data into the buffer
        gl10.glReadPixels(0, 0, this.width, this.height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, byte_buffer);

        // Encode the frame to video
        enQueueForEncoding(byte_buffer);

        // Unbind the buffer
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    }
    .
    .
}
http://www.anddev.org/android-2d-3d-graphics-opengl-problems-f55/pixel-buffer-objects-on-android-t2176826.html
This YAML Tutorial Explains What is YAML, Basic Concepts of YAML such as data types, YAML Validator, Parser, Editor, Files, etc with the help of Code Examples using Python:

Text processing in computer science helps programmers to create configurable programs and applications. Markup languages play a vital role in storing and exchanging data in a human-readable format. Furthermore, programmers use markup languages as common and standard data interchange formats between different systems. Some examples of markup languages include HTML, XML, XHTML, and JSON.

We have shared information on one more markup language in this easy to follow YAML Tutorial. This tutorial helps the readers in finding answers to the below-mentioned questions. Learners can take the first steps and understand the mystery of markup languages in general and YAML in particular.

The questions include:
- Why do we need markup languages?
- What does YAML stand for?
- Why was YAML created?
- Why do we need to learn YAML?
- Why is it important today to learn YAML?
- What type of data can I store in a YAML?

This guide is useful for experienced readers also, as we discuss concepts in the context of programming in general and also in the context of software testing. We will also cover topics such as Serialization and Deserialization here.

What Is YAML

Creators of YAML initially named it "Yet Another Markup Language." However, with time the acronym changed to "YAML Ain't a MarkUp Language." YAML is an acronym that refers to itself and is called a recursive acronym. We can make use of this language to store data and configuration in a human-readable format.

YAML is an elementary language to learn. Its constructs are easy to understand too. Clark, Ingy, and Oren created YAML to address the complexities of understanding other markup languages, which are difficult to understand and whose learning curve is steeper than that of YAML.
To make learning more comfortable, as always, we make use of a sample project. We host this project on Github with MIT license for anyone to make modifications and submit a pull request if required. You can clone the project using the command below. git clone git@github.com:h3xh4wk/yamlguide.git However, if required, you can download the zip file for the code and the examples. Alternatively, readers can clone this project with the help of IntelliJ IDEA. Please complete the section on prerequisites to install Python and configure it with IntelliJ IDEA before cloning the project. Why Do We Need Markup Languages It is impossible to write everything in software code. It is because we need to maintain code from time to time, and we need to abstract the specifics to external files or databases. It is a best practice to reduce the code to as minimum as possible and create it in a manner that it doesn’t need modification for various data inputs that it takes. For example, we can write a function to take input data from an external file and print its content line by line rather than writing the code and data together in a single file. It is considered a best practice because it separates the concerns of creating the data and creating the code. The programming approach of abstracting the data from code ensures easy maintenance. Markup languages make it easier for us to store hierarchical information in a more accessible and lighter format. These files can be exchanged between programs over the internet without consuming much bandwidth and support the most common protocols. These languages follow a universal standard and support various encodings to support characters almost from all spoken languages in the world. The best thing about markup languages is that their general use is not associated with any system command, and this characteristic makes them safer and is the reason for their widespread and worldwide adoption. 
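The "read data from an external file instead of hard-coding it" idea mentioned above can be sketched as follows (the file name and contents here are placeholders for illustration, not part of the tutorial's project):

```python
from pathlib import Path

# Hypothetical input file standing in for externally maintained data.
data_file = Path("data.txt")
data_file.write_text("line one\nline two\n")

def print_lines(path):
    """Print the content of a file line by line; the code stays the same
    no matter what data the file contains."""
    for line in Path(path).read_text().splitlines():
        print(line)

print_lines(data_file)
```

Changing the program's output now only requires editing data.txt, not the code, which is exactly the separation of concerns the section describes.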
Therefore, you might not find any YAML commands that we can directly run to create any output.

Benefits Of Using A YAML File

YAML has many benefits. The below-given table shows a comparison between YAML and JSON. JSON stands for JavaScript Object Notation, and we use it as a data-interchange format. Due to the benefits of YAML over other file formats such as JSON, YAML is more prevalent among developers for its versatility and flexibility.

Pre-Requisites

We first install Python and then configure Python and its packages with IntelliJ IDEA. Therefore, please install IntelliJ IDEA if not already installed before proceeding.

Install Python

Follow these steps to install and set up Python on Windows 10.

Step #1: Download Python and install it by selecting the setup as shown in the below image.
Step #2: Start the setup and select customize the installation. Select the checkbox of Adding Python to PATH.
Step #3: Customize the location of Python as displayed in the image.
Step #4: Move ahead with the installation. At the end of the installation wizard, disable the path limit on Windows by clicking the option on the wizard.

Now, the Python setup is complete.

Configure Python With IntelliJ IDEA

Let's now configure IntelliJ IDEA with Python. The first step is to install the plugins to be able to work on Python projects.

Install Python Plugins:
- Install Python Community Edition
- Install Python Security

Follow the below steps to complete the configuration.

Step #1: Use the File Menu and go to Platform Settings. Click on the Add SDK button.
Step #2: Select the Virtual environment option and select Python's base interpreter as the one that was installed in the previous step.
Step #3: Now select the virtual environment created in the previous step under the Project SDK Settings. We recommend one virtual environment for one project.
Step #4 [Optional]: Open the config.py file from the project explorer and click on install requirements, as shown in the below image.
Ignore the ipython requirement if required by unchecking the option in the Choose package dialog. Now, you can head over to the next section to learn the basics of YAML.

Basics Of YAML

In this section, we mention the basics of YAML with the help of an example file called config.yml and config.py. We firmly believe that explaining the concepts of YAML in parallel with its use in a programming language makes learning better. Therefore, while explaining the basics of YAML, we also involve the use of Python to read and write the data stored in YAML.

Now let's create or open config.yml in our respective editors and understand the YAML.

---
quiz:
  questions:
    - "the Universe is ever-expanding?"
  answers:
    - [8, "pluto"]
    - cats
    - 3.141592653589793
    - true
    - 4
    - null
    - no

# explicit data conversion and reusing data blocks
extra:
  refer: &id011      # give a reference to data
    x: !!float 5     # explicit conversion to data type float
    y: 8
  num1: !!int "123"  # conversion to integer
  str1: !!str 120    # conversion to string
again: *id011        # call data by giving the reference

Notice that YAML files have the .yml extension. The language is case-sensitive. We use spaces and not tabs for indentation. Along with these basics, let's understand the data types.

In the YAML above, we have represented the information on a quiz. A quiz is depicted as a root-level node, having attributes such as a description, questions, and answers.

YAML Data Types

YAML can store Scalars, Sequences, and Mappings. We have displayed how to write all the necessary data types in the file config.yml. Scalars are strings, integers, floats, and booleans. Data of type String is enclosed in double-quotes. However, YAML doesn't impose writing strings in double-quotes, and we can make use of > or | for writing long strings in multiple lines. Look at the various data types and mapped values in the below table.

Enlisted below are some of the worth-noting additional elements of a YAML file.

Document

Now notice the three dashes ---.
It signifies the start of a document. We store the first document with a quiz as the root element, and description, questions & answers as child elements with their associated values.

Explicit Data Types

Observe the section key called extra in config.yml. We see that with the help of double exclamation marks, we can explicitly mention the data types of the values stored in the file. We convert an integer to a float using !!float. We use !!str to convert an integer to a string, and use !!int to convert a string to an integer. Python's YAML package helps us in reading the YAML file and stores it internally as a dictionary. Python stores dictionary keys as strings, and automatically converts values to Python data types unless explicitly stated using "!!".

Read YAML File In Python

In general, we make use of a YAML editor and a YAML validator at the time of writing YAML. A YAML validator checks the file at the time of writing. The Python YAML package has a built-in YAML parser that parses the file before storing it in memory.

Now let's create and open config.py in our respective editors with the below content.

import yaml
import pprint

def read_yaml():
    """A function to read a YAML file"""
    with open('config.yml') as f:
        config = yaml.safe_load(f)
    return config

if __name__ == "__main__":
    # read the config yaml
    my_config = read_yaml()
    # pretty print my_config
    pprint.pprint(my_config)

To test that you have completed the outlined steps mentioned above, run config.py. Open the config.py file in IntelliJ IDEA, locate the main block and run the file using the play icon. Once we run the file, we see the console with the output.

In the read_yaml function, we open the config.yml file and use the safe_load method of the YAML package to read the stream as a Python dictionary, and then return this dictionary using the return keyword. The my_config variable stores the content of the config.yml file as a dictionary. Using Python's pretty print package, called pprint, we print the dictionary to the console.
Notice the above output. All the YAML tags correspond to Python's data types so that the program can further use those values. This process of constructing Python objects from the text input is called Deserialisation.

Write YAML File In Python

Open config.py and add the following lines of code just below the read_yaml method and above the main block of the file.

def write_yaml(data):
    """ A function to write YAML file"""
    with open('toyaml.yml', 'w') as f:
        yaml.dump(data, f)

In the write_yaml method, we open a file called toyaml.yml in write mode and use the YAML package's dump method to write the YAML document to the file. Now add the below lines of code at the end of the file config.py.

# write A python object to a file
write_yaml(my_config)

Save config.py and run the file using the below command or using the play icon in the IDE.

python config.py

We see that the above command prints the contents of config.yml to the console or the system's output. The Python program writes the same content to another file called toyaml.yml. The process of writing a Python object to an external file is called Serialisation.

Multiple Documents In YAML

YAML is quite versatile, and we can store multiple documents in a single YAML file. Create a copy of the file config.yml as configs.yml and paste the below lines at the end of the file.

---
quiz:
  description: |
    This is another quiz, which
    is the advanced version of the previous one
  questions:
    q1:
      desc: "Which value is no value?"
      ans: Null
    q2:
      desc: "What is the value of Pi?"
      ans: 3.1415

The three dashes --- in the above snippet mark the beginning of a new document in the same file. Use of | after the description tag enables us to write multi-line text of type string. Here, in the new document, we have stored each question and its answer as a separate mapping nested under questions.

Now create a new file called configs.py and paste the below-mentioned code into the file.
import yaml
import pprint


def read_yaml():
    """ A function to read YAML file"""
    with open('configs.yml') as f:
        config = list(yaml.safe_load_all(f))
    return config


def write_yaml(data):
    """ A function to write YAML file"""
    with open('toyaml.yml', 'a') as f:
        yaml.dump_all(data, f, default_flow_style=False)


if __name__ == "__main__":
    # read the config yaml
    my_config = read_yaml()
    # pretty print my_config
    pprint.pprint(my_config)
    # write A python object to a file
    write_yaml(my_config)

Notice the changes in the read_yaml and write_yaml functions. In read_yaml, we use the safe_load_all method of the YAML package to read all the documents present in configs.yml as a list. Similarly, in write_yaml, we use the dump_all method to write the list of all the previously read documents to a new file called toyaml.yml.

Now run configs.py.

python configs.py

The output of the above command is displayed below.

[{'quiz': {'answers': [[8, 'pluto'],
                       'cats',
                       3.141592653589793,
                       True,
                       4,
                       None,
                       False],
           ...Universe is ever-expanding?"]}},
 {'quiz': {'description': 'This is another quiz, which\n'
                          'is the advanced version of the previous one\n',
           'questions': {'q1': {'ans': None,
                                'desc': 'Which value is no value?'},
                         'q2': {'ans': 3.1415,
                                'desc': 'What is the value of Pi?'}}}}]

The output is similar to the previously mentioned single-document output. Python converts every document in configs.yml into a Python dictionary, which makes it easier to further process and use the values.
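Once safe_load_all has turned configs.yml into a list of dictionaries, plain Python is enough to work with every document. The list below is written out by hand here to mirror the output shown above:

```python
# Sketch: iterating over the list of documents that safe_load_all returns.
# The list is hand-written to mirror the configs.yml output shown above.
documents = [
    {"quiz": {"answers": [[8, "pluto"], "cats", 3.141592653589793,
                          True, 4, None, False]}},
    {"quiz": {"description": "This is another quiz, which\n"
                             "is the advanced version of the previous one\n",
              "questions": {"q1": {"ans": None,
                                   "desc": "Which value is no value?"},
                            "q2": {"ans": 3.1415,
                                   "desc": "What is the value of Pi?"}}}},
]

# Count the answers/questions held by each quiz document.
for i, doc in enumerate(documents, start=1):
    quiz = doc["quiz"]
    n = len(quiz.get("answers", [])) or len(quiz.get("questions", {}))
    print(f"document {i}: {n} entries")
# document 1: 7 entries
# document 2: 2 entries
```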
image: !!binary |
  iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8/5+hHgAHggJ/PchI7wAAAABJRU5ErkJggg==

Q #3) What is the difference between > and | tags in YAML?

Answer: Both > and | allow writing values over multiple lines in YAML. The greater-than symbol > folds the line breaks into spaces, while | preserves them literally. Values written using | need not be escaped. For example, we can store HTML by using |.

template: |
  <p>This is a test paragraph</p>
  <blockquote>
    <p> This is another paragraph</p>
  </blockquote>

Q #4) What is the significance of ... at the end of the YAML file?

Answer: Three periods ... are optional identifiers. They can be used to mark the end of a document in a stream.

Q #5) How to write comments in the YAML file?

Answer: We use # to write a single-line comment. YAML doesn't support multi-line comments, so we need to use # on multiple lines, as shown below.

# this is
# a single line as well as multi-line
# comment

Conclusion

In this guide, we covered the steps of preparing the development environment in both Windows and Linux to get started with YAML. We discussed nearly all the concepts of YAML's basic data types, the YAML editor, and the YAML parser. We have also highlighted the benefits of using YAML vis-a-vis other markup languages and provided code examples with the help of a supporting sample project. We hope that learners can now use YAML to abstract data from application logic and write efficient and maintainable code.

Happy Learning!!
https://www.softwaretestinghelp.com/yaml-tutorial/
In VB.net how do I concatenate the varied value of 3 tags all with the same name?

How do I concatenate all of the interests from the following XML file into one variable using vb.net 4?

Code:
<xml>
  <client>
    <name>Paul Davis</name>
    <interest>Football</interest>
    <interest>Swimming</interest>
    <interest>Internet</interest>
  </client>
</xml>

Code:
Dim buffer As String
For Each itm In clients
    If (itm.<name>.Value = CType(Session("Client"), String)) Then
        Continue For
    End If
    buffer &= "<table><tr>"
    buffer &= String.Format("<td>{0}</td>", itm.<name>.Value)
    buffer &= String.Format("<td>{0}</td>", ...) ' The interests should be selected here
    buffer &= "</tr></table>"
Next
lblOutput.Text = buffer

Hey NKeuxmuis,

I'm not quite sure how you're parsing the XML file. itm.<name>.Value? I've never seen that before :-) Anyway, you have to use the System.XML namespace (and I have a feeling you sort of are). But in any case, here is a sample:

You should get a pretty good idea from there. So now, all you have to do is just loop through each element inside the <client> element. Here is some pseudo code:

Code:
String s;
for each itm in clients
    if itm.name = "interest" then
        s += itm.value;
    end if
next

You are able to create any HTML server control on the go. There are also various data controls available to you, which act as a template, onto which you can directly bind data (XML or from a database). Here is a sample that applies to your situation:

Regards,
Mike

Hey Mike,

Thanks for the informative link. Actually, I have also been in the habit of storing tags in strings.
It gave me a lot of headache once while debugging, but all old habits die hard :-)

hi, do you make the code by writing it yourself or using a program?
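For illustration only (this is Python rather than VB.NET, using the standard library's ElementTree), the gather-and-join idea the thread is after looks like this:

```python
# Illustrative sketch (Python, xml.etree) of the same idea: collect every
# <interest> under <client> and join them into one string.
import xml.etree.ElementTree as ET

xml_doc = """
<xml>
  <client>
    <name>Paul Davis</name>
    <interest>Football</interest>
    <interest>Swimming</interest>
    <interest>Internet</interest>
  </client>
</xml>
"""

root = ET.fromstring(xml_doc)
for client in root.findall("client"):
    # findall returns every repeated <interest> element, in document order
    interests = ", ".join(e.text for e in client.findall("interest"))
    print(client.findtext("name"), "->", interests)
# Paul Davis -> Football, Swimming, Internet
```

In VB.NET 4 the equivalent would be something like String.Join(", ", itm.<interest>.Select(Function(x) x.Value)) using LINQ to XML, since itm.<interest> yields all same-named child elements.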
http://www.codingforums.com/asp-net/225047-vb-net-how-do-i-concatenate-varied-value-3-tags-all-same-name.html?s=6581e88ba719d3ae26542c6ecb778c97
PDFTron PDFNet SDK Google Group

PDFNet SDK from PDFTron Systems Inc. PDFNet SDK is the complete PDF toolkit for .NET, JAVA, C/C++, PYTHON, PHP, RUBY, Objective-C. Using PDFNet SDK developers can write stand-alone, cross-platform, and reliable commercial applications that can read, write, edit, display and print PDF documents.

Support 2013-05-17T19:41:50Z
Converting MS Word to PDF on iOS / Android via PDFTron SDK
Q: I downloaded the iOS SDK from your website. I tried to convert a document to a pdf file. I used the [Convert ToPdf: in_filename:] function in my application. I wrote several lines of code:
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsPath = [paths objectAtIndex:0];

Tomas Hofmann 2013-05-17T16:34:21Z
What is the difference between PDFDrawDemo and PDFViewCtrlDemo, and where do the Tools fit in
Q: In the Samples solution, there is a PDFDrawDemo and a PDFViewCtrlDemo. They both seem to work similarly in that they display PDFs. Also, I am interested in the tools, which provide interactive features such as annotation editing. Where do they fit in?
A: The PDFDrawDemo demonstrates the use of our PDFDraw class. PDFDraw is a

Ryan 2013-05-17T16:12:35Z
MonoTouch/Xamarin.iOS linker error with libTools
Q: Basically I created MonoTouch bindings for the PDFNet library. That part I finally got more or less working, but I also have to get the tools library linked in, and that is where I have hit a brick wall. I have the latest versions of everything and I have made a custom build of the tools project, which now works fine when I compile it in a test project

Ryan 2013-05-17T15:20:09Z
Generated bmp has incorrect/corrupted output
Q: I would like to explain a bit about my program. First, the user creates a PDF file. After this, the PDF file is converted to a Bitmap and shown on the screen.
Before converting to Bitmap I have checked the PDF file; it looks as good as the file that I sent you. I think that the problem is not in

Ryan 2013-05-16T18:22:50Z
.Net Exception from HRESULT: 0x800700C1
Q) 05:57:59 Failed to optimize document [7213040]. System.BadImageFormatException: Could not load file or assembly 'PDFNet.dll' or one of its dependencies. is not a valid Win32 application. (Exception from HRESULT: 0x800700C1) This has been happening since I got the updated DLL (Please see

David Černý 2013-05-14T23:54:47Z
PDF/A-3a conversion with document level attachment
Hello, we are evaluating PDFNet SDK, but we have the following problem: we use this code to test PDF -> PDF/A-3a conversion:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using pdftron;
using pdftron.Common;
using pdftron.SDF;
using pdftron.PDF;
using pdftron.PDF.PDFA;

David Černý 2013-05-13T13:24:53Z
PDF/A-3A conversion and validation
Hello, I have slightly modified the sample PDFATestCS_2010 (different input file + PDFACompliance.Conformance.e_Level3A) // PDF/A Conversion string [link]) which I will use to redact PDF documents in my app. I now need to integrate this into a GUI similar to redaction annotations in Acrobat Pro. I'm looking at the PDFView WPF example and I'm trying to figure out how to

Support 2013-05-10T16:59:41Z
Extracting and merging redacted PDF content
Q: I am working with the PDF SDK (.net) and am specifically looking into redaction. I am able to redact regions just fine with a rectangle and that seems to work great. What I need to be able to do is, before redacting, get the information from that rectangular region and save it elsewhere and then

Support 2013-05-09T17:27:03Z
How do I maintain PDF/X compatibility with the latest version of PDFNet SDK?
Q: We use the PDFNet library to export graphic designs to PDF. Recently, after updating the library to the most recent version, we've encountered a problem where our export filter no longer produces a PDF/X compliant PDF document with the 'save to PDF/X' settings enabled, whereas the old one was doing this pretty well.

James 2013-05-09T00:43:34Z
iOS Form Editing and Validation
Q: I have the following queries on PDFTRON; we are planning to buy it for an iOS application. 1. How can we handle field validation in PDFTRON? If we add JavaScript while creating the pdf, it won't work; we have to manually parse the JavaScript and do the validation using Objective-C code. Is there any

Vincent Ycasas 2013-05-08T17:26:09Z
Printing to a Network Printer Does Not Work.
Question: When I try to print with a local printer, I was able to print PDF documents using the following code: But if I try to select a network printer, the printing process failed with the following error message:

Support 2013-05-08T17:23:05Z
How do I read / write PDF annotations to a separate file ?
Q: I'm currently trying to write out all of the annotations to a file separate from the PDF so the annotations can be managed separately from the report. I'm able to iterate through all of the pages and find all of the annotations, but when I try to write to a file, I'm not seeing what I

James Dustin 2013-05-08T04:58:57Z
How to store a 1 or 8 bit DIB in a pdf as a thumbnail image
Hi, I've been trying to use the pdfnet sdk to add a thumbnail image to a pdf page. I'm using the sdk with an existing application that is supplying the images to me as DIBs, sometimes 24-bit and sometimes 8- or 1-bit depending on the contents of the page. Everything is fine for 24-bit, but the 8-bit or 1-bit images are stored

James 2013-05-08T00:33:14Z
How do I allow users to edit TRNFields on iOS?
Q: On iOS, I was trying to set focus on a specific trnfield (like becomeFirstResponder for UITextField). Please provide a direction to achieve this.
A: TRNFields are a PDF concept and not a UI element; they inherit directly from NSObject and not from UIResponder or any other UI class. Are you

Support 2013-05-07T18:40:27Z
Viewing PDF in Adobe AIR on Android etc.
Q: I ran across the PDFTron SDKs and was wondering if you have tried using them within an AIR ANE? AIR apps for mobile don't really have a good solution for viewing pdfs, especially on Android, so I was wondering if your solution could do this.
A: For viewing PDFs, there are a couple of options:

Support 2013-05-07T00:11:36Z
Android: Automatic switching to/from landscape/portrait mode
Q: We are working on the Android version of the SDK. We are trying to display a PDF and it displays fine in portrait mode, but when I switch to landscape mode, the orientation of the PDF does not change. Please let me know how to accomplish this.
A: When rotating the device, Android restarts the activity by default. To

Support 2013-05-06T20:57:05Z
Is PDFNet threadsafe for parallel rendering?
Q: I'm looking for a PDF SDK which can be used to extract PDF pages as images in parallel. Can I use PDFNet [link] for this?
A: You can use the current version of PDFNet (at the moment v.5.9.2) to render pages from the same document in parallel.

Support 2013-05-02T22:44:12Z
Viewing and streaming of remote PDFs via PDFNet WinRT SDK
Q: I recently downloaded an evaluation of the PDFNet Mobile SDK for the WinRT platform. I have a question about opening the PDF file. I looked at the Getting Started tutorial, which shows opening a PDF file from the file system.
Can I open a PDF directly from the Http response? I got an exception

Support 2013-05-02T17:42:08Z
Convert Word to PDF using java or .net with PDFNet SDK?
Q: I would like to know about a PDFTron component that I can use to convert Word to PDF. We would require this component in one of our products, which is hosted on a Windows Server. The application is using .NET version 3.5, but we are also considering JAVA. We want to build a module where our

Support 2013-05-02T17:23:01Z
How do I know if an error occured when converting html page to PDF via pdftron.PDF.Convert.HTML2PDF?
Q: When using HTML2PDF, is there a way to know if an error occurred on the html page? We can convert it to respond with a 505 response code on error. Is there a way to determine if there is a response code on the page generation? The following code was not executed during an error due

James 2013-05-01T19:02:51Z
Using PDFNet for iOS with Xamarin.iOS
Q: Our developers would like to use the PDFNet Mobile PDF SDK for iOS within Xamarin.iOS (MonoTouch). So, here are some quick questions:
1) Does PDFNet Mobile PDF SDK for iOS support Xamarin.iOS?
2) Are there Xamarin.iOS bindings or information we can use?

Vincent Ycasas 2013-05-01T18:41:09Z
Removing Digital Signatures From a PDF Document
Question: How can I remove a digital signature from a PDF document? If I have a PDF document that has two signatures in it, can I remove one signature without invalidating the other signature?

Anderson Konzen 2013-04-29T18:44:40Z
Multiple APK Support and supported architectures when using Google Play
Q: I followed your suggestions to create separate APKs in order to use the Multiple APK support in Google Play.
However, I found in Google Play's documentation ([link]) that: When using the Android NDK, you can create a single APK that supports

Farhan Ghumra 2013-04-29T14:20:25Z
How to print the document with PDFNetWinRT ?
I have all the samples of the PDFTron SDK for WinRT, but the printing sample is not working. I need urgent help. Can anyone please give me working code to print a PDF? Thanks.

Support 2013-04-23T23:33:52Z
How do I add custom namespace properties in XMP metadata when generating/converting to PDF/A?
Q: For PDF/A conversion we rely on 'pdftron.PDF.PDFA.PDFACompliance' from the PDFTron PDFNet SDK. The conversion works great; however, I would like to be able to add some custom properties / metadata (i.e. extended XMP metadata in a custom namespace).
A: If you need to have tight control over the XMP metadata in the final PDF/A

minkbear 2013-04-22T19:42:24Z
Unable to get modified date of annotation using php
Hi, I am evaluating the PDFNet SDK for my company to make sure that it works well with annotations. Unfortunately, it seems that it's unable to get the created date/modified date of an annotation, which are the key points of my decision. Please take a look at my reproduction steps and the sample pdf file. Reproduce Step:

Support 2013-04-22T17:32:45Z
How to control the appearance of anchor text in a redacted region?
Q: I am trying out PDFTron PDFNet to check whether it can be used for PDF redaction. I have provided replacement text, a replacement font and size to the redact function; however, the replaced text appears very small, regardless of the font size provided. The below code is used for redaction:
void Redact(string input, string output, ArrayList rarr, PDFDoc doc) {

Support 2013-04-19T19:52:54Z
How do I convert EPUB to PDF, XPS, XOD using PDFTron ?
Q: I am interested in converting an eBook in ePub format to work with the Web Viewer. Do you have any samples for EPUB to PDF conversion? I could only find the HTML2PDF converter on the PDFTron website ([link]), which looks like it converts HTML pages to PDF.

Ryan 2013-04-19T16:37:04Z
Old PDFNet library still loaded in memory (PHP)
Q) We recently purchased a PDFTron enterprise license, and added the key to all calls to PDFNet::Initialize. We tested our software and everything worked as it did with the demo version. On our staging environment, we recently got this error when calling PDFNet::Initialize: PDFNet: Bad License Key. PDFNet SDK will work in the demo mode.

mikesowerbutts@yahoo.co.uk 2013-04-18T15:34:06Z
Error when saving FDF as XFDF
Hi, I have just tested working code with another PDF, one which contains a Rich Media annotation (an embedded video), and my app now crashes in this function (see the comment). This is an iOS app.
-(void)exportAnnotations{
    if([Utils IsConnectedToInternet]){
        NSString *docFilename = @"document";

Vincent Ycasas 2013-04-17T16:58:21Z
How do I add timestamps when signing a PDF document?
Question: I was able to add a digital signature to a PDF with the Name, Reason, and Location set in the document's signature dictionary, but how do I add the signing time? Is a timestamp server required, or can the timestamp be added locally by code?

mikesowerbutts@yahoo.co.uk 2013-04-17T14:40:47Z
IsModified seems to frequently be true, when seemingly, no changes have been made
Hi, I have an iOS app which downloads a PDF file from a server, generates thumbnails of each page, saves them as .jpgs and then allows you to view the PDF. I am seeing some strange behaviour in that as soon as I load a newly downloaded PDF into a PDFDoc object, IsModified returns true.
I have managed to get around this by loading the PDF into a PDFDoc, then

Anderson Konzen 2013-04-16T16:40:33Z
PDFViewCtrl.scrollTo() goes to the wrong position on Android
Q: I am trying to scroll the document to where the form is located using convPagePtToClientPt() and scrollTo(), but the page is being scrolled to the wrong position. How do I use scrollTo correctly?
A: Regarding the use of the scrollTo() method, there are two things to keep in mind:

Anderson Konzen 2013-04-16T16:30:46Z
Filling forms with Korean font on Android
Q: I'm trying to fill in a form text field, but after I enter the text I can only see dots or square characters. The language of my device is configured to be Korean. Does PDFViewCtrl support Unicode?
A: Yes, PDFViewCtrl supports Unicode. The problem with the characters occurs because the SDK can't find a Korean font that supports those glyphs. Android

Vincent Ycasas 2013-04-16T16:29:21Z
The PDFNet printer driver is already installed, but conversion still fails.
Question: The PDFNet printer driver was installed by doing one of the following:
1. By using the installer: [link]
2. By running the following code: if (!pdftron.PDF.Convert.Printer.IsInstalled()) {
However, after re-attempting to convert the documents again, the process

Anderson Konzen 2013-04-16T16:26:15Z
Flickering when using gotoNextPage/gotoLastPage on Android
Q: I have the following code to navigate to the next and previous page, but every time the page is refreshed. How do I avoid that?
case gotoLastPage:
{
    mPDFView.gotoPreviousPage();
    mPDFView.setPageViewMode(mZESEditor.PAGE_VIEW_FIT_WIDTH);
    break;
}
case gotoNextPage:
{
    mPDFView.gotoNextPage();
    mPDFView.setPageViewMode(mZESEditor.PAGE_VIEW_FIT_WIDTH);

Tomas Hofmann 2013-04-16T16:23:13Z
How do I get the text under a highlight annotation
Q: How do I get the text that is covered by a highlight (or any text markup) annotation?
A: This can be done by using the text extractor. You have to start it on the page and use the GetTextUnderAnnot(annot) function. In C#, it would look something like this:
Annot annot = // some annot
pdftron.PDF.TextExtractor te = new pdftron.PDF.TextExtractor();

minkbear 2013-04-16T11:08:30Z
How to get modified date of annotation using PHP
Hi, I am trying to get the modified date in my PDF, which has many comments, and those comments show a modified date, something like 06/10/2011 09:40:04, when viewed with Adobe Reader. Here is my code:
$Dates = $text->getDate();
echo $Dates->GetYear();  // Show 2011
echo $Dates->GetMonth(); // Show empty string
echo $Dates->GetDay();   // Show empty string

Ryan 2013-04-15T21:58:04Z
PDFNet and National Instruments LabView
Q) When using PDFNet to open and search PDF files in the standalone version, we have no problems. We recently incorporated PDFNet into our integrated version, which works well in many scenarios, however it causes a crash when run from the National Instruments LabView package. Our viewer opens correctly, but fails when it comes to opening the PDF itself.

James 2013-04-15T16:34:55Z
Interactively Add Digital Signature Using PDFViewCtrl
Q: I am evaluating your mobile pdf sdk and I have a few questions regarding the ability to digitally sign documents. I see that the sdk allows for filling forms, but how do I enable the user to interactively sign the form when they tap on a digital signature field?
Dong Nguyen 2013-04-15T03:12:29Z
How to Load xod file with range header request
I have seen the demo of WebViewer on the homepage. I see the xod file is loaded really fast. I used Fiddler to capture the requests, and I see that WebViewer usually sends a hidden request url like: [link] [link]

Ryan 2013-04-12T19:19:49Z
SmartAssembly obfuscator automatic memory release feature may cause errors
Q)
A) It sounds like SmartAssembly was somehow causing over 1gb of memory to be released incorrectly. Then over time, as the freed memory gets overwritten, you started getting random errors.

Tomas Hofmann 2013-04-12T17:25:32Z
How to make underline annotations line up properly on rotated text
Q: When creating my own underline annotations, the line doesn't appear at the right location. It can appear above or to the side of the text.
A: Underline annotations (and any Text Markup) will consider the line between the first and second quad point to be the "bottom" of the annotation. In the case of an underline annotation, this means that the

Support 2013-04-12T00:55:37Z
Parsing / Saving Annotations from XFDF string
Q: You currently support creating and loading annotations embedded within a PDF file. However, we are storing annotations separately from our PDF files and "side-loading" them when needed. The question is, does the PDFTron API support parsing from these raw bytes? I have included some sample data

Support 2013-04-11T22:55:25Z
PDF/A Conversion and validation using PDFTron
Q: We are searching for tools that can help us convert PDF/A compatible files for our institutional repository. The initial plan is to check the compatibility of the user-uploaded file, then convert them if they are PDF/A convertible.
So two functions are needed: 1) Is the file PDF/A convertible (only on pdf files, or can be doc and

Support 2013-04-11T18:38:41Z
How do I display a PDF form in a browser and allow the client to edit several specific fields?
Q: I want to display a PDF in a browser, allow the client to edit several specific fields, and retrieve the updated doc.
A: You can use PDFTron WebViewer for the task. To find out more about WebViewer please see: [link] See a demo of the form-filling capabilities here:

Anil 2013-04-11T14:39:46Z
TIFF to PDF via XPS
We used a tiff image that we render to xps and then convert the XPS to PDF. This approach works for most tiff images except a few older tiff image formats. For these tif documents (prior to 2002) the image conversion from tiff to XPS returns an empty document. I was told that a pdf optimizer would fix this issue.

Ryan 2013-04-10T18:17:21Z
The actual page dimensions of the generated PDF differ from that specified by HTML2PDF.SetPageSize
Q) We want to make a pdf file with the following dimensions: 28cm by 40cm, using HTML2PDF.setPaperSize. If we set these values in the code, we end up with a pdf with dimensions 28,02cm by 40,01cm. We thought that the dimensions do not match exactly due to the fact that the file is created in pixels. By increasing the number of dots per inch, we wanted to get a
http://groups.google.com/group/pdfnet-sdk/feed/atom_v1_0_topics.xml?num=50
Red Hat Bugzilla – Full Text Bug Listing

I was testing another bug and allocated a VM a very low amount of RAM, forcing minstg2.img to be used (booted via PXE). The install fails, complaining that it couldn't read group data. tty3 says /usr/sbin/lspci not found. Thinking this a little odd, I went and explored:

in yuminstall.py (in doGroupSetup):

    if iutil.inXen() or \
       iutil.inVmware() or \
       rpmUtils.arch.getBaseArch() == "i386" and "pae" not in iutil.cpuFeatureFlags():
        if self.ayum.comps._groups.has_key("virtualization"):
            del self.ayum.comps._groups["virtualization"]

and in iutil.py:

    def inVmware():
        out = execWithCapture("/usr/sbin/lspci", ["-vvv"])
        if "VMware" in out:
            return True
        return False

Being that this code is in a critical path of setting up groups, and installs using minstg2 will fail, we probably need to do an updates.img to remove the dependence on inVmware(). My quick suggestion is to remove the conditional above, just remove the virtualization comps group, and direct users that have problems to this updates.img. It's unlikely that anyone who needs to use minstg2 actually needs/wants/can use virtualization anyway. For the future, we should probably have a testcase using minstg2.img. I *really* wish we would have noticed this sooner. It is with a heavy heart that I put this on F9Blocker (knowing that we won't actually block since we can't, but we need a solution before release).

Can you try ? And *sigh* -- this has been there for like 3 weeks at this point.

I think that more of the point is that we need to stop having as many stupid differences between minstg2 and stage2. The only difference should really be the presence of all of the X and font stuff. Anything else should be the same. Maybe I'll see how much that impacts the size of the image later this week.
I can't quite tell what it's doing, the last thing on tty3 is 'moving (1) to step basepkgsel'. The shell on tty2 is fairly well unresponsive as well. Are you trying on i386 or x86_64? I just tried on i386 and it was fine... x86_64 will need a different image since we don't include the multilib pairs in the install images. And the failure mode you're seeing is what I'd sort of expect on x86_64 Changing version to '9' as part of upcoming Fedora 9 GA. More information and reason for this action is here: *** Bug 447221 has been marked as a duplicate of this bug. *** Reporter, could you please reply to the previous question? If you won't reply in one month, I will have to close this bug as INSUFFICIENT_DATA. Thank you.
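For reference, the lspci dependency shown above could be guarded so that images without /usr/sbin/lspci (such as minstg2.img) simply report False instead of failing. This is an illustrative sketch, not the patch that shipped; exec_with_capture is a stand-in for anaconda's execWithCapture:

```python
import os
import subprocess


def exec_with_capture(cmd, args):
    # Stand-in for anaconda's execWithCapture, for illustration only.
    return subprocess.run([cmd] + args, capture_output=True, text=True).stdout


def in_vmware():
    # Guard: images like minstg2.img do not ship lspci at all, so treat a
    # missing binary as "not VMware" instead of raising an error.
    if not os.access("/usr/sbin/lspci", os.X_OK):
        return False
    return "VMware" in exec_with_capture("/usr/sbin/lspci", ["-vvv"])
```

With this guard, doGroupSetup could keep calling inVmware() even on minimal stage2 images without crashing during group setup.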
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=445974
29 March 2011 21:51 [Source: ICIS news]

HOUSTON (ICIS)--Demand from the finished lube sector increases from March through May, as players prepare for the busy

Group II and III base oils were widely limited, leaving no spot activity potentials in those tiers, traders said.

Finished lubricant price hikes emerged from a number of lube oil marketers, including Chevron, Shell, ExxonMobil, CITGO and Petro

Effective dates for the finished lube increases spread from 11 March to 18 April. A round of price increases stemming from rising feedstock costs and the hikes in finished lubes took place during March.

Group I spot prices for light viscosity grades SN100 were last assessed by ICIS at $3.80-3.95/gal and SN150 at $3.83-3.93/gal, reflecting the March increased prices. Brightstock base oil spot prices were assessed for March at $4.87-5.00/gal.
http://www.icis.com/Articles/2011/03/29/9448231/us-base-oil-spot-business-sidelined-by-tight-supply.html
Resolving certification errors Here are some of the actions you can take, and links to additional references, that can help you resolve many of the issues related to certification of apps submitted to the Windows Store. This list is not exhaustive, but it should help you to resolve some of the more common issues that can prevent your app from being listed in the Store, or that can be found later during a spot check of your app. If you need more help, visit our forums to discuss your issue with community members. Certification errors fall into three categories: - Security tests - Technical compliance - Content compliance Security tests The app is checked for viruses and malware. Technical compliance The app is tested using the Windows App Certification Kit. For info about running these tests on your own computer, see Using the Windows App Certification Kit. Content compliance The app is reviewed to ensure it complies with the App certification requirements for the Windows Store. Information to help you comply with some of the specific certification requirements is provided below. 1. Windows Store apps provide value to the customer 1.1 Your app must be fully functional and offer customers unique, creative value or utility in all the languages and markets that it supports Common reasons why apps fail this requirement: - The value or usefulness of the app is not clear. - The app has a name or icon that is very similar to other apps in the Store. - The app is one of a set of apps that are only a slight variation on the same content. Collections of apps that are functionally similar should generally be delivered as a single, unified app. 1.2 Your app must be testable when it is submitted Common reasons why apps fail this requirement: - The app displays placeholder messages (such as "coming soon" or "not available yet") for primary user scenarios. - The app doesn't work on all the architectures that it claims to support. For example, if you state that your app works on any CPU, it must work on all architectures, including ARM. - The app description uses screen shots or statements that imply features that don't appear to be implemented.
- The app plays background audio, but does not correctly implement play, pause, and play/pause events to enable users to control audio playback. - The app description doesn't explicitly state hardware or network requirements. - The app uses the default icons generated by Microsoft Visual Studio or included in SDK samples. 3. Windows Store apps behave predictably 3.1 You must use only the Windows Runtime APIs to implement the features of your Windows Store app To check whether your app complies with this policy, ask yourself: If this Windows Store app was deployed on a Windows RT system, would it still function the same way and provide the same value to the user? If it would not, then it may not pass certification. (Remember that Windows RT systems do not support third-party desktop applications.) 3.2 Your app must not stop responding, end unexpectedly, or exhibit errors that significantly and adversely impact the customer experience Common reasons why apps fail this requirement: - The app crashes on launch. - The app crashes randomly or repeatedly. - The app freezes and the user has to close and restart the app. - The app requires a Windows component library that the Windows Store doesn't support. Here are some tips for avoiding or resolving these issues. - Make sure your development tools are up to date. - Before publishing to the Store, make sure the app's code uses the CurrentApp class (and not CurrentAppSimulator). - Test apps that implement the Store commerce APIs (Windows.ApplicationModel.Store) to verify that they handle typical exceptions, such as loss of network connectivity. - Run the app with no network connectivity to verify that the app doesn't crash. - Test your app on a machine with a fresh installation of Windows and no additional software. - Test your app on all architectures that it supports, on additional computers, and in different configurations to help identify potential errors. For more about testing your app, see Using the Windows App Certification Kit.
Tip After your app has been certified and listed in the Store, you can review app quality data to find out about problems it may have experienced after being installed on customers' computers. 3.9 All app logic must originate from, and reside in, your app package One reason why an app might fail this requirement is that it downloads a remote script and subsequently executes that script in the local context of your app package. 3.12 The app must comply with technical requirements There are a number of reasons an app may not meet this requirement: - The app must pass the tests provided by the latest Windows App Certification Kit. Be sure you run the current version and resolve any problems before you submit your app. For more info, see Using the Windows App Certification Kit. - Direct3D apps must support a minimum feature level. If your app includes an ARM or a Neutral package, it must support Microsoft Direct3D feature level 9_1; otherwise, it must support the minimum feature level you indicate. If you choose a minimum feature level higher than 9_1, make sure your app checks whether the current hardware meets the minimum requirements and displays a message to the user if it does not. - If your app contains Windows Runtime components, they must conform to the Windows Runtime type system. Review certification requirement 3.12.3 for detailed requirements. - If the app calls third-party device drivers, it must be a privileged app. See Device sync and update for Windows Store device apps in Windows 8.1 for more information. - The app packages must be formatted correctly and include a valid app manifest. See App package requirements for more info. If you declare other networking capabilities not included in the list above, you'll need to publish a privacy policy as well.
You need to provide access to your privacy policy from a link in your app's description and also in the app's settings. Your app must not publish personal information (such as email address or user account information) without explicitly obtaining the opt-in consent of the user. This requirement also applies if the personal information is that of a person other than the one using the app. Tip If you collect or publish personal information, you'll need to provide a link to your privacy policy in your app's description and from the app's settings (as displayed in the Settings charm). Your app must not jeopardize or compromise user security, or the security or functionality of the Windows device(s), system or related systems, and must not have the potential to cause harm to Windows users or any other person. You can check by testing it on a computer while running other apps. If the other apps behave differently (such as becoming less responsive or losing data) while your app is running, you must find the reason why and fix the issue. Finally, an app could be considered noncompliant if it presents a situation where, if the app fails, people could be harmed (for example, a GPS app marketed for use in life-threatening emergencies, or an air traffic control app). Apps that allow for control of a device without human manipulation would also be considered noncompliant. 4.6 Your app must comply with Windows Push Notification Service (WNS) requirements if it uses WNS notifications Review the details in certification requirement 4.6 as well as the Guidelines for push notifications. 4.7 If your app includes in-app purchase, billing functionality or captures financial information, the following requirements apply: 4.7.1 If your app uses the Store's in-app purchase API (Windows.ApplicationModel.Store namespace) for in-app purchases: Apps using the Windows.ApplicationModel.Store namespace can only sell in-app products that can be used within the app (i.e. digital items or services).
If your app includes in-app billing functionality or captures financial account information, but does not use the Windows.ApplicationModel.Store namespace, the following apply for the listed account types: Apps submitted from either account type (individual or company) that do not use the namespace for in-app purchases must identify the commerce transaction, authenticate the user, and obtain user confirmation for transactions. The app must give customers the option to require an authentication on every transaction, or to turn off in-app transactions altogether, if they choose to do so. For either account type, if your app collects credit card information (or uses a third-party processor to do so), the payment processing must meet the current PCI Data Security Standard (PCI DSS). For more info, see PCI SSC Data Security Standards Overview. In addition, apps submitted from individual accounts are not permitted to directly collect sensitive financial account info or payments from customers. For more details, review certification requirement 4.7. 4.10 You may not use the Microsoft commerce engine to facilitate charitable contributions or sweepstakes Be sure to research applicable law and comply with all requirements. You must also state clearly that Microsoft is not the fundraiser or sponsor of the promotion. 5. Windows Store apps are appropriate for a global audience Common reasons why an app might fail this requirement: - Apps with a rating over PEGI 16, ESRB MATURE, or that contain content that would warrant such a rating, are not generally allowed unless the app is a game, is rated by a third-party ratings board, and otherwise complies with the certification requirements. - Metadata and other content that you submit to accompany your app (including app screenshots) must contain only content that would merit a rating of PEGI 12, ESRB EVERYONE, or Windows Store 12+, or lower. This applies even if the app itself is rated for a higher age group.
- If an app provides a gateway to retail content, user-generated content, or web-based content that is likely to violate this requirement, it must include a mechanism that requires the user to opt in to receiving access to such content. Additionally, because different markets have different cultural standards, greater scrutiny may be applied to your app depending on which markets you choose. If your app fails this requirement but you believe its content should be allowed, consider whether the content might be inappropriate for some of the markets you chose. Additional requirements are detailed in the subsections of certification requirement 5. For more info, see: 6. Windows Store apps are easily identified and understood 6.6 The capabilities you declare must relate to the core functions and value proposition of your Windows Store app, and the use of those declarations must be compliant with our app capability declarations The app can fail to meet this requirement if it declares capabilities that don't appear to be necessary to perform the described functionality. Note that declaring a lot of capabilities will probably cause the app to be closely scrutinized and to take longer to go through certification. Make sure not to declare more capabilities than your app needs. Important Apps using this capability can only be submitted from developer accounts which can demonstrate they have acquired an Extended Validation (EV) code signing certificate from a certificate authority (CA). See Account types, locations, and fees for more info.
Apps that declare the Documents library capability must facilitate cross-platform offline access to specific Microsoft OneDrive content using valid OneDrive URLs or Resource IDs, and must save open files to the user's OneDrive. 6.13 The metadata and other materials you provide to describe your app must accurately and clearly reflect the source, function and features of your app Here are some ways you can make sure your app meets this requirement: - Make sure that it's easy for customers to understand what your app does. Your description, category/subcategory, and images (including tiles) must be reasonably related to the content of the app. - If your app has a trial version, make sure the functionality in the trial reasonably resembles the full functionality. - Disclose any limitations or restrictions that your app has—for example, if it doesn't fully support all input methods (touch, keyboard, and mouse), or if content is limited to certain markets. This last point is particularly relevant for apps listed in the Rest of World (ROW) market (if the app would work properly in one region or country but not in another). - If you declare your app as accessible, you must comply with accessibility guidelines. - Make sure to localize the app (including its description, screenshots, and promotional images) in each language that it supports. (The Windows Store differentiates between language support and market distribution. For example, you are allowed to submit an app in French in the US market. This requirement is about languages, not markets.) Each app must support (and be localized in) at least one of the certification languages, and be functional in all languages for which you submit it. - Make sure that your app has a similar experience across processor types and targeted operating systems. An app doesn't have to support all processor types, but to the customer it must look and act like the same app on all processor types that it does support.
If you want to support features that are found in one architecture and not in another, you must create two separate apps with different names. - Do not imply that your app is associated with a company, government body, or other entity if you do not have permission to make that representation. 7. Desktop apps must follow additional requirements Users should be sent directly to the location where they can quickly and easily download your desktop app. Check that the link is correct, and make sure the information on your purchase page is clear and complete. For additional help resolving errors, visit our forums to discuss your issue with community members.
http://msdn.microsoft.com/en-us/library/windows/apps/Hh921583.aspx
Hyperopt is a powerful tool for tuning ML models with Apache Spark. Read on to learn how to define and execute (and debug) the tuning optimally! There is no simple way to know which algorithm, and which settings for that algorithm ("hyperparameters"), produces the best model for the data. Any honest model-fitting process entails trying many combinations of hyperparameters, even many algorithms. One popular open-source tool for hyperparameter tuning is Hyperopt. It is simple to use, but using Hyperopt efficiently requires care. Whether you are just getting started with the library, or are already using Hyperopt and have had problems scaling it or getting good results, this blog is for you. It will explore common problems and solutions to ensure you can find the best model without wasting time and money. It will show how to: - Specify the Hyperopt search space correctly - Debug common errors - Utilize parallelism on an Apache Spark cluster optimally - Optimize execution of Hyperopt trials - Use MLflow to track models What is Hyperopt? Hyperopt is a Python library that can optimize a function's value over complex spaces of inputs. For machine learning specifically, this means it can optimize a model's accuracy (loss, really) over a space of hyperparameters. It's a Bayesian optimizer, meaning it is not merely randomly searching or searching a grid, but intelligently learning which combinations of values work well as it goes, and focusing the search there.
There are many optimization packages out there, but Hyperopt has several things going for it: - Open source - Bayesian optimizer – smart searches over hyperparameters (using a Tree of Parzen Estimators, FWIW), not grid or random search - Integrates with Apache Spark for parallel hyperparameter search - Integrates with MLflow for automatic tracking of the search results - Included already in the Databricks ML runtime - Maximally flexible: can optimize literally any Python model with any hyperparameters This last point is a double-edged sword. Hyperopt is simple and flexible, but it makes no assumptions about the task and puts the burden of specifying the bounds of the search correctly on the user. Done right, Hyperopt is a powerful way to efficiently find a best model. However, there are a number of best practices to know with Hyperopt for specifying the search, executing it efficiently, debugging problems and obtaining the best model via MLflow. Specifying the space: what's a hyperparameter? When using any tuning framework, it's necessary to specify which hyperparameters to tune. But, what are hyperparameters? They're not the parameters of a model, which are learned from the data, like the coefficients in a linear regression, or the weights in a deep learning network. Hyperparameters are inputs to the modeling process itself, which chooses the best parameters. This includes, for example, the strength of regularization in fitting a model. Scalar parameters to a model are probably hyperparameters. Whatever doesn't have an obvious single correct value is fair game. Some arguments are not tunable because there's one correct value. For example, xgboost wants an objective function to minimize. For classification, it's often reg:logistic. For regression problems, it's reg:squarederror. But, these are not alternatives in one problem. It makes no sense to try reg:squarederror for classification.
Similarly, in generalized linear models, there is often one link function that correctly corresponds to the problem being solved, not a choice. For a simpler example: you don't need to tune verbose anywhere! Some arguments are ambiguous because they are tunable, but primarily affect speed. Consider n_jobs in scikit-learn implementations. This controls the number of parallel threads used to build the model. It should not affect the final model's quality. It's not something to tune as a hyperparameter. Similarly, parameters like convergence tolerances aren't likely something to tune. Too large, and the model accuracy does suffer, but small values basically just spend more compute cycles. These are the kinds of arguments that can be left at a default. In the same vein, the number of epochs in a deep learning model is probably not something to tune. Training should stop when accuracy stops improving via early stopping. See "How (Not) To Scale Deep Learning in 6 Easy Steps" for more discussion of this idea. Specifying the space: what range to choose? Next, what range of values is appropriate for each hyperparameter? Sometimes it's obvious. For example, if choosing Adam versus SGD as the optimizer when training a neural network, then those are clearly the only two possible choices. For scalar values, it's not as clear. Hyperopt requires a minimum and maximum. In some cases the minimum is clear; a learning rate-like parameter can only be positive. An Elastic net parameter is a ratio, so must be between 0 and 1. But what is, say, a reasonable maximum "gamma" parameter in a support vector machine? It's necessary to consult the implementation's documentation to understand hard minimums or maximums and the default value. If in doubt, choose bounds that are extreme and let Hyperopt learn what values aren't working well. For example, if a regularization parameter is typically between 1 and 10, try values from 0 to 100. The range should include the default value, certainly.
At worst, it may spend time trying extreme values that do not work well at all, but it should learn and stop wasting trials on bad values. This may mean subsequently re-running the search with a narrowed range after an initial exploration to better explore reasonable values. Some hyperparameters have a large impact on runtime. A large max tree depth in tree-based algorithms can cause it to fit models that are large and expensive to train, for example. Worse, sometimes models take a long time to train because they are overfitting the data! Hyperopt does not try to learn about runtime of trials or factor that into its choice of hyperparameters. If some tasks fail for lack of memory or run very slowly, examine their hyperparameters. Sometimes it will reveal that certain settings are just too expensive to consider. A final subtlety is the difference between uniform and log-uniform hyperparameter spaces. Hyperopt offers hp.uniform and hp.loguniform, both of which produce real values in a min/max range. hp.loguniform is more suitable when one might choose a geometric series of values to try (0.001, 0.01, 0.1) rather than arithmetic (0.1, 0.2, 0.3). Which one is more suitable depends on the context, and typically does not make a large difference, but is worth considering. 
To recap, a reasonable workflow with Hyperopt is as follows: - Choose what hyperparameters are reasonable to optimize - Define broad ranges for each of the hyperparameters (including the default where applicable) - Run a small number of trials - Observe the results in an MLflow parallel coordinate plot and select the runs with lowest loss - Move the range towards those higher/lower values when the best runs’ hyperparameter values are pushed against one end of a range - Determine whether certain hyperparameter values cause fitting to take a long time (and avoid those values) - Re-run with more trials - Repeat until the best runs are comfortably within the given search bounds and none are taking excessive time Use hp.quniform for scalars, hp.choice for categoricals Consider choosing the maximum depth of a tree building process. This must be an integer like 3 or 10. Hyperopt offers hp.choice and hp.randint to choose an integer from a range, and users commonly choose hp.choice as a sensible-looking range type. However, these are exactly the wrong choices for such a hyperparameter. While these will generate integers in the right range, in these cases, Hyperopt would not consider that a value of “10” is larger than “5” and much larger than “1”, as if scalar values. Yet, that is how a maximum depth parameter behaves. If 1 and 10 are bad choices, and 3 is good, then it should probably prefer to try 2 and 4, but it will not learn that with hp.choice or hp.randint. Instead, the right choice is hp.quniform (“quantized uniform”) or hp.qloguniform to generate integers. hp.choice is the right choice when, for example, choosing among categorical choices (which might in some situations even be integers, but not usually). 
Here are a few common types of hyperparameters, and a likely Hyperopt range type to choose to describe them: One final caveat: when using hp.choice over, say, two choices like "adam" and "sgd", the value that Hyperopt sends to the function (and which is auto-logged by MLflow) is an integer index like 0 or 1, not a string like "adam". To log the actual value of the choice, it's necessary to consult the list of choices supplied. Example:

optimizers = ["adam", "sgd"]
search_space = {
    ...
    'optimizer': hp.choice("optimizer", optimizers)
}

def my_objective(params):
    ...
    the_optimizer = optimizers[params['optimizer']]
    mlflow.log_param('optimizer', the_optimizer)
    ...

"There are no evaluation tasks, cannot return argmin of task losses" One error that users commonly encounter with Hyperopt is: There are no evaluation tasks, cannot return argmin of task losses. This means that no trial completed successfully. This almost always means that there is a bug in the objective function, and every invocation is resulting in an error. See the error output in the logs for details. In Databricks, the underlying error is surfaced for easier debugging. It can also arise if the model fitting process is not prepared to deal with missing / NaN values, and is always returning a NaN loss. Sometimes it's "normal" for the objective function to fail to compute a loss. Sometimes a particular configuration of hyperparameters does not work at all with the training data — maybe choosing to add a certain exogenous variable in a time series model causes it to fail to fit. It's OK to let the objective function fail in a few cases if that's expected. It's also possible to simply return a very large dummy loss value in these cases to help Hyperopt learn that the hyperparameter combination does not work well. Setting SparkTrials parallelism optimally Hyperopt can parallelize its trials across a Spark cluster, which is a great feature.
Building and evaluating a model for each set of hyperparameters is inherently parallelizable, as each trial is independent of the others. Using Spark to execute trials is simply a matter of using "SparkTrials" instead of "Trials" in Hyperopt. This is a great idea in environments like Databricks where a Spark cluster is readily available. SparkTrials takes a parallelism parameter, which specifies how many trials are run in parallel. Of course, setting this too low wastes resources. If running on a cluster with 32 cores, then running just 2 trials in parallel leaves 30 cores idle. Setting parallelism too high can cause a subtler problem. With a 32-core cluster, it's natural to choose parallelism=32 of course, to maximize usage of the cluster's resources. Setting it higher than cluster parallelism is counterproductive, as each wave of trials will see some trials waiting to execute. However, Hyperopt's tuning process is iterative, so setting it to exactly 32 may not be ideal either. It uses the results of completed trials to compute and try the next-best set of hyperparameters. Consider the case where max_evals, the total number of trials, is also 32. If parallelism is 32, then all 32 trials would launch at once, with no knowledge of each other's results. It would effectively be a random search. parallelism should likely be an order of magnitude smaller than max_evals. That is, given a target number of total trials, adjust cluster size to match a parallelism that's much smaller. If targeting 200 trials, consider parallelism of 20 and a cluster with about 20 cores. There's more to this rule of thumb. It's also not effective to have a large parallelism when the number of hyperparameters being tuned is small. For example, if searching over 4 hyperparameters, parallelism should not be much larger than 4. 8 or 16 may be fine, but 64 may not help a lot. With many trials and few hyperparameters to vary, the search becomes more speculative and random.
It doesn’t hurt, it just may not help much. Set parallelism to a small multiple of the number of hyperparameters, and allocate cluster resources accordingly. How to choose max_evals after that is covered below. Leveraging task parallelism optimally There’s a little more to that calculation. Some machine learning libraries can take advantage of multiple threads on one machine. For example, several scikit-learn implementations have an n_jobs parameter that sets the number of threads the fitting process can use. Although a single Spark task is assumed to use one core, nothing stops the task from using multiple cores. For example, with 16 cores available, one can run 16 single-threaded tasks, or 4 tasks that use 4 each. The latter is actually advantageous — if the fitting process can efficiently use, say, 4 cores. This is because Hyperopt is iterative, and returning fewer results faster improves its ability to learn from early results to schedule the next trials. That is, in this scenario, trials 5-8 could learn from the results of 1-4 if those first 4 tasks used 4 cores each to complete quickly and so on, whereas if all were run at once, none of the trials’ hyperparameter choices have the benefit of information from any of the others’ results. How to set n_jobs (or the equivalent parameter in other frameworks, like nthread in xgboost) optimally depends on the framework. scikit-learn and xgboost implementations can typically benefit from several cores, though they see diminishing returns beyond that, but it depends. One solution is simply to set n_jobs (or equivalent) higher than 1 without telling Spark that tasks will use more than 1 core. The executor VM may be overcommitted, but will certainly be fully utilized. If not taken to an extreme, this can be close enough. This affects thinking about the setting of parallelism. If a Hyperopt fitting process can reasonably use parallelism = 8, then by default one would allocate a cluster with 8 cores to execute it. 
But if the individual tasks can each use 4 cores, then allocating a 4 * 8 = 32-core cluster would be advantageous. Ideally, it's possible to tell Spark that each task will want 4 cores in this example. This is done by setting spark.task.cpus. This will help Spark avoid scheduling too many core-hungry tasks on one machine. The disadvantage is that this is a cluster-wide configuration, which will cause all Spark jobs executed in the session to assume 4 cores for any task. This is only reasonable if the tuning job is the only work executing within the session. Simply not setting this value may work out well enough in practice. Optimizing Spark-based ML jobs The examples above have contemplated tuning a modeling job that uses a single-node library like scikit-learn or xgboost. Hyperopt can equally be used to tune modeling jobs that leverage Spark for parallelism, such as those from Spark ML, xgboost4j-spark, or Horovod with Keras or PyTorch. However, in these cases, the modeling job itself is already getting parallelism from the Spark cluster. Just use Trials, not SparkTrials, with Hyperopt. Jobs will execute serially. Hence, it's important to tune the Spark-based library's execution to maximize efficiency; there is no Hyperopt parallelism to tune or worry about. Avoid large serialized objects in the objective function When using SparkTrials, Hyperopt parallelizes execution of the supplied objective function across a Spark cluster. This means the function is magically serialized, like any Spark function, along with any objects the function refers to. This can be bad if the function references a large object like a large DL model or a huge data set.

model = # load large model
train, test = # load data

def my_objective():
    ...
    model.fit(train, ...)
    model.evaluate(test, ...)

Hyperopt has to send the model and data to the executors repeatedly every time the function is invoked. This can dramatically slow down tuning.
Instead, it's better to broadcast these, which is a fine idea even if the model or data aren't huge:

model = # load large model
train, test = # load data
b_model = spark.broadcast(model)
b_train = spark.broadcast(train)
b_test = spark.broadcast(test)

def my_objective():
    ...
    b_model.value.fit(b_train.value, ...)
    b_model.value.evaluate(b_test.value, ...)

However, this will not work if the broadcasted object is more than 2GB in size. It may also be necessary to, for example, convert the data into a form that is serializable (using a NumPy array instead of a pandas DataFrame) to make this pattern work. If not possible to broadcast, then there's no way around the overhead of loading the model and/or data each time. The objective function has to load these artifacts directly from distributed storage. This works, and at least, the data isn't all being sent from a single driver to each worker. Use Early Stopping Optimizing a model's loss with Hyperopt is an iterative process, just like (for example) training a neural network is. It keeps improving some metric, like the loss of a model. However, at some point the optimization stops making much progress. It's possible that Hyperopt struggles to find a set of hyperparameters that produces a better loss than the best one so far. You may observe that the best loss isn't going down at all towards the end of a tuning process. It's advantageous to stop running trials if progress has stopped. Hyperopt offers an early_stop_fn parameter, which specifies a function that decides when to stop trials before max_evals has been reached. Hyperopt provides a function no_progress_loss, which can stop iteration if the best loss hasn't improved in n trials. How should I set max_evals? Below is some general guidance on how to choose a value for max_evals: allow roughly 10-20 evaluations per ordinal (uniform, log-uniform or quantized-uniform) hyperparameter, plus roughly 15 evaluations per unit of "total categorical breadth", where "total categorical breadth" is the total number of categorical choices in the space.
If you have hp.choice with two options "on, off", and another with five options "a, b, c, d, e", your total categorical breadth is 10. By adding the two numbers together, you can get a base number to use when thinking about how many evaluations to run, before applying multipliers for things like parallelism.

Example: you have two hp.uniform, one hp.loguniform, and two hp.quniform hyperparameters, as well as three hp.choice parameters. Two of them have 2 choices, and the third has 5 choices. To calculate the range for max_evals, we take 5 x 10-20 = (50, 100) for the ordinal parameters, and then 15 x (2 x 2 x 5) = 300 for the categorical parameters, resulting in a range of 350-450. With no parallelism, we would then choose a number from that range, depending on how you want to trade off between speed (closer to 350) and getting the optimal result (closer to 450). As you might imagine, a value of 400 strikes a balance between the two and is a reasonable choice for most situations. If we wanted to use 8 parallel workers (using SparkTrials), we would multiply these numbers by the appropriate modifier: in this case, 4x for speed and 8x for optimal results, resulting in a range of 1400 to 3600, with 2500 being a reasonable balance between speed and the optimal result.

One final note: when we say "optimal results", what we mean is confidence of optimal results. It is possible, and even probable, that the fastest value and optimal value will give similar results. However, by specifying and then running more evaluations, we allow Hyperopt to better learn about the hyperparameter space, and we gain higher confidence in the quality of our best seen result.

Avoid cross validation in the objective function

The objective function optimized by Hyperopt, primarily, returns a loss value. Given hyperparameter values that Hyperopt chooses, the function computes the loss for a model built with those hyperparameters.
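To make the max_evals arithmetic above concrete, here is a small helper that transcribes the rule of thumb; the function name and its defaults are my own shorthand for the heuristic described in this post, not part of Hyperopt itself.

```python
def max_evals_components(n_ordinal, choice_counts,
                         per_ordinal=(10, 20), categorical_multiplier=15):
    """Sketch of the rule of thumb above (not an official Hyperopt formula).

    n_ordinal: number of hp.uniform/hp.loguniform/hp.quniform hyperparameters.
    choice_counts: number of options in each hp.choice hyperparameter.
    Returns a (low, high) evaluation range for the ordinal parameters and a
    single number for the categorical parameters.
    """
    breadth = 1
    for count in choice_counts:  # total categorical breadth is the product
        breadth *= count
    ordinal_range = (n_ordinal * per_ordinal[0], n_ordinal * per_ordinal[1])
    categorical = categorical_multiplier * breadth
    return ordinal_range, categorical

# The worked example above: 5 ordinal parameters, hp.choice sizes 2, 2, and 5.
ordinal_range, categorical = max_evals_components(5, [2, 2, 5])
print(ordinal_range, categorical)  # (50, 100) 300
```

Summing the two components gives the base range for max_evals, which is then scaled by the parallelism multipliers discussed above.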
The objective function returns a dict including the loss value under the key 'loss':

return {'status': STATUS_OK, 'loss': loss}

To do this, the function has to split the data into a training and validation set in order to train the model and then evaluate its loss on held-out data. A train-validation split is normal and essential.

It's common in machine learning to perform k-fold cross-validation when fitting a model. Instead of fitting one model on one train-validation split, k models are fit on k different splits of the data. This can produce a better estimate of the loss, because many models' loss estimates are averaged. However, it's worth considering whether cross validation is worthwhile in a hyperparameter tuning task. It improves the accuracy of each loss estimate, and provides information about the certainty of that estimate, but it comes at a price: k models are fit, not one. That means each task runs roughly k times longer. This time could also have been spent exploring k other hyperparameter combinations. That is, increasing max_evals by a factor of k is probably better than adding k-fold cross-validation, all else equal.

If k-fold cross validation is performed anyway, it's possible to at least make use of the additional information that it provides. With k losses, it's possible to estimate the variance of the loss, a measure of uncertainty of its value. This is useful to Hyperopt because it is updating a probability distribution over the loss. To do so, return an estimate of the variance under the key 'loss_variance'. Note that the losses returned from cross validation are just an estimate of the true population loss, so return the Bessel-corrected estimate:

losses = # list of k model losses
return {'status': STATUS_OK,
        'loss': np.mean(losses),
        'loss_variance': np.var(losses, ddof=1)}

Choosing the Right Loss

An optimization process is only as good as the metric being optimized. Models are evaluated according to the loss returned from the objective function.
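The cross-validation return value described above can be packaged in a small helper; the function name is my own, and STATUS_OK is defined inline here with the same value as hyperopt's constant so the sketch runs standalone.

```python
import numpy as np

STATUS_OK = 'ok'  # same value as hyperopt.STATUS_OK

def cv_result(losses):
    """Package k-fold losses for Hyperopt: mean loss plus its sample variance.

    Uses the Bessel-corrected (ddof=1) variance estimate, as recommended above,
    since the k fold losses are a sample estimate of the true population loss.
    """
    return {
        'status': STATUS_OK,
        'loss': float(np.mean(losses)),
        'loss_variance': float(np.var(losses, ddof=1)),
    }

# Hypothetical losses from a 3-fold cross validation:
result = cv_result([0.2, 0.4, 0.3])
print(result['loss'], result['loss_variance'])
```

An objective function doing k-fold cross validation would build this dict from its per-fold losses and return it directly to fmin.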
Sometimes the model provides an obvious loss metric, but that may not accurately describe the model's usefulness to the business. For example, classifiers often optimize a loss function like cross-entropy loss. This expresses the model's "incorrectness" but does not take into account which way the model is wrong: returning "true" when the right answer is "false" is as bad as the reverse in this loss function. However, it may be much more important that the model rarely returns false negatives ("false" when the right answer is "true"). Recall captures that better than cross-entropy loss, so it's probably better to optimize for recall. It's reasonable to return the recall of a classifier in this case, not its loss. Note that Hyperopt is minimizing the returned loss value, whereas higher recall values are better, so it's necessary in a case like this to return -recall.

Retraining the best model

Hyperopt selects the hyperparameters that produce a model with the lowest loss, and nothing more. Because it integrates with MLflow, the results of every Hyperopt trial can be automatically logged with no additional code in the Databricks workspace. The results of many trials can then be compared in the MLflow Tracking Server UI to understand the results of the search. Hundreds of runs can be compared in a parallel coordinates plot, for example, to understand which combinations appear to be producing the best loss. This is useful in the early stages of model optimization where, for example, it's not even clear yet what is worth optimizing, or what ranges of values are reasonable.

However, the MLflow integration does not (cannot, actually) automatically log the models fit by each Hyperopt trial. This is not a bad thing: it may not be desirable to spend time saving every single model when only the best one would possibly be useful.
It is possible to manually log each model from within the function if desired; simply call MLflow APIs to add this or anything else to the auto-logged information. For example:

def my_objective():
    model = # fit a model
    ...
    mlflow.sklearn.log_model(model, "model")
    ...

Although up for debate, it's reasonable to instead take the optimal hyperparameters determined by Hyperopt, re-fit one final model on all of the data, and log it with MLflow. While the hyperparameter tuning process had to restrict training to a train set, it's no longer necessary to fit the final model on just the training set. With the 'best' hyperparameters, a model fit on all the data might yield slightly better parameters. The disadvantage is that the generalization error of this final model can't be evaluated, although there is reason to believe that was well estimated by Hyperopt. A sketch of how to tune, and then refit and log a model, follows:

all_data = # load all data
train, test = # split all_data to train, test

def fit_model(params, data):
    model = # fit model to data with params
    return model

def my_objective(params):
    model = fit_model(params, train)
    # evaluate and return loss on test

best_params = fmin(fn=my_objective, …)
final_model = fit_model(best_params, all_data)
mlflow.sklearn.log_model(final_model, "model")

More best practices

If you're interested in more tips and best practices, see these additional resources:
- Hyperopt best practices documentation from Databricks
- Best Practices for Hyperparameter Tuning with MLflow (talk) – SAIS 2019
- Advanced Hyperparameter Optimization for Deep Learning with MLflow (talk) – SAIS 2019
- Scaling Hyperopt to Tune Machine Learning Models in Python – blog – 2019-10-29

Conclusion

This blog covered best practices for using Hyperopt to automatically select the best machine learning model, as well as common problems and issues in specifying the search correctly and executing its search efficiently.
It covered best practices for distributed execution on a Spark cluster and debugging failures, as well as integration with MLflow. With these best practices in hand, you can leverage Hyperopt’s simplicity to quickly integrate efficient model selection into any machine learning pipeline. Use Hyperopt on Databricks (with Spark and MLflow) to build your best model!
https://databricks.com/blog/2021/04/15/how-not-to-tune-your-model-with-hyperopt.html
pry-debugger

Fast execution control in Pry

Adds step, next, and continue commands to Pry using debugger. To use, invoke pry normally:

def some_method
  binding.pry          # Execution will stop here.
  puts 'Hello World'   # Run 'step' or 'next' in the console to move here.
end

pry-debugger is not yet thread-safe, so only use it in single-threaded environments. It only supports MRI 1.9.2 and 1.9.3. For a pure-Ruby approach not reliant on debugger, check out pry-nav. Note: pry-nav and pry-debugger cannot be loaded together.

Support for pry-remote (>= 0.1.4) is also included. It requires explicitly requiring pry-nav, not just relying on Pry's plugin loader. For example, in a Gemfile:

gem 'pry'
gem 'pry-nav'

Stepping through code often? Add the following shortcuts to ~/.pryrc:

Pry.commands.alias_command 'c', 'continue'
Pry.commands.alias_command 's', 'step'
Pry.commands.alias_command 'n', 'next'

Contributions

Patches and bug reports are welcome. Just send a pull request or file an issue. Project changelog.
https://www.rubydoc.info/gems/pry-debugger/0.1.0
Additional Reading: how to set a user's child entity on the same page?

Thanks for the code. I will try to apply it to mine. Can you also explain how to set up a simple Hibernate 4 and Spring 3.1 web application without using JPA, especially the part about how to use LocalSessionFactoryBuilder? Thanks.

Please provide good examples, like CRUD operations and some useful real-world tasks.

Hi, I am not sure what you mean. Could you clarify this a bit?

Hi Petri, Thanks for the link to that blog post from back in 2013. I only just spotted it in my referrer logs! Anyway, I thought I'd mention that for anyone trying to do the same in the world of Spring Boot, I have just knocked up an updated post. It doesn't explain things as much as the original post, but is hopefully useful to anyone struggling to persuade Spring Boot to work with multiple data sources. To be honest, the way to do it is almost identical, but there are a couple of gotchas caused by Spring Boot trying to wire everything up based on conventional names. Hopefully it might be useful to someone out there. Keep up the good work! Steve

Hi Steve, Thank you for sharing your new blog post. I am sure that it is useful to my readers. That is why I shared it on Twitter, and I hope that you get some traffic from Twitter.

Hi Petri, I imported Part 8 into Eclipse to run it. I wanted to use the JpaRepository annotation, so I changed the version of spring-data-jpa to 1.4.1 in pom.xml. In ApplicationContext.java, I added @EnableJpaRepositories(basePackages={"net.petrikainulainen.spring.datajpa.repository"}). When I run it, I get this error:

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'personRepository': FactoryBean threw exception on object creation; nested exception is org.springframework.data.mapping.PropertyReferenceException: No property find found for type net.petrikainulainen.spring.datajpa.model.Person

Can you help me fix it? Many thanks!
Hi, What is the @JpaRepository annotation? I have never heard of it, but I found something from GitHub. Do you mean that you want to annotate your Spring Data JPA repositories with that annotation? By the way, the latest stable release of Spring Data JPA is 1.8.0. I recommend that you use it instead of 1.4.1.

The error means that Spring Data JPA expects to find a property called find from the Person class. Because it cannot find it, it throws an exception. One possible reason for this is that the name of the custom repository implementation is PaginatingPersonRepositoryImpl. This works with Spring Data 1.1, but if you use Spring Data JPA 1.2.0 (or newer), the name of the custom repository implementation must use this syntax: [the name of the actual repository interface][postfix]. Because the example application uses the default configuration, you must change the name of the custom repository implementation to PersonRepositoryImpl. I assume that the reason why the exception is thrown is that Spring Data JPA tries to create implementations for the methods that are declared in the PaginatingPersonRepository interface (it thinks that they are "standard" query methods). Because it cannot find the find property, it throws an exception.

Thanks for the tutorials! They are greatly explained! I just wonder if you know where to put the configuration files in the project structure, and how to reference them? Thanks!

Note: I removed the empty list and the empty configuration class because they made the comment a bit hard to read – Petri.

Hi Juan, If you are creating a web application and you use a server that supports the Servlet API 3.0 (or newer), you can create a class that configures the ServletContext programmatically. This configuration class must implement the WebApplicationInitializer interface.
For example, the example application of this blog post has a package net.petrikainulainen.springdata.jpa.config that contains all @Configuration classes and the custom WebApplicationInitializer class. It doesn't really matter where you put these classes (as long as they are packaged into the created WAR file), but I like to organize them in this way. By the way, the Spring Framework Reference Manual has pretty comprehensive documentation about the Java-based container configuration. I hope that this answers your question. Also, if you have any further questions, don't hesitate to ask them!

Hi Petri, Thanks for your useful blog. I have a problem using Spring Data. I use custom queries to create a Query, and the entities are mapped by EclipseLink with associations. In one particular filter criteria I have a message table and a status table, where messageId has a one-to-many relation to the status table. There are 4 filter criteria: three to be filtered against the message table and one against the status table. My filter query gets all message entities with their statuses, but I want to get only the elements with the latest status that matches the filter criteria. I use pagination, where the page object returns all the message entities that match the status, but I want each message entity with only the latest status that matches the filter criteria.

Hi Sathish, Unfortunately I am not 100% sure what you are trying to do. Could you leave a new comment to this blog post, and add the relevant entities and your query to that comment?

Even though I put the io.spring.platform platform-bom 1.1.0.RELEASE (pom type, import scope) into my POM, it still asks me for a version for Hibernate and the others (only not the Spring ones).

Sorry for the bad comment above. Even though I put the BOM into the dependencyManagement section of my POM, it is still asking for versions. For example, I don't see any version for the JPA provider in your pom.xml. Am I doing something wrong? I think I realized there is platform-bom and spring-framework-bom; the platform one adjusted all versions.

Hi, Don't worry about the bad comment.
It happens to all of us sometimes. I described the dependency management of the example application in the previous part of this tutorial, but you are right. You don't have to worry about the dependency versions because the Spring IO Platform takes care of them after you have enabled it in your pom.xml file. Also, the Spring IO Platform Reference Guide has a section that describes the artifacts which are part of the Spring IO Platform.

Hi Petri, thanks for the wonderful blog. It is very easy to read and understand and yes, very up to date. I have been hooked here for the last two days. I have taken up Spring Data as my topic of study. We are already using it in our project, but I just got curious to understand the configurations which have been done already. We are using XML configuration there. Question 1: Which configuration method is preferable, Java-based or XML? Question 2: There is the following bean definition in our applicationContext. After a little search through its Javadoc and Google, I could only understand that it is used for injecting the entityManager. But it is optional because a default PersistenceAnnotationBeanPostProcessor is registered by the XML tags. Can you please explain a little on this? I am confused as to whether this should be there or not. If it should be, then why? Thanks.

Oops, it seems like the code part was removed once the comment got posted. I was mentioning the "org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" bean declaration in the application context.

Hi Ketki, Thank you for your kind words. I really appreciate them. I think that this is a matter of preference. I use Java configuration, but I know some people who like to use XML configuration. It's probably a remnant from the past, and you don't need it (unless you want to override something). This StackOverflow question provides more details about the role of the PersistenceAnnotationBeanPostProcessor bean.
If you have any additional questions, don't hesitate to ask them.

Hello, I have previously used Spring MVC and Spring Data JPA for a project. For this I did XML configuration. In these examples, you have demonstrated Java configuration, so my question is: 1. Can we use both kinds of configuration – XML and Java – together, or do I need to stay with either of the two throughout the application? Please reply.

Hi, We can use XML and Java configuration together, but there are some restrictions. If we use Java configuration, we can import XML configuration files by annotating a @Configuration class with the @ImportResource annotation, and we can use annotated beans (@Controller, @Component, @Service, and @Repository) as long as these beans are placed in a package that is scanned by the Spring container. However, if we use XML, we cannot import Java configuration classes (or at least I don't know a way to do it). If you have any additional questions, don't hesitate to ask them.

Hi Petri, Thank you for such a nice article; your blogs will help beginners a lot. This is my first Spring Data project and I am facing some issues with the configuration, which I posted in the following link. Can you please have a look and help me with the needful?

Hi Manjunath, Typically the NoSuchBeanDefinitionException is thrown when the Spring container cannot find the bean definition. If you want to get more information about this exception, read this blog post: Spring NoSuchBeanDefinitionException. Because the missing bean is a Spring Data JPA repository, the most likely reason for this error is that the repository interface is not placed in a package that is scanned by Spring Data JPA. In other words, check that the basePackages attribute of the @EnableJpaRepositories annotation contains the correct package. Also, you probably want to change this as well (your entity manager tries to find entities from the wrong package). If you have any additional questions, don't hesitate to ask them.
Suppose a join select query returns multiple columns from different tables. How can the select query return a specific type of List of beans (not a type of entity) in the @Query of an interface extending the JpaRepository interface?

public interface XYZRepository extends JpaRepository {
    @Query("select ordatt.eNo as eNo, ord.Id as Id from Ord ord, Attr ordatt where ord.Id = ordatt.Id")
    public List selectAllSendOutOrderInfo();
}

class XY {
    int eNo;
    int Id;
    // getter and setter
}

Hi, Read this comment. It explains how you can create a query method that returns an object, which is not an entity, by using the @Query annotation and JPQL. Also, if you want to return a list of objects, you can use this technique as well.

Petri, I was reading this topic: Hibernate has an official ConnectionProvider class, which should be used instead of the HikariCP implementation. Using the class HikariConfig I can set properties like pool size, idle timeout, etc., but I can't set the connection provider. I tried to set it in application.properties using this line:

hibernate.connection.provider_class=org.hibernate.hikaricp.internal.HikariCPConnectionProvider

And on PersistenceContext:

properties.put(PROPERTY_HIBERNATE_CONNECION_PROVIDER, environment.getRequiredProperty(PROPERTY_HIBERNATE_CONNECION_PROVIDER));

But I got an error:

Caused by: org.hibernate.boot.registry.selector.spi.StrategySelectionException: Unable to resolve name [org.hibernate.hikaricp.internal.HikariCPConnectionProvider] as strategy [org.hibernate.engine.jdbc.connections.spi.ConnectionProvider]

Do you know if HikariCP automatically sets the Hibernate connection provider to com.zaxxer.hikari.hibernate.HikariConnectionProvider, or do we need to set this manually? Some relevant doc: P.S.: I'm using Hibernate 5 and Hikari 2.4.3.

Hi Guilherme, I have never done this myself, but it seems that if you want to use the new HikariConnectionProvider, you shouldn't use the HikariConfig class.
You should configure HikariCP by using the properties described on the wiki page. In other words, you should pass Hibernate the configuration that sets up the HikariCP connection pool. Also, I don't know if HikariCP supports Hibernate 5 yet. Have you tried downgrading to Hibernate 4.3.6?

Hi Petri, What are your suggestions on transaction commits in Spring? I mean, if we want to commit a single transaction in parts, how can it be achieved? Regards, Mayank.

Hi Mayank, If you want to commit one transaction in several parts, you need to divide it into smaller transactions. The reason for this is that a database transaction must either complete entirely or fail (and have no effect).

Hello Petri, I was looking through your tutorial and found it very informative and helpful. I have a question: why did you implement the TodoMapper class? I typically used mapper classes when using DAOs, and I thought a reason for using a Repository, instead of a DAO, was so that I would not need to use mapper classes for my domain objects anymore. Can you please answer? Thank you, mbeddedsoft

The reason why I need mappers is that I don't want to expose my entities to the web layer. I have also written a blog post that explains why I think that exposing them is a bad idea. However, I don't implement these mappers myself when I write real-world applications. I use Dozer, ModelMapper, or jTransfo. However, since this is a tutorial that is meant for beginners, I want to keep things simple and avoid unnecessary dependencies.

I'd also like to mention MapStruct as another model mapper. It makes use of code generation, but in a way that a human might write it, e.g. personDto.setName(person.getName()). I'm not sure how it compares to the others; I've only used Dozer before, and that was several years ago. I'm sure that they are all pretty much capable of the same thing, but it would be nice to do a feature comparison at some point.

Thank you for sharing. I will take a look at it.
By the way, it would be interesting to see some kind of performance comparison as well.

Thank you. Not only your articles but also your replies to readers' questions are an excellent source of information.

You are welcome. Also, thank you for your kind words. I really appreciate them.

Thank you very much for this guide. I am trying to integrate Spring Data JPA into our application, but I am getting the following error when attempting to start Tomcat:

Error creating bean with name 'academyRepository': Cannot resolve reference to bean 'jpaMappingContext' while setting bean property 'mappingContext'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jpaMappingContext': Invocation of init method failed; nested exception is java.lang.NullPointerException
...
Caused by: java.lang.NullPointerException
at org.springframework.data.jpa.repository.config.JpaMetamodelMappingContextFactoryBean.createInstance(JpaMetamodelMappingContextFactoryBean.java:61)
at org.springframework.data.jpa.repository.config.JpaMetamodelMappingContextFactoryBean.createInstance(JpaMetamodelMappingContextFactoryBean.java:26)
at org.springframework.beans.factory.config.AbstractFactoryBean.afterPropertiesSet(AbstractFactoryBean.java:134)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1637)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1574)

Have you ever seen this before, or do you have any idea what might be causing it? I am using XML configuration and am sure it is correct because this is an established project. The only thing I did was add this entry to our applicationContext.xml: And then I created a single repository:

public interface AcademyRepository extends JpaRepository {
    List findByName(String name);
}

Thanks for any help you may be able to provide.
Hi Jonas, I have never seen this exception myself, but I found this StackOverflow question. It seems that this might be caused by "duplicate" configuration. Unfortunately WordPress removes XML markup from comments, so I cannot "guess" what your problem is. Could you add your application context configuration files to Pastebin and post the link here?

Petri, thank you for this nice tutorial. For me it is quite the right level to introduce me to Spring IO. Additionally, I like your links to corresponding areas like Maven, and how you describe finding the right way between two or more alternatives!

Thank you for your kind words. I really appreciate them!

Hi Petri, I have connected my application with Oracle and MySQL databases and everything is working fine, but when I change the configuration to connect the same application to MS SQL Server, tables are not auto-created from the entities. No exception is returned.

Hi, I don't have a lot of experience with MS SQL Server. I used it in one project and I was able to create the tables automatically. However, I found one interesting StackOverflow answer that might help you to solve your problem.

Hi Dear Petri, The issue has been resolved just by adding both of these lines to the property file:

spring.jpa.database-platform=org.hibernate.dialect.SQLServerDialect
hibernate.dialect=org.hibernate.dialect.SQLServerDialect

Thanks

Hi, Great! I am happy to hear that you were able to solve your problem.

I have been going through various docs for configuring multiple databases with the Spring Boot 1.3.4 release. I also read the Spring doc on how to do this. However, the issue is that all the docs talk about the configuration of two different data sources, but none talks about the configuration of two different database platforms. I mean, I am looking into how to configure, say, SQL Server and Oracle in my Spring Boot REST API. I could not find any way to define two different Hibernate dialects (one for SQL Server and one for Oracle) in my application properties.
Is it even possible to achieve this? Can you please direct me towards any of your blogs or tutorials that could help? Thanks.

Hi, First, I am sorry that it took me so long to answer your question. Second, check out this blog post. Although the example uses two MySQL databases, it is easy to modify the configuration to use different databases (just modify the Spring Boot configuration file).

Hi Dear Petri, I don't know where to post this question on your blog, so I am writing it here. I want to create an application in Spring MVC + Spring Security + Spring Boot configured with both JSPs in WEB-INF/views and Thymeleaf in resources/templates, or simply a Spring Boot application with JSPs in WEB-INF, which should serve the JSP views from the running jar file created with Maven. I have already created web applications with Spring MVC + Spring Security + Spring Boot + Thymeleaf and with Spring MVC + Spring Security + JSP. Thanks

Hi, It is a bit tricky to use JSP with Spring Boot. I have to admit that before you asked this question, I had no idea that it is even possible. However, I found a blog post that explains how you can do it (and still use the executable jar file). If you need to use both Thymeleaf and JSP, you should take a look at this StackOverflow question. I hope that this helps.

I want to develop a simple standalone Spring Data JPA application with Java annotation-based configuration. For this I have prepared an entity and a configuration file. Now, how do I get the entityManager in a DAO class to execute a simple query? With XML-based configuration it is working fine. Please help.

Hi, If you use Spring Data JPA, you don't need to use the entity manager because you can create repository interfaces and use them instead of the entity manager. If you don't know how you can create or use these repository interfaces, you should take a look at my Spring Data JPA tutorial.

This post is titled Spring Data JPA, and in the first step #3 you talk about configuring the entity manager factory bean.
Now you're saying you don't need an entity manager?

Well, you need to configure the entity manager because Spring Data JPA is just a wrapper that requires a JPA provider. However, you don't need to use the entity manager in your code because you can simply use Spring Data JPA repository interfaces.

Thanks! I'm trying to convert a Spring Boot POC to Azure. I need to remove the Spring Boot dependency because I have other fights I need to worry about. So I'm trying to figure out what configuration I need to do.

Hi, You are welcome. Also, it's great that you decided to ask this, because it's true that my original comment was misleading. If you have any other questions, don't hesitate to ask them.

Can you please show how to view the content of the H2 database while debugging the application? I have seen the Stack Overflow comments on how to do it, but I am not clear on which configurations are required.

Your tutorials are very comprehensive. Thank you for your time and effort!

You are welcome.

Hello Petri, Your Spring JPA tutorials have been an invaluable resource. I was wondering, do you have any examples, or do you know how I could separate my Spring Data entities into a separate Maven project and then configure my primary web app to use that data project as a dependency? Thank you, mbedded

Hi, Are you talking about JPA entities? If so, you can simply include the jar in your classpath and set the value of the packagesToScan property of the LocalContainerEntityManagerFactoryBean bean. Note that I haven't tested this, but AFAIK it should do the trick.

Which jar?

The jar that contains your entities.

Hi Petri, Is there a way we can read the JNDI and create the EntityManagerFactory without making the configuration in an XML file?

Hi Pranit, If you want to use a JNDI DataSource, you should take a look at this StackOverflow answer. I hope that it helps.
Spring Data

Spring Data's mission is to provide a familiar and consistent, Spring-based programming model for data access while still retaining the special traits of the underlying data store. Data access as of April 17, 2012, 13:31. @TabulaRasa

Very useful article. Nicely written, so easy to follow. Thanks mate.

Thank you for your kind words. I really appreciate them.

Really helpful. Thank you.

A big help, thanks.

You are welcome.

Thanks, this is useful!

Hi Petri, I am using JPA and I configured everything, but I am still getting javax.persistence.TransactionRequiredException.

Hi, Unfortunately I cannot say what's wrong because you didn't provide enough information, but it seems that for some reason your code isn't run inside a transaction. That's why the JPA provider throws the TransactionRequiredException. Can you put your code on GitHub and share your repository with me?

Hi, I am using:

AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
context.register(TransactionConfig.class);
context.refresh();
context.getBean(serviceClassName);

I am getting an error at refresh():

Bean of type [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean] is not eligible for getting processed by all BeanPostProcessors

I want to use Spring Boot Data JPA with JNDI, deployed on WebSphere. Can you please explain how I can achieve this?

Hi, Unfortunately I have never done this myself. That being said, you should take a look at this blog post.

I followed your tutorial… unfortunately it seems incomplete… it lacks some supporting information in between… what is what and what is the purpose of it (but that is not so problematic). But when I did a copy-paste, I saw in my IntelliJ some variable names in red and insufficient declarations / imports etc. For a newbie like me it was a complete waste of time writing it, trying to understand it, and now having an application like Swiss cheese.
I understand that you’ve spent hours on writing this course, I get it, and you have helped many people… but for me it was no good. I’ve learned almost nothing from it! I wasted more than a day trying to get it working, and for what? A good course is like a joke: if you need to explain it, it’s not that good, and in this case it’s not good for me at all. Sorry. And I see that specialists will never understand beginners, because they skip some knowledge and write everything in shortcuts… it sucks. I’m writing this to let you know, because after searching a lot of courses without Spring Boot I found this one, and… it’s a failure for me. To give you an example of the many small mistakes: you did not include applicationContext-persistence.xml, and some people asked for an example. And in the example there was a line … but you wrote that application.properties should be in the config directory, so the classpath should look like: … Another thing: … – what the heck is that and why is it red, what is it for, etc.? C’mon.

Hi, thank you for posting these quite interesting comments. It seems you had some (maybe a bit unrealistic) expectations when you started reading my Spring Data JPA tutorial, and you got disappointed because you couldn’t just use the code samples without understanding the theory behind them. This is unfortunate. However, it’s quite common that online tutorials have a very limited scope, because Spring is a very complex beast and it’s simply not possible to cover everything in one tutorial (especially if the tutorial is free). That’s why you have to write your tutorial for a very specific audience. Often this means that these tutorials aren’t very useful if you are a beginner. That’s why I think that if you want to learn how to use Spring Framework, the best way to do it is to get a proper book (like Spring in Action) and read it.

A few comments about your improvement ideas: the XML configuration doesn’t belong to the scope of this tutorial. I left it out simply because I don’t use it anymore.
In other words, this was a conscious decision and not a mistake. That being said, I decided to add these files to the example application of this tutorial, because I thought that they could be useful to someone who cannot update his/her application to use Java configuration.

This is a good point. I will add a note that provides additional information about the properties file support of Spring Framework.

Very easy to understand and very helpful content. Thanks for such nice content/knowledge.

I want to map a boolean attribute of an entity to a NUMBER(1) column in a PostgreSQL database (it works fine if the database is Oracle). I have already thought of two approaches: 1. a custom type with annotations, 2. updating the entity to handle 0 and 1 with getters and setters. Both approaches require changing the entity class. Is there any better approach, maybe in configuration, to tell Hibernate/JPA to handle the boolean type with 0/1?
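One way to answer the boolean-to-NUMBER(1) question above without touching getters and setters is a JPA 2.1 AttributeConverter. The sketch below shows only the conversion logic as a plain class so that it stands alone; in a real project the class would implement jakarta.persistence.AttributeConverter<Boolean, Integer> and carry @Converter(autoApply = true). That wiring, and the class name BooleanToIntegerConverter, are assumptions of this sketch, not code from the tutorial.

```java
// Sketch of the conversion logic behind a JPA AttributeConverter that
// maps Boolean <-> an integer column such as NUMBER(1) or NUMERIC(1).
// In a real project, declare:
//   @Converter(autoApply = true)
//   public class BooleanToIntegerConverter
//           implements AttributeConverter<Boolean, Integer> { ... }
// The plain class below keeps the sketch compilable without JPA jars.
public class BooleanToIntegerConverter {

    // Called when the entity attribute is written to the database.
    public Integer convertToDatabaseColumn(Boolean attribute) {
        if (attribute == null) {
            return null;
        }
        return attribute ? 1 : 0;
    }

    // Called when the database value is read into the entity attribute.
    public Boolean convertToEntityAttribute(Integer dbData) {
        if (dbData == null) {
            return null;
        }
        return dbData != 0;
    }
}
```

With autoApply = true the converter is picked up for every Boolean attribute, so neither the entity class nor its getters and setters need to change, which seems to match what the commenter was after.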
https://www.petrikainulainen.net/programming/spring-framework/spring-data-jpa-tutorial-part-one-configuration/?replytocom=1210066
Unit Scope Names

Unit scope names are prefixes that are prepended to unit names in the RAD Studio libraries (VCL, FMX, RTL). That is, names of units, functions, classes, and members have a unit scope name prepended to the unit name, as follows:

Syntax and Description

The unit scope name precedes the unit name: <unitscope>.<unitname>.

For example, the SysUtils unit is now part of the System unit scope, as follows: System.SysUtils, and the Controls unit is part of the Vcl or the FMX unit scope: Vcl.Controls, FMX.Controls.

Unit scope names:
- Classify units into basic groups such as Vcl, System, FMX, and so forth (unit scopes are classified in Unit Scopes).
- Ensure compatibility of the code that you write using the IDE.
- Differentiate members whose names are ambiguous (that is, ensure correct name resolution when a member's name matches the name of a member of another unit).
- Typically begin with a single uppercase letter followed by lowercase letters (such as Data).
- Are typically made up of one element (such as DataSnap), although some are made up of two elements (such as System.Generics).

Third-party software, such as Indy and TeeChart, is not unit-scoped. When developing new code with third-party components, adding unit scope names is not necessary because added uses entries are automatically scoped.

Consider, for example: Classes.TStream. The name Classes.TStream is not considered to be a fully qualified class name, because fully qualified names must be unit-scoped, that is, they must include the unit scope name. In this case, the unit scope name System must be added to the Classes unit name in order to yield a unit-scoped or fully qualified name, as follows: System.Classes.TStream (as in System.Classes.TStream.Seek).

How to Specify Unit-Scoped Unit Names in Your Code

For new development, you must specify the unit scope for units in your application. Choose any of the following ways to do this:

- Everywhere: Fully qualify all names of all members throughout your code.
Using full qualification, including unit scope names, throughout your application ensures the fastest compile time.
- Uses clause or #includes: Fully qualify unit names (with the unit scope and unit names) in the uses clause or #include. Then, in your code, you can partially qualify the names of members of those units that you fully qualified (with unit scope) in the uses clause or #include.
- In Project Options: Add the unit scope names in the Unit scope names option on the Delphi Compiler page in Project Options.

Caution: Using partially qualified names can significantly slow your compile time, because the compilers must resolve all partially qualified names during a compile.

RAD Studio Uses Unit Scopes, and the Help Also Uses Unit Scope Names

The wizards and templates in RAD Studio use and include properly unit-scoped unit names. In the help, some instances of unit, class, and member names do not include the unit scope names. However, the Libraries documentation has full unit scope names in the page titles.

Example

If your code contains:

uses System.SysUtils, System.Types, System.Classes, FMX.Controls;

or:

#include <System.SysUtils.hpp>
#include <System.Types.hpp>
#include <System.Classes.hpp>
#include <FMX.Controls.hpp>

you can specify unqualified member names in your code, such as:

GetPackageInfo // referring to System.SysUtils.GetPackageInfo
TRect          // referring to System.Types.TRect
TNotifyEvent   // referring to System.Classes.TNotifyEvent
TTrackBar      // referring to FMX.Controls.TTrackBar

Unit Scopes

There are more than a few unit scopes, but most of them can be grouped into a few general categories.
The following table lists the general categories and the unit scope names in each category:
- Ten unit scopes are FireMonkey-related (FMX, FMX.ASE, FMX.Bind, FMX.Canvas, FMX.DAE, FMX.DateTimeControls, FMX.EmbeddedControls, FMX.Filter, FMX.ListView, FMX.MediaLibrary).
- The Soap unit scope contains COM-related units.
- The System unit scope has several nested unit scopes, including System.Bindings, System.Generics, System.Math, System.Sensors, and System.Tether.
- Four unit scopes are VCL-related (Vcl, Vcl.Imaging, Vcl.Samples, Vcl.Touch).
- The Xml unit scope contains the four units related to XML processing, such as Xml.Win.msxmldom.

Unit Scopes and the Units in Each Unit Scope

The following table gives the following information:
- For units that are documented in the help, a hyperlink is given to the unit scope, where you will see the units that belong in the unit scope.
- For external units, which are not documented in the help, the units that belong in the unit scope are listed.

Unit Names in Alphabetical Order with Their Unit Scopes

For a reference list of unit names with their associated unit scope names, see Unit Names Alphabetical List with Unit Scopes.
http://docwiki.embarcadero.com/RADStudio/Seattle/en/Unit_Scope_Names