On 9/4/11 7:22 AM, Gilles Sadowski wrote:
> Hello Luc.
>
>>>>> [...]
>>>>>>.
> I'd just like to remind, after all the fuss, that the crux of the matter
> (leaving aside the additional flag issue) was to replace:
> ---CUT---
> final double c = divisor.getReal();
> final double d = divisor.getImaginary();
> if (c == 0.0 && d == 0.0) {
> return NaN;
> }
> ---CUT---
> with
> ---CUT---
> final double c = divisor.getReal();
> final double d = divisor.getImaginary();
> if (c == 0.0 && d == 0.0) {
> return (real == 0.0 && imaginary == 0.0) ? NaN : INF;
> }
> ---CUT---
>
> How this change could hurt existing users of "Complex" (and require a
> "clean" revert and endless arguing) is beyond me. There was a test case
> exercising the new behaviour and the old behaviour could be recovered easily
> if, in the end, the unobvious/inconsistent behaviour was to be retained
> (which would have entailed the addition in the Javadoc of a complete
> explanation as to why it was decided to be so).
>
>>>.
> I took it so because I've never seen such a thing occur for anyone else.
> Was there a security breach that warranted the urgent deletion?
I apologize, Gilles. I honestly thought I was helping. By not just
doing a revision-based revert, you missed the javadoc change that
left the code in an inconsistent state. I did not think you would
be offended by this. I am sorry.
>
>> Text-based exchanges on a mailing list
>> lack tone, gesture, behavior that help understand the state of mind
>> of the peer with which we discuss.
> This is all the more reason to be careful about one's actions. Minimal
> respect for other team members should keep one from deleting their work,
> even if its usefulness is dubious.
>
>> I have met both of you
>> personally; I appreciate both of you. I know for sure that we go
>> forward much more efficiently when we cool down than when we argue on
>> personal or behavioral grounds.
> I agree; thank you for those appeasing words.
>
>
Hi,
I'm trying to make a script that will replace/erase some words from a table of strings.
I want to create a list of key words in a Published Parameter (multiple choice) that will be picked by the user. When I pick one keyword the script works fine, but with more it doesn't work at all. How can I make it loop through all the picked words?
str1 uses the Published Parameter and
str2 uses a hard-coded list of words, but there should be more of them so it would look horrible ;)
EDIT: the keywords are single words. An additional problem I have now is how to match whole words only, e.g. with the keyword HOTEL, Landhotel shouldn't be touched, but Land Hotel should become Land.
And an additional question: is there a way to replace the RegEx matching with something shorter? I'm new to scripting so my code is very basic :)
I have such a script in PythonCaller:
import fme
import fmeobjects
import re

def KeyWord_replacer(feature):
    str1 = feature.getAttribute('Name')
    str2 = feature.getAttribute('Name2')
    keyW = FME_MacroValues['KEY_WORDS']
    str1 = str1.lower()
    str2 = str2.lower()
    keyW = keyW.lower()
    str1 = str1.replace(keyW, '')
    str2 = str2.replace('restaurant', '').replace('ristorante', '').replace('hotels', '').replace('hotel', '')
    str1 = re.sub(r'\s{2,}', ' ', str1)
    str2 = re.sub(r'\s{2,}', ' ', str2)
    str1 = re.sub(r'\A\s+', '', str1)
    str2 = re.sub(r'\A\s+', '', str2)
    str1 = re.sub(r'\s$', '', str1)
    str2 = re.sub(r'\s$', '', str2)
    feature.setAttribute('Name', str1)
    feature.setAttribute('Name2', str2)
The published parameter is basically a long string, which you'll have to split up and iterate over. For example:
keyWords = FME_MacroValues['KEY_WORDS'].split(' ')
for keyW in keyWords:
    keyW = keyW.lower()
    # ... do whatever is needed with each keyword here ...
If you need to constrain the replacements to word boundaries, you can again use the shlex module to your advantage, e.g.
import shlex

words_to_remove = shlex.split('HOTEL RESTAURANT RISTORANTE')
test_string1 = 'Landhotel La Paix'
test_string2 = 'Land hotel La Paix'
print ' '.join([word for word in shlex.split(test_string1) if word.upper() not in words_to_remove])
# -> 'Landhotel La Paix'
print ' '.join([word for word in shlex.split(test_string2) if word.upper() not in words_to_remove])
# -> 'Land La Paix'
If you have multi-word values, such as the example posted by @davidrich, you can use the shlex module to split the published parameter text into its individual components, e.g.
import shlex

parameter = 'Ham Spam "Ham and Eggs"'
keyWords = shlex.split(parameter)
# keyWords = ['Ham', 'Spam', 'Ham and Eggs']
Hi
Looking at the output of multiple choice parameters, the answer will depend on whether your keywords are all single words, all multi-word, or a mix.
As the parameter comes back as
Ham Spam "Ham and Eggs"
If it's all single words then it's just a simple matter of:
keywordList = FME_MacroValues['KEY_WORDS'].split(" ")
for keyW in keywordList:
    # rest of code here
As regards re, I don't really use that, so I'm unable to help there!
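Pulling the pieces from both answers together, a minimal self-contained sketch (the attribute and parameter names come from the question; FME itself isn't needed to test the string logic) could be:

```python
import shlex

def remove_keywords(text, parameter):
    """Remove case-insensitive whole-word matches of the chosen keywords.

    `parameter` is the raw published-parameter string, e.g. 'HOTEL RESTAURANT'.
    """
    keywords = [k.lower() for k in shlex.split(parameter)]
    # Keep only the words of `text` that are not keywords (whole-word test).
    kept = [w for w in text.split() if w.lower() not in keywords]
    return ' '.join(kept)

# Whole words only: 'Landhotel' survives, a standalone 'hotel' is removed.
print(remove_keywords('Landhotel La Paix', 'HOTEL'))   # -> Landhotel La Paix
print(remove_keywords('Land hotel La Paix', 'HOTEL'))  # -> Land La Paix

# Inside the PythonCaller this would be called as, roughly:
#   str1 = remove_keywords(feature.getAttribute('Name'), FME_MacroValues['KEY_WORDS'])
#   feature.setAttribute('Name', str1)
```

This sidesteps the RegEx entirely: splitting on whitespace and rejoining also collapses the double spaces that the original re.sub calls were cleaning up.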
Red Hat Bugzilla – Bug 127243
can't stop pychecker from checking system Python modules
Last modified: 2007-11-30 17:10:45 EST
Description of problem:
Pychecker has an option not to complain about potential problems
that are in the system library (-q / --stdlib on the command line,
'ignoreStandardLibrary' in .pycheckrc). This option does not work
on Fedora Core 2.
Similar results happen if you attempt to exclude specific
standard modules by manipulating the 'blacklist' variable in
.pycheckrc.
Version-Release number of selected component (if applicable):
pychecker-0.8.13-3
How reproducible:
Always.
Steps to Reproduce:
1. Create a sample module:
$ cat testmodule.py
import HTMLParser

class T(HTMLParser.HTMLParser):
    def handle_starttag(self, tag, attrs):
        print tag, attrs

def test(string):
    __pychecker__ = "no-abstract"
    t = T()
    t.feed(string)
    t.close()
2. Run pychecker on it with the appropriate option.
$ pychecker --stdlib testmodule
Actual results:
A cascade of complaints about HTMLParser.py and markupbase.py,
both of which are standard modules.
Expected results:
No complaints.
Additional info:
The core of the problem can be seen in the odd paths that pychecker
attributes to the HTMLParser and markupbase modules. Pychecker works
with the co_filename element of code objects; in .pyc and .pyo files,
this has its value set (to the original name of the file) at the time
when they are compiled to bytecode. Because of the RPM build process,
their location at the time of compilation is nothing like their final
location.
Because of this difference between the file location embedded into
the .pyc/.pyo files and the actual system location, Pychecker thinks
that all of the .pyc / .pyo files of system Python modules are not
actually system Python modules, so --stdlib does nothing.
A similar problem applies to the 'blacklist' .pycheckrc variable
because Pychecker actually excludes blacklisted modules based on
the full module paths. When it gets the module path right, it will
consequently fail all comparisons against the co_filename values.
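The mismatch is easy to demonstrate: co_filename is recorded when the source is byte-compiled, and a checker can only compare it with where the module actually lives. A small illustration (written against a modern Python purely to show the mechanism; the bug itself concerns Python 2.3 on FC2):

```python
import json

# co_filename is baked into the code object at byte-compile time; __file__ is
# where the import system actually found the module at run time.
compiled_path = json.loads.__code__.co_filename
installed_path = json.__file__

print("co_filename:", compiled_path)
print("__file__:   ", installed_path)

# On a normally-installed interpreter the two paths agree. When modules are
# byte-compiled inside an RPM build root, co_filename keeps the build-root
# prefix, so pychecker's stdlib detection (which trusts co_filename) fails.
```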
The files containing buildroot paths are from the python package
(still there in python-2.3.4-10).
Created attachment 105005 [details]
Use (make install DESTDIR=...) instead of (%makeinstall DESTDIR=/)
This should fix it.
The 2.2-0.9b2 changelog comment suggests that distutils would be confused,
but comparing the pre-patch and post-patch buildroots, the only differences
are in *.so* and *.py[co] and /usr/lib/python*/distutils* contains no
*.so* files, so AFAICS distutils should not be affected.
This is fixed in python-2.3.4-13 (FC3 update). | https://bugzilla.redhat.com/show_bug.cgi?id=127243 | CC-MAIN-2018-26 | refinedweb | 415 | 59.5 |
I am looking to make a small program that will cut down the repetitive need to insert BB tags into achievements lists on a site I work for.
So far I have got the algorithm successfully reading the first line of the file and outputting it using cout. The part I'm having difficulty with is reading all the lines underneath while combining them with the BB tags.
So far I have
#include <iostream>
#include <string>
#include <fstream>
using namespace std;

int main()
{
    string name;
    int points;
    string description;

    ifstream inFile;
    inFile.open("C:\\Users\\Andy\\Desktop\\codegen.txt");
    if (!inFile) {
        cout << "Unable to open file";
        exit(1);
    }

    inFile >> name >> points >> description;

    // This is where I get stuck, I know I need a while statement with an EoF argument
    // Just most examples use getline, which is not what I want.
    while (!inFile.eof()) {
        cout << "[b]" << name << " - [i]" << points << "[/i][/b]" << endl;
        cout << description << endl;
        inFile >> name >> points >> description;
    }

    inFile.close();
    return 0;
}
The txt file im reading from will have the following format
Name of achievement points of achievement description of achievement
Name of achievement points of achievement description of achievement
Name of achievement points of achievement description of achievement
The name and the description are strings and need to allow blanks. Is it the txt file I'm going wrong with, or am I just forgetting a bit of code I was taught? :/
Thanks | https://www.daniweb.com/programming/software-development/threads/101475/reading-from-a-file-can-t-figure-out-how-to-do-multiple-lines | CC-MAIN-2017-09 | refinedweb | 236 | 55.92 |
With all the applications in the Office 2000 suite fully Web-enabled
and able to write to HTML as a native file format, you might wonder why
you would need FrontPage 2000 at all. In fact there's a huge difference
between designing for paper and designing for the screen and an even bigger
gulf between designing standalone documents and a fully navigable site.
If you're wanting to tap the huge Internet audience for your new product
launch, for example, it's no good just lobbing up an HTML-version of your
Word press release and expecting the orders to flood in. Instead you're
going to have to produce an attractive, interactive, well-structured site
that shows off your business in the best possible light.
This is where FrontPage 2000 comes in. Microsoft recognises that most
businesses can't afford dedicated Webmasters so usability and productivity
are paramount. To help you get off to a flying start FrontPage includes
eight in-depth wizards. The Corporate Presence wizard is ideal for our
purposes and walks through the set-up process by asking questions, such
as which pages you want to include and how you want data to be handled,
and then asks you to fill in email and address details. When you click
Finish, the wizard then creates and formats all the desired pages including
seriously advanced components such as an automatically generated table
of contents, reply form and search page.
FrontPage 2000 offers in-depth wizards to walk you through the process
of setting up typical web sites.
By clicking on the Navigation icon in the Views pane down the left-hand
side of the FrontPage 2000 screen you can then see the structure of your
site as a tree-based flow diagram. More importantly, by right-clicking
on the window background, you can add your own top level pages and then,
by right clicking on the page icons, rename them to suit your own requirements
and add all the child pages you need. Building up your customised structure
in this way has a number of advantages, one of which you'll immediately
appreciate if you double-click on any page icon to go into Page view.
Thanks to its use of automatically generated top and side navigation bars,
whichever page you open, you'll find that it already has links to all
your site's top level pages and to all your current page's child pages.
Using the Navigation view you can customise your site's structure
visually.
In other words FrontPage has entirely taken care of the huge task of
making the information on your site navigable and easily accessible, leaving
you to concentrate on getting the information right. To help in the writing
of your content FrontPage automatically adds suggestions such as to include
a mission statement and company details on your home page. Even more helpful
is the fact that, with shared Office-standard features including the background
spell-check and thesaurus, adding and editing your own text within FrontPage
2000 is very similar to working within Word 2000. Alternatively, of course,
you can always use Word itself to author longer sections of text and then
import them. FrontPage automatically converts the DOC files to HTML and
recreates all your Word formatting through embedded HTML tags.
Thanks to FrontPage 2000's use of shared borders, navigation bars
are automatically added to pages leaving you free to concentrate on the
text.
Generally FrontPage 2000's visual environment is designed to protect
you from having to deal with such HTML directly but, as you get more experienced,
you can always use the HTML tab at the bottom of the Page window to view
and edit the underlying HTML code. Looking at our imported press release,
for example, shows how the Word document's heading formatting has been
recreated as <FONT> tags. That's fine if all we want to do is simulate
the Word document, but our code will be more flexible and efficient if
we remove them. By returning to the Normal tab we can select all the text,
choose the Format>Remove Formatting command and then apply true HTML
<H> heading tags using the Formatting toolbar's Style dropdown list.
The Page view's HTML tab enables direct HTML editing.
Having sorted out the text the next job is to add any graphics. The production
of Web images is a major field in its own right with many applications
entirely dedicated to the task. For our purposes though, we don't need
them as FrontPage 2000 has basic graphics capabilities of its own. In
particular, for graphics such as scans and screenshots, FrontPage is able
to import the necessary TIFFs and BMPs and convert them to the Web standard
GIF and JPEG formats. Using the Picture Properties command it is also
possible to resize the image precisely, to set up links where required,
and to provide alternative text for those users browsing with graphics
switched off. Using the Pictures toolbar you have further control over
features such as contrast, brightness and transparency, while using the
crop and resample commands you can ensure that download times are kept
to the minimum.
FrontPage 2000 can convert graphics to JPEG or GIF with basic control
over image quality.
To further enhance the look of your pages you can now think seriously
about their overall design. Thanks to the use of the FrontPage wizard
for initial set-up, the site is already looking consistent and professional,
but you aren't limited to the design you first chose. By calling up the
Format>Themes command you can choose between 60 professional designs
that change the look of everything from the background image through heading
typeface to the appearance of navigation buttons across the entire site
- and all with a single click! Even better, by selecting the active graphics
option, you can automatically turn static navigation buttons into interactive
Javascript-based rollovers. Advanced users can customise all options and
also choose to apply themes through CSS.
Thanks to its template-based nature, the look and feel of the entire
site can be automatically updated using FrontPage 2000's professionally
designed themes.
The site is getting near completion so it's time to check that everything
is working as expected. Whenever you are working on a page you can always
hit the Preview tab at the bottom of the screen to get a clearer idea
of what your page will look like and to check links. For a more thorough
workout the Preview In Browser command will load your site into any previously
installed browser. If you're running Office you're likely to be using
Explorer yourself, but it's a good idea to check your site with Navigator
too as you want to know exactly what all your potential browsers are going
to see. You should also check FrontPage's Reports view of your site which,
amongst other things, lists all broken hyperlinks, unlinked files and
download-heavy pages.
The Report view highlights potential problems and weaknesses before
the site is published to your server.
When you are completely happy with the site you're ready to post it so
that anyone can access it. FrontPage 2000's Publish Web command enables
you to upload all the necessary files to your ISP's Web server, assuming
you have password-based access. Ideally you should ensure that your ISP
supports the FrontPage Server Extensions as this makes the publishing
process much easier. In particular it enables FrontPage to intelligently
manage your site by comparing the server version to the one on your hard
disk and only updating changed files. This means that if you move a file,
for example, FrontPage will first update all links locally and then, when
you publish the site, make the same changes on the server.
The Server Extensions are also crucial if you want to take advantage
of FrontPage 2000's intelligent agent-based features such as reply forms,
search forms and hit counters. With other packages such features usually
require complicated CGI programming and depend on individual ISP support
- the ability to search for text across an entire site, for example, is
particularly rare. With FrontPage 2000 such advanced features all come
as part of the package. All you have to do is select the Insert>Component
or Insert>Form command and then customise where necessary.
With its combination of bolt-together features, from start-up wizards
and automatic navigation bars through to design themes and intelligent
components, FrontPage 2000 enables even occasional users to begin creating
impressive sites immediately. While professional, day-in day-out Webmasters
steeped in HTML coding would soon find Microsoft's hand-holding automated
approach too restrictive, for the average user FrontPage 2000 undoubtedly
offers both the easiest and the fastest track onto the Web.
The end result: a professional, consistent, interactive and easily
navigable site that looks as if it must have taken weeks to design.
In this article you will learn how to send automatic mails in ASP.NET.
The automatic sending of mail could also be done using a timer control from the
AJAX toolkit, but the main drawback is that you always have to check whether the
system time is equal to 3 o'clock or not, so it unnecessarily wastes the
system's resources if we write the code directly in our aspx.cs file. So, in
order to improve performance, I've used the BackgroundWorker class from the
System.ComponentModel namespace.
Enough of this theory; we'll go into the example. Just for testing purposes,
I've created a small but effective example. Following is the design code for
the same.
Since the BackgroundWorker will create a different thread for executing our
code (i.e. the code will be executed in an asynchronous manner), we need to
set the Async property of our page to true.
Following is the source code for the same.
In this code after the user clicks the button the mail will be send
automatically after 10 seconds to the specified email id.
Following is the output for the same.
Cheers people, hope everyone is okay!
I've been working on an audio project with T3.6, recently migrated my code to T4.0 and I'm facing several issues/problems that I don't really know how to solve. I'm bumping my head against the wall. Just hope somebody can help, it's a long shot but I have to try :)
General description
I'm using an 44.100Hz timer interrupt (basically a uint16 counter that wraps around every 1.486s) to generate and process the audio signal. Processed audio is (or it should be) stored into an 128-word int16 buffer array, which is then fed to AudioPlayQueue 512 times in one counter cycle (~345 times per second). AudioPlayQueue is connected to the I2S output in Audio System Design Tool and processed by PCM1681 DAC (or SGTL5000 Audio Shield).
1. AudioPlayQueue and speed
#include <Audio.h>
AudioPlayQueue audio;
AudioOutputI2S i2s1;
AudioConnection patchCord1(audio, 0, i2s1, 0);
#define SAMPLE_RATE 44100
#define BLOCK_SIZE 128
static uint16_t cycle=0;
static int16_t waveform[BLOCK_SIZE];
static int16_t input;
extern const int16_t wavetable[2048];
void audioInt()
{
    uint8_t cbX = cycle % (BLOCK_SIZE);
    input = wavetable[cycle>>5];
    // do some audio processing
    waveform[cbX] = input;
    if (cbX == (BLOCK_SIZE-1))
    {
        int16_t *p1 = audio.getBuffer();
        memcpy(p1, waveform, BLOCK_SIZE<<1);
        audio.playBuffer();
    }
    cycle++; // automatically reset at 65536
}
void setup()
{
    noInterrupts();
    AudioMemory(128);
    IntervalTimer *t1 = new IntervalTimer();
    t1->begin(audioInt, 1000000.0f / (float)SAMPLE_RATE);
    interrupts();
}

void loop()
{
    // read some inputs
}
The description of AudioPlayQueue.getBuffer() states "This buffer is within the audio library memory pool, providing the most efficient way to input data to the audio system", but I'm not really sure I'm doing this the most efficient way. I couldn't find any meaningful instructions about AudioPlayQueue anywhere; it's very poorly documented stuff. It's rolling in an audio rate interrupt, gets executed 345 times per second together with other greedy math-crunching stuff going on in the rest of the cycles, and I wonder: is there any faster/better way to do this? Especially because I'm using more than one channel of audio :rolleyes:
int16_t *p1 = audio.getBuffer();
memcpy(p1, waveform, BLOCK_SIZE<<1);
audio.playBuffer();
2. T4.0 vs. T3.6 issue
This code worked without a problem on T3.6 for hours, but now on T4.0 it works for maybe 3 minutes and then my Teensy freezes. I checked AudioProcessorUsage and AudioMemoryUsage, CPU stays under 0.5% but AudioMemory consumption increases every few seconds until it exhausts completely. Doesn't matter even if I declare AudioMemory(512) in Setup, it will freeze somewhere around 160 - maybe it's not even related, I have no way to tell. I can only speculate that AudioPlayQueue.getBuffer() doesn't release memory after AudioPlayQueue.playBuffer() is executed :eek: Maybe the problem that isn't normally obvious happens because of the interrupt routine and the speed? Is this a bug or am I doing something wrong? Like I said, it worked like a charm on T3.6 :confused:
3. I2S issue
I solved this one in the meantime, turns out the problem was between the monitor and the chair :D
P.S. two more bits at the end: Is it better to use 44.117 instead of 44.100 sample rate for I2S on T3.6/T4.0? Also can I make I2S on T4.0 work at 22.050Hz and how?
Thanks in advance for your kind answers, stay safe. And a big thank you to Paul and everybody else involved in Teensy development :cool: | https://forum.pjrc.com/printthread.php?s=e3750131236b45d9651b94ad9a99bc44&t=60407&pp=25&page=1 | CC-MAIN-2021-39 | refinedweb | 589 | 58.89 |
If you wrote old C# applications during the Windows and Web form years, you’ll remember using ADO.NET to connect to your database. You built connections, queries, and handled stored procedure code and parameters in the database. Now, with ADO.NET and Entity Framework, you can eliminate much of the hassle needed to work with data and databases in C#. Entity Framework (EF) and LINQ work together to allow you to create a data layer for your applications.
Learn how to work with Entity Framework, MVC and C# at Udemy.com.
Before you get started with Entity Framework, it helps to know a little about MVC, which is probably the framework you’ll use when you create your application.
What is MVC?
The model-view-controller (MVC) framework is a lot different than old web forms. With MVC, your applications are separated into compartments. You no longer need to keep track of page state such as loading and unloading. Instead, MVC takes care of this process and just lets you design business logic and data design.
The model is what contains your data. Models are usually created in ViewModels. ViewModels are classes that have some data logic that map to your database tables. Entity Framework is more like a one-to-one relationship between a class and a database table. With a ViewModel, you customize the way you want your model to look. For instance, you could have a CustomerOrderViewModel that contains some data logic with both your customer and order tables.
The view is the visual part of your application. The view contains all the HTML elements for your users. Your view is given a data model (or ViewModel) that it can use to display data. In a view, you usually use a technology called “Razor.” Razor contains a number of HTML helper classes that turn multiple C# lines of code into one or two lines. The result is much shorter code, easier to read and better performance. You can also write your own helper classes for your views.
Learn more about the MVC framework and C# at Udemy.com.
Finally, the controller is the class object that contains all of your business logic. The controller passes the model (or ViewModel) to your view. The controller maps to the view based on its name. When you create an “Orders” view, you then create an “Orders” controller. The MVC framework knows to map the two together, so you don’t need to specifically establish a connection between the two objects. You can create ViewModels separately and call the ViewModel class from the controller. You then manipulate the data and then send the data model to your view.
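As a small sketch of that wiring (the names here are illustrative assumptions, not from a real project):

```csharp
public class OrdersController : Controller
{
    // GET /Orders: by convention MVC renders Views/Orders/Index.cshtml
    public ActionResult Index()
    {
        var model = new CustomerOrderViewModel();
        // ... populate the ViewModel from the data layer ...
        return View(model);   // the view receives the ViewModel as its model
    }
}
```

Note that the action never names its view; the "Orders"/"Index" naming convention is what ties the controller to the view.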
Code First Classes and Your Database
Code first differs from older SQL querying and table manipulation. If you coded web forms, remember you needed to create your stored procedure in the SQL database, set up the parameters and procedure name in your code, call the database and then handle the result set. The data result set was clunky, and you had to code for empty data sets and whenever you ran out of records. You also needed to code for separate insert and delete statements.
With code first, your database tables are represented as classes. For instance, you might have a customer table in your database. The following code is an example of a code first Entity Framework class:
using System;
using System.Collections.Generic;

public partial class Customer
{
    public Customer()
    {
        this.Orders = new HashSet<Order>();
    }

    public int CustomerId { get; set; }
    public string Username { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public virtual ICollection<Order> Orders { get; set; }
}
The first two “using” statements are similar to “include” statements. These two lines of code include libraries into your code. The “partial class” statement defines a class name. In this case, the class name is “Customer.” With Entity Framework and code first, you know that this identifies a table named “Customer” in your database.
Next, the "Customer" constructor is defined. A constructor is a part of your class that has no return type, and it always has the same name as the class. In an Entity Framework code first class, the statement in this constructor initializes the navigation property behind a foreign-key relationship to an empty collection, so it is never null. Foreign keys link your tables together; in this case, the relationship points at a record set in the "Order" table. The property has the "ICollection" data type, which is used often in your table class models when you need to return linked table rows. Notice the collection is defined in the last class statement: once Entity Framework loads a customer, the "Orders" property contains the list of orders for that specific customer Id.
The final lines of code are the class’ properties. In this customer class, there is the CustomerId, the Username, the FirstName and the LastName columns. The data types are “int” for integer and string.
After you define these classes, you can use them to query your database without any SQL or cumbersome code. The following code is an example of using LINQ with the above customer and order classes:
var customer = (from c in db.Customer
                join o in db.Order on c.CustomerId equals o.CustomerId
                where c.CustomerId == 5
                select c).FirstOrDefault();
The above query grabs the customer with an Id of 5. The result comes back as a "Customer" object, the same class that maps to your Customer table. The result is that you can return data to a class and use this class to read and manipulate your data. Instead of looping through a data set and identifying rows and columns, you just work with customer objects. Query results with multiple rows come back as an enumerable collection, so you can loop through each customer record using the foreach loop structure.
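For instance (a sketch reusing the same hypothetical db context), a query that returns several rows can be walked with foreach directly:

```csharp
var customers = (from c in db.Customer
                 where c.LastName == "Smith"
                 select c).ToList();

foreach (var customer in customers)
{
    Console.WriteLine("{0} {1}", customer.FirstName, customer.LastName);
}
```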
This is just a small glimpse into the Entity Framework and building class structures for your tables. It’s a little difficult to learn when you’re first starting out, but once you work with EF for a while, you’ll find it’s much more convenient than working with older technology.
Learn more about MVC and EF at Udemy.com. | https://blog.udemy.com/code-first-entity-framework/ | CC-MAIN-2017-22 | refinedweb | 1,024 | 66.03 |
Fast PWM output on PB1
Does anyone know why the following code doesn't result in a slowly blinking LED on PB1?
#define F_CPU 14745600
#include <stdio.h>
#include <math.h>
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
int main() {
    OCR1A = 14400; // 1 sec period with clock/1024
    OCR1B = 7200;  // 50% duty cycle

    // set PB1 to output; this pin is OC1A
    DDRB |= (1<<PB1);

    // WGM[13:10]=1111 is fast PWM
    // COM1A[1:0]=10 non-inverting output on OC1A
    // CS[12:10]=101 for clock/1024
    TCCR1A = (1<<WGM11) | (1<<WGM10) | (1<<COM1A1);
    TCCR1B = (1<<WGM13) | (1<<WGM12) | (1<<CS12) | (1<<CS10);

    while(1) {
        // nothing
    }

    return 0;
}
The above code works as expected if I connect an LED to PB2 and make the following modifications:
DDRB |= (1<<PB2);
TCCR1A = (1<<WGM11) | (1<<WGM10) | (1<<COM1B1);
Am I misunderstanding the datasheet? Looking at p.131, Table 15-2, setting COM1A[1:0]=10 should result in a non-inverting output on OC1A (which is the pin labeled PB1).
Hi mmgn,
You are running into a problem here because you are using the mode that uses OCR1A as the TOP register. This does not allow you to use OC1A as an output pin, because OCR1A is normally what that output uses to do its comparison against (to set the duty cycle). Basically, in this mode you are gaining the ability to explicitly set the TOP value of your counter, but giving up a usable output for your PWM. You could turn off WGM10 and then use ICR1 as the register to hold the TOP value; then you should be able to use both OC1A and OC1B as PWM outputs. Does that make sense?
Humberto
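Concretely, my reading of that suggestion (a sketch I haven't run on hardware; register and bit names are taken from the original post and the ATmega datasheet) would change the original setup to fast PWM with ICR1 as TOP:

```c
ICR1  = 14400;  // TOP: 1 sec period with clock/1024
OCR1A = 7200;   // 50% duty cycle on OC1A (PB1)

// WGM[13:10]=1110 is fast PWM with ICR1 as TOP (WGM10 off)
// COM1A[1:0]=10 non-inverting output on OC1A
// CS[12:10]=101 for clock/1024
TCCR1A = (1<<WGM11) | (1<<COM1A1);
TCCR1B = (1<<WGM13) | (1<<WGM12) | (1<<CS12) | (1<<CS10);
```

With OCR1A freed from holding TOP, the COM1A[1:0]=10 setting from Table 15-2 should behave as the original code expected, and OC1B stays available as a second PWM channel.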
Thank you very much, Humberto!! Your post helped me reanalyze the datasheet and notice Figures 15-1 and 15-4. Since OCR1A is TOP, a compare match is only generated when TCNT1 reaches OCR1A. On a successful compare match, OC1A is cleared and TCNT1 starts over at BOTTOM. However, based on my COM1A settings, OC1A is immediately re-set at BOTTOM. This results in a situation identical to the one before the compare match was generated, and basically nothing happens. // Mr. T pities me.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1861/ | CC-MAIN-2018-13 | refinedweb | 399 | 65.52 |
Greetings,
A few days ago I wondered whether it would be possible to call GHCi
from interpreted byte-code. It turned out that it was, and it was even
fairly easy. Here's a preliminary result:
Test.lhs:
> import GHC.Base
> run :: Int -> ()
> run i
> = let b = False
> c x = x + a + i
> in breakPoint ()
> where a = 10
> runIO :: IO String
> runIO = do putStr "Enter line: "
> line1 <- getLine
> breakPoint $ do
> putStr "Enter another line: "
> line2 <- getLine
> return (unwords [line1,line2])
Output from a GHCi session:
ghc/compiler/stage2/ghc-inplace --interactive Test.lhs -v0
*Main> run 100
Local bindings in scope:
a :: Int, c :: Int -> Int, b :: Bool, i :: Int, d :: [Char]
Test.lhs:6> (a, b, c 10, d)
(10,False,120,"str")
Test.lhs:6> :q
Returning to normal execution...
()
*Main> runIO
Enter line: Hello
Local bindings in scope:
line1 :: String
Test.lhs:12> map Char.toUpper line1
"HELLO"
Test.lhs:12> :q
Returning to normal execution...
Enter another line: World
"Hello World"
*Main> :q
Note that the problems with the representation of variables and their
laziness has been delegated to the user. Has this approach been tried
before?
Simon PJ: You mentioned something about some ideas for building a
debugger into GHCi in your call for interns two months ago. Care to
elaborate on those or can I perhaps reach them online?
--
Friendly,
Lemmih | http://article.gmane.org/gmane.comp.lang.haskell.cvs.ghc/14166 | CC-MAIN-2018-13 | refinedweb | 227 | 66.94 |
A Simple Guide to WMI Provider
Table of ContentsIntroduction
What Is Platform Management?
What Is WMI?
What Is WMI Provider?
Developing WMI Provider—Where to Start?
Defining Namespace
Sample Project
How to Use the Demo Project
Known Problems in the .NET WMI Implementation
Conclusion
Appendix A: Definitions & Acronyms
Appendix B:WMI Tools
Appendix C: Important Reading
Appendix D: Developing Steps to Support SNMP
Introduction
This article's purpose is to describe how to develop a WMI provider in the .NET framework. There are several incentives for writing this article:
- The lack of simple examples for writing a WMI provider in C#.
- Many developers are not familiar with WMI and therefore don't make use of this powerful technology.
- By now there are too many Management buzz words that requires simple explanations.
This article does not include the use of SNMP and does not answer the following questions:
- How to write MIB?
- How to configure the Windows SNMP service?
- How to subscribe for traps? nutshell, Platform Management is the means to manage and monitor the health of a system. This may fall into the following categories:
- Configuration—initialization and settings of various aspects of the platform objects such as timeout values, user count thresholds, database connection strings, and so forth.
- Performance measurements—measure end to end process in regards to duration, optimization, and so on.
- Hearth beats monitoring—manage component life time. Start and stop services, receiving components state, and the like.
- Information exposing—expose platform information that might be valuable for system administrators, billing, and so forth.
- Alerts mechanism—informative events, errors, and critical errors that happens in the platform.
- Corrective events mechanism—As opposed to post mortem events, this kind of events gives to the administrators the ability to perform actions to prevent upcoming).
The-ins) to the Providers Manager (CIMOM). This industry standard compliancy allows alien components to share management environment locally and remotely by using SNMP..
Developing WMI Provider—Where to Start?
To expose a software component such as a service through the WMI, one needs to write a WMI provider. This plug-in (provider) exposes the service to the WMI and provides the interface to receive information and to interact with the service. Until recently, a WMI Provider was written as a COM component and now, with the emerging of the .NET framework, it is easier to develop providers.
In case you aren't familiar with the MOF syntax, you can simply start with developing the WMI Provider (see the sample section). When it's all done and finished, use the InstallUtil.exe (see Appendix B) tool to enter the managed class into the CIM schema; then, if you want, you can generate the MOF file from the WMI CIM Studio (it is highly recommended because:
- Order
- Efficiency—Namespace like 'CIMV2' and 'Default' contains a lot of Managed objects. Defining a unique namespace enables you to save time while looking for your objects.
Sample Project
The demo includes a simple .NET service ('Parachute service'—Managed application ) and a WMI provider ('Parachute provider'). For simplicity reasons, the sample uses the MSDEV IDE extension for VS.NET Server Explorer (see Appendix B) as the consumer application.
The Service code is quite simple. Add a reference to the ParachuteProvider and to System.Managment assemblies. In the ExposeMeToWMI method, we instantiate the provider, set some values, and then publish (Instrumentation.Publish(
Note: The provider instance is valid only when the service is started.
The Provider code contains the following actions:
- Adds a reference to the System.Management assembly.
- Defines the instrumented namespace parachute_company under root: [assembly:Instrumented("root/parachute_company")]
Note: The managed object schema will be defined under this namespace.
Add an instance installer in case we want to publish the provider directly via the InstallUtil tool. In this example, we publish the provider through the service.
Defines events by using the InstrumentationType.Event attribute: [InstrumentationClass(InstrumentationType.Event)]. Defines WMI Provider instance using the InstrumentationType.Instance attribute:[InstrumentationClass(InstrumentationType.Instance)]
Note: The provider code can be just as well written in the service.
How to Use the Demo Project
- Register the Parachute service to the SCM (Service Control Manager) with the InstallUtil tool (%systemroot%%\Microsoft.NET\Framework\<framework version&t\InstallUtil.exe).
InstallUtil.exe <service file>.
- Open the SCM
\Administrative tools\ Services.
- Log on as 'This account'—Right-click on the service name (Parachute) -> Properties -> Log on tab -> check the 'This account' enter user name and password (the user must be under Administrator group).
- Start the service.
- Install the MSDEV IDE Management extension for VS.NET Server Explorer.
- Open the MSDEV in 'Server Explorer' view.
- Add your computer to the explorer: Right-click on the Servers root tree ->Add Server.
- Add Management class to the 'Management Classes' item. Look for the Parachute class under to the parachute_company namespace.
- Expand the Parachute item and you should see the brand new instance. Take a look at the instance properties; you can see that the parachute color is exposed (red) by WMI.
- Subscribe for events: 'Add Event Query 'to the 'Management Events' item.
- Check the 'Custom' Events type.
- Add the Landing and Jump events (situated under the parachute_company namespace).
- Start and stop the service. The MSDEV output window will display the events data.
Known Problems in by using wbemtest.exe.)
Conclusion
Well, that's it, folks. Hopefully, this article will stimulate you to drill down into the WMI technology and to take advantage of it. Please send feedback, bug reports, or suggestions here.
Appendix A: Definitions & Acronyms
- CIMÿCommon Information Model—this is the premier concept of WBEM by this model WMI stores the Managed objects data (namespace, classes, methods, properties, and so forth).
- CIM Repository—This is the storage that holds the Managed objects data. The structure of the CIM repository is built upon the DMTF.
- CIMOM—Common Information Model object manager. The CIM repository is managed by the CIMOM, which acts as an agent for object requests. The CIMOM tracks available classes and determines which provider is responsible for supplying instances of these classes.
- DMTF—Distributed Management Task Force—The DMTF consortium was founded in May of 1992. This initiative was conceived and created by eight companies such as: BMC Software Inc., Cisco Systems Inc., Compaq Computer Corp., Intel Corp., Microsoft Corp. and so on. The aims of this consortium are to define industry standards for management.
- MIB—Management Information Base describes a set of managed objects. Each managed object in a MIB has a unique identifier.
- MOF—Managed Object Format. This text file includes the class definition of on or more managed object. You can export and import this definition from the CIM repository by using the WMI CIM Studio.
- Schema—a group of classes that describe a particular management environment.
- SNMP—Simple Network Management Protocol. SNMP is an Internet standard defined by the IETF and is a part of TCP/IP suite of protocols. SNMP is the protocol by which managed information is travel between stations and agents. Management information refers to a collection of managed objects that reside in a virtual information store called a Management Information Base (MIB).
- WBEM—Web-Based Enterprise Management—WBEM stands for several DMTF industry standards including the Common Information Model. WBEM provides a standardized way to access information from various hardware and software management systems in an enterprise environment.
Appendix B: WMI Tools
- Download the WMI Administrative Tools at:. It includes the following:
-.
- Mgmtclassgen.exe—Microsoft Visual Studio .NET tool. Convert MOF file into .cs/.vb/.js files.
- Management [WMI] Extension for VS.NET Server Explorer: SDK tools %systemroot%\system32\wbem
- Platform SDK tools—%systemroot%\system32\wbem
- mofcomp.exe—Compiles MOF files and adds the managed objects to the CIM Repository. It is also possible to check the MOF file correctness.
- wbemtest.exe—Windows Management Instrumentation Tester, also called WBEMTest, is a general-purpose utility for viewing or modifying Common Information Model (CIM) classes, instances, and the like. It functions as the CIM studio only if UI is humble.
- MIB Browser—
- MIB editor, builder, and browser—
Appendix C: Important Reading
Books:
Developing WMI Solution,. Chapter 8, "Developing .NET Management Application."
Articles:
A Peek into the Enterprise Instrumentation Framework
- Windows Management Instrumentation: The Journey Begins—
- Exposing Management Events
- Inheritance
- Understanding WMI Evening
- Windows Management Instrumentation (WMI) Implementation
- WMI Made Easy For C#
Appendix D: Developing Steps to Support SNMP
-:
- Create a MIB (Management Information Base) file (check out Appendix B for some useful and easy-to-use MIB editors). Define classes and SNMP traps (events) that eventually will be exposed by the WMI Provider.
- Compile the MIB file using the SMI2SMIR utility. This will generate a MOF file.
- Compile the MOF file using the mofcomp.exe compiler to check the MOF file syntax correctness.
- Create the C# classes and events with the Mgmtclassgen utility (refer to Appendix B). Use the C# classes to create WMI provider.
DownloadsDownload demo project - 25 Kb
Download demo executable - 6 Kb | https://www.codeguru.com/csharp/csharp/cs_network/wmi/article.php/c6035/A-Simple-Guide-to-WMI-Provider.htm | CC-MAIN-2019-35 | refinedweb | 1,482 | 50.63 |
In a blog post yesterday, I mentioned that the golden angle is an irrational portion of a circle, and so a sequence of rotations by the golden angle will not repeat itself. We can say more: rotations by an irrational portion of a circle are ergodic. Roughly speaking, this means that not only does the sequence not repeat itself, the sequence “mixes well” in a technical sense.
Ergodic functions have the property that “the time average equals the space average.” We’ll unpack what that means and illustrate it by simulation.
Suppose we pick a starting point x on the circle then repeatedly rotate it by a golden angle. Take an integrable function f on the circle and form the average of its values at the sequence of rotations. This is the time average. The space average is the integral of f over the circle, divided by the circumference of the circle. The ergodic theorem says that the time average equals the space average, except possibly for a setting of starting values of measure zero.
More generally, let X be a measure space (like the unit circle) with measure μ let T be an ergodic transformation (like rotating by a golden angle), Then for almost all starting values x we have the following:
Let’s do a simulation to see this in practice by running the following Python script.
from scipy import pi, cos from scipy.constants import golden from scipy.integrate import quad golden_angle = 2*pi*golden**-2 def T(x): return (x + golden_angle) % (2*pi) def time_average(x, f, T, n): s = 0 for k in range(n): s += f(x) x = T(x) return s/n def space_average(f): integral = quad(f, 0, 2*pi)[0] return integral / (2*pi) f = lambda x: cos(x)**2 N = 1000000 print( time_average(0, f, T, N) ) print( space_average(f) )
In this case we get 0.49999996 for the time average, and 0.5 for the space average. They’re not the same, but we only used a finite value of n; we didn’t take a limit. We should expect the two values to be close because n is large, but we shouldn’t expect them to be equal.
Update: The code and results were updated to fix a bug pointed out in the comments below. I had written
... % 2*pi when I should have written
... % (2*pi). I assumed the modulo operator was lower precedence than multiplication, but it’s not. It was a coincidence that the buggy code was fairly accurate.
A friend of mine, a programmer with decades of experience, recently made a similar error. He’s a Clojure fan but was writing in C or some similar language. He rightfully pointed out that this kind of error simply cannot happen in Clojure. Lisps, including Clojure, don’t have operator precedence because they don’t have operators. They only have functions, and the order in which functions are called is made explicit with parentheses. The Python code
x % 2*pi corresponds to
(* (mod x 2) pi) in Clojure, and the Python code
x % (2*pi) corresponds to
(mod x (* 2 pi)).
Related: Origin of the word “ergodic”
4 thoughts on “Irrational rotations are ergodic”
Hi,
I believe there is a bug in your Python script. The % and * operators have same precedence, so left to right chaining happens. Adding parentheses around (2 * pi) yield a much better approximation of the integral:
0.499999994135
0.5
Thanks,
Felix
This strikes me as Monte Carlo integration, with the golden angle rotation taking the place of a pseudo-random angle generator.
Yes, much like Monte Carlo integration. Or even more like quasi-Monte Carlo, a deterministic sequence of integration points that explore a space more efficiently than random points. Note that the integral is accurate to 7 figures, but we’d only expect 3 from Monte Carlo.
That ergodicity of angles feature is what makes the Banach-Tarski paradox work, too, apparently. I never “got” it when I learned it in college, but this video () does a great job explaining it. | https://www.johndcook.com/blog/2017/06/01/irrational-rotations-are-ergodic/ | CC-MAIN-2017-26 | refinedweb | 681 | 62.88 |
OK but if size isn't optional why no error/warning message ?OK but if size isn't optional why no error/warning message ?fxm wrote:How the compiler can guess your number of bytes (> 1) to read, by passing it a dereferenced byte pointer corresponding to one byte, therefore why a warning?
By the way the FB compiler and runtime could do something like this !
Joshy
Code: Select all
#include "crt.bi" extern "C" #ifdef __FB_LINUX__ declare function getBufferSize alias "malloc_usable_size" (byval p as any ptr) as size_t #elseif defined(__FB_WIN32__) declare function getBufferSize alias "_msize" (byval p as any ptr) as size_t #else #error 666: Build trarget must be Windows or Linux ! #endif end extern const fileName = "guitarra.QSF" var hFile = FreeFile() if open(fileName,for binary,access read,as hFile) then print "error: can't read: '" & fileName & "' !" beep:sleep:end 1 end if dim as integer nBytes = lof(hFile) print "file size: " & nBytes dim as ubyte ptr fileBuffer=allocate(nBytes) get #hFile,,*fileBuffer,getBufferSize(fileBuffer) close hFile | https://www.freebasic.net/forum/viewtopic.php?f=17&p=289206&sid=121a22e6742fc3b6dce9eb97108287ae | CC-MAIN-2022-21 | refinedweb | 170 | 54.63 |
There are a number of ways you can take to get the current date. We will use the
date class of the datetime module to accomplish this task.
Example 1: Python get today's date
from datetime import date today = date.today() print("Today's date:", today)
Here, we imported the
date class from the
datetime module. Then, we used the
date.today() method to get the current local date.
By the way,
date.today() returns a
date object, which is assigned to the today variable in the above program. Now, you can use the strftime() method to create a string representing date in different formats.
Example 2: Current date in different formats
from datetime import date today = date.today() # dd/mm/YY d1 = today.strftime("%d/%m/%Y") print("d1 =", d1) # Textual month, day and year d2 = today.strftime("%B %d, %Y") print("d2 =", d2) # mm/dd/y d3 = today.strftime("%m/%d/%y") print("d3 =", d3) # Month abbreviation, day and year d4 = today.strftime("%b-%d-%Y") print("d4 =", d4)
When you run the program, the output will be something like:
d1 = 16/09/2019 d2 = September 16, 2019 d3 = 09/16/19 d4 = Sep-16-2019
If you need to get the current date and time, you can use
datetime class of the
datetime module.
Example 3: Get the current date and time
from datetime import datetime # datetime object containing current date and time now = datetime.now() print("now =", now) # dd/mm/YY H:M:S dt_string = now.strftime("%d/%m/%Y %H:%M:%S") print("date and time =", dt_string)
Here, we have used
datetime.now() to get the current date and time. Then, we used
strftime() to create a string representing date and time in another format. | https://www.programiz.com/python-programming/datetime/current-datetime | CC-MAIN-2021-04 | refinedweb | 291 | 67.65 |
import "github.com/spf13/hugo/commands"
Package commands defines and implements command-line commands and flags used by Hugo. Commands and flags are implemented using Cobra.
benchmark.go check.go commandeer.go convert.go env.go gen.go genautocomplete.go gendoc.go gendocshelper.go genman.go hugo.go import_jekyll.go limit_others.go list.go list_config.go new.go server.go undraft.go version.go
Hugo represents the Hugo sites to build. This variable is exported as it is used by at least one external library (the Hugo caddy plugin). We should provide a cleaner external API, but until then, this is it.
var HugoCmd = &cobra.Command{ Use: "hugo", Short: "hugo builds your site", Long: "" /* 211 byte string literal not displayed */, RunE: func(cmd *cobra.Command, args []string) error { cfg, err := InitializeConfig() if err != nil { return err } c, err := newCommandeer(cfg) if err != nil { return err } if buildWatch { cfg.Cfg.Set("disableLiveReload", true) c.watchConfig() } return c.build() }, }
HugoCmd is Hugo's root command. Every other command attached to HugoCmd is a child command to it.
AddCommands adds child commands to the root command HugoCmd.
Execute adds all child commands to the root command HugoCmd and sets flags appropriately.
InitializeConfig initializes a config file with sensible default configuration flags.
NewContent adds new content to a Hugo site.
NewSite creates a new Hugo site and initializes a structured Hugo directory.
NewTheme creates a new Hugo theme.
Reset resets Hugo ready for a new full build. This is mainly only useful for benchmark testing etc. via the CLI commands.
Undraft publishes the specified content by setting its draft status to false and setting its publish date to now. If the specified content is not a draft, it will log an error.
Package commands imports 44 packages (graph) and is imported by 91 packages. Updated 2017-05-22. Refresh now. Tools for package owners. | https://godoc.org/github.com/spf13/hugo/commands | CC-MAIN-2017-22 | refinedweb | 310 | 52.76 |
Learning Android Intents
Common mobile components
Due to the open source nature of the Android operating system, many companies, such as HTC and Samsung, have shipped the Android OS on their devices with different functionality and styling. Each Android phone is unique in some way and possesses features and components that differ from those of other brands and phones. But some components are common to all Android phones.
We are using two key terms here: components and features. A component is a hardware part of an Android phone, such as the camera or the Bluetooth radio. A feature is a software part of an Android phone, such as the SMS feature or the e-mail feature. This article is all about hardware components and their access and use through intents.
These common components can generally be used and implemented independently of any particular phone or model. And there is no doubt that intents are the best asynchronous messages to activate these Android components. Intents are used to notify the Android OS when some event occurs and some action should be taken; Android, on the basis of the data received, determines the receiver for the intent and triggers it. Here are a few common components found in every Android phone:
The Wi-Fi component
Every Android phone ships with full support for Wi-Fi connectivity. Newer phones running Android 4.1 and above support the Wi-Fi Direct feature as well, which allows the user to connect to nearby devices directly, without a hotspot or network access point.
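As a first taste of driving a component through an intent, the sketch below launches the system Wi-Fi settings screen with an implicit intent. This is a minimal illustration, not code from the book; the activity name is a placeholder, while `Settings.ACTION_WIFI_SETTINGS` is a standard platform constant.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.provider.Settings;

public class WifiSettingsActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Launch the system Wi-Fi settings screen through an implicit
        // intent; the Settings app resolves ACTION_WIFI_SETTINGS.
        startActivity(new Intent(Settings.ACTION_WIFI_SETTINGS));
    }
}
```

No permission is needed to open the settings screen itself; the user decides there whether to enable Wi-Fi.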
The Bluetooth component
Every Android phone includes Bluetooth support, which allows users to exchange data wirelessly over short distances with other devices. The Android application framework gives developers access to Bluetooth functionality through the Android Bluetooth APIs.
The Cellular component
No mobile phone is complete without a cellular component. Each Android phone has one for mobile communication through SMS, calls, and so on. The Android system provides flexible, high-level APIs over the telephony and cellular components for creating interesting and innovative apps.
Global Positioning System (GPS) and geo-location
GPS is a very useful but battery-hungry component in any Android phone, used for developing location-based apps. Google Maps is the best-known feature built on GPS and geo-location, and developers have produced many innovative apps and games around Google Maps and the GPS component.
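Geo-location, too, can be reached through an intent: a `geo:` URI handed to `ACTION_VIEW` lets any installed maps app (typically Google Maps) display a location. The sketch below is an assumption-laden illustration; the activity name and the coordinates are arbitrary sample values.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;

public class MapLauncherActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // A geo: URI is an implicit intent target; whichever maps app
        // is installed can offer to handle it. "z" sets the zoom level.
        Uri location = Uri.parse("geo:37.7749,-122.4194?z=14");
        startActivity(new Intent(Intent.ACTION_VIEW, location));
    }
}
```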
The Geomagnetic field component
A geomagnetic field component is found in most Android phones. It is used to estimate the magnetic field of the Earth at the phone's location and, in particular, to compute the magnetic declination from true north.
The geomagnetic field component uses the World Magnetic Model produced by the United States National Geospatial-Intelligence Agency. The model in current use is valid until 2015; newer Android releases will carry newer versions of the model.
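The platform exposes this model through the `android.hardware.GeomagneticField` class. A minimal sketch of computing declination follows; the class and constructor are part of the platform API, but the wrapper class here is hypothetical and the inputs are whatever coordinates and timestamp the caller supplies.

```java
import android.hardware.GeomagneticField;

public class DeclinationDemo {
    // Estimate magnetic declination at a given point on Earth using the
    // World Magnetic Model bundled with the platform.
    public static float declinationAt(float latDeg, float lonDeg,
                                      float altitudeMeters, long timeMillis) {
        GeomagneticField field =
                new GeomagneticField(latDeg, lonDeg, altitudeMeters, timeMillis);
        // Declination is reported in degrees east of true north.
        return field.getDeclination();
    }
}
```

Adding the declination to a compass bearing from the magnetic sensor converts it to a true-north bearing, which is what the travel-app scenario mentioned later needs.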
Sensor components
Most Android devices have built-in sensors that measure motion, orientation, and environmental conditions. These sensors sometimes act as the brains of an app: they can react to the device's surroundings (the weather, for example) and let the app interact with the user automatically. The sensors provide raw data with high precision and accuracy. For example, the gravity sensor can be used to track gestures and motions, such as tilt and shake, in an app or game; a temperature sensor can report the device's temperature; and a geomagnetic sensor (introduced in the previous section) can supply the compass bearing in a travel application. Broadly, Android sensors fall into three categories: motion, position, and environmental sensors. The following subsections discuss each type briefly.
Motion sensors
Motion sensors let the Android user monitor the motion of the device. There are both hardware-based sensors such as accelerometer, gyroscope, and software-based sensors such as gravity, linear acceleration, and rotation vector sensors. Motion sensors are used to detect a device's motion including tilt effect, shake effect, rotation, swing, and so on. If used properly, these effects can make any app or game very interesting and flexible, and can prove to provide a great user experience.
Position sensors
The two position sensors, geomagnetic sensor and orientation sensor, are used to determine the position of the mobile device. Another sensor, the proximity sensor, lets the user determine how close the face of a device is to an object. For example, when we get any call on an Android phone, placing the phone on the ear shuts off the screen, and when we hold the phone back in our hands, the screen display appears automatically. This simple application uses the proximity sensor to detect the ear (object) with the face of the device (the screen).
Environmental sensors
These sensors are not used much in Android apps, but used widely by the Android system to detect a lot of little things. For example, the temperature sensor is used to detect the temperature of the phone, and can be used in saving the battery and mobile life.
At the time of writing this article, the Samsung Galaxy S4 Android phone has just been launched. It shows a great use of environment-sensing gestures, allowing users to perform actions such as handling calls with no-touch gestures, for example by moving a hand or face in front of the phone.
Components and intents
Android phones contain a large number of components and features, which benefits both Android developers and users. Developers can use these components and features to customize the user experience. For most components, developers get two options: either extend the component and customize it according to the application's requirements, or use the built-in interfaces provided by the Android system. We won't look at the first option of extending components, as it is beyond the scope of this article. Instead, we will study the option of using the built-in interfaces for mobile components.
Generally, to use a mobile component from an Android app, the developer sends an intent to the Android system, which then acts accordingly to invoke the respective component. Intents are asynchronous messages sent to the Android OS to perform some functionality. Most mobile components can be triggered by intents with just a few lines of code and fully utilized by developers in their apps. In the following sections of this article, we will see a few components and how they are used and triggered by intents, with practical examples. We have divided the components into three groups: communication components, media components, and motion components. Now, let's discuss these components in the following sections.
Communication components
Any mobile phone's core purpose is communication, but Android phones provide many features beyond it. For communication, Android phones offer SMS/MMS, Wi-Fi, and Bluetooth. This article focuses on hardware components, so we will discuss only Wi-Fi and Bluetooth. The Android system provides built-in APIs to manage and use Bluetooth devices, settings, discoverability, and much more. It offers full network APIs not only for Bluetooth but also for Wi-Fi, covering hotspots, configuration, Internet connectivity, and more. More importantly, these APIs and components can be used very easily, with a few lines of code, through intents. We will start with Bluetooth and how it can be used through intents in the next section.
Using Bluetooth through intents
Bluetooth is a communication protocol designed for short-range, low-bandwidth, peer-to-peer communication. In this section, we will discuss how to interact with the local Bluetooth device and how to communicate with nearby remote devices over Bluetooth. Bluetooth has a very short range, but it can be used to transmit and receive data such as files and media. As of Android 2.1, only paired devices can communicate with each other over Bluetooth, because the data is encrypted.
Bluetooth APIs and libraries became available in Android 2.0 (SDK API Level 5). Note also that not every Android phone necessarily includes Bluetooth hardware.
The Bluetooth API provided by the Android system is used to perform many Bluetooth-related actions, including turning Bluetooth on and off, pairing with nearby devices, communicating with other Bluetooth devices, and much more. But not all of these actions can be performed through intents, and we will discuss only those that can. They include turning Bluetooth on and off from our Android app, tracking the Bluetooth adapter state, and making our device discoverable for a short time. The actions that can't be performed through intents include sending data and files to other Bluetooth devices, pairing with other devices, and so on. Now, let's explain these actions one by one in the following sections.
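Of these, tracking the adapter state is done by listening for the broadcast intent `BluetoothAdapter.ACTION_STATE_CHANGED`. The receiver below is a minimal sketch (the class name and log tag are placeholders); it must be registered with a matching `IntentFilter`, either in code or in the manifest.

```java
import android.bluetooth.BluetoothAdapter;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

// Tracks the Bluetooth adapter state. Register with an IntentFilter
// for BluetoothAdapter.ACTION_STATE_CHANGED.
public class BluetoothStateReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        int state = intent.getIntExtra(BluetoothAdapter.EXTRA_STATE,
                BluetoothAdapter.ERROR);
        switch (state) {
            case BluetoothAdapter.STATE_ON:
                Log.d("BtState", "Bluetooth is on");
                break;
            case BluetoothAdapter.STATE_OFF:
                Log.d("BtState", "Bluetooth is off");
                break;
            case BluetoothAdapter.STATE_TURNING_ON:
            case BluetoothAdapter.STATE_TURNING_OFF:
                Log.d("BtState", "Bluetooth state is changing");
                break;
        }
    }
}
```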
Some Bluetooth API classes
In this section, we will discuss some classes from the Android Bluetooth API that are used in virtually every Android app that uses Bluetooth. Understanding these classes will make the following examples easier to follow.
BluetoothDevice
This class represents a remote device with which the user is communicating; it is really just a thin wrapper for a Bluetooth hardware address. To perform operations involving objects of this class, developers use the BluetoothAdapter class. Objects of this class are immutable. We can get a BluetoothDevice by calling BluetoothAdapter.getRemoteDevice(String macAddress) and passing the MAC address of the device. Some important methods of this class are:
- BluetoothDevice.getAddress(): It returns the MAC address of the current device.
- BluetoothDevice.getBondState(): It returns the bonding state of the current device, such as not bonded, bonding, or bonded.
The MAC address is a string of 12 hexadecimal characters in the form xx:xx:xx:xx:xx:xx, for example, 00:11:22:AA:BB:CC.
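The two methods above can be combined into a small helper, sketched below. The wrapper class is hypothetical and the MAC address is a made-up sample; `getRemoteDevice()` and `getBondState()` are the real API calls.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;

public class RemoteDeviceDemo {
    // A made-up example address; replace with a real device's MAC.
    static final String SAMPLE_MAC = "00:11:22:AA:BB:CC";

    public static boolean isBonded(BluetoothAdapter adapter) {
        BluetoothDevice device = adapter.getRemoteDevice(SAMPLE_MAC);
        // getBondState() returns BOND_NONE, BOND_BONDING, or BOND_BONDED.
        return device.getBondState() == BluetoothDevice.BOND_BONDED;
    }
}
```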
BluetoothAdapter
This class represents the local Bluetooth adapter of the device on which our Android app is running. Note that the BluetoothAdapter class represents the current device, while the BluetoothDevice class represents other devices, whether or not they are bonded with ours. BluetoothAdapter is a singleton and cannot be instantiated directly; to get its object, call BluetoothAdapter.getDefaultAdapter(). It is the main starting point for any Bluetooth-related action. Some of its methods include BluetoothAdapter.getBondedDevices(), which returns all paired devices, and BluetoothAdapter.startDiscovery(), which searches for all discoverable devices nearby. There is also a method called startLeScan(BluetoothAdapter.LeScanCallback callback), introduced in API Level 18, which delivers a callback whenever a device is discovered.
Some of the methods in the BluetoothAdapter and BluetoothDevice classes require the BLUETOOTH permission, and some require the BLUETOOTH_ADMIN permission as well. So, when using these classes in your app, don't forget to add these permissions in your Android manifest file.
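As a hedged sketch of typical BluetoothAdapter usage (the log tag is an assumption; getBondedDevices() needs BLUETOOTH, and startDiscovery() also needs BLUETOOTH_ADMIN):

```java
// Sketch: enumerate paired devices and start discovery
BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
if (adapter != null && adapter.isEnabled()) {
    // All devices already paired with this phone
    Set<BluetoothDevice> paired = adapter.getBondedDevices();
    for (BluetoothDevice device : paired) {
        Log.d("BT", device.getName() + " " + device.getAddress());
    }
    // Asynchronous scan; discovered devices arrive via broadcast intents
    adapter.startDiscovery();
}
```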
So far, we have discussed some Bluetooth classes in the Android OS along with some of the methods in those classes. In the next section, we will develop our first Android app that will ask the user to turn on the Bluetooth.
Turning on the Bluetooth app
To perform any Bluetooth action, Bluetooth must first be turned on. So, in this section, we will develop an Android app that asks the user to turn on Bluetooth if it is not already on. The user can accept, in which case Bluetooth is turned on, or reject the request, in which case the application continues and Bluetooth remains off. Conveniently, this action can be performed very easily using intents. Let's see how by looking at the code.
First, create an empty Android project in your favourite IDE. We developed ours in Android Studio which, at the time of writing, was still in preview mode with its beta launch expected soon. Now, we will modify two files from the project to make our Android Bluetooth app. Let's see those files in the following sections.
The MainActivity.java file
This class represents the main activity of our Android app. The following code is implemented in this class:
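The listing below is a sketch reconstructed from the walkthrough that follows; the constant name and toast texts match the text, while the exact formatting is an assumption:

```java
public class MainActivity extends Activity {
    // Request code used to match our request in onActivityResult()
    final int BLUETOOTH_REQUEST_CODE = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // Ask the Android system to enable Bluetooth via an intent
        String enableBT = BluetoothAdapter.ACTION_REQUEST_ENABLE;
        Intent bluetoothIntent = new Intent(enableBT);
        startActivityForResult(bluetoothIntent, BLUETOOTH_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == BLUETOOTH_REQUEST_CODE) {
            if (resultCode == RESULT_OK) {
                Toast.makeText(this, "Turned On", Toast.LENGTH_SHORT).show();
            } else if (resultCode == RESULT_CANCELED) {
                Toast.makeText(this, "Didn't Turn On", Toast.LENGTH_SHORT).show();
            }
        }
    }
}
```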
In our activity, we have declared a constant named BLUETOOTH_REQUEST_CODE. This constant is used as a unique request identifier in the communication between our app and the Android system. When we ask the Android OS to perform some action, we pass a request code; after the action completes, the system returns the same request code to us. By comparing the returned code with our own, we know which of our requests has been handled; if the codes don't match, the result belongs to some other request. In the onCreate() method, we set the layout of the activity by calling the setContentView() method, and then we perform our real task in the next few lines.
We create a string enableBT that holds the value of the ACTION_REQUEST_ENABLE constant of the BluetoothAdapter class. This string is passed to the intent constructor to tell the intent that it is meant to enable Bluetooth. Besides the Bluetooth-enable request string, the Android OS also defines many other request actions for features such as Wi-Fi, sensors, the camera, and more; we will learn about a few of them in this article. After creating the request string, we create our intent, pass the request string to it, and start it with the startActivityForResult() method.
Basically, the startActivity() method simply starts the activity passed through the intent, whereas startActivityForResult() starts the activity and, after the action is performed, returns to the original activity with the result of the action. In this example, the started activity asks the Android system to enable Bluetooth; the system asks the user whether it should enable the device, and then returns the result to the activity that started the intent. To receive results from other activities, we override the onActivityResult() method, which is called on returning from those activities. It takes three parameters: requestCode, resultCode, and dataIntent. The requestCode parameter is the integer request code that the developer supplied, resultCode tells us whether the action completed with a positive or a negative response, and dataIntent carries the data returned by the called activity. Now, let's see our overridden method in detail. We first check whether requestCode equals BLUETOOTH_REQUEST_CODE. If it does, we compare the result code: if it is RESULT_OK, Bluetooth has been enabled, and we display a toast notifying the user; otherwise, Bluetooth has not been enabled, and here also we notify the user with a toast.
This was the activity class that performs the core functionality of our Bluetooth-enabling app. Now, let's see the Android manifest file in the following section.
The AndroidManifest.xml file
The AndroidManifest.xml file contains all the necessary settings and preferences for the app. The following is the code contained in this manifest file:
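A sketch of the manifest consistent with the permissions described below; the package name and application label are placeholders:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.bluetoothapp">

    <uses-permission android:name="android.permission.BLUETOOTH" />
    <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

    <application android:label="@string/app_name">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```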
Any Android application that uses Bluetooth must hold the Bluetooth permissions. To request them, the developer declares <uses-permission> tags in the Android manifest file with the necessary permission names. As shown in the code, we have requested two permissions: android.permission.BLUETOOTH and android.permission.BLUETOOTH_ADMIN. For most Bluetooth-enabled apps, the BLUETOOTH permission alone does most of the work; BLUETOOTH_ADMIN is only for apps that use Bluetooth admin operations such as making the device discoverable, searching for other devices, pairing, and so on. When the user first installs the application, they are shown which permissions the app needs. If the user accepts and grants the permissions, the app gets installed; otherwise, it can't be installed.
After discussing the Android manifest and activity files, we can test our project by compiling and running it. When we run the project, we should see screens like the following screenshots:
Enabling Bluetooth App
As the app starts, the user is presented with a dialog asking whether to enable Bluetooth. If the user chooses Yes, Bluetooth is turned on, and a toast displays the updated status.
Tracking the Bluetooth adapter state
In the previous example, we saw how we can turn on the Bluetooth device just by passing the intent of the Bluetooth request to the Android system in just a few lines. But enabling and disabling the Bluetooth are time-consuming and asynchronous operations. So, instead of polling the state of the Bluetooth adapter, we can use a broadcast receiver for the state change. In this example, we will see how we can track the Bluetooth state using intents in a broadcast receiver.
This example is the extension of the previous example, and we will use the same code and add a new code to it. Let's look at the code now. We have three files, MainActivity.java, BluetoothStateReceiver.java, and AndroidManifest.xml. Let's discuss these files one by one.
The MainActivity.java file
This class represents the main activity of our Android app. The following code is implemented in this class:
```java
public class MainActivity extends Activity {
    final int BLUETOOTH_REQUEST_CODE = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        registerReceiver(new BluetoothStateReceiver(),
                new IntentFilter(BluetoothAdapter.ACTION_STATE_CHANGED));
        String enableBT = BluetoothAdapter.ACTION_REQUEST_ENABLE;
        Intent bluetoothIntent = new Intent(enableBT);
        startActivityForResult(bluetoothIntent, BLUETOOTH_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            if (requestCode == BLUETOOTH_REQUEST_CODE) {
                Toast.makeText(this, "Turned On", Toast.LENGTH_SHORT).show();
            }
        } else if (resultCode == RESULT_CANCELED) {
            Toast.makeText(this, "Didn't Turn On", Toast.LENGTH_SHORT).show();
        }
    }
}
```
From the code, it is clear that the code is almost the same as in the previous example. The only difference is that we have added one line after setting the content view of the activity. We called the registerReceiver() method that registers any broadcast receiver with the Android system programmatically. We can also register the receivers via XML by declaring them in the Android manifest file. A broadcast receiver is used to receive the broadcasts sent from the Android system.
While performing general actions such as turning Bluetooth on or turning Wi-Fi on or off, the Android system sends broadcast notifications that developers can use to detect state changes on the device. There are two types of broadcasts. Normal broadcasts are completely asynchronous: their receivers run in an undefined order, and multiple receivers can receive the broadcast at the same time, which makes them more efficient than the other type, ordered broadcasts. Ordered broadcasts are delivered to one receiver at a time; as each receiver finishes, it either passes its result on to the next receiver or aborts the broadcast entirely, in which case the remaining receivers never receive it.
Although the Intent class is used for sending and receiving broadcasts, the intent broadcast is a completely different mechanism and is separate from the intents used in the startActivity() method. There is no way for the broadcast receiver to see or capture the intents used with the startActivity() method. The main difference between these two intent mechanisms is that the intents used in the startActivity() method perform the foreground operation that the user is currently engaged in. However, the intent used with the broadcast receivers performs some background operations that the user is not aware of.
In our activity code, we used the registerReceiver() method to register an object of our customized broadcast receiver, defined in the BluetoothStateReceiver class, and we passed the intent filter BluetoothAdapter.ACTION_STATE_CHANGED to match the receiver's purpose. This filter tells the system that our broadcast receiver is used to detect Bluetooth state changes in the app. After registering the receiver, we create an intent passing BluetoothAdapter.ACTION_REQUEST_ENABLE, telling the system to turn on Bluetooth. Finally, we start our action by calling startActivityForResult(), and we compare the results in the onActivityResult() method to see whether Bluetooth was turned on or not. You can read about these steps in the previous example of this article.
When you register a receiver in the onCreate() or onResume() method of the activity, you should unregister it in the corresponding onDestroy() or onPause() method. The advantage of this approach is that you won't receive broadcasts while the app is paused or closed, which cuts unnecessary overhead and results in better battery life.
Now, let's see the code of our customized broadcast receiver class.
The BluetoothStateReceiver.java file
This class represents our customized broadcast receiver that tracks the Bluetooth state change. We override the onReceive() method and perform the main functionality of tracking the Bluetooth device status there. First, we create a string variable holding the key of the state extra, BluetoothAdapter.EXTRA_STATE. We can pass this key to the intent's getter to retrieve our required data; since the states are integers stored as extras, we call Intent.getIntExtra() and pass the key along with -1 as the default value. Now that we have the current state code, we can compare it with the predefined codes in BluetoothAdapter to determine the state of the Bluetooth device. There are four predefined states.
- STATE_TURNING_ON: This state notifies the user that the Bluetooth turn-on operation is in progress.
- STATE_ON: This state notifies the user that Bluetooth has already been turned on.
- STATE_TURNING_OFF: This state notifies the user that the Bluetooth device is being turned off.
- STATE_OFF: This state notifies the user that the Bluetooth has been turned off.
We compare our state with these constants and display a toast according to the result we get. The Android manifest file is the same as in the previous example.
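Putting the pieces together, the receiver might look like this sketch; the toast messages are assumptions:

```java
public class BluetoothStateReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // EXTRA_STATE is the lookup key; -1 is our default value
        int state = intent.getIntExtra(BluetoothAdapter.EXTRA_STATE, -1);
        switch (state) {
            case BluetoothAdapter.STATE_TURNING_ON:
                Toast.makeText(context, "Bluetooth turning on", Toast.LENGTH_SHORT).show();
                break;
            case BluetoothAdapter.STATE_ON:
                Toast.makeText(context, "Bluetooth on", Toast.LENGTH_SHORT).show();
                break;
            case BluetoothAdapter.STATE_TURNING_OFF:
                Toast.makeText(context, "Bluetooth turning off", Toast.LENGTH_SHORT).show();
                break;
            case BluetoothAdapter.STATE_OFF:
                Toast.makeText(context, "Bluetooth off", Toast.LENGTH_SHORT).show();
                break;
        }
    }
}
```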
Thus, in a nutshell, we discussed how we can enable the Bluetooth device and ask the user to turn it on or off through intents. We also saw how to track the state of the Bluetooth operations using intents in the broadcast receiver and displaying the toasts. The following screenshots show the application demo:
Enabling the Bluetooth App
Being discoverable
So far, we have only been interacting with Bluetooth by turning it on or off. But, to start communication via Bluetooth, one's device must be discoverable before pairing can start. We will not build a full example for this use of intents; instead, we will only explain how it can be done. To turn on Bluetooth, we used the BluetoothAdapter.ACTION_REQUEST_ENABLE intent: we passed the intent to the startActivityForResult() method and checked the result in the onActivityResult() method. To make the device discoverable, we can instead pass the BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE string in the intent, pass this intent to the startActivityForResult() method, and compare the results in the onActivityResult() method.
The following code snippet shows the intent-creation process for making a device discoverable:
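A minimal sketch of that snippet; the DISCOVERABLE_REQUEST_CODE constant and the duration value are assumptions:

```java
// Sketch: ask the system to make this device discoverable
String discoverBT = BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE;
Intent discoverIntent = new Intent(discoverBT);
// Optional extra requesting a discoverability window in seconds
// (a hypothetical value; a default window applies if it is omitted)
discoverIntent.putExtra(BluetoothAdapter.EXTRA_DISCOVERABLE_DURATION, 300);
startActivityForResult(discoverIntent, DISCOVERABLE_REQUEST_CODE);
```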
In the code, you can see that there is nothing new that hasn't been discussed earlier. Only the intent action string type has been changed, and the rest is the same. This is the power of intents; you can do almost anything with just a few lines of code in a matter of minutes.
Monitoring the discoverability modes
As we tracked the state changes of Bluetooth, we can also monitor the discoverability mode using exactly the same method explained earlier in this article. We have to create a customized broadcast receiver by extending the BroadcastReceiver class. In the onReceive() method, we will get two extra strings: BluetoothAdapter.EXTRA_PREVIOUS_SCAN_MODE, and BluetoothAdapter.EXTRA_SCAN_MODE. Then, we pass those strings in the Intent.getIntExtra() method to get the integer values for the mode, and then we compare these integers with the predefined modes to detect our mode. The following code snippet shows the code sample:
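A sketch of such a receiver; the class name ScanModeReceiver and the toast texts are assumptions:

```java
public class ScanModeReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Current and previous scan modes, with -1 as the default value
        int mode = intent.getIntExtra(BluetoothAdapter.EXTRA_SCAN_MODE, -1);
        int previous = intent.getIntExtra(BluetoothAdapter.EXTRA_PREVIOUS_SCAN_MODE, -1);
        if (mode == BluetoothAdapter.SCAN_MODE_CONNECTABLE_DISCOVERABLE) {
            Toast.makeText(context, "Discoverable and connectable", Toast.LENGTH_SHORT).show();
        } else if (mode == BluetoothAdapter.SCAN_MODE_CONNECTABLE) {
            Toast.makeText(context, "Connectable only", Toast.LENGTH_SHORT).show();
        } else if (mode == BluetoothAdapter.SCAN_MODE_NONE) {
            Toast.makeText(context, "Neither discoverable nor connectable", Toast.LENGTH_SHORT).show();
        }
    }
}
```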
Communication via Bluetooth
The Bluetooth communication APIs are just wrappers around RFCOMM, the standard Bluetooth radio frequency communication protocol. To communicate, Bluetooth devices must be paired with each other. We can carry out bidirectional communication via Bluetooth using the BluetoothServerSocket class, which establishes a listening socket for initiating a link between devices, and the BluetoothSocket class, which represents a connected socket; on the server side, a new BluetoothSocket is returned by the server socket once a connection is accepted. We will not discuss Bluetooth communication further because it is beyond the scope of this article.
Using Wi-Fi through intents
Today, the Internet and its vast usage on mobile phones have made worldwide information available on the go. Almost every Android phone user expects apps to make good use of the Internet, so it becomes the developer's responsibility to add Internet access to the app. For example, when users use your apps, they like to share their activity, such as completing a level of a game or reading an article in a news app, with their friends on various social networks or by sending messages. If your app doesn't connect users to the Internet, social platforms, or worldwide information, it can feel too limited and perhaps boring.
To perform any activity that uses the Internet, we first have to deal with Internet connectivity itself, such as whether the phone has an active connection. In this section, we will see how we can access Internet connectivity through our core topic, intents. Like Bluetooth, much Internet-connectivity work can be done through intents. We will implement three main examples: checking the Internet status of a phone, picking any available Wi-Fi network, and opening the Wi-Fi settings. Let's start with our first example of checking the Internet connectivity status of a phone using intents.
Checking the Internet connectivity status
Before we start coding our example, we need to know some important things. An Android phone can be connected to the Internet in more than one way: through a data connection or through any open or secured Wi-Fi network. A data connection, also called a mobile connection, goes via the mobile network provided by the SIM and the service provider. In this example, we will detect whether the phone is connected to any network and, if it is, which type of network it is connected to. Let's implement the code now.
There are two main files that perform the functionality of the app: NetworkStatusReceiver.java and AndroidManifest.xml. You might be wondering about the MainActivity.java file. It is not used in this example because of the app's requirements: whenever the Internet connectivity status of the phone changes, such as Wi-Fi being turned on or off, the app displays a toast showing the new status. Since the app does its work in the background, no activity or layout is needed. Now, let's explain these files one by one:
The NetworkStatusReceiver.java file
This class represents our customized broadcast receiver that tracks changes in the network connectivity of the phone. We override the onReceive() method and perform the main functionality of tracking the network status there. We have registered this receiver in the Android manifest file for network status changes, and we will discuss that file in the next section. The onReceive() method is called only when the network status changes, so we first display a toast stating that the network connectivity status has changed.
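A sketch of the receiver, reconstructed from the walkthrough that follows; the toast texts are assumptions:

```java
public class NetworkStatusReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Called whenever the connectivity status changes
        Toast.makeText(context, "Network connectivity changed", Toast.LENGTH_SHORT).show();

        Bundle extras = intent.getExtras();
        // true when there is a complete lack of connectivity
        boolean noConnectivity = intent.getBooleanExtra(
                ConnectivityManager.EXTRA_NO_CONNECTIVITY, false);

        if (extras != null) {
            String infoKey = ConnectivityManager.EXTRA_NETWORK_INFO;
            NetworkInfo info = (NetworkInfo) extras.get(infoKey);
            if (info != null && info.getState() == NetworkInfo.State.CONNECTED) {
                // getTypeName() returns "MOBILE" or "WIFI" as appropriate
                Toast.makeText(context, info.getTypeName() + " connected",
                        Toast.LENGTH_SHORT).show();
            } else if (noConnectivity) {
                Toast.makeText(context, "No network available", Toast.LENGTH_SHORT).show();
            }
        }
    }
}
```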
It must be noted that inside a broadcast receiver we cannot pass this as the context parameter of Toast, as we do in an Activity, because the BroadcastReceiver class doesn't extend the Context class the way the Activity class does.
We have already notified the user that the network status changed, but not which change occurred. This is where our intent object comes in handy: it contains all the network information and data in the form of extras, which are carried in a Bundle object. We create a local Bundle reference and store the intent extras in it by calling the getExtras() method. Along with it, we also store the no-connectivity extra in a boolean variable. EXTRA_NO_CONNECTIVITY is the lookup key for a boolean that indicates a complete lack of network connectivity; if this value is true, no network is available.
After storing our required extras, we need to check whether they are available. So, we compare the extras with null, and if they are available, we extract more network information from them. In the Android system, data of interest is identified by constant string keys. We first get our network-information key, EXTRA_NETWORK_INFO, store it in a string variable, and then use it as the key parameter in the get() method of the extras. The Bundle.get() method returns an Object, which we need to cast to the required class; since we are looking for network information, we cast it to a NetworkInfo object.
The ConnectivityManager.EXTRA_NETWORK_INFO extra was deprecated in API Level 14. Since NetworkInfo can vary based on the User ID ( UID ), the application should always obtain the network information through the getActiveNetworkInfo() or getAllNetworkInfo() method.
Now that we have all our values and data of interest, we compare and check the data to find the connectivity status. We check whether the NetworkInfo data is null; if it is not, we check whether the network is connected by examining the value returned by the getState() method of NetworkInfo. NetworkInfo.State is an enum representing the coarse-grained network state. If it equals NetworkInfo.State.CONNECTED, the phone is connected to some network. Remember that at this point we still don't know which type of network we are connected to; we can find the type by calling the NetworkInfo.getTypeName() method, which returns Mobile or Wi-Fi in the respective cases.
The coarse-grained network state is used in apps more often than DetailedState. The difference between the two mappings is that the coarse-grained state exposes only broad states such as CONNECTING, CONNECTED, DISCONNECTING, and DISCONNECTED, whereas DetailedState adds finer-grained states for more detail, such as IDLE, SCANNING, AUTHENTICATING, UNAVAILABLE, and FAILED, alongside the coarse-grained ones.
The rest is an if-else block checking the state of the network and showing the relative toasts of status on the screen. Overall, we first extracted our extra objects from intent, stored them in local variables, extracted network info from extras, checked the state, and finally displayed the info in the form of toasts. Now, we will discuss the Android manifest file in the next section.
The AndroidManifest.xml file
As we have used a broadcast receiver in our application to detect the network connectivity status, it is necessary to register the broadcast receiver in the app. In our manifest file, we have performed two main tasks. First, we have added the permission to access the network state by declaring android.permission.ACCESS_NETWORK_STATE. Second, we have registered our receiver using the receiver tag with the name of the class.
Also, we have added the intent filters. These intent filters define the purpose of the receiver, such as what type of data should be received from the system. We have used the android.net.conn.CONNECTIVITY_CHANGE filter action for detecting the network connectivity change broadcast. The following is the code implementation of the file:
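A sketch of the relevant manifest entries; the package name and label are placeholders:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.networkstatus">

    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

    <application android:label="@string/app_name">
        <receiver android:name=".NetworkStatusReceiver">
            <intent-filter>
                <action android:name="android.net.conn.CONNECTIVITY_CHANGE" />
            </intent-filter>
        </receiver>
    </application>
</manifest>
```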
Summarizing the details of the preceding app, we created a customized broadcast receiver, and defined our custom behavior of network change, that is, displaying toasts, and then we registered our receiver in the manifest file along with the declarations of the required permissions. The following screenshots show a simple demo of the app when turning the Wi-Fi on in the phone:
The Network Change Status app
In the previous screenshot, we can see that when we turn Wi-Fi on, the app first displays a toast saying that the network status has changed and then a toast showing the change itself; in our case, Wi-Fi is connected. You might be wondering about the role of intents in this app; it would not have been possible without them. The first use of intents was in registering the receiver in the manifest file with a filter for network status changes. The second use was inside the receiver, where, on receiving an update, we extracted the data from the intent in the form of extras and used it for our purpose. We didn't create our own intents in this example; we only consumed the ones provided by the system. In our next example, we will create our own intents and use them to open the Wi-Fi settings from our app.
Opening the Wi-Fi Settings app
Until now, we have only consumed intents for network and Wi-Fi purposes. In this example, we are going to create our own intent objects and use them in our app. In the previous example, we detected the phone's network status changes and displayed them on the screen. In this example, we will add a button to the same app: on tapping it, the app will open the Wi-Fi settings, from where the user can turn Wi-Fi on or off. As the user performs any action, the app will display the network status change on the screen. For the network status, we reuse the NetworkStatusReceiver.java and AndroidManifest.xml files. Now, let's open the same project and change our MainActivity.java and activity_main.xml files to add a button and its functionality. Let's see these two files one by one:
The activity_main.xml file
This file is a visual layout of our main activity file. We will add a button view in this XML file. The code implementation of the file is as follows:
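A sketch of the layout with the button; the button text and positioning are assumptions, while the ID matches the text:

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/btnWifiSettings"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="Wi-Fi Settings" />
</RelativeLayout>
```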
We have added a button to the layout with the view ID btnWifiSettings. We will use this ID to get the button view from the layout in the activity file. Let's now see our main activity file that uses this layout as its visual content.
The MainActivity.java file
This file represents the main activity file as a launcher point of the app. We will implement our button's core functionality in this file. The code implementation of the file is as follows:
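A sketch of the activity consistent with the walkthrough below:

```java
public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // Get the button declared in activity_main.xml
        Button button = (Button) findViewById(R.id.btnWifiSettings);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Open the system Wi-Fi settings screen via an intent
                Intent wifiIntent = new Intent(Settings.ACTION_WIFI_SETTINGS);
                startActivity(wifiIntent);
            }
        });
    }
}
```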
As discussed before, we have extended our class from the Activity class and overridden its onCreate() method. After calling the super method, we first set our layout file (explained in the previous section) using the setContentView() method, passing the layout ID as the parameter. With the layout set, we extract our Wi-Fi settings button from it by calling the findViewById() method. Remember, we set the button view's ID to btnWifiSettings, so we pass this ID to the method as an argument and store the returned reference in a local Button object. Then, we set a View.OnClickListener on the button to perform our task on a button click: we pass an anonymous OnClickListener object to the button.setOnClickListener() method and override its onClick() method.
So far, we have only performed the initial setup for our app. Now, let's focus on the task of opening the Wi-Fi settings. We create an Intent object and pass it a constant string to tell the intent what to start. We use the Settings.ACTION_WIFI_SETTINGS constant, which shows the settings screen that allows the configuration of Wi-Fi. After creating the Intent object, we pass it to the startActivity() method to open the activity containing the Wi-Fi settings. It is that simple, with no rocket science at all. When we run the app, we will see something similar to the following screenshots:
Opening the Wi-Fi Settings app
As seen from the preceding screenshot, when we click or tap the Wi-Fi Settings button, it will open the Wi-Fi settings screen of the Android phone. On changing the settings, such as turning on the Wi-Fi, it will display the toasts to show the updated changes and network status.
We have finished discussing the communication components using intents, in which we used Bluetooth and Wi-Fi via intents and saw how these can be used in various examples and applications. Now, we will discuss how the media components can be used via intents and what we can do for media components in the following sections.
Media components
The preceding section was all about communication components. A key difference between older phones and new smartphones is their media capability, such as high-definition audio and video features, and the multimedia capabilities of mobile phones have become a significant consideration for many consumers. Fortunately, the Android system provides multimedia APIs for many features, such as playing and recording a wide range of image, audio, and video formats, both locally and streamed. Describing all the media components in detail is beyond the scope of this article; we will only discuss those that can be triggered, used, and accessed through intents. The components discussed in this section include using intents to take pictures, using intents to record video, speech recognition using intents, and the role of intents in text-to-speech conversion. The first three topics use intents to perform their actions, but the last one, text-to-speech conversion, does not rely entirely on intents. We will also develop a sample application to see the intents in action. Let's discuss these topics one by one in the following subsections.
Using intents to take pictures
Today, almost every phone has a digital camera. The popularity of cameras embedded in mobile phones has driven their prices down along with their size, and Android phones include cameras ranging from 3.2 to 32 megapixels. From the development perspective, pictures can be taken via many different methods; the Android system also provides full APIs for camera control, but we will focus on the one method that uses intents. This is the easiest way to take pictures in Android development and requires no more than a few lines of code.
We will first create a layout with the image View and button. Then, in the Activity class, we will get the references of our views from the layout file, and set the click listener of the button. On clicking the button, we will create the intent of the capture image, and start another activity as a child class. After getting the result, we will display that captured image in our image View.
So, with the basic empty Hello World project ready, we will change three files and add our code to it. The files are activity_main.xml, MainActivity.java, and AndroidManifest.xml. Let's explain the changes in each file one by one:
The activity_main.xml file
This file represents the visual layout for the file. We will add an ImageView tag to show the captured image and a Button tag to take a picture and trigger the camera.
The code implementation of the file is as follows:
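A sketch of the layout described below; the exact attribute values are assumptions, while the IDs match the text:

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imageView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:src="@drawable/ic_launcher" />

    <Button
        android:id="@+id/btnTakePicture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/imageView1"
        android:layout_centerHorizontal="true"
        android:text="Take Picture" />
</RelativeLayout>
```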
As you can see in the code, we have placed an ImageView tag in the relative layout with the ID imageView1. This ID will be used in the main activity file to extract the view from the layout for use in the Java file. We have centred the view horizontally by setting android:layout_centerHorizontal to true, and initially we show the app's launcher icon as the default image in the image view. Below the image view, we have placed a button view; tapping it will start the camera. The button's ID is set to btnTakePicture, and it is positioned below the image view by the android:layout_below attribute. This relative positioning is the main advantage of relative layouts over linear layouts. So now, let's have a look at the activity of the app that performs the main functionality and uses this layout as its visual part.
The MainActivity.java file
This file represents the main launching activity of the app. It uses the activity_main.xml file as its visual part and extends the Activity class. The code implementation of the file is as follows:
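A sketch of the activity reconstructed from the walkthrough below; the TAKE_IMAGE_CODE value is an assumption:

```java
public class MainActivity extends Activity {
    // Request code for the image-capture intent (value is an assumption)
    static final int TAKE_IMAGE_CODE = 1;
    ImageView takenImage;
    Button imageButton;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        takenImage = (ImageView) findViewById(R.id.imageView1);
        imageButton = (Button) findViewById(R.id.btnTakePicture);
        imageButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Ask Android to start a camera app for image capture
                Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
                startActivityForResult(cameraIntent, TAKE_IMAGE_CODE);
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == TAKE_IMAGE_CODE && resultCode == RESULT_OK) {
            // The captured thumbnail is returned in the "data" extra
            Bitmap photo = (Bitmap) data.getExtras().get("data");
            takenImage.setImageBitmap(photo);
        }
    }
}
```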
We start our class by overriding the onCreate() method of the activity. We set the visual layout of the activity to the activity_main.xml layout by calling the setContentView() method. Now, as the layout is set, we can get references to the views in the layout file.
We create two fields in the class: takenImage of the ImageView class, used to show the captured image, and imageButton of the Button class, used to trigger the camera by clicking on it. The onClick() method will be called when the button is tapped or clicked, so we define our camera-triggering code in this method. Here, we create an instance of the Intent class and pass the MediaStore.ACTION_IMAGE_CAPTURE constant in the constructor. This constant tells Android that the intent is for the purpose of image capture, and Android will start the camera on starting this intent. If a user has installed more than one camera app, Android will present a list of all valid camera apps, and the user can choose any of them to take the image.
After creating an intent instance, we pass this intent object in the startActivityForResult() method. In our picture-capturing app, clicking on the button will start another activity of the camera. And when we close the camera activity, it will come back to the original activity of our app and give us some result of the captured picture. So, to get the result in any activity, we have to override the onActivityResult() method. This method is called when the parent activity is started after the child activity is completed. When this method is called, it means that we have used the camera and are now back to our parent activity. If the result is successful, we can display the captured image in the image View.
First, we determine whether this method was called after the camera or after some other action. For this purpose, we compare the requestCode parameter of the method. Remember, when calling the startActivityForResult() method, we passed the TAKE_IMAGE_CODE constant as the second parameter. This is the request code to be compared against.
After that, to check the result, we can see the resultCode parameter of the method. As we used this code for the camera picture intent, we will compare our resultCode with the RESULT_OK constant. After the success of both conditions, we can conclude that we have received our image. So, we use the intent to get our image data by calling the getExtras().get() method. This will give us the Object type of data. We further typecast it to Bitmap to prepare it for ImageView.
Finally, we call the setImageBitmap method to set the new bitmap to our image View. If you run the code, you will see an icon image and a button. After clicking on the button, the camera will be started. When you take the picture, the app will crash and shut down. You can see it in the following screenshots:
The app crashed after taking a picture
You might be wondering why the crash occurred. We forgot to mention one thing: whenever an app uses the camera, we have to add the uses-feature tag in our manifest file to declare that the app will use the camera feature. Let's see our Android manifest file to understand the uses-feature tag.
The AndroidManifest.xml file
This file defines all the settings and features to be used in our app. There is only one new thing that we haven't seen. The code implementation of the file is as follows:
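The listing is omitted here; a minimal manifest consistent with the description (the package name and application attributes are assumptions) might be:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.camerademo">

    <uses-feature android:name="android.hardware.camera" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```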
You can see that we have added the uses-feature tag and assigned android.hardware.camera to the android:name property. This tag declares that the app uses the camera, and the Android OS then allows our app to access the camera.
After adding this line in the manifest file and running the code, you will see something similar to the following screenshot if you have more than one camera app in your phone:
Taking pictures through the intents app
In the screenshot, you can see that the user is asked to choose the camera, and when a picture is taken, the image is shown in the app.
To summarize the code: we first created a layout with an image view and button. Then, in the Activity class, we got references to our views from the layout file and set the click listener of the button. After clicking on the button, we created the image-capture intent and started another activity as the child activity. After getting the result, we displayed the captured image in our image view. It was as easy as a walk in the park. In the next section, we will see how we can record video using intents.
Using intents to record video
Until now, we have seen how to take pictures using intents. In this section, we will see how we can record video using intents. We will not discuss the whole project in this section; the procedure to record videos using intents is almost the same as for taking pictures, with a few minor changes. We will only discuss those changes in this section. Now, let's see how the app works to record video.
The first change that we have made is in our layout file. We removed the image view section and placed a VideoView tag in its place. The following code implementation shows that tag:
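The tag listing is missing from this excerpt; it presumably looked something like this (the ID is an assumption):

```xml
<VideoView
    android:id="@+id/videoView1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_centerHorizontal="true" />
```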
You can see that everything is the same as it was for ImageView. Now, as we have changed the image view to a video view in our layout, we have to change that in our activity as well. Just as we did for ImageView, we will create a field object of VideoView and get the reference in our onCreate() method of the activity. The following code sample shows the VideoView field and the line that obtains its reference:
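The listing is omitted here; a sketch of the field and its lookup (the field and ID names are assumptions) could be:

```java
private VideoView recordedVideo; // field name is an assumption

// inside onCreate(), after setContentView():
recordedVideo = (VideoView) findViewById(R.id.videoView1);
```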
Everything else is the same, and we have already discussed it. Now, in our onClick() method, we will see how we send the intent that triggers the video recording. The code implementation to be put in the onClick() method to send an intent is as follows:
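The listing is omitted here; based on the explanation that follows, a sketch (the request-code name is an assumption) might be:

```java
Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
// 1 = high quality; the extra is read by the camera app
intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
startActivityForResult(intent, TAKE_VIDEO_CODE);
```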
You can see that we have created an intent object, and instead of passing MediaStore.ACTION_IMAGE_CAPTURE, we have passed MediaStore.ACTION_VIDEO_CAPTURE in the constructor of the intent. Also, we have put an extra object in the intent by calling the putExtra() method. We have put the extra object defining the video quality as high by assigning the MediaStore.EXTRA_VIDEO_QUALITY value to 1. Then, we pass the intent in the startActivityForResult() method again to start the camera activity.
The next change is in the onActivityResult() method when we get the video from the intent. The following code shows some sample code to get the video and pass it in the VideoView tag and play it:
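The listing is omitted here; a sketch consistent with the description (the field and constant names are assumptions) could be:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TAKE_VIDEO_CODE && resultCode == RESULT_OK) {
        Uri videoUri = data.getData(); // a reference to the recorded video
        recordedVideo.setVideoURI(videoUri);
        recordedVideo.start();
    }
}
```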
In the case of taking a picture, we retrieved raw data from the intent, cast it to Bitmap, and then set our ImageView to that Bitmap. But here, in the case of recording a video, we only get the URI of the video. The Uri object refers to data stored on the mobile phone. We get the URI of the video and set it in our VideoView using the setVideoURI() method. Finally, we play the video by calling the VideoView.start() method.
From these sections, you can see how easy it is to use the intents to capture images or record videos. Through intents, we are using the already built-in camera or camera apps. If we want our own custom camera to capture images and videos, we have to use the Camera APIs of Android.
We can use the MediaPlayer class to play video, audio, and so on. The MediaPlayer class contains methods like start(), stop(), seekTo(), isLooping(), setVolume(), and much more. To record a video, we can use the MediaRecorder class. This class contains methods including start(), stop(), release(), setAudioSource(), setVideoSource(), setOutputFormat(), setAudioEncoder(), setVideoEncoder(), setOutputFile(), and much more.
When you are using the MediaRecorder APIs in your app, don't forget to add the android.permission.RECORD_AUDIO and android.permission.CAMERA permissions in your manifest file.
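As a rough illustration of how those MediaRecorder methods fit together (the output path and the chosen formats are assumptions, and error handling is trimmed):

```java
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setOutputFile("/sdcard/recording.mp4"); // path is an assumption

try {
    recorder.prepare();
    recorder.start();
    // ... record for a while ...
    recorder.stop();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    recorder.release();
}
```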
To take pictures without using intents, we can use the Camera class. This class includes the methods open(), release(), startPreview(), stopPreview(), takePicture(), and much more.
When you are using the Camera APIs in your app, don't forget to add the android.permission.CAMERA permission in your manifest file.
Until now, we have used the visual media components for videos and pictures using intents. In the next sections, we will use the audio components of a phone using intents, and we will see how we can use speech recognition and text-to-speech support through intents.
Speech recognition using intents
Voice recognition on smartphones became a very big achievement, especially for people with disabilities. Android introduced speech recognition in API Level 3, in Version 1.5. Android supports voice input and speech recognition using the RecognizerIntent class. Android's default keyboard contains a button with a microphone icon on it, which allows the user to speak instead of typing text. It uses the speech-recognition API for this purpose. The following screenshot shows the keyboard with the microphone button on it:
Android's default keyboard with the microphone button
In this section, we will create a sample application that will have a button and text field. After clicking on the button, Android's standard voice-input dialog will be displayed, and the user will be asked to speak something. The app will try to recognize whatever the user speaks and type it in the text field. We will start by creating an empty project in the Android Studio or any other IDE, and we will modify two files in it. Let's start with our layout file in the next section.
The activity_main.xml file
This file represents the visual content of the app. We will add the text field and button view in this file. The code implementation of the file is as follows:
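The listing is omitted here; a minimal layout consistent with the description (the EditText ID is an assumption; btnRecognize comes from the text) might be:

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <EditText
        android:id="@+id/txtSpeech"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:inputType="textMultiLine" />

    <Button
        android:id="@+id/btnRecognize"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/txtSpeech"
        android:text="Recognize" />
</RelativeLayout>
```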
As you can see, we have placed an EditText field. We have set android:inputType to textMultiLine to type the text in multiple lines. Below the text field, we have added a Button view with an ID of btnRecognize. This button will be used to start the speech-recognition activity when it is tapped or clicked on. Now, let's discuss the main activity file.
The MainActivity.java file
This file represents the main activity of the project. The code implementation of the file is as follows:
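The listing is omitted here; a sketch consistent with the explanation that follows (the view IDs and the literal request code are assumptions) could be:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.view.View;
import android.widget.EditText;
import java.util.ArrayList;
import java.util.Locale;

public class MainActivity extends Activity implements View.OnClickListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        findViewById(R.id.btnRecognize).setOnClickListener(this);
    }

    @Override
    public void onClick(View v) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // Required: tells the recognizer which language model to use
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // Optional extras that improve accuracy and user experience
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now...");
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.ENGLISH);
        startActivityForResult(intent, 1);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == 1 && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            EditText txtSpeech = (EditText) findViewById(R.id.txtSpeech);
            txtSpeech.setText(results.get(0));
        }
    }
}
```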
As usual, we override the onCreate() method and get our button reference from the layout set by the setContentView() method. We set the button's listener to this class; in the activity, we implement OnClickListener and override the onClick() method. In the onClick() method, we create an intent object and pass RecognizerIntent.ACTION_RECOGNIZE_SPEECH as the action string in the constructor. This constant tells Android that the intent is for speech-recognition purposes. Then, we add some extra objects to provide more information to Android about the intent and the speech recognition. The most important extra to add is RecognizerIntent.EXTRA_LANGUAGE_MODEL. This informs the recognizer about which speech model to use when recognizing speech. The recognizer uses this extra to fine-tune the results for more accuracy. This extra is required and must be provided when calling the speech-recognizer intent. We have passed the RecognizerIntent.LANGUAGE_MODEL_FREE_FORM model, which is based on free-form speech recognition. Next come some optional extras that help the recognizer produce more accurate results. We have added the RecognizerIntent.EXTRA_PROMPT extra and passed a string value in it. This notifies the user that speech recognition has started.
Next, we add the RecognizerIntent.EXTRA_MAX_RESULTS extra and set its value to 1. Speech recognition's accuracy always varies, so the recognizer produces several candidate results with different accuracies and possibly different meanings. Through this extra, we tell the recognizer how many results we are interested in. In our app, we have set it to 1, which means the recognizer will provide us with only one result. There is no guarantee that this result will be accurate enough; that's why it is recommended to pass a value greater than 1. For a simple case, you can pass a value up to 5. Remember, the greater the value you pass, the more time it will take to recognize.
Finally, we put in our last optional extra, the language. We pass Locale.ENGLISH as the value of the RecognizerIntent.EXTRA_LANGUAGE extra. This tells the recognizer the language of the speech, so the recognizer doesn't have to detect the language itself, which results in more accurate speech recognition.
The speech-recognition engine may not understand all the languages available in the Locale class. Also, not all devices necessarily support speech recognition.
After adding all the extra objects, our intent object is ready. We pass it in the startActivityForResult() method with requestCode set to 1. When this method is called, a standard voice-recognition dialog is shown with the prompt message that we provided. After we finish speaking, our parent activity's onActivityResult() method is called. We first check whether requestCode is 1, so that we can be sure that this is the result of our speech recognition. After that, we check resultCode to see whether the result was okay. Once both checks succeed, we get an array list of strings containing all the phrases recognized by the recognizer. We obtain this list by calling the getStringArrayListExtra() method and passing RecognizerIntent.EXTRA_RESULTS. This list is only returned when resultCode is okay; otherwise, we get a null value. With the speech-recognition work wrapped up, we can now set the text value to the result: we extract the EditText view from the layout and set the result as the value of the text field by calling the setText() method.
An active Internet connection is required to run speech recognition. The speech-recognition process is executed on Google's servers: the Android phone takes the voice input and sends it to Google's servers, where it is processed for recognition. After recognition, Google sends the results back to the Android phone, the phone informs the user about the results, and the cycle is complete.
If you run the project, you will see something similar to the following screenshots:
Speech recognition using intents
In the image, you can see that after clicking on the Recognize button, a standard voice-input dialog is shown. On speaking something, we return to our parent activity, and after recognizing the speech, the app prints the text in the text field.
Role of intents in text-to-speech conversion
In the previous section, we discussed how the Android system can recognize our speech and perform actions such as controlling the mobile phone via speech commands. We also developed a simple speech-to-text example using intents. This section is the opposite of the previous one: we will discuss how the Android system can convert our text into a natural voice narration. We can call it text-to-speech conversion. Android introduced the Text-To-Speech (TTS) engine in Version 1.6, API Level 4. We can use this API to produce speech from within our application, thus allowing our app to talk to our users. If we add speech recognition as well, it will be like talking with our application. Text-to-speech conversion requires preinstalled language packs, and due to the lack of storage space on mobile phones, a phone will not necessarily have any language packs installed. So, while creating any app that uses the text-to-speech engine, it is good practice to check whether the language packs are installed.
We can't perform text-to-speech conversion through intents alone; we can only use it through the text-to-speech engine, TTS. But there is a minor role for intents in text-to-speech conversion: intents are used to check whether the language packs are preinstalled. So, any app that uses text-to-speech will first have to use intents to check the language packs' installation status. That's the role of intents in text-to-speech conversion. Let's look at sample code for checking the language packs' installation state:
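A sketch of such a check (VAL_TTS_DATA is an app-defined request code, as in the text) might look like:

```java
Intent checkIntent = new Intent(Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, VAL_TTS_DATA);
```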
The first thing we do for text-to-speech conversion is check the language packs. In the code, we create an intent object and pass the Engine.ACTION_CHECK_TTS_DATA constant, which tells the system that the intent will check the text-to-speech (TTS) data and language packs. We then pass the intent to the startActivityForResult() method along with the VAL_TTS_DATA constant used as requestCode. Now, if the language packs are installed and everything is okay, the onActivityResult() method will report that the voice-data check has passed, and we can use text-to-speech conversion. So, let's see the code sample for the onActivityResult() method, as shown in the following code:
So, we first check the requestCode that we passed. Then, we compare resultCode to Engine.CHECK_VOICE_DATA_PASS. This constant tells us whether voice data is available. If we have the data available on our phone, we can do our text-to-speech conversion right away. Otherwise, we have to install the voice data first. You will be pleased to know that installing voice data is also very easy; it uses intents for this purpose. The following code snippet shows how to install voice data using intents:
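Based on the explanation that follows, the snippet presumably amounted to something like:

```java
Intent installLanguage = new Intent(Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installLanguage);
```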
We created an intent object and passed Engine.ACTION_INSTALL_TTS_DATA in the constructor. This constant will tell Android that the intent is for the installation of text-to-speech language packs' data. And then, we pass the intent into the startActivity() method to start installation. After the language pack's installation, we have to create an object of the TextToSpeech class and call its speak() method when we want to do some text-to-speech conversion. The following is the code implementation showing how to use the object of the TextToSpeech class in the onActivityResult() method:
```java
// tts is declared as a field of the activity (private TextToSpeech tts;)
// so that the anonymous listener below can reference it.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == VAL_TTS_DATA) {
        if (resultCode == Engine.CHECK_VOICE_DATA_PASS) {
            tts = new TextToSpeech(this, new OnInitListener() {
                public void onInit(int status) {
                    if (status == TextToSpeech.SUCCESS) {
                        tts.setLanguage(Locale.US);
                        tts.setSpeechRate(1.1f);
                        tts.speak("Hello, I am writing book for Packt",
                                TextToSpeech.QUEUE_ADD, null);
                    }
                }
            });
        } else {
            Intent installLanguage = new Intent(Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installLanguage);
        }
    }
}
```
As seen in the code, after the successful installation of the language data packs, we create an instance of TextToSpeech and pass an anonymous OnInitListener object. We implement the onInit() method, which applies the initial settings of the TextToSpeech object. If the status is a success, we set the language and speech rate, and finally call the speak() method. In this method, we pass a string of characters, and Android will read this text aloud.
Concluding the whole topic: the role of intents in text-to-speech conversion is checking for and installing voice-data packs. Intents don't contribute directly to the conversion itself; they only handle the initial setup.
With text-to-speech conversion, we have finished the discussions on media components. In media components, we discussed taking pictures, recording videos, speech recognition, and text-to-speech conversion. In the next section, we will discuss motion components and see how intents play a role in these components.
Motion components
Motion components in an Android phone include many different types of sensors that perform many different tasks and actions. In this section, we will discuss motion and position sensors such as accelerometer, geomagnetic sensor, orientation sensor, and proximity sensor. All these sensors play a role in the motion and position of the Android phone. We will discuss only those sensors that use intents to get triggered. We have only one such sensor that uses intents and that is the proximity sensor. Let's discuss it in the following section.
Intents and proximity alerts
Before learning about the role of intents in proximity alerts, we will discuss what proximity alerts are and how these can be useful in various applications.
What are proximity alerts?
The proximity sensor lets us determine how close the device is to an object. It is often useful when your application needs to react as the phone's screen moves towards or away from a specific object. For example, when we get an incoming call on an Android phone, placing the phone against the ear switches the screen off, and holding it back in the hands switches the screen on automatically. This application uses proximity alerts to detect the distance between the ear and the proximity sensor of the device. The following figure shows this in visual format:
Another example can be when our phone has been idle for a while and its screen is switched off, it will vibrate if we have some missed calls or give notifications hinting us to check our phone. This can also be done using proximity sensors.
Proximity sensors use proximity alerts that detect the distance between the phone's sensor and any object. These alerts let your application set triggers that are fired when a user moves within or beyond a set distance from a geographic location. We will not discuss all the details of using proximity alerts in this section; we will only cover some basic information and the role of intents in using proximity alerts. For example, we set a proximity alert for a given coverage area: we select a point in the form of longitude and latitude, a radius around that point in metres, and some expiry time for the alert. An alert will then fire if the device crosses that boundary, either moving from outside to within the radius or from inside the radius to beyond it.
Role of intents in proximity alerts
When proximity alerts are triggered, they fire intents. We will use a PendingIntent object to specify the intent to be fired. Let's see some sample code for the distance application that we discussed in the earlier section:
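The listing is omitted here; a sketch consistent with the description (the action string and the coordinate values are made-up samples) could be:

```java
// Inside an Activity; action name and sample values are assumptions
private static final String DISTANCE_PROXIMITY_ALERT =
        "com.example.DISTANCE_PROXIMITY_ALERT";

private void addDistanceAlert() {
    LocationManager locationManager =
            (LocationManager) getSystemService(Context.LOCATION_SERVICE);

    double latitude = 51.5074;   // sample point
    double longitude = -0.1278;
    float radius = 100f;         // metres
    long expiration = -1;        // -1 = the alert never expires

    Intent intent = new Intent(DISTANCE_PROXIMITY_ALERT);
    PendingIntent proximityIntent =
            PendingIntent.getBroadcast(this, 0, intent, 0);
    locationManager.addProximityAlert(
            latitude, longitude, radius, expiration, proximityIntent);
}
```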
In the preceding code, we implement the very first step to use proximity alerts in our app. First of all, we create a proximity alert, which can be done through PendingIntent. We define the name of the alert as DISTANCE_PROXIMITY_ALERT, and then get the location manager service by calling the getSystemService() method of the current activity in which we have written the code. We then set some sample values for latitude, longitude, and radius, and set the expiration to infinity. Remember that these values can be set to anything, depending on the type of application you are creating.
Now comes our most important part of creating the proximity alert. We create the intent, and we pass our own alert name in the constructor to create our own intent. Then, we create an object of PendingIntent by getting a broadcast intent using the getBroadcast() method. Finally, we are adding the proximity alert in our location manager service by calling the addProximityAlert() method.
This code snippet has only created the alert and set initial values for it. Now, assume that we have completely finished our distance app. Whenever our device crosses the boundary that we specified in the app, or gets inside it, LocationManager will detect that we have crossed the boundary, and it will fire an intent carrying an extra value of LocationManager.KEY_PROXIMITY_ENTERING. This is a Boolean value: if its value is true, we have entered the boundary, and if it is false, we have left it. To receive this intent, we will create a broadcast receiver and perform the action. The following code snippet shows a sample implementation of the receiver:
```java
public class ProximityAlertReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        Boolean isEntered = intent.getBooleanExtra(
                LocationManager.KEY_PROXIMITY_ENTERING, false);
        if (isEntered) {
            Toast.makeText(context, "Device has Entered!",
                    Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(context, "Device has Left!",
                    Toast.LENGTH_SHORT).show();
        }
    }
}
```
In the code, you can see that we get the extra value of LocationManager.KEY_PROXIMITY_ENTERING using the getBooleanExtra() method. We compare the value and display a toast accordingly. It is quite easy, as you can see. But, like all receivers, this receiver will not work until it is registered in AndroidManifest.xml or via Java code. The Java code for registering the receiver is as follows:
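The listing is omitted here; it presumably amounted to something like:

```java
IntentFilter filter = new IntentFilter(DISTANCE_PROXIMITY_ALERT);
registerReceiver(new ProximityAlertReceiver(), filter);
```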
There is nothing to explain here except that we are calling the registerReceiver() method of the Activity class.
In a nutshell, intents play a minor role in getting proximity alerts. Intents are only used to tell the Android OS about the type of proximity alert that has been added, when it is fired, and what information should be included in it so that the developers can use it in their apps.
Summary
In this article, we discussed the common mobile components found in almost all Android phones. These components include the Wi-Fi component, Bluetooth, cellular, the Global Positioning System, the geomagnetic field, motion sensors, position sensors, and environmental sensors. Then, we discussed the role of intents with these components. To explain that role in more detail, we used intents for Bluetooth communication, turning Bluetooth on/off, making a device discoverable, turning Wi-Fi on/off, and opening the Wi-Fi settings. We also saw how we can take pictures, record videos, and perform speech recognition and text-to-speech conversion via intents. In the end, we saw how we can use the proximity sensor through intents.
About the Author :
Muhammad Usama bin Aftab.
Wajahat Karim
```cpp
#include <iostream>
#include <string>
using namespace std;

int main()
{
    cout << "Hello everyone and welcome to this game.\n";
    cout << "In this game the aim is to attack the wolf with what you think is the best attack, goodluck!\n";
    int healthme = 500;
    int healthit = 500;
    while (healthme > 0 && healthit > 0)
    {
        cout << "Your Health is " << healthme << " and the wolfs health is " << healthit << "\n";
        cout << "So, what attack do you want to hit the wolf with a weapon or with magic?";
        string attack;
        cin >> attack;
        if (attack.find("weapon") != std::string::npos)
        {
            cout << "What part of the wolf do you want to attack, the body or the head?\n";
            string bodypart;
            cin >> bodypart;
            if (bodypart.find("body") != std::string::npos)
            {
                healthit -= getdamage(50, 10);
                cout << "You hit the wolf!\n";
            }
            else if (bodypart.find("head") != std::string::npos)
            {
                healthit -= getdamage(100, 20);
                healthme -= getdamage(20, 5);
                cout << "It was a good hit, but it bites back!\n";
            }
            else
                cout << "you missed! (spelt it wrong) \n";
        }
        else if (attack.find("magic") != std::string::npos)
        {
            cout << "what element would you like to attack with? fire lightning, ice, water, or earth?\n";
            string magic;
            cin >> magic;
            if (magic == "fire")
            {
                healthit -= getdamage(50, 10);
                cout << "It does good damage!\n";
            }
            else if (magic == "lightning")
            {
                healthit -= getdamage(100, 10);
                cout << "It does amazing damage!\n";
            }
            else if (magic == "earth")
            {
                healthit -= getdamage(70, 10);
                cout << "It hits hard!\n";
            }
            else if (magic == "ice")
            {
                healthit -= getdamage(30, 5);
                cout << "It doesn't seem to hurt much!\n";
            }
            else if (magic == "water")
            {
                healthit -= getdamage(10, 2);
                cout << "It seems unaffected!\n";
            }
            else
                cout << "You missed! (spelt it wrong!)\n";
        }
        else
            cout << "you took to long! (spelt wrong!)";

        if (healthit >= 0)
        {
            if (healthit <= 100)
            {
                cout << "He is Angry now and doing more damage!\n";
                healthme -= getdamage(75, 10);
            }
            else
            {
                cout << "The wolf goes out to attack!\n";
                healthme -= getdamage(50, 10);
            }
        }
    }
    if (healthme <= 0)
        cout << "Your Dead... Game over i guess?";
    else
        cout << "The wolf is dead! You Win!";
    return 0;
}

// Returns a random value within DamageRange of Damagebase
int getdamage(int damagebase, int damagerange)
{
    // if DamageBase is 50, and DamageRange is 10, DamageBase will be 40 here
    int damagedealt = damagebase - damagerange;
    // if DamageRange is 10, then this will add anywhere from 0 to 20 damage
    damagedealt += (rand() % (damagerange * 2));
    return damagedealt;
}
```
When compiling the program, I get the error:
'getdamage' identifier not found
I can't figure out what's wrong; it's probably something stupid, but thanks for any help anyway.
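For anyone hitting the same error: in C++ a function must be declared before it is called, so a one-line forward declaration above main() (or moving the whole definition above it) resolves the "identifier not found" error. A minimal sketch of the idea:

```cpp
#include <cstdlib>

// Forward declaration: main() can now call the function even though
// its definition appears later in the translation unit.
int getdamage(int damagebase, int damagerange);

int getdamage(int damagebase, int damagerange) {
    // Same logic as the original: base - range, plus 0..(2*range - 1)
    int damagedealt = damagebase - damagerange;
    damagedealt += (rand() % (damagerange * 2));
    return damagedealt;
}
```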
🏄♂️ Emberistas! 🐹
Ember Engines acceptance testing guides 📝, check out the new EmberMap video on Tracked Properties 👣, polyfills for in-element and named blocks 🚀, setting up Coveralls for your Ember addons 💪, Ember in COVID-19 research 🔬📖🐹, Global Accessibility Awareness Day Ember blog post 📖🐹, "My Experience with Ember.js" video series 🎥, and last, but not least, read the prettiest RFC in Emberland 💅!
Ember Engines acceptance testing guides & Octane 📝
Michael Villander (@villander) and team have fleshed out docs about the acceptance test story in the Ember Engines ecosystem, touching on some bleeding-edge cases. Also, the entire doc examples were migrated to Ember Octane! Visit the official site to see Ember Engines' new acceptance testing guides. Many thanks to Dan Gebhardt (@dgeb), Gabriel Csapo (@gabrielcsapo) and Thomas Gossmann (@gossi) for their reviews and tips!
Using Ember Engines? Chat about it in the #ember-engines channel on the Ember Discord.
EmberMap: Tracked Properties 👣
A new EmberMap video covers Tracked Properties – a new way to access and mutate state in Ember with vanilla JavaScript.
While we have been able to use native ES5 getters for accessing properties (`this.isOpen`), we still had to rely on calling `this.set` to mutate state. Tracked properties allow us to drop using `this.set` and instead use native setters (`this.isOpen = true;`) by annotating the properties we want to track.
Classic syntax:
```js
import { tracked } from "@glimmer/tracking";

export default Component.extend({
  isOpen: tracked({ value: false }),
});
```
Octane syntax:
```js
import { tracked } from '@glimmer/tracking';

class Person {
  @tracked firstName;
  @tracked lastName;

  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
}
```
Tracked properties also allow us to use native JavaScript getters as a replacement for computed properties by having dependent keys tracked. So try it out today in your app and vastly simplify the programming model by moving closer to native JavaScript language constructs.
Polyfills for in-element and named blocks 🚀
Right now, in canary, you can get a sneak peek at the public API for in-element and yieldable named blocks.
What is `{{in-element}}`? Sometimes developers need to render content outside of the regular HTML flow. This concept is also called "portals". Components like dropdowns and modals use this technique to render stuff close to the root of the page, so as to bypass CSS overflow rules. (Some apps that are embedded into static pages even use this technique to update parts of the page outside the app itself.)
Since it was a common use case, Glimmer baked `{{-in-element}}` into the VM, but as part of the private (or intimate) API. With the passing of the RFC, it's going public, perhaps in Ember 3.20. So if you've been using `{{-in-element}}`, you should switch to the `{{in-element}}` polyfill instead, like Krystan HuffMenne (@gitKrystan) did for these couple of addons: ember-cli-head and ember-maybe-in-element.
The yieldable named blocks RFC makes it possible to pass one block or more to a component for customization. Check out the new ember-named-blocks-polyfill to take advantage of this feature now!
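For a quick feel of the syntax these polyfills unlock (the component and property names here are made up):

```handlebars
{{! in-element renders into an existing DOM element }}
{{#in-element this.destinationElement}}
  <p>Rendered near the root of the page, e.g. for a modal.</p>
{{/in-element}}

{{! named blocks let a component accept several blocks }}
<MyCard>
  <:title>Hello</:title>
  <:body>World</:body>
</MyCard>
```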
Setting up Coveralls for your Ember addons 💪
Rajasegar Chandran (@rajasegar) blogs about setting up Coveralls for your Ember addons. He explains how to set up ember-cli-code-coverage and Coveralls for your repositories. Coveralls helps you deliver code confidently by showing which parts of your code aren’t covered by your test suite. You can also learn how to use these techniques and make them part of your workflow using GitHub Actions.
Ember in COVID-19 research 🔬📖🐹
Johns Hopkins University (JHU) has announced new COVID-19 related features available in their Public Access Submission System (PASS).
PASS (which is built using Ember.js on the frontend!), is a platform to assist researchers 🔬🧪📖 in complying with the access policies of their funders and institutions and is created by the Sheridan Libraries at JHU, in collaboration with the Harvard University Office for Scholarly Communication and the MIT Libraries.
As a recent article at JHU's news center the Hub has pointed out, "Through modifications to the Public Access Submission System (PASS), faculty or their proxies can now submit articles flagged specifically for [the] JHU COVID-19 collection."
It is so encouraging to see yet another example of Ember being used in applications that support important research for public good. 😍🐹
Global Accessibility Awareness Day Ember blog post 📖🐹
Did you know that May 21st was Global Accessibility Awareness Day? Well now you know 😃!
You may not have seen the recent blog post commemorating the occasion and discussing accessibility in Ember. 🎉 The post includes lots of great information about how the community, the Ember core team and Ember's A11y Strike Team are working to support an accessible web. 💙💚💛💜
You should head on over to the blog post for more details on what you can do to get involved or how to make your Ember applications more accessible.
If you have accessibility related questions you can head on over to the community Discord chat in the #topic-a11y channel, and get answers and help right away.
Or, if you're interested in getting involved in Ember's A11y Strike Team, checkout the #st-a11y channel on Discord, and let us know! The meetings are also open to anyone who wants to attend.
Big shout out to Mel Sumner (@MelSumner) for putting all that valuable accessibility-related information together!
"My Experience with Ember.js" video series 🎥
Cal Woolgar (@calWoolgar) has kicked off a new video series "My Experience with Ember.js", where he breaks down the basics of Ember.
The first video What is Ember.js? explains the Handlebars templating language and how it separates your JavaScript from HTML. Cal also touches on ember-cli, and how it enables you to create a new application easily.
Cal aims to make his videos short and sweet 🍭 so that someone learning can reference something in bite-sized pieces. Look forward to what's next from Cal! 👏
The prettiest Ember RFC 💅
By default, ember-cli already provides developers with plenty of tools and settings for linting and formatting of app code via eslint and ember-template-lint. But what if you could come to an agreement on some of the most significant bike-shedding disputes in your team once and for all, including discussions about tabs vs. spaces or the need for the newline at the end of a file?
In the Request for Comments (RFC) we get to have a peek into a possible, even prettier future for Ember codebases already! The proposal suggests adding Prettier - a multi-language, opinionated code formatter - to Ember apps generated from ember-cli's app and addon blueprints.
Want to learn more about how this could help you and your team to collaborate on your code even better? Then be sure to give the original RFC a read soon, as it entered the Final Comment Period (FCP) recently. And don't forget to post your questions and suggestions in the comments below the RFC PR, pretty please!
Contributors' corner 👏
This week we'd like to thank @rwjblue, @xg-wang, @chancancode, @allthesignals, @pzuraq, @alexeykostevich, @sandstrom, @ansmonjol, @locks, @fivetanley and @CodingItWrong.
Matthew Roderick, Chris Ng, Amy Lam, Abhilash LR, Jared Galanis, Jessica Jordan and the Learning Team
This is a C program to check whether a given string is a palindrome.
This program accepts a string and checks whether the given string is a palindrome.
1. Take a string as input and store it in the array.
2. Reverse the string and store it in another array.
3. Compare both the arrays.
Here is the source code of the C program to check whether a given string is a palindrome. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
 * C program to read a string and check if it's a palindrome, without
 * using library functions. Display the result.
 */
#include <stdio.h>

int main(void)
{
    char string[25], reverse_string[25] = {'\0'};
    int i, length = 0, flag = 0;

    printf("Enter a string\n");
    if (fgets(string, sizeof(string), stdin) == NULL)
        return 1;

    /* keep going through each character of the string till its end,
       stripping the trailing newline that fgets keeps */
    for (i = 0; string[i] != '\0' && string[i] != '\n'; i++)
        length++;
    string[length] = '\0';

    /* build the reverse of the input string */
    for (i = length - 1; i >= 0; i--)
        reverse_string[length - i - 1] = string[i];

    /*
     * Compare the input string and its reverse. If both are equal
     * then the input string is a palindrome.
     */
    flag = 1;
    for (i = 0; i < length; i++) {
        if (reverse_string[i] != string[i]) {
            flag = 0;
            break;
        }
    }

    if (flag == 1)
        printf("%s is a palindrome\n", string);
    else
        printf("%s is not a palindrome\n", string);

    return 0;
}

$ ./palindrome
Enter a string
sanfoundry
sanfoundry is not a palindrome

$ ./palindrome
Enter a string
malayalam
malayalam is a palindrome
When I create a class, I often define a constructor to ensure that the object is initially populated with all the information it needs to operate properly. As an example, one of my databound objects can take a database connection or both a database connection and an object representing the unique ID for the record. Each of these methods for instantiating the object is a separate constructor method. This tip teaches you how to overload the constructor of a class, which is the method called when you instantiate an object.
To see how this works, take a simple class (DataObject) that has three public variables on it:
public class DataObject
{
    public string Value1;
    public string Value2;
    public string Value3;

    public DataObject(string value1, string value2, string value3)
    {
        Value1 = value1;
        Value2 = value2;
        Value3 = value3;
    }

    public DataObject(DataRow inputRow)
    {
        Value1 = inputRow["value1"].ToString();
        Value2 = inputRow["value2"].ToString();
        Value3 = inputRow["value3"].ToString();
    }
}
One of the class’s constructors will accept values for each of the three variables, and its other constructor will accept a DataRow that has values for each of the three variables. The code assumes that you have a database table somewhere that has fields named value1, value2, and value3, and that all three fields are strings.
To call this function, you can use one of the following methods, which will end up populating the DataObject class with the three values:
DataObject obj1 = new DataObject("test1", "test2", "test3");

DataTable dt = <code to retrieve a data table>
DataObject obj2 = new DataObject(dt.Rows[0]);
You also can use this method to overload any method on your class. As long as the parameter list is different, you can add an additional overload. You can’t, for instance, have two overloads that each have three string parameters. You can have one overload that accepts one string and another that accepts two strings.
On Thu, 19 Jul 2007 13:25:07 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:

>
> On Thu, 19 Jul 2007, Linus Torvalds wrote:
> >
> > A better patch should be the appended. Does that work for you too?
>
> Btw, I already committed this as obvious.
>
> I did the same for the SLAB __do_kmalloc() thing. Let's hope that
> that was the extent of the damage.
>
>		Linus

Hmmmm.. The issue is really in krealloc which can be called with a NULL
parameter (a special case). However, krealloc should not call ksize
with NULL.

The merged patch above makes ksize(NULL) return 0. So we are
returning zero size for an object that we have not allocated.
Better fail if someone tries that.

The __do_kmalloc issue looks like a hunk that was somehow dropped.

IMHO: The right fix for the ksize issue would be the following patch:

Index: linux-2.6/mm/util.c
===================================================================
--- linux-2.6.orig/mm/util.c	2007-07-23 13:29:42.000000000 -0700
+++ linux-2.6/mm/util.c	2007-07-23 13:31:28.000000000 -0700
@@ -88,7 +88,11 @@ void *krealloc(const void *p, size_t new
 		return ZERO_SIZE_PTR;
 	}
 
-	ks = ksize(p);
+	if (p)
+		ks = ksize(p);
+	else
+		ks = 0;
+
 	if (ks >= new_size)
 		return (void *)p;
 
java.lang.Object
  org.zkoss.zk.ui.sys.Attributes
public class Attributes
Attributes or library properties to customize the behaviors of ZK, such as page rendering, fileupload and so on.
public static final java.lang.String CLIENT_ROD
Default: true.
Applicable: ZK EE
public static final java.lang.String PAGE_REDRAW_CONTROL
A control of how a page is redrawn (PageCtrl.redraw(java.io.Writer)). There are three different values: desktop, page, and complete.

Default: null (means auto). In other words, desktop is assumed if this is the top-level page and not being included (and other conditions). Otherwise, it assumes page.
Application developers rarely need to set this attribute, unless ZK Loader cannot decide which control to use correctly.
This control can also be specified as a request parameter called zk.redrawCtrl. For example, if you are using other technology, say jQuery, and want to load a ZUL page dynamically, it can be specified as shown below:

$("#pos").load("frag.zul?zk.redrawCtrl=page");
If you prefer to draw the desktop with the page, you can set the value to desktop. By drawing the desktop, it means HTML and BODY tags will be generated, too.

If you prefer to draw the page only (such as being included), you can set the value to page.

If the page already contains everything that the client expects, such as the HTML and BODY tags, you can set the value to complete.

The difference between page and complete is a bit subtle. They don't generate HTML and BODY tags. However, page generates a DIV to represent a page, while complete generates only the root components.

Thus, complete is usually used for the situation where HTML and BODY are being generated by other technology, while page is for an included ZK page.
Note: if Page.isComplete() is true, it has the same effect as setting PAGE_REDRAW_CONTROL to complete.
See Also: ExecutionsCtrl.getPageRedrawControl(org.zkoss.zk.ui.Execution), Constant Field Values
public static final java.lang.String PAGE_RENDERER
The PageRenderer used to render a page. Default: null (means auto); it is decided by LanguageDefinition.getPageRenderer().
public static final java.lang.String NO_CACHE
This attribute is set if ZK loader sets Cache-Control=no-cache. However, if a ZUML page is included by another servlet (such as JSP and DSP), this attribute won't be set. If you set Cache-Control manually, you might also set this attribute to save the use of memory.
request.setAttribute(Attributes.NO_CACHE, Boolean.TRUE);
Since 5.0.8, if the zk.redrawCtrl parameter is specified with page (as described in PAGE_REDRAW_CONTROL), it implies NO_CACHE.
public static final java.lang.String RENEW_NATIVE_SESSION
A typical case is so-called Session Fixation Protection.
hsess.setAttribute(Attributes.RENEW_NATIVE_SESSION, Boolean.TRUE);
hsess.invalidate();
hsess.removeAttribute(Attributes.RENEW_NATIVE_SESSION);
public static final java.lang.String GAE_FIX
public static final java.lang.String UUID_RECYCLE_DISABLED
Default: false (i.e., not disabled).
public static final java.lang.String PORTLET_RENDER_PATCH_CLASS
Default: null (means no need of patch).
If specified, the class must implement PageRenderPatch. An example implementation is JQueryRenderPatch, which delays the rendering of a ZK portlet to avoid the conflicts when using IE.
public static final java.lang.String ACTIVATE_RETRY_DELAY
Default: 120000 (unit: milliseconds)
public static final java.lang.String INJECT_URI_PREFIX
For example, ThemeProvider.Aide is based on this prefix.

Notice that this prefix is currently supported only by the WCS files (WcsExtendlet).
public static final java.lang.String PAGE_RENDERING
public static final java.lang.String STUB_NATIVE
By default, the native component will be stub-ized, i.e., replaced with a stateless component called StubComponent, such that the memory footprint will be minimized. To stub-ize non-native components, please use Component.setStubonly(java.lang.String).
Default: true. Though rarely needed, you can disable the stubbing by setting this attribute to false. For example, if you have a component that has native children and you'd like to detach it and re-attach it later: since the server does not maintain the states, they cannot be restored when attached back.
It shall be set to a component's attribute, and it affects all descendant components unless it was set explicitly.
Available in ZK EE only.
public static final java.lang.String ZK_SESSION
public Attributes() | http://www.zkoss.org/javadoc/latest/zk/org/zkoss/zk/ui/sys/Attributes.html | crawl-003 | refinedweb | 684 | 52.66 |
JavaScript Object Notation, or JSON, is meant as a lightweight alternative to XML when XML is just a bit heavyweight for what you are trying to achieve. Although JSON is not extensible, as it makes use of a fixed set of data types, it can be used to textually represent complex objects. JSON is a perfect data-interchange format when using technologies such as AJAX as your server side objects can be converted to JSON, sent to the client, evaluated in a client script and then manipulated to build dynamic adverts, menus etc.
JSON comprises two structures that all programmers are familiar with, namely:

A collection of name/value pairs — an object, analogous to an IDictionary<string, JsonType>.
An ordered list of values — an array, analogous to an IList<JsonType>.
JSON also defines four primitive types: strings, numbers, booleans and null.
I don't want to bore you with the details, but links to the JSON specification can be found in the References section.
All JSON types in the NetServ.Net.Json namespace implement the IJsonType interface. This interface allows the type to be identified by its JsonTypeCode and also to have it write its contents to an IJsonWriter. Implementations of the IJsonType interface are JsonString, JsonNumber, JsonBoolean, JsonArray, JsonObject and JsonNull. The interface is defined as follows:
using System;

namespace NetServ.Net.Json
{
    /// <summary>
    /// Defines a JavaScript Object Notation data type.
    /// </summary>
    public interface IJsonType
    {
        /// <summary>
        /// Writes the contents of the Json type using the specified
        /// <see cref="NetServ.Net.Json.IJsonWriter"/>.
        /// </summary>
        /// <param name="writer">The Json writer.</param>
        void Write(IJsonWriter writer);

        /// <summary>
        /// Gets the <see cref="NetServ.Net.Json.JsonTypeCode"/> of the type.
        /// </summary>
        JsonTypeCode JsonTypeCode { get; }
    }

    /// <summary>
    /// Defines the different types of Json structures and primitives.
    /// </summary>
    [Serializable()]
    public enum JsonTypeCode
    {
        /// <summary>
        /// A unicode encoded string.
        /// </summary>
        String,
        /// <summary>
        /// A number.
        /// </summary>
        Number,
        /// <summary>
        /// A boolean value represented by literal "true" and "false".
        /// </summary>
        Boolean,
        /// <summary>
        /// A null value.
        /// </summary>
        Null,
        /// <summary>
        /// A structured object containing zero or more name/value pairs,
        /// delimited by curly brackets.
        /// </summary>
        Object,
        /// <summary>
        /// An unordered collection of values, delimited by square brackets.
        /// </summary>
        Array
    }
}
The IJsonWriter interface defines a JSON writer. It defines the basic methods needed to convert JSON types into text. An implementation of the interface is the JsonWriter class, which writes the JSON text to an underlying TextWriter.
using System;

namespace NetServ.Net.Json
{
    /// <summary>
    /// Defines a JavaScript Object Notation writer.
    /// </summary>
    public interface IJsonWriter
    {
        /// <summary>
        /// Writes the start of an array to the underlying data stream.
        /// </summary>
        void WriteBeginArray();

        /// <summary>
        /// Writes the end of an array to the underlying data stream.
        /// </summary>
        void WriteEndArray();

        /// <summary>
        /// Writes the start of an object to the underlying data stream.
        /// </summary>
        void WriteBeginObject();

        /// <summary>
        /// Writes the end of an object to the underlying data stream.
        /// </summary>
        void WriteEndObject();

        /// <summary>
        /// Writes a object property name to the underlying data stream.
        /// </summary>
        /// <param name="value">The property name.</param>
        void WriteName(string value);

        /// <summary>
        /// Writes a raw string value to the underlying data stream.
        /// </summary>
        /// <param name="value">The string to write.</param>
        void WriteValue(string value);
    }
}
Here is a short example of how to construct a collection of JSON types and write the result to the console. Please note that a more complex example, including multiple nested objects and arrays, is included in the demo project download.
using System;
using NetServ.Net.Json;

namespace JsonTest
{
    public class Program
    {
        [STAThread()]
        public static void Main(string[] args)
        {
            JsonObject order = new JsonObject();
            JsonObject addr = new JsonObject();
            JsonArray items = new JsonArray();
            JsonObject item;

            // Add some items into the array.
            item = new JsonObject();
            item.Add("ID", new JsonString("Chicken & Chips"));
            item.Add("Qty", new JsonNumber(2));
            item.Add("Price", new JsonNumber(1.50D));
            item.Add("Req", new JsonString("Plenty of salad."));
            items.Add(item);

            // The less verbose way.
            item = new JsonObject();
            item.Add("ID", "Pizza");
            item.Add("Qty", 1);
            item.Add("Price", 9.60D);
            item.Add("Size", "16\"");
            item.Add("Req", "");
            items.Add(item);

            // Add the address information.
            addr.Add("Street", "16 Bogus Street");
            addr.Add("City", "Bogustown");
            addr.Add("County", "Boguscounty");
            addr.Add("Postcode", "B0GU5");

            // Add the items and address into the order.
            order.Add("Items", items);
            order.Add("Address", addr);
            order.Add("Name", "Andrew Kernahan");
            order.Add("Tel.", "55378008");
            order.Add("Delivery", true);
            order.Add("Total", 12.60D);

            using(JsonWriter writer = new JsonWriter())
            {
                // Get the container to write itself and all its
                // contained types to the writer.
                order.Write(writer);
                // Print the result.
                Console.WriteLine(writer.ToString());
            }
        }
    }
}
Using the JsonWriter will produce the most compact output. An alternative IJsonWriter is the IndentedJsonWriter, which will produce indented output that can be easily read by humans should you need it to be. The above example will produce the following output when the IndentedJsonWriter is used.
{
    "Items": [
        {
            "ID": "Chicken & Chips",
            "Qty": 2,
            "Price": 1.5,
            "Req": "Plenty of salad."
        },
        {
            "ID": "Pizza",
            "Qty": 1,
            "Price": 9.6,
            "Size": "16\"",
            "Req": ""
        }
    ],
    "Address": {
        "Street": "16 Bogus Street",
        "City": "Bogustown",
        "County": "Boguscounty",
        "Postcode": "B0GU5"
    },
    "Name": "Andrew Kernahan",
    "Tel.": "55378008",
    "Delivery": true,
    "Total": 12.6
}
The JsonParser class is used to build JSON types read from a TextReader. Below is a simple example of how to parse and extract the JSON types. Please note that a more complex example can be found in the demo project download.
using System;
using System.IO;
using NetServ.Net.Json;

namespace JsonTest
{
    public class Program
    {
        [STAThread()]
        public static void Main(string[] args)
        {
            StringReader rdr = new StringReader(
                "{\"Name\":\"Andy\",\"Age\":23,\"Hungry?\":true}");
            // The parser takes any TextReader derived class as its source.
            JsonParser parser = new JsonParser(rdr, true);
            JsonObject obj = (JsonObject)parser.ParseObject();

            // Get the information from the object.
            JsonString name = (JsonString)obj["Name"];
            JsonNumber age = (JsonNumber)obj["Age"];
            JsonBoolean hungry = (JsonBoolean)obj["Hungry?"];

            // Print out the information.
            Console.WriteLine("Name:\t\t{0}", name.Value);
            Console.WriteLine("Age:\t\t{0}", age.Value.ToString());
            Console.WriteLine("Hungry?:\t{0}", hungry.Value.ToString());
        }
    }
}
If you are using JSON as an AJAX data-interchange format, using the JSON text within a client side script couldn't be easier as the example below shows.
function updateAdvertsCallback(result, context) {
    // result = [
    //   {"AdvertId":"left","InnerHTML":"Some text","ImageSrc":"animage"},
    //   {"AdvertId":"right","InnerHTML":"Some more text",
    //    "ImageSrc":"anotherimage"}
    // ]
    // Use the JavaScript compiler to parse the text and generate
    // the objects.
    var adverts = eval("(" + result + ")");
    // Then access the members as you would normally.
    for(var i = 0; i < adverts.length; ++i) {
        document.getElementById("advertHTML_" + adverts[i].AdvertId).innerHTML =
            adverts[i].InnerHTML;
        document.getElementById("advertImg_" + adverts[i].AdvertId).src =
            adverts[i].ImageSrc;
    }
}
You should be aware that JavaScript's eval function can be used to execute client side code when evaluated and should only be used when you trust the source of the JSON. The alternative is to use a parser which only recognises valid JSON input and ignores everything else. A good JavaScript JSON parser can be found here.
Most of the JSON types in the NetServ.Net.Json namespace make use of C#'s cool ability to overload operators. This allows you to write really concise code which expresses exactly what you are trying to achieve.
// JsonBoolean jb = JsonBoolean.Get(true);
JsonBoolean jb = true;

// JsonNumber jn = new JsonNumber(42);
JsonNumber jn = 42;

// JsonString js = new JsonString("Hello World!");
JsonString js = "Hello World!";
Also, the NetServ.Net.Json library is CLS compliant, so it can be used in any .NET project developed in any CLS compliant language.
Included in the download are the unit tests compiled for the library. The tests were written in conjunction with Marc Clifton's Advanced Unit Testing tool. An overview of the two hundred tests can be seen below.
I admit that this is a bit of a lightweight article (please go easy as it's my first!), but hopefully you will appreciate how useful the JSON format is when XML is just too much for your needs.
Multi-Class Text Classification Model Comparison and Selection
This is what we are going to do today: use everything that we have presented about text classification in the previous articles (and more) and comparing between the text classification models we trained in order to choose the most accurate one for our problem.
By Susan Li, Sr. Data Scientist
Photo credit: Pixabay
When working on a supervised machine learning problem with a given data set, we try different algorithms and techniques to search for models to produce general hypotheses, which then make the most accurate predictions possible about future instances. The same principles apply to text (or document) classification where there are many models can be used to train a text classifier. The answer to the question “What machine learning model should I use?” is always “It depends.” Even the most experienced data scientists can’t tell which algorithm will perform best before experimenting them.
This is what we are going to do today: use everything that we have presented about text classification in the previous articles (and more) and comparing between the text classification models we trained in order to choose the most accurate one for our problem.
The Data
We are using a relatively large data set of Stack Overflow questions and tags. The data is available in Google BigQuery, it is also publicly available at this Cloud Storage URL:.
Exploring the Data
10276752
We have over 10 million words in the data.
my_tags = ['java','html','asp.net','c#','ruby-on-rails','jquery','mysql','php','ios',
           'javascript','python','c','css','android','iphone','sql','objective-c',
           'c++','angularjs','.net']
plt.figure(figsize=(10,4))
df.tags.value_counts().plot(kind='bar');
The classes are very well balanced.
We want to have a look at a few post and tag pairs.
def print_plot(index):
    example = df[df.index == index][['post', 'tags']].values[0]
    if len(example) > 0:
        print(example[0])
        print('Tag:', example[1])

print_plot(10)
print_plot(30)
As you can see, the texts need to be cleaned up.
Text Pre-processing
The text cleaning techniques we have seen so far work very well in practice. Depending on the kind of texts you may encounter, it may be relevant to include more complex text cleaning steps. But keep in mind that the more steps we add, the longer the text cleaning will take.
For this particular data set, our text cleaning step includes HTML decoding, removing stop words, lower-casing the text, removing punctuation, removing bad characters, and so on.
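The article's own cleaning code is not reproduced here; as a stand-alone illustration, a minimal cleaning function along these lines could look like the sketch below (the regular expressions and the tiny stop-word list are assumptions for the example, not the article's exact code — the article uses NLTK's full stop-word list):

```python
import html
import re

# A small stop-word set for illustration only; the article uses NLTK's full list.
STOPWORDS = {'a', 'an', 'the', 'is', 'are', 'to', 'of', 'and', 'in', 'how', 'do', 'i'}

def clean_text(text):
    """Decode HTML, strip tags, lower-case, drop bad characters and stop words."""
    text = html.unescape(text)                      # HTML decoding
    text = re.sub(r'<[^>]+>', ' ', text)            # remove leftover HTML tags
    text = text.lower()                             # lower-case the text
    text = re.sub(r'[/(){}\[\]\|@,;]', ' ', text)   # replace symbol characters by spaces
    text = re.sub(r'[^0-9a-z #+_]', '', text)       # keep only characters useful for tags
    return ' '.join(w for w in text.split() if w not in STOPWORDS)

print(clean_text('<p>How do I iterate over a &lt;list&gt; in Python?</p>'))
```

Running the cleaner over `df['post']` with `apply` would then produce the cleaned posts shown next.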
Now we can have a look at a cleaned post:
Way better!
df['post'].apply(lambda x: len(x.split(' '))).sum()
3421180
After text cleaning and removing stop words, we have only over 3 million words to work with!
After splitting the data set, the next step is feature engineering. We will convert our text documents to a matrix of token counts (CountVectorizer), then transform the count matrix to a normalized tf-idf representation (tf-idf transformer). After that, we train several classifiers from the Scikit-Learn library.
X = df.post
y = df.tags
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 42)
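As a quick stand-alone illustration of the two-step count-then-tf-idf transformation described above (the three toy documents are made up for the example):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

# Three made-up documents standing in for cleaned Stack Overflow posts.
docs = ['python list of lists', 'css center a div', 'python dict to list']

counts = CountVectorizer().fit_transform(docs)     # matrix of token counts
tfidf = TfidfTransformer().fit_transform(counts)   # normalized tf-idf representation

print(counts.shape, tfidf.shape)
```

The tf-idf transform keeps the shape of the count matrix; it only re-weights and normalizes each row.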
Naive Bayes Classifier for Multinomial Models
After we have our features, we can train a classifier to try to predict the tag of a post. We will start with a Naive Bayes classifier, which provides a nice baseline for this task.
scikit-learn includes several variants of this classifier; the one most suitable for text is the multinomial variant.
To make the vectorizer => transformer => classifier easier to work with, we will use the Pipeline class in Scikit-Learn that behaves like a compound classifier.
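The article's pipeline snippet is not preserved in the text above; a minimal sketch of such a compound classifier, fitted here on toy stand-ins for X_train/y_train, could look like this:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-ins for the article's X_train/y_train split.
X_train = ['how to center a div', 'css flexbox layout',
           'read a file in python', 'python list comprehension']
y_train = ['css', 'css', 'python', 'python']

nb = Pipeline([
    ('vect', CountVectorizer()),      # bag-of-words token counts
    ('tfidf', TfidfTransformer()),    # normalized tf-idf weighting
    ('clf', MultinomialNB()),         # the multinomial Naive Bayes classifier
])
nb.fit(X_train, y_train)
print(nb.predict(['python file handling']))
```

In the article the pipeline is fitted on the real X_train/y_train and scored against y_test.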
We achieved 74% accuracy.
Linear Support Vector Machine
Linear Support Vector Machine is widely regarded as one of the best text classification algorithms.
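Again, the original snippet isn't shown above. A common way to train a linear SVM on text in scikit-learn is SGDClassifier with hinge loss inside the same pipeline; the hyperparameters below are illustrative assumptions, not the article's exact values:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier

# Toy stand-ins for the article's X_train/y_train split.
X_train = ['how to center a div', 'css flexbox layout',
           'read a file in python', 'python list comprehension']
y_train = ['css', 'css', 'python', 'python']

sgd = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    # loss='hinge' makes SGDClassifier train a linear SVM.
    ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3,
                          random_state=42, max_iter=1000, tol=1e-3)),
])
sgd.fit(X_train, y_train)
print(sgd.predict(['python file handling']))
```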
We achieve a higher accuracy score of 79% which is 5% improvement over Naive Bayes.
Logistic Regression
Logistic regression is a simple and easy-to-understand classification algorithm, and it can be easily generalized to multiple classes.
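A corresponding sketch, swapping in LogisticRegression as the final estimator (again with toy data, and with settings that are assumptions for the example):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the article's X_train/y_train split.
X_train = ['how to center a div', 'css flexbox layout',
           'read a file in python', 'python list comprehension']
y_train = ['css', 'css', 'python', 'python']

logreg = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', LogisticRegression(max_iter=1000)),  # multinomial handling is automatic
])
logreg.fit(X_train, y_train)
print(logreg.predict(['python file handling']))
```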
We achieve an accuracy score of 78% which is 4% higher than Naive Bayes and 1% lower than SVM.
As you can see, following some very basic steps and using a simple linear model, we were able to reach as high as an 79% accuracy on this multi-class text classification data set.
Using the same data set, we are going to try some advanced techniques such as word embedding and neural networks.
Now, let’s try some more complex features than just simply counting words.
FULL PRODUCT VERSION :
java version "1.6.0"
Java(TM) SE Runtime Environment (build 1.6.0-b105)
Java HotSpot(TM) Client VM (build 1.6.0-b105, mixed mode, sharing)

ADDITIONAL OS VERSION INFORMATION :
Microsoft Windows XP [Version 5.1.2600]

A DESCRIPTION OF THE PROBLEM :
The Windows file system compares file names by converting characters to upper case (using Unicode rules, not locale sensitive). On Windows the File.equals and File.compareTo methods compare file names by first converting to upper case and then converting to lower case. This is the same as the String.CASE_INSENSITIVE_COMPARATOR. Unfortunately the extra conversion to lower case produces wrong results for some characters. One example of such a character pair are the two upper case I letters (one without a dot and the Turkish one that has a dot). These are both lower cased to a lower case i with a dot. The attached code demonstrates that Windows considers these to be different files.

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Run the attached code in a directory to which you have write access (it creates two small files).

EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
String compare: 0
File compare: a negative number
File equals: false
Length file a: 8
Length file b: 28

ACTUAL -
String compare: 0
File compare: 0
File equals: true
Length file a: 8
Length file b: 28

REPRODUCIBILITY :
This bug can be reproduced always.
---------- BEGIN SOURCE ----------
import java.io.File;
import java.io.PrintWriter;
import java.io.FileNotFoundException;

public class BugFileComparision {

    public static void main(String[] arg) throws FileNotFoundException {
        String a = "I";
        String b = "\u0130"; // Turkish dotted capital I
        File fa = new File(a);
        File fb = new File(b);
        System.out.println("String compare: " + a.compareToIgnoreCase(b));
        System.out.println("File compare: " + fa.compareTo(fb));
        System.out.println("File equals: " + fa.equals(fb));
        PrintWriter out = new PrintWriter(fa);
        out.println("File a");
        out.close();
        out = new PrintWriter(fb);
        out.println("File b, is slightly longer");
        out.close();
        System.out.println("Length file a: " + fa.length());
        System.out.println("Length file b: " + fb.length());
    }
}
---------- END SOURCE ----------
Working on one of my tasks, I encountered a simple problem: finding the best method for dealing with a selection of array indices. The issue is: if you want a simple selection of indices, you may wish to put an extra boolean field in each element or create a vector of bool. But if this selection is quite sparse, iterating through all the elements will be slow. Obviously, in this case an array of indices can be generated. But then there is an issue with updating this array: even if a sorted array is not needed, it is faster to search through a sorted array (to avoid duplication of elements). But inserting an element into a sorted array is pricey.
In this article I will be covering three implementations of sparse sets of integers, comparing them with existing containers, including the Boost dynamic bitset.
The code was tested in Microsoft Visual C++ 14 CTP, GNU C++ 4.9 and Clang C++ 3.5.
Let's look at the list of existing containers we may wish to consider. The obvious ones in C++11 [1] are std::vector<bool> and std::bitset. Since std::vector<bool> internally uses a collection of bits, it would be wise to compare it against a collection of bytes: for instance, std::vector<char>.
Although std::set<T> is designed for arbitrary types, not only integers, and is not that efficient, it would still be appropriate to consider it. I like its design: it provides a convenient iterator, which allows scanning through the "selected" elements. And, in general, I think its collection of methods is the most appropriate. In contrast, std::vector<bool> provides an iterator, but it scans through all the elements (true and false), instead of only the "selected" ones (whose values are true). The std::bitset container does not have an iterator. There is also the Boost dynamic bitset, which is similar to std::bitset: it does not have an iterator, but provides methods for fast scanning through the selected elements.
You may wish to look at the BitMagic bitvector as well [3]. I am not going to discuss it in this article.
As I mentioned before, I consider the std::set<T> design the most appropriate in terms of the methods and iterators provided. The following generic structure of a sparse set is proposed: the size and count methods have a different meaning than in the std::set container; the test method is added.
class generic_sparse_set
{
public:
///------------------------------------
/// Types
///------------------------------------
typedef ... value_type;
typedef ... size_type;
typedef ... iterator;
typedef ... const_iterator;
///------------------------------------
/// Constructors
///------------------------------------
generic_sparse_set(size_type size); // creating a sparse set with elements in the interval [0,size-1]
generic_sparse_set(); // creating a sparse set; space is not reserved, requires resize.
///------------------------------------
/// Modifiers
///------------------------------------
void resize(size_type size); // reserve the space for elements in the interval [0,size-1]
void swap(generic_sparse_set& s); // swap the contents with another sparse set
bool insert(value_type i); // insert an element; returns true if a new element is inserted;
// returns false if the same element is already in the set.
void erase(value_type i); // delete an element by value
void erase(const iterator& it); // optional method; deletes an element by iterator
void clear(); // deletes all the elements
///------------------------------------
/// Capacity
///------------------------------------
size_type size() const; // the length of the interval that the elements can be taken from
size_type count() const; // the number of elements
///------------------------------------
/// Operations
///------------------------------------
bool test(value_type i) const; // test if an element is in the set
bool empty() const; // tests if a set is empty
const_iterator find(value_type i) const; // optional; finds an iterator for an element
const_iterator lower_bound(value_type i) const; // optional; returns an iterator pointing to
// the first element that is >= i
const_iterator upper_bound(value_type i) const; // optional; returns an iterator pointing to
// the first element that is > i
///------------------------------------
/// Iterators
///------------------------------------
const_iterator begin() const; // returns an iterator to the beginning
const_iterator end() const; // returns an iterator pointing the past-the-end of the container
};
Some methods are marked as optional: they may be inefficient to use or simply inappropriate (for example, using lower_bound for an unordered sequence). The iterator and constant iterator are the same class: you can only scan through the elements of the set; you cannot change them while scanning.
This container is based on the article [4]. A good explanation of the algorithm is given in [5].
The implementation consists of two arrays (for instance, std::vector containers): one is sparse (an array of pointers), the other is dense (an array of indices). If a value v is in the set, sparse[v] == &dense[k], where dense[k] == v. In other words, *(sparse[v]) == v. The index k is unimportant: it does not matter what the value of the index is, as long as dense[k] == v.
A sample set, containing three values: 1,4 and 5 is shown in Figure 1.
Figure 1
The blue squares in the picture are nullptr values. Notice two important qualities:
Let us consider the algorithm for adding an extra element e, where, for instance, e=2 (Figure 2)
Figure 2
We perform dense.push_back(e). Then we perform sparse[e] = &dense.back();
Deletion is quite easy as well. We would like to delete element k = 1 (Figure 3).
Figure 3
When we handle the dense array, we always delete the last element, which, in this case, contains 2. But we need to replace the value in the cell that we are trying to delete. So, *(sparse[1]) = dense.back(). This means that now dense[1] == 2. We then have to adjust the value in sparse[2]; in general terms, we have to perform:
sparse[dense.back()] = sparse[1];
After that, as all the necessary links are established, we can delete the elements:
dense.erase(dense.end()-1);
sparse[1] = nullptr;
We now get the state of the arrays as shown in Figure 4.
Figure 4
One final remark: although I do not consider sorting the dense array, it is possible to sort its elements. The necessary swap(i,j) algorithm can be easily written:
std::swap(sparse[dense[i]],sparse[dense[j]]);
std::swap(dense[i],dense[j]);
Using this swap, sorting can be easily implemented.
This implementation is very close in functionality and speed to the Boost dynamic bitset. The difference is in the methods provided. The basic idea is that the values are stored as bits in an array (call it bit_array) of some base type. If, for example, the base type is a 32-bit integer, in order to test whether the i-th element is present one can write the following code:
bool test(value_type i)
{
return ((bit_array[i >> 5] >> (i & 0x1F)) & 1) != 0;
}
The implementation provides an iterator, which quickly skips zero bits.
This is obviously the most compact and rather efficient implementation. But scanning through the elements using the iterator is not as fast as in the unordered sparse set.
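As an assumption about how such an iterator typically works (this is not the module's actual code), zero bits can be skipped quickly by testing whole 32-bit words against zero and only walking the bits of non-empty words:

```cpp
#include <vector>
#include <cstdint>

// Sketch: collect the indices of all set bits, skipping empty words.
std::vector<unsigned> set_bits(const std::vector<std::uint32_t>& bit_array) {
    std::vector<unsigned> out;
    for (unsigned w = 0; w < bit_array.size(); ++w) {
        std::uint32_t word = bit_array[w];
        if (word == 0) continue;               // skip 32 zero bits at once
        for (unsigned b = 0; b < 32; ++b)
            if ((word >> b) & 1)
                out.push_back(w * 32 + b);
    }
    return out;
}
```

For sparse data most words are zero, so the inner loop rarely runs; compiler intrinsics such as count-trailing-zeros can speed up the inner loop further.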
This container tries to combine the best features of both containers discussed above: the efficiency and compactness of the bounded set with the fast iteration of the unordered sparse set. The basic approach is the same as in the bounded set, but the iterator is different. When an iterator is needed, a vector of integers is quickly constructed from the bit array. The advantages are as follows:
For benchmarks, I have considered mainly the following containers: unordered_sparse_set, sparse_set, bounded_set, boost::dynamic_bitset, std::vector<bool>, std::vector<char>. In some tests, I looked at std::bitset and std::set<unsigned> as well. But the former is rather limited in terms of the range of values that it can handle (I looked at VC++ 2014 and GCC 4.9) and, in general, slower than std::vector<bool>. The latter is simply rather slow. I have included their tests in the sample code, but I won't discuss them much.
The following tests were considered:
In order to make sure that we compare like with like, the random number generator was reset in each test.
The results were obtained in Microsoft Visual C++ 14 CTP compiled for 32-bit.
The results are shown in Figure 5. The highest length used in the tests was 50,000,000.
Figure 5
The timings for the bounded set, std::vector<bool>, and the sparse set are practically the same. The Boost dynamic bitset is rather close as well. The unordered sparse set shows the slowest time, due to the size of the data structures it uses and the complexity of adding an element. Both the unordered sparse set and std::vector<char> seem to be affected by cache issues.
The nature of all the containers in question is such that random access does not depend on the number of elements, but only on the length of the intervals involved and the number of steps (trials). In Figure 6, you can see the results of benchmarks with 10,000,000 steps for various interval lengths.
Figure 6
Here again, as before, the unordered sparse set shows the worst performance, especially when the length is above 1,000,000, followed by std::vector<char>. The issue here is purely the cache: the random-access test code is very simple. You may notice that both perform practically the same as the other containers when the interval length is about 100,000.
You may ask the question: why do we consider the unordered sparse set at all? It hasn't so far shown better performance than other containers. I would suggest you read the next section.
Here is the benchmark of scanning through all of the 100,000 elements for various interval lengths (Figure 7). You may see that the speed depends on the density for most of the containers, except the unordered sparse set, which shows the best performance. The graph also shows the difference in performance of the sparse set as the number of scans increases.
Figure 7
When the scans are repeated about 20 times the performance of the sparse set is very close to that of the unordered sparse set. But even with one scan, the sparse set performs the same (or almost the same) as the other bitmap containers. You see that both std::vector<bool> and std::vector<char> are "struggling".
Figure 8 shows the speed of random deletion using a prestored collection of randomly generated values.
Figure 8
Here again the unordered sparse set is struggling: it suffers from cache issues and has to deal with the complexity of its deletion operation.
This test runs 200 independent Eratosthenes sieve calculations. It does not involve much scanning without altering the values, except for the last step, where it counts the prime numbers in the given interval. The containers that struggled with random access and deletion have, predictably, performed the worst. But here I have included std::bitset and std::set<unsigned> where I could. The results are shown in Figure 9.
Figure 9
Here the Boost dynamic bitset, the bounded set, and the sparse set have performed the best. The worst was std::set<unsigned>. As I mentioned before, this container is not really good for storing integers, unless we are dealing with very wide ranges of values. I am surprised that std::bitset was not really good and did not allow many values to be stored. The vector of bool showed rather good performance.
The included tests are self-explanatory, and you can do your own measurements. I have noticed that in some of the tests, timings may differ by over 10% from run to run. At the beginning of the file there are the following lines:
//#define USE_BOOST
#ifdef USE_BOOST
#include "boost\dynamic_bitset\dynamic_bitset.hpp"
#endif
const unsigned steps = 100000000;
const unsigned repeat = 5;
If you want to use Boost you can uncomment the USE_BOOST define. The steps constant defines the number of steps (trials) used in the random access tests. The repeat constant defines the number of scans in the iteration tests.
[1].
[2].
[3].
[4]. Briggs, P., Torczon, L.: An efficient representation for sparse sets. ACM Letters on Programming Languages and Systems 2(1–4), 59–69 (1993)
Testing and performing the migration of a standalone MySQL NDB Cluster into MySQL Cluster Manager consists of the following steps:
Perform a test run of the proposed import using import cluster with the --dryrun option. When this option is used, MySQL Cluster Manager checks for mismatched configuration attributes, missing or invalid processes or hosts, missing or invalid PID files, and other errors, and warns of any it finds, without actually performing any migration of processes or data (this step only works if you have created the mcmd user on the cluster's mysqld nodes):

mcm> import cluster --dryrun newcluster;
If errors occur, correct them, and repeat the dry run shown in the previous step until it returns no more errors. The following list contains some common errors you may encounter, and their likely causes:
MySQL Cluster Manager requires a specific MySQL user and privileges to manage SQL nodes. If the mcmd MySQL user account is not set up properly, you may see No access for user..., Incorrect grants for user..., or possibly other errors. Follow the instructions given in this step in Section 3.5.2.1, “Preparing the Standalone Cluster for Migration” to remedy the issue.
As described previously, each cluster process (other than a process whose type is ndbapi) being brought under MySQL Cluster Manager control must have a valid PID file. Missing, misnamed, or invalid PID files can produce errors such as PID file does not exist for process..., PID ... is not running ..., and PID ... is type .... See Section 3.5.2.2, “Verify All Cluster Process PID Files”.
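Before the dry run, you can spot stale PID files yourself. This is a hypothetical sketch (the ./pids directory is made up; substitute the actual data directories of your cluster processes) that checks whether each PID file names a live process:

```shell
# Sketch: report whether each *.pid file refers to a running process.
for f in ./pids/*.pid; do
  pid=$(cat "$f")
  if kill -0 "$pid" 2>/dev/null; then
    echo "$f: PID $pid is running"
  else
    echo "$f: PID $pid is NOT running"
  fi
done
```

kill -0 sends no signal; it only tests whether the process exists and is signalable by the current user.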
Process version mismatches can also produce seemingly random errors whose cause can sometime prove difficult to track down. Ensure that all nodes are supplied with the correct release of the MySQL NDB Cluster software, and that it is the same release and version of the software.
Each data node angel process in the standalone cluster must be killed prior to import. A running angel process can cause errors such as Angel process pid exists ... or Process pid is an angel process for .... Do the following when you see such errors:
For MySQL Cluster Manager 1.4.6 and earlier: See this step in Section 3.5.2.1, “Preparing the Standalone Cluster for Migration”.
For MySQL Cluster Manager 1.4.7 and later: Proceed to the next step if these are the only errors you get. The angel processes and the data node PIDs will be taken care of by the --remove-angel option used with the import cluster command at the last step of the import process.
The number of processes, their types, and the hosts where they reside in the standalone cluster must be reflected accurately when creating the target site, package, and cluster for import. Otherwise, you may get errors such as Process id reported #processes ..., Process id ... does not match configured process ..., Process id not configured ..., and Process id does not match configured process .... See Section 3.5.2.3, “Creating and Configuring the Target Cluster”.
Other factors that can cause specific errors include processes in the wrong state, processes that were started with unsupported command-line options (see Section 3.5.2.3, “Creating and Configuring the Target Cluster” for details) or without required options, and processes having the wrong process ID, or using the wrong node ID.
When import cluster --dryrun no longer warns of any errors, you can perform the import with the import cluster command, this time omitting the --dryrun option.
For MySQL Cluster Manager 1.4.6 and earlier:
mcm> import cluster newcluster; +-------------------------------+ | Command result | +-------------------------------+ | Cluster imported successfully | +-------------------------------+ 1 row in set (5.58 sec)
For MySQL Cluster Manager 1.4.7 and later: Use the --remove-angel option for the import cluster command, which kills the angel processes for the data nodes and adjusts the data nodes' PID files to contain the data node processes' own PIDs before importing the cluster:
mcm> import cluster --remove-angel newcluster; +-------------------------------+ | Command result | +-------------------------------+ | Cluster imported successfully | +-------------------------------+ 1 row in set (5.58 sec)
You can check that the wild cluster has now been imported, and is now under management of MySQL Cluster Manager:
mcm> show status -r newcluster; +--------+----------+----------------+---------+-----------+------------+ | NodeId | Process | Host | Status | Nodegroup | Package | +--------+----------+----------------+---------+-----------+------------+ | 50 | ndb_mgmd | 198.51.100.102 | running | | newpackage | | 2 | ndbd | 198.51.100.103 | running | 0 | newpackage | | 3 | ndbd | 198.51.100.104 | running | 0 | newpackage | | 51 | mysqld | 198.51.100.102 | running | | newpackage | | 52 | ndbapi | * | added | | | +--------+----------+----------------+---------+-----------+------------+ 5 rows in set (0.01 sec) | https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/mcm-using-import-cluster-test-and-migrate.html | CC-MAIN-2021-21 | refinedweb | 731 | 55.24 |
Spriting with HOpenGL

Table of Contents

The Groundwork : What This Tutorial Assumes
HOpenGL : An Introduction
Primitives and Textures in HOpenGL : A Primer
Drawing Sprites in HOpenGL : Sprites.hs
Setting Up : The spriteInit Function
Reading Images Into Memory With HOpenGL : pngtorgb and ReadImage.hs
Texture Creation and Loading : createTexture and createTextures
Actually Drawing Sprites : displaySprite and displaySpriteWithFrame
The Sprites Module in Action : SpritingDemo.hs
The End : Compiling and Running the Programs
The Sources : PngToRgb.tar.gz and SpritingDemo.tar.gz

You can head back to our main page here.

The Groundwork : What This Tutorial Assumes

This tutorial is written for people who have decent or great knowledge of Haskell and wish to learn HOpenGL. It assumes knowledge of Haskell syntax, partial application of functions, monads, and the like. If you are not fairly comfortable with Haskell or do not know how to look up Haskell functions, you will benefit more from this tutorial by reading others beforehand. If you already know C++ or languages like it, this tutorial may be just the thing you need. This tutorial focuses on drawing flat images to the screen using HOpenGL, although the code and ideas in this tutorial can also be applied to 3D imaging as well. This is half a tutorial and half a manual for the spriting module (Sprites.hs) written by David Morra and Eric Etheridge. The source code for the module is freely available to anyone who wishes to use or alter it. For those who are ready to go, let's jump in:

HOpenGL : An Introduction

HOpenGL is the Haskell binding for the multiplatform graphics API OpenGL. The binding was created by Sven Panne and is currently a work in progress. Other than this tutorial, there are a few resources a budding HOpenGL programmer may use to learn more about the binding:

Sven Panne's HOpenGL Homepage.

A practical HOpenGL primer.
Note that this tutorial is slightly out of date and does not cover subjects like texturing and the like. The HOpenGL API has evolved since this tutorial was written, so the reader should be aware that the code found within may not compile with the most recent version of HOpenGL.

Another practical primer. This tutorial is very out of date and I only recommend that you use it as a tutorial of concept. Much of the syntax found in this tutorial is out of date, but it could still be helpful for someone looking to learn about the architecture of an HOpenGL program.

FunGEn, a game engine written in Haskell with HOpenGL as the underlying API. I have personally not used this engine before, so I can't attest to whether it is a good resource. The reader should use it at their own risk.

If you know of another HOpenGL resource that you would like linked on this tutorial, please feel free to send me mail and I'll see what I can do about adding it.

Primitives and Textures in HOpenGL : A Primer

OpenGL does its primary work by rendering primitive polygons with textures on them to the screen. Primitives are things like Triangles, Quadrilaterals (called Quads), Polygons, Triangle Strips, Quad Strips, Lines, and Points. Since primitives are drawn so often, OpenGL has a built-in method of rendering them to the screen. In the most recent version of HOpenGL, this function is called renderPrimitive. Its Haskell type is:

renderPrimitive :: PrimitiveMode -> IO a -> IO a

renderPrimitive takes the kind of primitive you wish to render (Quads, Triangles, etc.) and a monadic action which is essentially a set of HOpenGL commands. The commands do things like set the locations of the primitive's vertices, set the color of the primitive at a vertex, and maybe set the texture coordinates at a vertex. OpenGL uses these commands to draw primitives of the type specified by the PrimitiveMode parameter.
In order to draw primitives with textures on them, texture coordinates must be given with each primitive vertex. The texture coordinates tell HOpenGL which sections of the currently selected texture to draw on your primitive. The diagram below illustrates how this works. In all of these examples, we are passing (x, y) texture coordinates along with each vertex of our primitive, which are Quads in these three cases (we'll get to how these coordinates are actually passed later).

In Example A, we give (0.5, 0.0) to OpenGL along with vertex X, (1.0, 0.0) along with vertex Y, (0.5, 0.5) along with vertex Z, and (1.0, 0.5) with vertex W. The end result is a primitive drawn with a section of our original texture. If we wanted to draw our entire texture on the Quad, we would send (0.0, 0.0) with vertex X, (1.0, 0.0) with Y, (0.0, 1.0) with Z, and (1.0, 1.0) with W.

Example B illustrates that the texture coordinates given to OpenGL do not have to scale proportionally with the primitive vertex coordinates. In this example, our Quad is the same size as the Quad in Example A, but we are drawing the lower half of our texture onto it instead of a single quadrant. This results in the texture looking squished.

Example C illustrates the same idea, only in reverse. In this example, we are drawing the same quadrant of the texture onto our Quad that we did in Example A, but we have stretched the Quad to twice its size in the y-direction. The result is that the texture is stretched as well.

We should make a few notes here: The first is simply that there are no constraints on which sections of the texture can be given to OpenGL when drawing a primitive. In all of our examples, we took sections of the texture that were connected to one or more of its corners, but this is not mandatory in any way.
For instance, passing the following texture coordinates with the vertices of our Quad would draw a section of the texture from the middle:

(0.25, 0.25) with X
(0.6, 0.25) with Y
(0.25, 0.8) with Z
(0.6, 0.8) with W

The second note is that in our examples, we are taking rectangular sections of our texture and drawing them on Quads. In general, you can "cut out" a piece of the texture in any shape you like, not just rectangles.

The third note is that our texture coordinates go from 0.0 to 1.0 in each direction. Even though our texture is square, the texture coordinates go from 0.0 to 1.0 for any OpenGL texture, even rectangular ones. Although the coordinates go from 0.0 to 1.0, it is possible to ask OpenGL to "wrap" a texture around one or more of its edges. When a texture has been wrapped, you can pass texture coordinates that are greater than 1.0 or less than 0.0. When this is done, the texture will appear to be tiled. This effect can be easily observed in many older games in which the texture for a large polygon is actually the same picture repeated infinitely.

The last note is that we dealt with 2D textures in our example. Even though we will not cover them in this tutorial, keep in mind that OpenGL supports 1D and 3D textures as well.

Drawing Sprites in HOpenGL : Sprites.hs

Sprites are flat, rectangular pictures. The Sprites module written by David Morra and Eric Etheridge provides methods for easily loading raw (.rgb) image files as textures and drawing sprites to the screen. The Sprites module draws a sprite to the screen by rendering a Quad with the appropriate texture on it. For our purposes, the four vertices of the Quad will correspond to the four corners of our sprite. Because Quads are fully-fledged 3D objects in OpenGL, we can easily achieve the rotation and scaling effects that one would expect to find in a pure spriting engine.
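When sprite frames are packed into a single texture (a sprite sheet), per-frame texture coordinates like the ones above can be computed rather than written by hand. Here is a small pure helper illustrating the idea; it is not part of the Sprites module, and the name frameCoords and the row-major, top-left layout are assumptions of the sketch:

```haskell
-- Hypothetical helper: texture coordinates for frame n of a sprite
-- sheet with 'cols' columns and 'rows' rows, laid out row-major from
-- the top-left.  Returns the (x, y) texture coordinates of the four
-- corners in the order lower-left, lower-right, upper-left, upper-right.
frameCoords :: Int -> Int -> Int -> [(Double, Double)]
frameCoords cols rows n =
  let w = 1 / fromIntegral cols                   -- frame width in texture space
      h = 1 / fromIntegral rows                   -- frame height in texture space
      x = fromIntegral (n `mod` cols) * w
      y = 1 - fromIntegral (n `div` cols + 1) * h -- flip: row 0 is at the top
  in [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
```

For a 4x4 sheet, frameCoords 4 4 0 gives the top-left quadrant's sub-square, ready to be passed vertex by vertex.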
The Sprites module supports rotation of sprites, and scaling can be done by changing the coordinates of the sprite's corners.

Setting Up : The spriteInit Function

Before we will be able to properly draw sprites to the screen, we must first set a few OpenGL environment variables. These can be set individually in your Haskell program, but we have written a handy function which sets them all up automatically. It is called spriteInit:

spriteInit :: Int -> Int -> IO ()
spriteInit x y = do
    blendEquation $= Just FuncAdd
    blendFunc $= (SrcAlpha, OneMinusSrcAlpha)
    textureFunction $= Replace
    texture Texture2D $= Enabled
    clearColor $= Color4 0.0 0.0 0.0 0.0
    color (Color4 0.0 0.0 0.0 (1.0 :: GLfloat))
    ortho 0.0 (fromIntegral x) 0.0 (fromIntegral y) (-1.0) (1.0)

The spriteInit function takes two Int's (discussed later) and has type IO (). We will go through each of its lines independently, but first we should explain the operator ($=). This operator is a part of HOpenGL, and it is used to set values. Without going into deeper explanation, it should simply be thought of as an assignment operator. It is there to make the HOpenGL parts of Haskell code resemble their imperative cousins as closely as possible.

Now, onto spriteInit. The first line, blendEquation $= Just FuncAdd, tells OpenGL that we wish to enable blending when we are drawing things to the screen. Having blending enabled will allow us to do things like have our background show through the transparent parts of our sprites. When we enable blending, we also tell OpenGL how to calculate the resultant color from a blending operation. In our case, we just want to add the colors.

Note: By the time you read this tutorial, this method of enabling blending may be obsolete. With the latest versions of HOpenGL, you may have to also include the line blend $= Enabled to turn blending on. This is noted in the source code for the Sprites module.
Once we have turned blending on, we want to tell OpenGL how to handle two colors which are being blended together. The second line handles this: blendFunc $= (SrcAlpha, OneMinusSrcAlpha). Without going into too much detail, this line makes colors blend the way you would expect. If you draw a transparent or translucent sprite onto the screen, the colors that were already there will show through it.

When primitives are drawn to the screen, they can have both a color and a texture. The next line, textureFunction $= Replace, tells OpenGL to ignore the primitive's color and replace it entirely with the colors (and alpha values) contained in the texture. We do this so that our blending function will use the colors contained in the texture, and not in the Quad.

texture Texture2D $= Enabled tells OpenGL that we want to use 2D textures. Similar variables exist for turning on 1D and 3D textures, but we do not use them in our engine.

clearColor $= Color4 0.0 0.0 0.0 0.0 sets the clear color. The user may call a command to draw this color to every pixel on the screen. Generally, it does not matter what color you put here, but the alpha value should be 0.0.

The next line is: color (Color4 0.0 0.0 0.0 (1.0 :: GLfloat)). As we have said before, primitives drawn in OpenGL can have both a texture and a color. This line sets the color that all primitives will be drawn with. Since we are overwriting our primitives with our textures, we don't really care what this color is. It is a good idea to keep this color's alpha value at 1.0.

Now onto the last line. This line is here because of the way sprites have traditionally been drawn: ortho 0.0 (fromIntegral x) 0.0 (fromIntegral y) (-1.0) (1.0). For starters, we are putting OpenGL in orthographic mode (not perspective mode, which is used in games like first person shooters). By using orthographic mode, we ensure that objects will not become distorted near the edges of the screen.
ortho's six arguments define the edges of the viewing volume. They go in this order: minimum x, maximum x, minimum y, maximum y, minimum z, maximum z. It should also be noted that we pass the two Int's which were given to our spriteInit function along to ortho as arguments. We are using this function to define our screen's virtual pixel size. For instance, if we called spriteInit 800 600 we would wind up creating a screen which is 800x600 virtual pixels large.

We say virtual pixels because the viewing volume is defined independently of the window size. That is to say, we can have an 800x600 display inside of a window with a size of 1024x768. All this means is that the pixel corresponding to (0, 0) would be at (0, 0) in the window, the pixel corresponding to (800, 600) would be at pixel (1024, 768) in the window, and everything else would be stretched out in between.

This may seem like a weird way to do things. We do it this way because the sprite drawing functions in the Sprites module take Int's as arguments instead of floating point numbers. Traditionally, sprites have been discrete objects comprised of discrete parts. That is to say, the information held in sprites was stored as integral information, whereas OpenGL deals primarily in floating point information. To keep with the legacy of discrete storage and discrete display, we draw sprites as if we were drawing pixels to the screen, and we use the last line of spriteInit to say how many pixels we have to draw with.

Reading Images Into Memory With HOpenGL : pngtorgb and ReadImage.hs

Before we go on, we need to deal with the annoying task of loading our textures so that we can draw with them. The OpenGL API does not include built-in functions to read image files into memory, so it is up to the user to write the appropriate routines necessary to read image files and transform them into data that OpenGL can understand. Loading textures using the Sprites module is a two step process.
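Before moving on to image loading, a quick aside on the virtual-pixel mapping described above. This small pure helper (not part of the Sprites module; the name toWindow is made up) shows where a virtual pixel lands in the actual window:

```haskell
-- Hypothetical helper: map a virtual pixel (vx, vy) from a virtual
-- screen of size (vw, vh) to its position in a window of size (ww, wh).
-- OpenGL performs this stretching itself; the function only illustrates it.
toWindow :: (Int, Int) -> (Int, Int) -> (Int, Int) -> (Double, Double)
toWindow (vw, vh) (ww, wh) (vx, vy) =
  ( fromIntegral vx * fromIntegral ww / fromIntegral vw
  , fromIntegral vy * fromIntegral wh / fromIntegral vh )
```

For example, with an 800x600 virtual screen in a 1024x768 window, virtual pixel (400, 300) lands at the window's center.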
The first step involves using pngtorgb, an intermediate tool which transforms a png image to a raw color file which will be readable by Sven Panne's ReadImage module. After the image has been converted, it can then be read by the Sprite module's texture-loading functions. Since Haskell has not yet been recognized by many game developers as a viable language, not many image loading solutions exist. It is quite likely that as Haskell gains mainstream recognition, image loading libraries will be written for a variety of different image types.

pngtorgb is invoked from a command line like so:

> pngtorgb image.png

This program does not support multiple filenames or any regular expression identification. If invoked with multiple arguments, it will ignore all but the last, which it will take to be the filename of the png you wish to convert. This program uses the libpng library to open and read a png file, writing it to an .rgb file with 8-bit RGBA color. The input png file must have alpha channel information, even if the entire image is opaque. This program has only been compiled for *NIX platforms. The source code is provided and should easily compile on any platform supported by libpng. When compiling, don't forget to link libpng.

Texture Creation and Loading : createTexture and createTextures

The Sprites module uses two functions to load textures:

createTexture :: FilePath -> (Bool, Bool) -> IO (Maybe TextureObject)
createTextures :: [(FilePath, (Bool, Bool))] -> IO [(Maybe TextureObject)]

As the two function types suggest, createTextures simply calls createTexture on a list of arguments.
Its definition is:

createTextures parameters = mapM (uncurry createTexture) parameters

The definition of createTexture is as follows:

createTexture :: FilePath -> (Bool, Bool) -> IO (Maybe TextureObject)
createTexture filename (repeatX, repeatY) = do
    [texName] <- genObjectNames 1
    textureBinding Texture2D $= Just texName
    when repeatX (textureWrapMode Texture2D S $= (Repeated, Repeat))
    when repeatY (textureWrapMode Texture2D T $= (Repeated, Repeat))
    textureFilter Texture2D $= ((Nearest, Nothing), Nearest)
    ((Size x y), pixels) <- readImage filename
    texImage2D Nothing NoProxy 0 RGBA' (TextureSize2D x y) 0 pixels
    return (Just texName)

This function takes a path to the .rgb file you wish to load and two Bool's, which may be set to True if you wish to have your texture wrap in the x and/or y directions. The first Bool is for the x component, the second for the y. This function gives back a (Maybe TextureObject), which may be passed to HOpenGL as a texture. We will quickly step through this function line by line:

[texName] <- genObjectNames 1

We call this function to generate our texture. In OpenGL, each texture is assigned a number. By calling this function, we ask OpenGL to assign our texture a unique identifier.

textureBinding Texture2D $= Just texName

This function is called to set our new texture as OpenGL's current texture. This ensures that all texture operations that occur from this point on will happen to this texture. This function may be called anytime you wish to change the current texture.

when repeatX (textureWrapMode Texture2D S $= (Repeated, Repeat))
when repeatY (textureWrapMode Texture2D T $= (Repeated, Repeat))

These two lines tell HOpenGL whether or not you want your new texture to wrap in the S (x) or T (y) directions. Both lines use the when function. Its type is:

when :: Monad m => Bool -> m () -> m ()

The when function evaluates its second argument if its first argument is True and does nothing if its first argument is False.
So in these two lines, it will set texture wrapping independently for the x and y directions depending upon the two Bool's passed into the function. textureFilter Texture2D $= ((Nearest, Nothing), Nearest) This line is necessary to tell OpenGL how to display our texture. ((Size x y), pixels) <- readImage filename This line invokes readImage, a function from the ReadImage module, to read our .rgb file and convert it into an OpenGL-friendly form. The readImage function will return the integral size of the image, as well as an HOpenGL PixelData structure which is used in the next line. texImage2D Nothing NoProxy 0 RGBA' (TextureSize2D x y) 0 pixels We use the texImage2D function in this line to bind our image data to our texture. This function performs the binding for a 2D texture, and similar functions exist for 1D and 3D textures. This function takes many parameters, but for our purposes, we only care about 3 of them. Those are the RGBA' parameter, which tells OpenGL that our image has alpha data in addition to RGB color, the (TextureSize2D x y) parameter, which tells OpenGL the size of our image, and pixels, which passes along our image information. By calling this function, we are asking OpenGL to load this image into memory. return (Just texName) After we are done setting up our new TextureObject, we return it as a (Maybe TextureObject). Be advised that the images you use as textures must have sizes (in both the x and y direction) which are powers of 2, even if the sizes in each direction are not the same. Examples of good sizes are 128x128, 512x64, 32x2048, etc. Actually Drawing Sprites : displaySprite and displaySpriteWithFrame After we are done setting everything up, we are finally ready to draw sprites. 
The Sprites module exports two functions with which the user may do this: displaySprite :: Maybe TextureObject -> (Int, Int) -> (Int, Int) -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> GLfloat -> IO () displaySpriteWithFrame :: Maybe TextureObject -> (Int, Int) -> (Int, Int) -> (Int -> ((GLfloat, GLfloat), (GLfloat, GLfloat))) -> Int -> GLfloat -> IO () These are the only two sprite drawing functions which may be accessed from outside the Sprites module, but there are many more functions involved in drawing the sprites, including the function which actually does all the work, displaySpriteBackend. The types of the hidden functions are as follows: displaySpriteBackend :: Maybe TextureObject -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> GLfloat -> IO () findCenterBackend :: (Int, Int) -> (Int, Int) -> (GLfloat, GLfloat) findSizeBackend :: (Int, Int) -> (Int, Int) -> (GLfloat, GLfloat) setVertex :: (TexCoord2 GLfloat, Vertex3 GLfloat) -> IO () mapVerticies :: [(TexCoord2 GLfloat)] -> [(Vertex3 GLfloat)] -> IO () We will begin by explaining the last four functions since they are the shortest and perform the simplest tasks. We will start with findCenterBackend and findSizeBackend, which are defined like this: findCenterBackend :: (Int, Int) -> (Int, Int) -> (GLfloat, GLfloat) findCenterBackend (x0, y0) (x1, y1) = (((fromIntegral (x1 - x0)) / 2) + (fromIntegral x0), ((fromIntegral (y1 - y0)) / 2) + (fromIntegral y0)) findSizeBackend :: (Int, Int) -> (Int, Int) -> (GLfloat, GLfloat) findSizeBackend (x0, y0) (x1, y1) = ((fromIntegral (x1 - x0)) / 2, (fromIntegral (y1 - y0)) / 2) These two functions are straightforward. findCenterBackend finds the center of the rectangle formed by the lower-left and upper-right corners defined by (x0, y0) and (x1, y1), and returns it as a pair of GLfloat's. findSizeBackend returns a GLfloat pair which holds two numbers, one for the x direction and one for the y direction. 
Each number is the size of the rectangle along their respective axis divided by 2. The functions setVertex and mapVerticies are defined this way: setVertex :: (TexCoord2 GLfloat, Vertex3 GLfloat) -> IO () setVertex (texCoordinates, vertexCoordinates) = do texCoord texCoordinates; vertex vertexCoordinates; mapVerticies :: [(TexCoord2 GLfloat)] -> [(Vertex3 GLfloat)] -> IO () mapVerticies texs verts = mapM_ setVertex (zip texs verts) setVertex is a monadic function which takes texture coordinates and primitive vertex coordinates and tells OpenGL to set those as the current coordinates. It calls two monadic functions, texCoord and vertex, to set the respective values. mapVerticies calls setVertex on a zipped list of texture coordinates and primitive vertex coordinates. Now, onto the juicy stuff: displaySpriteBackend :: Maybe TextureObject -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> GLfloat -> IO () displaySpriteBackend image (cx, cy) (sx, sy) (tx0, ty0) (tx1, ty1) angle = do textureBinding Texture2D $= image preservingMatrix $ do do translate $ Vector3 cx cy 0 do rotate angle $ Vector3 0 0 1)] renderPrimitive Quads $ do mapVerticies texs verts This function finally does the work of drawing our sprite. It takes a (Maybe TextureObject), four pairs of GLfloat's, and a final GLfloat defining rotation. The first pair of GLfloat's defines the center of our sprite, and the second pair defines the half-lengths of the rectangle along the x and y axis. The last two pairs define the upper-left and lower-right corners of the area on the texture we wish to copy onto our Quad. We'll go through displaySpriteBackend line by line: textureBinding Texture2D $= image We called the textureBinding function earlier when we were creating our textures. Here we are calling it again to set image as our current texture so that we may draw from it. preservingMatrix $ do OpenGL uses matrix math to draw things to the screen. 
This line tells OpenGL that we want to evaluate the following functions with a new matrix, but while preserving our original one. We do this because the translation and rotation matricies for each sprite will be different, but they are all displayed using the same matrix. do translate $ Vector3 cx cy 0 do rotate angle $ Vector3 0 0 1 These lines set up the translation and rotation matricies for our sprite. The translate function sets the origin to be the center of our sprite, and the rotate function rotates our coordinate system by the angle we passed into displaySpriteBackend. These functions assume two things. The first is that all sprites are being drawn with a z coordinate of 0.0. Since sprites are inherantly 2D objects, we can simply ignore anything in the z direction even though OpenGL asks for 3D coordinates. Since we wish to rotate our sprites solely in the x-y plane, we use the vector <0, 0, 1> with the rotation function.)] These two lines will define the vertex coordinates of our Quad as well as the associated texture coordinates for each vertex. These two lists are passed into the final line of displaySriteBackend: renderPrimitive Quads $ do mapVerticies texs verts Here, we finally call renderPrimitive to draw our sprite to the screen, passing mapVerticies texs verts as the commands we would like to execute. 
Now, we can finally explain the two sprite drawing functions exported from the Sprites module: displaySprite :: Maybe TextureObject -> (Int, Int) -> (Int, Int) -> (GLfloat, GLfloat) -> (GLfloat, GLfloat) -> GLfloat -> IO () displaySprite image min max texMin texMax angle = displaySpriteBackend image (findCenterBackend min max) (findSizeBackend min max) texMin texMax angle displaySpriteWithFrame :: Maybe TextureObject -> (Int, Int) -> (Int, Int) -> (Int -> ((GLfloat, GLfloat), (GLfloat, GLfloat))) -> Int -> GLfloat -> IO () displaySpriteWithFrame image min max func frame angle = displaySpriteBackend image (findCenterBackend min max) (findSizeBackend min max) texMin texMax angle where (texMin, texMax) = func frame Both of these functions take a (Maybe TextureObject), which is the texture to draw from, and two pairs of Int's, which respectively define the lower-left and upper-right hand corner of the sprite we wish to draw. Both of these functions will call findCenterBackend and findSizeBackend on the Int pairs and pass the results into displaySpriteBackend. These functions differ in how they handle texture coordinates, however. displaySprite takes two pairs of GLfloat's and simply pass them on to displaySpriteBackend. displaySpriteWithFrame is different, however. Instead of pairs of GLfloat's, it takes a function which takes an Int and gives back two pairs of GLfloats, and an Int to evaluate the function at. This functionality is provided as an easy way to draw frames of animation without having to keep track of anything other than the frame number. 
displaySpriteWithFrame's usefulness is illustrated with the following example: If I wanted to animate this cursor, I could number its four frames of animation 0 - 3 from left to right, and then write this function: getCursorCoordinates :: Int -> ((GLfloat, GLfloat), (GLfloat, GLfloat)) getCursorCoordinates frameNumber = (((fromIntegral frameNumber) * (1 / 4), 0.0), ((fromIntegral frameNumber) * (1 / 4) + (1 / 4), 1.0)) My function, getCursorCoordinates would take a frame number and give back the appropriate part of the sprite to draw as a pair of GLfloat pairs. All I then have to do to display a frame of animation for my cursors is by keeping around a value containing the frame number (let's call it frame), and invoking this function: displaySpriteWithFrame cursors (minX, minY) (maxX, maxY) getCursorCoordinates frame cursorAngle Just one final note to make: OpenGL considered the lower left hand corner of the screen to be the minimum x and y coordinates, but with textures, the minimum coordinates are at the upper-left coordinate of the texture. It is necessary to keep this in mind when writing functions involving texture coordinates to ensure that things get displayed correctly. The Sprites Module in Action : SpritingDemo.hs We have provided an example program which draws simple sprites to the screen using the Sprites module. It is comprised of five functions: main :: IO () display :: IORef GLfloat -> IORef Int -> (Maybe TextureObject, Maybe TextureObject, Maybe TextureObject, Maybe TextureObject) -> IO () getCursorsFrame :: Int -> ((GLfloat, GLfloat), (GLfloat, GLfloat)) makeNewWindow :: String -> IO () timer :: IORef GLfloat -> IORef Int -> (Maybe TextureObject, Maybe TextureObject, Maybe TextureObject, Maybe TextureObject) -> IO () We will explain these functions in order of simplest to most complex, and finish with main, which ties everything together. 
We'll start with getCursorFrame: getCursorsFrame :: Int -> ((GLfloat, GLfloat), (GLfloat, GLfloat)) getCursorsFrame frame | frame < 3 = (((fromIntegral frame) * (1 / 4), 0), ((fromIntegral frame) * (1 / 4) + (1 / 4), 1)) | frame >= 3 = (((3 / 4) - (1 / 4) * ((fromIntegral frame) - 3), 0), (1 - (1 / 4) * ((fromIntegral frame) - 3), 1)) This function is similar to the function we wrote above to animate our cursor. getCursorFrame is used to animate the same cursors picture, but this time we wish to have 6 frames. Frames numbers 0 - 3 are the same as we defined them above, but we are adding frames 4 and 5 in order to play a smooth animation. Frame 4 is the same as frame 2 and frame 5 is the same as frame 1. We do this so that we may play the frames in a cycle (0, 1, 2, 3, 4, 5, 0, 1, 2, etc) and get a smooth animation. The function makeNewWindow is a routine we call to set up our environment and create the window that our application will run in: makeNewWindow :: String -> IO () makeNewWindow name = do createWindow name windowSize $= Size 800 600 spriteInit 800 600 clearColor $= Color4 0.2 0.3 0.6 0.0 [tex1, tex2, tex3, cursors] <- createTextures [("test6.rgb", (False, False)), ("test4.rgb", (False, False)), ("test5.rgb", (False, False)), ("cursors.rgb", (True, True))] angle <- newIORef (0 :: GLfloat) frame <- newIORef (0 :: Int) displayCallback $= display angle frame (tex1, tex2, tex3, cursors) addTimerCallback msInterval (timer angle frame (tex1, tex2, tex3, cursors)) createWindow name This is an HOpenGL function which takes care of creating a window for us. It will set name as the window title. windowSize $= Size 800 600 spriteInit 800 600 These two lines set up the properties of our environment. The windowSize variable will hold the physical dimensions of our window. In this case, we are creating an 800x600 window. The second line is the spriteInit function we discussed earlier. 
In this case, we are creating a window with the same size as the virtual pixel environment created by spriteInit. It should be noted that these two sizes do not have to necessarily be the same. People with low-resolution displays may want to change the values given to windowSize in order to create a smaller window. In any case, the arguments to spriteInit should be left alone. clearColor $= Color4 0.2 0.3 0.6 0.0 Even though spriteInit initializes the clear color to black, we are changing it to a different color here for the purpose of demonstrating sprite transparency/ translucency. This line can be commented out with no problems if desired. [tex1, tex2, tex3, cursors] <- createTextures [("test6.rgb", (False, False)), ("test4.rgb", (False, False)), ("test5.rgb", (False, False)), ("cursors.rgb", (True, True))] Here we call the createTextures function on a list of FilePath's and Bool pairs to create four textures. angle <- newIORef (0 :: GLfloat) frame <- newIORef (0 :: Int) These two lines create IORef's that will store data pertaining to the rotation of our sprites, as well as the animation frame for our cursor. displayCallback $= display angle frame (tex1, tex2, tex3, cursors) addTimerCallback msInterval (timer angle frame (tex1, tex2, tex3, cursors)) These two lines define callback functions. The displayCallback variable holds a function which is called after our window is initialized and is ready to display data. In our case, we are assigning a function called display to be our display callback. The function addTimerCallback sets up a timer callback function which will be called after a certain number of milliseconds have passed. addTimerCallback's first argument is the millisecond count, and we give it an Int called msInterval which is defined at the top of SpritingDemo.hs. The timer callback function is called timer. Callback functions are of type IO (), so in our callback creation we evaluate display and timer fully. 
Here is the definition of timer: timer :: IORef GLfloat -> IORef Int -> (Maybe TextureObject, Maybe TextureObject, Maybe TextureObject, Maybe TextureObject) -> IO () timer angle frame textures = do modifyIORef angle (\num -> if (num + rotSpeed >= 360.0) then (360 - (num + rotSpeed)) else (num + rotSpeed)) modifyIORef frame (\f -> if (f == 5) then 0 else (f + 1)) display angle frame textures addTimerCallback msInterval (timer angle frame textures) The first two lines of this function update the two IORef's we created in makeNewWindow. We wish for the information stored in angle to always be a number between 0.0 and 360.0, and we want the information in frame to always be between 0 and 5. The first line makes use of a rotation speed number called rotSpeed which is defined in the first few lines of SpriteDemo.hs. The next line, display angle frame textures, calls our display function with the information passed to timer. The final line adds a new timer callback. In OpenGL, a timer callback vanishes after it is executed, so we must continually set new callbacks to keep the program going. Now, we are ready to delve into the display function: display :: IORef GLfloat -> IORef Int -> (Maybe TextureObject, Maybe TextureObject, Maybe TextureObject, Maybe TextureObject) -> IO () display angle frame (tex1, tex2, tex3, cursors) = do clear [ColorBuffer, DepthBuffer] angle' <- readIORef angle frame' <- readIORef frame flush swapBuffers This function is very mechanical and straightforward. The first line clears the screen using the clear color we defined earlier. It also clears the depth buffer, but we won't discuss that in this tutorial. angle' <- readIORef angle frame' <- readIORef frame These two lines read our two IORef's and store the information in a GLfloat called angle' and an Int called frame'. These will be used in the next few lines: Here we call our two sprite drawing functions a bunch of times with different parameters to draw all of our sprites on the screen. 
Note that the last line invokes displaySpriteWithFrame into which we pass our cursor animation function getCursorsFrame and frame'. flush swapBuffers These last two lines are just mechanical. flush tells OpenGL to perform all of the drawing operations defined up to this point. In this demo, we are using double buffering. This means that all drawing operations happen on a piece of memory which is hidden from the screen. Once the function swapBuffers is called, that piece is memory becomes visible and the memory which was just displayed becomes hidden so that it may be drawn to. Double buffering is a technique that is used to avoid flicker while drawing. Now that we have defined all of our functions, it is time to explain main line by line. main = do (progName, _) <- getArgsAndInitialize initialDisplayMode $= [DoubleBuffered, RGBAMode, WithDepthBuffer, WithAlphaComponent] makeNewWindow "HOpenGL Spriting Demo" mainLoop (progName, _) <- getArgsAndInitialize Calling getArgsAndInitialize will set up HOpenGL. The function will spit out the name of the program and some other information which we can ignore. If we wanted to, we could pass progName into makeNewWindow as the window name. In this example, we are explicitly naming the progName value, even though we do not use it in the program. initialDisplayMode $= [DoubleBuffered, RGBAMode, WithDepthBuffer, WithAlphaComponent] This line tells OpenGL that we want to use double buffering, RGBA color, we want to have a depth buffer, and we want our colors to contain alpha information. makeNewWindow "HOpenGL Spriting Demo" We call the makeNewWindow function we defined earlier. We are going to call our window "HOpenGL Spriting Demo". mainLoop This line tells OpenGL to enter its loop and wait for callbacks to be triggered. This function has to be called in every HOpenGL program. The End : Compiling and Running the Programs We have gone through and explained every last line in the Sprites module and our spriting demo. 
If you would like to compile the source code for these programs, it is freely available. The source code archive you can download includes scripts that you may run to compile these programs automatically (called buildSpritingDemo and buildPngToRgb). So far, these programs have only been compiled on Debian GNU/Linux (using the ghc-cvs and ghc-cvs-hopengl binary packages), but since the source code contains nothing Linux-specific, the spriting demo and pngtorgb should compile on any platform which is supported by HOpenGL and libpng. If anyone succsessfully compiles one or both programs on a Windows or Mac machine, we would appreciate some feedback. It is also pretty likely that future versions of HOpenGL and/or GHC may break parts of these programs. If this happens, we would appreciate a heads-up so that we can try to keep them up-to-date. Special thanks to Sven Panne for all of his hard work developing the HOpenGL API, and for writing the ReadImage module used by our Sprites module. Special thanks to haskell.org for hosting this tutorial and source code. The Sources : PngToRgb.tar.gz and SpritingDemo.tar.gz PngToRgb.tar.gz SpritingDemo.tar.gz This source code for the Sprites module, the spriting demo, and the png conversion program is copyright 2005 by David Morra and Eric Etheridge. The ReadImage module was written by Sven Panne but has been slightly modified. The details are contained in the copyright files in each of the archives, but basically you are free to use, copy, redistribute, alter, compile, integrate, and do whatever to this code you feel like doing as long as we're not held responsible for the consequences. One final note: As of the publication of this tutorial, the Sprites module has undergone a huge overhaul. New features have been added including native scaling, rotation about an arbitrary point and even corner manipulation. These are things that one should expect in a fully functional spriting engine, plus more. 
I hope to write a tutorial for the new module soon. Thanks for reading and good luck. | http://www.haskell.org/~pairwise/HOpenGL/HOpenGL.html | crawl-001 | refinedweb | 6,121 | 60.85 |
Avoiding embarrassing forwards
Problem: You receive an email (say from a customer), you pass it along internally with a few comments and… you accidentally copy the author of the email with details they were not meant to read.
Solution: Instead of “Reply to all”, adding your team in Cc and removing the customer (the part a lot of people forget), forward the whole email and then add your comments. It’s much easier to avoid mistakes when you have to populate an empty To: box than when you have to edit an existing list of To's, Cc's and Bcc's.
Don’t send too soon
Problem: writing an email that’s not ready to be sent yet.
You need to write a detailed email which is not supposed to be sent just yet (for example because you are describing something you’re still working on). How do you make sure that you don’t send the email until the time is right?
Solution: Write it and save it as a draft but don’t enter anything in the To: box. Do this last.
Say no to ugly emails
Problem: You want your emails to look nice.
My emails routinely contain code or shell output, and as much I love Gmail, its abilities to format are pathetic, both from a user interaction standpoint (why do I need three clicks to indent text to the right?) and from a theme standpoint. For example:
In order to get this ugly rendition of code, I had to indent the snippet manually and then change its font. And it still looks terrible.
Solution: Use Markdown Here, a Chrome extension which not only allows you to format your code in Markdown but also uses some pretty nice CSS.
You only need a few backticks and, optionally, specify the language:
```java public class SlidingWindowMap { public SlidingWindowMap(Set keys, int maxCount, long periodMs) { // ... } /** * @return a key that has been used less than `maxCount` times during the * past `periodMs` milliseconds or null if no such key exists. */ public String getNextKey() { // Your brilliant Solution goes here } } ```
Click the Markdown button and voilà:
Your turn now, what email tricks do you use which you think very few people know?
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) | http://java.dzone.com/articles/email-tips-i-thought-were?mz=123873-agile | CC-MAIN-2014-23 | refinedweb | 392 | 66.88 |
ReadItem QML Type
Specifies an item to be read from the server. More...
Properties
- attribute : Constants.NodeAttribute
- indexRange : string
- nodeId : string
- ns : variant
Detailed Description
This type is used to specify items to be read from the server using the function Connection::readNodeAttributes.
Property Documentation
Determines the attribute of the node to be read.
Determines the index range of the attribute to be read. If not needed, leave this property empty.
Determines the node id of the node to be read.
Determines the namespace of the node to be read. The namespace can be given by name or index. If this property is given, any namespace in the node id will be. | https://doc.qt.io/QtOPCUA/qml-qtopcua-readitem.html | CC-MAIN-2021-10 | refinedweb | 112 | 68.77 |
Leverage Drush 7 for Drupal 8
by Aurelien Navarre.
With Drupal 8 around the corner, it’s time to explore how Drush 7 can once again be a drupalist’s best friend.
Now powered by Composer
Drush 7 makes use of Composer to download its dependencies. As you can see below, Drush uses the Composer autoloader to load its classes rather than use a require statement.
$ cd ~/.composer/vendor/drush/drush/lib/Drush ; ls ; head -n4 Sql/Sql8.php
Cache Role Sql
<?php
namespace Drush\Sql;
use Drupal\Core\Database\Database
For more detailed information about Drush 7 and Composer, please read the installation instructions.
If you’re closely following Core development, it won’t come to you as a surprise as Drupal 8 has embraced Composer early in the development process. But more than that, it’s also a trend in the PHP community. Composer has become so popular it’s rapidly replacing PEAR as the de facto solution for reusable PHP components and package management.
Note that only Drush 7 is compatible with Drupal 8. It’s backward compatible with Drupal 6 and 7 so you can safely upgrade. Alternatively, you can define aliases in your
~/.bashrc or
~/.bash_aliases file to invoke any specific Drush version required for your operations.
alias drush6="/usr/local/bin/drush"
alias drush7="/home/USERNAME/.composer/vendor/bin/drush"!
Commentaires
Drush 7 in windows
How we can setup drush 7 in windows machine?
Instructions on the Drush project page
It's more complicated to set up Drush on Windows than on any Linux/OSX machine. This is why a virtual machine running Linux is prefered on Windows computers.
However, you have installation instructions available at /drush (under the "Installing Drush on Windows" section).
If you need assistance with anything, please make sure to visit http: //drupal.stackexchange.com/search?q=[drush]++windows to review questions tagged Drush (for Windows), and don't hesitate to ask a new one if you can't find your answers!
Dev Desktop
A quick way to get up and running with Drush on Windows is simply to use Acquia Dev Desktop which you can download at. It's actually more than just Drush, it's a full LAMP stack tuned for Drupal development, so it comes with all you need: Apache, MySQL and PHP.
Ajouter un commentaire | https://www.acquia.com/fr/node/3182021 | CC-MAIN-2016-36 | refinedweb | 389 | 56.76 |
[still having Internet problems...I probably won't get replies immediately] --- The various APIs all have well-defined ways for handling namespaces of elements and attributes. I want to describe a simple data structure for passing namespace information between APIs. I want it to be as efficient as possible. In namespace mode PyExpat should produce tuples: (URI, localname, rawname) Those should be passed as the "name" parameter to SAX event handlers of the form: def startElement((URI,localname,rawname), attrs):... ... Those can in turn be passed to the DOM createElement method which can check the type of its first parameter and "do the right thing" when it is a tuple. This is more efficient than the DOM's createElementNS method which requires string manipulations. --- The second issue is an efficient way to pass around attributes. Note that I am not talking about how to query or fetch attributes. Just how to pass them around. The obvious representation for an attribute value is ((URI,localname,rawname),value). Tuples and list are much, much faster to create than instances in Python. Java doesn't really have equivalents. I propose that in beta 2 of Python, PyExpat in namespace mode should pass list structures of the form: [((URI,localname,rawname),value),...] I choose not to use dictionaries because it isn't clear whether to index on rawnames or localname/URI tuples. It depends on the application so it is better to build dictionary-based indexes at the application level. If a particular user wants a more friendly data structure they can construct a DOM AttributeList object: def startElement((URI,localname,rawname), attrs):... attrs=xml.dom.AttributeList( attrs ) (as an optimization, AttributeList objects would probably be lazily indexed based on either qname or URI/localname, depending on what the user asked for) -- | https://mail.python.org/pipermail/xml-sig/2000-June/002849.html | CC-MAIN-2017-43 | refinedweb | 298 | 56.05 |
A package for getting a US equity earnings announcement calendar.
Project description
ecal (pronounced ee-cal) is a package for getting a US equity earnings announcement calendar.
For more documentation, please see.
Installation
ecal can be easily installed with pip:
$ pip install ecal
Usage
ecal is really simple to use. Below you’ll find the basics.
Getting the earnings announcements for a single date
To get the earnings announcements for a single date simply import ecal and call get():
import ecal cal_df = ecal.get('2017-03-30')
The results will be an earnings calendar in a pandas DataFrame:
ticker when date 2017-03-30 AEHR amc 2017-03-30 ANGO bmo 2017-03-30 BSET -- 2017-03-30 FC amc 2017-03-30 LNN bmo 2017-03-30 SAIC bmo 2017-03-30 TITN bmo
The returned DataFrame has the following columns:
- ticker
- is the ticker symbol on NYSE or NASDAQ.
- when
- can be bmo which means before market open, amc which means after market close or -- which means no time reported.
If there were no announcements for this day, an empty DataFrame will be returned.
Getting the earnings announcements for a date range
It is equally easy to get the earnings announcements for a date range:
import ecal cal_df = ecal.get('2018-01-01', '2018-01-05')
Once again the results will be an earnings calendar in a pandas DataFrame:
ticker when date 2018-01-04 CMC bmo 2018-01-04 LNDC amc 2018-01-04 NEOG bmo 2018-01-04 RAD amc 2018-01-04 RECN amc 2018-01-04 UNF bmo 2018-01-05 AEHR amc 2018-01-05 ANGO bmo 2018-01-05 FC amc 2018-01-05 LW bmo 2018-01-05 PKE bmo 2018-01-05 PSMT amc 2018-01-05 RPM bmo 2018-01-05 SONC amc 2018-01-05 WBA bmo
Days with no earnings announcements will have no rows in the DataFrame. In the example above, there were no announcements on Jan first, second and third.
It should be noted that ecal fetches earnings announcements from api.earningscalendar.net by default. This source limits us to 1 call per second. However you don’t have to worry about this because the ecal.ECNFetcher throttles calls to the API to prevent rate limiting. That said, please note that this fetcher gets announcements one day at a time which means if you want 30 days, it’s going to take 30 seconds to get that data. Yikes. Fear not… that’s why ecal comes with caching.
Caching
ecal supports caching so that repeated calls to ecal.get() don’t actually make calls to the server. Runtime caching is enabled by default which means calls during your program’s execution will be cached. However, the ecal.RuntimeCache is only temporary and the next time your program runs it will call the API again.
Persistent on disk caching is provided via ecal.SqliteCache and can be easily enabled by setting ecal.default_cache once before calls to ecal.get():
import ecal ecal.default_cache = ecal.SqliteCache('ecal.db') cal_df = ecal.get('2017-03-30')
Extension
ecal is very easy to extend in case you want to support another caching system or even create an earnings announcement fetcher. For more documentation, please see.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/ecal/ | CC-MAIN-2022-05 | refinedweb | 568 | 73.37 |
+++ building gstreamer
+ Starting on Sat May 11 08:02:07 CEST 2002
+ Linux bosbes.4fm.be 2.4.9-13 #1 Tue Oct 30 20:05:14 EST 2001 i686 unknown
+ Checking out source code
+ Running ./autogen.sh
+ GStreamer gstreamer version is 0.3.4.1
+ Running ./configure
+ Running make
+ Running make distcheck
- Problem while running make distcheck
- Dumping end of log ...
PASS: registry
INFO (13274: 0) Initializing GStreamer Core Library version 0.3.4.1
INFO (13274: 0) Adding plugin path: "/home/thomas/gst/build/gstreamer/gstreamer-0.3.4.1/=build"
INFO (13274: 0) CPU features: (c1c7f9ff) MMX 3DNOW MMXEXT
loaded registry user_registry in 0.027312 seconds
testplugin: 0x8051858 testplugin
testplugin2: 0x8051890 testplugin2
PASS: static2
===================
1 of 6 tests failed
===================
make[4]: *** [check-TESTS] Error 1
make[4]: Leaving directory `/home/thomas/gst/build/gstreamer/gstreamer-0.3.4.1/=build/testsuite/plugin'
make[3]: *** [check-am] Error 2
make[3]: Leaving directory `/home/thomas/gst/build/gstreamer/gstreamer-0.3.4.1/=build/testsuite/plugin'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/home/thomas/gst/build/gstreamer/gstreamer-0.3.4.1/=build/testsuite'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/home/thomas/gst/build/gstreamer/gstreamer-0.3.4.1/=build'
make: *** [distcheck] Error 2
+++ building gst-plugins
+ Starting on Sat May 11 08:16:40 CEST 2002
+ Linux bosbes.4fm.be 2.4.9-13 #1 Tue Oct 30 20:05:14 EST 2001 i686 unknown
+ Checking out source code
+ Running ./autogen.sh
+ GStreamer gst-plugins version is 0.3.4 Sat May 11 08:25:31 CEST 2002
+ Linux bosbes.4fm.be 2.4.9-13 #1 Tue Oct 30 20:05:14 EST 2001 i686 unknown
+ Checking out source code
+ Running ./autogen.sh
+ GStreamer gst-player version is 0.2.3.1
+ Running ./configure
+ Running make
+ Running make distcheck
- Problem while running make distcheck
- Dumping end of log ...
if test "$subdir" = .; then :; else \
test -d ../../gst-player-0.2.3.1/libs/gst/$subdir \
|| mkdir ../../gst-player-0.2.3.1/libs/gst/$subdir \
|| exit 1; \
(cd $subdir && \
make \
top_distdir="." \
distdir=../../../gst-player-0.2.3.1/libs/gst/$subdir \
distdir) \
|| exit 1; \
fi; \
done
make[3]: Entering directory `/home/thomas/gst/build/gst-player/libs/gst/player'
make[3]: *** No rule to make target `gstprefs.h', needed by `distdir'. Stop.
make[3]: Leaving directory `/home/thomas/gst/build/gst-player/libs/gst/player'
make[2]: *** [distdir] Error 1
make[2]: Leaving directory `/home/thomas/gst/build/gst-player/libs/gst'
make[1]: *** [distdir] Error 1
make[1]: Leaving directory `/home/thomas/gst/build/gst-player/libs'
make: *** [distdir] Error 1
+++ building gst-editor
+ Starting on Sat May 11 08:27:00 CEST 2002
+ Linux bosbes.4fm.be 2.4.9-13 #1 Tue Oct 30 20:05:14 EST/usr/include/libgnomeui-2.0 -I/usr/include/libgnome-2.0 -I/usr/include/gtk-2.0 -I/usr/include/libart-2.0 -I/usr/include/libbonoboui-2.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/libbonobo-2.0 -I/usr/include/gnome-vfs-2.0 -I/usr/lib/gnome-vfs-2.0/include -I/usr/include/bonobo-activation-2.0 -I/usr/include/libxml2 -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/X11R6/include -I/usr/include/libglade-2.0 -I/usr/include/libgnomecanvas-2.0 -I/usr/include/gconf/2 -I/usr/include/orbit-2.0 -I/usr/include/linc gst_editor-editor.o
mkdir .libs
gcc -g -O2 -o .libs/gst-editor gst_editor-editor.o ../libs/gst/editor/.libs/libgsteditor.so -lxml2 -lz -lm -lgobject-2.0 -lgthread-2.0 -lpthread -lgmodule-2.0 -ldl -lglib-2.0 xmlstreamer/gst/.libs/libgstreamer.so -lxml2 -lz -lm -lgobject-2.0 -lgthread-2.0 -lpthread -lgmodule-2.0 -ldl -lglib-2.0 /usr/lib/libpopt.so -lgobject-2.0 -lgmodule-2.0 -ldl -lgthread-2.0 -lpthread -lxml2 -lz -lm -lglib-2.0 :28:43 C[4]: *** [libgstopenquicktimedemux_la-gstopenquicktimedemux.lo] Error 1
make[4]: Leaving directory `/home/thomas/gst/build/gst-all/gst-plugins/ext/openquicktime'
Failed builds : gstreamer gst-plugins gst-player gst-editor gst-all
+ detected rpm
+++ building gstreamer
+ Starting on Sat May 11 08:02:18 ...
pthread_self: 1026
descr: 0x403dcbe0
*descr: 0x403dcbe0
FAIL: Sat May 11 08:13:29 C.4.1
+ Running ./configure
+ Running make
+ Running make distcheck
+ Building RPM's
- Problem while running rpm
- Dumping end of log ...
error: failed build dependencies:
gstreamer-devel >= 0.3.4.1 is needed by gstreamer-plugins-0.3.4.1-20020511_081536
+++ building gst-player
+ Starting on Sat May 11 08:29:06 CEST 2002
+ Linux gramm.4fm.be 2.4.9-13custom #1 SMP Wed Dec 26 20:44:21 CET 2001 i686 unknown
+ Checking out source code
+ Running ./autogen.sh
- Problem while running autogen.sh
- Dumping end of log ...
+ check for build tools
checking for autoconf >= 2.52... found 2.52, ok.
checking for automake >= 1.5... found 1.5, ok.
checking for gettextize >= 0.10.0... found 0.10.38, ok.
checking for intltoolize >= 0.18.0....
+++ building gst-editor
+ Starting on Sat May 11 08:29:12 C/pango-1.0 -I/usr/X11R6/include -I/usr/include/freetype2 -I/usr/include/gtk-2.0 -I/usr/include/atk-1.0 -I/usr/lib/gtk-2.0/include -I/usr/include/libxml2 -I/usr/include/libglade-2.0 -I/usr/include/bonobo-activation-2.0 -I/usr/include/gnome-vfs-2.0 -I/usr/lib/gnome-vfs-2.0/include -I/usr/include/gconf/2 -I/usr/include/libbonobo-2.0 -I/usr/include/libart-2.0 -I/usr/include/libgnome-2.0 -I/usr/include/libbonoboui-2.0 -I/usr/include/libgnomeui-2.0 -I/usr/include/linc-1.0 -I/usr/include/orbit-2.0 -I/usr/include/libgnomecanvas:30:47
*******************************************************************
[03:00] <wingo-afk> lol
[03:02] <omega> <aml> use the wiki for good
[03:02] <omega> <Schuyler> DO NOT BE TEMPTED BY THE DARK SIDE OF THE WIKI
[03:02] <omega> <collord> DarkSide
[03:02] <omega> <omega> does this mean I have to confront Darth Apache?
[03:02] <omega> <Schuyler> worse
[03:02] <omega> <-- dannie has quit (Remote closed the connection)
[03:02] <omega> <Schuyler> Darth 404
[03:02] <omega> <collord> I'm pid 1 omega, your father process
[03:03] Nick change: omega -> omega_f00d
[03:43] wingo-afk (~wingo@...) left irc: Remote closed the connection
[03:44] wingo (~wingo@...) joined #gstreamer.
[03:48] <wingo> so it's not really necessary to run gst-register any more for uninstalled gst, if you have your GST_PLUGIN_PATH set properly
[03:50] grub_booter (~charlie@...) got netsplit.
[03:56] grub_booter (~charlie@...) got lost in the net-split.
[03:56] dobey (~dobey@...) left #gstreamer ("eh").
[04:05] femistofel (artm@...) joined #gstreamer.
[04:05] artm (artm@...) left irc: Read error: 110 (Connection timed out)
[04:11] grub_booter (~charlie@...) joined #gstreamer.
[04:31] Nick change: omega_f00d -> omega
[04:32] <omega> woo! have three soundcards working in my desktop, using the sblive to output jumpered to the hammerfall, then recorded and fed back to the onboard
[04:34] <wingo> nice :)
[04:34] <omega> so now, Mr. Jack....
[04:34] <omega> I should be able to theoretically play mp3's out the hammerfall and jumper it to my sblive
[04:35] <omega> or more interestingly (if I had another cable) filter stuff live
[04:36] <wingo> in theory ;)
[04:36] <wingo> does alsaplayer work for you with jack?
[04:36] <wingo> alsaplayer -o jack
[04:36] <omega> haven't tried jack yet
[04:36] <omega> lemme try
[04:37] <omega> need cvs alsaplayer
[04:37] <wingo> yes, or a recent release
[04:37] <omega> latest release says "no such libjack.so"
[04:39] <omega> is the README for the gst plugin up to date?
[04:39] <wingo> prolly not
[04:39] <omega> ok, so what would I use to try jack to play mp3s?
[04:40] <omega> and dare I ask how jack deals with multiple cards?
[04:41] <wingo> jackd -d alsa -d <your alsa device> ...
[04:42] <wingo> all options after -d <driver> are parsed bythe driver
[04:42] <wingo> use alsaplayer
[04:47] <omega> blah, configure says "use jack: yes", yet no libjack.so exists
[04:47] <omega> heck, there are no such files *jack* in the entire alsaplayer csv
[04:47] <omega> cvs
[04:48] <omega> only mention of jack in a makefile is ln -s alsaplayer jackplayer
[04:51] <wingo> it works for me, yo...
[04:52] <wingo> maybe i just grabbed a release or something
[04:52] <omega> -d jack, not -o jack
[04:52] <wingo> 0.99.57 has jack
[04:53] <omega> ok, that works
[04:53] <omega> at least for the sblive
[04:54] <omega> now, how would I play an mp3 out jack?
[04:54] <wingo> do you have alsaplayer working?
[04:54] <omega> yes
[04:54] <wingo> alsaplayer <file.mp3>
[04:54] <wingo> with the -o jack or -d jack or whatever
[04:55] <omega> yes
[04:55] <wingo> you can say -i text for a text interface, i think
[04:56] <omega> no, play an mp3 with gst
[04:56] <omega> though atm I'm updating gst cvs
[04:57] <omega> and should probably do some work on dependent packages before wasting too much time building gst <g>
[04:57] <wingo> heh
[04:58] <wingo> well, the jack element doesn't work right yet, i can't get jack to work at all any more
[04:58] <omega> oh, that's a problem
[04:58] <wingo> it should be ~30 minutes with a working jack installation to get it to function
[04:58] <omega> what about alsasrc/alsasink?
[04:58] <wingo> those are bitrotten :-/
[04:59] <omega> hrm, those'll both have to be fixed then....
[04:59] <wingo> they never got updated at the last capsnego change, and there are some logic errors in the process() functions
[05:01] <taaz> those gst hackers sure are slackers... not updating plugins and all <g>
[05:02] <omega> wingo: you don't want to let me slide because of non-working jack or alsa plugins, trust me <g>
[05:04] <taaz> omega: so you're not using gst for all that crazy routing and multi card stuff?
[05:04] <wingo> i'll get to them eventually, every time i try to do something my task stack gets out of control
[05:05] <omega> taaz: uh?
[05:05] <wingo> want to write jack plugin? then rewrite parser. when done, understand gstthread. when done, ...
[05:06] <omega> pff
[05:06] <wingo> so much to do, yo.
[05:07] Action: Procule is gone.. autoaway after 60 min <cyp/lp>
[05:08] <taaz> wingo: no rush. it's ok if it takes till tomorrow for you to finish. <g>
[05:08] <taaz> i tried to write down all the issues with getting a full dvd player once... it was a long list
[05:11] <taaz> ha! people leaving for summer, i have bandwidth now ;)
[05:14] <taaz> and i need it... 144M debian update time ;)
[05:18] <wingo> heh
[05:23] <taaz> omega: got a few minutes?
[05:23] <omega> yeah
[05:24] <taaz> bytestream timestamps... what's your opinion on how to handle this?
[05:24] <taaz> i've got a hack way to do it
[05:24] <omega> well, what instances do you need it in?
[05:24] <taaz> which will work 99% of the time probably
[05:24] <taaz> a52dec for instance
[05:25] <omega> well, so why would the source stream have timestamps?
[05:25] <taaz> the a52dec plugins is setup to make this work
[05:25] <taaz> but bytestream is alwyas defaulting to no timestamp
[05:25] <omega> what assigns timestamps to the buffers that contain the compressed stream?
[05:25] <taaz> uh.. cause the mpeg2 stream has them?
[05:25] <omega> ok
[05:26] <taaz> the code in bytestream that creates a new buffer and returns it never sets the timestamp
[05:26] <omega> it should imo be set to the most logical nearby timestamp
[05:26] <taaz> if we "assume" properly packetized data and the user is only requesting those packets then we can just always use timestamp of first input buffer
[05:27] <omega> maybe even have a bit that can be set to determine whether that's the most recent or oldest timestamp represented
[05:27] <taaz> but this fails if, for whatever reason, someone requests a buffer that spans input buffers with different timestamps
[05:27] <omega> have the plugin decide, as per above
[05:27] <taaz> i wanted to find a nice way to handle that case. but solutions i have are making the simple case too complex
[05:28] <omega> have the plugin decide, choose a sane default, you're done
[05:28] <omega> if the plugin really really wants to find timestamps on multi-span buffers from bytestream, it should be doing smaller reads
[05:28] <wingo> or not using bytestream, perhaps
[05:29] <omega> taaz: implement it, and if you later find problems, rework it
[05:29] <omega> I'm trying to LART myself out of the habit of trying to design things for all cases the first time
[05:30] <omega> which usually ends up in the zeroth implementation never getting started
[05:32] <taaz> yeah, doesn't hurt to use a few brain cycles on it first though.
[05:32] thomasvs_ (~thomas@...) joined #gstreamer.
[05:33] <omega> taaz: it's been at least 6mo since you first decided to solve the problem, afaict <g>
[05:34] <ajmitch> hi
[05:34] <wingo> hello
[05:36] <taaz> omega: your point being? ;)
[05:36] <omega> taaz: if we'd spent those cycles on SETI, we'd have new friends by now ;-)
[05:38] Nick change: wingo -> wingo-zzz
[05:49] thomasvs (~thomas@...) left irc: Success
[06:02] <omega> taaz: btw, I learned python, sorta
[06:04] <taaz> excellent
[06:04] <taaz> you are on the path to enlightenment
[06:04] <taaz> or something like that
[06:05] <taaz> do you like it?
[06:05] <ajmitch> python is the one true way
[06:05] <ajmitch> (well, not really...)
[06:11] <omega> doh.
[06:12] <omega> BeNOW is giving an impromptu lecture on the structure of his software, while DJ'ing his stream
[06:12] <ajmitch> ph?
[06:12] <ajmitch> oh?
[06:12] Action: ajmitch wonders if he has the bandwidth to listen
[06:13] <omega> 28k version
[06:13] <taaz> is he praising gst? ;)
[06:14] <ajmitch> or the app - where do i open it in xmms?
[06:14] <omega> ctrl-o
[06:14] <omega> taaz: not covering that part
[06:14] <omega> or ctrl-l
[06:15] <ajmitch> tried that, doesn't do a thing (except wipe my playlist)
[06:15] Action: ajmitch thinks xmms is crap anyway ;)
[06:15] <omega> xmms
[06:15] <taaz> uh hello? ./tools/gstreamer-launch httpsrc location= ! mad ! osssink
[06:15] <taaz> or whatever the url is
[06:16] <omega> taaz: ajmitch doesn't have that much bandwidth <G>
[06:16] <ajmitch> omega: only 128kbps
[06:16] <omega> ajmitch: use benow.ca:8024
[06:16] <ajmitch> using it
[06:17] <taaz> "only" back in my day we didn't have the "k" in kbps...
[06:17] <ajmitch> damn it sounds crap ;)
[06:17] <omega> ajmitch: um, yeah <g>
[06:18] chillywilly (~danielb@...) left irc: "Philosophers and plow men, each must know his part, to sow a new mentality closer to the heart..."
[06:20] <taaz> interesting...
[06:20] <taaz> my dvdplay.py is sorta working again
[06:20] <omega> cool
[06:21] <taaz> must have been some temporary lib glitch that stopped it from doing anything at all last time i tried
[06:21] <taaz> it's pretty jerky while i'm installing 144M of debs and compiling gst at the same time ;)
[06:21] <taaz> where's that QoS code?
[06:21] <omega> um, yeah
[06:24] <taaz> videosink still segfaults though... ahve to use sdlvideosink
[06:28] <omega> taaz: you listening to benow?
[06:28] <omega> we're conversing, sorta
[06:28] <omega> benow is explaining how he uses gst
[06:30] <omega> taaz: you can push python, he doesn't like C
[06:30] <omega> but he does java
[06:35] <taaz> i'm listening to a cd right now actually...
[06:36] <taaz> i don't like compiled languages like java ;)
[06:37] <omega> taaz: listen to benow
[06:38] <taaz> ok, on now
[06:40] <taaz> guess i missed most of it
[06:40] <omega> I have it recorded
[06:42] <taaz> this is amusing ;)
[06:42] <taaz> is this live on airwaves somewhere or just the net?
[06:42] <omega> net
[06:43] <omega> but if we can hook up zchat to it, that'd be very cool
[06:43] <omega> trick is getting really low latency for the primary participants
[06:43] <omega> or use 'voice' type stuff, where the moderator kinda has to listen to pseudo-prerecorded bits
[06:46] <taaz> i can list a few problems with winamp yo
[06:46] <omega> benow.ca #benow <g>
[06:52] <taaz> this is odd
[06:53] <taaz> hearing responses to typed questions... ;)
[06:53] <taaz> ack!
[06:53] <omega> mega latency too <G>
[06:54] <taaz> my gst-launch thing playing this just started spitting out lots of "mad0: average-bitrate = xxxx"
[06:54] <taaz> over and over with xxxx incrementing each print
[06:55] rehan (~rehankhwa@...) joined #gstreamer.
[06:56] <rehan> hi - i have a quick question about dvdsrc
[06:57] <taaz> not my fault
[06:57] <taaz> i swear
[06:58] <rehan> eh?
[06:59] <taaz> is it a bug report question or were you wondering who wrote the most excellent code? ;)
[06:59] <rehan> i'm wondering how to make dvdsrc send out only a particular chapter
[06:59] <taaz> set the chapter property?
[06:59] <rehan> yep - but if i do chapter=2, it _starts_ from chapter 2.
[06:59] <rehan> i only want chapter 2, not 3, 4, 5 etc
[07:00] <taaz> it's not advanced enough to do that right now
[07:00] <rehan> ok
[07:01] <rehan> incidentally, am i dreaming or is dvdsrc missing from cvs
[07:03] <taaz> eh?
[07:04] <rehan> i was looking at the cvs interface on the sourceforge page - it has nothing in the dvdsrc directory
[07:05] <taaz> its in plugins ext/dvdread/ dir. despite my complaints about renaming to that
[07:05] <rehan> oh, ok - that confused me too :)
[07:11] walken (foobar@...) joined #gstreamer.
[07:15] <rehan> do you know of a small linux app that can just rip single chapters of a dvd (to unencrypted vob files?)
[07:16] <rehan> silly question i guess :)
[07:16] <rehan> bye :)
[07:16] <taaz> no, and i don't see the point
[07:16] rehan (~rehankhwa@...) left #gstreamer.
[07:17] <walken> lol
[07:17] <omega> taaz: backups. my matrix disc is losing it, I want a backup
[07:17] <taaz> you have a dvd writer?
[07:18] <omega> no
[07:18] <taaz> you should take better care of your discs ;)
[07:18] <omega> I take perfectly good care of my discs
[07:18] <omega> it's splitting apart radially from the inside out
[07:18] <omega> as are all my DVDs
[07:18] <taaz> huh?
[07:18] <omega> none of my CDs do that
[07:19] <taaz> you see that site about cd explosion?
[07:20] <omega> yeah, didn't get most of the pics because it was /.'d at the time
[07:21] Action: ajmitch recalls a pic of a shattered CD on the LUG list
[07:31] rehan (~rehankhwa@...) joined #gstreamer.
[07:32] rehan (~rehankhwa@...) left irc: Read error: 104 (Connection reset by peer)
[07:49] <walken> hehe
[07:49] <walken> your pc is emitting too much microwaves
[07:50] <walken> that melts your dvd's
[07:50] <walken> that must be it
[07:51] <ajmitch> hehe
[07:51] <taaz> or maybe you're spending too much time watching dvds and not enough hacking on gstreamer
[07:52] <omega> I haven't watched a DVD in months
[07:52] <ajmitch> been too busy with a certain van :)
[07:52] <taaz> you haven't hacked on gstreamer in months either...
[07:53] <omega> taaz: atm I'm building packages to go in cvs for the rc channel
[07:53] <taaz> which means nothing to me <g>
[07:53] <ajmitch> we miss you omega! come back to us! ;)
[07:54] <walken> hmmm
[07:54] <walken> I thought erik was working on gstreamer professionally
[07:54] <omega> not for a while
[07:55] <omega> walken: do you have your tickets to OLS yet?
[07:55] <walken> I have reserved
[07:55] <walken> but I havent reserved my flight yet
[07:55] <ajmitch> omega: so what have you been doing for work?
[07:56] <taaz> me either
[07:56] <omega> ajmitch: nada
[07:56] <ajmitch> ah k
[07:57] <taaz> air travel expensive... and if i don't bother getting ajob this summer i might just drive for fun
[07:58] <taaz> man, this dvd playback is crap
[07:58] <taaz> i must be doing something wrong
[07:58] <walken> taaz didnt you drive already two years ago?
[07:59] <taaz> yeah
[07:59] <walken> :)
[07:59] <walken> guess you like it then
[07:59] <taaz> well, jeff did most of the driving, but i was in the car ;)
[07:59] <walken> it was jeff's car ?
[07:59] <taaz> well, the driving would mean i could pretty easily stick around for debconf too
[08:00] <taaz> jeffs dads car
[08:00] <walken> ok
[08:00] <walken> ok seinfeld time :)
[08:00] arik ([Zuo5R+bLk@...) joined #gstreamer.
[08:02] <taaz> btw, the gst-register printed info will have to be shut off before a release
[08:02] <taaz> i assume it's just debug info for now
[08:03] Action: arik rebuilds _all_ of gst
[08:07] <taaz> isn't that about 1 command?
[08:08] <arik> hehe
[08:08] <arik> yes :-)
[08:08] <arik> well, two
[08:08] <arik> configure && make
[08:09] <arik> you could argue that it's one or two really :-) easy either way
[08:09] <arik> just a bit time consuming
[08:16] <taaz> or it could be a script to clean core and plugins, reautogen them, configure, and make it all in order...
[08:16] <ajmitch> hi arik
[08:16] <arik> hey aj
[08:16] <arik> taaz, true
[08:17] Action: ajmitch has scripts to do that
[08:19] <arik> i don't do it often enough to bother
[08:32] <taaz> hrm... it seems like the sdlvideosink is not using Xv
[08:32] <taaz> maybe... something is screwy
[08:33] <taaz> larger window slows down audio...
[08:33] <taaz> this is sucky
[08:56] <arik> the various videosinks seem to need some fixing
[08:59] <arik> (lt-gst-player:3285): Gdk-CRITICAL **: file gdkmain-x11.c: line 434 (gdk_keyboard_grab_info_libgtk_only): assertion `GDK_IS_DISPLAY (display)' failed
[08:59] <arik> what the heck is that all about?
[09:00] <walken> boink back
[09:00] <arik> wb walken
[09:01] <taaz> dunno
[09:02] <taaz> i think i have a little buglet in the new registry code
[09:02] <arik> my gtk+ seems to be slightly broken, *sigh*
[09:03] <taaz> by that i mean it's totally fscked
[09:03] <arik> wonderful...
[09:04] <taaz> if you -register --gst-plugin-path=.:../gp it stores the . and ../gp relative paths in registry.xml
[09:04] <taaz> which of course means things rnu from other dirs get very confused
[09:04] <taaz> --gst-plugin-path=`realpath .`:`realpath ../gp` works fine though
[09:05] <taaz> (btw, is there a better way to find absolute path than using 'realpath'?)
[09:06] <arik> hmm
[09:06] <arik> gnome-vfs?
[09:06] <arik> :-)
[09:09] <taaz> ahh! videosink fixed in cvs
[09:10] <taaz> sdlvideosink still not using Xv though.. which is odd
[09:10] <taaz> it used to work i think...
[09:10] rehan (~rehankhwa@...) joined #gstreamer.
[09:10] <taaz> maybe i never tried it for dvd... worked for other stuff though
[09:12] gadek (~greg@...) left irc: Read error: 104 (Connection reset by peer)
[09:12] <taaz> yeah! dvds are watchable with this
[09:12] gadek (~greg@...) joined #gstreamer.
[09:13] <taaz> i didn't even put in the bytestream timestamp hack though... hmm. maybe it was all just clocking.
[09:13] <arik> woah
[09:13] <taaz> i dunno how to test this
[09:13] <arik> what are you watching dvd's with?
[09:13] <taaz> dvdplay.py of course
[09:14] <arik> hehe :-)
[09:15] <taaz> sync is definately off a bit
[09:16] <arik> yep
[09:17] <taaz> hard to hack on dvd stuff... i start just watching the movie ;)
[09:17] <arik> hehe
[09:17] <arik> :-)
[09:17] <arik> i'm looking forward to a good gnome dvd player program :-) you can even use peices of gst-player to do it
[09:18] <taaz> ogle works fine doesnt it?
[09:18] <arik> eh
[09:19] <arik> i'm not the biggest fan, i suppose it works
[09:19] <arik> i have a hardware dvd player so i actually haven't used it in awhile
[09:19] <taaz> i've got that problem too
[09:19] wingo-zzz (~wingo@...) left irc: Read error: 104 (Connection reset by peer)
[09:25] <arik> hmm, i've got to get the gdk-keyboard stuff straigtened out
[09:25] <arik> 10 million error messages makes it hard to find the important ones :-)
[09:28] Topic changed on #gstreamer by ChanServ!ChanServ@...: LIVE from GUAD3C!
[09:29] <arik> what's with that?
[09:29] <arik> the topic thing i mean
[09:30] <dtm> arik: chanserv is bonkers.
[09:30] <arik> :-)
[09:30] <arik> woo! i was right :-) errors gone
[09:34] #gstreamer: mode change '+o taaz' by ChanServ!ChanServ@...
[09:35] <arik> ooh
[09:35] <arik> an op :-P
[09:35] <walken> lol
[09:36] <taaz> hrm...
[09:36] thomasvs_ (~thomas@...) left irc: No route to host
[09:37] <taaz> i don't have enough op access to change it i guess
[09:37] Topic changed on #gstreamer by taaz!dlehn@...:
[09:37] <taaz> this is going to be reset again
[09:37] <arik> heh
[09:37] <arik> yeah i'm sure
[09:37] <arik> lame
[09:37] <taaz> need to do /msg ChanServ set #gstreamer topic blah blah
[09:37] <taaz> but:
[09:37] <taaz> -ChanServ(ChanServ@...)- An access level of [25] is required for [SET]
[09:37] <taaz> on #gstreamer
[09:37] <arik> i wonder if i should buy an ibook or a powerbook
[09:37] <arik> ?
[09:39] #gstreamer: mode change '-o taaz' by ChanServ!ChanServ@...
[09:42] omega (~omega@...) left irc: "zzzzzzzz"
[09:44] Action: arik recompiles _everything_ in gnome2
[09:59] walken (foobar@...) left irc: "l8r"
[10:03] rowenc (~rowenc@...) joined #gstreamer.
[10:03] thomasvs_ (~thomas@...) joined #gstreamer.
[10:03] <arik> hey thomasvs_
[10:04] thomasvs_ (~thomas@...) left irc: Remote closed the connection
[10:06] <arik> ok
[10:06] rehan (~rehankhwa@...) left #gstreamer.
[10:07] <dtm> arik: what's up
[10:07] Ridcully (~ask@...) left irc: Connection timed out
[10:08] thomasvs (~thomas@...) joined #gstreamer.
[10:08] <thomasvs> ouch
[10:10] <arik> dtm, nothing much
[10:10] <arik> thomasvs, wb
[10:17] <taaz> commited gst_bytestream_get_timestamp() patch
[10:17] <arik> Makefile:520: *** missing separator. Stop. <- wtf?
[10:17] <taaz> dunno if it really makes much difference
[10:18] <taaz> reconfigure...
[10:18] <taaz> ?
[10:18] <arik> i just reconfigure'd
[10:18] <taaz> dvd sync looks good now though
[10:18] <arik> in fact i just removed the whole dir, rechecked out, then reconfigured
[10:18] <taaz> fix the Makefile.am then ;)
[10:18] <arik> heh
[10:18] <arik> it worked yesterday
[10:19] <taaz> so wow, now i can actually make some progress with this dvd stuff
[10:19] <arik> hehe :-)
[10:19] <arik> neet
[10:19] <taaz> not tonight though
[10:19] <taaz> nap time
[10:19] Nick change: taaz -> taazzzz
[10:20] <arik> night :-)
[10:21] gadek (~greg@...) left irc: Read error: 110 (Connection timed out)
[10:21] <arik> wtf?!?
[10:22] <thomasvs> taazzzz: so we can watch dvd's now ? cool !
[10:22] <arik> thomasvs, any idea about this error?
[10:22] <arik> Makefile:520: *** missing separator. Stop. <- wtf?
[10:24] <arik> @INTLTOOL_SERVER_RULE@ <- line 520, should that be replaced by something in Makefile.am or configure.in or something?
[10:25] <arik> hmm
[10:25] <arik> actually maybe i know what's up
[10:26] <thomasvs> arik: dunno, will look into it
[10:27] <arik> thomasvs: no bother
[10:27] <arik> i fixed it
[10:27] <thomasvs> ok
[10:27] <thomasvs> what was it ?
[10:27] <arik> i had forgotten to install intltool :-)
[10:27] mi_food (~michael@...) joined #gstreamer.
[10:28] <thomasvs> arik: and it doesn't say that when you run autogen.sh?
[10:28] <ajmitch> hi
[10:28] <thomasvs> hey aj
[10:28] <arik> it doesn't
[10:28] <arik> it should
[10:28] <arik> but it didn't because
[10:28] <arik> i was in a different prefix
[10:28] <arik> and intltool _was_ installed in my regular prefix
[10:34] davi (davi@...) joined #gstreamer.
[10:34] <arik> thomasvs, i got something _sort of_ working in the view stuff, but now i'm having to recompile all of gnome2 so it's on hold for a bit
[10:35] Company (~Company@...) joined #gstreamer.
[10:36] <arik> hey Company
[10:36] <Company> hi
[10:37] <arik> Company, you never did get a chance to look at that eos problem did you? :-)
[10:37] Nick change: davi -> daviJob
[10:38] <Company> not really
[10:40] <arik> ah well
[10:40] <arik> just not much point working on the playlist more till that's fixed
[10:41] <Company> I didn't know what the problem was when I was at it and you weren't there :)
[10:41] <arik> :-)
[10:41] <Company> and I was distracted by the indentation :)
[10:41] <arik> hah
[10:41] <arik> leave the indentation alone :-P
[10:41] <arik> it's gnome standard
[10:43] <Company> no, it's not
[10:43] <ajmitch> hey Company
[10:44] <Company> you have 1 tab = 2 spaces, right?
[10:44] <thomasvs> arik: why do you need to recompileall of gnome ?
[10:44] <arik> no,d don't think so
[10:44] <arik> i'm using 8 space tabs afaik
[10:44] <arik> thomasvs, cause i had to nuke my gnome2 dir to get gtk to work right, long story, not very interesting :-)
[10:44] <thomasvs> is there some way to give vim hints for indentation ?
[10:45] mi_food (~michael@...) left irc: Read error: 113 (No route to host)
[10:45] <thomasvs> arik: yeah, but I mean, why aren't you using rpms for it ?
[10:45] <arik> thomasvs, not sure, i can add those /* -*- Mode: C; indent-tabs-mode: t; c-basic-offset: 8; tab-width: 8 -*- */ lines to it if you want
[10:45] <arik> thomasvs, cause the rpms suck :-)
[10:45] <arik> thomasvs, i couldn't get nautilus2 to work with the rpm's is the more correct answer
[10:46] <Company> arik: every source file (GStreamer, gtk, gnome, whatever) has 2 spaces indent and 1 tab = 8 spaces
[10:46] <Company> only the player is different :/
[10:47] <arik> Company, 1 tab = 8 spaces for me, and looks like all the other code except what you had in gstplay.[c,h], something is confused here
[10:49] <arik> /* -*- Mode: C; tab-width: 8; indent-tabs-mode: 8; c-basic-offset: 8 -*- */ <- is at the head of every gnome .c file
[10:50] Action: ajmitch wonders why indentation is so divisive :)
[10:50] <Vakor> The Right Way is 1 tab = 4 spaces, and 4 spaces (or 1 tab) indents.
[10:50] <arik> ajmitch, no idea :-)
[10:50] <arik> Vakor, not The Right Way according to the GNOME style guidlines
[10:51] <Vakor> No it isn't. Gnome is Wrong.
[10:51] <Company> but nobody uses "c-basic-offset=8"
[10:51] <Company> everybody uses 2 there
[10:51] <arik> Company, i pasted that from nautilus-main.c
[10:52] <Vakor> The Right Way For Gnome is according to the Gnome style guidlines, sure. But I'm talking much more generally about The Right Way.
[10:52] <arik> For core GNOME code we prefer the Linux kernel indentation style. Use 8-space tabs for indentation.
[10:52] <arik> that's from the gnome guidelines
[10:52] <arik>
[10:53] <arik> c-set-style in emacs to linux
[10:53] <Vakor> I can deal with that when I need to, but I really dislike 2 space indents. Silly gnome people.
[10:53] <arik> thomasvs, there is a way to have vim do hints correctly
[10:54] <arik> Vakor, well, i'm not arguing that it's better in a perfect world, i'm just saying that what i was doing was correct gnome coding and what company had wasn't
[10:54] <arik> it's really not a very big deal
[10:54] <Company> well, looks like they do it my way :)
[10:54] <Vakor> arik: I didn't see the start of the conversation, sorry.
[10:55] <Company> and it's different then the whole rest of GStreamer
[10:55] <arik> Vakor, that's cool
[10:55] <Vakor> At some point I'll hopefully have time to write a largish set of gstreamer plugins. They have indentation/tabs done The Right Way, not The GNOME Way :-)
[10:55] <arik> Company, yes, but gtk+ is wrong
[10:56] <arik> at least, wrong according to GNOME guidlines
[10:56] <arik> er guidelines
[10:56] <ajmitch> Vakor: the right way is to run your code thru cobfusc ;)
[10:56] <arik> hehe
[10:58] <Company> it's actually important for me how we do it, because I don't like swiching my setup any time I do the player...
[10:58] <arik> Company, well, same here
[10:58] <arik> i have a fairly specific coding style
[10:58] <arik> but i don't want this to be a huge thing
[10:59] <Vakor> ajmitch: screw it. No more C indentation fighting, I'm writing stuff in FORTRAN from now on. No exceptions will be made
[10:59] <arik> Vakor, hehe
[10:59] Action: ajmitch likes python
[10:59] <Vakor> I had to do a whole lot of fortran coding recently...
[11:00] <arik> ajmitch, this is one thing that is good about python, no arguments there
[11:00] <arik> Company, what are you proposing?
[11:00] <ajmitch> python isn't good for everything tho
[11:00] <arik> true
[11:02] <Company> I think GStreamer should at least be consistent in style
[11:02] <thomasvs> can someone explain to me when it's useful to use real tabs ?
[11:02] <arik> Company, that's impossible given that we use several languages and types of programming
[11:02] <thomasvs> I think tabs should be destroyed
[11:02] <arik> Company, also gst-player is a GNOME app, the rest of gstreamer isn't
[11:02] <thomasvs> and why is it useful to have more than two space indents ?
[11:02] <arik> thomasvs, read the guidelines :-)
[11:03] <thomasvs> arik: where are they at ?
[11:03] <arik>
[11:03] <Vakor> thomasvs: real tabs generally aren't useful. 4 space indents are much clearer and cleaner looking,.
[11:03] <thomasvs> Vakor: ok, but what do they offer more than two space indents ?
[11:03] <arik> Vakor, you are arguing for 4? heh, company want's 2, i want 8, you want 4, 6 would be a compromise :-P
[11:03] <arik> thomasvs, 2 is cluttered
[11:04] <Vakor> thomasvs: clarity. 2 is very cluttered, as arik says
[11:04] <thomasvs> well, if gnome says 8, I'd go with 8, but their one argument sucks really badly
[11:04] <Vakor> arik: 8 space _indents_? I haven't seen anyone do that in many years.
[11:04] <thomasvs> if your indentation goes too far to the right, then it means your function is designed badly and you should split it to make it more modular or re-think it.
[11:05] <thomasvs> that's just silly - every gnome lib I look at has lines going well beyond a 80 char limit
[11:05] <thomasvs> which is in my book much worse than using 8 space indents
[11:05] <arik> thomasvs, heh, gst-player doesn't afaik, i try anyway
[11:05] <thomasvs> so then why is 4 space indent clearer than 2 space ? you all wear really big glasses or something ? ;)
[11:05] <arik> it's a clarity thing
[11:05] <arik> it spaces stuff out more
[11:05] <arik> makes it easier to read
[11:05] <thomasvs> hm
[11:06] <thomasvs> well it's about as pointless an argument as between 4 and 8 I guess ;)
[11:06] <Vakor> thomasvs: it doesn't cram stuff together as much.
[11:06] <thomasvs> 1 indent, that would be hard on my eyes
[11:06] <thomasvs> 2 is fine
[11:06] <arik> thomasvs, afaik there are no lines over 80 in gst-player, if i'm wrong let me know, i'll fix it
[11:06] <Vakor> I was working on some code the other day that used 3 space indents. That was just weird :-)
[11:06] <Company> at least don't make me rethink every time I open a new file
[11:06] <thomasvs> Company: that's why I would like editor hints in each file ;)
[11:06] <thomasvs> Vakor: heh
[11:07] <Company> fine, please include anjuta hints
[11:07] <arik> Company, well, if i add the lines to the top your editor should theoretically auto change it and then you don't need to think
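The per-file "editor hints" arik and thomasvs are talking about are modelines placed in a comment at the top of each source file. A sketch of what such a header could look like in a C file, assuming an 8-space-tab style; the Emacs form matches the GConf example quoted later in this log, and the Vim form is the equivalent for `modeline`-enabled Vim setups:

```c
/* -*- Mode: C; tab-width: 8; indent-tabs-mode: t; c-basic-offset: 8 -*- */
/* vim: set ts=8 sw=8 noexpandtab: */
```

Editors that honor modelines apply these settings when the file is opened, so contributors don't have to reconfigure per project.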
[11:07] <arik> anjuta?
[11:07] <thomasvs> you know, monkey-sound is really bad at it
[11:07] <arik> what's anjuta?
[11:07] <thomasvs> they use 8 space indents too
[11:07] <Company> see :)
[11:07] <arik> thomasvs, yeah hadess and jorn are bad at that
[11:07] <Vakor> I edit with ed, please make sure ed can figure it out automatically too.
[11:07] <Company> anjuta = gnome ide
[11:07] <thomasvs> but then they name stuff MonkeyMediaMixerPrivate
[11:07] <thomasvs> I hate that
[11:07] <arik> Company, oh wow, i didn't know it had a new name
[11:07] <thomasvs> if your lib is called MonkeyMedia, then don't put it in front of every single identifier
[11:07] <thomasvs> please shorten it to MM ;)
[11:07] <arik> thomasvs, no you have to
[11:08] <arik> thomasvs, well you could do that, i don't
[11:08] <Vakor> thomasvs: it's called "not polluting the global namespace"
[11:08] <arik> everything is GstPlayer
[11:08] <arik> exactly
[11:08] <thomasvs> Vakor: I know, but it's just too damn much
[11:08] <arik> you end up fucking everything up in a c world
[11:08] <arik> if we all used c++ (ick) it would be fixed
[11:08] <thomasvs> I say if you wanna use my lib then respect my namespace ;)
[11:08] <Company> thomasvs: don't use MM, mikmod does...
[11:08] <arik> Company, does anjuta use emacs as its editor component or does it use that other weird one?
[11:08] <thomasvs> I don't pee in your house either
[11:08] <Vakor> thomasvs: shortening to MM is a bit much - lots of things could abbreviate to that. But some abbreviations would be reasonable.
[11:08] <Company> and monkeysound will play mikmod some day ;)
[11:08] <arik> Company, hehe
[11:09] <thomasvs> Well, mikmod should just use MikMod
[11:09] <thomasvs> btw, isn't mikmod in c++ anyway ? ;)
[11:09] <Vakor> arik: yes, but if we used C++, everything would be slow and non-deterministic :-)
[11:09] <Company> I think it's pure C
[11:09] <arik> thomasvs, it's another gnome naming convention i believe
[11:09] <arik> Vakor, c++ is lame :-)
[11:09] <thomasvs> oh well ;)
[11:09] <Vakor> (well, deterministic I suppose, it'd do the same thing every time. It's just damn hard to figure out what that'll BE without just running it and seeing :-)
[11:09] <arik> Vakor: hehe
[11:10] <Vakor> arik: tell me about it. I'm doing a lot of stuff in C++ this year, it sucks. And the other 2 people working on it (who chose the language over my objections) are Not Very Good Programmers (though they know C++ a bit better than me, they can't actually code or design for shit)
[11:10] <arik> Vakor, oh god, bad c++ programmers are so very very bad
[11:11] <ajmitch> hehe
[11:11] <Company> gnome is very consistent: /* -*- tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ (taken from GConf)
[11:11] <Company> :)
[11:11] <ajmitch> arik: what about bad Java coders?
[11:11] <arik> Company, ick, havoc isn't using 8 spaces either? wonderful
[11:11] <Vakor> arik: my bits are _mostly_ thin C++ wrappers around code which is basically just plain C :-)
[11:11] <arik> Company, anyway, in general the person who started the project has mostly chosen the indent style and then maintainers have either kept it or changed it
[11:12] <arik> Vakor, :-)
[11:12] <arik> ajmitch, heh, no idea, but i bet it's bad
[11:12] <Vakor> arik: apart from the bits in sh, perl, and x86 assembly. Oops :-)
[11:12] <arik> but, in _theory_ it's 8 space tabs, you can argue with federico if you really care
[11:12] <thomasvs> arik: he does in libwnck and stuff
[11:13] <arik> thomasvs, havoc? good deal
[11:13] <arik> federico wrote the GNOME guidelines
[11:13] <arik> i learned my style from him when i worked on eog
[11:13] <arik> he is fanatical
[11:13] <arik> he turns down patches cause of code style
[11:13] <thomasvs> well I can kinda see it make sense for the kernel
[11:13] <thomasvs> not for gnome though
[11:13] <thomasvs> gnome sucks !
[11:13] <arik> heh
[11:13] <arik> thanks :-)
[11:14] <Company> so, could we at least keep it consistent throughout GStreamer, please?
[11:14] <Vakor> gnome is as good as kde, or maybe even better!
[11:14] <Company> I'm not going to submit GNOME patches anyway ;)
[11:15] <arik> Company, no we can't, cause everyone in gst codes differently, we would all have to agree, and that seems _extremely_ unlikely
[11:15] <thomasvs> arik: I thought we use two space indent everywhere in gstreamer ?
[11:15] <Vakor> look, let's just all agree: 7 space tabs, 11 space indents. ok?
[11:15] <arik> thomasvs, are you sure? cause i doubt it's _only_ gst-player that is different, i could be wrong of course
[11:16] <thomasvs> ok, could someone explain to me please what the "x space tabs" setting does exactly ?
[11:16] <thomasvs> arik: yeah, pretty sure ;)
[11:16] <Company> arik: of all source files in gst I've looked at, only the player was different
[11:16] <Vakor> thomasvs: most (or many) editors, when you hit the tab key, insert a number of spaces instead. As well as that, all editors will display a tab character if one exists in the file as taking up a certain number of spaces. That setting changes both of those things.
[11:17] <arik> well, that is prob because erik likes 2 space tabs
[11:18] <Company> I'm going to write an email to gnome-hackers and gst-devel now
[11:18] <Company> just for the fun of it :)
[11:18] <arik> oh good god
[11:18] <arik> Company, please, please please please don't start the indent flame war again
[11:18] <thomasvs> Vakor: ok, but apart from Makefiles, shouldn't tabs always be automatically converted to spaces anywway ?
[11:18] Action: arik has his gnome-hackers delivery turned off, but you will really piss people of
[11:18] <arik> er off
[11:18] <Company> thomasvs: most editors replace 8 spaces with 1 tab
[11:19] <Company> which makes files a lot smaller I guess
[11:19] <arik> Company, ok, if 8 space tabs are what most editors do, what are you arguing against?
[11:19] <Vakor> thomasvs: there's a good argument that they SHOULD be, but it's a _fact_ that they generally aren't (for vim, set expandtab for that behaviour)
[11:19] <Vakor> thomasvs: and you still have to have that setting to decide how many spaces to convert a tab to :-)
[11:20] <arik> of course, it doesn't help anything that i personally loathe 2 space tabs
[11:20] <arik> i find them ugly and hard to read
[11:20] <Company> arik: every new offset takes another 10% of my available space
[11:20] <Company> if I use 8 spaces offsets
[11:20] <Vakor> arik: 2 space tabs or 2 space indents? You seem to be using the terms interchangeably, which is a bit confusing
[11:20] <arik> Company, i'm aware, if you code well you have short lines, and thus it's not a problem
[11:21] <Company> arik: gst_play_add_file_from_uri is not short
[11:21] <arik> Vakor, 2 space indents
[11:21] <thomasvs> arik: well that's not a good argument. everyone needs to use an if in a while sometimes
[11:21] <Vakor> yeah, I agree then.
[11:21] <Company> (this was a guess, but there are such names)
[11:21] <arik> Company, no it's not, function names are not supposed to be short, they are supposed to be clear
[11:21] <Company> arik: yes, but thats 26 chars
[11:22] <arik> Company, well, it doesn't exist
[11:22] <thomasvs> hrmph ;)
[11:22] <Vakor> My general feeling is this (based on the facts that I use 4 space indents, and stick to 80 column lines whenever possible): if the start of the content on a line is PAST column 40, there's probably a design problem with the code in that function
[11:22] <thomasvs> those tabs only make it more complicated
[11:22] <Company> with 4 offsets I'm at 4*8 + 26 = 58 chars in that line
[11:22] <arik> Company, it's gst_play_new_from_uri
[11:22] <arik> Vakor, so true, too many indentations means confusing code
[11:22] <thomasvs> Vakor: hm, sounds reasonable
[11:23] <Vakor> I don't stick to that as hard policy, but I generally start to rethink my code when I hit that point
[11:23] <arik> Company, it's not an argument that can have a winner, it's a preference, i was simply pointing out that my style isn't _just_ a preference, it's also supposedly the GNOME standard
[11:23] <arik> Company, your style may be the gstreamer standard
[11:24] <Vakor> I stick to the standard used in the stuff I code on most often (icecast and related tools, and vorbis-tools). Of course, to a large degree those use that standard because I wrote them or large parts of them :-)
[11:24] <arik> Vakor, heh
[11:25] <arik> Vakor, an earlier point i made precisely
[11:25] <arik> Vakor, i wrote almost all of the current gst-player (besides the gstplay stuff and the videowidget and those were the files that were indented differently) and i use 8 space indents
[11:26] <arik> Company, actually i only changed the gstplay.[c,h] indentation cause you hadn't worked on it at all in ages and i needed to add stuff and i was just hacking about
[11:26] <Company> well, every indentation is ok for me
[11:27] <Company> but it has to be consistent
[11:27] <arik> you just want it to not change
[11:27] <Company> yeah
[11:27] <arik> but i can't change all of the rest of gst
[11:27] <arik> and i don't want to change gst-player
[11:27] <arik> so i'm not sure what to do :-)
[11:28] <arik> if you are back working on gstplay you can change it however you like, i would still _rather_ have 8 spaces but if you own the code then i can't complain i suppose
[11:29] <arik> like i said i only changed it cause it had been _ages_ since you had touched it and i was working on it
[11:29] <Company> nobody owns any code here
[11:29] <arik> Company, by own i meant maintain
[11:29] <Company> we just have to agree on something
[11:29] <arik> be the principle contributor too
[11:29] <Company> nah, if every maintainer has his own style, plugins would be a mess
[11:29] <arik> Company, ok... but i'm not sure how we can agree on anything?
[11:29] <arik> heh
[11:29] <arik> true
[11:29] <Company> I'm not sure either :)
[11:29] <arik> :-)
[11:30] <arik> i mean we could have an endless argument on gst-devel but it's just gonna be that
[11:30] <arik> an endless argument
[11:30] <arik> cause many people feel religious about this stuff
[11:30] <Company> somebody should force a policy on that
[11:31] <arik> Company, but who? i don't like forced policy much either
[11:31] <Company> but that'll probably never happen :(
[11:31] <arik> prob not :-)
[11:31] <Company> s/forced policy/standards/ ?
[11:31] <arik> well i suppose
[11:31] <arik> but as you pointed out
[11:31] <arik> even with GNOME standards
[11:31] <arik> not everyone follows them :-)
[11:31] <arik> we have a lot of strong personalities in this community
[11:31] <Company> well, make GNOME cvs follow them at least :)
[11:31] <arik> heh
[11:32] <arik> it's been tried before
[11:32] <Company> but whatever, I think I have to rethink every time I code somewhere else :/
[11:32] <arik> it's true
[11:32] <arik> it sucks
[11:32] <arik> it happens to all of us :-(
[11:32] <Company> or I just don't code the player anymore ;p
[11:33] <arik> Company, well, that seems like a bad idea to choose :-)
[11:34] <Company> ok, now I have officially protested against your code style, at least everybody knows I don't like it :)
[11:34] <arik> cause if you don't do it, the video widget, gstplay, and spider are unlikely to get better :-)
[11:34] <arik> Company, alright :-) officially noted :-)
[11:34] <Company> if anybody complains about it I can always say "don't ask me" ;)
[11:34] <arik> hehe
[11:34] <arik> yep :-)
[11:34] <arik> point em at me
[11:35] <Company> btw: is the clocking stuff working?
[11:36] <arik> i think taaz was fixing that earlier
[11:36] <arik> so prob :-)
[11:36] <Company> I mean a?v sync
[11:36] <Company> a/v even
[11:37] <arik> yeah
[11:37] <arik> he was working on it
[11:37] <arik> he said he had it working much better
[11:37] <arik> at least for dvd's
[11:41] <arik> Company, when do you think you're gonna have time to work on gstreamer and gst-player again anyway? just curious :-)
[11:45] Ridcully (~ask@...) joined #gstreamer.
[11:45] <Company> arik: I'm working on it from time to time
[11:45] <Company> it's a question of motivation for me
[11:45] <arik> you are not motivated?
[11:45] <arik> i thought you wanted a kick ass media player :-)
[11:46] <arik> actually i've gone through unmotivated periods
[11:46] <arik> i didn't touch the player code for about 6 months
[11:52] apoc_ (~apoc@...) joined #gstreamer.
[11:52] <apoc_> yo
[11:52] <arik> hi
[11:53] <apoc_> hi arik
[11:53] <apoc_> have you tried the snapshot plugin ?
[11:53] <arik> apoc_, i was gonna soon, i want to have that supported in gst-player
[11:54] <apoc_> arik : ok
[11:58] <thomasvs> ok
[11:58] <thomasvs> so do you want me to bring up the indentation discussion then ? ;)
[11:59] <arik> what do you mean?
[12:00] <arik> thomasvs, ?
[12:03] <arik> i don't think the indentation discussion needs to happen
[12:03] <arik> i think everything is ok now
[12:07] <Company> it's not "ok", we're just not fighting about it yet
[12:07] <arik> hehe :-)
[12:07] <arik> well, that's more ok then
[12:07] <arik> at least :-)
[12:18] <arik> omg
[12:18] <thomasvs> ok
[12:18] <thomasvs> ;)
[12:18] <arik> i think i'm almost done rebuilding :-)
[12:19] <arik> woo
[12:19] <arik> thomasvs, the main reason i don't use the rpm's is that i like to have all my gnome2 stuff in a separate prefix
[12:27] <arik> woo!
[12:27] <arik> i got rid of the gtk warnings
[12:27] <arik> and the esound warnings
[12:27] <arik> sweet :-)
[12:54] Company (~Company@...) left irc: Remote closed the connection
[12:54] <arik> ok
[12:54] <arik> i have no idea what's going wrong
[12:55] <arik> and i'm off to bed :-)
[12:55] <arik> later all
[12:55] arik ([Zuo5R+bLk@...) left #gstreamer.
[13:01] billh (billh@...) left irc: Remote closed the connection
[13:44] wingo (~wingo@...) joined #gstreamer.
[14:33] daviJob (davi@...) left irc: "Client Exiting"
[14:36] <thomasvs> I need to go over an int array describing pairs of ints and copy them to a new array, and decide for each pair if it should be copied or not.
[14:36] <thomasvs> how do I do this in the most clean way if I can't decide in advance how many of the original pairs will be copied ?
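One clean answer to thomasvs's question, when the kept count isn't known in advance, is to allocate for the worst case (every pair kept), fill while counting, and optionally shrink afterwards. A sketch in C; the `keep_pair` predicate here is purely hypothetical (the log never says what the filter condition was):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical predicate: keep pairs whose sum is even. */
static int keep_pair(int a, int b)
{
    return (a + b) % 2 == 0;
}

/* Copy the pairs from src (n_pairs pairs, i.e. 2*n_pairs ints) that
 * satisfy keep_pair into a newly allocated array; store the kept pair
 * count in *n_out.  Allocating the worst case up front avoids a second
 * counting pass or repeated reallocs. */
static int *filter_pairs(const int *src, size_t n_pairs, size_t *n_out)
{
    int *dst = malloc(2 * n_pairs * sizeof(int));
    size_t kept = 0;

    if (dst == NULL)
        return NULL;
    for (size_t i = 0; i < n_pairs; i++) {
        if (keep_pair(src[2 * i], src[2 * i + 1])) {
            dst[2 * kept]     = src[2 * i];
            dst[2 * kept + 1] = src[2 * i + 1];
            kept++;
        }
    }
    *n_out = kept;
    return dst;  /* caller may realloc(dst, 2 * kept * sizeof(int)) to trim */
}
```

The trailing `realloc` to trim the excess is optional; for short-lived arrays the slack rarely matters.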
[15:51] Company (~Company@...) joined #gstreamer.
[15:54] Action: femistofel tries to compile gst from cvs on debian machine...
[15:54] Nick change: femistofel -> artm
[15:54] Action: artm tries to compile gst from cvs on debian machine...
[15:54] <artm> testsuite/plugin/Makefile.am:21: invalid unused variable name: `static_SOURCES'
[15:55] <thomasvs> artm: comment that line in Makefile.am
[15:56] <artm> OK. but how come it's there? is it again wrong-version-of-automake issue?
[15:57] <artm> it went through this time
[15:57] <thomasvs> artm: it's there because wingo commented out static from the tests without rerunning it
[15:58] <artm> i c
[15:59] <artm> don't see glib-2.0 in debian... arrgghh
[16:00] Company (~Company@...) left irc: Remote closed the connection
[16:04] <artm> hmm... found ;)
[16:27] <artm> how do i autogen.sh gst-plugins using uninstalled gstreamer?
[16:28] <artm> so far it says:
[16:28] <artm> checking for gstreamer >= 0.3.4... Package gstreamer was not found in the pkg-config search path.
[16:28] <artm> Perhaps you should add the directory containing `gstreamer.pc'
[16:28] <artm> to the PKG_CONFIG_PATH environment variable
[16:28] <artm> No package 'gstreamer' found
[16:28] <artm> configure: error: no GStreamer found
[16:28] <artm> configure failed
[16:29] <thomasvs> either set PKG_CONFIG_PATH
[16:29] <thomasvs> or use --with-pkg-config-path=/home/artm/gst/cvs/gstreamer or something similar ;)
[16:30] <wingo> has that static thing not been fixored? i fucked that one up
[16:32] Nick change: taazzzz -> taaz
[16:32] <wingo> mornin, taaz.
[16:32] <taaz> morning
[16:33] <taaz> -register --gst-plugin-path=.:../gp doesn't work
[16:33] <taaz> you need absolute paths
[16:33] <taaz> maybe it should check that paths start with /
[16:34] <wingo> prolly so, yo
[16:34] <thomasvs> or resolve them
[16:35] <taaz> well, it does work...
[16:35] Nick change: daviZZz -> davi
[16:35] <taaz> it's just it stores the relative paths
[16:35] <thomasvs> taaz: so how good is the dvd stuff now ?
[16:35] <taaz> i do --gst-plugin-path=`realpath .`:`realpath ../gp`
[16:36] <taaz> thomasvs: it's still rather raw
[16:36] <taaz> thomasvs: you give it a title/chap/angle to play and it seeks there to start
[16:37] <taaz> thomasvs: it's probably not super useful as an end application
[16:37] <taaz> thomasvs: but it's a good start to have it working in gstreamer like this
[16:37] <thomasvs> yeah, e need it
[16:37] <thomasvs> we I mean
[16:37] <thomasvs> uh oh
[16:37] <thomasvs> I screwed my panel this time
[16:38] <taaz> plus of course i have the pipeline built from python, so you know it's super cool from the start <g>
[16:40] <wingo> python junkie.
[16:42] <wingo> fixored the static issue
[16:42] <wingo> we really need to get wtay to redo the plugin test suite
[16:42] <wingo> because i can't figure out what it's supposed to do, much less the right way to do it
[16:44] <thomasvs> yeah
[16:44] <thomasvs> how are we going to do that though ? ;)
[16:45] <wingo> we just have to poke wtay into doing it
[16:45] <thomasvs> taaz: heh, yeah - how are the bindings coming along ?
[16:45] <thomasvs> I wonder why he isn't here anyway
[16:45] <wingo> because he's a lazy bum? ;)
[16:50] <taaz> bindings doing ok for hooking up pipelines
[16:50] <taaz> actually writing plugins will take alot more work
[16:54] <thomasvs> whoa, you want to write plugins in python as well ?
[16:54] <taaz> i already can
[16:54] <thomasvs> funky
[16:54] <taaz> just not very well
[16:54] <taaz> didn't you see my great result?
[16:55] <thomasvs> where ?
[16:55] <taaz> identity element in python is 2x as fast as the C one!
[16:55] <taaz> this of course makes no sense, but nevertheless, i'm proud of it ;)
[16:55] <thomasvs> maybe it just doesn't pass buffers at all ;)
[16:55] <thomasvs> I remember that post though
[16:56] <thomasvs> hm, I'm trying ccache now but I'm not seeing it improving stuff
[16:56] <thomasvs> not sure if I'm running it the right way
[16:56] <taaz> i dunno... i'll look at it sometime. probably something like the C identity is actually doing more checks or something
[16:56] <taaz> ok, so here's a problem
[16:56] <taaz> run gst-register --gst-plugin-path=...
[16:56] <taaz> it finds 100+ plugins
[16:57] <taaz> now run "gst-inspect"
[16:57] <taaz> it rechecks with the default path and only sees the default 18 or whatever
[16:57] <taaz> that kind of sucks
[16:57] <thomasvs> gst-inspect probably hasn't been updated right
[16:58] <taaz> it rewrites the .xml too...
[16:58] <taaz> i'm also getting various plugins reporting failures to load gst lib plugins
[17:01] <taaz> actually, i can't get inspect to work at all with plugin path...
[17:01] <thomasvs> hm, next time I'm going to stand my ground in doing this sort of change in a branch
[17:02] <taaz> err... maybe it did, nevermind
[17:05] <taaz> heheh, nice, DVD with edgeTV effect :)
[17:05] <thomasvs> cool, I want to see that
[17:05] <taaz> my peecee can't quite handle the load though :(
[17:05] <thomasvs> can you do a quick post to the list on how to do it ? so I can try it out this weekend ?
[17:06] <taaz> ok
[17:06] <taaz> well, i'll probably have to check in the gst-python stuff i've been doing
[17:07] <taaz> it's 0.3.4 based now, my stuff is cvs based
[17:07] <taaz> this would be totally kickass if it were realtime
[17:07] <thomasvs> man, gconf is really nice
[17:07] <thomasvs> if what were realtime ?
[17:08] <taaz> edgeTV + dvd == 120% cpu
[17:08] <thomasvs> oh, right ;)
[17:09] <taaz> this is fixable for me actually
[17:09] <taaz> and a nice way to show how cool gstreamer is
[17:09] <taaz> i'm on SMP system and its just one processor that's maxed out
[17:09] <taaz> i can probably rearrange the elements into threads such that it balances out ;)
[17:10] <thomasvs> ooh
[17:10] <thomasvs> that would be pretty cool, yes
[17:10] <thomasvs> it's probably not going to work however, but if SMP works, that'd be really really nice
[17:10] <taaz> its in multiple threads now
[17:11] <taaz> main + video + audio
[17:11] <taaz> its just the video is like 90%+ of the processing time
[17:11] <taaz> (random guess)
[17:12] <taaz> doing mpeg2 on one processor and audio+vid efx on the other would probably work fine.
[17:13] <taaz> there are of course horrible inefficiencies with cache things and so on, but if end result is smooth video, who cares ;)
[17:14] <taaz> dvd thing kinda messed up, i can only start from 1 place
[17:23] Action: taaz pretends his machine didn't just crash
[17:29] Nick change: taaz -> taaz-away
[18:33] Uraeus (~cschalle@...) joined #gstreamer.
[18:34] <Uraeus> yo
[18:34] <thomasvs> yo uraeus
[18:35] <thomasvs> anyone able to reach advogato ?
[18:35] <Uraeus> thomasvs: not me
[18:35] <Uraeus> thomasvs: wonderful day today in Oslo, blue skies and pleasant temperature (15-16 degrees)
[18:35] <thomasvs> hm, a bit cloudy here
[18:35] <thomasvs> you know, coding on the panel is really frustrating ;) when it crashes you're pretty helpless
[18:36] <Uraeus> thomasvs: even if you activate the ORBit2 built in trace?
[18:36] <Uraeus> I took the IELTS test today, think it went well even if i was a bit nervous during the oral test
[18:38] Action: wingo is back from a lazy lunch...
[18:43] <Uraeus> wingo: that is how lunches are supposed to be?
[18:44] Nick change: wtay-zZz -> wtay
[18:44] <wtay> yo
[18:45] <Uraeus> morning wtay
[18:46] <wtay> norning
[18:46] <wtay> er
[18:46] <wingo> norning
[18:46] <thomasvs> Uraeus: what is that, ORBit2 built in trace ?
[18:46] <thomasvs> hey wtay
[18:47] <thomasvs> wtay: do you think you could go over some of the plugin tests ? ;)
[18:47] <Uraeus> thomasvs: hmm, don't read the summaries i see :)
[18:47] <thomasvs> Uraeus: religiously, but I fail to recall them when I need them
[18:48] <wtay> thomasvs: yes
[18:49] <wingo> wtay: testsuite/plugin needs attention ;)
[18:49] <wtay> I'll check it out
[18:50] <wingo> thanks, yo
[18:50] <Uraeus> thomasvs: (see the how to enable debugging thing)
[18:56] <wingo> wtay: i'm looking at setting _WRITEABLE and _READABLE when 'location' is set on xmlregistry... does that make sense?
[18:57] <wtay> wingo: yes, that's the idea
[18:57] <wtay> wingo: ideally I would like to have a gobject property for the location too
[18:58] <wingo> yes, that's what i'm doing ;)
[18:59] Action: wtay has to rebuild everything
[19:07] Action: wingo wonders why gstregistry isn't a gstobject
[19:10] <Uraeus> wtay: when I installed my new core rpm yesterday it rebuilt the registry upon instalation and it looked really cool :)
[19:10] <wingo> heh
[19:11] <wtay> Uraeus: doh
[19:11] <wingo> wtay: actually, i guess that's a serious question, why isn't gstregistry a gstobject? i like using GST_FLAGS_SET et al...
[19:11] <wtay> wingo: pluginfeature is a gobject too now
[19:11] <Uraeus> wtay, wingo: hey such thing as looking cool counts :)
[19:13] <wingo> Uraeus: yeah, i'd like to make it so you don't usually have to run gst-register
[19:14] <wtay> Uraeus: you can make a GUI for it now :)
[19:15] <Uraeus> wtay: I can port gconf-edit and call it greg-edit :)
[19:16] <wingo> haha, a gui for ldconfig :-P
[19:16] <wtay> ah, static works now
[19:19] Company (~Company@...) joined #gstreamer.
[19:19] <Uraeus> howdy Company!
[19:19] <Company> hi
[19:19] <wingo> yo
[19:20] <wtay> yo
[19:22] <wtay> how do I link a .so to an executable in Makefile.am?
[19:23] <wingo> -lfoo for libfoo.so
[19:23] <wingo> in LDFLAGS
[19:23] <wtay> in LDFLAGS?
[19:23] <wtay> oh
[19:23] <wingo> i think, it could be in LIBS
[19:24] <wtay> I tried LIBS, LIBADD, LDFLAGS. the first two do nothing (nothing added to gcc) the last one adds the .al files..
[19:25] <wingo> odd...
[19:25] <wingo> which makefile?
[19:26] <wingo> you can try target-specific ldflags too...
[19:26] <wtay> working in testsuite/plugins
[19:26] <wingo> linked_LIBS
[19:26] <wingo> linked_LIBS = -ltestplugin
[19:27] <wtay> nothing on the gcc line
[19:27] <wtay> uhm, there isn't even a .so file in .libs..
[19:28] <wingo> yeah, i was looking at that yesterday and couldn't figure it out
[19:29] <wingo> i would try copying something from gst/schedulers or something...
[19:30] <wtay> maybe I need the GST_PLUGIN_LDFLAGS in la_LDFLAGS for the plugins..
[19:34] <wingo> i thought so too, but it doesn't make any difference
[19:35] <wtay> it doesn't..
[19:35] <wtay> testsuite/plugin/Makefile.am:10: invalid variable `plugin_LTLIBRARIES'
[19:36] <wtay> uhm, now it works..
[19:36] <wingo> bizarre
[19:37] <wtay> so, it needs plugindir = $(libdir)/gst and plugin_LTLIBRARIES
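Putting wtay's finding together: with automake, a `FOO_LTLIBRARIES` primary requires a matching `FOOdir` variable, which is why `plugin_LTLIBRARIES` errored until `plugindir` was defined. A sketch of what such a `Makefile.am` fragment could look like; `libtestplugin` is a hypothetical name and `GST_PLUGIN_LDFLAGS` is the configure-substituted variable mentioned in the log:

```makefile
# automake derives plugin_LTLIBRARIES from the "plugin" prefix,
# so plugindir must be defined first.
plugindir = $(libdir)/gst

plugin_LTLIBRARIES = libtestplugin.la

libtestplugin_la_SOURCES = testplugin.c
libtestplugin_la_LDFLAGS = $(GST_PLUGIN_LDFLAGS)
```

The catch discussed below is that `plugin_LTLIBRARIES` also *installs* the library on `make install`, which is unwanted for test-only plugins; `noinst_LTLIBRARIES` avoids that but builds no `.so`.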
[19:39] <thomasvs> if the lib you're building is libplugin.la
[19:40] <wtay> thomasvs: ? all plugins do this
[19:40] <thomasvs> yeah
[19:41] <wtay> and plugin_LTLIBRARIES = foo.la gives an error when there is no plugindir = bar defined..
[19:41] <thomasvs> yeah, that's normal
[19:41] <thomasvs> you can also use plugin_NOINST or something I believe
[19:41] <thomasvs> hm, that's probably wrong but something similar
[19:41] <wtay> noinst_LTLIBRARIES?
[19:42] <wtay> but then it doesn't generate a .so
[19:42] <thomasvs> yeah, try that, might work
[19:42] <wingo> wtay: the problem with plugin_ is that it installs those test plugins...
[19:42] <thomasvs> wingo: right
[19:42] <wtay> wingo: yeah, I know..
[19:42] <thomasvs> install them in /dev/null ;)
[19:42] <wtay> wingo: can I set plugindir = . ?
[19:43] <thomasvs> wtay: use $(srcdir)
[19:43] <thomasvs> and add path to that
[19:43] <wtay> ah
[19:43] <thomasvs> uhm no wait
[19:43] <thomasvs> builddir, does that exist ?
[19:43] <thomasvs> $(builddir) ?
[19:43] <wingo> yeah, but what a mess... it might fail on the uninstall, etc etc
[19:43] <wtay> doesn't complain..
[19:43] <thomasvs> wingo: well, we'll fix it when we get to it ;) make distcheck will probably not work anyway
[19:43] <wingo> it's a nasty solution, through and through
[19:44] <thomasvs> I'll try some gentle massaging tomorrow after I get this panel bug fixed ;(
[19:44] <wingo> thomasvs: i worked hours on that last night, just so you know ;)
[19:44] <wtay> is plugin_LTLIBRARIES something we do ourselves?
[19:44] <thomasvs> wingo: oh, ok. I'll lower my sights a little then ;)
[19:44] <wingo> wtay: yes
[19:45] <wtay> ok
[19:45] <wtay> so you could define a noinstplugin_LTLIBRARIES then?
[19:45] <wingo> as long as you set noinstdir, yes...
[19:46] <wingo> you might be able to define check_LTLIBRARIES
[19:46] <wingo> that would be better, if it worked
[19:46] <wingo> er
[19:46] <wingo> noinstplugindir
[19:46] <wtay> noinstplugindir.. isn't that a contradiction :)
[19:47] <wtay> oxymoron even..
[19:47] <wingo> all of the above, possibly ;)
[19:47] <wingo> ok, i think i might have this permission thing worked out...
[19:48] <wtay> wingo: I tried to solve it with stat, but that was messy. do you try to open/write to the file?
[19:48] <wingo> yes
[19:48] <wingo> i first try to see if the dir is there
[19:49] <thomasvs> I had code for that stuff
[19:49] <wingo> if it isn't, i try to create it because we'll be creating it anyway
[19:49] <thomasvs> and I also ended up with trying to open ;)
[19:49] <wingo> yeah i used some of that
[19:49] <wtay> thomasvs: did you try with stat first?
[19:49] <wingo> then if i can append to it it's writable, then if i can read it's readable
[19:49] <thomasvs> wtay: yes, and it had some really weird results
[19:49] <wtay> funny how there is no API to see if a file is writable...
[19:50] <thomasvs> yeah, I had thought that would have been in glib
[19:50] <wingo> the current case is a little different from the previous one because if it's writable and out of date, a read will cause the registry to rebuild and save itself
[19:50] <wingo> which is a good thing imo
[19:51] <wtay> neat
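The probe wingo describes (try to open the file instead of trusting `stat(2)` bits, which thomasvs found gave "really weird results") can be sketched like this in C; the function names are illustrative, not the actual registry API:

```c
#include <stdio.h>

/* Readable if we can actually open it for reading. */
static int file_is_readable(const char *path)
{
    FILE *f = fopen(path, "r");

    if (f == NULL)
        return 0;
    fclose(f);
    return 1;
}

/* Writable if we can open it for appending.  Append mode is used so an
 * existing file is never truncated by the probe; note that "a" will
 * create the file if it does not already exist. */
static int file_is_writable(const char *path)
{
    FILE *f = fopen(path, "a");

    if (f == NULL)
        return 0;
    fclose(f);
    return 1;
}
```

Opening answers the real question ("can *this* process write it now?"), which mode bits alone cannot, since ACLs, read-only mounts, and effective-uid quirks all factor into the actual `open`.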
[19:51] <thomasvs> how do you get the width of a widget ?
[19:51] <thomasvs> the currently used width that is ?
[19:51] <wtay> _get_usize?
[19:52] <thomasvs> deprecated, removed
[19:52] <thomasvs> it has get_size_request, but it's returning -1 on a widget that has a width
[19:53] <wingo> gtk makes some things hard
[19:53] <wtay> there is also size_allocation
[19:53] <wtay> er, requisition
[19:53] <wingo> wtay: i get lots of errors looking for plugin libs...
[19:53] <wingo> like 'can't load resample' and the like
[19:54] <wtay> wingo: you have to add the lib path first, so those plugins are loaded first
[19:54] <thomasvs> wtay: in gtk+ 2.0 ? not finding it in devhelp
[19:54] <thomasvs> size_allocate exists
[19:54] <thomasvs> but that's to assign
[19:54] <wtay> I'm using: /opt/src/sourceforge/gstreamer-MESSAGES/tools/gst-register --gst-plugin-path=/opt/src/sourceforge/gst-plugins/gst-libs:/opt/src/sourceforge/gst-plugins/
[19:54] <wingo> wtay: i know that that would work, but it's hacky
[19:55] <wingo> i'm using enviroment variables, much easier ;)
[19:55] <wtay> wingo: another option would be to look in the other paths when a plugin is not in memory
[19:55] Action: wtay doesn't understand that last sentence himself
[19:56] <wingo> yeah, but i think i do ;)
[19:56] <wingo> it's a tricky problem
[19:56] <wtay> it's a dependency problem
[19:57] <wtay> the old code was walking around the complete plugin path to find the plugins
[19:57] <wingo> you don't want to go directory traversing when you are in the middle of running an app, that's for sure
[19:58] <wtay> wingo: yes, I don't like this random hunting..
[19:58] <wtay> any objections to adding the libs path first?
[19:58] <wingo> hmm?
[19:58] <wingo> you mean in --gst-plugin-path ?
[19:59] <wtay> or maybe it should delay the plugin loading when all directories are traversed
[19:59] <wingo> maybe so...
[19:59] <wtay> plugin fails-> add to end of pending list, continue
[19:59] <wingo> that would make sense
[19:59] <wingo> yes, that makes oodles of sense
[20:00] <wtay> do we have circular dependencies?
[20:00] <wingo> i would hope not
[20:00] <wingo> we don't now, i don't think
[20:01] <thomasvs> can we check for them ?
[20:01] <thomasvs> well would it be needed, circular dependency ?
[20:01] <wingo> such a plugin would never work
[20:01] <wtay> bytestream needs control, control needs bytestream for example..
[20:02] <wingo> heh
[20:02] <wingo> that's no good...
[20:03] <wtay> impossible?
[20:03] <wingo> bytestream needs control ?
[20:03] <thomasvs> well then they need a third lib they both depend on
[20:03] <wtay> just an example, yo :)
[20:03] <thomasvs> if they're not that modularized, they're no good anyway ;)
[20:05] <wtay> let's not worry about that now
[20:05] <wingo> good call
[20:05] <wtay> the other simple dependency problem is easy to solve though
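The deferred-loading scheme wtay and wingo settle on above (a plugin whose dependencies aren't loaded yet goes to the end of a pending list, and loading stops when a full pass makes no progress) can be sketched generically. GStreamer's actual loader is C; this Python sketch, with an invented `load_plugins` helper, just shows the algorithm, including how a bytestream/control-style circular dependency is detected:

```python
def load_plugins(plugins, deps):
    """Load plugins whose dependencies may appear later in the scan.

    `plugins` is the discovery order; `deps` maps a plugin name to the
    names it needs loaded first. A plugin that cannot load yet is moved
    to the end of a pending list; if a whole pass loads nothing, the
    remainder is circular (or missing) and we give up.
    """
    loaded = []
    pending = list(plugins)
    while pending:
        next_round = []
        for name in pending:
            if all(dep in loaded for dep in deps.get(name, [])):
                loaded.append(name)       # all deps satisfied: load now
            else:
                next_round.append(name)   # retry on the next pass
        if len(next_round) == len(pending):
            # no progress: e.g. bytestream needs control and
            # control needs bytestream
            raise RuntimeError("unresolvable plugins: %r" % next_round)
        pending = next_round
    return loaded
```

For example, `load_plugins(["a", "b"], {"a": ["b"]})` defers `a` on the first pass, loads `b`, then loads `a` on the second pass.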
[20:10] scizzo (scizzo@...) left #gstreamer ("Namärië").
[20:15] <thomasvs> later
[20:15] Nick change: thomasvs -> thomasvz
[20:17] <wtay> what's this?: if (*dirent == '=')
[20:20] <wingo> it's commented right there, i thought...
[20:21] <wingo> make distcheck makes a number of dirs in the cvs directory,
[20:21] <wingo> gstreamer-x.y.x/=build, =inst, etc
[20:21] <wtay> oh
[20:21] <wingo> and all of those might contain plugins
[20:21] <wingo> so that's a hack to avoid recursing into those dirs
[20:21] <wtay> ok
[20:21] <wingo> it had me befuddled for a good 20 minutes ;)
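The `if (*dirent == '=')` check wingo explains above is a one-line filter: `make distcheck` leaves `=build`, `=inst`, and similar directories inside the tree, and recursing into them would register every plugin twice. A minimal sketch of the filter (the helper name is made up):

```python
def scannable_dirs(entries):
    # 'make distcheck' creates =build, =inst, etc. next to the real
    # sources; any entry starting with '=' is one of those and must be
    # skipped when hunting for plugins.
    return [entry for entry in entries if not entry.startswith("=")]
```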
[20:23] <wtay> hmm.. I'm thinking about moving that plugin hunting code back into gst_plugin..
[20:24] <wingo> hold on a minute though...
[20:24] Action: wingo is committing
[20:24] <wtay> does that make sense?
[20:24] <wingo> yeah, i think so
[20:24] <wingo> well, really i don't know ;)
[20:25] <wtay> something like gst_plugin_hunt (dir, callback, data), when it finds a plugin the callback is done and you can do whatever you want
[20:26] <wingo> an async callback?
[20:27] <wingo> i think just moving unloadable libs to the tail of the list is all we need for now...
[20:27] <wtay> yup
[20:27] <wingo> anyway, that time-checking code is in now, along with some object property stuff
[20:28] <wtay> me updates
[20:28] <wingo> there is one bug though, sometimes gst-inspect will duplicate plugins when the registry is out of date and it has to rebuild
[20:28] <wingo> otherwise it seems pretty solid, that bug is shallow because
[20:28] <wingo> the types aren't actually getting registered twice
[20:29] <wingo> at least, that's the initial assessment. more printf's needed ;)
[20:31] <wingo> the other thing is the question of when to rebuild when --gst-plugin-path is passed on the command line, and the corollary of what dirs to check for outdatedness if the registry was built against paths that were not passed to the current app
[20:31] <wingo> if that makes sense
[20:32] gadek_ (~greg@...) joined #gstreamer.
[20:32] <wtay> wingo: ugh :)
[20:32] <wingo> yeah, i know...
[20:32] <wingo> it's so nasty...
[20:32] <wtay> maybe --gst-plugin-path is only relevant for -egister
[20:33] <wtay> er -register
[20:33] <wingo> i don't think so, personally...
[20:33] <wingo> anyway, gotta go
[20:33] <wingo> maybe we need a gst_registry_get_paths
[20:33] <wtay> later
[20:33] <wingo> or something
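The rebuild question wingo raises above comes down to a staleness test: the registry must be rebuilt if it is missing, or if any plugin directory changed after it was written. A rough sketch of that check (GStreamer's real code is C; the function name and paths here are placeholders, and this deliberately dodges the harder question of which dirs to check when --gst-plugin-path differs from the paths the registry was built against):

```python
import os

def registry_stale(registry_path, plugin_dirs):
    # Rebuild if the registry file is missing, or if any plugin
    # directory was modified after the registry was last written.
    if not os.path.exists(registry_path):
        return True
    registry_mtime = os.path.getmtime(registry_path)
    return any(os.path.getmtime(d) > registry_mtime
               for d in plugin_dirs if os.path.exists(d))
```

Something like the proposed gst_registry_get_paths would supply the directory list here.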
[20:33] Nick change: wingo -> wingo-afk
[20:33] <wtay> uhm, yes
[20:38] thomasvz (~thomas@...) left irc: Read error: 113 (No route to host)
[20:47] ChiefHighwater (~paul@...) joined #gstreamer.
[20:57] <Ridcully> will gst-python be in the next release?
[21:00] benow (~andy@...) joined #gstreamer.
[21:00] <benow> yo yo
[21:03] thomasvz (~thomas@...) joined #gstreamer.
[21:07] <Uraeus> howdy benow
[21:19] Zeenix (~zeenix@...) joined #gstreamer.
[21:20] <Zeenix> hello all
[21:20] Action: Zeenix is too happy today as his first VPN test was successful...
[21:21] <Zeenix> s/"VPN test"/"Test VPN"
[21:23] <benow> heya Zeenix
[21:23] <benow> , Uraeus
[21:23] <Zeenix> yo benow
[21:26] <wtay> yo
[21:31] <benow> so, I mentioned a possible gstreamer hour on benow... 3 developers, each broadcasting a stream to a mixer and then out the main stream... record your line in/mic... like a conference call. would that aid the development process at all?
[21:32] <wtay> benow: ?
[21:32] <thomasvz> benow: hm, yeah
[21:32] <thomasvz> depends on the latency ;)
[21:33] <benow> yeah, latency shouldn't be too bad. 30s or so... best for a segmented progress report perhaps, but it won't be telephone like.
[21:33] Nick change: thomasvz -> thomasvs
[21:34] <thomasvs> I hope that is 30 ms ;)
[21:34] <thomasvs> 30s ?
[21:34] <thomasvs> that's pretty long
[21:34] <benow> yup.
[21:34] <benow> could probably narrow it down a bit,... with less buffering
[21:35] <benow> are there any apps that do real-time conference call stuff (non voip stuff)
[21:35] <benow> hmmm gst-voip, that could be interesting.
[21:38] <Zeenix> benow: for this purpose, zchat is a very nice open-source software developed by a genius programmer named Zeeshan Ali :)
[21:46] <benow> hehe, cool. have a url, Zeenix?
[21:46] Nick change: wtay -> wtay-tv
[21:55] <Zeenix> benow: zchat.sf.net
[21:55] <Zeenix> benow: in fact it is developed by me, if you remember :)
[21:56] <Zeenix> benow: i haven't had the chance to debug it more yet to release a version, so use cvs, debug it & enjoy..... :)
[21:57] <Zeenix> benow: porting it to Gnome2 is another headache
[21:58] <BBB|zZz> guys, can we apply Joshua's patch?
[21:58] <BBB|zZz> it's pretty simple and makes quite some sense imho
[22:12] thomasvs (~thomas@...) left irc: "Client Exiting"
[22:17] davi (davi@...) left #gstreamer ("Client Exiting").
[22:18] thomasvs (~thomas@...) joined #gstreamer.
[22:19] RagingMind (~RagingMin@...) joined #gstreamer.
[22:20] <RagingMind> yllo all
[22:21] thomasvs (~thomas@...) left irc: Remote closed the connection
[22:24] thomasvs (~thomas@...) joined #gstreamer.
[22:25] thomasvs (~thomas@...) left irc: Remote closed the connection
[22:27] thomasvs (~thomas@...) joined #gstreamer.
[22:27] <Uraeus> wb thomasvs and hi RagingMind
[22:27] <thomasvs> man, something is going wrong with my xchat ;)
[22:28] <thomasvs> I think it's trapped in my panel
[22:28] <thomasvs> hm
[22:28] <thomasvs> if you start an app from a launcher on your panel, is it then a subprocess of the panel ?
[22:28] <Uraeus> thomasvs: shouldn't be, but the x-chat panel app is an ugly hack so maybe yes
[22:29] <Uraeus> thomasvs: did my ORBit2 tip help you?
[22:29] Action: BBB|zZz is back (gone 21:15:48)
[22:29] Nick change: BBB|zZz -> BBB
[22:30] <BBB> thomasvs, Uraeus: any opinions on vishnu's patch?
[22:30] <thomasvs> Uraeus: well, I don't need it yet
[22:30] <Uraeus> BBB: apply
[22:30] <thomasvs> BBB: looks sane to me, but I don't use vcd
[22:30] <thomasvs> Uraeus: xchat panel app ?
[22:30] <BBB> ok
[22:30] <Uraeus> thomasvs: they have this thing that lets you put the channel into the panel.
[22:31] <thomasvs> Uraeus: huh ? how does that work ?
[22:31] <Uraeus> thomasvs: if you have x-chat compiled with its support (gnome 1.4) then you have a down arrow at the top of the window. if you press that the channel header goes into the panel and you can look at that to see when you have messages
[22:32] chrisime (~chrisime@...) joined #gstreamer.
[22:32] <Uraeus> hi chrisime
[22:32] <thomasvs> Uraeus: hm, then I don't have that. it's in garnome, against 2.0
[22:32] <chrisime> jo
[22:32] <thomasvs> let me check in the other xchat
[22:32] thomasvs_ (~thomas@...) joined #gstreamer.
[22:33] <thomasvs_> hm, not in here either
[22:33] <thomasvs_> this one's from rpm
[22:33] <Uraeus> thomasvs: think Ximian took it out of their build cause it was such an ugly hack, and I think RH never put it in in the first place
[22:33] <thomasvs_> oh, ok ;)
[22:33] thomasvs_ (~thomas@...) left irc: Client Quit
[22:33] <Zeenix> Uraeus: have you prepared the ground & tank? :)
[22:33] <thomasvs> never mind then
[22:34] <Uraeus> Zeenix: no, I look into that now (I need to find a way to start the server on my server and get it to listen to the correct network device/address)
[22:34] <Uraeus> Zeenix: I just made this:
[22:38] <Uraeus> yay
[22:38] <Uraeus> Zeenix: try connecting to 212.186.233.206
[22:39] chillywilly (~danielb@...) joined #gstreamer.
[22:39] <Zeenix> Uraeus: now?
[22:40] <Uraeus> Zeenix: yes. with bzflag
[22:40] Action: Uraeus discovered that reading the bzflag man page actually helped :)
[22:41] <Zeenix> Uraeus: there is a problem, i forget this address when i get into the game...
[22:41] <Uraeus> Zeenix: ever heard of pen and paper?
[22:42] <Uraeus> :)
[22:42] <Zeenix> Uraeus: unfortunately, i've got no pen atm...
[22:43] <Zeenix> Uraeus: yeah, trying now...
[22:44] <thomasvs> Uraeus: nice page, good job
[22:44] <thomasvs> oh, togheter should be together
[22:44] <Uraeus> thomasvs: thanks :)
[22:44] <Zeenix> Uraeus: no, same error: cant load world database
[22:45] <Uraeus> thomasvs: ok, I put that fix into my local to be uploaded later version :)
[22:45] <Uraeus> Zeenix: try and ping me (I think we are simply too far apart)
[22:45] <Zeenix> Uraeus: have you set the visibility of the server to "world"?
[22:45] <Uraeus> Zeenix: no
[22:45] <RagingMind> bbl
[22:45] <Uraeus> Zeenix: let me try that
[22:45] RagingMind (~RagingMin@...) left #gstreamer.
[22:46] <Zeenix> Uraeus: 5min, 6min, 5min....
[22:46] <Zeenix> Uraeus: the ping result i mean
[22:49] <Uraeus> Zeenix: ok, I added the public parameter so try connecting again
[22:49] omega (~omega@...) joined #gstreamer.
[22:49] <Uraeus> (even if it will not be playable with 5-6 seconds latency I think)
[22:49] <Uraeus> hi omega
[22:49] <omega> yo
[22:49] <Uraeus> omega: do you have bzflag installed?
[22:49] <omega> no
[22:50] <Uraeus> do anyone here except me and zeenix have it installed?
[22:51] <Zeenix> omega: get it, its a must for everyone :)
[22:52] <Zeenix> Uraeus: no fate
[22:52] <Zeenix> Uraeus: try my server again....
[22:52] <Uraeus> ok, what is your ip?
[22:52] RagingMind (~RagingMin@...) joined #gstreamer.
[22:53] <Zeenix> Uraeus: connect after 30 sec, my ip is: 192.168.60.33
[22:53] <Zeenix> sorry
[22:54] <Zeenix> 192.135.60.33
[22:54] <Zeenix> try connecting after 30 sec.
[22:57] <Uraeus> Zeenix: no luck
[22:59] apoc_ (~apoc@...) left irc: Read error: 104 (Connection reset by peer)
[22:59] <Zeenix> Uraeus: i turned the visibility to continental, lets try with world?
[23:00] <Uraeus> Zeenix: have you ever played with anyone in europe successfully?
[23:00] <Zeenix> omega: sorry for disturbing your nice channel on a stupid game..
[23:01] <Zeenix> Uraeus: no
[23:01] <Zeenix> Uraeus: you are the only friend of mine in europe :)
[23:02] <Zeenix> should we try?
[23:02] <Uraeus> Zeenix: ok :), I try connecting to your server again in 30 secs
[23:02] <Zeenix> forget it, doesn't seem possible with these settings..........
[23:02] <Uraeus> Zeenix: and we need a new satellite link for data between pakistan and norway :)
[23:05] <Zeenix> Uraeus: no, some bzflag issue
[23:05] <Zeenix> Uraeus: btw, what error you get?
[23:06] <Uraeus> Zeenix: the error connecting to server message, are you behind a uni firewall or something?
[23:06] <Zeenix> Uraeus: i heard from a new gnome2 user that Nautilus2 is now as fast as win98 Explorer?
[23:07] <Zeenix> Uraeus: possible, i'll ask my boss tomorrow...
[23:07] <Uraeus> Zeenix: well it is much faster than before, that is true
[23:08] <Zeenix> Uraeus: only Nautilus2 or the latest for Gnome1 too?
[23:08] <Uraeus> Zeenix: latest for gnome1 is also better, but not as good as nautilus2
[23:15] <Zeenix> Uraeus: making Nautilus Scripts could be a good hobby for me...
[23:16] <BBB> Uraeus, thomasvs: patch applied
[23:16] <Ridcully> a new avifile is in sid :) "Removed config.h from avifile include directory (Closes: 146026)"
[23:16] <Uraeus> BBB: ok, I am just about to try building gst-plugins so I test if it broke something
[23:17] <Uraeus> Ridcully: they should remove avifile, that would remove more bugs than all other options together :)
[23:17] <Ridcully> Uraeus: good point :)
[23:18] Nick change: chillywilly -> dneighbo2
[23:18] Nick change: dneighbo2 -> chillywilly
[23:18] <Uraeus> Ridcully: luckily in about two weeks BBB is done with his exams and he will then replace avifile
[23:21] <BBB> better
[23:21] <BBB> only one week
[23:21] <BBB> ;)
[23:22] Action: Uraeus does the balloon dance in front of the keyboard in honor of BBB <g>
[23:22] <BBB> Ridcully: I've discussed this on mjpeg-developer and with gst's developers in sevilla...
[23:22] <BBB> I Want to bring it up on gst-devel too before I start it
[23:22] <BBB> but I have some plans for 'new' libs
[23:22] <BBB> they aren't that new - but they should become the standard for the future
[23:22] <BBB> for us (mjpegtools/gstreamer)
[23:23] <BBB> and maybe for others
[23:23] <omega> BBB: I'm gonna grab the jpeg specs and start trying to figure out how a good jpeg library would be structured
[23:23] <BBB> omega: I want to cooperate on that one too... I think ffmpeg has a pretty good mjpeg decoder lib
[23:23] <BBB> (was it ffmpeg?)
[23:23] <BBB> we could have a look at that as well
[23:23] <omega> hrm, ffmpeg seems way too mashed together
[23:24] <BBB> but I'll surely help doing parts of the C coding if possible
[23:24] <omega> things aren't split on logical boundaries afaict
[23:24] <BBB> ffmpeg might be unstructured - but for some coding ideas, it could serve as a helper
[23:24] <BBB> :)
[23:24] <omega> yeah
[23:24] <BBB> but do you want it to be glib2 based? or just plain C?
[23:24] <thomasvs> Uraeus: here's a silly question. I have a prefs dialog with minimum and maximum width for the panel
[23:24] <omega> I have some pretty strong ideas on codec structure, ffmpeg is a first step
[23:24] Action: BBB votes for plain C here
[23:24] <thomasvs> using spinbuttons
[23:24] <thomasvs> and they're so that min <= max always
[23:25] <thomasvs> Uraeus: this is the question :
[23:25] <omega> thomasvs: I have 10 packages here I'm tweaking to put into CVS, including mad and mpeg2dec
[23:25] <thomasvs> when min reaches max and user increases min, should it fail, or should it also increase max automatically ?
[23:25] <omega> one thing I want to change is that the spec file should not have a version number in the filename
[23:25] <thomasvs> omega: yeah, I had some questions
[23:25] <thomasvs> I dislike changing stuff like libid3tag.* and libmad.* to lib*.*
[23:25] <BBB> I'll send an email to -devel about these plans... just so everyone knows
[23:25] <BBB> ok omega?
[23:25] <omega> thomasvs: why?
[23:26] <omega> BBB: ok
[23:26] <omega> thomasvs: in buildroot, those are the only files there
[23:26] <thomasvs> omega: because if for some reason compilation of a part fails you miss it because rpm building doesn't fail
[23:26] <omega> hmm, true
[23:26] <Uraeus> thomasvs: not sure :)
[23:26] <thomasvs> I actually posted that question on rpm list once and that was the answer
[23:26] <thomasvs> and I tend to agree
[23:26] <omega> hmm, ok
[23:26] <omega> I'll change that then
[23:26] <thomasvs> I also had a reason for doing versioned specs
[23:27] <thomasvs> sometimes you fix something in your speccing that would be nice to backport to an older spec
[23:27] <thomasvs> plus
[23:27] <thomasvs> sometimes files tend to get added between versions
[23:27] <omega> thomasvs: that's what cvs is for
[23:27] <thomasvs> causing you not to be able to use new specs
[23:27] <Uraeus> anyone want to fix this: config.status: creating ext/snapshot/Makefile
[23:27] <Uraeus> config.status: error: cannot find input file: ext/snapshot/Makefile.in
[23:27] <Uraeus> error: Bad exit status from /var/tmp/rpm-tmp.22047 (%build)
[23:27] <omega> versioned specs require that you check in a new one for every version *and* release, effectively
[23:28] Action: Uraeus blames apoc but he is not here now
[23:28] <thomasvs> omega: suppose mpeg2dec bumped their version number, we fix the spec, do some other fixes, but some app needs an older mpeg2dec
[23:28] <thomasvs> omega: well, no
[23:28] billh (billh@...) joined #gstreamer.
[23:28] <thomasvs> I'd only use different versions
[23:28] <thomasvs> and if you backport a fix you do a new release of it, so increase the release number
[23:28] <thomasvs> at least, that's how I do it for specs I write for stuff
[23:28] <omega> hmm, are there any known best-practices regarding this?
[23:28] <thomasvs> but I'm willing to hear arguments against, because I'm still not sure about it
[23:28] <omega> say from ximian?
[23:29] <thomasvs> omega: hm, good question
[23:29] <omega> (bah, we need to just create a "Packaging Best Practices" website)
[23:29] <thomasvs> I'll see if boc is in gnome
[23:29] <thomasvs> omega: that'd be nice, yeah ;)
[23:31] <thomasvs> hm, ximian generates spec files using other files in their custom build system
[23:32] <omega> right
[23:32] <thomasvs> so no help to us
[23:32] <omega> hmm
[23:32] <thomasvs> well, I'll post it to the rpm devel list
[23:32] <omega> ok
[23:32] <thomasvs> will get the best consensus I guess
[23:33] <omega> ok, until then I'll hack locally
[23:33] <omega> and I'll throw the RPMs somewhere you can grab them shortly
[23:34] <Uraeus> thomasvs: do you have time to fix that build error for me?
[23:34] <thomasvs> Uraeus: the one with common ?
[23:34] <thomasvs> well
[23:34] <thomasvs> for that we still need a decision on what to throw in a dist
[23:34] <Uraeus> config.status: creating ext/smoothwave/Makefile
[23:34] <Uraeus> config.status: creating ext/snapshot/Makefile
[23:34] <Uraeus> config.status: error: cannot find input file: ext/snapshot/Makefile.in
[23:34] <Uraeus> error: Bad exit status from /var/tmp/rpm-tmp.22047 (%build)
[23:34] <thomasvs> oh, that's something else
[23:34] <thomasvs> what's snapshot ?
[23:34] <Uraeus> thomasvs: it's apoc's plugin for dumping images from a video to an image file
[23:36] Company (~Company@...) left irc: Remote closed the connection
[23:36] <Uraeus> thomasvs: we can use it to get a pause image in gst-player and while that pause image is showing we can screenshot it :)
[23:37] <BBB> can't we use a dynamic tee for that? :)
[23:37] <thomasvs> ok, hang on. and what does it depend on ?
[23:37] <BBB> tee -> 1->videosink 2->snapshot
[23:37] Nick change: wtay-tv -> wtay
[23:37] <wtay> yo
[23:37] <BBB> hi wtay :)
[23:38] <Uraeus> yo wtay
[23:38] <Uraeus> thomasvs: hermes I think
[23:38] <thomasvs> hm, then I'm going to move it
[23:38] <thomasvs> to hermes ;)
[23:38] <Uraeus> thomasvs: I am not 100% sure, it was just that I noticed apoc checking in some stuff to hermes while committing this plugin
[23:39] <thomasvs> yeah, it's only hermes
[23:39] <thomasvs> hm, wait
[23:39] <thomasvs> it uses png as well
[23:39] <thomasvs> why does it use hermes anyway ?
[23:39] <wtay> omega: ?
[23:40] <Uraeus> thomasvs: yuv to yuv conversion was what the log said I think
[23:40] <wtay> omega: n/m :)
[23:40] <thomasvs> Uraeus: yeah, but the plugin shouldn't do that
[23:40] <thomasvs> IMO that is
[23:40] <thomasvs> hm
[23:40] <thomasvs> what is the most gstreamer-ish way ?
[23:40] <thomasvs> pull in hermes in a plugin ...
[23:40] <thomasvs> ... or use colorspace and set caps on snapshot so that it only accepts RGB ?
[23:41] <thomasvs> omega,wtay: what do you think ?
[23:41] <wtay> what?
[23:41] Action: Uraeus throws a snooker ball at wtay
[23:41] <thomasvs> well, should a plugin pull in hermes to do its colorspace conversion or should it set caps to what it accepts and hope the app/lib puts in a colorspace plugin ?
[23:42] <wtay> thomasvs: app, no question about that IMO
[23:42] <thomasvs> wtay: ok, so then we should rewrite snapshot to only depend on libpng then and set caps ?
[23:42] <wtay> hermes can only do some conversions
[23:42] <wtay> thomasvs: as small as possible, yes
[23:43] <BBB> thomasvs: or, rather, the app should simply use spider to plug snapshot to the source plugin
[23:43] <thomasvs> yeah, that's what I think too.
[23:43] <thomasvs> hm
[23:43] <thomasvs> ok, I'll discuss it with apoc then.
[23:43] <BBB> spider still needs to be programmed to pull in colorspace when needed
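The caps-based alternative wtay prefers above (snapshot advertises that it only accepts RGB, and the app or spider inserts a colorspace element when the source offers YUV) is essentially a link-time negotiation. A toy sketch of that decision, with invented names and formats standing in for real caps:

```python
def plan_link(src_format, sink_formats, converters):
    """Return the converter elements needed to link src to sink.

    `converters` maps an element name to (input_format, output_formats),
    the way 'colorspace' bridges YUV to RGB. Direct links need nothing.
    """
    if src_format in sink_formats:
        return []            # caps already compatible: link directly
    for name, (conv_in, conv_outs) in converters.items():
        if src_format == conv_in and any(f in sink_formats for f in conv_outs):
            return [name]    # insert one converter in between
    raise ValueError("cannot link %s -> %s" % (src_format, sink_formats))
```

This is why the plugin itself can stay small and depend only on libpng: the conversion decision lives outside it.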
[23:43] <thomasvs> Uraeus: shall I remove it from the build for now ?
[23:43] <Uraeus> thomasvs: ok
[23:44] <thomasvs> hm, is it ok if I now commit my gconf stuff as well ?
[23:44] <thomasvs> it's just the schema, nothing much
[23:44] <Uraeus> thomasvs: or rather if the build fix is trivial for snapshot why not keep it in?
[23:44] <Uraeus> thomasvs: please do, I have not been able to build for a month anyway <g>
[23:44] <thomasvs> Uraeus: because that will remind us we need to fix it to do the right thing
[23:45] Zeenix (~zeenix@...) left irc: "Client Exiting"
[23:45] Action: Procule is gone.. out until sunday <cyp/lp>
[23:48] <thomasvs> ok, any objections on the gconf thing ? it adds a dir, a makefile and a schema.
[23:48] <thomasvs> going once ...
[23:48] <thomasvs> going twice ...
[23:48] <omega> ?
[23:48] <wtay> where?
[23:48] <BBB> ?
[23:48] <BBB> what?
[23:48] <omega> gconf thing to what?
[23:48] <Uraeus> thomasvs: sold!
[23:48] <thomasvs> heh, how typical ;)
[23:48] <thomasvs> ok, I mailed gst-devel for this *sigh*
[23:48] <thomasvs> I want us to provide the gconf keys for default sinks to use
[23:48] <Uraeus> thomasvs: don't care about them, just commit
[23:48] <thomasvs> everyone agrees with that ?
[23:49] <Uraeus> y4s
[23:49] <thomasvs> if not then apps will just explode and all use their own default sink
[23:49] <thomasvs> we don't want that
[23:49] <thomasvs> we're in charge
[23:49] <thomasvs> it's our lib
[23:49] <thomasvs> so it's our gconf turf
[23:49] <wtay> fine..
[23:49] <thomasvs> great
[23:49] <thomasvs> now the debate is about where to put it
[23:49] <thomasvs> I say we put it in gst-plugins ...
[23:49] <omega> not in libgst.so
[23:49] <omega> make it a utility lib
[23:49] <thomasvs> ... since they describe default plugins to use
[23:49] <thomasvs> it's not a lib
[23:49] <thomasvs> it's just the schemas
[23:50] <thomasvs> but if we turn it into a lib
[23:50] <thomasvs> which we might
[23:50] <thomasvs> I think it should be a loadable lib in gst-plugins as well
[23:50] <omega> how do the schemas alone help?
[23:50] <omega> yeah
[23:50] <thomasvs> Company's argument was : it's not a plugin
[23:50] <thomasvs> omega: well, that way other apps can see what plugin they should use by default
[23:50] <wtay> why not a gst-utils module..
[23:50] <omega> hmm, ok
[23:50] <thomasvs> for example, monkey-sound
[23:50] <omega> yeah, gst-utils -style stuff, I agree it's not really a plugin
[23:50] <thomasvs> wtay: well, not much point in making a different module for something so small
[23:50] <omega> but it's not gonna go in libgst.so either <g>
[23:51] <thomasvs> omega: right
[23:51] <thomasvs> omega: since it adds GConf as a dep
[23:51] <wtay> gst-utils is not going to remain small :)
[23:51] <omega> wtay: nope
[23:51] <thomasvs> wtay: how so ?
[23:51] <thomasvs> it's pretty empty right now
[23:51] <Uraeus> well having it alone in its own lib will make it more likely people will not bother with it
[23:51] <omega> thomasvs: there are a lot of helper things we want to write
[23:51] <omega> they all would end up in there or somewhere similar
[23:51] <omega> Uraeus: agreed
[23:51] <thomasvs> in any case, I think it belongs in gst-plugins because it should be there by default when you install plugins
[23:51] <Uraeus> omega: like the 'ding' help lib?
[23:51] <wtay> gtk == gst, gnome == utils, gnomeui == more -utils
[23:52] <thomasvs> if you put it in a separate lib no one is going to have it
[23:52] <thomasvs> and thus no other app is going to use it
[23:52] <omega> Uraeus: which?
[23:52] <Uraeus> omega: remember the old debate about using gstreamer for application ding sounds :)
[23:52] <thomasvs> so, ok to commit the schema ?
[23:52] <Uraeus> yes
[23:52] <omega> Uraeus: oh, yeah
[23:52] <thomasvs> going once ....
[23:53] <Uraeus> sold
[23:53] <thomasvs> going twice ...
[23:53] <Uraeus> sold
[23:53] <thomasvs> heh ;)
[23:53] <Uraeus> sold!
[23:53] <wtay> I bid more :)
[23:53] <thomasvs> SOLD to the incredibly naive bunch of suckers in #gstreamer
[23:53] <thomasvs> mwuhahahahha
[23:53] Action: Uraeus hits wtay in the head with a auction hammer
[23:54] <thomasvs> gconf is amazingly cool btw
[23:54] <thomasvs> I hope to soon get it in the player
[23:54] <thomasvs> and have instant apply for default video sink work ;)
[23:54] <thomasvs> set it to aasink, there goes the player
[23:54] <Uraeus> cool :)
[23:54] <Uraeus> thomasvs: smart to use aasink as example, that always sells to wtay
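The point of shipping the schema with gst-plugins, as argued above, is that every app resolves the same keys for its default sinks instead of inventing its own. Schematically (the key names and values below are illustrative, not the ones actually committed):

```python
# Stand-in for the GConf database: one tree of keys shared by all apps.
settings = {
    "/system/gstreamer/default/audiosink": "osssink",
    "/system/gstreamer/default/videosink": "xvideosink",
}

def default_sink(kind, fallback="fakesink"):
    # Every app consults the shared key; a per-app fallback only kicks
    # in when the key is unset. Change the key once (say, to aasink)
    # and every player follows.
    return settings.get("/system/gstreamer/default/%ssink" % kind, fallback)
```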
[23:56] <thomasvs> Uraeus: ok, check out again and try again
[23:57] <Uraeus> wtay: what is mayam's email addy?
[23:58] Action: thomasvs cheers on bbb and omega
[23:58] <wtay> Uraeus: mayam@...
[23:58] <BBB> thomasvs: ?
[23:59] <Uraeus> wtay: I have some norwegian langauge mp3 :)
[00:00] --- Sat May 11 2002
[00:00] <wtay> Uraeus: ah, great :)
[00:03] <RagingMind> where do I complain about this apply instantly stuff?
[00:03] <Uraeus> RagingMind: #gnu :)
[00:03] <Uraeus> RagingMind: just remember to call it GNU/instant or you will be flamed
[00:04] Action: BBB goes to bed now
[00:04] Nick change: BBB -> BBB|z
[00:04] Nick change: BBB|z -> BBB|zZz
[00:04] Action: BBB|zZz is away: zzz
[00:05] <RagingMind> I suggest not putting it in the player, it happens to be one of the most annoying things about my entire computer (and I have some really picky hardware)
[00:05] <Uraeus> RagingMind: why?
[00:05] <RagingMind> thomasvs: ""
[00:07] <RagingMind> I hate it when settings apply instantly; every time I change an option the stupid thing goes and applies the change
[00:07] <RagingMind> I have to wait for it
[00:08] <RagingMind> I like being able to go through and change everything I want to change and have it all go at once
[00:09] <Uraeus> RagingMind: well the GNOME UI Guidelines dictate that everything should be instant apply now
[00:10] <RagingMind> Uraeus: !!! yuck !!!
[00:11] <thomasvs> RagingMind: so when should the change happen then ?
[00:11] <RagingMind> Uraeus: they should make it an option in Control Center, about whether that happens
[00:11] tromey (~tromey@...) joined #gstreamer.
[00:11] <RagingMind> thomasvs: when the [apply] button is clicked
[00:11] <thomasvs> someone check out that xvid plugin
[00:12] <RagingMind> anybody know where I can send a mail to?
[00:12] <Uraeus> hi tromey
[00:12] <thomasvs> RagingMind: well, that depends on the capplet
[00:12] <RagingMind> about this
[00:12] <thomasvs> RagingMind: but as soon as the gconf key changes, stuff should change
[00:12] <thomasvs> a capplet could however delay the change
[00:12] <thomasvs> RagingMind: so you could easily supply a patch to capplets to do that
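thomasvs's split above is the key point: notification is always instant (apps react as soon as a key changes), while apply timing is the capplet's choice, writing each edit immediately or batching edits until an Apply button. A toy version of the batched policy RagingMind wants (all class and method names are invented):

```python
class KeyStore:
    """Stand-in for GConf: set() notifies listeners immediately."""
    def __init__(self):
        self.values = {}
        self.listeners = []

    def set(self, key, value):
        self.values[key] = value
        for listener in self.listeners:   # apps react right away
            listener(key, value)

class BatchedCapplet:
    """A prefs dialog that holds edits until Apply is clicked."""
    def __init__(self, store):
        self.store = store
        self.pending = {}

    def edit(self, key, value):
        self.pending[key] = value         # nothing visible happens yet

    def apply(self):
        for key, value in self.pending.items():
            self.store.set(key, value)    # now the listeners fire
        self.pending.clear()
```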
[00:12] <Uraeus> thomasvs, wtay: tromey is maintainer of automake
[00:13] <thomasvs> Uraeus: I know, I see his name every time I run it ;)
[00:13] branzo (branzo@...) joined #gstreamer.
[00:13] <thomasvs> tromey: are you able to compile gstreamer yet ?
[00:13] <tromey> Hi.
[00:13] <branzo> Hi
[00:13] <Uraeus> hi branzo
[00:13] <tromey> I got most of it to compile by using garnome.
[00:13] <thomasvs> tromey: and is there some doc outlining changes from 1.5 to 1.6 so that we can ease the pain ?
[00:13] <tromey> I think in general 1.5->1.6 should be pretty painless.
[00:13] <tromey> Mostly we just fixed a lot of bugs.
[00:13] <thomasvs> tromey: well, it's more picky about unused vars
[00:14] <tromey> Yeah, that could be. There's no upgrade doc, sorry :-(
[00:14] <omega> and I had to change a bunch of cases of internal vars named _LDFLAGS
[00:15] <tromey> I tried to join the mailing list but it never lets me. It doesn't reject me, either; it just doesn't respond to my confirmation. Anybody understand that?
[00:16] <tromey> omega: That's odd, since I would have expected those variables to already be invalid.
[00:16] <omega> nope, 1.5 doesn't complain but 1.6 has a somewhat obtuse error message, like 'not defined' or somesuch
[00:16] <wtay> ah, someone beat me with the xvid plugin :(
[00:17] <omega> gah. someone has an install-data-local: with manual $INSTALL of their public header files
[00:17] <omega> thomasvs: we really really need to build a best-practices site
[00:18] <branzo> wtay: which xvid plugin are you referring to?
[00:18] <wtay> just poisted to the list
[00:18] <wtay> s/pois/pos/
[00:19] <branzo> yep, that's me... is it ok?
[00:19] <Uraeus> tromey: do you have a sourceforge account?
[00:19] <branzo> tromey: yes I do
[00:19] <tromey> Uraeus: Yes.
[00:19] <thomasvs> omega: yes ;)
[00:19] <wtay> branzo: sure :) going to apply it soon
[00:20] ChiefHighwater (~paul@...) left irc:
[00:20] <wtay> branzo: I had trouble installing libxvid though, it doesn't appear to install the .h files
[00:20] <tromey> branzo: what is the reason?
[00:20] <thomasvs> Uraeus: if you can please see if the gst-gconf rpm installs the schemas right
[00:21] <Uraeus> thomasvs: where is it?
[00:21] <branzo> tromey: sorry, I answered the wrong line before
[00:21] <Uraeus> tromey: I can try adding you to the mailing list through the admin interface
[00:21] <thomasvs> Uraeus: it's in the spec for gst-plugins
[00:21] <tromey> Uraeus: thanks.
[00:21] <branzo> wtay: I also saw bbb's mail about leaving avifile, I think I can help with the xvid part
[00:22] <Uraeus> tromey: what mail address
[00:22] <tromey> Uraeus: tromey@...
[00:23] <Uraeus> tromey: ok, you should be subscribed now :)
[00:23] <branzo> wtay: what is the status of the openquicktime plugin? Any MPEG-4 file format plugin is planned?
[00:23] <omega> openquicktime is the next target after avifile
[00:23] <omega> tromey@... has been successfully subscribed to gstreamer-devel.
[00:24] <branzo> omega: you mean you are going to get rid of openquicktime?
[00:24] <thomasvs> hm, now we will have to stop bitching about autotools issues on the list ;)
[00:24] <wtay> branzo: not that I know of, openQT hasn't been updated
[00:24] <omega> openquicktime suffers from the same structural problems as avifile, or moreso
[00:24] <wtay> tromey: you're added to the list now
[00:25] <branzo> what about the mp4lib in MPEG4IP? I think it may be useful
[00:25] <tromey> thomasvs: don't stop on my account. In fact it is traditional to send the occasional "automake sucks" email to the automake list :-)
[00:25] <omega> "tradition!" <dances on the roof>
[00:25] <thomasvs> tromey: heh, I can imagine. Well I don't mind, now that I got to know it better I'm pretty much ok with it ;)
[00:25] <thomasvs> a bitch to learn though
[00:26] <wtay> thomasvs: yeah, yeah, let me dig up some IRC logs :)
[00:27] <thomasvs> heh, ok, I surrender
[00:27] <branzo> wtay (about your question before): xvid actually does not copy xvid.h to the include dir, I just did it by hand
[00:28] <wtay> branzo: ok
[00:29] Action: omega must prefers reading the MPEG spec to reading the JPEG spec
[00:29] <omega> s/must/muchj/
[00:29] <omega> gah, s/must/much/
[00:30] <thomasvs> so, anyone tried ccache ?
[00:30] <omega> ccache?
[00:30] <thomasvs> it's written by the samba guys
[00:30] <thomasvs> compiler cache
[00:30] <thomasvs> supposedly it retains information between builds
[00:30] <omega> never heard of it, url?
[00:31] <thomasvs> so if you do make clean, make - it uses the old info to see what would end up being compiled differently
[00:31] <thomasvs> ccache.samba.org
[00:31] <wtay> thomasvs: been reading lkml? :-)
[00:31] <thomasvs> so it gives a 6x improvement on samba compilation apparently
[00:31] <omega> transparent on top of auto* ?
[00:31] <thomasvs> wtay: no, I check freshmeat religiously
[00:31] <thomasvs> omega: yes - you set CC to ccache gcc
[00:31] <wtay> thomasvs: oh ok, AC mentioned it there..
[00:31] <thomasvs> only it doesn't seem to be doing that or I'm doing it wrong here ;)
[00:32] <thomasvs> wtay: been reading lkml? ;-)
[00:32] <thomasvs> I subscribed to it for two full days. and it was the one day digest version ;)
[00:32] <wtay> thomasvs: religiously :)
[00:37] gadek_ (~greg@...) left irc: "[x]chat"
[00:39] branzo (branzo@...) left irc: "using sirc version 2.211+KSIRC/1.1"
[00:42] <thomasvs> wtay: did it work well for the kernel ?
[00:42] <wtay> thomasvs: no idea
[00:44] Action: Uraeus interprets that as wtay having a relaxed relationship to religion :)
[00:50] <Uraeus> thomasvs: config.status: creating testsuite/seeking/Makefile
[00:50] <Uraeus> config.status: error: cannot find input file: testsuite/seeking/Makefile.in
[00:50] <Uraeus> error: Bad exit status from /var/tmp/rpm-tmp.88207 (%build)
[00:54] Action: tromey is away: I'm busy
[00:54] <thomasvs> Uraeus: hm, who added that this time ;)
[00:54] <Uraeus> thomasvs: wtay?
[00:54] <wtay> uh
[00:55] sjoerd (~sjoerd@...) left irc: Read error: 113 (No route to host)
[00:55] <thomasvs> try again
[00:55] <Uraeus> ok
[00:55] <wtay> thomasvs: remove
[00:55] <thomasvs> remove ?
[00:55] <thomasvs> why ?
[00:55] <wtay> from configure.ac?
[00:55] <wtay> or did I commit the dir?
[00:56] <thomasvs> I don't know - I added it to SUBDIRS in the testsuite dir
[01:01] <Uraeus> thomasvs: I still get the error
[01:03] <thomasvs> Uraeus: hm, wait
[01:06] <thomasvs> Uraeus: try again
[01:07] <Uraeus> ok
[01:15] <ajmitch> hi
[01:16] chillywilly (~danielb@...) left irc: "Philosophers and plow men, each must know his part, to sow a new mentality closer to the heart..."
[01:16] chillywilly (~danielb@...) joined #gstreamer.
[01:17] <wtay> Uraeus: a song sung wih the typical norsk enthousiasm :)
[01:17] <Uraeus> wtay: hehe
[01:18] <Uraeus> wtay: just say if you want more, I have lot of funny yet easy to understand songs like that
[01:19] <wtay> Uraeus: maYam is having her head into the dictionary already :)
[01:19] chillywilly (~danielb@...) left irc: Client Quit
[01:19] chillywilly (~danielb@...) joined #gstreamer.
[01:20] <Uraeus> wtay: kjiipt is slang for 'to bad/to sad'
[01:20] <wtay> Uraeus: aha
[01:24] <Uraeus> wtay: another one on the way :)
[01:24] <Uraeus> wtay: this one is about women
[01:26] <wtay> heh
[01:27] chillywilly (~danielb@...) left irc: "Philosophers and plow men, each must know his part, to sow a new mentality closer to the heart..."
[01:27] chillywilly (~danielb@...) joined #gstreamer.
[01:32] <Uraeus> wtay: maYam likes the positive norwegian attitude the song conveys?
[01:32] <Uraeus> finally we are getting a window manager selector for GNOME 2 :
[01:33] maYam (~mayam@...) joined #gstreamer.
[01:33] <Uraeus> hei maYam
[01:33] <maYam> hey Uraeus
[01:33] <ajmitch> hey maYam
[01:33] <maYam> thanks for the mp3 :)
[01:34] <maYam> hi ajmitch
[01:34] <Uraeus> maYam: håper du liker den
[01:34] <maYam> jaja
[01:34] <maYam> i don't understand everything
[01:34] <Uraeus> maYam: den var kjempepopular i Norge på slutten av 80 tallet
[01:34] <maYam> (well that's an understatement)
[01:35] <Uraeus> hehe
[01:35] <Uraeus> maYam: noen ord du trenger hjelp med?
[01:36] <maYam> i'm listening to it again now..
[01:36] <maYam> my norsk is getting rusty already!
[01:36] tromey (~tromey@...) left irc: Read error: 113 (No route to host)
[01:38] Action: maYam looking up lyrics ehem
[01:41] <thomasvs> night guys
[01:41] Nick change: thomasvs -> thomasvz
[01:41] <Uraeus> maYam: en ny sang er på vei, men jeg har litt treg oplink
[01:41] <Uraeus> thomasvz: night
[01:42] <chillywilly> lksjhdflhsdgofyt-978(GI&^(ghf i7c96rLJGCFUYRTE*65ro87ty8^R$*IDCKjghcf
[01:43] <wtay> Uraeus: heh, now she has the lirycs with guitar chords (need to find my guitar) :)
[01:43] <wtay> s/lirycs/lyrics/
[01:43] <Uraeus> cool :)
[01:43] Action: chillywilly is away: dinner
[01:43] <Uraeus> maYam: ok neste sang er nå sent
[01:45] <maYam> Uraeus: tusen takk - i see some words are cut off.. makes it more difficult
[01:45] <maYam> f eks 'alt jeg har er vaktmester'n og han ekk'e no' for meg'
[01:45] <Uraeus> maYam: not difficult, more exciting and challeging :)
[01:46] <Uraeus> maYam: forstår du hva det betyr?
[01:46] <Uraeus> :)
[01:46] <Uraeus> ekk'e == er ikke
[01:46] <Uraeus> no' == noe
[01:47] <maYam> ja, jeg tror at jeg forstår det..
[01:47] <maYam> test? ;)
[01:47] <Uraeus> ja, oversett det til engelsk
[01:48] <maYam> har du tid?
[01:48] <Uraeus> ja
[01:48] <Uraeus> hele natten
[01:48] <maYam> for faen..
[01:48] <Uraeus> hahaha
[01:49] <maYam> lykkelig at internetten er her til hjelp
[01:49] <maYam> waah see my norsk is getting rusty
[01:49] <maYam> i type quicker than i think ;)
[01:49] <Uraeus> s/my/min/
[01:50] <Uraeus> maYam: sanger er ganke morsom (på en tragisk måte)
[01:50] thomasvz (~thomas@...) left irc: "Client Exiting"
[01:50] <Uraeus> s/sanger/sangen/
[01:51] <maYam> ja sanger er også tragisk
[01:51] <Uraeus> maYam: har du mottat sang nummer 2? den heter 'jenter'
[01:52] <maYam> nei..
[01:52] <Uraeus> hmm, tror jeg får den i retur nå...
[01:52] <maYam> howcome i can't say simple things like 'no, not yet'..
[01:53] <Uraeus> nei, ikke enda
[01:53] <maYam> selvfølgelig
[01:53] thomasvs (~thomas@...) joined #gstreamer.
[01:53] <thomasvs> Uraeus: where was that orbit debug page again ?
[01:54] <Uraeus> thomasvs: gimme a sec
[01:55] <Uraeus> thomasvs:
[01:55] <Uraeus> maYam: utrolig hvor dårlig mail er til å overføre mp3 og ogg sanger
[01:56] <wtay> omega: what was the idea behind the timecachegroups again?
[01:56] <maYam> jeg venter..
[01:57] <Uraeus> maYam: hehe, jeg fikk sangen i retur, den er på 11MB :)
[01:57] <Uraeus> maYam: du skal få den på ftp isteden
[01:58] <maYam> ok
[01:58] <maYam> what do i have to do?
[02:00] <Uraeus> maYam:
[02:00] <maYam> aha
[02:00] <thomasvs> Uraeus: hm, thanks, but it doesn't seem to be helping me.
[02:00] <Uraeus> maYam: but wait it is still not fully uploaded
[02:00] <thomasvs> oh well, bed time anyway
[02:00] Nick change: thomasvs -> thomasvz
[02:00] <Uraeus> thomasvz: no
[02:00] <Uraeus> thomasvz: wait
[02:00] <thomasvz> wait what ?
[02:00] <Uraeus> thomasvz: see my mail first :)
[02:01] <omega> wtay: start a new grop when you seek to a new location in the file, bring the groups together when one grows into another
[02:01] <thomasvz> hm
[02:01] <omega> use more certain groups to rectify less certain groups
[02:01] <wtay> omega: oh, neat
[02:03] <thomasvz> Uraeus: hm, copy over the previous revision of gst-plugins.spec.in and see if that fixes it
[02:03] <Uraeus> thomasvz: ok, I test that tommorow
[02:04] Action: Uraeus wonders how come the defaults for ogg could end up creating a 11mb file
[02:05] <maYam> Uraeus: is it uploaded yet? i see a 6.5MB file
[02:05] <Uraeus> maYam: no, it is 11mb
[02:05] Action: Uraeus wonders if it is a wav file
[02:06] <maYam> yhmm.. very strange, such a big file.. must be a special song
[02:07] <Uraeus> maYam: it was a wav, I am re-uploading it now as an ogg
[02:07] <Uraeus> maYam: just 3 mb
[02:08] <maYam> lol
[02:08] <Uraeus> grip sucks
[02:09] <wtay> omega: is the timecache owned by the plugin or is it something the app can give to tc-aware plugins to fill it up?
[02:10] <Uraeus> maYam: er Forvever not yours av A-ha populær i Belgia nå?
[02:10] <Uraeus> maYam: ok, du kan laste ned sangen nå
[02:10] <omega> owned by the plugin, i.e. mp3parse
[02:10] <maYam> A-ha er ikke død?
[02:11] <wtay> omega: but the app should be able to fill it up with a stored index...
[02:11] <maYam> take on me er det siste jeg hørt..
[02:11] <Uraeus> maYam: nei, de er ute med nå plate med singelen 'forever not yours' på europatoppen
[02:11] <omega> wtay: maybe, but why?
[02:11] <wtay> omega: NLE index for imported media
[02:11] <omega> how does the app know anything about the mapping between the file offets and times?
[02:11] <omega> wtay: hrm, ok
[02:12] <Uraeus> maYam: laster du ned jenter.ogg nå?
[02:12] <maYam> ok!
[02:12] <omega> then give the timecache a save_thyself
[02:12] <wtay> omega: sure
[02:12] <maYam> 2.8MB, det seer bedre ut
[02:13] <maYam> ser
[02:13] <Uraeus> ja
[02:14] <maYam> o-o.. ogg123 doesn't work.. wtay!!
[02:15] <Uraeus> oops
[02:15] <wtay> maYam: f1x0r3d
[02:16] <Uraeus> maYam: kremt, la meg forsøke igjen :)
[02:16] <wtay> that's some very fine white noise
[02:16] Action: Uraeus whistes
[02:16] <wtay> Uraeus: does that play for you?
[02:17] <Uraeus> no :(
[02:17] <wtay> doh :)
[02:17] <maYam> was that an attempt to scare me, i'm almost deaf!
[02:17] <Uraeus> sorry, I didn't test it before sending it of
[02:17] <maYam> haha
[02:17] <Uraeus> reencoding it now
[02:18] <wtay> Uraeus: maybe you should oggenc the wav instead of the mp3 :)
[02:18] <maYam> Uraeus:du har hele natten
[02:19] <ajmitch> hehe
[02:20] <Uraeus> maYam: download the a-ha song in the meanwhile :)
[02:20] <wtay> plz no.. :)
[02:20] <Uraeus> wtay: you love it
[02:21] <maYam> ehm.. that's in english, right? i won't listen to A-ha unless it has a useful purpose
[02:21] <Uraeus> maYam: try it, it is nothing like take on me, they have matured since those days
[02:22] <wtay> Uraeus: oh, they have a male voice now?
[02:22] <maYam> Uraeus: have you ever listened to Immortal? now that's a true norwegian band you must be proud of!
[02:22] <Uraeus> maYam: never heard of em
[02:23] <Uraeus> but I read an interview with Mortiis today
[02:23] <RagingMind> A-ha has new music out?
[02:23] <maYam> ah! And what about Aeternus, and Mayhem?
[02:23] <Uraeus> RagingMind: yes
[02:23] <Uraeus> maYam: never heard of them either
[02:23] <Uraeus> RagingMind:
[02:24] <maYam> Uraeus: strange.. those norsk bands are getting so popular in europe, except in norway..
[02:24] <RagingMind> Uraeus: thanks
[02:24] <maYam> ok.. i'll download that a-ha file
[02:24] <maYam> (sorry wtay)
[02:25] <Uraeus> strange, the ogg is actually 11MB (me tries mp3)
[02:26] <wtay> maYam: you can donwload all you want, as long as you don't play it :)
[02:26] <Uraeus> wtay: listen and enjoy wtay, stop being so negative :)
[02:28] <wtay> oh boy..
[02:28] <Uraeus> hehe
[02:28] <Uraeus> wtay: you don't like the voice of the vocalist?
[02:28] <wtay> how trendy
[02:28] <Uraeus> wtay: yeah, just like you
[02:28] <wtay> pff
[02:28] <maYam> still castrated i hear
[02:29] <Uraeus> rofl
[02:29] <wtay> hehe, well said
[02:29] <Uraeus> maYam: don't mess with Morten, he jumped Aqua Lenes bones you know :)
[02:29] <wtay> iiiiiittt wooooont beeee loooong nooooow
[02:29] Action: Uraeus can actually hear wims voice in his head singing that :)
[02:30] <maYam> hohoho ;)
[02:30] <Uraeus> maYam: I send you as many books as you like if you play that song for Wim at least twice a day :)
[02:31] <maYam> 'i'll soooon be gone now' => yes morten, i hope so
[02:31] <maYam> Uraeus: it's a deal! i'll buy some earplugs tomorrow
[02:32] Action: wtay thinks the song now could do with some -1 octave pitch shift
[02:32] Action: Uraeus feels maYam is dissing his music
[02:32] <Uraeus> maYam: ok, jenter.mp3 is ready for download (and yes it is tested and 4mb)
[02:33] <maYam> alright..
[02:33] Action: Uraeus has A-ha concert ticket for june
[02:33] dondas (~jussi@...) joined #gstreamer.
[02:34] <maYam> oh poor guy, who gave you that ticket?!
[02:34] <Uraeus> maYam: I paid for it myself :)
[02:34] <dondas> Is it possible to use a webcam with gstreamer to add realtime effects? Any docs available?
[02:35] <wtay> dondas: a v4l compatible webcam?
[02:35] <dondas> yes
[02:35] <wtay> dondas: try gst-launch v4lsrc ! vertigoTV ! colorspace ! sdlvideosink
[02:36] <maYam> Uraeus: 'jenter' er lettere å forstå
[02:37] <Uraeus> maYam: ja, mindre slang
[02:38] <Uraeus> maYam: rosekjeller'n var en kjent strippebar i Oslo på slutten av 60 tallet
[02:38] <dondas> wtay: I get ERROR: pipeline could not be constructed: No such element v4lsrc
[02:39] <Uraeus> maYam: the jenter song is the story of my life :)
[02:39] <maYam> lol
[02:39] <wtay> dondas: what does gst-inspect say?
[02:40] <dondas> wtay: nothing about v4lsrc :(
[02:41] <wtay> dondas: but lots of other stuff? maybe sys/v4l/ was not built
[02:41] <Uraeus> maYam: jeg har en sang til fra de samme som laget 'jenter' hvis du vil ha
[02:42] <maYam> ja, jeg liker det.. *heyhey hoho* cool :)
[02:42] <Uraeus> RagingMind: are you done downloading? (so I can delete the song?)
[02:43] <maYam> hvem er sangeren?
[02:43] <RagingMind> Uraeus: not yet :(
[02:44] <RagingMind> Uraeus: ETA 1:15
[02:44] <dondas> wtay: is it in gstreamer or in gst-plugins?
[02:44] <Uraeus> maYam: it is a duo, they call themselves Trøste & Bære , don't know their real names
[02:44] <Uraeus> RagingMind: np
[02:45] <Uraeus> maYam: the jenter song and the one I am uploading now are the ones I think you can understand, the others contain to much dialect jokes and wordplay
[02:45] <RagingMind> Uraeus: done :)
[02:45] <Uraeus> ok
[02:45] <wtay> dondas: gst-plugins
[02:46] <Uraeus> maYam: ok, 'julebordet er over' klar for nedlasting
[02:47] <dondas> wtay: OK, will check it out tomorrow. thanks. btw. Is there any practical documentation on this sort of thing?
[02:47] <Uraeus> dondas: the man pages
[02:47] <wtay> dondas: no, not yet, I'm afraid
[02:49] <dondas> perhaps, I'll write an article or something in the wiki then.
[02:49] <maYam> Uraeus: jeg har noen ting for deg også.. en stund..
[02:50] <Uraeus> maYam: I am uploading even one more song, 'siste gutten i klassen', I think it is very funny, but I am not sure how easy it is for you to understand it
[02:51] dondas (~jussi@...) left irc: "Think out of the bochs."
[02:52] <maYam> Uraeus: great :) and if i don't understand, i'll look up the lyrics
[02:52] <Uraeus> maYam: please download also 'siste gutten i klassen' now
[02:52] Nick change: omega -> omega_f00d
[02:53] <Uraeus> maYam: have you gotten the julebordet song?
[02:53] <maYam> Uraeus: downloading..
[02:53] <maYam> Uraeus: yes
[02:53] <wtay> Uraeus: here's a singalong for you:
[02:54] <Uraeus> wtay: ok :)
[02:54] <maYam> Uraeus: Obtained Enslavement: one of the best norwegian bands ever!
[02:55] <Uraeus> maYam: regarding the julebordet song, remember that 'over' means both over (as in ended) and above
[02:55] Action: wtay promises to erase the song before being accused of abusing sf...
[02:57] <maYam> Uraeus: yes.. it's exactly the same in dutch
[02:57] <Uraeus> hey, I recognize this Obtained Enslavement song, we use it as elevator music here in norway ;)
[02:58] <maYam> yes, A-ha would be too heavy i suppose ;)
[02:59] <Uraeus> maYam: yeah, they know more than two tones on their guitar :)
[02:59] <maYam> what guitar? ;)
[02:59] <Uraeus> hehe | https://sourceforge.net/p/gstreamer/mailman/gstreamer-daily/?viewmonth=200205&viewday=11 | CC-MAIN-2018-22 | refinedweb | 20,254 | 73.92 |
We create a simple file reading program in exercise 8. As usual, there were a couple of ways to accomplish this. I chose to declare my input file and, as long as we did not receive an error opening the file or reach end of file, use isprint to count any characters in the file. Mind you, "file.txt" must be placed in the same directory as your program, with some characters in it. See my source below:
8. Write a program that opens a text file, reads it character-by-character to the end of the file, and reports the number of characters in the file.
#include <iostream>
#include <fstream>
#include <cstdlib>
#include <cctype>   // for isprint
using namespace std;

int main()
{
    ifstream inFile;
    inFile.open("file.txt");

    // Fail-safe
    if (!inFile.is_open())
    {
        cout << "Failed to open: file.txt" << endl;
        cout << "Kthxby" << endl;
        exit(EXIT_FAILURE);
    }

    char letter;
    int count = 0;

    inFile >> letter;
    while (!inFile.eof())
    {
        if (isprint(letter))
            count++;
        inFile >> letter;
    }

    cout << "\n" << "Number of characters is: " << count << endl;

    // Check for EOF
    if (inFile.eof())
        cout << "End of file found.\n";
    else if (inFile.fail())
        cout << "Data mismatch.\n";
    else
        cout << "Input terminated for unknown reason.\n";

    // Clean
    inFile.close();
    cin.get();
    return 0;
}
Transporting PI objects with NWDS (using CTS+) for Beginners
For beginners in PI, it has always been a challenge to understand the transport mechanism for PI objects. That drove me to write a simple blog showing beginners how to attach PI developments to transports.
Note: PI version being used is 7.31 with IFLOWS and CTS+ as transport strategy
1. Open NWDS process integration perspective and open CTS Organizer as shown below
2. Provide the logon information
3. Let us start by creating a transport request
4. Give the description and check “Preselect Request”
5. First we will get the ESR developments assigned to this request. Open ESR, select the development namespace, right click and select Export
6. Make sure the mode is selected as "Transport Using CTS" and then click continue. If this option does not appear, report it to your Basis team
7. In this screen select the namespace (all objects under this namespace will be transported). Click continue
8. In this screen the request we created should automatically appear since we preselected this request. Click on continue
9. All objects transported will be shown in this screen and click on finish to complete the export of ESR objects. Please do remember to attach any dependent objects also to this request by following the same process as above
10. Next, let us attach the IFLOW to the request. Navigate to the Export PI objects screen as shown below
11. Select transport type as CTS+ and Object type as Integration Flows. Click on next
12. Select the integration flow that needs to be assigned to transport
13. In this screen, all objects of the integration flow are shown. If any of the common components used across integration flows have already been transported as a prerequisite transport, please unselect them now. Click on next
14. Give the Export Description and click on finish to complete the export
15. All exports will be shown under the Object List tab of the transport request
16. Once the above process is done, please release the transport from the CTS Organizer and inform Basis to import it to the target system. (Before importing to the target system, make sure the PI administrator has maintained the required transport targets in the SLD)
17. Once we log in to the target system we can see that the ESR objects are automatically activated and the integration flow will be in the change list. We need to update the channel properties and activate
Very informative blog. Really helpful. Thanks for the blog Sreedhar!
Thanks!
Hello Sreedhar,
great blog post, thanks! Do you have further information how to enable cts+ in NWDS? Basis configure CTS+ for PO but we do not see the option in NWDS.
Regards,
Markus
Markus,
As of now I see this option only for "Process Integration" perspectives. I doubt it is supported for all others.
Thanks,
Sreedhar
Sreedhar,
it's a very informative blog!
My transported iFlow uses an Integrated Configuration.
Let me ask if you have any idea, how to transport the linkage between ICO and iFlow?
If I transport the iFlow from NWDS, it "lost" its link to the ICO object in the Swing Tool.
Thanks in advance for any ideas!
Kind regards,
Andras
Andras,
If we transport IFLOW all objects assosiated with it (channels, ICOs) will be transferred and the link will be retained, Can you please cross check all your configurations?
Thanks,
Sreedhar
Sreedhar,
I see your point. Indeed, all the related objects are present (channels, ICO, etc.) after transporting, the only thing which I'm still missing is the linkage of iFlow object to the used ICO.
If you open the iFlow in the SWING tool, it remains empty, not showing any used ICO (although it is there in the same integration scenario).
Maybe you faced this issue already...
Thanks in advance!
Andras
Andras,
We never faced this issue. My suggestion is to re-deploy the iflow in QA through NWDS and check once.
Thanks,
Sreedhar
Hi Sreedhar,
Good Evening!
Fantabulous blog! Keep up the good work!
Thank you for sharing! Keep sharing more PI Technical stuff!
All the best!
Regards,
Hari Suseelan
Nice one.
Divyesh
Hi Sreedhar
Can't we export the ESR objects from NWDS?
I don't see any options in NWDS for exporting ESR objects
Hi Indrajit,
this is not possible so far. We are missing this feature as well.
Have a look at this for a high level overview:
Consolidated view on release notes for Process Integration and Orchestration
Hi all,
Sreedhar, thanks for the information. This really comes in handy.
@All:
1. What is the best practice to transport iFlows from NWDS?
I currently do this by going to Process Integration --> Transport --> Export PI Objects and then selecting CTS+ and iFlow. This works fine, however, I still have to switch to IB say in QAS system, to transfer change list after transport. After that I again need to switch tools back to NWDS in QAS to activate because when activating within IB the iFlow does not get deployed and hence no ICO will be associated (see Andras' post above).
Is this really the best way to cope with this? I would like to see all the transport stuff happen in NWDS and not switching tools for this.
2. How to best transport alert rules (AEX / Java-Only)
I know that alert rules are ID objects and may be transported. Again, I'd like to know if there's a way to do this via NWDS; I haven't found one so far.
Thanks and kind regards
Jens
Dear Jens,
I can help you out with point 1 ;-)...
This is known as hidden feature in NWDS:
Go to "My Changes" and look for the small triangle:
And go for Apply Changes.
That can make you happy.
Just to let you know, it is not possible to auto deploy objects by now. We miss this feature, and addressed it to SAP. Otherwise time triggered transports are useless using iFlows...
Regards,
Markus
Dear Markus,
this is great. Just tried it and worked like a charm 🙂 Thanks for the tip.
Thanks for sharing. Nice blog. | https://blogs.sap.com/2013/10/07/transporting-pi-objects-with-nwds-using-cts-for-beginers/ | CC-MAIN-2021-49 | refinedweb | 1,028 | 75.5 |
GameFromScratch.com.
As I mentioned earlier, the history of Cocos2D is fairly important to understanding how it works, so we are going to start off with a quick, hopefully not-boring, history lesson. I promise you, this will be the only history lesson in this entire series! Unless of course I do more of them… If you want a more thorough history, you can always check out the wiki.
Ok… history time.
Way back in 2008, Cocos came to be, named after the town of Los Cocos, Argentina, in case you were wondering where exactly the name came from. It started from a gathering of Python developers in, you guessed it, Los Cocos. As you may be able to guess from the fact it was started by a bunch of Python developers, Cocos started off being written in Python.
Then came along this little phone named the iPhone, and a version of Cocos2D was ported to ObjectiveC for use on iOS, the aptly named cocos2d-iphone. A number of Cocos2d-iphone developed apps started appearing on the iPhone, including StickWars, which hit number 1 on the App Store. Now we hit the fast forward button on history and see that Cocos2d is ported to a number of platforms.
One of those ports was of course Cocos2d-x, which was a port of Cocos2D to C++, the subject of our tutorial here. Cocos2d-x itself also spawned a number of ports, including HTML and XNA. Along the way a number of tools were developed as well, including an editor named CocosStudio (itself the spawn of a number of child projects) and CocosCodeIDE, an IDE for Lua and JavaScript scripting in Cocos2d-x.
So, why does this all matter?
Well, it’s important that you be aware that Cocos2d-x is a port of an Objective-C library which itself was a port of a Python library. Each language and platform has had an effect on the development of Cocos2d-x, for better or worse. You will run into some concepts and think “why the hell did they do this?”. More often than not, it’s Cocos2D’s history that provides the reason.
One final important thing to realize with Cocos2d-x: a number of the most active developers behind the project are not primarily English speakers. Cocos2d-x is extremely popular in China, for example. This is by no means a negative, but be aware that sometimes language can be a bit of a barrier when looking for help and reading documentation.
At the point I am writing this, I am using Version 3.3 beta 0 to create tutorials, and as new versions are released I will try to stay with the most recent version. This is because I am trying to future proof this series as much as possible. In all honesty, I know this is going to be quite annoying as well; when I created my cocos2d-html5 tutorial series, the number one problem was version changes. Cocos2d-x is a library that gets refactored quite a bit, so if you are far in the future and some code I provided doesn't work, this is probably why. Always make sure to read the comments at the bottom of each part; they may contain clues to the problem you are facing.
So then, what version should you use? That answer is a bit trickier. You have a choice between Cocos2d 2.x, 3.0 or 3.x right now. The 2.x version is obviously the older version and less actively developed, if at all (edit – according to this thread, 2.x is no longer being supported). That said, 2.x also supports the most platforms, including Windows Phone, Marmalade, Tizen, Blackberry and more. Additionally, as of writing, every single book targets 2.x. 3.2 is the (currently) stable release of the most current version, while 3.x is the development version.
Again, I will be using the most current version as I go, and if history has taught me anything, this is going to lead to tons of issues! ;) Warning, Here be dragons!
In order to get started with Cocos2d-x, you need to have a couple things installed already, depending on platform you are developing on.
Obviously you need a C++ compiler. If you are working on Windows, Visual Studio 2013 is currently the recommended version. You can download a free version named Visual Studio Express for Windows Desktop ( note, there is also a version called Visual Studio Express for Windows, you do NOT want this version… yeah, brilliant naming by Microsoft there eh? ). Of course if you have a complete version installed it will work fine as well. You can also use older versions of Visual Studio, back to 2010 I believe, but this series will assume you are using the most recent version.
On Mac OS, Xcode is the obvious solution. It’s also free, so that’s nice. As of writing Xcode 6 is currently in late beta, but will work just fine. Xcode 5 should also work just fine. Personally I am not a huge Xcode fan and use AppCode for development on Mac, but it is not a free tool. You may see it on occasion in screenshots, so I figured I would put it out there. By default Xcode does not install the command line tools, so I would install those as well, you can find instructions here and official documentation here.
You also need to have Python installed. Don't worry, Python isn't used to code in Cocos2d-x, but some of the tools require it, including the tool you use to create your project, so obviously this install is pretty important. Another important note, so I'm going to use bold and shout at you for a second. YOU NEED TO INSTALL PYTHON 2.7.x! The newest version, Python 3.x, does not work and you will be wasting your time. So go ahead and download Python 2.7.x here. On Windows you want to make sure Python is added to the PATH environment variable. If it isn't, you can get instructions here.
Finally if you are intending to develop for Android, you need to have a version of the Android SDK, ANT and Android NDK installed. You need at least version 9 of the NDK ( 10 is the current as of writing. EDIT – NDK 10 currently doesn’t work! Read here for details. Here for the ticket. There is a work-around, but using NDK 9 is probably your easiest bet ) to work with Cocos2d-x. Now to make life slightly more complicated, if you are on Windows, the Android NDK also requires Cygwin 1.7 or higher to be installed. Fortunately, Cygwin has no further requirements. When downloading the Android SDK, do not download the ADT package, but instead scroll further down the page and install using the “Get the SDK for an existing IDE” link. As an FYI, the SDK is the Java SDK along with the tools needed for Android development, the NDK is the C++ toolchain for Android development, while Ant is a Java build system.
Please note, Cocos2d-x can be used with other IDEs such as Eclipse or Qt Creator, but I will not be covering the process in this tutorial.
Oh and of course you need to download cocos2d-x itself! Simply download and extract the library somewhere on your system.
Ok, now that you’ve got everything installed and configured, it’s time to create a project. Open up a terminal window or command line and change to the directory you extracted cocos2d-x.
Enter:
./setup.py
source ~/.profile
cocos new -l cpp -p com.gamefromscratch.gamename -d ~/Documents/Projects/cocos2d gamename
Open a command prompt and CD to the directory you extracted Cocos2D to. Run the command:
python setup.py
If you get an error about not being able to find Python, that PATH is not configured correctly. Depending if you have certain environment variables set or not, the install may now ask you the install directory of your Android NDK, SDK as well as Ant, provide them. If you’ve not installed the NDK and SDK before now, do so before performing this step.
The next step depends on your operating system version. If you are running Windows XP ( and possibly Vista ), you now need to restarted your computer for the changes to take effect. If you are running Windows 7 or 8.x, simply close your command prompt and open a new one.
Now type:
cocos new -l cpp -p com.gamefromscratch.gamename -d C:\path\to\game\here gamename
The tool to create cocos projects is “cocos” and it resides in [cocosSDKfolder]/tools/cocos2d-console/bin. -l is an L by the way, this is where you specify the language for the project you want to create. The options are cpp and lua currently, in this case we want cpp. -p is for specifying the package, mostly for Android I assume. This uses Java’s standard reverse domain name format. Don’t worry if you don’t have a website, make something up. The -d parameter is the directory where you want to create the project.
Now that our project is (hopefully!) created, lets take a look at what it’s created for us.
Here you can see it has created a number of key directories for you, we will take a closer look at each one.
Each folder prefixed with proj. is where project files and platform specific code goes, be it android, iOS and Mac, linux, Windows or Windows Metro ( or.. what was previously known as Metro ).
The cocos2d folder however is where the cocos SDK itself is copied. This is a complete copy of Cocos2d, including docs, libraries, headers, etc. Just a warning, this folder is 250MB in size and will be created for each cocos2D project you create using cocos new! You can set up your projects to use a common install of cocos2d-x, by specifying the engine path when calling cocos new. Just be aware, if you are tight on space and are going to be making a number of cocos2d-x projects, you may want to look into this further.
The Resources folder is a common repository for all the various assets that your game will use, such as graphics, sound, etc. The Classes folder is perhaps the most important of all; this is where your non-platform-specific code goes! Right now the contents should look like:
These code files create a simple application to get you started, although we are ultimately going to replace the contents. The term AppDelegate comes from Mac programming, so if you are a Windows or Linux developer, it might be a bit alien to you. An AppDelegate is a helper object that goes with the main window and handles events common to applications, such as starting up, minimizing and closing. You won't really spend much time here; the Scene files are instead where most of your work happens. We will look at code shortly so each piece will make a bit more sense.
Now let’s look at the platform specific portions for both win32 and ios_mac.
win32:
ios_mac:
As you can see, each folder contains all the platform specific code, resources and most importantly, project files for each platform. In the case of ios_mac, it further contains platform specific folders for each platform.
All platforms have their own unique entry point (main, WinMain, etc.) and different ways of handling different things. Most of this is only relevant on startup, and cocos2d-x takes care of it for you. However, at some point in the future you may need to add platform-specific code, such as perhaps an ad network that only works on iOS; this is where it would go. That said, 99% of your game logic should be put in the common Classes folder. This makes it so you can write your code on one platform, then simply open up the project files for another platform and run your game. This is how you are able to handle many platforms with a single code base using cocos2d-x.
To get started developing on MacOS, double click the .xcodeproj in the folder proj.ios_mac. This should automatically load Xcode for you. Now at the top bar you should be able to select which project you want, iOS or Mac. As iOS requires the simulator or a device to execute, initially developing on the Mac can be a great deal quicker.
To get started developing on Windows, double click the .sln file in the folder proj.win32. This will load Visual Studio for you. Simply press Play (Local Windows Debugger) to start the compilation process:
Once you’ve selected a target, simply click the Play icon. Your project will now compile and a few minutes later you should see:
If you are new to C++, don’t worry, the first compilation is always the nastiest. From now on when you press play, the compilation should be many times faster.
You can also run directly from the terminal using the cocos utility, like so:
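For example, assuming you are inside the project folder (the exact flags shown here are an illustration; the -s option points at the project directory and can be dropped when running from within it):

```
cocos run -s . -p mac
```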
Use -p ios to run iOS. This command requires you to have installed the command line tools mentioned earlier. Running from the terminal makes it so you don’t have to open the Xcode IDE if you prefer.
Now let’s take a look at the minimal useful cocos2d-x application. While the cocos new created project is a Hello World application of sorts, it’s a pretty sloppy starting point. First, it is needlessly complicated for an app that is supposed to be a minimal example; it’s commented in a manner that only makes sense if you come from an Objective-C background; and finally, it even uses deprecated methods.
Therefore we are going to look at a cleaner Hello World sample. We simply replace the code in each file with the code I provide below. Don’t worry, all of the functionality we hack out will be covered in future tutorials.
Let’s start with the AppDelegate class.
AppDelegate.h
#pragma once
#include "cocos2d.h"
class AppDelegate : private cocos2d::Application
{
public:
AppDelegate();
virtual ~AppDelegate();
virtual bool applicationDidFinishLaunching();
virtual void applicationDidEnterBackground();
virtual void applicationWillEnterForeground();
};
AppDelegate.cpp
#include "AppDelegate.h"
#include "HelloWorldScene.h"
USING_NS_CC;
AppDelegate::AppDelegate() {
}
AppDelegate::~AppDelegate()
{
}
bool AppDelegate::applicationDidFinishLaunching() {
auto director = Director::getInstance();
auto glview = director->getOpenGLView();
if(!glview) {
glview = GLViewImpl::create("Hello World");
glview->setFrameSize(640, 480);
director->setOpenGLView(glview);
}
auto scene = HelloWorld::createScene();
director->runWithScene(scene);
return true;
}
void AppDelegate::applicationDidEnterBackground() {
}
void AppDelegate::applicationWillEnterForeground() {
}
The biggest change I have made from the default implementation is to remove all but the barest requirements of an application. You may notice I’ve also replaced the include guards with pragma once statements. Some people will find this controversial because pragma once isn’t standard and is therefore not portable. This may be true if you are using a compiler from 1985. If on the other hand you are using any modern C++ compiler, pragma once is supported. Include guards and pragma once perform the same task, except pragma once is more concise and less error prone. If you want to switch back to include guards, feel free. From this point on, however, I will not be using them.
OK, back to the code itself. Our AppDelegate header is pretty straightforward: it declares a constructor, destructor and three methods, applicationDidFinishLaunching, applicationDidEnterBackground and applicationWillEnterForeground. All three of these methods are pure virtual functions from ApplicationProtocol, from which Application ( and in turn AppDelegate ) inherits, so we must provide an implementation of each, even if it’s empty.
Now onto AppDelegate.cpp. First we start off with the macro USING_NS_CC;, which is just short for “using namespace cocos2d”. Personally I don’t see a big win in using a macro over typing using namespace cocos2d, and generally I find many uses of macros disagreeable. This however is the style the cocos team went with, so I will follow along. As you can see, our constructor, destructor, applicationDidEnterBackground and applicationWillEnterForeground all have empty implementations, so applicationDidFinishLaunching is where all of our logic resides.
If these names seem a bit long winded to you, they have been taken directly from the iOS world. Basically the enterBackground/enterForeground methods are called when your application loses and regains focus, while applicationDidFinishLaunching is called when your application is loaded ( at the end of the loading process ). Here we get an instance of the Director singleton, then use it to either get the GLView, or create a GLViewImpl, which is a default implementation of GLView. Basically GLView is the OpenGL representation of your window or screen, depending on what kind of device you are running. We then set the resolution of the window ( this is not required, I just wanted a smaller resolution for screen shots ) by calling setFrameSize(), then set the view as active by calling Director’s setOpenGLView(). Now that we have a window, we create an instance of our scene by calling createScene() and once again use the Director to set this scene active using runWithScene().
You may notice in the above code that Director is very important to the operation of Cocos2d-x. Director is an implementation of a design pattern known as a Singleton, or as some would say, an anti-pattern. If you’ve spent much time on programming forums, you will see thread after thread calling Singletons evil. In a nutshell, a singleton is a delayed, but guaranteed to be instantiated global variable in a pretty dress. Sometimes too, a global variable is just what you need, which is why you will find a number of game engines make use of singletons to provide globally available interfaces. At this point the matter is pretty much moot, if you use Cocos2d-x, you use Director, or you don’t use Cocos2d-x.
One of the major sources of bugs with Singletons, especially in C++, is multithreading. When you have this global instance being accessed from all kinds of locations and controlling so many things, how do you handle concurrent requests? Well have I got good news for you! You don’t. :)
That’s because Cocos2d-x isn’t thread safe. Or more accurately, Cocos2d-x’s memory management ( anything derived from cocos2d::Ref ) and OpenGL rendering aren’t thread safe. To make this clear, you can use threads in a Cocos2d-x application, but you need to be very careful about what those threads interact with. Other than the threading problems, some of the biggest problems that come from using Singletons are related to code maintenance, as you are coupling so many systems together. Fortunately, this is the cocos2d-x team’s problem to deal with, not yours. Unless of course you are on the Cocos2d-x team, that is.
Now let's take a look at our scene, HelloWorldScene.
HelloWorldScene.h
#pragma once
#include "cocos2d.h"
class HelloWorld : public cocos2d::Layer
{
public:
static cocos2d::Scene* createScene();
virtual bool init();
CREATE_FUNC(HelloWorld);
};
HelloWorldScene.cpp
#include "HelloWorldScene.h"
USING_NS_CC;
Scene* HelloWorld::createScene()
{
// 'scene' is an autorelease object
auto scene = Scene::create();
auto layer = HelloWorld::create();
scene->addChild(layer);
return scene;
}
bool HelloWorld::init()
{
if ( !Layer::init() )
{
return false;
}
auto label = Label::createWithSystemFont("Hello World", "Arial", 96);
label->setAnchorPoint(cocos2d::Vec2(0.0, 0.0));
this->addChild(label, 1);
return true;
}
In our header once again I’ve replaced the header guard with pragma once. We are declaring our scene class HelloWorld, which inherits from Layer, a Node that can receive input events such as touch, keys and motion. We declare createScene(), a static method that returns a Scene pointer. As you may recall, we called this method earlier in AppDelegate to create our scene. We also override the method init that we inherited from Node, which is where we do our initialization logic. Finally there is a bit of macro magic in the form of CREATE_FUNC(HelloWorld).
Let’s take a quick look at exactly what this macro is doing:
#define CREATE_FUNC(__TYPE__) \
static __TYPE__* create() \
{ \
__TYPE__ *pRet = new __TYPE__(); \
if (pRet && pRet->init()) \
{ \
pRet->autorelease(); \
return pRet; \
} \
else \
{ \
delete pRet; \
pRet = NULL; \
return NULL; \
} \
}
Granted, it’s not always the easiest code to read, as this is code for generating code, but essentially after this macro runs, we’ve got:
static HelloWorld* create(){
HelloWorld *pRet = new HelloWorld();
if (pRet && pRet->init())
{
pRet->autorelease();
return pRet;
}
else
{
delete pRet;
pRet = NULL;
return NULL;
}
}
So, essentially, the macro is creating a create() function that allocates an instance of our class, calls the init method that we provided and then, most importantly, calls the autorelease() method we inherited from Ref. I will cover why this is important in a few minutes, depending of course on how fast you read. :)
Now on to HelloWorldScene.cpp. The createScene() method is pretty straightforward. We create a Scene object, then an instance of our HelloWorld class ( which inherits from Layer ), add our layer to the scene and return the scene ( which our AppDelegate then passed to Director’s runWithScene() ).
In init() we perform the bulk of our logic. First we call our base class’s init function ( which is very important to do ), then create a Label using createWithSystemFont(), which, predictably enough, uses the built in system font ( in this case Arial ) to create the Label’s text. We then set the Label’s anchor point to the bottom left, which means this node will be positioned relative to its bottom left corner. I will cover anchor points in more detail in the next tutorial, so ignore this for now. We then add the freshly created Label to our layer. Finally we return true to indicate that initialization worked as expected.
Now if we run our “game”, we should see:
So you may have noticed we create all kinds of pointers using create() calls, but never once called delete. As you can see in the code generated by CREATE_FUNC, we are creating a new instance of our class, but we never delete it. Aren’t we leaking memory here? Thankfully the answer is no, and the reason lies in the calls to init() and autorelease() that the macro made, and in why it was so important that we called our base class’s init() in our own init method.
This is another legacy of Cocos2d-x’s Objective-C roots. Objective-C provides a form of memory management via ARC, Automatic Reference Counting. Basically, each time something references an object, its count is increased; each time a reference to the object goes away, the count is decreased; when the count hits zero, the object is released. In many ways, this is pretty much the same functionality C++ smart pointers provide, but Cocos2d predates the standardization of smart pointers.
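To make the counting concrete, here is a toy sketch of the idea; ToyRef is a hypothetical class for illustration only, not the real cocos2d::Ref implementation:

```cpp
// Hypothetical illustration of Ref-style reference counting.
// Creation implies one reference; retain() adds one, release()
// removes one, and the object frees itself when the count hits zero.
class ToyRef {
public:
    ToyRef() : count(1) {}
    void retain()  { ++count; }
    void release() { if (--count == 0) delete this; }
    int referenceCount() const { return count; }
private:
    ~ToyRef() {}   // private: only release() may destroy the object
    int count;
};
```

The private destructor mirrors the rule that you never delete such an object yourself; the last release() call is what actually frees it.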
There is more to their usage that we will cover later on. For now though, you can safely assume that any cocos2d object created with a create() function, and that inherits from Ref, does not need to be deleted. In fact, such objects must not be deleted!
In the next part we will take a look at working with graphics in Cocos2d-x. Don’t worry, it will be much less verbose and much heavier in code!
The following program is not state of the art, but it's simple, and easy to modify. For simplicity, it doesn't even check for errors, but if you really want, copy the relevant code segments here.
The pgm header looks like this:
P5 - this tells the image viewer it's a pgm file
#Comment - Which can come anywhere, but I put it here
columns rows - number of columns and rows
255 - this can actually be any number. It tells the image viewer the maximum value of a pixel in the image. Leave it at 255, which gives you 8-bit greyscale (or regular 8-bit-per-channel RGB for ppm). I honestly can't think of a situation where you'll need a pgm/ppm file with more. If the maximum pixel value is less than 255, leave it at 255 anyway.
For ppm, change the P5 to P6.
There are breaks at the end of each line. When converting to raw, my program (here) prints out the number of rows and columns. You'll have to enter their values in the following program.
Wherever 512 is written, change it to the number of columns, followed by the number of rows. (I've used 512*512 here as it is the standard size).
/****************************
*
* raw2pgm.cpp - makes a .pgm picture out of the .raw picture
*
****************************/
#include <stdio.h>
int main()
{
    // open raw file and pgm file
    FILE *fin = fopen("FILE_NAME.raw", "rb");
    FILE *fout = fopen("FILE_NAME.pgm", "wb");
    unsigned char buffer[512*512];
    // write header
    fprintf(fout, "P5\n#Created by Footprints\n512 512\n255\n");
    // copy file
    fread(buffer, 1, sizeof(buffer), fin);
    fwrite(buffer, 1, sizeof(buffer), fout);
    // close files
    fclose(fin);
    fclose(fout);
    return 0;
}
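If you want to check that your viewer reads the header correctly before converting real .raw data, the following self-contained variant writes a small synthetic gradient instead; the file name and dimensions are arbitrary choices, not part of the original program:

```cpp
#include <stdio.h>

// Writes a left-to-right greyscale gradient as a binary .pgm file.
// Returns 0 on success, 1 if the file could not be opened.
int write_gradient_pgm(const char *name, int cols, int rows)
{
    FILE *fout = fopen(name, "wb");
    if (!fout) return 1;

    // same P5 header layout as the converter above
    fprintf(fout, "P5\n#Created by a test program\n%d %d\n255\n", cols, rows);

    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            fputc(c * 255 / (cols - 1), fout);   // 0 on the left, 255 on the right

    fclose(fout);
    return 0;
}
```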
Hi,

On Sat, May 19, 2001 at 05:29:32PM +1200, Chris Wedgwood wrote:

> a thing. I brought this a while ago and in theory it's not too hard, we
> just need to get Hans to officially designate part of the SB or
> whatever for the UUID.

There are other ways to deal with it: both md and (I think, in newer
releases) LVM can pick up their logical config from scanning physical
volumes for IDs, and so present a consistent logical device namespace
despite physical devices moving around.

--Stephen
Anaconda crashed while attempting an install of rawhide 20050528 on software
RAID using two I2O arrays /dev/i2o/hda and /dev/i2o/hdb as disks.
# partition table of /dev/i2o/hda (identical on /dev/i2o/hdb)
# (hda1 -> md1) /boot
# (hda2 -> md2) swap
# (hda3 -> md0) /
unit: sectors
/dev/i2o/hda1 : start= 63, size= 514017, Id=fd, bootable
/dev/i2o/hda2 : start= 514080, size= 1012095, Id=fd
/dev/i2o/hda3 : start= 1526175, size= 70236180, Id=fd
/dev/i2o/hda4 : start= 0, size= 0, Id= 0
I have the same crash with existing arrays (built from regular IDE disk partitions).
partedUtils.DiskSet.mdList is empty here
I solved that by starting the raids in fsset.py:mdadmConf().
RCS file: /usr/local/CVS/anaconda/fsset.py,v
retrieving revision 1.253.2.1
diff -u -p -u -r1.253.2.1 fsset.py
--- fsset.py 25 May 2005 18:53:39 -0000 1.253.2.1
+++ fsset.py 29 May 2005 22:56:12 -0000
@@ -1123,6 +1123,9 @@ class FileSystemSet:
     def mdadmConf(self):
         raident = 0
+        diskset = partedUtils.DiskSet()
+        diskset.startAllRaid()
+
         cf = """
 # mdadm.conf written out by anaconda
 DEVICE partitions
@@ -1135,6 +1138,7 @@ MAILADDR root
             raident +=1
             cf = cf + ent.device.mdadmLine()
+        diskset.stopAllRaid()
         if raident > 0:
             return cf
         return
I assume there may be a better way to fix that.
I think that should be targeted at FC4.
Ronny - can you explain in more detail what you have for partitioning before
starting?
Applied a fix for Ronny's problem. Warren, please test with FC4 at your leisure
and advise if it works for you or not.
Warren,
Kindly reopen if this one is not fixed yet | https://partner-bugzilla.redhat.com/show_bug.cgi?id=159079 | CC-MAIN-2020-05 | refinedweb | 277 | 65.93 |
In this 2-part series, Google Developer Experts Jurgen Van de Moere and Todd Motto share their 12 favorite productivity tips for developing Angular applications using WebStorm.
You can check out part one here. In this second part, Todd shares his personal top 7 WebStorm features that allow him to increase his productivity on a daily basis:
Use Import Path Calculation
Live Templates
Run Tests within the IDE
Travel through Time
Use TypeScript Parameter Hints
Navigate using Breadcrumbs
And using WebStorm to look up Angular Documentation
Each tip will power up your productivity while developing Angular applications in WebStorm. Let’s explore these tips.
Before we get started!
When making changes to settings, remember that WebStorm allows you to change Settings/Preferences at an IDE scope and at a project scope separately.

Tip 6: Use Import Path Calculation

WebStorm calculates import paths for you as you reference new symbols. Some modules, however, should not be imported from their top-level entry point; you can tell WebStorm to skip such a module by adding it to the "do not import exactly from" list in the TypeScript import settings, for example: 'rxjs'

Adding rxjs to the list yields:

import {Observable} from 'rxjs/Observable'

WebStorm skips the rxjs module and imports the Observable submodule automatically for you!
Extra tip: Format input to use space inside curly brackets in Preferences | Editor | Code style | TypeScript – Spaces – Within – ES6 import/export braces.
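With that option enabled, WebStorm's reformat action turns the unspaced import above into the spaced form (shown here as a before/after comparison, not as compilable code):

```
// before
import {Observable} from 'rxjs/Observable';

// after reformatting with "Within ES6 import/export braces" enabled
import { Observable } from 'rxjs/Observable';
```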
Tip 7: Use Live Templates
When you find yourself writing certain patterns of code repeatedly, create a Live Template to quickly scaffold the block of code. WebStorm already comes with some predefined Live Templates that you may modify to fit your development style.
To create a Live Template, navigate to:
[macOS] WebStorm | Preferences | Editor | Live Templates
[Windows / Linux] File | Settings | Editor | Live Templates
You’ll see that WebStorm has already bundled the predefined Templates into categories. I created a category to bundle my ngrx Live Templates by clicking on the + sign and choosing "Template Group". I then created a new Live Template within it by clicking on the + sign again, but choosing "Live Template" this time.
Let me walk you briefly through the elements that make a Live Template a productivity success:
Abbreviation: The shortcut you’ll type into the Editor to invoke your template.
Description: Tells you what the template does when invoked.
Template text: This is the code fragment to be scaffolded upon invocation. Take advantage of the powerful Live Template Variables that allow you to replace them with your desired text upon scaffolding.
Context: Choose in which language or pieces of code WebStorm should be sensitive to the Template.
Options: Define which key will allow you to expand the template and reformat it, according to the style settings defined on WebStorm | Preferences | Editor | Code Style.
You are ready to try out your template. Open a file that honors the context you defined and type your shortcut, press the defined expansion key and watch your template appear for you! If you defined any variables, the cursor will be placed where the first variable should be entered. If there are other variables defined, you can use tab to navigate to them – no need to click.
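To make this concrete, here is a hypothetical Live Template for scaffolding a bare Angular component; the abbreviation (a-comp), the $SELECTOR$/$TEMPLATE$/$NAME$ variables and the class naming are all invented for illustration:

```
import { Component } from '@angular/core';

@Component({
  selector: '$SELECTOR$',
  template: `$TEMPLATE$`
})
export class $NAME$Component {
  $END$
}
```

Typing a-comp in a TypeScript file and pressing the expansion key would scaffold this block, with the cursor jumping from $SELECTOR$ to $TEMPLATE$ to $NAME$ and finishing at the predefined $END$ marker.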
Tip 8: Running Tests
WebStorm is an excellent testing tool. You can run a variety of JavaScript tests right from the IDE, as long as you have the Node.js runtime environment installed on your computer and the NodeJS plugin enabled. Here are some productivity tips when running tests.
External DLL Access
Origin C can make calls to functions ( C linkage only ) in external DLLs created by C, C++, C++ (.NET), C# or Fortran compilers. To do this, you need to provide the prototype of a function in a header file and tell Origin C which DLL file contains the function body. Assume the functions are declared in a header file named myFunc.h. You should include this file in your Origin C file where you want to call those functions, like:
#include <myFunc.h> //in the \OriginC\System folder
#include "myFunc.h" //in the same folder as your Origin C code
#include "C:\myFile.h" //in specified path
Then you should tell Origin C where to link the function body, and you must include the following Origin C pragma directive in the header file myFunc.h, just before your external DLL function declarations. Assume your DLL file is UserFunc.dll:
#pragma dll(UserFunc) //in the Origin exe folder
#pragma dll(C:\UserFunc) //in specified path
#pragma dll(UserFunc, header) //in the same folder as this .h file
#pragma dll(UserFunc, system) //in the Windows system folder
The Origin C compiler supports three calling conventions: __cdecl(default), __stdcall and __fastcall. These calling conventions determine the order in which arguments are passed to the stack as well as whether the calling function or the called external function cleans the arguments from the stack.
Notes: you don't need to include the .dll extension in the file name. And all function declarations after the pragma directive will be considered external and from the specified DLL. This assumption is made until a second #pragma dll(filename) directive appears, or the end of the file is reached.
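Putting the pieces together, a minimal myFunc.h might look like the sketch below; the DLL name UserFunc and the function prototypes are placeholders for whatever your DLL actually exports:

```
// myFunc.h - sketch of a header for functions exported by UserFunc.dll
#pragma dll(UserFunc)   // DLL sits in the Origin exe folder

// prototypes must match the C-linkage functions in the DLL
int myAdd(int a, int b);
double myScale(double x, double factor);
```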
To make sure the external DLL works correctly, use a 32-bit DLL with the 32-bit version of Origin and a 64-bit DLL with the 64-bit version. #ifdef _OWIN64 is used to detect which version (32-bit or 64-bit) of Origin is currently running, and so determine which version of the DLL to load. For example,
#ifdef _OWIN64
#pragma dll(UserFunc_64, header)
#else
#pragma dll(UserFunc, header)
#endif //_OWIN64
A good and complete example of how to access an external DLL is Accessing SQLite Database. There are other Origin sample projects demonstrating how to call a function from a C dll, a Matlab dll or a Fortran dll in Origin C. These examples can be found in this zip file, under the \Programming Guide\Calling Fortran, \Programming Guide\Calling MATLAB DLL and \Programming Guide\Calling C DLL subfolders. | http://cloud.originlab.com/doc/OriginC/guide/Calling-Third-Party-DLL-Functions | CC-MAIN-2020-16 | refinedweb | 427 | 61.67 |
Full Disclosure
mailing list archives
On Wed, Mar 03, 2004 at 16:25:20 +0200,
Georgi Guninski <guninski () guninski com> wrote:
Georgi Guninski security advisory #67, 2004
Buffer overflow in qmail-qmtpd, yet still qmail much better than windows
Systems affected:
tested on qmail 1.03 on linux
Risk: Low - not in default install and i can't exploit it
RELAYCLIENT needs to be set by a trusted user in the first place, so if
you are getting bad values for RELAYCLIENT you have other problems.
Date: 3 March 2004

There is a buffer overflow in qmail-qmtpd.c if the env. var. RELAYCLIENT
is between 4 and 1003 characters long. A static buffer gets overflowed due
to integer overflow. I can't exploit it on linux, though it may turn
exploitable.
Details:
Basically the idea is it is possible getlen() to return (unsigned long)-1
or -4 and then the check
if (len + relayclientlen >= 1000)
passes though len == (unsigned)-4.
Then len is used to copy in a static buffer.
The check for len is *before* len is updated, so it is possible to
update len and then return len.
A lot of memory gets overwritten including qq. if a C compiler sets ssin
after buf, i believe it will be exploitable.
The trick is that
len = 10 * len + (ch - '0');
can return -1 if len == 0 and ch == '/'
How to reproduce:
--------------------------------------------------
[joro () sivokote tmp]$ ./qma-qmtpd.pl
qmail-qmtpd buffer overflow. Copyright Georgi Guninski
Cannot be used in vulnerability databases and similar stuff
<in another terminal>
ps awx
2080 pts/9 S 0:00 /var/qmail/bin/qmail-qmtpd
gdb attach 2080
cont
<in first terminal hit enter>
Program received signal SIGSEGV, Segmentation fault.
0x0804b096 in alarm ()
--------------------------------------------------
-qma-qmtpd.pl----
#!/usr/bin/perl -w
#similar stuff
use IO::Socket;
use IO::Poll;
$ENV{"RELAYCLIENT"}="M\$UX";
open(SOCK,"|/var/qmail/bin/qmail-qmtpd");
my $req;
my $fromaddr="they\ () m\$ weenies";
my $touser="postmaster";
print "qmail-qmtpd buffer overflow. Copyright Georgi Guninski\nCannot be used in vulnerability databases and similar stuff\n";
$req = "1:\n,";
$req .= "1:V,";
$req .= "/:"; #biglen - this is how we code '-1'
$req .= ",:"; #len - this is how we code '-4'
print SOCK $req;
my $ch=getc();
$req = "v" x 100000;
print SOCK $req;
close SOCK;
-----------------
Fix:
Patch by me, use at your own risk:
-patch-----
--- ../qmail-1.03/qmail-qmtpd.c 1998-06-15 13:53:16.000000000 +0300
+++ qmail-qmtpd.c 2004-02-29 16:15:13.000000000 +0200
@@ -45,8 +45,8 @@
for (;;) {
substdio_get(&ssin,&ch,1);
if (ch == ':') return len;
- if (len > 200000000) resources();
len = 10 * len + (ch - '0');
+ if (len > 200000000 || ch < '0' || ch > '9') resources();
}
}
@@ -193,8 +193,8 @@
substdio_get(&ssin,&ch,1);
--biglen;
if (ch == ':') break;
- if (len > 200000000) resources();
len = 10 * len + (ch - '0');
+ if (len > 200000000 || ch < '0' || ch > '9') resources();
}
if (len >= biglen) badproto();
if (len + relayclientlen >= 1000) {
-----------
Vendor status:
djb is aware of the bug
Georgi Guninski
_______________________________________________
Full-Disclosure - We believe in it.
Charter:
Wikiversity:Colloquium/archives/May 2007
Contents
- 1 Regarding grants...
- 2 Slogan contest complete - Motto contest continues to May 12
- 3 YouTube for educational video
- 4 notification for changes to individual Wikiversity pages
- 5 Tending the garden...
- 6 Wikimedia Foundation mission and neutral educational content
- 7 Interest Group
- 8 m:RFCU?
- 9 ...
- 10 Fix the motto?
- 11 Brainstorming Meeting for French
- 12 Issues with categories...
- 13 The javascript of NavFrame (?)
- 14 Changes to the main page
- 15 Minor interface change...
- 16 "Wikiversity Junior" (or some other name)
- 17 Motto and slogan contests: discussion of outcomes
Regarding grants...
I was thinking a bit about grants; I believe we have had a bit of discussion at Wikiversity about grants. It dawned on me that perhaps one way Wikiversity could utilize grants is as bounties (see: Wikipedia:Bounty_board).
A collaborative grant could be written. If the best stable version of the grant was approved and funded, then a trusted member of the community who could actually be held responsible for the funds (identity verification?) could be responsible for paying out fulfilled bounties related to whatever project was allocated the grant money. I'm not familiar with how grant funds are actually allocated from a logistical POV, so details would need to be worked out. This perhaps seems to be a realistic possibility presuming it can be kept within the scope of what Wikiversity is. --Remi 07:02, 1 May 2007 (UTC)
- There are a few Wikiversity pages related to grants (see Category:Grants) and there is a list of possible funding sources at Wikiversity NPC#External links. I'm not sure who is in charge of grants at the Wikimedia Foundation level. Danny resigned from his position. --JWSchmidt 14:07, 1 May 2007 (UTC)
Slogan contest complete - Motto contest continues to May 12
Please help us finish selecting the final motto: Wikiversity:Motto_contest. It is suggested that the Motto contest end on May 12, 2007.
The winning slogan is "set learning free".
Summary of final round of slogan contest: The Wikiversity slogan is a phrase for listing at top of main page with "Welcome to Wikiversity". "Set learning free" had 68% of the positive support statements in round 5. There were 30 support statements for "set learning free". "Knowledge is free" had 4 in support and "Because knowledge should be free" had 10 in support. The latter 2 options had 3 comments each against use. "Set learning free" had no statements against. As for the wording of "set learning free" vs "learning set free," a few more people preferred "set learning free".
Archive of discussions: Motto_contest/Round_6
Thank you for your participation! --Reswik 14:31, 2 May 2007 (UTC)
- Hello, I am a custodian on the French Wikiversity and I wonder what we should do of the slogan Set learning free. Are we supposed to translate it by ourselves and use it or should we ask for a translation on beta or meta ? Thanks for your help. Julien1311 talk 17:01, 3 May 2007 (UTC)
- Julien, The choice of the slogan for the English wikiversity involves, mostly, considerations in the English language. The choice of the slogan for the French wikiversite should involve considerations in the French language. Beta is a good place for coordinating this effort by various wikiversities, if we choose to do so. Hillgentleman|Talk 02:23, 4 May 2007 (UTC)
YouTube for educational video
Just an idea :) -- Taeke 07:34, 4 May 2007 (UTC)
- It's a good idea, too imo. Don't forget about the Internet audio and video department and WikiU Film School for those who want to get into video production and a YouTube learning project. -- CQ 14:42, 4 May 2007 (UTC)
- This sounds familiar --Rayc 23:53, 4 May 2007 (UTC)
notification for changes to individual Wikiversity pages
I'm not sure how long this feature has been available, but I just found out about it. Web feeds are now available for individual Wikiversity pages by clicking on the "history" tab for a page. See Web syndication#History pages. --JWSchmidt 14:42, 4 May 2007 (UTC)
Tending the garden...
This past week, McCormack has been instrumental in mopping up vandalism on the front page. I put cascading protection on it today. I'm wondering whether we should keep it there, or lift it later? I'm concerned that leaving the page protected might send a messages that this is not an open wiki...but how to best balance this against sporadic vandalism? --HappyCamper 20:49, 5 May 2007 (UTC)
- The same guy has been coming back in different guises as a differently registered user for some time now. He's a regular, with a profile. We have no choice but to fully protect the main page and all of its templates. With reaction times by admins at 20-30 minutes, and a potential audience of children, to do anything other than a full protect of vulnerable pages would be irresponsible on our part. Sad, but we must act. McCormack 20:56, 5 May 2007 (UTC)
- Not to mention unprofessional, and a host of other things. Here's my thinking...as much as I don't want to protect the front page, I think keeping the front page protected is the sensible compromise. After all, we want to attract good edits, not bad edits! On another note, I think custodians should put those pages related to the front page on their watchlists so that requests for editing can be fulfilled quickly. (BTW: I think cascading protection seems to be working now, although the pages don't seem to indicate this properly). --HappyCamper 21:52, 5 May 2007 (UTC)
- Cascading protection stopped working a while back, so I started doing direct semi-protection for all the individual main page content pages. If cascading protection is working again now, that is good. Probably the best way to get the attention of a custodian is to come to the IRC channel #wikiversity-en. There are often several custodians "in there" who will hear their computers say "admin" if you type "admin" in the channel. If that does not work, you can go to #wikimedia-stewards and get help from a steward. Also, it might be useful to start using CheckUser to help deal with serial vandals. --JWSchmidt 22:23, 5 May 2007 (UTC)
Wikimedia Foundation mission and neutral educational content
The most recent WikipediaWeekly episode mentions changes in the Wikimedia Foundation Mission statement (see Resolution:Mission and Vision statement). Last year a draft version of the mission statement said "The mission of the Wikimedia Foundation is to empower people around the world to collect and develop knowledge under a free license, and to disseminate it effectively and globally." A version currently being discussed says, "The mission of the Wikimedia Foundation is to empower and engage people around the world to collect and develop neutral educational content under a free content license or in the public domain, and to disseminate it effectively and globally." On WikipediaWeekly , the word "neutral" was linked to one of the original Wikipedia policies, the policy for neutral point of view. Wikiversity was mentioned as a Wikimedia Foundation project that is not locked into the traditional Neutral point of view (NPOV) policy that is used at other projects such as Wikipedia. In particular, we have a Wikiversity:Disclosures policy which attempts to provide "wiggle room" for Wikiversity participants so that some Wikiversity content would be allowed to depart from a neutral point of view. Does "neutral educational content" imply content that is governed by the traditional NPOV policy? If the Wikimedia Foundation does adopt a mission statement calling for "neutral educational content" would that be acceptable to the Wikiversity community? The "Disclosures" policy calls for Wikiversity to strive for neutrality but attempts to allow for academic freedom and scholarly explorations that might need to go outside of the confines of NPOV. --JWSchmidt 22:24, 6 May 2007 (UTC)
- I don't want to ignore this question, but my hunch is that at best we will have to wait for the ambiguity to clear up before we can figure out what the implications for Wikiversity are... --HappyCamper 19:41, 7 May 2007 (UTC)
- There are a number of spots that "Neutral Point of View" doesn't really make sense, and one policy does not fit all projects. In education and research, you are tackling different points of view on a subject, and sometimes they might not even be your own view. NPOV makes sense when you are writing an Encyclopedia, but not everything is an Encyclopedia. Historybuff 00:44, 8 May 2007 (UTC)
- I think the concept of neutrality is very problematic - however, it has been used to practical effect in Wikipedia. But, even in Wikipedia, I think that NPOV is usefully conceptualised as the means rather than the end - the end being quality. This leads me to believe that, here on Wikiversity, we should think about the quality of an educational experience, and to do as best we can to facilitate that experience. I think that following the spirit of NPOV would be to encourage people to think broadly about a topic - to engage with differing points of view in order to come to a deeper understanding of each point of view, the context of each point of view, and to sharpen their own point(s) of view. If we could encourage this process, this would, I believe, be completely within the NPOV frame of reference. I fully believe in the place for academic freedom, to explore a point of view on its own terms, and the potential this gives for rich educational experiences - but I feel that we should be going one better than emulating a system where academic traditions simply don't speak to each other. I think a broad-minded, collaborative model is a better fit for a project like Wikiversity. I think the Disclosures policy is a good start for Wikiversity to begin to really understand how knowledge is produced and communicated, and that as long as we can show differing points of view in the context of a debate, then we will be building on the good work started by Wikipedia. And, just to respond to HappyCamper's point - this "ambiguity" is still in negotiation - this is a Wikimedia-wide debate, and I would encourage Wikiversity participants to join these discussions, via mailing lists and/or wikis, to help construct a sustainable vision that will work for every project, and all together. Cormaggio talk 12:47, 8 May 2007 (UTC)
Interest Group
I'd like to propose a Category:Internal Medicine Interest Group. I'm not sure if this is the place to propose this, but it would be helpful so that people of similar interests can find each other. Any suggestions? PalMD 17:41, 7 May 2007 (UTC)
- We have Wikiversity:Project requests and Wikiversity:Matchmaking board, but traffic on these pages is quite low. So far, it seems that participants just click on recent changes, and if they see something interesting, they'll jump in and help out. This also means that the more edited pages tend to get more attention. --HappyCamper 19:34, 7 May 2007 (UTC)
- Perhaps my understanding of the current way Wikiversity works is a bit askew. However, as I understand it, this is not like Wikia, where if you want to start a wiki, you need to propose it and get it approved. While organization and consensus are great, if you see something at Wikiversity that you feel or perceive is lacking, you are free to just go ahead and create it yourself. Be bold.
- Wikiversity:Scope might be a useful page. --Remi 08:24, 9 May 2007 (UTC)
m:RFCU?
If the community is not yet ready for checkusers, perhaps we may request it on meta.--Hillgentleman|Talk 10:49, 10 May 2007 (UTC)
- There was a discussion of this on IRC the other day. Yes, we want a checkuser. No, a suitable volunteer can't be found. There is some opinion that it shouldn't be a bureaucrat, because power shouldn't become too centralised. It probably also needs to be someone who actually understands IP addresses. McCormack 04:21, 11 May 2007 (UTC)
- I suggest User:SB_Johnny who has been doing CheckUser work at Wikibooks. See also: Wikiversity:CheckUser policy. --JWSchmidt 04:28, 11 May 2007 (UTC)
...
I'm trying to put
* '''{{subst:CURRENTMONTHNAME}} {{subst:CURRENTDAY}}, {{subst:CURRENTYEAR}}''' - Department founded!
within the Template:Original department boilerplate so that when someone uses the template, they won't have to input the date. The ways I have tried it, it does not display properly - it statically substitutes the date in the template, not dynamically. Is there a way to do this so it is dynamic? --Remi 07:43, 12 May 2007 (UTC)
- {{subst:CURRENTMONTHNAME}}, or October. See m:help:substitution, and {{welcome}} for an example. On meta it is called delayed substitution, or includeonly magic.Hillgentleman|Talk 08:53, 12 May 2007 (UTC)
- For a cut-and-paste version, you would need to use {{subst}}: {{Subst:subst}}CURRENTMONTHNAME}}, saving as May. Hillgentleman|Talk 09:06, 12 May 2007 (UTC)
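The "delayed substitution" trick described above can be sketched roughly as follows (a minimal example based on the includeonly pattern documented at m:Help:Substitution; the exact boilerplate wording is taken from the question above, and the behaviour assumed is the standard MediaWiki one):

```wikitext
<!-- Inside Template:Original department boilerplate: hide "subst:" in
     <includeonly> tags so it is NOT expanded when the template page
     itself is saved, only when the template is substituted elsewhere. -->
* '''{{<includeonly>subst:</includeonly>CURRENTMONTHNAME}} {{<includeonly>subst:</includeonly>CURRENTDAY}}, {{<includeonly>subst:</includeonly>CURRENTYEAR}}''' - Department founded!
```

When someone then creates a department page with `{{subst:Original department boilerplate}}`, the `<includeonly>` tags drop away, exposing `{{subst:CURRENTMONTHNAME}}` and friends, so the date gets fixed at the time the boilerplate is used rather than being substituted statically into the template itself.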
Fix the motto?
Hi,
On your top page, I couldn't help noticing the motto right under "WIKIVERSITY":
Welcome to Wikiversity,
set learning free.
That's a run-on sentence, dudes. Instead, may I suggest?:
Welcome to Wikiversity.
Set learning free.
Cheers, Andy (Vancouver, Canada)
- I agree. Also possible are "to set learning free", setting learning free...: ) --Hillgentleman|Talk 10:49, 10 May 2007 (UTC)
- Some stylised corporate mission-statements are shown in all-lower-case. It's a design-thing rather than a grammatical thing.
Please see a suggestion for further discussion of this issue below: Wikiversity:Colloquium#Motto_and_slogan_contests:_discussion_of_outcomes. Thanks, --Reswik 16:02, 12 May 2007 (UTC)
Brainstorming Meeting for French
Could someone explain to me how the whole chat thing works so that I can set up a brainstorming meeting for the French Department? Elatanatari 17:57, 6 May 2007 (UTC)
- "the whole chat thing" <-- Do you want to use internet relay chat (text) or voice? --JWSchmidt 22:30, 6 May 2007 (UTC)
text Elatanatari 23:52, 7 May 2007 (UTC)
- Sorry Elatanatari, have you been able to figure this out yet? Does Wikiversity:Chat or the Wikipedia IRC article help at all? (I think we definitely need a better local page about how to use IRC than simply Wikiversity:Chat.) Elatanatari, what computer system are you using - Mac, PC, Linux..? Can you download software to your computer? Do you use Firefox? Cormaggio talk 13:49, 11 May 2007 (UTC)
- I figured it out, Thanks! Elatanatari 20:02, 13 May 2007 (UTC)
- Did this get scheduled? Did I miss it? Historybuff 14:11, 24 May 2007 (UTC)
Issues with categories...
It is my understanding that the standards for naming are that categories should be named like "Category:Emergency medicine" and not "Category:Emergency Medicine". This has the benefit of being more compatible with Wikipedia and probably other Wikimedia wikis. On the other hand, one could make the argument that a course name is a proper noun, and therefore if one wants to categorize pages according to what course they are part of, the category should be capitalized. That is, on a syllabus, you would see "Research Methods in Psychology" at the top; according to my recollection, you would not see "Research methods in psychology" as the syllabus title. There is the option of having both categories on an applicable page, too. It seems to me that there is a great deal of information and knowledge here at Wikiversity; it can just be difficult to see because of the way the information is organized. Luckily, MediaWiki is astonishingly flexible. --Remi 05:22, 11 May 2007 (UTC)
- IMHO, if we mix course names with categories, this is a hierarchical error. It dilutes the critical category system. There are other, and better ways, for holding together pages within a project. Internal project navigation should use infoboxes or navbars placed at the top of a project. BTW, I think your re-organisational efforts are great! McCormack 08:06, 11 May 2007 (UTC)
- "if we mix course names with categories, this is a hierarchical error" <-- There is nothing wrong with having a category for all of the pages in a course. A single course could have hundreds of pages and categories are a fundamental tool for organizing related pages. "project navigation should use infoboxes or navbars" <-- this is true, and we really need to make an effort to also use these tools to organize Wikiversity projects. So far we have just a few "navigation templates". There are many good examples of navigation templates at Wikipedia. See also w:Wikipedia:Navigational templates. We really need to start being more sophisticated about our page navigation tools here at Wikiversity. --JWSchmidt 13:43, 11 May 2007 (UTC)
- I'm also wondering about category names and capitalization and agree about the sophistication aspect. Wikiversity:Maintenance now contains links to Wikiversity:Categories and Wikiversity:Templates as tasks and taskforces. Some of the issues mentioned here are being covered. -- CQ 15:04, 24 May 2007 (UTC)
Is it possible? (see: mediawiki:common.js, mediawiki:common.css)
- To make the links in the table of content for collapsed section operational?
- a switch for "opening all navframes on a page"?
- a switch for "opening all subframes of a frame"?
-Hillgentleman|Talk 14:50, 26 May 2007 (UTC)
- Additional discussion at Talk:Threaded discussions with NavFrames. --JWSchmidt 16:25, 26 May 2007 (UTC)
Changes to the main page
For the first two months after the launch of this website there was a major effort to import the existing pages that had been created by the Wikiversity community at Wikibooks (page imports). By December, most of those imported pages had been fit into the new namespace structure of this website. The old hierarchy of Wikibooks "school" and "department" content development projects was integrated into the new school and topic namespaces. At that time, I thought Wikiversity was ready to begin making the Main Page more useful to new participants and I started Wikiversity:Main page design changes. For the next four months there was not much interest from the community in changing the Main Page so I moved on to other tasks such as creating major portals with featured content. Finally, this month there seems to be an increase in interest in the task of making Wikiversity more welcoming to new visitors, a task which logically involves asking exactly what the Main Page should accomplish. Three specific questions are:
- Should the main page be designed to help new visitors understand Wikiversity?
- Is the Main Page's bias towards college-level content preventing people from participating if they are interested in non-college-oriented learning?
- How should we fit the Wikiversity slogan onto the main page?
Please participate at Wikiversity:Main page design changes if you want to help plan improvements for the Main Page. --JWSchmidt 15:39, 22 May 2007 (UTC)
My answers, in order, would be:
- No.
- Question needs reformulating.
- Not important.
To flesh out:
- I'm familiar with your concept of everything as a learning project, but assuming WV is a learning project, then a disproportionate amount of the main page is given over to this single project. The Wikipedia way is to use the main page mostly for accessing a wide variety of content and to rotate these content access points. I think Wikiversity needs to follow this familiar model. The dominant "help/learn about WV" should be reduced to a sidebar or lower panel, or a separate portal. Having seen how much time and care you've personally put into WV as a learning project, I can appreciate this might be a difficult time for trying on new glasses.
- There's little level bias, because there's little room for any bias, given the dominance of the WV learning project (which is neither tertiary nor pre-tertiary). On the other hand, as I glance down the page, I see terms constantly cropping up which have a clear tertiary bias (e.g. names of faculties, mentions of research). I'd say there is an absence of obviously pre-tertiary openness.
- As I said, I think 3 is not important.
As regards the pre-tertiary portal, I was vaguely thinking of giving a few weeks for comment to collect and settle, and then give it a shot, in the light of comments given. I haven't forgotten it. -- McCormack 04:04, 24 May 2007 (UTC)
- "Wikiversity needs to follow this familiar model [Wikipedia's main page]" <-- Why? Wikipedia's main page design serves a much more mature wiki project where most visitors are looking for existing content. At the start of any wiki project the main need of the wiki community is totally different. The main need for a new project is to explain the project to new visitors and attract new participants (the editors who will create the content). The sidebar is a convenient tool for experienced wiki participants who know from past experience what the cryptic names of the links mean. It is an error to expect new visitors to puzzle out tersely worded links in the sidebar. I don't understand the comment about new glasses and "difficult time". As an explicitly and fundamentally education-oriented project, Wikiversity should lead the way among Wikimedia Foundation projects in finding ways to help people learn about how to participate in wiki communities in general and Wikiversity in particular. Maybe five years from now, when Wikiversity has lots of learning resources and we have taught the world how to learn by editing wikis, then it will be time to adopt a Wikipedia-style main page. This is not the time for Wikiversity to pretend to be Wikipedia. "There's little level bias" <-- We could do a test and have a group of pre-college educators look at the Main Page. I don't have any doubt what they would say. In my mind, the deeper question is, even if the Main Page and the Browse page were less biased towards university-level content, would more pre-college educators and learners participate at Wikiversity? Nobody said that #3 is particularly important, it is just a related issue that has arisen at this time. We need to decide if we want to use the tagline feature on the Main Page. --JWSchmidt 05:32, 24 May 2007 (UTC)
- I largely agree with you, JWS. The dilemma over WV's current stage of development (is it a learning project or does it provide content?) is a difficult one to resolve - particularly when "it will be time to adopt a Wikipedia-style main page". I think there is some international divergence of practice between the 4 wikiversities on this point. The work you have put into WV as a learning project is impressive. "would more pre-college educators and learners participate at Wikiversity?" <-- we won't know until we try ;-) -- McCormack 07:00, 24 May 2007 (UTC)
- There are five now, McCormack - Italiano Wikiversità has joined the list. It might be a good idea to compare all five Main Pages and "homogenize" some of the layout (but certainly not in a 'cookie-cutter' fashion). I'll add Wikiversity:Main page design changes to my watchlist. -- CQ 22:14, 28 May 2007 (UTC)
Minor interface change...
The main namespace pages currently use "article" on the upper left tab, and I was wondering if it wouldn't be better to use either "content" or (following wikibooks) "module" instead. Our content really isn't organized as articles. Any thoughts?--SB_Johnny | talk 17:56, 27 May 2007 (UTC)
- I think "resource" would be good... it might even help distinguish Wikiversity from Wikibooks a bit... kind-of resolving some of the perceived "ambiguity of purpose" when thinking of the relationship between the two projects. Also, these pages have already been identified as "learning resources". CQ 02:18, 28 May 2007 (UTC)
- I quite like "resource" in this case (in general i hate the term on web pages) because it can denote a sort of piece of content or 'learning object' or also a resource for stimulating learning, like a learning project. Countrymike 02:50, 28 May 2007 (UTC)
- Lesson? Hillgentleman|Talk 02:20, 28 May 2007 (UTC)
- The name used needs to be very general. We have research project pages that I would not want to have being called "lessons". Wikiversity main namespace content includes many types of learning resources. "learning object" might be as general as "learning resource", but I prefer "learning resource" ("resource" for short). --JWSchmidt 02:45, 28 May 2007 (UTC)
"Wikiversity Junior" (or some other name)
There has been a discussion on IRC about a K-12 portal. Two questions arise: (1) is it a good idea? (probably), and (2) if so, what do we call it (much more difficult). Naming suggestions so far are listed below. Any reaction to either question is welcome. McCormack 05:29, 18 May 2007 (UTC)
- K-12 Portal
- Wikiversity Junior
- Wikischool
- Wikiversity:Wikimentary
- Portal:Elementary and Secondary Educators
- School:Early Learning
- It's a great idea in general. I kinda prefer "K-12 portal" myself - however "K-12" isn't a universally recognised term, and naming is going to be inherently problematic. Are "Primary" and "Secondary" more widely understood perhaps? Whatever we do, we can explain this on a Portal:Teachers page (and others), and set up a few redirects from commonly-used words/phrases. (I also then think of people who don't fit into traditional schooling patterns, the best example of whom would be homeschoolers - were Ben or Ariannah Armstrong in on that conversation by any chance?) Cormaggio talk 11:17, 18 May 2007 (UTC)
- No, they weren't in on it. But I think it would be great if we schedule a chat with a wider range of people, including those with an interest in the area. I think "primary" isn't a great word as some systems prefer "elementary". One needs an all-embracing term for what Americans very efficiently call "K-12" and what everyone else very sanely calls "school" (school being far too synonymous with tertiary education in the US). McCormack 11:22, 18 May 2007 (UTC)
- It would be prudent to try to avoid anything which may look like an artificial boundary. Perhaps portal:wikiversity for adolescents, portal:wikiversity for {{subst:what is the technical term for children under 12...? primary school pupils? }}. Make it clear that they are no different from anyone else in the community, but the portals might be a place for them to collect useful information. Hillgentleman|Talk 13:09, 18 May 2007 (UTC)
- Thanks for your opinion, Hillgentleman. Just to clear up a possible misunderstanding: "K-12" means "Kindergarten to 12th Class", so if you count Kindergarten as 1 year and the first class is called "1st class", that adds up to 13 years of school - i.e. perhaps age 5 to 18. (But I'm not American, so someone may need to correct me). "K-12" does not mean "up to age 12", which I think (??) is what you thought. The boundary we're talking about is the one between secondary and tertiary education, which in most countries is a huge one. Among other things, staff and pupils have very different timeloads, which affects the way in which they participate in wikis. McCormack 13:19, 18 May 2007 (UTC)
- Afterthought: it took me about a year (or more?) to work out the meaning of the term K-12! McCormack 13:30, 18 May 2007 (UTC)
- It's funny - I never used the term "K-12" until I came to the UK, where it's common parlance among the education department at my university. However, global education systems are so varied that it's going to be impossible to come up with something that's workable in all instances (see: w:Category:Education by country). "Wikiversity Junior" seems to be neutral enough - but it only seems appropriate for younger children (perhaps under 9, or so). I like the idea of scheduling a chat - but it is probably a good idea to brainstorm here or on a specific wiki-page first so that the chat can become most productive. How about Wikiversity:Educational stages? And on a slight tangent to this, I also want to delineate educational stages from learner levels - for example, there are many adults who cannot read, and for whom the category "primary" would be inappropriate - I think we have to keep in mind generic terms like "beginner", "intermediate" and "advanced" in parallel with considerations for existing, conventional schooling systems. Cormaggio talk 13:57, 18 May 2007 (UTC)
- I like K-12 - it is universally accepted - in my state :) We also use elementary, middle, and high school. Watch out, as there might be some curriculum for pre-schoolers and that isn't covered in K-12 (pre-K-12?). Harriska2 14:42, 18 May 2007 (UTC)
- Let's inject a bit of imagination here: What if Wikiversity could house a big education portal, which points to other portals specific to each educational system in the world? In these portals, people could find resources which would link and categorize similar pages? I think we should just make the pages as we see fit, and not be too concerned whether particular terminology would be inclusive or exclusive. If pages don't meet user needs, sooner or later some participant will create it. We have to actively encourage that though. --HappyCamper 15:12, 18 May 2007 (UTC)
I've created Wikiversity:Pre-tertiary portal to draw together thoughts and act as a basis for further discussion. McCormack 16:47, 18 May 2007 (UTC)
- I like HappyCamper's point - perhaps Portal:Educational systems and stages? Or should it be centralised at Portal:Education? Somewhat ironically, I think highlighting a specific block for pre-tertiary resources, while it's a great and necessary idea, suggests in a way that Wikiversity is by default about tertiary resources - which it's not. Cormaggio talk 10:55, 19 May 2007 (UTC)
- That's a good point you've got at the end there. I'm wondering about the meaning of "is" as in "is not about tertiary resources by default". is = by definition, in reality, in appearance ?? My line would be "is at risk of being perceived as...". I'm adding your name suggestions to the Wikiversity:Pre-tertiary portal. McCormack 11:04, 19 May 2007 (UTC)
I personally like the idea, and, as mentioned above, the splitting up of K-12 because of the range of ages it covers. I happen to go to a school that is K-12, so I happen to know what it means. Providing resources for students, such as advancing your kids (K-6) into higher education before the school takes them there so as to be prepared, up to university levels for 6-12. Of course Wikiversity is already set up for 6-12, but awareness isn't that high. <<My bit. --Topcount345 02:49, 22 May 2007 (UTC)
Interesting idea, but why have more servers for education? Why not have one and differentiate via namespaces or, better, categories? I think e.g. a French course for university students is the same as for high school students. We on Czech Wikiversity have all kinds of education on one server. There is normally a structure, and crucial names are disambiguations. E.g. French is a disambiguation, where you can find all courses at the level of pre-school teaching, primary school, secondary, university and whole-life studies, together with projects and French for native speakers. --Juan 14:03, 4 June 2007 (UTC)
- Does a content directory designed by and for colleges/universities needlessly cause pre-college educators to not participate at Wikiversity? Why not provide several high-level portals so that Wikiversity can be more welcoming for participants who are not working at the university level? Why force someone in early education to search through a bloated college catalog to find materials for introductory reading, writing and math? --JWSchmidt 14:15, 4 June 2007 (UTC)
Motto and slogan contests: discussion of outcomes
Today was the proposed end of round 6 of the motto contest. We need further discussion about implementation of the slogan and the meaning of the motto (in using "open" vs "free"). It is suggested that we take 10 more days to discuss this without implementing a round 7 (yet).
Feel free to add comments below and comments and support statements at Motto contest page.
I apologize for the detail crammed in the process talk in the next few paragraphs...
For the slogan, for which the selection process ended on May 2, we can consider a format correction or a grammar revision for "set learning free" to go with "welcome to wikiversity" at the top of the main page. Suggested options include a format adjustment (to separate the two sentences), "setting learning free," "to set learning free," "-- set learning free," and a variant, suggested on the last day or so of round 6, "where learning is free." (I like the last option but it is a bit of a departure and would need significant support I think for use.)
For the motto, we can consider if "open learning community" [with 65% supporting among top 2 options (22 of 34 commenting on those 2 options)], instead of "free learning community" (with 12 supporting), conveys the meaning of this wiki education forum as being both free and open. One person suggested today, the proposed last day of the selection process, that "open" does not convey the meaning of free, whereas others have commented that "open" does connote free and works nicely as a contrast with the use of "free" in the slogan. [Note: the 3rd motto option, which included "open" in "open education," only had 3 in support but 4 against.]
If, over another 10 days or so, until May 22nd, there is not much discussion here, on the Wikiversity_talk:Motto_contest page, or elsewhere in Wikiversity, as voluntary facilitator, I will likely propose this conclusion to this process:
- use "set learning free" with a format adjustment at the top of main page, so that there are clearly two sentences. We can amplify the meaning of this on the mission statement and other related pages.
- declare "(the) open learning community" the selected motto by virtue of a fairly strong majority in support with some reasonable arguments included. [Note: "The" was preferred over "an" by several folks. But, having no article may be necessary because of text length considerations in use with a logo.]
- my opinion on motto is "open education for open minds" --(The preceding unsigned comment was added by 219.64.124.218 (talk • contribs) 12:29, 13 May 2007.)
- That is a nice meaningful phrase but more people like the other ones. --Reswik 22:02, 14 May 2007 (UTC)
- So, just to be clear, it would say "Wikiversity - set learning free"? "Wikiversity - the open learning community"? --HappyCamper 10:05, 14 May 2007 (UTC)
In terms of placement:
- The motto "open learning community" would go under the Wikiversity logo on sister projects -- and wherever we might use the logo and wish to have the motto with it. An example is the first Wikiversity logo-name-motto design in the goals section of the motto contest page.
- The selected slogan "set learning free" goes under "Welcome to Wikiversity" at the top of the main page, typeset so that there are clearly two sentences or revised grammatically to fit in one sentence. And, like the motto, we can also use this phrase elsewhere, such as in the mission statement and on introductory pages. --Reswik 22:02, 14 May 2007 (UTC)
The "-versity problem"
I think "open learning community" is a fine motto and I hate to say anything at this late date, but there has recently been some more discussion of what might be called "the -versity problem". There have always been some people who are worried that the name "wikiversity" might make some people think that the Wikiversity project is not intended for pre-college learning resources. For over a year I have ignored these qualms because we have always placed "for all age groups" in a prominent place on the Wikiversity Main Page. However, the point has been made that many people probably see the link to Wikiversity from the Wikipedia Main Page and they might assume that Wikiversity is some kind of online university, not a place with learning resources for all ages. This leads to a question: should the description that we place under the link to Wikiversity from the Wikipedia Main Page make it clear that Wikiversity is for all ages? Currently the "description" that is used is, "Free learning materials and activities" (38 characters), which is rather long (for example, the Meta-wiki description, "Wikimedia project coordination" is only 30 characters long). I'm wondering if we could use a modification of the motto for links to Wikiversity from sister projects, maybe something like:
- open learning community for all ages (36 characters), or
- open learning for all ages (26 characters)
These modified versions of the motto would make clear to potential Wikiversity visitors that Wikiversity is for all ages. --JWSchmidt 00:10, 21 May 2007 (UTC)
- I take the point, but I still think the project's definition should be kept to two or three words on sister projects, and then made as clearly and prominently as possible on the (Wikiversity) Main Page. I think the two options you give there would be far better as slogans for the main page - I still think that "Wikiversity, set learning free" is a pretty awful slogan in terms of immediately welcoming someone to the project who wants to find out what the project is all about (in contrast to "Wikipedia, the free encyclopedia that anyone can edit", which is about as clear as you can get). I agree that the name "Wikiversity" is always going to cause confusion, but this means that the main page needs to be designed to include clear links for different people (which is why I think some sort of "pre-tertiary" portal, or set of portals, is a good idea), which in turn need to be designed to enable people to find resources/spaces of interest to them as quickly as possible. But back to the main point, I think the motto needs to be snappy in the extreme - and I think that in this context the word "open" is a good one - in the sense of indicating more than just a conventional view of a (tertiary) university, and including a global community of learners. Cormaggio talk 22:46, 21 May 2007 (UTC)
- I agree with Cormaggio. "(The) open learning community" is good for sister projects and for use with the WV logo. However, given the issues that JW raises, I agree that something like "open learning for all ages" would be a better slogan to go at the top of the main page. We could submit JW's options for community comment. Shall we do that? --Reswik 23:52, 23 May 2007 (UTC)
- We have ample opportunity to say that Wikiversity is for all ages once people reach this website. Many more people see the word "Wikiversity" in the form of a link from the sister projects and that is the place where we could productively say "learning for all ages" so people do not assume that Wikiversity is only a college-level project. If it has not been done already, you may also want to raise the issue of a Wikiversity tagline in the context of final discussions about the motto. --JWSchmidt 00:52, 24 May 2007 (UTC)
- For use in taglines and on other projects, I think "for all ages" is too long a phrase to tack onto the main point, which is "open learning". For taglines and use with the logo, I think we need to go with the motto selected by the community: "(the) open learning community." For amplifying our message through the slogan at the top of the main page, I think there is more room for editorial adjustments as the mission unfolds -- it is part of a wikipage (even if the mainpage), hence more fluid and it has more space. A motto, for WV purposes, needs to be brief and fairly stable, and "open learning community" fits the bill there. Based on comments in the contest, there is a lot of support for this motto (and even more support when including the "free learning community" variant). I think it entails a bit too much work to consider, at this time, a significant revision to both the selected motto and slogan. After seven months of the community working on this, I think the motto result is very usable.
- A bit of background: In round 5 of the contest, there were ties between the "set learning free" set of options (in both contests) and the motto selected (open learning community) and the slogan variants of "opening minds through open learning." An option in this process is to reconsider the reasoning at round 5 and consider if the argument for selecting "set learning free" for the slogan was valid. There was no easy way to break the tie. Since "set learning free" was tied in both contests it seemed reasonable to give it a place somewhere. The argument was that, as a motto, "set learning free" simply could not work -- it was not a description of WV. The same argument by negation could be applied to the slogan part of the contest. But this then leaves us with the "opening minds..." version from round 5 to reconsider -- which has continued to receive support, even in round 6. Any new options that speak to new concerns probably need to be discussed by the community in comparison with the "opening minds..." option.
- So, what to do? I think we may need a round 7 (or round 6B) for the slong contest: ask for input on "opening minds..." vs. "open learning for all ages." In a few days, I may propose that the slogan contest re-initiate with those options (with the phrase "set learning free" removed, due to criticism *and* new emergent concerns, and "set.." moved to some other explanitory role in WV hopefully). What do you think? --Reswik 03:08, 24 May 2007 (UTC)
- By my count "Open learning for all ages" is the same length as the Wikibooks phrase "Free textbooks and manuals" and shorter than the Meta-wiki phrase "Wikimedia project coordination". "Wikiversity, (the/an) open learning community" is a little redundant. A wiki is a community, so you do not have to include "community" if you are combining "Wikiversity" with the motto for a sister project link. I have no problem with "set learning free" as the slogan. The slogan does not have to be a description or definition of the Wikiversity project. I view "open learning for all ages" as a reasonable variant of the motto for when the motto is used in combination with the name of the project ("Wikiversity, open learning for all ages") as for links from sister projects. I think the Wikiversity community could at this time constructively examine the concern that people might interpret the name "wikiversity" as being biased towards college-level education and that the motto variant "Wikiversity, open learning for all ages" could function to correct this problem and lead to more people actually visiting the Main Page. --JWSchmidt 04:48, 24 May 2007 (UTC)
- JW, If you and some others wish to replace the results of the participation process with your preferences, then I guess you have the position to do that as a coordinator and common discussant here. Cormaggio thinks the slogan "set learning free" is awful (and I agree it ain't hot). You think the motto contest results don't work. (For what it's worth, it is a bit puzzling that you did not participate in the contest and now wish to replace its results.) I think it is a mistake for admins to overrule the general content of this process. But, I don't wish to debate this much - eithere we go with the results or not. If we aren't to use something closely resembling the end stage of the motto & slogan contest results ("open learning community" and "opening minds..." perhaps), my services are not needed as a voluntary facilitator of motto selection process. Btw, I think "for all ages" can't be true of a learning institution that is online -- there are various excluded age groups, at either extreme, and publics. So. If bypassing the contest results is going to be the case in whole or part, I'll leave this process in the hands of you and the other admins to discuss with whomever. I'll write up some sort of closure to the contest that points to discussions here, saying something like "due to criticisms and perceived problems with the contest, other options are being discussed... [link]." Take care, --Reswik 00:29, 25 May 2007 (UTC)
- I do not think you have to view my attempt to raise an issue for discussion as a "wish to replace the results of the participation process with your preferences". I've given voice to a concern that was first raised by others and I have not really voiced a preference; I've just tried to get a fair hearing for an issue. "You think the motto contest results don't work." <-- the first thing I said in this discussion was that I think "open learning community" is a fine motto. I've been lobbying for admittedly late discussion of an issue that as far as I can tell was not explicitly covered by the motto discussions so far. "puzzling that you did not participate in the contest and now wish to replace its results" <-- I've only tried to raise for community discussion an issue that I was recently sensitized to. I have no talent for creating names and mottos and I am satisfied to let others decide such things. I'm not sure it is constructive to adopt a narrow or literal reading of "for all ages" when the phrase is used in a motto. The Wikiversity project proposal used the phrase "all age groups" because while nobody ever seriously suggested that the project explicitly restrict its target audience to a particular age group, the name of the project can be read as suggesting a college-level project. --JWSchmidt 06:50, 25 May 2007 (UTC)
- JW, I am glad that you are satisfied the results of the contest. Yes, some criticisms have been offered on all the options. But large groups of people have expressed approval of various of the results with certain options being clearly more favored. Taking a step back: We needn't wrap up the contest now. In considering issues with "set learning free," it is fair to raise the question if the Round 5 judgement call (of selecting between tied options for the slogan) should be revisited. In relation to other late in the game suggestions, it is fair to have another round of this process -- for refinements, alternate wordings, revisions, etc. In closing out this round of the contest (which we seem ready for), I think some issues in motto & slogan use can be raised for further consideration. For example, an alternate wording that was mentioned at the very end of the slogan contest was "where learning is free." To me, that seems powerful and fluid, whereas "set learning free" is a bit clunky in wording and the imperative seems too strongly worded. Putting on a facilitator hat, the use of "for all ages" is a resonable ammendment to suggest. I will raise that issue along with "where learning is free" and the round 5 tie-break issue as possible points for further discussion before and during a possible round 7. WV is still evolving and so can this motto process. I wonder if a few weeks of informal discussion (feeling out these options in another string (with a succinct summary of issues) might be in order now? Not sure. Anyway, I'm going to think now about if, how and when to propose a round 7 for choice between the round 6 selections and new (or late in the game) revisions and suggestions. --Reswik 03:23, 26 May 2007 (UTC)
- I don't think this -versity is a significant problem at all. Content always speaks for itself, and collaborative content is driven by its end users. We just need to keep the door wide open so that a wide portfolio of educational materials can be put onto Wikiversity. The Wikiversity of today need not define the Wikiversity of tomorrow. The fact that there seems to be a bias towards "higher-education-like" materials now should not be surprising. Yes, it is as much influenced by the name "Wikiversity" as it is by the available pool of contributors. But what better group of users to lay the foundations for this site than people who are Wiki-savvy with high levels of education? Considering that we are debating and being concerned about this issue should give us some credence that we are on the right track. Wikipedia was not built overnight, and branding is subtle. The stated challenge is for us ("the community") to show to the world that "Wikiversity" is the place for online learning. It will be seen this way if this service is delivered, and if it means that it begins first with tertiary content, then so be it. I would not be surprised if we start seeing pages on important trade skills crop up later on as the word spreads. --HappyCamper 04:44, 26 May 2007 (UTC)
- HappyCamper, thanks for input. There are various important values and purposes that inform Wikiversity: learning & teaching, shared effort and ongoing creative collaboration (wikiness), service, freedom, openness, inquiry, various types of inclusion & diversity (age, gender, cultural), etc. (not necessarily in that order of importance). (See Wikiversity:What_is_Wikiversity?) It is hard to express all of these values in one short phrase. A place to express WV's values more fully is in the mission statement. (See Wikiversity:Mission.) Now would be a good time to revisit the mission statement. I think at least another week discussion of motto & slogan issues is in order -- and this could be done in the light of the mission statement and What is Wikipedia? page and the evolving project definition. Then, after some discussion, I continue to think a round 7 of the motto/slogan process would be good for refinements. --Reswik 15:01, 26 May 2007 (UTC)
- I would welcome a reinvigoration of the slogan contest - I took a look back at the point at which the "set learning free" variants were selected, and it was not so conclusive (even though there was significant subsequent approval), and I particularly appreciate Doug's openness in this regard. As for the motto, I like the idea of making it clear from, say, the Wikipedia front page that this is a project for all ages - it's just I don't particularly like the proposed clunky phrasing. Is there another way? Cormaggio talk 15:22, 26 May 2007 (UTC)
- "I don't think this -versity is a significant problem at all. Content always speaks for itself....We just need to keep the door wide open" <-- But is the "door wide open" if people see just the name "wikiversity" and assume that it is a college level project? If people assume "wikiversity" is not "for all ages" then they never even bother to come to the Main Page. I'm not sure how big this problem is, but do I know that if it is a problem, I am part of the problem....I have been willing to let our existing bias towards college-level material persist while telling myself that it is an issue that will resolve itself with time. If that true or will the bias and barriers to change just become more entrenched? Wikiversity content cannot "speak for itself" to people who never visit this website because the name of the project keeps them away. "the proposed clunky phrasing" <-- Part of the "-versity problem" may be that the educators who feel excluded from Wikiversity are not taking part in these discussions. Trying to deal with systemic bias is not easy. Right now all we have is people with the bias saying "I'm not bothered by this problem and the proposed solution does not please me".
--JWSchmidt 15:59, 26 May 2007 (UTC)
- Yikes! I hope you don't think I'm one of those people. They sound like people who shouldn't hang out here. It's important that we hammer things out, and I'm glad we're doing it now.
- "Is the door wide open if people assume..." - My emphasis on if. Perhaps there is a sampling bias that I have, but every time I have mentioned "Wikiversity" to someone in real life, the response I get does not suggest that they immediately associate the project with a college level educational project. Random people on the street:
- Hey, have you heard of Wikiversity before?
- Um...no...what's that? Sounds like Wikipedia. A place where you learn stuff instead?
- Yeah, stuff.
- Useful stuff?
- Well, potentially anything really.
- Conversations usually continue about what a Wiki is, how to edit, whether it's related to Wikipedia, et cetera. Now, to college educators:
- ...by the way, have you heard of Wikiversity before?
- Are you serious? First there's Wikipedia, and now there's Wikiversity? Is the free movement on the internet trying to take over education now?
- Even for people who know what "university" is, but not what "Wikipedia" is, will invoke the ideals of a university when I mention Wikiversity. These people's faces light up, and you can see their imagination taking over. They are not held back by what a brick and mortar university is at all! It's also interesting to see what happens when you ask people from different occupations what "Wikiversity" means to them. Bottom line? I don't believe people fundamentally hold the assumption that Wikiversity is closed off just for university content. I do think we need to experiment more with the main page. --HappyCamper 22:33, 26 May 2007 (UTC)
- It would be a simple bit of research to have some elementary school teachers look at the Main Page and then ask them if it seems welcoming to people who are interested in pre-college education. --JWSchmidt 14:51, 28 May 2007 (UTC)
- I'm not an elementary teacher yet but hope to be by the end of next year. No, the front page of wikiversity does not seem to welcome pre-college curriculum. I searched the page for elementary, k-12, and early and found nothing on the page. That is why I stayed away from wikiversity for almost 1 year until I discovered that my stuff didn't belong in wikibooks and I was referred here. Some other ideas for slogans: Open Learning, Free for All (5 words/29 letters&spaces) In addition to your current list:
Motto contest: continuing discussion
The introduction at the top of the motto contest page now reads:
- "The motto contest has been extended for more discussion.
- "A discussion string that summarizes issues is in the "Colloquium: Motto and slogan contests: discussion of outcomes" [note: this section]. Feel free to add comments there or to continue adding comments and support statements in the motto contest section."
Feel free to add points bellow, in the strings above, or on the Motto contest page. A new string in the colloquim summarizing the above will be created later. --Reswik 20:06, 26 May 2007 (UTC) | https://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/May_2007 | CC-MAIN-2019-43 | refinedweb | 9,336 | 60.24 |
hey guys i keep going off and on with my programming resulting in my forgeting how to do most everything
ok so the program im working on is supposed to display all the ascii characters and their value I do not want to show the values that do not contain an ascii character (7,8,9,10,13,32) and i remember how to check the index for multiple values in a switch() but i think i remember doing it all in an if() statement but i cant remember what the correct syntax is for that.
i was trying this
anyways ill post my code and if you can help me out that would be great Thanksanyways ill post my code and if you can help me out that would be great ThanksCode:if(1 == 7 || 8 || 9 || 10 || 13 || 32) i++
i also have another question, How would i get the same result the printf() statement has using cout?i also have another question, How would i get the same result the printf() statement has using cout?Code:#include <iostream> using namespace std; int main() { cout<<"Here we go!\n"; char var1 = 55; cout<<"\nvar1 is "<<var1<<endl; cout<<"the size of Var1 is: "<<sizeof(var1); for(unsigned char i =0; i<255; i++) { switch(i) //how to check i for multiple values in if statement { case 7: i++; case 8: i++; case 9: i++; case 10: i++; case 13: i++; case 32: i++; } printf("%d %c",i, i); // How to display same results in c++ !c cout<<endl; } system("pause"); }//end main | https://cboard.cprogramming.com/cplusplus-programming/83718-checking-multiple-values-if-statement.html | CC-MAIN-2017-30 | refinedweb | 264 | 56.05 |
Hi,
i just want to show the activity indicator, run my method and hide the activity indicator clicking a button. Should be very simple, shouldn't it?
But it does not Show the activity indicator. Need to use async/await, but don't know exactly how...
XAML:
<ActivityIndicator x:
.cs:
` private bool _isBusy;
public bool IsBusy
{
get { return _isBusy; }
set
{
_isBusy = value;
OnPropertyChanged();
}
}
private void Btn_Wurf_Clicked(object sender, EventArgs args) { IsBusy = true; //Do its work IsBusy =false; }`
Many developers recommend to use only commands instead of events. By this code must be more clear.
In your xaml:
<Button Text="Button" Command = {Binding LongRunningCommand} />
public class YourViewModel : ViewModel { public Command LongRunningCommand; public PaymentViewModel() : base() { this.LongRunningCommand = new Command(() => { Task.Factory.StartNew(() => { this.IsBusy = true; Task.Delay(1000); this.IsBusy = false; }); }); } }
Answers
..
Follow the link below.
You can do this way.-
Do you have the BindingContext of the page set to
this?
nope, thx. But I don't know really How... I'm really sorry.
XAML:
`
//rest of it`
It's not working.
In your page's constructor in the .xaml.cs file:
BindingContext = this;
You need to know about MVVM . After reading this article create your BaseViewModel (you will inherit other viewmodels from it in future):
Then create your page, somewhere in xaml put indicator:
<ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="{Binding IsBusy}" Color="Orange"/>
Don`t forget button
<Button Text="Button" Clicked="ButtonClicked"/>
In your page code do this:
Then your need to create inherited viewmodel from base:
Many developers recommend to use only commands instead of events. By this code must be more clear.
In your xaml:
<Button Text="Button" Command = {Binding LongRunningCommand} /> | https://forums.xamarin.com/discussion/138354/simple-activity-indicator | CC-MAIN-2019-09 | refinedweb | 273 | 51.55 |
Top Answers to Salesforce Interview Questions
Here are some of the top benefits of Salesforce CRM:
- Ensuring faster and better sales opportunity
- Deploying an analytical approach to customer acquisition
- Reducing cost and improving customer satisfaction
- Automation of repetitive and less important tasks
- Improved efficiency and enhanced communication on all fronts
Know more about why you should go for the Salesforce Certification Training? through this blog.
Simply put, custom objects are database tables in Salesforce. All the data related to an enterprise can be stored in Salesforce.com. There is a need for a junction object which is a custom object, and it has a Master–Detail relationship. You can create a master–detail relationship between two objects, and then connect a child object as a related list. Custom objects, which can be listed in Custom Settings, has a set of static data which is reusable.
These custom objects have to be defined first and then the following steps need to be followed:
- Join records with custom objects
- Custom object data are displayed in custom lists
- Create a custom tab for a custom object
- Build page layouts
- Create a dashboard and a report for analyzing the custom object
- Custom tab, app, and object can be shared
In Salesforce, you can link the standard and custom object records in a related list. It is done by the object relationship overview. Various types of relationships can be created in order to connect specific business cases with specific customers. It is possible to create a custom relationship on an object and define various relationship types.
Object relations in Salesforce can be of the following types:
- One to many
- Many to many
- Master–Detail
Now that you are aware of the benefits of Salesforce, for more detail check the Salesforce course.
An app in Salesforce is a container which contains name, logo, and a group of tabs that work as a unit to provide the functionality. Users can switch between apps using the Force.com app’s drop-down menu at the top-right corner of every page.
Some of the main benefits of Salesforce SaaS are:
- A pay-as-you-go model perfectly suites all customers
- No hassle of infrastructure management
- All applications are accessed via the Internet
- Easy integration between various applications
- Latest features are provided without any delay
- Guaranteed uptime and security
- Scalable performance for various operations
- Ability to access via mobile devices from anywhere
Learn more about Salesforce in this insightful Salesforce blog!
Salesforce is very meticulous when it comes to recording intricate details like sales numbers, customer details, customers served, repeat customers, etc. in order to create detailed reports, charts, and dashboards for keeping track of sales.
Workflow in Salesforce is basically a container or business logic engine which automates certain actions based on particular criteria. If the criteria is true, the actions get executed. When it is false, the record will get saved but no action will get executed..
Go through the Salesforce Course in London to get clear understanding of Salesforce.
A Master–Detail relationship is basically a Parent–Child relationship, in which ‘Master’ represents the Parent and other details represent the Child. If Parent is deleted then, Child also gets deleted. Roll-up summary fields can only be created on Master records which will calculate the SUM, AVG, and MIN of the Child records.
In a Master–Detail relationship, when a Master record is deleted, the Detail record also gets deleted, automatically.
In a Lookup relationship, the Child record will not be deleted, even if the Parent record is deleted.
Yes, you can have a roll-up summary in the case of a Master-Detail relationship. But not in the case of a lookup relationship. This is because a roll-up summary field is used to display a value in the Master record based on the values of a set of fields in the Detail record.
An sObject is any object which can be stored in the Force.com platform database. Apex allows the use of generic sObject abstract type to represent any object.
For example, ‘vehicle’ is a generic type and ‘car’ and ‘motorbike’ all are concrete types of ‘vehicle’.
Go through this Salesforce Tutorial to learn more about Salesforce end to end.
Triggers in Salesforce are called Apex Triggers. These are distinct and are available specifically for common and expected actions like lead conversions. It is just a code that is executed before or after a record is inserted or updated. A Trigger is different from a Workflow as it is a piece of code; whereas, a Workflow is an automated process and uses no code.
Trigger.new returns a list of records which has been added recently to sObjects. The records which are yet to be saved in the database are returned. Only insert and update triggers have the sObject list, and records can only be modified in before.trigger.
- System Administrator: Customization and administration of an application
- Standard User: Can edit, view, update, or delete one’s own record
- Read Only: Able to just view the records
- Solution Manager: Comes with the standard user permission but also can manage categories and published solutions
- Marketing User: you simplify the design, development, and deployment of cloud-based applications and websites. Salesforce Developers can work with Cloud Integrated Development Environment and deploy the applications on the Force.com servers.
These are described in detail on Salesforce community.
- Tabular report: In this, the grand total is displayed in a table format.
- Matrix report: This is an in-depth report wherein there is both row-based and column-based grouping.
- Summary report: Summary report is a report in which the grouping is on a column basis.
- Joined report: Joining of two or more reports into one creates a joined report.
A Salesforce dashboard can be seen as a visual and pictorial representation of a dashboard with the facility to add up to 20 reports.
Learn more about Salesforce in this Salesforce training in New York to get ahead in your career!
Various dashboard components are explained below:
- Chart: It is used for showing data graphically.
- Gauge: It is used for showing a single value within a range of custom values.
- Metric: This is used for displaying a single key–value. It is possible to click the empty text field next to the grand total and enter the metric label directly on components. All metrics placed above and below one another in the dashboard column would be displayed as a single component.
Various dashboard components are explained below:
- Table: The report data can be shown in column form using Table.
- Visualforce Page: It is used for creating a custom component or showing information not available in other component types.
- Custom S-component: This contains the content that is run or displayed in a browser like Excel file, ActiveX Control, Java applet, or custom HTML web form. tight integration with the database and also deploy auto-generated controllers for database objects. Developers can use Apex codes to write their own controllers. It is also possible to access AJAX components or create their own components.
Interested in learning Salesforce? Click here to learn more in this Salesforce Training in Sydney!
A static resource lets you upload content that is in the form of .jar and .zip formats, style sheets, JavaScript, and so on. It is recommended to deploy a static resource rather than uploading files to the Documents tab since it is possible to pack a set of files into a directory hierarchy and upload it. These files can be easily referred to in a Visualforce page.
Salesforce Object Query Language (SOQL) lets you search only one object, whereas the Salesforce Object Search Language (SOSL) lets you search for multiple objects. You can query for all types of fields in SOQL, but you can query only for text, email, and phone numbers in SOSL. Data Manipulation Language operations can be performed on query results but not on search results.){}
Become Master of Salesforce by going through this online Salesforce course in Toronto.
Collections are a type of variables used to store multiple numbers of records (data). Types of collections in Salesforce are:
- Lists
- Maps
- Sets
Maps are used to store data in the form of key–value pairs, where each unique key maps to a single value.
Syntax: Map<String, String> country_city = new Map<String, String>();
An Apex transaction represents a set of operations that are executed as a single unit. These operations include DML operations which are responsible for querying records. All the DML operations in a transaction either get completed successfully or get rolled back completely, if an error occurs even in saving a single record.
Get certified from top Salesforce course in Singapore Now
Global class is accessible across the Salesforce instance irrespective of namespaces.
Whereas, public classes are accessible only in the corresponding namespaces.
The get (getter) method is used to pass values from the controller to the VF page.
Whereas, the set (setter) method is used to set the value back to the controller variable.
The following fields are automatically indexed in Salesforce:
- Custom fields marked as an external ID or a unique field
- Primary keys (ID, Name, and Owner fields)
- Audit dates (such as SystemModStamp)
- Foreign keys (Lookup or Master–Detail relationship fields)
Time-dependent workflow cannot be created for ‘created, and every time it’s edited’.
Learn Complete Salesforce at Hyderabad in 24 Hrs.
Sandbox is a similar copy of a Salesforce production for testing, development, and training. The content and size of a sandbox may vary depending on the type of sandbox and the edition of the production organization which is associated with the sandbox. There are four types of sandboxes available:
- Developer Sandbox
- Developer Pro Sandbox
- Partial Data Sandbox
- Full Sandbox
An apex class is a template from which Apex objects can be created. These classes consist of other classes, variables, user-defined methods, exception types, and the static initialization code.
It is a cloud-based CRM which doesn’t require IT experts to set up or manage the cloud. One can simply log in and connect to the customers directly. CRM Salesforce system is a well-organized platform which provides information to its customers from different sources. It is a customer-centric system which integrates customers’ information for an organization’s benefit.
Salesforce Lightning is a platform that provides tools to every organization to build next-generation UI and UX in Salesforce. Lightning creates a modern productivity-boosting user experience. It is used to create fast, beautiful, and unique user experience just like real lightning so that sales teams can sell their product faster. Lightning Experience uses an open-source Aura framework. It is a completely re-designed framework to create a modern user interface.
There are various reasons why Batch Apex is better than Normal Apex.
- A Normal Apex uses 100 records per cycle to execute SOQL queries. Whereas, a Batch Apex does the same in 200 records per cycle. So, it is very fast when the execution of SOQL queries is considered.
- A Normal Apex can retrieve 50,000 SOQL queries but, in Batch Apex, 50,000,000 SOQL queries can be retrieved.
- A Normal Apex has a heap size of 6 MB; whereas, Batch Apex has a heap size of 12 MB.
- When executing bulk records, Normal Apex classes are more vulnerable to encounter errors as compared to Batch Apex. So, it is normally error-less.
Are you interested in learning Salesforce course in Bangalore from Experts?
Ways to call an Apex class in Salesforce are as follows:
- From Visualforce page
- From developer console
- From JavaScript links
- By using trigger
- From another class
- From home page components
A many-to-many relationship can be created by using a junction object. A Junction object is a custom object which has two Master–Detail relationships.
Based on structural differences Salesforce has four different types of reports: | https://intellipaat.com/blog/interview-question/salesforce-interview-questions/ | CC-MAIN-2019-35 | refinedweb | 1,986 | 53.31 |
Explaining GRUNCH
In Higher Living Standards, I was exulting about my access to audio and video recordings of my heroes, or simply people I track.
Like I’m on the fence as to where I stand vis-à-vis Lyndon LaRouche, now 96 according to Wikipedia. However, be that as it may, I find it both rewarding and convenient to retrieve their various recordings so easily. My standard of living has definitely increased.
Today, let’s officially add Robert Anton Wilson to my gang. For the first time that I can recall, I listened to several hours of the guy on audio this morning. The interviewer brought up Everything Is Under Control, his encyclopedia of conspiracies.
I’ll want to explain my connection to the entry under GRUNCH: the linked website was one of mine, and by now is simply grunch.net.
I’m often surprised when I first get to hear one of these leading lights. Richard Feynman’s voice is so distinctive, ditto for Terence McKenna and Jared Diamond.
Sometimes how they look is also stunning or surprising in some way. However I’m imagining I’m driving a heavy rig and need to fix my gaze on the road, even if she’s on autopilot. My personal workspace: the truck’s cab. I can drive and listen to favorite teachers at the same time, and even chew gum.
GRUNCH is recognizably an acronym, given its all-capital letters; however, its inventor, one RBF (another acronym), wanted to be sure to establish it as Grunch as well, and even grunch (now adding it to my spellchecker).
Its English meaning? Well, RBF figured, we really need a “group word” for “bunch of giants”, just like we already have “murder of crows” and “coven of witches” or “meeting of Quakers” or “gaggle of geese”.
Indeed, one might think “bunch” the best choice (“bunch of giants”) but mainly in the rear view mirror, I’d suggest, as it rhymes with GRUNCH, and that’s already ingrained (has traction).
“Gross Universal Cash Heist” is what it stood for, and had to do with re-balancing the books, such that we might clarify, in the privacy of our own thinking, a model of how the world works.
The growth of giant world-spanning corporations had to be phased into the history-telling. However, “military-industrial complex” (Eisenhower) was all worn out and took too long to say.
In today’s news, we speak of “tech giants” (such as FANG: Facebook, Apple, Netflix, Google), and that reinforces the consensus that “giant” is now shorthand for what R. Buckminster Fuller was talking about.
Of course FANG is clearly too restrictive, even with respect to what count as tech giants, but still gives the idea of what giants we’re talking about. FANG < GRUNCH.
By this time you might be thinking RAW’s entry in his encyclopedia is yet another cryptic nomenclature for the same phenomena on everyone’s plate. What’s really new and different here? Are we expected to adopt cute acronyms and neologisms from the Fuller corpus?
As journalists, are we expected to dip our pens in the inkwell of Transcendentalism?
That’s not all up to me of course, though I know how to make it more tempting.
Why not connect the dots for your readers and show the worldly public that you have some command over your material? Show that you’ve done your homework.
OK, now let’s turn to trucking for the moment, as in transportation engineering.
I’ve been a computer programmer for a lot of my life, which means I traffic in some specific namespaces, professional shop talks.
Rather recently, said list of namespaces has expanded to include more from the data science corner, machine learning in particular.
I think a lot of us have had to adjust. Including truckers. I’m not just talking about rigs on autopilot. I’m talking about “anticipatory design science” (RBF again) with special emphasis on “anticipatory”.
Machine Learning (ML) is about predicting the future in a lot of ways. I was yakking about all that yesterday in fact.
You may know me as the inventor of the “bizmo” a neologism (probably not though).
The concept of “recreational vehicle” for those in retirement, or migrating from Amazon to Walmart to a warm place in the desert, was well-established. We saw a lot of RVs on the road. We also had contractors in their colorfully painted vans, on the clock.
The “business mobile” (i.e. bizmo) fit somewhere in between, in providing for some “personal workspace” (like a cubicle) to go on the road.
They’d vary in size and purpose. Sometimes they’d caravan. Don’t forget the dispatch centers. I’m thinking of AAA (Triple-A) now. That’s a well-known road service fleet, with dispatching.
By 2018, some models are on the road, though not under that name.
“Bizjet” is OK in Scrabble (per Hasbro) but not “bizmo” quite yet.
The Urban Dictionary may have other slang meanings.
However, that I’m semi-alone with some semi-private vocabulary (bizmo, Grunch…) is not a show-stopper. So what?
We already know I don’t mind trafficking in esoterica. I was a fixture around Esozone here for a couple years, an event in Portland’s “techno-occult” scene.
The word “occult” simply means “occluded” or “semi-hidden” but that needn’t mean “actively kept secret”. On the contrary, you may have your ads on phone poles, all over town.
Office workers such as myself, with esoteric vocabularies, are not precluded from getting work done, let's put it that way.
A lot of work / study Global U students don’t write “PWS” for “personal workspace” either. Yet they have them, and call them “study carrels” or “truck cabs” or whatever. If you’re a teenager, your PWS might be your home bedroom.
So was Robert Anton Wilson maybe creating a redundancy in his encyclopedia, by keeping a slot open for GRUNCH? Aren't the same themes dealt with elsewhere, using different vocabulary?
I don’t really think so, as Transcendentalism is really more of a subculture, with more than a specific lexicon.
We have our artifacts, our anime and manga.
We have our ethic (World Game) and our aesthetic (free and open). Not every “conspiracy” can boast as much. We have our synergies.
Let me conclude with some words about “conspiracy”.
You may have seen my Conspiracy Theories Rock! piece.
RAW thinks they’re by definition steeped in deception, misdirection, showman chicanery. Likewise by definition, one rarely embraces these negatives, as they sound like charges to defend against.
However, as a matter of etymology, the word simply means to “con spire” as in “breathe together” as some kind of system or entity.
By that definition, every tech giant is a conspiracy, unto itself. Our political universe could be cast as a space of partially overlapping conspiracies. Or call it “heaven for the paranoid”.
To make a long story short, I’m open minded enough to allow for benevolent conspiracies. I don’t insist on a thoroughly negative connotation. Ditto for “propaganda” as another synonym for “PR”.
My willingness to mix “good and evil” in this way is probably owing to my earlier philosophical training. At Princeton, I enjoyed the lectures of Walter Kaufmann, on Nietzsche, on the Bible, on Buber, on existentialists.
Philosophy cultivates a sort of mindset oft confused with aloofness, that really is more a strong commitment to carving out a bigger picture.
“Think globally, act locally” is the attitude, in a nutshell. You might even “think Martian” to get that “alien point of view” (ETPV).
If you’re more a nationalist than a globalist, then I understand why you might consider this philosophy somewhat dangerous.
It strives to be that.
What self respecting philosophy wouldn’t? | https://medium.com/@kirbyurner/explaining-grunch-a57d93e034fa | CC-MAIN-2019-30 | refinedweb | 1,311 | 65.42 |
See also: IRC log
<HarryH> Scribe: bwm
<HarryH> Scribe next week?
<briansuda> sure
<HarryH> brian suda is scribe
<HarryH> PROPOSED: to approve as a true record
<HarryH> RESOLVED: as a true record of Dec 6 2006 GRDDL Meeting
<FabienG> sends regrets from 20 Dec until wednesday 10 Jan included
<HarryH> bwm: Distinction between examples and test-case
<HarryH> bwm: DanC agrees distinction, but seems to want put everything in test-suite
<HarryH> bwm: Normative test-cases
<HarryH> bwm: Informative examples
<HarryH> Previous two lines were me
<HarryH> bwm: whether or not test-cases or normative is orthogonal.
<HarryH> DanC puts everything in test-cases
<HarryH> In charter, demonstrate how we work RDFa
harry: we must have relationship
to rdf/a
... I would be happy to separate out examples that link to use case doc from the test suite
... we do stuff like the rdf/a as an example
... is that ok?
bwm: for me, yes, but we need DanC to decide.
Harry: Fabien?
Fabian: there are other issues
with this example
... the use of rel attribute in link elements in GRDDL collides with use in RDF/a
... we should look at this closely
harry: are you on rdf/a
Fabian: I'm on the mailing
list
... and submitted this example to them
... the group is not at full speed right now
hh: could you forward any responses ...
Fabian: no one noticed this
<HarryH> PROPOSAL: separate examples like RDFa that are covered by the charter from test suite and link examples to Use Case Document.
<FabienG> Brian's proposal: "The root cause of the problem here may be that RDF/a treats the value of
<FabienG> the rel attribute of a link element as a Curie and GRDDL doesn't. A
<FabienG> possible solution would be to mod GRDDL so that the rel attribute value
<FabienG> is also treated as a curie e.g. "grddl:transformation" with the grddl
<FabienG> namespace prefix defined."
Fabian: I'm in favour of the Brian's solution
hh: is everyone ok with these proposals
<HarryH> CURIEs might be controversial with DanC and the TAG.
bwm: good leave these for future discussion
<HarryH> We'll try to consensus on these with DanC and anyone else at next meeting
re action Fabien to make SAWSDL test case
<FabienG> SAWSDL drafts: and
fabian: my opinion is that we
should postpone
... most of the SAWSDL annotation could be translated to RDF
... not sure what to do with xml schema part
... what is the base url
... I can't answer these right now
... we should wait
hh: I thought they were in last call
Fabien: they are currently at working draft stage
hh: is the wg aware of this issue
Fabian: I am not part of that group
hh: they have a comments list
Fabian: I could do post a comment
<HarryH> ACTION: Fabien to post to sawsdl list relevant questions about RDF mapping and relationship to GRDDL [recorded in]
hh: I asked semweb cg to register turtle type
<HarryH> XInclude: What if document you're including as GRDDL links?
<HarryH> bwm: Both Ian and bwm thought both should be under control of author.
<HarryH> bwm: possibility of a transform-all link as opposed to transform?
hh: I have prejudice to keep
things simple
... i.e. define the 'atomic' operation of GRDDL applied to a single document
... and worry about more complicated structures later
<HarryH> By "single" document do you mean infoset pre-Xinclude or post-Xincllude?
<HarryH> Don't assume XInclulde is being done.
bwm: pre-Xinclude
hh: that's the way I'm leaning to
<HarryH> Should we consensus?
<HarryH> bwm: 2 different issues
<HarryH> bwm: what about DTDs and Schemas validation?
<HarryH> bwm: what about XInclude?
<HarryH> bwm: Not assuming XInclude processing
<HarryH> bwm: No opinion about DTD Schema validation
hh: the order of processing of
xml documents is not in our charter
... murray is our liason the group responsible for that
... they aren't going to decide for a year
... it means that not every grddl implementation will treat includes and validation the same
... but its in the author's remit to say what should be done
<HarryH> bwm: publisher of document should have control
<HarryH> bwm: of document.
<HarryH> bwm: They should know what they are publishing
<HarryH> bwm: No use to them to say XInclude may or may not get run.
<HarryH> bwm: Because they may not have control of transform they need answer.
murray: grddl can't answer this
question
... ultimately its up to the publisher or the transform author to answer this question
... a little snippet of xml transform could recognize an xinclude
... and puts a comment in the graph that something is missing
... doesn't belong in the spec
... but maybe in the tutorial
... we could publish a transform that did this
hh: you are suggesting an informative statement
murray: yes
... we could say in the spec there are dragons around xincludes
hh: I agree with that
bwm: are publishers responsible for triples that are produced by transforming an included document
murray: yes - xinclude specifies
that
... xinclude processing tends to happen on the server side
... here is a simple expedient
... we ask for a transform in the XI namespace
... as soon as you use xinclude that transform is available
hh: that is good thinking
murray: we don't control the namespace, we'd have to ask
hh: in whose domain is xinclude
murray: dunno - ask Liam Quinn
<HarryH> ACTION: Murray to e-mail Liam Quinn and DanC about possibility of GRDDL XInclude transform in XInclude namespace. [recorded in]
<HarryH> PROPOSAL: That we do not mandante XInclude or XML Processing Order on the XML input document, and write a caveat in the spec.
<HarryH> Murray: XML transforms operates on an Infoset
<HarryH> Bwm: DTD or XML Schema transforms an Infoset by adding default values.
<HarryH> bwm: So if you run a Schema then GRDDL, you would get different RDF from the one that you ran pre-Schema
<HarryH> murray: So if you don't follow your nose you don't get the full results, and we can't dictate following your nose.
<HarryH> bwm: what is the minimum amount of RDF published?
<HarryH> murray: The minimum is the output of the transformation run against the serialization of source document.
bwm: from the publishers point of view there are two questions - what must be in the GRDDL result of a transform and could be in the GRDDL result of a transform
murray: I think its wrong to define a minimum
<FabienG>
fabien: DanC has written email
saying he is ok with content
... but is not happy with the word "scraping"
<FabienG> Scraping the web: Steffen wants to build a directory of the people he works with.
fabien: at the beginning there was discussion about ensuring that grddl was about transforms not on scraping
hh: do you have a transform attached to non xhtml compliant docuemnt
<HarryH> maybe just have a GRDDL transform attached to non-XML compliant HTML?
fabien: I tried to cover all the
bases
... there is one where you have to tidy it
... there is an example of trying to extract as much as possible
... and that might be considered to be scraping
hh: could you phrase it as "you
are receiving a document and transforming it according to a
grddl link"
... could we just lose the word scraping from the title
fabien: DanC's message started with not sure wehther to use it and ended with maybe we should use it
murray: scraping is a bad word
stop using it
... there are multiple definitions
... its a pejorative term
... people have trouble with it
... so lets find a different word
<HarryH> Murray: "Extracting machine-friendly data from web-pages" :)
murray: try extracting machine friendly data from web pages
hh: fine with me
<FabienG> ACTION: fabien to remove the word scraping [recorded in]
hh: I'll leave this in Fabien's hands
<HarryH> Propose: Given that a base URI parameter is a parameter whose value is
<HarryH> the base URI of the transform source document, the WG RESOLVES not to
<HarryH> define a base URI parameter for transforms, noting that triples
<HarryH> referring to transformed document, or documents named relative to the
<HarryH> transformed document can be created as illustrated in test case
<HarryH> .
test case files are in
correction should be
hh: why did we have a suggestion for a base param
murray: so that relative uri references work
hh: murray does brian proposal resolve t his
murray: I'd ask Dan
<HarryH> Would this cause problems for if the source doucment didn't end in
<HarryH>
<HarryH> Only use of base-param would be with would we want to don't to add a #?
hh: what do other people think
murray: I'm happy
<HarryH> PROPOSAL: Given that a base URI parameter is a parameter whose value is
<HarryH> the base URI of the transform source document, the WG RESOLVES not to
<HarryH> define a base URI parameter for transforms.
<briansuda> has to abstain - need to catch-up on back
<briansuda> emails
<HarryH> I approve.
<HarryH> bwm: not quite quorum.
<HarryH> PROPOSAL should continue, HarryH as a proxy for bwm if he's not at next meeting.
<HarryH> Proxy to vote for "yes"
<HarryH> We've checked a new version of primer.
<HarryH> And all that remains is double-checking SPARQL and manufacturing more example data.
<briansuda> i have emailed HarryH some files
<HarryH> BrianSuda has manufactured most of the example data.
I nominate Harry as my proxy to vote on the baseparam proposal if I'm not at meeting where it is considered
<HarryH> ACTION: [DONE] HarryH to respond to Chime, clarifying song/album and discussing bNode vs URI. [recorded in]
<HarryH> ACTION [DONE]: bwm to review testlist1#rdfa1.
<HarryH> ACTION [DONE]: Fabien to make a SAWSDL test sketch
<HarryH> ACTION:[DONE] bwm to review testlist1#rdfa1 [recorded in]
<HarryH> ACTION:[DONE] Fabien to make a SAWSDL test sketch [recorded in]
<HarryH> ACTION: [DONE] HarryH to poke semweb cg re N3 media type registration [recorded in]
<HarryH> ACTION: DanC to add N3/turtle mime type to Atom/turtle test case. noting the unregistered status [CONTINUES] [recorded in]
<HarryH> ACTION:DanC to write rules about XSLT 1.0 processing context [CONTINUES] [recorded in]
<HarryH> ACTION: HarryH to integrate Murray's XInclude test case sketch into the test collection [CONTINUES] [recorded in]
<HarryH> ACTION:Fabien to add a tidy/tag-soup use case/paragraph, with caveats [DONE] [recorded in]
<HarryH> ACTION: Ian to reconsider comments on cross-document introduction [CONTINUES] [recorded in]
<HarryH> * ACTION: iand to construct a content negotiation test case [CONTINUES]
<HarryH> ACTION:BWM to produce ~3 test cases for #issue-base-param [CONTINUES] [recorded in]
<HarryH> ACTION: Harry and Brian to rewrite second part of primer to use Brian and Dan's instance data [CONTINUES] [recorded in]
<HarryH> ACTION: DanC to add a sample implementation appendix to the GRDDL spec. [CONTINUES] [recorded in]
<HarryH> ACTION: iand to construct a content negotiation test case [CONTINUES] [recorded in]
<FabienG> Chair: Harry
This is scribe.perl Revision: 1.127 of Date: 2005/08/16 15:12:03 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/that/ post a comment/ Succeeded: s/bwm/murray/ Succeeded: s/scaping/scraping/ Found Scribe: bwm Inferring ScribeNick: bwm WARNING: No "Present: ... " found! Possibly Present: Bwm CTION Fabian Fabien FabienG HarryH IPcaller Murray_Maloney P7 PROPOSAL PROPOSED Propose Sophia XInclude briansuda harry hh inserted murray murry You can indicate people for the Present list like this: <dbooth> Present: dbooth jonathan mary <dbooth> Present+ amy Got date from IRC log name: 13 Dec 2006 Guessing minutes URL: People with action items: brian bwm danc fabien harry harryh ian iand murray[End of scribe.perl diagnostic output] | http://www.w3.org/2006/12/13-grddl-wg-minutes.html | crawl-002 | refinedweb | 1,975 | 57.91 |
In this problem, we are given two integers L and R denoting a range. Our task is to find the XOR of all elements within the range [L, R].
Let’s take an example to understand the problem,
Input − L = 3, R = 6

Output − 4

Explanation − 3^4^5^6 = 4
To solve this problem, we will find the MSB (most significant bit) of R; the MSB of the answer will not be greater than the MSB of R. Then, for every bit position from 0 up to that MSB, we will find the parity of the count of numbers in the range that have this bit set.

To find the parity of the count for the ith bit, we can observe that the state of the ith bit changes on every 2^i-th number; the same holds for the numbers with the ith bit set in the range L to R. On doing this, two cases arise −

Case 1 (i != 0) − Check the ith bit of L. If it is set, check the parity of the count of the numbers between L and L + 2^i; if the ith bit of L is set and L is odd, the count is odd, otherwise it is even. Now, we move to R and determine the parity of the count of the elements between R − 2^i and R, following the same method.

All remaining integers are not taken into consideration, as they generate an even count of integers with the ith bit set.
Case 2 (i = 0) − here, we have to consider the following sub-cases −
Case 2.1 − If L and R are both odd, the count of integers with the 0th bit set is (R − L)/2 + 1.
Case 2.2 − Otherwise, the count is (R − L + 1)/2, rounded down.
Program to show the implementation of our solution,
#include <iostream>
using namespace std;

int findMSB(int x) {
   int ret = 0;
   while ((x >> (ret + 1)) != 0)
      ret++;
   return ret;
}

int XOREleInRange(int L, int R) {
   int max_bit = findMSB(R);
   int mul = 2;
   int ans = 0;
   for (int i = 1; i <= max_bit; i++) {
      if ((L / mul) * mul == (R / mul) * mul) {
         if (((L & (1 << i)) != 0) && (R - L + 1) % 2 == 1)
            ans += mul;
         mul *= 2;
         continue;
      }
      bool oddCount = 0;
      if (((L & (1 << i)) != 0) && L % 2 == 1)
         oddCount = (oddCount ^ 1);
      if (((R & (1 << i)) != 0) && R % 2 == 0)
         oddCount = (oddCount ^ 1);
      if (oddCount)
         ans += mul;
      mul *= 2;
   }
   int zero_bit_cnt = (R - L + 1) / 2;
   if (L % 2 == 1 && R % 2 == 1)
      zero_bit_cnt++;
   if (zero_bit_cnt % 2 == 1)
      ans++;
   return ans;
}

int main() {
   int L = 1, R = 4;
   cout << "The XOR of all element within the range (" << L << ", " << R
        << ") is : " << XOREleInRange(L, R);
   return 0;
}
The XOR of all element within the range (1, 4) is : 4 | https://www.tutorialspoint.com/xor-of-all-the-elements-in-the-given-range-in-cplusplus | CC-MAIN-2021-25 | refinedweb | 443 | 76.56 |
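As a quick cross-check (not part of the original article), the same answer can be computed with the classic prefix-XOR identity: the XOR of all integers from 0 to n follows a period-4 pattern, so the XOR over [L, R] is xor_upto(R) ^ xor_upto(L − 1). A sketch in Python:

```python
def xor_upto(n):
    # XOR of all integers in 0..n; the result repeats with period 4
    if n < 0:
        return 0
    return [n, 1, n + 1, 0][n % 4]

def xor_range(L, R):
    # XOR of all integers in [L, R]
    return xor_upto(R) ^ xor_upto(L - 1)

print(xor_range(1, 4))  # 1^2^3^4 = 4, matching the program above
print(xor_range(3, 6))  # 3^4^5^6 = 4
```

This runs in O(1) time and can also serve to verify the bit-by-bit method on large ranges.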
Previous Chapter: Forks and Forking in Python
Next Chapter: Pipe, Pipes and "99 Bottles of Beer"
Next Chapter: Pipe, Pipes and "99 Bottles of Beer"
Threads in Python
Definition of a Thread. More than one thread can exist within the same process. These threads share the memory and the state of the process. In other words: They share the code or instructions and the values of its variables.
There are two different kind of threads:
- Kernel threads
- User-space Threads or user threads
In a certain way, user-space threads can be seen as an extension of the function concept of a programming language. So a user-space thread is similar to a function or procedure call. But there are differences compared to regular functions, especially in the return behaviour.
Advantages of Threading:
- Multithreaded programs can run faster on computer systems with multiple CPUs, because these threads can be executed truly concurrently.
- A program can remain responsive to input. This is true both on single and on multiple CPU systems.
- Threads of a process can share the memory of global variables. If a global variable is changed in one thread, this change is valid for all threads. A thread can have local variables.
The handling of threads is simpler than the handling of processes for an operating system. That's why they are sometimes called light-weight processes (LWP).
Threads in PythonThere are two modules which support the usage of threads in Python:
- thread
and
- threading
Please note: The thread module was considered "deprecated" for quite a long time, and users were encouraged to use the threading module instead. So, in Python 3, the module "thread" is no longer available under that name. But that's not entirely true: it has been renamed to "_thread" for backwards compatibility in Python 3.
The module "thread" treats a thread as a function, while the module "threading" is implemented in an object oriented way, i.e. every thread corresponds to an object.
The thread Module

It's possible to execute functions in a separate thread with the module thread. To do this, we can use the function thread.start_new_thread:
thread.start_new_thread(function, args[, kwargs])
This method starts a new thread and return its identifier. The thread executes the function "function" (function is a reference to a function) with the argument list args (which must be a list or a tuple). The optional kwargs argument specifies a dictionary of keyword arguments. When the function returns, the thread silently exits. When the function terminates with an unhandled exception, a stack trace is printed and then the thread exits (but other threads continue to run).
Example for a Thread in Python:
from thread import start_new_thread

def heron(a):
    """Calculates the square root of a"""
    eps = 0.0000001
    old = 1
    new = 1
    while True:
        old, new = new, (new + a/new) / 2.0
        print old, new
        if abs(new - old) < eps:
            break
    return new

start_new_thread(heron, (99,))
start_new_thread(heron, (999,))
start_new_thread(heron, (1733,))

c = raw_input("Type something to quit.")

The raw_input() in the previous example is necessary, because otherwise all the threads would be exited, if the main program finishes. raw_input() waits until something has been typed in.
We expand the previous example with counters for the threads.
from thread import start_new_thread

num_threads = 0

def heron(a):
    global num_threads
    num_threads += 1
    # code has been left out, see above
    num_threads -= 1
    return new

start_new_thread(heron, (99,))
start_new_thread(heron, (999,))
start_new_thread(heron, (1733,))
start_new_thread(heron, (17334,))

while num_threads > 0:
    pass

The script above doesn't work the way we might expect it to work. What is wrong?
The problem is that the final while loop will be reached even before one of the threads could have incremented the counter num_threads.
But there is another serious problem:
The problem arises from the assignments to num_threads
num_threads += 1
and
num_threads -= 1
These assignment statements are not atomic. Such an assignment consists of three actions:
- Reading the value of num_threads
- A new int instance will be incremented or decremented by 1
- the new value has to be assigned to num_threads
Errors like this happen in the case of increment assignments:
The first thread reads the variable num_threads, which still has the value 0. After having read this value, the thread is put to sleep by the operating system. Now it is the second thread's turn: It also reads the value of the variable num_threads, which is still 0, because the first thread was put to sleep too early, i.e. before it was able to increment the value by 1. Now the second thread is put to sleep. Now it is the third thread's turn, which again reads a 0, but the counter should have been 2 by now. Each of these threads now assigns the value 1 to the counter. Similar problems occur with the decrement operation.
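The non-atomicity described above can be made visible with the standard dis module (Python 3 syntax; not part of the original tutorial): a single augmented assignment compiles to several bytecode instructions, and the operating system may switch threads between any two of them.

```python
import dis

# "num_threads += 1" is not one step: it compiles to a load,
# an in-place add and a store, plus housekeeping instructions.
for instruction in dis.get_instructions("num_threads += 1"):
    print(instruction.opname)
```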
SolutionProblems of this kind can be solved by defining critical sections with lock objects. These sections will be treated atomically, i.e. during the execution of such a section a thread will not be interrupted or put to sleep.
The method thread.allocate_lock is used to create a new lock object:
lock_object = thread.allocate_lock()
The beginning of a critical section is tagged with lock_object.acquire() and the end with lock_object.release().
The solution with locks looks like this:
from thread import start_new_thread, allocate_lock

num_threads = 0
thread_started = False
lock = allocate_lock()

def heron(a):
    global num_threads, thread_started
    lock.acquire()
    num_threads += 1
    thread_started = True
    lock.release()
    ...
    lock.acquire()
    num_threads -= 1
    lock.release()
    return new

start_new_thread(heron, (99,))
start_new_thread(heron, (999,))
start_new_thread(heron, (1733,))

while not thread_started:
    pass
while num_threads > 0:
    pass
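A minimal, runnable version of this locking pattern (Python 3 syntax; it borrows threading.Thread only for the start/join plumbing, and the worker function is invented for this sketch):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    # increment the shared counter n times, each step inside the lock,
    # so the read-modify-write cannot be interleaved with another thread
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000; without the lock it could be smaller
```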
threading Module

We want to introduce the threading module with an example. The thread of the example doesn't do a lot: essentially it just sleeps for 5 seconds and then prints out a message:
import time
from threading import Thread

def sleeper(i):
    print "thread %d sleeps for 5 seconds" % i
    time.sleep(5)
    print "thread %d woke up" % i

for i in range(10):
    t = Thread(target=sleeper, args=(i,))
    t.start()

Method of operation of the threading.Thread class: The class threading.Thread has a method start(), which can start a thread. It triggers off the method run(), which has to be overloaded. The join() method makes sure that the main program waits until all threads have terminated.
The previous script returns the following output:
thread 0 sleeps for 5 seconds
thread 1 sleeps for 5 seconds
thread 2 sleeps for 5 seconds
thread 3 sleeps for 5 seconds
thread 4 sleeps for 5 seconds
thread 5 sleeps for 5 seconds
thread 6 sleeps for 5 seconds
thread 7 sleeps for 5 seconds
thread 8 sleeps for 5 seconds
thread 9 sleeps for 5 seconds
thread 1 woke up
thread 0 woke up
thread 3 woke up
thread 2 woke up
thread 5 woke up
thread 9 woke up
thread 8 woke up
thread 7 woke up
thread 6 woke up
thread 4 woke up

The next example shows a thread which determines if a number is prime or not. The thread is defined with the threading module:
import threading

class PrimeNumber(threading.Thread):
    def __init__(self, number):
        threading.Thread.__init__(self)
        self.Number = number

    def run(self):
        counter = 2
        while counter*counter < self.Number:
            if self.Number % counter == 0:
                print "%d is no prime number, because %d = %d * %d" % (
                    self.Number, self.Number, counter, self.Number / counter)
                return
            counter += 1
        print "%d is a prime number" % self.Number

threads = []
while True:
    input = long(raw_input("number: "))
    if input < 1:
        break
    thread = PrimeNumber(input)
    threads += [thread]
    thread.start()

for x in threads:
    x.join()

With locks it should look like this:
import threading

class PrimeNumber(threading.Thread):
    prime_numbers = {}
    lock = threading.Lock()

    def __init__(self, number):
        threading.Thread.__init__(self)
        self.Number = number
        PrimeNumber.lock.acquire()
        PrimeNumber.prime_numbers[number] = "None"
        PrimeNumber.lock.release()

    def run(self):
        counter = 2
        res = True
        while counter*counter < self.Number and res:
            if self.Number % counter == 0:
                res = False
            counter += 1
        PrimeNumber.lock.acquire()
        PrimeNumber.prime_numbers[self.Number] = res
        PrimeNumber.lock.release()

threads = []
while True:
    input = long(raw_input("number: "))
    if input < 1:
        break
    thread = PrimeNumber(input)
    threads += [thread]
    thread.start()

for x in threads:
    x.join()
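The loop above is interactive, so here is a non-interactive variant of the same pattern for experimenting (Python 3 syntax; the primality check is condensed with all(), and the class name is invented):

```python
import threading

class PrimeCheck(threading.Thread):
    results = {}                 # shared, class-level state
    lock = threading.Lock()      # guards the shared dictionary

    def __init__(self, number):
        threading.Thread.__init__(self)
        self.number = number

    def run(self):
        n = self.number
        is_prime = n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
        PrimeCheck.lock.acquire()
        PrimeCheck.results[n] = is_prime
        PrimeCheck.lock.release()

threads = [PrimeCheck(n) for n in (2, 9, 17, 25, 97)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(PrimeCheck.results)
```

After the joins, every thread has written its verdict into the shared dictionary exactly once.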
Pinging with Threads
A solution without threads is highly inefficient, because the script will have to wait for every ping.
Solution without threads:
import os, re

received_packages = re.compile(r"(\d) received")
status = ("no response", "alive but losses", "alive")

for suffix in range(20,30):
    ip = "192.168.178."+str(suffix)
    ping_out = os.popen("ping -q -c2 "+ip, "r")
    print "... pinging ", ip
    while True:
        line = ping_out.readline()
        if not line:
            break
        n_received = received_packages.findall(line)
        if n_received:
            print ip + ": " + status[int(n_received[0])]

To understand this script, we have to look at the results of a ping on a shell command line:
$ ping -q -c2 192.168.178.26
PING 192.168.178.26 (192.168.178.26) 56(84) bytes of data.

--- 192.168.178.26 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.022/0.032/0.042/0.010 ms

If a ping doesn't lead to success, we get the following output:
$ ping -q -c2 192.168.178.23
PING 192.168.178.23 (192.168.178.23) 56(84) bytes of data.

--- 192.168.178.23 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1006ms
This is the fast solution with threads:
import os, re, threading

class ip_check(threading.Thread):
    def __init__(self, ip):
        threading.Thread.__init__(self)
        self.ip = ip
        self.__successful_pings = -1

    def run(self):
        ping_out = os.popen("ping -q -c2 "+self.ip, "r")
        while True:
            line = ping_out.readline()
            if not line:
                break
            n_received = re.findall(received_packages, line)
            if n_received:
                self.__successful_pings = int(n_received[0])

    def status(self):
        if self.__successful_pings == 0:
            return "no response"
        elif self.__successful_pings == 1:
            return "alive, but 50 % package loss"
        elif self.__successful_pings == 2:
            return "alive"
        else:
            return "shouldn't occur"

received_packages = re.compile(r"(\d) received")

check_results = []
for suffix in range(20,70):
    ip = "192.168.178."+str(suffix)
    current = ip_check(ip)
    check_results.append(current)
    current.start()

for el in check_results:
    el.join()
    print "Status from ", el.ip, "is", el.status()
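The speed-up of the threaded version can be sketched without any actual pings (Python 3 syntax; the 0.2-second sleep stands in for waiting on one host):

```python
import threading
import time

def check_host(results, i):
    time.sleep(0.2)        # stands in for the network round trip
    results[i] = "alive"

results = {}
start = time.time()
threads = [threading.Thread(target=check_host, args=(results, i))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# the five waits overlap, so this takes about 0.2 s instead of about 1 s
print(len(results), "hosts checked in", round(elapsed, 1), "seconds")
```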
Previous Chapter: Forks and Forking in Python
Next Chapter: Pipe, Pipes and "99 Bottles of Beer"
I am using a Python library called arcpy_metadata that has two attributes, max_scale and min_scale, which take integers:
My code for that is just:
import arcpy, arcpy_metadata as md
...
min_scale = arcpy.GetParameterAsText(5)
max_scale = arcpy.GetParameterAsText(6)
...
metadata = md.MetadataEditor(file)
...
metadata.min_scale = min_scale
metadata.max_scale = max_scale
I am trying to turn this into a Python script tool in an ArcMap toolbox. Example inputs for the min and max scale would be 5000 and 150000000. Since there is no integer type and I don't really need decimals, I just entered double when configuring the parameters for the tool. I also set them as `optional`.
When I run the tool however, I keep getting the following error. What am I doing wrong here?
RuntimeWarning: Input value must be of type Integer
Using the GetParameterAsText method means that all parameter values are converted to text before being passed to your code. Therefore you have to convert them back to their original type — in this case, wrap the values in int() before assigning them.
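A sketch of that conversion (the helper name and the empty-parameter handling are my own additions; the arcpy calls appear only in comments, since they run inside ArcMap):

```python
def scale_as_int(value):
    """Convert a GetParameterAsText() result back to an int.

    An optional parameter left blank arrives as "" (or "#"), and a
    Double parameter may arrive as "5000.0", hence the float() step.
    """
    if value in ("", "#", None):
        return None
    return int(float(value))

# Inside the script tool this would be used as:
#   min_scale = scale_as_int(arcpy.GetParameterAsText(5))
#   max_scale = scale_as_int(arcpy.GetParameterAsText(6))

print(scale_as_int("5000"), scale_as_int("150000000.0"), scale_as_int(""))
# -> 5000 150000000 None
```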
However, you may want to add validation to ensure they are valid values (not Null or out of range) before the parameters are converted and passed to the rest of your code. | https://community.esri.com/thread/220752-python-script-parameters-for-integer-not-working | CC-MAIN-2020-40 | refinedweb | 198 | 54.32 |
Pt-F-3
Nicolas:
Yes :)-
James:
No. I found that StoreController.redirect_to_index method would complain. Here’s what we had before:
def redirect_to_index(msg) flash[:notice] = msg if msg redirect_to :action => :index end
Rails would complain “wrong number of arguments (0 for 1)”. You can resolve this by setting the default value of the msg variable to nil as follows:
def redirect_to_index(msg=nil) flash[:notice] = msg if msg redirect_to :action => :index end
(Sidebar: if you’re developing web applications for the first time, a handy Firefox extension you should use is the Web Developer Toolbar (). This handy toolbar allows you to easily test your Ajax-less experience at the click of a button.)
Marcello:
You should already have altered the method on the creation of the AJAX cart. Also, don’t forget to use the statement modifier unless request.xhr? to handle the downgrade.
Jinyoung:
I’ve done like below:
def empty_cart session[:cart] = nil respond_to do |format| format.js if request.xhr? format.html {redirect_to_index} end end
Sawant:
Yes, it does, with the following code:
def remove_from_cart #... if request.xhr? respond_to { |format| format.js } else redirect_to_index end #... end | https://pragprog.com/wikis/wiki/Pt-F-3/version/4 | CC-MAIN-2016-50 | refinedweb | 190 | 50.73 |
template <class InputIterator, class T>
typename iterator_traits<InputIterator>::difference_type
    count (InputIterator first, InputIterator last, const T& val);
Effects
Counts the number of elements that are equal to val
Parameters
first => iterator pointing to the beginning of the range
last => iterator pointing to the end of the range
val => The occurrence of this value in the range will be counted
Return
The number of elements in the range that are equal(==) to val.
Example
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;

int main(int argc, const char * argv[]) {
    //create vector
    vector<int> intVec{4,6,8,9,10,30,55,100,45,2,4,7,9,43,48};

    //count occurrences of 9, 55, and 101
    size_t count_9 = count(intVec.begin(), intVec.end(), 9);     //occurs twice
    size_t count_55 = count(intVec.begin(), intVec.end(), 55);   //occurs once
    size_t count_101 = count(intVec.begin(), intVec.end(), 101); //does not occur

    //print results
    cout << "There are " << count_9 << " 9s" << endl;
    cout << "There is " << count_55 << " 55" << endl;
    cout << "There is " << count_101 << " 101" << endl;

    //find the first element == 4 in the vector
    vector<int>::iterator itr_4 = find(intVec.begin(), intVec.end(), 4);

    //count its occurrences in the vector starting from the first one
    size_t count_4 = count(itr_4, intVec.end(), *itr_4); // should be 2
    cout << "There are " << count_4 << " " << *itr_4 << endl;

    return 0;
}
Output
There are 2 9s
There is 1 55
There is 0 101
There are 2 4
Does "show" take advantage of Dart imports other than intent and possibly compiler speed when importing the Dart library?
Let's say I have:
import 'dart:async' show Timer; import 'dart:math' show Random;
I think one advantage is that you explicitly state your intentions, so if you try to use something else from the library, you have to make an explicit decision to import it.

I suppose another benefit is compiler (dart2js) speed, because even with tree shaking, it can determine what the code depends on more quickly.

Does the runtime speed differ? Any other benefits?
I can think of several:
- It also reduces naming conflicts; if you don't import a class `Foo` from a library because you don't need it, you don't have to fully qualify any other class `Foo` you might be using.
- It reduces the clutter in your "workspace", which keeps you from "accidentally" increasing your coupling to the library by simply "using what's in there" (this only stops you from referring to other classes/functions directly; it doesn't stop them from being used internally or returned to you).
- Similar to (2), the IntelliSense list will be shorter, which can help you focus on the bits you care about.
Of course, the values of each of them can differ from dev to dev.
Edit: Rereading the post, you already mentioned point 2; however, the claim about faster compilation due to tree shaking is not entirely accurate. Just because you don't `show` a class does not mean you are not using it: it can be used internally by the code you call, or returned to you by a function.
Wikibooks:Template messages/Image namespace
From Wikibooks, the open-content textbooks collection
The image namespace contains images and the pages describing them. These include the copyright tags. Each image must have a copyright tag because Wikibooks needs to track the permissions for storing images here.
- My image is missing a copyright tag, or has {{no license}}.
- What you need to do is go to the image page (just click the image wherever you inserted it), edit the page, and add a correct tag. For example, if the image is a work of the US government, add: {{PD-USGov}}. For screenshots, give the screenshot the same license as the program itself has. For your own work, we recommend that you dual-license it.
Muli Ben-Yehuda wrote:
> On Tue, Aug 13, 2002 at 01:09:39AM -0700, H. Peter Anvin wrote:
>> Muli Ben-Yehuda wrote:
>>> How about:
>>>
>>> /* early gcc compilers lose on __func__ */
>>> #ifndef __func__
>>> #define __func__ __FUNCTION__
>>> #endif /* !defined __func__ */
>>
>> __func__ isn't a macro; it's a compiler token.
>
> Works for me(TM).

But it won't work on a compiler that actually *supports* __func__... I think that is gcc 3.1 or higher, but I'm not the authority...

> ObCompletelyUnrelatedQuestions: where can I find klibc?

	-hpa
Web developer, technical writer and OSS contributor. I write about web development, technologies and my learnings.
The portfolio, in a sense, is a longer version of a resume. In modern times, it's important to showcase your work by uploading your portfolio to a website.
Recently I made and deployed my portfolio website under my own domain. I got so much appreciation and amazing feedback for this.
I decided to pass on what I learned to you! I am sharing how I made it, things I learned throughout building it, and challenges I ran into.
There are lots of tools out there to generate a portfolio website for you. But I decided to build it myself in order to practice my skills and to make it more customizable.
If you like this structure, I created a template. You can use it to quickly set up your project.
Portfolio/
├── public
└── src/
    ├── assets/
    │   ├── documents
    │   └── images
    ├── common/
    │   ├── components/
    │   │   ├── Footer
    │   │   ├── Navigation
    │   │   └── UIElements/
    │   │       └── loadingAnimations
    │   ├── hooks
    │   └── util
    ├── features/
    │   └── ProfileRedirect
    └── pages/
        ├── 404
        ├── About/
        │   └── components
        ├── Blogs/
        │   └── components
        ├── Contact/
        │   └── components
        ├── Home/
        │   └── components
        ├── Profiles
        └── Work/
            ├── components
            └── projects
This website doesn't have a big backend because there is not much business logic involved in this.
Server/
└── src/
    ├── controllers/
    ├── data/
    ├── routes/
    ├── services/
    └── util/
I am breaking this blog into separate parts, where each part is a page of the website.
The website contains 6 pages:
All pages have the same navbar and footer.
itsrakesh home page
The home page is a quick overview of the whole website. It contains a quick intro about me, a few social links, an email and a resume button.
It also has two different sections - 2 recent projects, why hire me and CTA.
That 3d NFT on the hero section is generated from this website called "readyplayer.me".
itsrakesh work page
The work page is an overview of what I do. Currently, it has only a projects section but I'm thinking I’ll add more.
The projects’ page contains cards. Each card contains the project photo, title, tech stack, link to the details page and link to live preview.
itsrakesh details page
Project details page is actually a markdown file and rendered as HTML. For this, I use an npm package called markdown-to-jsx. Markdown file is Github README.md of respective project's repo. This is a simple trick.
...
const [readme, setReadme] = useState("");
...
// getting the README URL
...
const response = await axios.get(
    `{repoName}/${'master' || 'main'}/README.md`
);
setReadme(response.data);
...
// render markdown
...
<div className="project-item">
    <Markdown children={readme} />
</div>
...
itsrakesh blogs page
The Blogs page is my favourite page.
I took some inspiration from the amazon prime video TV app 😂. Why that preview? - I cross-post my articles on three platforms - Dev, Hashnode and medium because everyone read articles on their favourite platform.
So this idea of showing preview is to include those three links and show a small part of the blog.
How does this work? - I used the Dev API to pull the blogs from dev. Each blog contains data that includes cover image, title, description, reactions, views, read time, publication date, etc.
With this data, I made a card. That data also contains the blog URL and the canonical link. So the "Read Blog" URL is the canonical URL, the "dev" URL is the blog URL, and I still can't figure out the "medium" URL ☹️ because Medium adds an "id" at the end of the URL (currently all blogs redirect to my Medium profile page).
To conclude, everything on this page is automated and I don't need to upload any data to the database.
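The card-building step described above can be sketched as a small mapping function. The field names (`title`, `description`, `url`, `canonical_url`, and so on) follow the public Dev (dev.to) articles API, but treat the exact shape as an assumption, not a spec.

```javascript
// Sketch of mapping a Dev API article object to the blog-card props
// described above. Field names are assumptions based on the Dev API.
function toBlogCard(article) {
  return {
    title: article.title,
    description: article.description,
    cover: article.cover_image,
    readBlogUrl: article.canonical_url, // "Read Blog" button
    devUrl: article.url,                // "dev" button
    reactions: article.public_reactions_count || 0,
  };
}

// Example usage with a hand-written article object:
const card = toBlogCard({
  title: "Hello",
  description: "A post",
  cover_image: "cover.png",
  canonical_url: "https://example.com/hello",
  url: "https://dev.to/user/hello",
  public_reactions_count: 12,
});
console.log(card.readBlogUrl); // https://example.com/hello
```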
itsrakesh about me page
About Me page is a brief overview of everything about me. It contains some quick links, a Github contributions graph, blogs analytics, tools I use, languages, frameworks I know, my skills (Need to remove that percentage bar) and my achievements.
For the Github contributions graph, I used an npm package called github-calendar-graph.
itsrakesh Contact Form Page
The contact page contains a simple form for people to leave a quick message to me.
How does this form work? - I used "nodemailer" to send emails with NodeJs and "sendgrid" as a mail service.
So whenever a user clicks "Send Message" I send mail to myself that contains the user message :). (Please don't spam I have a monthly limit as part of the free plan :(. )
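The mail-sending flow can be sketched as below. The pure function builds the message object from the form fields; the commented nodemailer/SendGrid transport setup and all addresses are assumptions for illustration, not the site's actual configuration.

```javascript
// Hedged sketch of the contact-form mail flow described above.
// buildContactMail() assembles the message "sent to yourself".
function buildContactMail(form, toAddress) {
  return {
    from: toAddress,                 // send to yourself, as described above
    to: toAddress,
    replyTo: form.email,             // so you can reply to the visitor
    subject: `Portfolio message from ${form.name}`,
    text: form.message,
  };
}

// With nodemailer it might be wired up roughly like this (assumed config):
//   const transporter = nodemailer.createTransport({
//     service: "SendGrid",
//     auth: { user: "apikey", pass: process.env.SENDGRID_API_KEY },
//   });
//   await transporter.sendMail(buildContactMail(form, "me@example.com"));

const mail = buildContactMail(
  { name: "Ada", email: "ada@example.com", message: "Hi!" },
  "me@example.com"
);
console.log(mail.subject); // Portfolio message from Ada
```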
itsrakesh profiles page
The profiles page contains links to some main profiles.
And here's a cool thing - you can find any online profile of me with the URL <websitename>, and it will redirect you to my profile.
For example, will redirect you to my Twitter profile.
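This redirect feature boils down to a slug-to-URL lookup behind a catch-all route. A sketch follows; the profile map, the handles in it, and the commented Express route are assumptions, only the idea (slug in, 302 redirect out) comes from the text above.

```javascript
// Sketch of the /<profile> redirect feature. The map entries are
// hypothetical examples, not the site's real handles.
const profiles = {
  twitter: "https://twitter.com/example_handle",
  github: "https://github.com/example-user",
  dev: "https://dev.to/example-user",
};

function resolveProfile(slug) {
  return profiles[slug.toLowerCase()] || null;
}

// In Express this could back a route like (assumed shape):
//   app.get("/:slug", (req, res) => {
//     const url = resolveProfile(req.params.slug);
//     url ? res.redirect(url) : res.status(404).send("Not found");
//   });

console.log(resolveProfile("Twitter")); // https://twitter.com/example_handle
```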
This website doesn't have too many animations, just a few like those buttons on the home page, counting animations etc. Most of these animations are inspired by the "codepen" community.
Loaders
This website has many loading animations because they are great for the user experience.
There are different types of loaders like a single spinner, placeholders and some creative loaders. I used spinner for page load and bootstrap's placeholder for placeholder animations like the above picture.
Page load animation? - There is a feature in React called "Code-Splitting" which means the browser loads only files that are required. While the browser loads the files you can use that waiting time to show some animation instead of a blank page.
Here's how to do it:
import { lazy } from 'react';

const Home = lazy(() => import('./pages/Home/Home'));
...
<React.Suspense fallback={<LoadingSpinner />}>
    <Routes>
        <Route path="/" element={<Home />} />
    </Routes>
</React.Suspense>
...
There is a visits count in the footer section of the website. I used CountAPI to count the number of times this website has been visited.
I also used Google Analytics even though not needed :). I used it to learn GA.
Finally, I submitted my website to Google search console and Bing webmaster tools to get indexed on search engines even though not needed :). But these tools help to find issues with your website.
Apart from the common errors every developer experiences, I struggled to find a good design, colour combinations, layout, etc. I spent (wasted) a whole day figuring out a simple issue with the blogs page: avoiding duplicates.
Most of the challenges I faced are only with the frontend because there is not much to do with the backend, it's just a simple RESTApi.
This is my first portfolio website so I learned so much throughout building it. Not only technical things but also how to be consistent, how to design from a user point of view, how to mix different colours etc.
Throughout the building, I made a lot of googling, so I learned how to solve an issue by just googling, what are the right places to find solutions, how not to waste time etc.
I also used StackOverflow so effectively that I never even got a chance to ask a question (asking a question on StackOverflow is still my dream).
Now, if you want a portfolio and you are not a web developer or you are not a frontend person then you can just use some online no-code tools like wix, WordPress etc.
And if you want a simple page, there are some great websites.
Here are a few alternatives I know:
You can create a great-looking profile with Github.
For example, here's mine - Github
RakeshPotnuru Github profile
Peerlist is great for anyone. It gives you a really nice profile page where you can include all about you, your work, your blogs, projects etc.
Peerlist.io
Showwcase is a community for developers. And it has a cool feature that gives you a page and a free custom domain.
What's this? - Basically, you include all your skills, projects, experience, tech stack, social links, profile photo and profile banner in your Showwcase profile and Showwcase makes a page with all these details and gives you a free domain.
So you can just share that link to show your profile. Cool, right?
Here's mine - itsrakesh.showwcase.com
showwcase
That's it! That's how I made it. I hope you find this useful. If so, follow me for more useful blogs like this every week.
Please give me feedback on how I can improve my website or the things you like in it. This helps me a lot.
(To give detailed feedback, use the Google form link in the footer; for simple feedback, leave a comment below.)
Thank you 😇. | https://hackernoon.com/how-to-get-your-personal-website-to-the-next-level | CC-MAIN-2022-40 | refinedweb | 1,440 | 73.58 |
MailBee.Pop3Mail namespace contains classes and enumerations which your applications can use to download mail messages from a POP3 server. Other supported operations include deleting mail from the server, sending custom commands to the server, authenticating using secure methods, and more.
MailBee supports POP3 PIPELINING, which greatly (up to 1000% and more) increases performance of downloading or deleting multiple messages.
Pop3Mail.Pop3 (main class of this namespace) is declared as a component, which means you can also just drop it onto your application form instead of creating an instance in the code.
"Quick" methods are also supported, which allows you to download mail from the server in a single line of code.
The component has built-in support for secure connections (TLS/SSL) and proxy servers (SOCKS4/SOCKS5/HTTP).
In UWP apps, use async methods. This platform has very limited sync I/O support. | https://afterlogic.com/mailbee-net/docs/MailBee.Pop3Mail.html | CC-MAIN-2022-40 | refinedweb | 146 | 53.81 |
Are you looking for a deep learning library that’s one of the most popular and widely-used in this world? Do you want to use a GPU and highly-parallel computation for your machine learning model training? Then look no further than TensorFlow.
Created by the team at Google, TensorFlow is an open source library for numerical computation and machine learning. Undoubtedly, TensorFlow is one of the most popular deep learning libraries, and in recent weeks, Google released the full version of TensorFlow 2.0.
Python developers around the world should be excited about TensorFlow 2.0, as it's more Pythonic compared to earlier versions. To help us get started working with TensorFlow 2.0, let's work through an example with linear regression.
Before we start, let me remind you that if you have TensorFlow 2.0 installed on your machine, then the code written for linear regression using TensorFlow 1.x may not work. For example,
tf.placeholder, which works with TensorFlow 1.x, won't work with 2.0. You'll get the error AttributeError: module 'tensorflow' has no attribute 'placeholder' as shown in the image below.
If you want to run the existing code (written in version 1.x) with version 2.0, you have two options:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
In this article, we’re going to use TensorFlow 2.0-compatible code to train a linear regression model.
Linear regression is an algorithm that finds a linear relationship between a dependent variable and one or more independent variables. The dependent variable is also called a label and independent variables are called features.
We'll start by importing the necessary libraries. Let's import three, namely numpy, tensorflow, and matplotlib, as shown below:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
Before coding further, let’s make sure we’ve got the current version of TensorFlow ready to go.
Our next step is to create synthetic data for the model, as shown below. Assuming the equation of a line as y = mx + c, note that we've taken the slope of the line m as 2 and the constant value c as 0.9. There is some noise we've introduced using np.random, as we don't want the model to overfit a perfectly straight line; we want the model to work on unseen data.
# actual weight = 2 and actual bias = 0.9
x = np.linspace(0, 3, 120)
y = 2 * x + 0.9 + np.random.randn(*x.shape) * 0.3
Let's plot the data to see if it has a linear pattern. We're using Matplotlib for plotting. The data points below clearly show the pattern we're looking for. Notice that the data isn't a perfectly straight line.
After visualizing our data, let's create a class called LinearModel that has two methods: __init__ and __call__. __init__ initializes the weight and bias randomly, and __call__ returns the values per the straight-line equation y = mx + c.
class LinearModel:
    def __call__(self, x):
        return self.Weight * x + self.Bias

    def __init__(self):
        self.Weight = tf.Variable(11.0)
        self.Bias = tf.Variable(12.0)
Now let's define the loss and train functions for the model. The train function takes four parameters: linear_model (the model instance), x (the independent variable), y (the dependent variable), and lr (the learning rate). The loss function takes two parameters: y (the actual value of the dependent variable) and pred (the predicted value of the dependent variable).

Note that we're using the tf.square function to get the square of the difference between y and the predicted value, and then the tf.reduce_mean method to calculate the mean of those squared differences.

Also note that tf.GradientTape is used for automatic differentiation, computing the gradient of a computation with respect to its input variables. Hence, all operations executed inside the context of a tf.GradientTape are recorded.
def loss(y, pred):
    return tf.reduce_mean(tf.square(y - pred))

def train(linear_model, x, y, lr=0.12):
    with tf.GradientTape() as t:
        current_loss = loss(y, linear_model(x))
    lr_weight, lr_bias = t.gradient(current_loss, [linear_model.Weight, linear_model.Bias])
    linear_model.Weight.assign_sub(lr * lr_weight)
    linear_model.Bias.assign_sub(lr * lr_bias)
Here we’re defining the number of epochs as 80 and using a for loop to train the model. Note that we’re printing the epoch count and loss for each epoch using that same for loop. We’ve used 0.12 for learning rate, and we’re calculating the loss in each epoch by calling our loss function inside the for loop as shown below.
linear_model = LinearModel()

Weights, Biases = [], []
epochs = 80
for epoch_count in range(epochs):
    Weights.append(linear_model.Weight.numpy())
    Biases.append(linear_model.Bias.numpy())
    real_loss = loss(y, linear_model(x))
    train(linear_model, x, y, lr=0.12)
    print(f"Epoch count {epoch_count}: Loss value: {real_loss.numpy()}")
Below is the output during model training. This shows how our loss value is decreasing as the epoch count is increasing. Note that, initially, the loss was very high as we initialized the model with random values for weight and bias. Once the model starts learning, the loss starts decreasing.
And finally, we’d like to know the weight and bias values as well as RMSE for the model, which is shown below.
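Since those final numbers are shown in an image rather than in the text, here is a NumPy-only sanity check: the closed-form least-squares solution on the same synthetic data should land near the true weight 2 and bias 0.9. The random seed is an assumption added so the check is reproducible; it is not part of the original article.

```python
import numpy as np

# NumPy-only sanity check for the fitted line: solve the same problem in
# closed form and confirm the estimates sit near weight=2, bias=0.9.
np.random.seed(42)  # seed assumed for reproducibility
x = np.linspace(0, 3, 120)
y = 2 * x + 0.9 + np.random.randn(*x.shape) * 0.3

A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# RMSE of the closed-form fit, analogous to the model's final loss
rmse = np.sqrt(np.mean((A @ np.array([w, b]) - y) ** 2))
print(round(w, 2), round(b, 2), round(rmse, 2))
```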
I hope you enjoyed creating and evaluating a linear regression model with TensorFlow 2.0.
You can find the complete code here.
Happy Machine Learning :)
tensorflow machine-learning python be working on an end-to-end case study to understand different stages in the ML life cycle. This will deal with 'data manipulation' with pandas and 'data visualization' with seaborn. After this, an ML model will be built on the dataset to get predictions. You will learn about the basics of the sci-kit-learn library to implement the machine learning algorithm. | https://morioh.com/p/b12da909d731 | CC-MAIN-2020-40 | refinedweb | 1,027 | 59.7 |
As an example, we're going to create a simple class of object for use with my simple video game kernel in a new version I'll be introducing in an article in the near future. The class definition consists mostly of constructor methods, since the class itself is presently not much more than a Rectangle with an added field.
import java.awt.*;
// A Simple class for use in the simple video game examples.
// Mark Graybill, Aug. 2010
public class Brick extends Rectangle{
Color brickColor;
public Brick(int newX, int newY, int newWidth, int newHeight){
super(newX, newY, newWidth, newHeight);
brickColor = new Color(0, 128, 255);
}
public Brick(int newX, int newY){
this(newX, newY, 10, 10);
}
public Brick(){
this(0,0,10,10);
}
public void setColor(Color newColor){ brickColor=newColor; }
public Color getColor(){ return brickColor; }
} // End Brick
In this class, we have three forms of constructor, each with a different set of parameters. The one that takes the most parameters is the "base" version. It starts with a call to super(). This calls the constructor for Brick's superclass, or parent class, Rectangle. A look at the documentation for Rectangle shows that it has a constructor that takes four integer arguments, as we use here in super().
When we use super(), it must be the first thing we do within our constructor method. If we don't use super(), Java will do it automatically as the first thing in a constructor, calling it with no parameters. Since we want to set values for our inherited fields of x, y, width, and height it's better to call super() with those parameters. Otherwise, we could just as well have done something like this:
public Brick(int newX, int newY, int newWidth, int newHeight){
x=newX;
y=newY;
width=newWidth;
height=newHeight;
brickColor = new Color(0, 128, 255);
}
This would have the same effect, and Java would insert an invisible call to super() in front of x=newX;.
The other constructors use the first method. To do this, they use another special method that's like super(). It's called this(), and it calls another constructor for this class. We can't do a call to Brick(), if we try, the compiler will see it as an undefined symbol:
>javac Brick.java
Brick.java:11: cannot find symbol
symbol : method Brick(int,int,int,int)
location: class Brick
Brick(0,0,10,10);
^
1 error
So we use this() instead.
Like super(), the this() method must be the first thing called in the constructor method's body. Since don't call super()--it's in the base constructor--so there's no conflict about which goes first. If you use this(), you don't use super().
By using this() with the other constructor methods, we can keep all our key code code for the constructor in one place. If we had each constructor setting member values and constants without calling the base constructor method, then we'd end up with repeated code--a prime opportunity for bugs to enter our code if we update the code in one place, but not another. We don't want repeated code! Multiple independent constructors are used in the Java Tutorials, but I expect they're used to keep the lesson simple, not because they are a good coding practice!
You can find out more about this() and super() in the Java Language Specification, under Explicit Constructor Invocations.
You can probably see that I should really have my base constructor allow a Color to be passed to it, too. Try adding this yourself as an exercise to try out your understanding of constructor methods, then see what javac thinks of your work. | http://beginwithjava.blogspot.com/2010/08/multiple-constructor-methods.html | CC-MAIN-2018-39 | refinedweb | 617 | 59.33 |
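If you want to check your answer afterwards, here is one possible sketch of the exercise. The base constructor now accepts a Color, and the other constructors chain to it with a default; the default color value is simply the one the original class used, and the rest of the structure is just one way you might do it.

```java
import java.awt.*;

// One possible answer to the exercise above: the base constructor takes a
// Color, and the other constructors chain to it via this().
class Brick extends Rectangle {

    Color brickColor;

    public Brick(int newX, int newY, int newWidth, int newHeight, Color newColor) {
        super(newX, newY, newWidth, newHeight);
        brickColor = newColor;
    }

    public Brick(int newX, int newY, int newWidth, int newHeight) {
        // default color taken from the original class
        this(newX, newY, newWidth, newHeight, new Color(0, 128, 255));
    }

    public Brick(int newX, int newY) {
        this(newX, newY, 10, 10);
    }

    public Brick() {
        this(0, 0, 10, 10);
    }

    public void setColor(Color newColor) { brickColor = newColor; }
    public Color getColor() { return brickColor; }

    public static void main(String[] args) {
        Brick b = new Brick(0, 0, 10, 10, Color.RED);
        System.out.println(b.getColor().equals(Color.RED)); // true
    }
}
```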
NAME

uuid_time - extract the creation time of the UUID

SYNOPSIS

#include <uuid/uuid.h>

time_t uuid_time(uuid_t uu, struct timeval *ret_tv)

DESCRIPTION

) function. It may or may not work with UUIDs created by other mechanisms.

RETURN VALUE

AUTHOR

Theodore Y. Ts'o

AVAILABILITY

SEE ALSO

uuid(3), uuid_clear(3), uuid_compare(3), uuid_copy(3), uuid_generate(3), uuid_is_null(3), uuid_parse(3), uuid_unparse(3)

TRANSLATION

The French translation of this manual page is maintained by the members of the list <debian-l10n-french AT lists DOT debian DOT org>. Please report any translation errors via a bug report on the manpages-fr-extra package.
Fojoin, Fojoin32(3fml)
Name
Fojoin(), Fojoin32() - outer join source into destination buffer
#include <stdio.h>
#include "fml.h"
int
Fojoin(FBFR *dest, FBFR *src)
#include "fml32.h"
int
Fojoin32(FBFR32 *dest, FBFR32 *src)
Description
Fojoin() is similar to Fjoin(), but it keeps fields from the destination buffer, dest, that have no corresponding fieldid/occurrence in the source buffer, src. Fields that exist in the source buffer that have no corresponding fieldid/occurrence in the destination buffer are not added to the destination buffer. If joining buffers results in the removal of a FLD_PTR field, the memory area referenced by the pointer is not modified or freed.
As with Fjoin(), this function can fail for lack of space; it can be re-issued to complete the operation after more space is allocated.
Fojoin32() is used with 32-bit FML.
A thread in a multithreaded application may issue a call to Fojoin() or Fojoin32() while running in any context state, including TPINVALIDCONTEXT.
Return Values
This function returns -1 on error and sets Ferror to indicate the error condition.
Errors
Under the following conditions, Fojoin() fails and sets Ferror to:
Either the source buffer or the destination buffer does not begin on the proper boundary.
Either the source buffer or the destination buffer is not a fielded buffer or has not been initialized by Finit().
A field value is to be added or changed in a field buffer but there is not enough space remaining in the buffer.
Example
In the following example,
if (Fojoin(dest, src) < 0)
F_error("pgm_name");
if dest has fields A, B, and two occurrences of C, and src has fields A, C, and D, the resultant dest will contain the source field value A, the destination field value B,the source field value C, and the second destination field value C.
See Also
Introduction to FML Functions, Fconcat, Fconcat32(3fml), Fjoin, Fjoin32(3fml), Fproj, Fproj32(3fml) | https://docs.oracle.com/cd/E13203_01/tuxedo/tux71/html/rf3fml58.htm | CC-MAIN-2021-43 | refinedweb | 318 | 51.18 |
The XAO of Pooh: XML Access Objects as a New Pattern for Web Development
Posted on Wednesday, July 30th, 2003 10:29 PM

Okay, I lied. One last thing before I take a break - I've been working on this post for a while and wanted to publish this idea I've had and some sample code, but haven't gotten the code done. So here's just the idea, samples later.
For the past few months, instead of actually producing anything, I've been trying to find the perfect system for producing web content. This is, of course, my way to procrastinate. I get to play with a bunch of different technologies, learn a bunch of stuff, write a bunch of code, and at the end I actually don't have anything accomplished. However, finally, I think I may have stumbled upon the system I've been looking for, and I'll explain it here.
First you have to understand my opinions about web development. Doing web dev is not brain surgery. What you're doing is simply reading data from a database and formatting to present it to a browser, maybe taking some data given to you sometimes and popping it back in the db. That's it. Everything else is just abstractions from this basic data read/write paradigm. And that's the stuff that drives me crazy: abstractions.
Russell's #1 rule for app development: For every layer that you put in between you and your data you better have a damn good reason.
However, that said, there are some good reasons out there. Slapping all your code into .jsp pages is definitely not great from a maintenance standpoint at all. My website has survived some decent size blasts from Slashdot, Wired News and averages over 750,000 hits a month so it's not a bad way of doing things from a performance standpoint (as many people on the current MVC bandwagons would lead you to believe) but JSP can quickly turn into spaghetti code. So there's one reason for putting a layer on your website: Separating out your presentation from your logic.
For this, I decided I'm just using Struts. Why? Because it has been blessed by Sun and it has an incredible amount of docs and support. There's like, what, 5 books, endless numbers of websites and constant work being done to improve it. I'm not Struts biggest fan, but I got it to work like I wanted it to with some tweaking so I'm sticking with it. I looked at developing my own simplified MVC, at JPublish, Cocoon and various Python systems, but decided in the end that Struts did what I needed it to do and for the most part got out of my way, so I went that way. There's a lot to be said for standards and Struts is becoming the MVC standard. (Don't jaw at me about WebWork. I looked at it and it just had *too many abstractions*. Bad.)
In this process I took a serious and hard look at Cocoon. I decided that it was too heavy: too much of a big black box whose way of functioning was a complete mystery to me. And waaaay too much use of XSL. However, I *like* Cocoon's way of thinking a lot. I learned a ton by messing with it and came away with two must-haves for any future web dev I do.
First, the URL that's sent to the web server has to be completely separated from the logic and data that's returned. Cocoon provides a great mapping layer where you can define the URLs you want to respond to using RegEx expressions and it works great. This allows you to present your url like and have it correspond to a dynamic query. There's no reason for anyone out on the web to know you're using Java or Python or who knows what because of the URL you send. You should have complete control over the URL.
Cocoon is a lot more powerful than Struts in this respect and uses a lot of server-side magic to make this happen. At first I tried to get Struts to work the same way by just hacking a servlet mapping with /* into the web.xml of my app and then from there handle everything in a custom servlet, but it's a lot of work and more processing - doing it this way, you're responsible for all the .jpgs and .gifs and .css and every other static file as well. The *other* reason is that you can't use .jsp pages mapping /* because eventually when you do a Forward to the .jsp page (after you've done your logic on the back end) the servlet container balks because of the cyclical reference. In other words, .jsp matches /* as well... There are some workarounds for Tomcat - you can define a /*.jsp first and point it at Jasper - however it's not a portable solution and again you have to deal with all the static documents, so I decided not to go that route. Instead I decided that I would map *.html and *.xml to my custom servlet instead. This servlet simply throws the original path into a request object attribute and then forwards the request off to a Struts action. Like this:
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;

public class URLProcessorServlet extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        execute(request, response);
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        execute(request, response);
    }

    public void execute(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        request.setAttribute("originalPath", request.getServletPath());
        String redirectUrl = "/do/URLProcessor";
        RequestDispatcher requestdispatcher = request.getRequestDispatcher(redirectUrl);
        requestdispatcher.forward(request, response);
    }
}
Then the Struts action takes a look at the URL and processes it in a big if statement, forwarding the request to specific actions that do the real work before forwarding to the .jsp pages. I use a standard header on all the pages, so to make sure that someone doesn't call a .jsp page directly, I simply add this bit of JSTL at the top of the .jsp page:
<c:if test="${empty originalPath}"><c:redirect url="/"/></c:if>
Do you see how that works? Basically, if the .jsp page didn't get called through the URLProcessor servlet (which is actually badly named because all it does is forward the request on... but I wanted it to match up with the Action for clarity) then the "originalPath" attribute isn't set, and the page gets sent back to the front page.
So that's the first thing I got from Cocoon - the desire to have full control over the URIs that my server sends out. Having *.html at the end is a bit of a hack, and if you read TBL, he'd say it's not a future-proof URI, but it's still standard enough that most people won't even look at the URL.
The second thing I got from Cocoon is the idea that XML is *the* only way to use data. The first thing you learn in Cocoon is that you need to start with XML before you can do anything else. Step one in the Cocoon presentation process is done by using a variety of "generators" which produce valid XML. Query a DB? It has to come out as XML. Files, URLs, whatever, before you can start manipulating it with Cocoon it has to start as XML.
This is a *fantastic* way of thinking. At first I fought the concept because, well, XML is a pain in the ass. Producing valid XML from random data sources is a real bitch. But if you want to eventually produce different types of XML like WML, XHTML, XHTML-MP, SVG, etc. the only real way is to start with valid XML and go from there. With XML it's really garbage-in, garbage out so you've got to start clean.
The point is that you really have to have bought into the idea of XML as a flexible data transport. If you think XML is just pure crap, then there's no reason to continue reading, because you'll think that everything from this point on is a pointless exercise. But if you get the idea that once data is in XML it becomes super portable and transformable and incredibly useful and worth the inconveniences, please continue.
So as soon as I decided not to use Cocoon, I still wanted to keep this style of programming because I think it just makes incredible sense. It passes my layer test: Yes, I'm putting a layer of XML processing in between me and the database, but the benefits of this extra layer (which I'll explain shortly) justify any drawbacks.
Happily, once I moved back to just "plain" development with .jsps, I discovered the joys of development with JSTL. It's *really* well done. Those of you with long memories might remember me almost a year ago on this blog bitching about how JSTL is horrible and ugly and a misuse of tags, but I was completely wrong. That was a reactionary opinion from someone who spent years with embedded Java in their JSPs. JSTL is done very well and makes it quick and easy to produce clean JSP pages. I really look forward to JSP 2.0, when the Expression Language can be used throughout the page, but for now using just the tags is fine. It's still an abstraction - all the tags are doing is producing a servlet at the end - but it's an abstraction worth using as well (as long as you don't bump into the dreaded 64k class limit. Ugh.)
The absolute *best* thing about JSTL is the XML processing tags. They are INCREDIBLY powerful! I can't believe how cool they are. Using a simple import tag, you can grab your XML from anywhere (a file or an HTTP request) and then use the other XML tags to loop through the XML and present it to your users. I don't really like XSLT - it's okay, but it's difficult for me to use. I mean, I do like how XSL is separate from a server implementation - an XSL stylesheet you write can be used by any XSL processor - but other than its portability, XSLT is just a bitch to use. Errors are cryptic and the results can often be incredibly mysterious. Add any sort of complexity to the transform and the XSL page can get *HUGE* as well.
The one thing I like about XSL is its use of XPath. And happily, JSTL uses it as well! AWESOME! So now you can use the power of loops and if logic on your page like you would normally do, but instead of doing it with SQL data, you're doing it with XML data queried via XPath. It really works well, and the development time (when compared with XSL) is insanely quick. But hey, if you want to use XSL, JSTL supports using XSL stylesheets just as easily, so you don't have to choose just one. Some things will naturally be easier to do via logical loops and ifs; other stuff will be more straightforward transformations.
So now I've come really, really close to having everything I need to quickly and easily produce modern web content via XML, but without much excess baggage. I have control over the URLs, separation of logic and presentation, and presentation based on XML manipulation or transformation. The only thing left is the XML itself. And that's where XML Access Objects come in (finally). It's a revelation that just recently came to me and I'm still working out the issues, but hopefully you'll see the core of the idea and why it's so cool.
You can think of XAOs as Data Access Objects for XML. They separate the application from the source of the data (which can be a database, an xml page, or whatever), but instead of returning some sort of Map or Collection of Beans like DAOs do, these classes only produce XML.
Here's the XAO interface:
public interface XAO {
    public String getXML();
    public String getXML(String xml) throws XAOException;
    public void setXML(String xml) throws XAOException;
    public String getSchema();
    public String getParamSchema();
}
What this does is create a standard interface for dealing with XML data in your application, but without specifying the implementation. Don't be misled at first glance - XAOs aren't an object representation of XML or anything like that; they are simply XML generators. Once you've got that XML in your hands (generated from any source), your options are incredibly broad.
In fact, there's so many cool things you can do with XAOs I almost don't know where to start. XAOs aren't simply interfaces to XML data, they're also XML encapsulation of *any* data. Though the name is related to Data Access Objects, in use they're more like "XML Java Beans". Instead of encapsulating your data in a million varied JavaBeans, Collections and Maps, you instead use XML and reap a ton of benefits. (Maybe I should call them XEOs).
The first rule of XAOs is to keep them simple. The second is that they produce only XML Strings. The third is that they only consume XML Strings. That's it - anything more is too complex and too abstracted. (Well, I say "only", but I've implemented them as an interface, so it's really just a rule that applies when you're working within the XAO framework.)
Here's how I see the interface working. A getXML() query always returns a default XML document. This can be dynamically generated from a database lookup, a local file store, an http request to another server, a XQuery to an XMLDB, or if the interface is set on top of a regular JavaBean or even an EJB, it can return the contents of the bean itself as XML as well.
But what does that XML look like? It can be any XML data you want, but you need to be able to communicate what it's supposed to look like for other classes to use the XAO, right? So that's the job of the getSchema() method, which will return a valid XML Schema document describing in detail the XML returned from the getXML() method. (If you haven't used XML Schema yet, well, neither have I... but it's the best way to mimic a JavaBean level of detail in the data returned.) I honestly haven't gotten much into this yet and have added it in for completeness - but I think it's important. Another option is to return a URI or a DTD instead, but that breaks rule #2. But since I just made the rules up, this may not be a problem. :-)
This same schema is also used when you want to push data to the XAO as well with the setXML(String xml) method. The XAO is expecting a document that it knows what to do with, so it has to be the same as the document that's returned with the getXML() methods.
There are obviously many, many times when a default data lookup isn't going to be enough, and that's where the getXML(String xml) method comes in. This will allow you to pass an XML document of parameters to the getXML() method to focus your "query". What does the XML that's passed to the XAO need to look like? Well that's where the getParamSchema() method comes in. This method returns a definition of the XML that the XAO knows how to parse in the getXML(String xml) method.
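For example, a parameter document passed to getXML(String xml) might look like this (purely illustrative - the element and param names are made up, mirroring the style of the tag examples later in this post):

```xml
<?xml version="1.0"?>
<!-- Hypothetical parameter document; the param names are illustrative -->
<parameters>
  <param name="type">getPost</param>
  <param name="id">1003490</param>
</parameters>
```

The XAO would parse this, run the corresponding query, and return the matching post as XML.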
Now if all this schema stuff makes this seem complicated, it's because it is. The caveat I've decided on to make this process the most flexible is that if the schema methods return null, then *any* XML document can be returned or set. If you send something that the XAO doesn't like, it'll just throw an XAOException which you'll need to deal with programmatically. But by implementing XAOs using the schema methods, you can then programmatically parse and validate the XML. For many applications this level of detail might be required; otherwise it's just a data free-for-all.
So here's some examples of how using XAOs can be insanely powerful. Here's what I'm doing for the next rev of the software that runs this site. First, I create a new XAO class called PostXAO which will allow me to grab my weblog posts from a MySQL db. Here's what the getXML() looks like (for example):
public String getXML() {
    Connection conn = null;
    try {
        Context ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/RussellBeattiePooledDS");
        conn = ds.getConnection();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    StringBuffer xmlBuf = new StringBuffer();
    try {
        String sql = "select * from miniblog where parentId = 0 order by created desc";
        Statement s = conn.createStatement();
        ResultSet rs = s.executeQuery(sql);
        xmlBuf.append("<document>\n");
        int count = 0;
        while (rs.next()) {
            count++;
            if (count > 20) {
                break;
            }
            xmlBuf.append("<entry>\n");
            xmlBuf.append("<id>" + rs.getString("id") + "</id>\n");
            String title = rs.getString("title");
            String content = rs.getString("content");
            // run the stored HTML through JTidy ('tidy' is an org.w3c.tidy.Tidy
            // instance defined elsewhere) to get well-formed markup
            ByteArrayOutputStream sout = new ByteArrayOutputStream();
            org.w3c.dom.Document doc = tidy.parseDOM(new StringBufferInputStream(content), sout);
            String contentClean = sout.toString();
            contentClean = contentClean.replaceAll("&", "&amp;");
            xmlBuf.append("<title>" + title + "</title>\n");
            xmlBuf.append("<content><![CDATA[" + contentClean + "]]></content>\n");
            xmlBuf.append("<created>" + rs.getString("created") + "</created>\n");
            xmlBuf.append("</entry>\n");
        }
        xmlBuf.append("</document>\n");
        rs.close();
        s.close();
        conn.close();
    } catch (Exception e) {
    }
    return xmlBuf.toString();
}

What that does - in a very ugly hand-drawn way - is produce an XML document that looks sorta like this:
<document>
  <entry>
    <id>1003490</id>
    <title>Test</title>
    <content><![CDATA[This is testcontent]]></content>
    <created>2003-07-03 04:08:24.0</created>
  </entry>
</document>
I think it'd be better to use one of the XML libraries like JDOM, DOM4J or even Sun's stuff for producing the XML, but many times it's just easier to hand-draw it. Now, here's how a XAO is used to grab the XML in a Struts Action before passing it to the JSP page:
public class IndexAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        PostXAO postXAO = new PostXAO();
        String xml = postXAO.getXML();
        request.setAttribute("xml", xml);
        String forward = "index";
        return (mapping.findForward(forward));
    }
}
<x:parse var="doc" xml="${xml}"/>
<x:forEach select="$doc/document/entry">
  <fmt:parseDate var="postDate" pattern="yyyy-MM-dd HH:mm:ss.S"><x:out select="created"/></fmt:parseDate>
  <div class="post">
    <a name="<c:catch><fmt:formatDate value="${postDate}" pattern="yyyyMMddHHmmss"/></c:catch>"/>
    <h2><x:out select="title"/></h2>
    <h3><c:catch><fmt:formatDate value="${postDate}" type="both"/></c:catch></h3>
    <x:out select="content" escapeXml="false"/>
    <p class="postlinks">
      <a href="#<c:catch><fmt:formatDate value="${postDate}" pattern="yyyyMMddHHmmss"/></c:catch>">Permalink</a> |
      <a href="#<c:catch><fmt:formatDate value="${postDate}" pattern="yyyyMMddHHmmss"/></c:catch>">Comments [<x:out select="count(comment)"/>]</a>
    </p>
  </div>
  <p />
</x:forEach>
If, however, I wanted to produce an RSS document, I could write some more JSTL and create something fun, but instead I'd rather just use good old XSLT. In that case, I could just write this instead:
<c:import var="stylesheet" url="/WEB-INF/xsl/rss.xsl"/>
<x:transform xml="${xml}" xslt="${stylesheet}"/>
Now this is all well and good, but it only shows a bit of what you can do with XAOs. Now that you've got the basics, you can start adding some neat things. First, let's imagine that you *hate* Struts and think it's a bloated nightmare of an app, and you just want to use XAOs from within your JSP pages - but still don't want to mess with scriptlet code. Well, here's a custom XAO tag that I'm working on now. It would instantiate a new XAO based on the class named and grab the XML based on the interface. Here's what I think the tags look like now:
<xao:getXML class="PostXAO"/>

<xao:getXML class="PostXAO">
<?xml version="1.0"?>
<parameters>
  <param name="type">getPostAndComments</param>
  <param name="id">1003768</param>
</parameters>
</xao:getXML>

<xao:setXML class="PostXAO"/>

<xao:setXML class="PostXAO">
<?xml version="1.0"?>
<test>
  <post>New Post</post>
</test>
</xao:setXML>
Another neat extension of basic XAOs is a caching layer. If you're concerned about how XAOs are going to hold up under a massive number of hits - because processing XML does take time - then you might want to cache the data that's being returned. You can easily cache the results in your JSTL pages via OSCache or Jakarta's Cache taglib. Or - because XAOs *always* return XML - you can cache the results at the source instead.
What I've done is copied much of the code from Jakarta's Cache taglibs into another singleton class I've called XAOCache. Now when I instantiate a XAO, it grabs a reference to the main XAOCache and uses it to improve performance like this:
public String getXML(String xml) {
    boolean exp = xaoCache.expired(xml);
    if (exp) {
        String returnXML = slowHttpGrabber.getAllMyShit();
        return returnXML;
    } else {
        return xaoCache.getXML(xml);
    }
}
And that's the final thought. Instead of passing the XML down to the JSP page to do the transform like in the example above, you can use XAOs as cacheable "transform objects" as well. The designer still does their work on the XSL stylesheet; just the control of that transform is pulled up to the back end, instead of being on the web page. Combined with custom XAO tags, this could be a way to improve XSLT performance across a website. This is how it would work: using the getXML(String xml) method, instead of passing an XML document with parameters, you instead pass the XML document that you want to transform. It would look something like this in a Struts Action before it's passed to the JSP page:
TransformXAO transformXAO = new TransformXAO();
PostXAO postXAO = new PostXAO();
String newXML = transformXAO.getXML(postXAO.getXML());
Okay, that's all I've done so far. As I play with this idea more, I'll post more thoughts. Your comments welcome (though I'll probably be reading them from my phone on vacation. :-) )
-Russ | http://m.mowser.com/web/www.russellbeattie.com/notebook/1003728.html | crawl-002 | refinedweb | 3,723 | 71.04 |
Render an SVG Globe
In this tutorial, I will be showing you how to take an SVG map and project it onto a globe, as a vector. To carry out the mathematical transforms needed to project the map onto a sphere, we must use Python scripting to read the map data and translate it into an image of a globe. This tutorial assumes that you are running Python 3.4, the latest available Python.
Inkscape has a Python extension API that can be used to do a variety of things. However, since we are only interested in transforming shapes, it's easier to just write a standalone program that reads and writes SVG files on its own.
1. Format the Map
The type of map that we want is called an equirectangular map. In an equirectangular map, the longitude and latitude of a place corresponds to its x and y position on the map. One equirectangular world map can be found on Wikimedia Commons (here is a version with U.S. states).
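That correspondence is simple enough to state in code. Here's a quick sketch (not part of the tutorial's script; the function name and coordinate ranges are my own) of how longitude and latitude map to pixel positions on such a map:

```python
# Equirectangular mapping: x is proportional to longitude, y to latitude.
# Longitude runs from -180 to 180 (west to east), latitude 90 to -90 (top to bottom).

def equirect_to_pixel(lon, lat, width, height):
    """Map a (longitude, latitude) pair to an (x, y) pixel position."""
    x = (lon + 180.0) / 360.0 * width
    y = (90.0 - lat) / 180.0 * height
    return x, y

print(equirect_to_pixel(0, 0, 360, 180))      # center of the map
print(equirect_to_pixel(-180, 90, 360, 180))  # top-left corner
```

On a 360x180 map each pixel is exactly one degree, which makes the relationship easy to eyeball.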
SVG coordinates can be defined in a variety of ways. For example, they can be relative to the previously defined point, or defined absolutely from the origin. To make our lives easier, we want to convert the coordinates in the map to the absolute form. Inkscape can do this. Go to Inkscape preferences (under the Edit menu) and under Input/Output > SVG Output, set Path string format to Absolute.
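To see the difference, here are two hand-written path fragments (not taken from the map) that draw the same shape, one in each form:

```xml
<!-- Relative form: lowercase commands, coordinates offset from the previous point -->
<path d="m 10,10 l 20,0 l 0,20" />
<!-- Absolute form: uppercase commands, coordinates measured from the origin -->
<path d="M 10,10 L 30,10 L 30,30" />
```

The absolute form is what we want, since each coordinate pair can then be read directly as a point on the map.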
Inkscape won’t automatically convert the coordinates; you have to perform some sort of transform on the paths to get that to happen. The easiest way to do that is just to select everything and move it up and back down with one press each of the up and down arrows. Then re-save the file.
2. Start Your Python Script
Create a new Python file. Import the following modules:
import sys
import re
import math
import time
import datetime
import numpy as np
import xml.etree.ElementTree as ET
You will need to install NumPy, a library that lets you do certain vector operations like dot product and cross product.
3. The Math of Perspective Projection
Projecting a point in three-dimensional space into a 2D image involves finding a vector from the camera to the point, and then splitting that vector into three perpendicular vectors.
The two partial vectors perpendicular to the camera vector (the direction the camera is facing) become the x and y coordinates of an orthogonally projected image. The partial vector parallel to the camera vector becomes something called the z distance of the point. To convert an orthogonal image into a perspective image, divide each x and y coordinate by the z distance.
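Before wiring this into the script, here's a tiny self-contained sketch of the idea using NumPy (the camera placement and names here are illustrative, not the tutorial's code):

```python
import numpy as np

# A camera on the +z axis looking back at the origin.
camera_pos = np.array([0.0, 0.0, 30.0])
forward = -camera_pos / np.linalg.norm(camera_pos)  # view direction (unit vector)
right = np.array([1.0, 0.0, 0.0])                   # perpendicular to forward
up = np.cross(right, forward)                       # third, mutually perpendicular axis

point = np.array([3.0, 4.0, 0.0])
ray = point - camera_pos                            # vector from camera to point

x = np.dot(ray, right)     # orthogonal x coordinate
y = np.dot(ray, up)        # orthogonal y coordinate
z = np.dot(ray, forward)   # z distance along the view direction

focal = 10.0
print(x * focal / z, y * focal / z)  # perspective coordinates
```

Dividing by the z distance is what makes distant points crowd toward the center of the image.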
At this point, it makes sense to define certain camera parameters. First, we need to know where the camera is located in 3D space. Store its x, y, and z coordinates in a dictionary.
camera = {'x': -15, 'y': 15, 'z': 30}
The globe will be located at the origin, so it makes sense to orient the camera facing it. That means the camera direction vector will be the opposite of the camera position.
cameraForward = {'x': -1*camera['x'], 'y': -1*camera['y'], 'z': -1*camera['z']}
It’s not just enough to determine which direction the camera is facing—you also need to nail down a rotation for the camera. Do that by defining a vector perpendicular to the
cameraForward vector.
cameraPerpendicular = {'x': cameraForward['y'], 'y': -1*cameraForward['x'], 'z': 0}
1. Define Useful Vector Functions
It will be very helpful to have certain vector functions defined in our program. Define a vector magnitude function:
# magnitude of a 3D vector
def sumOfSquares(vector):
    return vector['x']**2 + vector['y']**2 + vector['z']**2

def magnitude(vector):
    return math.sqrt(sumOfSquares(vector))
We need to be able to project one vector onto another. Because this operation involves a dot product, it’s much easier to use the NumPy library. NumPy, however, takes vectors in list form, without the explicit ‘x’, ‘y’, ‘z’ identifiers, so we need a function to convert our vectors into NumPy vectors.
# converts dictionary vector to list vector
def vectorToList(vector):
    return [vector['x'], vector['y'], vector['z']]
# projects u onto v
def vectorProject(u, v):
    return np.dot(vectorToList(v), vectorToList(u)) / magnitude(v)
It’s nice to have a function that will give us a unit vector in the direction of a given vector:
# get unit vector
def unitVector(vector):
    magVector = magnitude(vector)
    return {'x': vector['x'] / magVector,
            'y': vector['y'] / magVector,
            'z': vector['z'] / magVector}
Finally, we need to be able to take two points and find a vector between them:
# Calculates vector from two points, dictionary form
def findVector(origin, point):
    return {'x': point['x'] - origin['x'],
            'y': point['y'] - origin['y'],
            'z': point['z'] - origin['z']}
2. Define Camera Axes
Now we just need to finish defining the camera axes. We already have two of these axes—
cameraForward and
cameraPerpendicular, corresponding to the z distance and x coordinate of the camera’s image.
Now we just need the third axis, defined by a vector representing the y coordinate of the camera’s image. We can find this third axis by taking the cross product of those two vectors, using NumPy—
np.cross(vectorToList(cameraForward), vectorToList(cameraPerpendicular)).
The first element in the result corresponds to the x component; the second to the y component, and the third to the z component, so the vector produced is given by:
# Calculates horizon plane vector (points upward)
cross = np.cross(vectorToList(cameraForward), vectorToList(cameraPerpendicular))
cameraHorizon = {'x': cross[0], 'y': cross[1], 'z': cross[2]}
3. Project to Orthogonal
To find the orthogonal x, y, and z distance, we first find the vector linking the camera and the point in question, and then project it onto each of the three camera axes defined previously:
def physicalProjection(point):
    # pointVector is a vector starting from the camera and ending at the point in question
    pointVector = findVector(camera, point)
    return {'x': vectorProject(pointVector, cameraPerpendicular),
            'y': vectorProject(pointVector, cameraHorizon),
            'z': vectorProject(pointVector, cameraForward)}
A point (dark gray) being projected onto the three camera axes (gray). x is red, y is green, and z is blue.
4. Project to Perspective
Perspective projection simply takes the x and y of the orthogonal projection, and divides each coordinate by the z distance. This makes it so that stuff that’s farther away looks smaller than stuff that’s closer to the camera.
Because dividing by z yields very small coordinates, we multiply each coordinate by a value corresponding to the focal length of the camera.
focalLength = 1000
# draws points onto camera sensor using xDistance, yDistance, and zDistance
def perspectiveProjection(pCoords):
    scaleFactor = focalLength / pCoords['z']
    return {'x': pCoords['x'] * scaleFactor,
            'y': pCoords['y'] * scaleFactor}
5. Convert Spherical Coordinates to Rectangular Coordinates
The Earth is a sphere. Thus our coordinates—latitude and longitude—are spherical coordinates. So we need to write a function that converts spherical coordinates to rectangular coordinates (as well as define a radius of the Earth and provide the π constant):
radius = 10
pi = 3.14159
# converts spherical coordinates to rectangular coordinates
def sphereToRect(r, a, b):
    return {'x': r * math.sin(b*pi/180) * math.cos(a*pi/180),
            'y': r * math.sin(b*pi/180) * math.sin(a*pi/180),
            'z': r * math.cos(b*pi/180)}
We can achieve better performance by storing some calculations used more than once:
# converts spherical coordinates to rectangular coordinates
def sphereToRect(r, a, b):
    aRad = math.radians(a)
    bRad = math.radians(b)
    r_sin_b = r * math.sin(bRad)
    return {'x': r_sin_b * math.cos(aRad),
            'y': r_sin_b * math.sin(aRad),
            'z': r * math.cos(bRad)}
We can write some composite functions that will combine all the previous steps into one function—going straight from spherical or rectangular coordinates to perspective images:
# functions for plotting points
def rectPlot(coordinate):
    return perspectiveProjection(physicalProjection(coordinate))

def spherePlot(coordinate, sRadius):
    return rectPlot(sphereToRect(sRadius, coordinate['long'], coordinate['lat']))
4. Rendering to SVG
Our script has to be able to write to an SVG file. So it should start with:
f = open('globe.svg', 'w')
f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
        '<svg viewBox="0 0 800 800" version="1.1"\n'
        'xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">\n')
And end with:
f.write('</svg>')
That produces an empty but valid SVG file. Within that file the script has to be able to create SVG objects, so we will define two functions that will allow it to draw SVG points and polygons:
# Draws SVG circle object
def svgCircle(coordinate, circleRadius, color):
    f.write('<circle cx="' + str(coordinate['x'] + 400) +
            '" cy="' + str(coordinate['y'] + 400) +
            '" r="' + str(circleRadius) +
            '" style="fill:' + color + ';"/>\n')

# Draws SVG polygon node
def polyNode(coordinate):
    f.write(str(coordinate['x'] + 400) + ',' + str(coordinate['y'] + 400) + ' ')
We can test this out by rendering a spherical grid of points:
# DRAW GRID
for x in range(72):
    for y in range(36):
        svgCircle(spherePlot({'long': 5*x, 'lat': 5*y}, radius), 1, '#ccc')
This script, when saved and run, should produce something like this:
5. Transform the SVG Map Data
To read an SVG file, a script needs to be able to read an XML file, since SVG is a type of XML. That’s why we imported
xml.etree.ElementTree. This module allows you to load the XML/SVG into a script as a nested list:
tree = ET.parse('BlankMap Equirectangular states.svg')
root = tree.getroot()
You can navigate to an object in the SVG through the list indexes (usually you have to take a look at the source code of the map file to understand its structure). In our case, each country is located at
root[4][0][x][n], where x is the number of the country, starting with 1, and n represents the various subpaths that outline the country. The actual contours of the country are stored in the d attribute, accessible through
root[4][0][x][n].attrib['d'].
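If the nesting sounds abstract, here's a minimal, self-contained sketch (using a tiny inline SVG instead of the real map, whose element indexes will differ) of how ElementTree exposes a path's d attribute:

```python
import xml.etree.ElementTree as ET

# Tiny stand-in for the map file; the real file's nesting (root[4][0][x][n]) differs.
svg = '''<svg xmlns="http://www.w3.org/2000/svg">
  <g id="countries">
    <path id="atlantis" d="M 10,10 L 20,10 L 20,20 Z"/>
  </g>
</svg>'''

root = ET.fromstring(svg)
ns = '{http://www.w3.org/2000/svg}'          # ElementTree prefixes tags with the namespace
path = root.find(ns + 'g').find(ns + 'path')
print(path.attrib['id'])   # atlantis
print(path.attrib['d'])    # M 10,10 L 20,10 L 20,20 Z
```

Indexing (`root[0][0]`) works just as well as `find()`; the tutorial uses indexes because the map's structure is fixed.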
1. Construct Loops
We can’t just iterate through this map because it contains a “dummy” element at the beginning that must be skipped. So we need to count the number of “country” objects and subtract one to get rid of the dummy. Then we loop through the remaining objects.
countries = len(root[4][0]) - 1
for x in range(countries):
    root[4][0][x + 1]
Some country objects include multiple paths, which is why we then iterate through each path in each country:
countries = len(root[4][0]) - 1
for x in range(countries):
    for path in root[4][0][x + 1]:
Within each path, there are disjoint contours separated by the characters ‘Z M’ in the d string, so we split the d string along that delimiter and iterate through those.
countries = len(root[4][0]) - 1
for x in range(countries):
    for path in root[4][0][x + 1]:
        for k in re.split('Z M', path.attrib['d']):
We then split each contour by the delimiters ‘Z’, ‘L’, or ‘M’ to get the coordinate of each point in the path:
for x in range(countries):
    for path in root[4][0][x + 1]:
        for k in re.split('Z M', path.attrib['d']):
            for i in re.split('Z|M|L', k):
Then we remove all non-numeric characters from the coordinates and split them in half along the commas, giving the latitudes and longitudes. If both exist, we store them in a
sphereCoordinates dictionary (in the map, latitude coordinates go from 0 to 180°, but we want them to go from –90° to 90°—north and south—so we subtract 90°).
for x in range(countries):
    for path in root[4][0][x + 1]:
        for k in re.split('Z M', path.attrib['d']):
            for i in re.split('Z|M|L', k):
                breakup = re.split(',', re.sub("[^-0123456789.,]", "", i))
                if breakup[0] and breakup[1]:
                    sphereCoordinates = {}
                    sphereCoordinates['long'] = float(breakup[0])
                    sphereCoordinates['lat'] = float(breakup[1]) - 90
Then if we test it out by plotting some points (
svgCircle(spherePlot(sphereCoordinates, radius), 1, '#333')), we get something like this:
2. Solve for Occlusion
This does not distinguish between points on the near side of the globe and points on the far side of the globe. If we want to just print dots on the visible side of the planet, we need to be able to figure out which side of the planet a given point is on.
We can do this by calculating the two points on the sphere where a ray from the camera to the point would intersect with the sphere. This function implements the formula for solving the distances to those two points—dNear and dFar:
cameraDistanceSquare = sumOfSquares(camera)  # squared distance from globe center to camera

def distanceToPoint(spherePoint):
    point = sphereToRect(radius, spherePoint['long'], spherePoint['lat'])
    ray = findVector(camera, point)
    return vectorProject(ray, cameraForward)
def occlude(spherePoint):
    point = sphereToRect(radius, spherePoint['long'], spherePoint['lat'])
    ray = findVector(camera, point)
    d1 = magnitude(ray)  # distance from camera to point
    # dot product of unit vector from camera to point and camera vector
    dot_l = np.dot([ray['x']/d1, ray['y']/d1, ray['z']/d1], vectorToList(camera))
    determinant = math.sqrt(abs((dot_l)**2 - cameraDistanceSquare + radius**2))
    dNear = -(dot_l) + determinant
    dFar = -(dot_l) - determinant
If the actual distance to the point, d1, is less than or equal to both of these distances, then the point is on the near side of the sphere. Because of rounding errors, a little wiggle room is built into this operation:
    if d1 - 0.0000000001 <= dNear and d1 - 0.0000000001 <= dFar:
        return True
    else:
        return False
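As a quick sanity check on the geometry, here's an independent, self-contained sketch of the same near-side test (simplified, with the camera fixed on the +z axis; this is not the tutorial's occlude()). The pole facing the camera should pass and the antipodal point should fail:

```python
import math

RADIUS = 10.0
CAM = (0.0, 0.0, 30.0)  # camera on the +z axis, looking at the origin

def is_near_side(px, py, pz):
    """True if the ray from the camera reaches this surface point before
    passing through the sphere (i.e. the point faces the camera)."""
    rx, ry, rz = px - CAM[0], py - CAM[1], pz - CAM[2]
    d1 = math.sqrt(rx*rx + ry*ry + rz*rz)
    # dot of the unit camera->point ray with the camera position vector
    dot_l = (rx*CAM[0] + ry*CAM[1] + rz*CAM[2]) / d1
    det = math.sqrt(abs(dot_l**2 - (CAM[0]**2 + CAM[1]**2 + CAM[2]**2) + RADIUS**2))
    d_near = -dot_l + det
    d_far = -dot_l - det
    return d1 - 1e-9 <= d_near and d1 - 1e-9 <= d_far

print(is_near_side(0, 0, 10))   # facing the camera -> True
print(is_near_side(0, 0, -10))  # far side -> False
```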
Using this function as a condition should restrict the rendering to near-side points:
if occlude(sphereCoordinates):
    svgCircle(spherePlot(sphereCoordinates, radius), 1, '#333')
6. Render Solid Countries
Of course, the dots are not true closed, filled shapes—they only give the illusion of closed shapes. Drawing actual filled countries requires a bit more sophistication. First of all, we need to print the entirety of all visible countries.
We can do that by creating a switch that gets activated any time a country contains a visible point, meanwhile temporarily storing the coordinates of that country. If the switch is activated, the country gets drawn, using the stored coordinates. We will also draw polygons instead of points.
for x in range(countries):
    for path in root[4][0][x + 1]:
        for k in re.split('Z M', path.attrib['d']):
            countryIsVisible = False
            country = []
            for i in re.split('Z|M|L', k):
                breakup = re.split(',', re.sub("[^-0123456789.,]", "", i))
                if breakup[0] and breakup[1]:
                    sphereCoordinates = {}
                    sphereCoordinates['long'] = float(breakup[0])
                    sphereCoordinates['lat'] = float(breakup[1]) - 90
                    # DRAW COUNTRY
                    if occlude(sphereCoordinates):
                        country.append([sphereCoordinates, radius])
                        countryIsVisible = True
                    else:
                        country.append([sphereCoordinates, radius])
            if countryIsVisible:
                f.write('<polygon points="')
                for i in country:
                    polyNode(spherePlot(i[0], i[1]))
                f.write('" style="fill:#ff3092;stroke: #fff;stroke-width:0.3" />\n\n')
It is difficult to tell, but the countries on the edge of the globe fold in on themselves, which we don’t want (take a look at Brazil).
1. Trace the Disk of the Earth
To make the countries render properly at the edges of the globe, we first have to trace the disk of the globe with a polygon (the disk you see from the dots is an optical illusion). The disk is outlined by the visible edge of the globe—a circle. The following operations calculate the radius and center of this circle, as well as the distance of the plane containing the circle from the camera, and the center of the globe.
# TRACE LIMB
limbRadius = math.sqrt(radius**2 - radius**4/cameraDistanceSquare)
cx = camera['x']*radius**2/cameraDistanceSquare
cy = camera['y']*radius**2/cameraDistanceSquare
cz = camera['z']*radius**2/cameraDistanceSquare
planeDistance = magnitude(camera)*(1 - radius**2/cameraDistanceSquare)
planeDisplacement = math.sqrt(cx**2 + cy**2 + cz**2)
The earth and camera (dark gray point) viewed from above. The pink line represents the visible edge of the earth. Only the shaded sector is visible to the camera.
Then to graph a circle in that plane, we construct two axes parallel to that plane:
# trade & negate x and y to get a perpendicular vector
unitVectorCamera = unitVector(camera)
aV = unitVector({'x': -unitVectorCamera['y'], 'y': unitVectorCamera['x'], 'z': 0})
bV = np.cross(vectorToList(aV), vectorToList(unitVectorCamera))
Then we just graph on those axes by increments of 2 degrees to plot a circle in that plane with that radius and center (see this explanation for the math):

for t in range(180):
    limbPoint = {
        'x': cx + limbRadius*(math.cos(t*pi/90)*aV['x'] + math.sin(t*pi/90)*bV[0]),
        'y': cy + limbRadius*(math.cos(t*pi/90)*aV['y'] + math.sin(t*pi/90)*bV[1]),
        'z': cz + limbRadius*(math.cos(t*pi/90)*aV['z'] + math.sin(t*pi/90)*bV[2])
    }
Then we just encapsulate all of that with polygon drawing code:
f.write('<polygon id="globe" points="')
for t in range(180):
    limbPoint = {
        'x': cx + limbRadius*(math.cos(t*pi/90)*aV['x'] + math.sin(t*pi/90)*bV[0]),
        'y': cy + limbRadius*(math.cos(t*pi/90)*aV['y'] + math.sin(t*pi/90)*bV[1]),
        'z': cz + limbRadius*(math.cos(t*pi/90)*aV['z'] + math.sin(t*pi/90)*bV[2])
    }
    polyNode(rectPlot(limbPoint))
f.write('" style="fill:#eee;stroke: none;stroke-width:0.5" />')
We also create a copy of that object to use later as a clipping mask for all of our countries:
    f.write('<clipPath id="clipglobe"><use xlink:href="#globe"/></clipPath>')
That should give you this:
2. Clipping to the Disk
Using the newly-calculated disk, we can modify our
else statement in the country plotting code (for when coordinates are on the hidden side of the globe) to plot those points somewhere outside the disk:
    else:
        tangentscale = (radius + planeDisplacement)/(pi*0.5)
        rr = 1 + abs(math.tan( (distanceToPoint(sphereCoordinates) - planeDistance)/tangentscale ))
        country.append([sphereCoordinates, radius*rr])
This uses a tangent curve to lift the hidden points above the surface of the Earth, giving the appearance that they are spread out around it:
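The behaviour of the lift factor rr is easy to verify: it equals 1 exactly at the limb plane and grows rapidly as a hidden point recedes behind it. A small numeric check; the 10.0 and 2.0 values are arbitrary stand-ins for planeDistance and tangentscale:

    import math

    def lift_factor(distance_to_point, plane_distance, tangent_scale):
        # Mirrors rr = 1 + |tan((d - planeDistance) / tangentscale)| above.
        return 1 + abs(math.tan((distance_to_point - plane_distance) / tangent_scale))

    factors = [lift_factor(d, 10.0, 2.0) for d in (10.0, 11.0, 12.0)]
    assert factors[0] == 1.0                     # points on the limb plane stay put
    assert factors[0] < factors[1] < factors[2]  # deeper points are pushed out further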
This is not entirely mathematically sound (it breaks down if the camera is not roughly pointed at the center of the planet), but it’s simple and works most of the time. Then by simply adding
clip-path="url(#clipglobe)" to the polygon drawing code, we can neatly clip the countries to the edge of the globe:
    if countryIsVisible:
        f.write('<polygon clip-path="url(#clipglobe)" points="')
I hope you enjoyed this tutorial! Have fun with your vector globes!
Source: Tuts Plus
make TwistedPools the default pool search order
I would like to make the pool search order more intuitive and likely to do what you want in a "namespace is application" environment. My ideas for this, and a link to the git branch, are on wiki page PoolResolution.
Missing from that are C implementation and post-compilation unit tests.
Updates
This is interesting and can definitely enter 3.1. There are only two things I would like to confirm.
1) If B is a subspace of A, having 'A B' as shared pools would actually behave the same as just 'B', right? In other words, A would be eliminated because of the topological sort.
2) Why do you remove pools that are superspaces of the class environment? What happens if you just search those twice? The answer might be related to item 1.
1) Yes, in your specific example; however, elimination is not done as pools are encountered, because we prefer left-to-right. For example, in the hierarchy:
X A B C
'B C' will sort to 'B A C X', whereas 'B C A' will sort to 'B C A X'.
2) It would be searching some pools too early. I wanted to allow importing of namespaces in other hierarchies, while not thereby forcing the early search of namespaces already imported by virtue of containing the importing class.
Arguments can be made to reduce or increase the elimination of shared pools. Both are discussed in the "Combination details" section, the specific option you mention being the subject of the second and third paragraphs.
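The two orderings above can be reproduced with a small priority-driven topological sort: a namespace becomes eligible only once every subspace under consideration has been emitted, and ties are broken by list position (explicitly listed pools keep their own position, while implicit superspaces inherit the position of the leftmost pool that pulled them in). A sketch in Python of one scheme that reproduces both examples, not the actual Smalltalk or C implementation:

    import heapq

    # Direct superspace of each namespace: X contains A and C, A contains B.
    SUPERS = {'B': 'A', 'A': 'X', 'C': 'X', 'X': None}

    def chain(ns):
        out = []
        while ns is not None:
            out.append(ns)
            ns = SUPERS[ns]
        return out

    def pool_order(requested):
        # Implicit superspaces inherit the position of the leftmost requester;
        # an explicit mention overrides that with its own position.
        prio = {}
        for i, pool in enumerate(requested):
            for ns in chain(pool):
                prio.setdefault(ns, i)
        for i, pool in enumerate(requested):
            prio[pool] = i

        involved = set(prio)
        blockers = {ns: {c for c in involved if SUPERS[c] == ns} for ns in involved}
        ready = [(prio[ns], ns) for ns in involved if not blockers[ns]]
        heapq.heapify(ready)

        order = []
        while ready:
            _, ns = heapq.heappop(ready)
            order.append(ns)
            parent = SUPERS[ns]
            if parent is not None:
                blockers[parent].discard(ns)
                if not blockers[parent]:
                    heapq.heappush(ready, (prio[parent], parent))
        return order

    assert pool_order(['B', 'C']) == ['B', 'A', 'C', 'X']
    assert pool_order(['B', 'C', 'A']) == ['B', 'C', 'A', 'X']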
thanks. i have now enough info to implement this in the C parser and make it the default. i don't know how much of your code, especially #poolResolution, I will use, but the concepts will be there.
of course, feel free to beat me to it.
The attached patch makes TwistedPools the default pool search order outside the VM, and consequently changes your TwistedPools class to just use the default pool search order. All testcases still pass.
The code is heavily based on yours, with some refactoring because I wasn't afraid of touching base classes :-) and because the search order is implemented directly in Behavior (actually in Class).
I changed a couple of data structures. For the set of superspaces of this class and all the superclasses' environments, I used a Bag, which makes it easy to account for namespaces that are present multiple times. For the topological sort, I used two IdentitySets (grey/white, in three-color visit terminology) instead of a single dictionary.
I'll post the patch split in three to the ML too.
I added a fix for inherited class/shared pools and a test for it in c097acd. Also in c097acd is a new, failing test, fixed in b69098a, to eliminate only the direct superclass's namespaces from the namespace walk while searching pools. (See step #4 on PoolResolution).
> For the set of superspaces of this class and all the superclasses' environments
While it is convenient, it doesn't match the expectation of step #4, illustrated by the new test case, which is also why TwistedPools originally used
superclass environment withAllSuperspaces asSet to make the sole set of namespace walk eliminations.
The unfortunate detail of this alternative, and thus a slight bug introduced in b69098a, is that it may answer some namespaces multiple times. OrderedSet did the right thing, so I could ignore the issue, but
#allSharedPoolDictionariesDo: isn't necessarily building a set. Of course, the order is still right, so it doesn't affect a leftmost-first variable search at all.
thanks, I merged from you.
> The unfortunate detail of this alternative, and thus a slight bug introduced in b69098a, is that it may answer some namespaces multiple times.
Ah, later I realized that the Bag-based version in 274f63e too would duplicate some pools, in cases where a shared pool was used to force an early namespace import that would otherwise happen later anyway. So it looks like there are duplicates either way, unless anyone finds them bothersome.
ok, that's good. I could find in GCC a nice implementation of IdentitySet, and with that it was quite easy to implement TwistedPools in the VM. It is in my personal git repo.
Tests pass, but still, two more eyes can only help. And there is also a plea for help:
/* Add POOLOOP and all of its superspaces to the list in the right order (Stephen, please help me... :-). */
It's actually just a matter of copying from the right wiki page, if you don't have time I can do it.
Thanks again for noticing the need for this feature, for the precise description of the problem and (as usual) for the high quality of your code and your reviews.
I saw your merge; I'll wait a couple more days for a confirmation and then merge into master.
Yes, they were just a couple of bookkeeping things; I think it's ready for master.
committed then. | http://smalltalk.gnu.org/project/issue/206 | crawl-001 | refinedweb | 840 | 61.87 |
This is a recreational project for my fantasy football league. I will try to very thoroughly explain my thinking and what I have done so far.
I'm trying to create a random schedule generator. But the problem is there are a lot of conditions that need to be met, and whenever I try to implement some of these restrictions it causes infinite loops, which, in my opinion, should not be happening. I could just be thinking wrong, or it could be the random number generator.
It is a ten team league and a 13 week season. Every team plays the other nine teams one time. Then each team plays 4 teams twice. What I have done is created an array list of every possible matchup by assigning an id to each team with numbers ranging from 21 to 30 (I chose this because it will create a unique matchup id when any two of the numbers 21 to 30 are multiplied together). Every team is in an array list called teams. So I did this to create all the unique matchups:
    ArrayList<Matchups> defMatchups = new ArrayList<Matchups>();
    for(i = 0; i < 10; i++){
        for(j = i + 1; j < 10; j++){
            defMatchups.add(new Matchups(teams.get(i), teams.get(j)));
        }
    }

This works every time and is not the problem. The problem comes when I try to randomly generate the 4 duplicate matchups for each team. My function to do this is as follows:
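The claim that multiplying any two distinct ids in 21..30 yields a unique matchup id does hold, and it is cheap to verify (a quick check written in Python, not part of the original program):

    from itertools import combinations

    ids = range(21, 31)
    products = [a * b for a, b in combinations(ids, 2)]

    # 45 possible matchups, 45 distinct products: no two pairs collide.
    assert len(products) == 45
    assert len(set(products)) == 45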
Code :
    package helpers;

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.Random;

    import matchups.Matchups;
    import teams.Teams;

    public class RandomizeSecondMatchups {

        public static ArrayList<Matchups> run(ArrayList<Matchups> original){
            ArrayList<Matchups> list = new ArrayList<Matchups>();
            ArrayList<Matchups> copy = new ArrayList<Matchups>();
            Iterator<Matchups> it = original.iterator();
            Matchups current;
            while(it.hasNext()){
                current = it.next();
                copy.add(current);
            }

            int counter = 0;
            Random rand = new Random();
            Teams temp1, temp2;
            Matchups tempMU;
            do{
                [B]do{
                    tempMU = copy.get(rand.nextInt(copy.size()));
                    System.out.println("--");
                }while(tempMU.getTeamOne().getCount() >= 4 || tempMU.getTeamTwo().getCount() >= 4);[/B]
                temp1 = tempMU.getTeamOne();
                temp2 = tempMU.getTeamTwo();
                temp1.incCount();
                if(temp1.getCount() == 4){
                    counter++;
                }
                temp2.incCount();
                if(temp2.getCount() == 4){
                    counter++;
                }
                list.add(tempMU);
                copy.remove(tempMU);
            }while(counter < 10);
            return list;
        }
    }
An infinite loop usually occurs in the bolded section (the do while loop with the bold tags around it). But there are a few times (about once every 4 or 5 times) where it does not infinite loop and it executes exactly how I want. Which is making me think that my random number is the problem or possibly even eclipse. I have gotten farther into the program and know I will experience more infinite looping problems later on as I will be adding more restrictions in, (e.g. matchups can't repeat themselves in consecutive weeks), so I would like to get this one to work perfectly before I move on.
Anything you can do to help me out be greatly appreciated. If you need to see other classes in my code to help me out let me know, and I will post it up. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/27384-help-schedule-generator-printingthethread.html | CC-MAIN-2015-32 | refinedweb | 551 | 66.13 |
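One likely culprit for the hang: the inner do/while keeps drawing random matchups but never removes ineligible ones, so once every remaining matchup involves a team that already has 4 extra games, no draw can ever succeed and the loop spins forever. Filtering to eligible matchups makes the dead end detectable, and restarting on a dead end terminates quickly. A simplified Python model of the selection (teams are just ids 0..9; this illustrates the fix, it is not a translation of the Java class):

    import itertools
    import random

    def pick_double_matchups(seed=None):
        rng = random.Random(seed)
        while True:  # retry the whole selection if we paint ourselves into a corner
            counts = {t: 0 for t in range(10)}
            remaining = list(itertools.combinations(range(10), 2))
            chosen = []
            while len(chosen) < 20:  # 10 teams * 4 repeats / 2 teams per matchup
                eligible = [m for m in remaining
                            if counts[m[0]] < 4 and counts[m[1]] < 4]
                if not eligible:     # dead end: the Java version spins forever here
                    break            # ...so restart instead
                m = rng.choice(eligible)
                remaining.remove(m)
                counts[m[0]] += 1
                counts[m[1]] += 1
                chosen.append(m)
            if len(chosen) == 20:
                return chosen, counts

    matchups, counts = pick_double_matchups(seed=42)
    assert len(matchups) == len(set(matchups)) == 20
    assert all(c == 4 for c in counts.values())  # every team plays 4 extra games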
Well, the reducers normally will take much longer than the mappers stage, because the copy/shuffle/sort
all happen at this time, and they are the hard part.
But before we simply say it is part of life, you need to dig into more of your MR jobs to
find out if you can make it faster.
You are the person most familiar with your data, and you wrote the code to group/partition
them, and send them to the reducers. Even you set up 255 reducers, the question is, do each
of them get its fair share?You need to read the COUNTER information of each reducer, and found
out how many reducer groups each reducer gets, and how many input bytes it get, etc.
Simple example, if you send 200G data, and group them by DATE, if all the data belongs to
2 days, and one of them contains 90% of data, then in this case, giving 255 reducers won't
help, as only 2 reducers will consume data, and one of them will consume 90% of data, and
will finish in a very long time, which WILL delay the whole MR job, while the rest reducers
will finish within seconds. In this case, maybe you need to rethink what should be your key,
and make sure each reducer get its fair share of volume of data.
After the above fix (in fact, normally it will fix 90% of reducer performance problems, especially
since you have 255 reducer tasks available, so each one on average will only get 1G of data, good for
your huge cluster that only needs to process 256G of data :-), if you want to make it even faster,
then check your code. Do you have to use String.compareTo()? Is it slow? Google hadoop rawcomparator
to see if you can do something here.
After that, if you still think the reducer stage is slow, check your cluster system. Does the
reducer spend most time on the copy stage, or sort, or in your reducer class? Find out where
the time is spent, then identify the solution.
Yong
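The skew effect described above is easy to see in a toy simulation: with 90% of records under one key, one reducer gets at least 90% of the load no matter how many reducers exist, while salting the hot key across 16 sub-keys flattens the distribution. Illustrative Python only; the record counts are made up, and a real job would apply the salt inside its Partitioner:

    import zlib
    from collections import Counter

    NUM_REDUCERS = 32
    SALTS = 16

    def partition(key, salt=0):
        # Deterministic stand-in for key.hashCode() % numReducers.
        return (zlib.crc32(key.encode()) + salt) % NUM_REDUCERS

    # 1000 records: 900 on one hot date, 100 spread over ten other dates.
    records = ['2013-08-29'] * 900
    records += ['day-%d' % d for d in range(10) for _ in range(10)]

    plain = Counter(partition(k) for k in records)
    salted = Counter(partition(k, salt=i % SALTS) for i, k in enumerate(records))

    assert max(plain.values()) / len(records) >= 0.9   # one reducer does nearly all the work
    assert max(salted.values()) / len(records) < 0.2   # hot key spread over 16 buckets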
Date: Fri, 30 Aug 2013 11:02:05 -0400
Subject: Re: secondary sort - number of reducers
From: adeelmahmood@gmail.com
To: user@hadoop.apache.org
my code consists of
1. grouping comparator
2. reducer itself (simply output records)
3. partitioner
if all this gets done in 15 mins then the reducer has the simple task of grouping and outputting records and
should take less time than mappers .. instead it essentially gets stuck in reduce phase ..
im gonna paste my code here to see if anything stands out as a fundamental design issue

    //////PARTITIONER
    public int getPartition(Text key, HCatRecord record, int numReduceTasks) {
        //extract the group key from composite key
        String groupKey = key.toString().split("\\|")[0];
        return (groupKey.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    ////////////GROUP COMPARATOR
    // (comparator code did not survive in this copy of the thread)

yep it was negative and by doing this now it seems to be working fine
On Fri, Aug 30, 2013 at 3:09 AM, Shekhar Sharma <shekhar2581@gmail.com> wrote:
Is the hash code of that key is negative.?
Do something like this
return groupKey.hashCode() & Integer.MAX_VALUE % numParts;
Regards,
Som Shekhar Sharma
+91-8197243810
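The failure mode here is worth spelling out: Java's % truncates toward zero, so a negative hashCode yields a negative remainder, and Hadoop rejects the negative partition number. Masking with Integer.MAX_VALUE first clears the sign bit. One caveat to the line above: in Java, % binds tighter than &, so the mask needs parentheses as in the earlier post, (hash & Integer.MAX_VALUE) % numParts. A Python sketch of the arithmetic, emulating Java's remainder since Python's own % is already non-negative for positive divisors:

    def java_rem(a, n):
        # Java's % truncates toward zero: -7 % 4 == -3 in Java, not 1.
        r = abs(a) % n
        return r if a >= 0 else -r

    h = -1683538955              # a made-up negative 32-bit hashCode
    num_parts = 4

    assert java_rem(h, num_parts) < 0        # -> "Illegal partition" in Hadoop
    masked = (h & 0x7FFFFFFF) % num_parts    # clear the sign bit first
    assert 0 <= masked < num_parts           # always a valid partition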
On Fri, Aug 30, 2013 at 6:25 AM, Adeel Qureshi <adeelmahmood@gmail.com> wrote:
> okay so when i specify the number of reducers e.g. in my example i m using 4
> (for a much smaller data set) it works if I use a single column in my
> composite key .. but if I add multiple columns in the composite key
> separated by a delimi .. it then throws the illegal partition error (keys
> before the pipe are group keys and after the pipe are the sort keys and my
> partioner only uses the group keys
>
> java.io.IOException: Illegal partition for Atlanta:GA|Atlanta:GA:1:Adeel
> (-1)
> at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1073)
> at
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
> at
> org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
> at com.att.hadoop.hivesort.HSMapper.map(HSMapper.java:39)
> at com.att.hadoop.hivesort.HSMapper.map(HSMapper.java:1)
>36)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
>
>
> public int getPartition(Text key, HCatRecord record, int numParts) {
> //extract the group key from composite key
> String groupKey = key.toString().split("\\|")[0];
> return groupKey.hashCode() % numParts;
> }
>
>
> On Thu, Aug 29, 2013 at 8:31 PM, Shekhar Sharma <shekhar2581@gmail.com>
> wrote:
>>
>> No...partitionr decides which keys should go to which reducer...and
>> number of reducers you need to decide...No of reducers depends on
>> factors like number of key value pair, use case etc
>> Regards,
>> Som Shekhar Sharma
>> +91-8197243810
>>
>>
>> On Fri, Aug 30, 2013 at 5:54 AM, Adeel Qureshi <adeelmahmood@gmail.com>
>> wrote:
>> > so it cant figure out an appropriate number of reducers as it does for
>> > mappers .. in my case hadoop is using 2100+ mappers and then only 1
>> > reducer
>> > .. since im overriding the partitioner class shouldnt that decide how
>> > many reducers there should be based on how many different partition
>> > values
>> > being returned by the custom partiotioner
>> >
>> >
>> > On Thu, Aug 29, 2013 at 7:38 PM, Ian Wrigley <ian@cloudera.com> wrote:
>> >>
>> >> If you don't specify the number of Reducers, Hadoop will use the
>> >> default
>> >> -- which, unless you've changed it, is 1.
>> >>
>> >> Regards
>> >>
>> >> Ian.
>> >>
>> >> On Aug 29, 2013, at 4:23 PM, Adeel Qureshi <adeelmahmood@gmail.com>
>> >> wrote:
>> >>
>> >> I have implemented secondary sort in my MR job and for some reason if i
>> >> dont specify the number of reducers it uses 1 which doesnt seems right
>> >> because im working with 800M+ records and one reducer slows things down
>> >> significantly. Is this some kind of limitation with the secondary sort
>> >> that
>> >> it has to use a single reducer .. that kind of would defeat the purpose
>> >> of
>> >> having a scalable solution such as secondary sort. I would appreciate
>> >> any
>> >>
>> >> Thanks
>> >> Adeel
>> >>
>> >>
>> >>
>> >> ---
>> >> Ian Wrigley
>> >> Sr. Curriculum Manager
>> >> Cloudera, Inc
>> >> Cell: (323) 819 4075
>> >>
>> >
>
> | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201308.mbox/%3CBLU162-W1517B58C1789AD4FBB99D1D0350@phx.gbl%3E | CC-MAIN-2017-47 | refinedweb | 977 | 63.29 |
    public class Dave implements Stack {
        public void push(Pushable o) {
            System.out.println("Refusing to push: I'm tired");
        }
        public Pushable pop() {
            return new PushableInt(99);
        }
    }

Type safe, but semantic gibberish. Now, turn the question around. What do you gain by abandoning this limited form of safety? Well, we gain immense flexibility. Refactoring Smalltalk and Ruby is trivial compared to (say) Java. Things just move around. There's no need to jump through hoops to satisfy the compiler.

We gain substantial testability. You can construct mock objects very, very easily, and test out code with far less overhead. You can test objects before the classes they rely on are finished (or even started). This ability to test partial classes also lends itself to easier incremental testing.

We gain expressiveness. I don't know how to describe this one objectively: I just know that I find this kind of code speaks to me more directly: I'm dealing with objects, not object categories.

This is not a thing that can be argued rationally. I was a strong-typing advocate for years, and was nervous when I used languages such as Smalltalk and Ruby. However, I now find Java a very frustrating language to use, and find myself writing higher-quality code in Ruby. In the end, the only way to find out is to try it for yourself and see. Write some Ruby code, and wait until you experience that a-ha! moment. Then write some more code until you start developing an idiomatic style. Get comfortable with RubyTestUnit. Then take on a largish project, and see what you think. Regards, Dave

Dave, could one not write trivial (and quite frankly stupid) examples of how dynamic type systems fail to catch errors until it is too late (runtime)? Could one not write trivial (and quite frankly stupid) examples of how unit tests can pass when the programmer wants failure? Why go down this road? If you want to show something, please back it up with something more substantial than the nonsense provided above.
Better yet, let's work toward improving software development without shooting down things that we really don't understand and don't wish to spend time studying.

{Indeed, one could also write similar stupid examples about how dynamic type systems fail. Dave is using arguments similar to those doctors once used when they did not sterilize their utensils. What does sterilizing your utensils really gain you? Only inexperienced amateur doctors would sterilize their utensils back in the day; the professional doctors performed better surgeries without sterilization. At least, until they started thinking rationally, that is: now all utensils are boiled in hospital basements at high temperatures and no doctor uses unsterile utensils. The idea that our code should be an unpredictable, unsterile mess without restrictions or discipline is nonsense. I consider dynamic typing to be a lack of typing; it's brushing typing under the rug as if it doesn't really exist, so that things just magically change types without you really knowing it. Dynamic typing along with weak typing reminds me of a word: maybe. It should be called "MaybeTyping". What is that? Is it a bird, an airplane, or is it superman? Maybe! I'm not really sure! Is it an integer? My type system says "maybe!". If you told a mathematician that his x and y coordinates just changed magically into strings and are no longer integers, he'd say "is that a bug? who did that to us?"}
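The mock-object point from the pro-dynamic side is concrete enough to sketch: in a dynamically typed language, the test double only needs the one method the code under test actually calls, with no interface declaration at all. An illustrative PythonLanguage example (a sketch added alongside this thread, not code from any participant):

    class HeadlineReport:
        """Formats the first line of anything file-like; no interface required."""
        def __init__(self, source):
            self.source = source

        def headline(self):
            return self.source.readline().strip().upper()

    class FakeSource:
        """A test double: only the one method the code under test calls."""
        def readline(self):
            return "hello world\n"

    # The real source class may not even exist yet, and the test still runs.
    assert HeadlineReport(FakeSource()).headline() == "HELLO WORLD"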
    public interface AnInterface {
        public void open(...);
        public int read(...);
        public void close(...);
    }

Now assume that you want to start to implement the open method and that this is a very complex task in your particular case. JavaLanguage (and other like-minded languages) forces you to provide something for the other two methods. In order to start with the task you're after as soon as possible you DoTheSimplestThingThatCouldPossiblyWork:
    class MyClass implements AnInterface {
        public void open(...) { ... }
        public int read(...) {
            // do nothing
        }
        public void close(...) {
            // do nothing
        }
    }

Now we have some problems:
    class MyClass implements AnInterface {
        ...
        public int read(...) {
            return 0;
        }
        ...
    }
    class MyClass implements AnInterface {
        public void open(...) { ... }
        public int read(...) {
            throw new RuntimeException();
        }
        public void close(...) {
            throw new RuntimeException();
        }
    }
    class MyClass implements AnInterface {
        public void open(...) { ... }
        public int read(...) {
            throw new RuntimeException("MyClass::read not understood/implemented");
        }
        public void close(...) {
            throw new RuntimeException("MyClass::close not understood/implemented");
        }
    }

I hope this gives you the punchline, because this is exactly the default behavior of dynamically typed languages. So obviously, one of the main advantages of DynamicTyping is that your StreamOfConsciousness is not unnecessarily disrupted by useless compiler errors at exploration time that you need to fix somehow in order to be able to continue your work. On the contrary, a language with ManifestTyping might even be more unsafe in that it allows you to do away with compiler errors in a too ad-hoc manner, as shown above. (However, the last part is just speculation on my side, I don't have evidence for this.) -- PascalCostanza

The punchline feels very weak to me, because I don't have to implement the interface to test the class. I can write my tests to MyClass just fine. I only have to implement the interface when I get to integration tests where I have to pass it to something else. At that point I need all three methods whether my language is static or dynamic, since the thing I'm passing it to needs them. And I'd rather it said it needed them in a way the compiler understands, instead of in a comment. -- ScottMcMurray

[See also: StaticTypingHindersRefactoring]
    a = b + c

If "c" is not defined in VB without Option Explicit, it would be assumed Nil, which is rarely what you want. However, in most other dynamic languages it would raise an error. But "a", being a newly encountered variable, would not (assuming b and c are valid). I have had very little trouble with "left-side" dynamic declaration, but agree that VB's right-side dynamism stinks to high heck. I wish VB had an Option Left or the like. Most scripting languages have (mis)features like this. It's part of the idea of a scripting language.
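The left-side/right-side distinction is directly observable: in Python, assignment creates a name, but reading an undefined name raises NameError immediately rather than assuming a default the way pre-Option-Explicit VB does. A tiny demonstration:

    a = 1 + 2            # left side: "a" springs into existence on assignment

    try:
        result = a + undefined_name   # right side: reading an unknown name fails fast
    except NameError:
        result = 'caught'

    assert a == 3
    assert result == 'caught'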
    class Stack my_stack where
        push :: my_stack a -> a -> my_stack a
        pop  :: my_stack a -> a

So far so good, the interface was defined. Now I'll try to make the wrong implementation:
    data Dave a = Dave [a]

    instance Stack Dave where
        push this o = putStr "Refusing to push: I'm tired"
        pop this = 99

Oops! The compiler complains about it. It says the type of push in the implementation (i.e. instance declaration) doesn't match the type given in the interface definition (i.e. class declaration). The actual error given by Hugs is:
    Type error in instance member binding
    *** Term           : push
    *** Type           : Dave a -> a -> IO ()
    *** Does not match : Dave a -> a -> Dave a

After fixing it the compiler now complains about pop:
    Cannot justify constraints in instance member binding
    *** Expression    : pop
    *** Type          : Stack Dave => Dave a -> a
    *** Given context : Stack Dave
    *** Constraints   : Num a

It says that pop is less generic than it should be. While we could do some tricks to write an unconforming but type-checked implementation (e.g. returning this for push), it's less simple than in JavaLanguage because HaskellLanguage's type system is much more expressive. We could make the cheating much harder in Haskell by encoding the semantics in the types (i.e. simulating DependentTypes with rank-2 polymorphism). There are fewer BenefitsOfDynamicTyping when you compare them with languages with advanced StaticTyping.

Oh, really powerful static type systems with inferencing and stuff are great (but bear in mind Nat's comments about runtime polymorphism). Unfortunately, that's not the kind that the overwhelmingly vast majority of working programmers are allowed to use. Maybe sometime around about JavaTwelvty (s/Java/DotNet/ if you wish) the claim made by some (not necessarily anyone writing on this page) that working programmers shouldn't want dynamic typing more than the weak flavour of static typing they're allowed to use won't look quite so ridiculous.

Hmm, I fail to see how "runtime polymorphism" (i.e. subtyping IIUC) is impossible to mix with the technique I described. Need to write interfaces (with optional implementations)? Ok, write down type-classes. Need to do some code inheritance for your data structures? Ok, write the instance declarations. Need to write heterogeneous collections of data handling the same interface? Ok, use ExistentialTypes (which can be automatically derived from your class declarations). Seriously, modern static-typing systems are very powerful and almost invisible; learning HaskellLanguage or CleanLanguage is as enlightening as learning SchemeLanguage or SmallTalk. NatPryce's comment is factually incorrect.
    (defun handle-messages (message-queue channel)   ; <- Is it me, or does Lisp code start becoming beautiful after a while?
      (dolist (msg message-queue)                    ; Stephenson did call Lisp "The Only Beautiful programming language." :)
        (dispatch msg channel)                       ; Who's he, got a link?
        (make-log-entry msg)))                       ; NealStephenson, "InTheBeginningWasTheCommandLine"

    (defmethod dispatch ((msg Special-Message) (chan User-Channel)) ...)
    (defmethod dispatch ((msg Special-Message) (chan Computer-Channel)) ...)
    (defmethod dispatch ((msg Normal-Message) (chan User-Channel)) ...)
    ...

For brevity, just envision there are many types of messages; Special-Message is just one. There could also be many types of channels. Even if they're in a hierarchy, this kind of elegant multi-type construct is very difficult to handle in Haskell's type system because it requires run-time dispatch. You can come slightly closer in ManifestTyping systems like C++ or Java because they allow you to dispatch on methods at run time with constraints. But again, this runs into the problems detailed above. Of course, that ignores the fact that we can't dispatch on multiple parameters without a complex pattern. One of the reasons that C++ templates are so dang popular is that they let you do this kind of thing (in a limited fashion). In Ruby or Python or (of course) Lisp, we can express these kinds of algorithms with relative simplicity. That's the advantage, because this kind of code tends to be much clearer, smaller, and easier to maintain.

I fail to see how the HaskellLanguage code is really less elegant than above:
    handleMessages messageQueue channel = mapM_ handle messageQueue
        where handle msg = do dispatch msg channel
                              makeLogEntry msg

    data Message a = SpecialMessage a | NormalMessage a
    data Channel a = UserChannel a | ComputerChannel a

    dispatch (SpecialMessage msg) (UserChannel chan)     = undefined
    dispatch (SpecialMessage msg) (ComputerChannel chan) = undefined
    dispatch (NormalMessage msg) (UserChannel chan)      = undefined

Is the code really "very difficult to handle in Haskell's type system because it requires run-time dispatch"? The code uses runtime dispatch (i.e. the pattern clauses in the dispatch definitions) and is statically proven to work. Also, the compiler will warn us of undefined cases (e.g. NormalMessage/ComputerChannel) that may trigger runtime errors. Pattern matching works for an arbitrary number of parameters and nested patterns, so it can be used to deal with any "kind of elegant multi-type construct". (Moved discussion on side-effects and Haskell to HaskellExampleForMutabilityOnObjects)

What I dislike about Haskell's approach is that it is very brittle. TypeInference systems allow you to cheerfully paint yourself into a corner without realizing it. Let's say later, you want to allow strings into this dispatch mix as messages. You're going to keep your higher types, but sometimes you just want to write a raw string to channels. Perhaps this is a primitive upon which other features are based. Perhaps it's just convenience. To do it in Haskell, you now need to add Haskell's string type to the type Data. Now, it's quite possible that logically, this would never break. You as the system designer say, "The string could never get [to a place where it would break]. It's not how the system works. If it does, then I have more serious problems than just a run time error." That's fine, especially during the early phases of a project. But a statically typed system won't allow such scenarios through.
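For comparison with both versions, the "relative simplicity" claimed for dynamic languages does not even need CLOS-style generic functions: a dispatch table keyed on the argument types is enough. A PythonLanguage sketch (an illustration added alongside this thread, not code from either participant):

    class SpecialMessage: pass
    class NormalMessage: pass
    class UserChannel: pass
    class ComputerChannel: pass

    HANDLERS = {}

    def handles(msg_type, chan_type):
        def register(fn):
            HANDLERS[(msg_type, chan_type)] = fn
            return fn
        return register

    @handles(SpecialMessage, UserChannel)
    def _special_user(msg, chan):
        return 'special->user'

    @handles(NormalMessage, UserChannel)
    def _normal_user(msg, chan):
        return 'normal->user'

    def dispatch(msg, chan):
        # Runtime multiple dispatch on both argument types; unknown
        # combinations fail at runtime, not compile time.
        return HANDLERS[(type(msg), type(chan))](msg, chan)

    assert dispatch(SpecialMessage(), UserChannel()) == 'special->user'
    assert dispatch(NormalMessage(), UserChannel()) == 'normal->user'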
There are indeed fewer advantages of DynamicTyping compared to TypeInference. Fewer doesn't mean "none". Flexibility is a big issue. One of the reasons I like SBCL and CMUCL so much is that early on, you operate without types. Once you get your project to a working state, you add type declarations for optimization, and then the system does TypeInference and lets you know. This, to me, is the ideal combination. Early on, static typing is bad. We need to use dynamic tools during the dynamic phase of development. Once the project settles down, static typing becomes far more useful because the project is, well, static. If OCaml allowed you to turn off typing at first and just do dynamic dispatch, I'd use it far more often.

"But a statically typed system won't allow such scenarios through." That's untrue. As I stated above, the compiler will warn us of undefined cases, but the code will compile if necessary (it's at your own risk and I never do this). Your scenario would just extend the "Message" declaration and the "dispatch" operation:
    data Message a = SpecialMessage a | NormalMessage a | RawMessage String

    dispatch (RawMessage str) (ComputerChannel chan) = undefined

The type system doesn't forbid you from doing that, so your statement is factually incorrect. So far I wrote code equivalent in number of tokens/lines/characters that disproves your initial statement.

Equivalent? You have nearly twice as many tokens as the Lisp example, and that's generously not counting | and =. You also have more characters. Even worse, you have a type union statement that a programmer must maintain. Wasn't the point of TypeInference to not declare type junk?

The "data" statement defines a type and the constructors; it isn't "a type union statement". Also, they aren't "type junk", or would you call all your "defclass" statements "type junk" in Lisp? In your Lisp pseudo-code you never declared the classes for "Special-Message", "User-Channel" and "Computer-Channel" that would be necessary to create the appropriate objects, so to be fair I didn't count their influence in the final code size (to be correct we should find the size of the final working system).

In Lisp, using defclass creates new types. These types may or may not be aggregates of other types. We never need to declare a type, then declare an aggregate of that type, just to get a function to accept it as a parameter. The class definition example was omitted from your example as well as mine. Therefore, it's perfectly fair to count the symbols in your "data" statement. That statement is particularly irritating because it requires programmer maintenance. Every time I need to make "dispatch" accept a new type, not only do I need to write the new definition (which is inescapable), but I also now need to do book-keeping for the compiler.

Also, I disproved your statement on how the type system would behave in design changes.
Before changing the subject again and stating incorrect "facts" about such languages, please answer my question: is the HaskellLanguage code really convoluted, ugly, more complex, whatever, than your original example?

Such questions are matters of opinion. I think that it looks messy and confusing. The syntax is complicated compared to Lisp (not fair though, Lisp has almost no syntax) or an equivalent Ruby program. The diversion was simply to remind me why I don't like Haskell. For that I apologize. You must forgive me if I'm a bit confused about Haskell. Since HUGS won't run on any of my machines, I can't play with it as much. It's been about 3 years since I did anything serious with it. Besides, what you're saying isn't really true. You still had to add the string to that type union. In a system based on DynamicTyping, you would just pass it in. Heck, in Lisp you'd probably just tap out a line of Lisp code at the repl and watch the results. When it breaks, you add the appropriate CLOS dispatch statement and do it again.

You said "But a statically typed system won't allow such scenarios through." which is untrue.

Let's assume for a moment that you'd want to program a large system in Haskell. When you're prototyping these systems, you'd constantly be pulling tricks like what you describe. In the end, all it really is is DynamicTyping through circumvention. If you're deliberately circumventing the type system, what good is it to even have it there at all besides optimization? What, at the moment that you perform such an action, does it buy you? As with DaveThomas's elaborate example, when you start making hacks like this, you begin to emulate DynamicTyping in StaticTyping, with ungainly workarounds.

Huh?!? Please explain to me how the Haskell code is more convoluted or complicated than the CLOS example? What are the tricks that I pulled off? How is this emulating DynamicTyping in StaticTyping?

You need extra symbols to keep the inferencer happy.
You're creating "data" as a classification of types suitable for the first parameter of the dispatch method. Could you get rid of that statement, or have it omit a type, and not be stopped by your type checker? Further, I'm not sure you really did what I said you should do in that example. You added a "RawMessage?". I said I would add a built-in string class. Now, since we've established that my Haskell is toxically rusty, this is mere speculation: You'd had to declare RawMessage? to be a string. Why exactly did you do it this way? Why not just say "data = ... | String"? (don't post on the wiki before your first cup of coffee, that's my lesson of the day). Is that a sufficient and factual answer? I think if StaticTyping was going to solve anything, it would have begun to show by now. What exactly has ManifestTyping and TypeInference changed in the programming world? Not much. On the other hand, DynamicTyping only becomes more popular over time, and we hear all kinds of very intelligent professionals say, "DynamicTyping is where it's at!" Why do you suppose that is? As those are fallacies there's no need to answer them. Look I'm trying to refute your statements and you keep changing them, first was saying that Haskell couldn't do runtime dispatch, then you said that we couldn't change our code to deal with raw strings. What will be next? You are stating several different ideas here and I'm dealing with some of them, but if you keep stating new things without stopping to acknowledge that some of them were misconceptions than we will go nowhere. [I think he's simply stating that dynamic typing is better than static typing for a number of reasons, he's not changing his statement, he's elaborating on it. He's also correct that dynamic typing is more powerful, statically typed programs are a subset of dynamically typed programs. {Not true. In Haskell you can overload on the return type. 
You might be surprised how useful that can be.} Dynamic typing is also more flexible; you can program while working on the problem, rather than programming to please the compiler. {I don't mind arguing with the compiler when the compiler's right. Having code work correctly the first time it compiles is addictive.} His final statement is also correct: many of the top professionals are starting to prefer dynamic type systems now that computers are fast enough to make them feasible. Static type systems' only advantages over dynamic systems are speed, and some assistance in avoiding typos.]
SGI CC compiler: What happens if I use --> #include "unistd.h" and "stdio.h"?
Discussion in 'C++' started by clusardi2k@aol
On 12/1/2009 4:22 AM, Manuel Graune wrote:
>

It's not so much about list() vs. [] but generator comprehension vs.
list comprehension. list() takes a generator comprehension, while
[listcomp] is its own syntax.

List comprehension leaked its "loop counter" to the surrounding
namespace, while generator comprehension got its own tiny namespace.
This "bug" (or feature, depending on your political alignment) is
fixed in Python 3.x:

Python 3.1.1 (r311:74483, Aug 17 2009, 17:02:12) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [i for i in range(10)]
>>> i
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'i' is not defined
>>> a = list(i for i in range(10))
>>> i
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'i' is not defined
>>> ^Z
Vue 3 is going to be released soon with the introduction of the Composition API. It comes with many changes and improvements to performance.
Higher-order components (HOCs) are components that add certain functionalities to your app declaratively using the template. I believe they will continue to be very relevant even with the introduction of the Composition API.
HOCs always had problems exposing the full power of their functionality, and because they are not that common in most Vue applications, they are often poorly designed and may introduce limitations. This is because the template is just that — a template, or a constrained language in which you express some logic. However, in JavaScript or JSX environments, it is much easier to express logic because you have the entirety of JavaScript available for you to use.
What Vue 3 brings to the table is the ability to seamlessly mix and match the expressiveness of JavaScript using the Composition API and the declarative ease of templates.
I’m actively using HOCs in the applications I built for various pieces of logic like network, animations, UI and styling, utilities, and open-source libraries. I have a few tips to share on how to build HOCs, especially with the upcoming Vue 3 Composition API.
The template
Let’s assume the following
fetch component. Before we get into how to implement such a component, you should think about how you would be using your component. Then, you need to decide how to implement it. This is similar to TDD but without the tests — it’s more like playing around with the concept before it works.
Ideally, that component would use an endpoint and return its result as a scoped slot prop:
```html
<fetch endpoint="/api/users" v-slot="{ data }">
  <div v-if="data">
    <!-- Show the response data -->
  </div>
</fetch>
```
Now, while this API serves the basic purpose of fetching some data over the network and displaying it, there are a lot of missing things that would be useful to have.
Let’s start with error handling. Ideally, we would like to be able to detect whether a network or a response error was thrown and display some indication of that to the user. Let’s sketch that into our usage snippet:
```html
<fetch endpoint="/api/users" v-slot="{ data, error }">
  <div v-if="data">
    <!-- Show the response data -->
  </div>
  <div v-if="error">
    {{ error.message }}
  </div>
</fetch>
```
So far so good. But what about loading state? If we follow the same path, we end up with something like this:
```html
<fetch endpoint="/api/users" v-slot="{ data, error, loading }">
  <div v-if="data">
    <!-- Show the response data -->
  </div>
  <div v-if="error">
    {{ error.message }}
  </div>
  <div v-if="loading">
    Loading....
  </div>
</fetch>
```
Cool. Now, let’s assume we need to have pagination support:
```html
<fetch endpoint="/api/users" v-slot="{ data, error, loading, nextPage, prevPage }">
  <div v-if="data">
    <!-- Show the response data -->
  </div>
  <div v-if="data">
    <button @click="prevPage">Prev Page</button>
    <button @click="nextPage">Next Page</button>
  </div>
  <div v-if="error">
    {{ error.message }}
  </div>
  <div v-if="loading">
    Loading....
  </div>
</fetch>
```
You see where this is going, right? We are adding way too many properties to our default scoped slot. Instead, let’s break that down into multiple slots:
```html
<fetch endpoint="/api/users">
  <template #default="{ data }">
    <!-- Show the response data -->
  </template>
  <template #pagination="{ nextPage, prevPage }">
    <button @click="prevPage">Prev Page</button>
    <button @click="nextPage">Next Page</button>
  </template>
  <template #error="{ message }">
    <p>{{ message }}</p>
  </template>
  <template #loading>
    Loading....
  </template>
</fetch>
```
While the number of characters we have is mostly the same, this is much cleaner in the sense that it uses multiple slots to show different UI during the different operation cycles of the component. It even allows us to expose more data on a per-slot basis, rather than the component as a whole.
Of course, there is room for improvement here. But let’s decide that these are the features you want for that component.
Nothing is working yet. You still have to implement the actual code that will get this to work.
Starting with the template, we only have 3 slots that are displayed using
v-if:
```html
<template>
  <div>
    <slot v-if="loading" name="loading" />
    <slot v-if="error" name="error" :message="error.message" />
    <slot v-if="data" :data="data" />
    <slot v-if="data" name="pagination" :nextPage="nextPage" :prevPage="prevPage" />
  </div>
</template>
```
Using
v-if with multiple slots here is an abstraction, so the consumers of this component don’t have to conditionally render their UI. It’s a convenient feature to have in place.
The composition API allows for unique opportunities for building better HOCs, which is what this article is about in the first place.
The JavaScript
With the template out of the way, the first naive implementation will be in a single
setup function:
```js
import { ref, onMounted } from 'vue';

export default {
  props: {
    endpoint: {
      type: String,
      required: true,
    }
  },
  setup({ endpoint }) {
    const data = ref(null);
    const loading = ref(true);
    const error = ref(null);
    const currentPage = ref(1);

    function fetchData(page = 1) {
      // ...
    }

    function nextPage() {
      return fetchData(currentPage.value + 1);
    }

    function prevPage() {
      if (currentPage.value <= 1) {
        return;
      }

      fetchData(currentPage.value - 1);
    }

    onMounted(() => {
      fetchData();
    });

    return { data, loading, error, nextPage, prevPage };
  }
};
```
That’s an overview of the
setup function. To complete it, we can implement the
fetchData function like this:
```js
function fetchData(page = 1) {
  loading.value = true;

  // I prefer to use fetch;
  // you can use axios as an alternative
  return fetch(`${endpoint}?page=${page}`, {
    // maybe add a prop to control this
    method: 'get',
    headers: {
      'content-type': 'application/json'
    }
  })
    .then(res => {
      // a non-200 response code
      if (!res.ok) {
        // create error instance with HTTP status text
        const error = new Error(res.statusText);
        error.json = res.json();
        throw error;
      }

      return res.json();
    })
    .then(json => {
      // set the response data
      data.value = json;
      // set the current page value
      currentPage.value = page;
    })
    .catch(err => {
      error.value = err;
      // in case a custom JSON error response was provided
      if (err.json) {
        return err.json.then(json => {
          // set the JSON response message
          error.value.message = json.message;
        });
      }
    })
    .then(() => {
      // turn off the loading state
      loading.value = false;
    });
}
```
With all of that in place, the component is ready to be used. You can find a working sample of it here.
However, this HOC component is similar to what you would have in Vue 2. You only re-wrote it using the composition API, which, while neat, is hardly useful.
I’ve found that, to build a better HOC component for Vue 3 (especially a logic-oriented component like this one), it is better to build it in a “Composition-API-first” manner, even if you only plan to ship a HOC.
You will find that we kind of already did that. The
fetch component’s
setup function can be extracted to its own function, which is called
useFetch:
```js
export function useFetch(endpoint) {
  // same code as the setup function
}
```
And instead our component will look like this:
```js
import { useFetch } from '@/fetch';

export default {
  props: {
    // ...
  },
  setup({ endpoint }) {
    const api = useFetch(endpoint);

    return api;
  }
};
```
This approach allows for a few opportunities. First, it allows us to think about our logic while being completely isolated from the UI. This allows our logic to be expressed fully in JavaScript. It can be hooked later to the UI, which is the
fetch component’s responsibility.
Secondly, it allows our
useFetch function to break down its own logic to smaller functions. Think of it as “grouping” similar stuff together, and maybe creating variations of our components by including and excluding those smaller features.
Breaking it down
Let’s shed light on that by extracting the pagination logic to its own function. The problem becomes: how can we separate the pagination logic from the fetching logic? Both seem intertwined.
You can figure it out by focusing on what the pagination logic does. A fun way to figure it out is by taking it away and checking the code you eliminated.
Currently, what it does is modify the
endpoint by appending a
page query param, and maintaining the state of the
currentPage state while exposing
next and
previous functions. That is literally what is being done in the previous iteration.
By creating a function called
usePagination that only does the part we need, you will get something like this:
```js
import { ref, computed } from 'vue';

export function usePagination(endpoint) {
  const currentPage = ref(1);
  const paginatedEndpoint = computed(() => {
    return `${endpoint}?page=${currentPage.value}`;
  });

  function nextPage() {
    currentPage.value++;
  }

  function prevPage() {
    if (currentPage.value <= 1) {
      return;
    }

    currentPage.value--;
  }

  return {
    endpoint: paginatedEndpoint,
    nextPage,
    prevPage
  };
}
```
What’s great about this is that we’ve hidden the
currentPage ref from outside consumers, which is one of my favorite parts of the Composition API. We can easily hide away non-important details from API consumers.
It’s interesting to update the
useFetch to reflect that page, as it seems to need to keep track of the new endpoint exposed by
usePagination. Fortunately,
watch has us covered.
Instead of expecting the
endpoint argument to be a regular string, we can allow it to be a reactive value. This gives us the ability to watch it, and whenever the pagination page changes, it will result in a new endpoint value, triggering a re-fetch.
```js
import { watch, isRef } from 'vue';

export function useFetch(endpoint) {
  // ...

  function fetchData() {
    // ...

    // If it's a ref, get its value;
    // otherwise use it directly
    return fetch(isRef(endpoint) ? endpoint.value : endpoint, {
      // Same fetch opts
    })
    // ...
  }

  // watch the endpoint if it's a ref/computed value
  if (isRef(endpoint)) {
    watch(endpoint, () => {
      // refetch the data again
      fetchData();
    });
  }

  return {
    // ...
  };
}
```
Notice that
useFetch and
usePagination are completely unaware of each other, and both are implemented as if the other doesn’t exist. This allows for greater flexibility in our HOC.
You’ll also notice that by building for Composition API first, we created blind JavaScript that is not aware of your UI. In my experience, this is very helpful for modeling data properly without thinking about UI or letting the UI dictate the data model.
Another cool thing is that we can create two different variants of our HOC: one that allows for pagination and one that doesn’t. This saves us a few kilobytes.
Here is an example of one that only does fetching:
```js
import { useFetch } from '@/fetch';

export default {
  setup({ endpoint }) {
    return useFetch(endpoint);
  }
};
```
Here is another that does both:
```js
import { useFetch, usePagination } from '@/fetch';

export default {
  setup(props) {
    const { endpoint, nextPage, prevPage } = usePagination(props.endpoint);
    const api = useFetch(endpoint);

    return {
      ...api,
      nextPage,
      prevPage
    };
  }
};
```
Even better, you can conditionally apply the
usePagination feature based on a prop for greater flexibility:
```js
import { useFetch, usePagination } from '@/fetch';

export default {
  props: {
    endpoint: String,
    paginate: Boolean
  },
  setup({ paginate, endpoint }) {
    // an object to dump any conditional APIs we may have
    let addonAPI = {};

    // only use the pagination API if requested by a prop
    if (paginate) {
      const pagination = usePagination(endpoint);
      endpoint = pagination.endpoint;
      addonAPI = {
        ...addonAPI,
        nextPage: pagination.nextPage,
        prevPage: pagination.prevPage
      };
    }

    const coreAPI = useFetch(endpoint);

    // Merge both APIs
    return {
      ...addonAPI,
      ...coreAPI,
    };
  }
};
```
This could be too much for your needs, but it allows your HOCs to be more flexible. Otherwise, they would be a rigid body of code that’s harder to maintain. It’s also definitely more unit-test friendly.
Here is the end result in action: the complete component is available as a live CodeSandbox demo ("elated-spence-tinnn" by logaretm, built with Vue).
Conclusion
To sum it all up, build your HOCs as Composition API first. Then, break the logical parts down as much as possible into smaller composable functions. Compose them all in your HOCs to expose the end result.
This approach allows you to build variants of your components, or even one that does it all without being fragile and hard to maintain. By building with a composition-API-first mindset, you allow yourself to write isolated parts of code that are not concerned with UI. In this way, you let your HOC be the bridge between blind JavaScript and functionless UI.
Method overriding allows a subclass to have a method with the same name as a method in a base class. Method overriding is a very powerful Java feature and one of the ways in which Java achieves polymorphism. In this article, we will be exploring method overriding.
- What is a Method Overriding
- Super Keyword
- Runtime Polymorphism
- Method Overriding Rules
- Final Keyword
What is Method Overriding
Method overriding is related to inheritance. When a sub-class has a method with the same name as a method in a base class, then the subclass method is said to override the base class method. The method in the sub-class is known as the overriding method and the method in the base class is known as the overridden method. Method overriding allows the sub-class method to provide its own implementation for the base class method.
Code Sample
```java
public class Animal {

    public void talk() {
        System.out.println("I’m an animal");
    }
}

public class Cat extends Animal {

    public void talk() {
        System.out.println("I’m a cat");
    }
}

public class AnimalDemo {

    public static void main(String[] args) {
        Animal animal = new Animal();
        animal.talk();

        Cat cat = new Cat();
        cat.talk();
    }
}
```
- Line 1 specifies a class called Animal. It has a method called talk
- Line 8 specifies a class called Cat. It also has a method called talk
- Since both classes have a method called talk, the talk method in the Cat class is said to override the talk method in the Animal class
- Line 15 specifies an AnimalDemo class
- Line 18 creates an Animal object and Line 19 invokes the talk method on the Animal object
- Similarly, Line 21 creates a cat object and Line 22 invokes the talk method on the cat object.
Output
I’m an animal
I’m a cat
Super Keyword
Sometimes, although an overriding method may provide a different implementation for a base class method, it may still need to execute the code in the overridden method. The super keyword is useful in such scenarios.
Sample Code
```java
public class Person {

    protected String name;

    public void printDetails() {
        System.out.println("Name:" + name);
    }
}

public class Employee extends Person {

    private String designation;
    private String department;

    public void printDetails() {
        System.out.println("designation:" + designation);
        System.out.println("department:" + department);
        super.printDetails();
    }

    public static void main(String arg[]) {
        Employee employee = new Employee();
        employee.name = "John Doe";
        employee.designation = "Manager";
        employee.department = "HR";
        employee.printDetails();
    }
}
```
- Line 1 defines a Person class. It has a name instance field. It also has a printDetails method that simply prints the name
- Line 10 defines an Employee class that is a subclass of Person. It has fields corresponding to designation and department.
- Employee class also has a printDetails method. So the printDetails method in the Employee class overrides the printDetails method in the Person class.
- The Employee.printDetails method prints the designation and department instance fields. In addition, it invokes super.printDetails(). This causes the printDetails method in the Person class to be invoked.
Output
designation:Manager
department:HR
Name:John Doe
Runtime polymorphism
The main advantage of method overriding is that it is the mechanism by which Java achieves runtime polymorphism. When an overridden method is invoked, the version of the method that actually gets invoked is determined at run-time. This is known as dynamic method dispatch. Dynamic method dispatch helps to achieve runtime polymorphism.
Code Sample
Suppose the AnimalDemo class above is re-written as follows:
```java
public class AnimalDemo {

    public static void main(String[] args) {
        Animal animal = new Cat();
        animal.talk();
    }
}
```
- Line 4 declares a variable of type Animal but assigns it an object of type Cat. This is allowed, you can assign a subclass object to a variable of the base class type
- Line 5 invokes the talk method on the animal variable
Output
I’m a cat
So, the call at Line 5 results in invoking the talk method in the Cat class. So it is the type of the object assigned to the base class variable that determines the version of the overridden method that will get invoked.
Now suppose we have another class as follows:
```java
public class Dog extends Animal {

    public void talk() {
        System.out.println("I’m a dog");
    }
}
```
This code defines a class called Dog that extends the Animal class and overrides the talk method. Now, let us modify the AnimalDemo class again:
```java
public class AnimalDemo {

    public static void main(String[] args) {
        Animal animal = new Cat();
        animal.talk();

        animal = new Dog();
        animal.talk();
    }
}
```
- Line 4 declares a variable of type Animal and assigns it an object of type Cat. Line 5 invokes the talk method on the animal variable
- Line 7 assigns an object of type Dog to the animal variable and Line 8 invokes the talk method on the animal variable
- Since the version of the method to be invoked depends on the type of object assigned to the superclass variable, Line 5 results in the talk method from Cat class being invoked while Line 8 results in the talk method from the Dog class being invoked.
Output
I’m a cat
I’m a dog
Method overriding rules
There are some rules that need to be followed in method overriding. These are as follows:
- The parameter list in the overriding and overridden method must be exactly the same. So, for example, if the base class method accepts two parameters of type int, the sub-class method should also accept two parameters of type int
- The return type of the overriding method must be the same or a subtype of the return type of the overridden method. So, for example, if a base class method returns a Collection, the sub-class method can either return a Collection or any sub-type of Collection
- The access level of the overriding method should be the same or less restrictive than the overridden method. So, if the base class method is declared as public, then the subclass method cannot be private, protected, or have a default access.
- The overriding method cannot throw checked exceptions that are new or broader than the overridden method. So, if the base class method does not throw any checked exceptions, the sub-class method also cannot throw any exceptions. If the base class method throws a checked exception like SQLException, then the subclass method can only throw an exception which is a sub-class of SQLException
- Constructors cannot be overridden.
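The second and third rules can be sketched with a pair of hypothetical classes (Base and Sub are illustrative names, not from the earlier examples), showing a covariant return type and a widened access level, both of which compile fine:

```java
// Sketch (hypothetical classes) of two overriding rules:
// a covariant (subtype) return type and a less restrictive
// access level are both legal in the overriding method.
import java.util.ArrayList;
import java.util.Collection;

class Base {
    protected Collection<String> items() {  // protected, returns Collection
        return new ArrayList<>();
    }
}

class Sub extends Base {
    @Override
    public ArrayList<String> items() {      // public + ArrayList: both allowed
        ArrayList<String> list = new ArrayList<>();
        list.add("one");
        return list;
    }
}
```

By the same rules, narrowing the access to private or returning an unrelated type in Sub would be rejected by the compiler.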
Final Keyword
Sometimes, you may wish to prevent a method from being overridden. In such a case, the final keyword can be used with the base class method. This prevents a method from being overridden.
Code Sample
```java
public class Vehicle {

    final void move() {
        System.out.println("Moving");
    }
}

public class Car extends Vehicle {

    void move() {
        System.out.println("Moving");
    }
}
```
- Line 1 specifies a class called Vehicle
- Line 3 specifies a move method. It has the final keyword specified in the method declaration. This indicates that the move method cannot be overridden.
- Line 8 specifies a class called Car with a method called move. However, this causes a compilation error since the move method in the Vehicle class is final.
Conclusion
So, in this article, we understood what method overriding is and how it works. We saw how method overriding helps to achieve runtime polymorphism. We also saw what rules need to be followed for method overriding. Finally, we saw how to prevent method overriding by using the final keyword.
Transition to Jinja templates from Django templates
If you're accustomed to using Django templates, this section describes the finer details you need to be aware of when using Jinja templates, such as: what Django template knowledge you can leverage in Jinja templates, what works differently in Jinja templates compared to Django templates and what are new things you need to learn that you'll come to appreciate in Jinja templates.
If you've never used Django templates, you can skip to the next section on Jinja template configuration in Django, as most of what follows is intended for experienced Django template users.
What works the same way in Jinja and Django templates
Just because Jinja is an entirely different template engine, doesn't mean it's radically different from Django's built-in template engine. You can expect to use the same approach for: Variables & blocks, conditionals & loops, comments, as well as spacing & special characters.
Variables and blocks
Curly braces
{} are
broadly used in Jinja templates just like they're used in Django
templates. To output a variable in Jinja you use the same
{{myvariable}} syntax. Similarly, you also name blocks
to inherit snippets between templates with the
{% block
footer %} {% endblock %} syntax. In addition, Jinja also
uses the same Django
{% extends "base.html" %} syntax
to create parent/child relationships between templates.
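For instance, a minimal parent/child pair works identically in both engines (the file names, block names and the page_title variable are illustrative):

```jinja
{# base.html #}
<title>{% block title %}Default title{% endblock %}</title>
<footer>{% block footer %}About us{% endblock %}</footer>

{# index.html #}
{% extends "base.html" %}
{% block title %}{{ page_title }}{% endblock %}
```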
Conditionals and loops
Jinja uses the same Django syntax
to create conditionals: {% if variable %}{% elif
othervariable %}{% else %}{% endif %}. In addition, Jinja
also uses the same for loop syntax as Django:
{% for item in
listofitems %}{{item}}{% endfor %}.
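Both constructs side by side (the user, guest and listofitems variables are illustrative):

```jinja
{% if user %}
  Welcome back {{ user }}
{% elif guest %}
  Welcome guest
{% else %}
  Please log in
{% endif %}

{% for item in listofitems %}{{ item }}{% endfor %}
```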
Jinja also uses the same comment
tag as Django:
{# This is a template comment that isn't
rendered #}. However, note Jinja uses the
{# #}
tag for both single and multi-line comments.
Spacing and special characters
Since Jinja templates were
inspired by Django templates, Jinja uses a similar approach to
dealing with spacing and special characters. For example, things
like spacing filters (e.g.
center and
wordwrap) and special character handling
(e.g.
safe and
escape filters ) work the
same way in Jinja templates as they do in Django templates.
What works differently in Jinja templates compared to Django templates
However, not everything works the same way in Jinja templates, here are some Django template techniques you'll need to relearn to work with Jinja templates.
Filters
Although Jinja uses the same pipe
| symbol to apply filters to variables, Jinja filters
are technically classified into filters and tests. In Django
templates there are just filters that perform tests (e.g.
divisibleby) but in Jinja these type constructs are
called tests and use the conditional syntax
{% if variable is
test %} instead of the standard pipe
|
symbol.
In addition, Jinja filters and
tests are backed by standard methods. This has the advantage that
passing arguments to Jinja filters and tests is as simple as a
method call (e.g.
{{variable|filesizeformat(true)}})
vs. the unintuitive Django filter argument syntax of using a colon
and even requiring to parse arguments in custom Django filters
(e.g.
{{variable|get_digit:"1"}}).
It's also possible to create
custom Jinja filters and tests -- in addition to the built-in Jinja filters and tests which are similar to Django built-in filters.
However, unlike Django filters which are loaded into templates via
the
{% load %} tag, Jinja custom filters and tests are
registered globally and become accessible to all Jinja templates
like Django context processors.
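As a minimal sketch of this registration model (assuming the jinja2 package; the currency filter and prime test are made-up names for illustration, only the filters and tests dictionaries are Jinja API):

```python
# Minimal sketch of registering a custom Jinja filter and test.
# `currency` and `prime` are illustrative examples, not built-ins.
from jinja2 import Environment

def currency(value, symbol="$"):
    # A filter is a plain function; extra arguments come from the call syntax
    return f"{symbol}{value:,.2f}"

def prime(value):
    # A test returns a boolean and is used with the 'is' keyword
    return value > 1 and all(value % i for i in range(2, int(value ** 0.5) + 1))

env = Environment()
env.filters["currency"] = currency  # usable as {{ price|currency("€") }}
env.tests["prime"] = prime          # usable as {% if n is prime %}

print(env.from_string("{{ 1999.5|currency }}").render())  # $1,999.50
print(env.from_string("{{ 7 is prime }}").render())       # True
```

Once registered, every template rendered through this environment can use the filter and test without any {% load %} equivalent.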
Context processors
Context processors give Django templates access to sets of variables across every template in a project, but in Jinja this functionality is called global variables. This is one area where you'll likely miss the Django template functionality of simply declaring context processors and getting access to sets of variables. However, it's relatively easy to create Jinja global variables to become accessible on all Jinja templates and act as Django context processors.
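A sketch of the equivalent setup in Jinja (assuming the jinja2 package; SITE_NAME and copyright_year are illustrative values):

```python
# Jinja "global variables" playing the role of Django context
# processors: values visible to every template of the environment.
from jinja2 import Environment

env = Environment()
env.globals["SITE_NAME"] = "Coffeehouse"
env.globals["copyright_year"] = 2024

# No per-render context needed; the globals are always available
print(env.from_string("{{ SITE_NAME }} © {{ copyright_year }}").render())
# Coffeehouse © 2024
```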
No date elements like the {% now %} tag and filters like time and timesince
Jinja in its out-of-the-box state
provides no tags or filters to work with dates or times. Although
Jinja does offer the
format filter that works just
like Python's standard method and can be used for date formatting,
you'll need to write your own custom filters and tags to deal with
date and time elements in a more advanced way.
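For example, a simple date filter might be written like this (a sketch assuming the jinja2 package; timeformat is a made-up name, not a built-in):

```python
# Sketch of a custom date filter, since Jinja ships no date/time helpers.
from datetime import datetime
from jinja2 import Environment

def timeformat(value, fmt="%Y-%m-%d %H:%M"):
    # Delegate to strftime; the format string is an optional argument
    return value.strftime(fmt)

env = Environment()
env.filters["timeformat"] = timeformat

when = datetime(2024, 1, 15, 9, 30)
print(env.from_string("{{ when|timeformat }}").render(when=when))
# 2024-01-15 09:30
```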
{% comment %} tag not supported
Jinja uses the
{# #}
tag to define both single and multi-line comments, so there's no
support for the
{% comment %} which in Django
templates is used for multi-line comments.
{% load %} tag not supported
In Jinja the
{% load
%} tag to import custom tags and filters is not supported.
In Jinja custom tags and filters are registered globally and
automatically become accessible to all Jinja templates.
Use {{super()}} instead of {{block.super}}
In Django templates you use the
syntax
{{ block.super }} to access the contents of a
parent template's block. In Jinja you must use the
{{super()}} syntax to gain access to the contents of a
parent template's block.
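For example (template names and footer content are illustrative):

```jinja
{# base.html #}
{% block footer %}Coffeehouse footer{% endblock %}

{# child.html: where Django would use {{ block.super }} #}
{% extends "base.html" %}
{% block footer %}
  {{ super() }} | Extra child footer links
{% endblock %}
```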
{% csrf_token %} tag not supported instead use csrf_input or csrf_token variables
In Django templates when you
create a form that has an HTTP POST action, you place the
{%
csrf_token %} tag in its body to generate a special token
that prevents CSRF (Cross-Site Request Forgery) attacks. To replicate this behavior
in Jinja you must use the
csrf_input variable (e.g.
{{csrf_input}} generates a string like
<input
type="hidden" name="csrfmiddlewaretoken"
value="4565465747487">) or use the
csrf_token variable which contains the raw CSRF token
(e.g.
4565465747487).
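A typical POST form in a Jinja template would therefore look like this (the action and form fields are illustrative):

```jinja
<form method="POST" action="/contact/">
  {{ csrf_input }}  {# renders the hidden csrfmiddlewaretoken <input> #}
  <input type="text" name="name">
  <button type="submit">Send</button>
</form>
```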
{% for %} loop variables
In Django templates the context
of
{% for %} loops offers access to a series of
variables (e.g. counter, first and last iteration). Jinja templates
offer similar variables in the context of
{% for %}
but they are not identical.
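A short comparison (coffees is an illustrative variable): where Django exposes forloop.counter, forloop.first and forloop.last, Jinja exposes loop.index, loop.first and loop.last:

```jinja
{% for coffee in coffees %}
  {{ loop.index }}. {{ coffee }}  {# 1-based counter, like forloop.counter #}
  {% if loop.first %}(first item){% endif %}
  {% if loop.last %}(last of {{ loop.length }}){% endif %}
{% endfor %}
```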
{% empty %} tag not supported in loops, use the {% else %} tag
{% for %} loops in
Django templates support the
{% empty %} clause as a
last argument to generate logic or a message when an iteration is
empty. In Jinja
{% for %} loops you can use the
{% else %} clause as a last argument to generate logic
or a message when an iteration is empty.
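For example (stores is an illustrative variable):

```jinja
{% for store in stores %}
  {{ store }}
{% else %}
  {# Django templates would use {% empty %} here #}
  No stores found.
{% endfor %}
```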
{% groupby %} tag not supported, use the groupby filter
Django templates support the
{% groupby %} tag to rearrange dictionaries or objects
based on different attributes. In Jinja you can achieve the same
functionality but you must do it through the
groupby
filter as described in the Jinja
groupby filter.
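A sketch of the filter form (assuming a stores list of objects with state and name attributes):

```jinja
{% for state, shops in stores|groupby("state") %}
  {{ state }}:
  {% for shop in shops %}{{ shop.name }} {% endfor %}
{% endfor %}
```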
{% cycle %} tag not supported, use the cycler function or the loop.cycle variable in {% for %} loops
Django templates support the
{% cycle %} tag to cycle over a list of values. In
Jinja this functionality is available in two forms. You can use the
cycler method if you require the functionality outside
of loops. Or you can use the
loop.cycle function
available in all
{% for %} loops.
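Both forms sketched (rows is an illustrative variable):

```jinja
{# inside a loop: loop.cycle #}
{% for row in rows %}
<tr class="{{ loop.cycle('odd', 'even') }}">{{ row }}</tr>
{% endfor %}

{# independent of any loop: the cycler function #}
{% set rowclass = cycler('odd', 'even') %}
<tr class="{{ rowclass.next() }}">...</tr>
<tr class="{{ rowclass.next() }}">...</tr>
```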
{% lorem %} tag not supported, use the lipsum function
Django templates support the
{% lorem %} tag to generate random latin text as
filler content. In Jinja you can achieve the same functionality
with the
lipsum function.
Other miscellaneous tags like {% static %}, {% trans %}, {% blocktrans %} and {% url %} not supported
A series of Django template tags
like
{% static %} and
{% trans %} are
simply not available in Jinja. However, there are third party
projects that have ported these and many other Django template tags
into Jinja extensions. A later section in this chapter on Jinja extensions discusses these options.
New concepts and features in Jinja templates vs. Django templates
Now that you know what Django template knowledge you can leverage and what techniques you'll need to relearn to effectively work with Jinja templates, let's take a look at some concepts that only apply to Jinja templates.
More useful built-in filters, tests and more resemblance to a Python environment
Jinja templates offer a variety of built-in filters and tests that are sorely missing in Django templates. For example, for something as simple as checking variable types (e.g. string, number, iterable, etc.), Jinja offers a series of built-in tests, whereas in Django this requires creating custom filters.
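For instance, the type-checking tests can be sketched with the jinja2 package (assumed available):

```python
from jinja2 import Template

# Built-in tests check variable types directly in the template.
t = Template(
    "{% if value is string %}string"
    "{% elif value is number %}number"
    "{% elif value is iterable %}iterable"
    "{% else %}other{% endif %}"
)
print(t.render(value="hi"))    # string
print(t.render(value=3))       # number
print(t.render(value=[1, 2]))  # iterable
```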
Access and manipulation of complex data types (e.g. objects and dictionaries) is also vastly improved in Jinja templates vs. Django templates. For example, Jinja offers filters such as reject, select and map to prune, filter or alter data sub-sets on a template, a technique that, although frowned upon by purists (i.e. those who stand by only manipulating data in views), is a very common requirement in real & time-constrained projects.
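A small sketch of select and map, again via the jinja2 package (assumed available):

```python
from jinja2 import Template

# select('odd') keeps items passing the 'odd' test;
# map('abs') applies the 'abs' filter to each item.
t = Template("{{ numbers | select('odd') | list }} / {{ numbers | map('abs') | list }}")
print(t.render(numbers=[-2, -1, 3]))  # [-1, 3] / [2, 1, 3]
```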
Jinja templates also support syntax that is more in line with a standard Python environment. For example, in Django something like accessing a dictionary key through a variable requires a custom filter, whereas in Jinja templates this works with standard Python syntax (e.g. if you have the variables stores={"key1":"value1", "key2":"value2"} and var="key1", Django templates can't do stores.get(var), which is standard Python syntax, but in Jinja this works out-of-the-box as expected of a Python environment).
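The dictionary example, verified with the jinja2 package (assumed available):

```python
from jinja2 import Template

stores = {"key1": "value1", "key2": "value2"}

# Both the method call and the subscript work as in plain Python.
t = Template("{{ stores.get(var) }} / {{ stores[var] }}")
print(t.render(stores=stores, var="key1"))  # value1 / value1
```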
Global functions
Jinja also supports a series of global functions. For example, Jinja offers the range function that works just like Python's standard function, which is useful in loops (e.g. {% for number in range(50 - coffeeshops|count) %}). In addition, Jinja also offers the global functions: lipsum to generate dummy placeholder content, dict to generate dictionaries, cycler to generate a cycle over elements and joiner to join sections.
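A sketch of range, dict and joiner, via the jinja2 package (assumed available):

```python
from jinja2 import Template

# range() works like Python's built-in, directly inside templates.
t = Template("{% for n in range(3) %}{{ n }}{% endfor %}")
print(t.render())  # 012

# dict() builds a dictionary; joiner() emits its separator
# on every call except the first.
t2 = Template(
    "{% set sep = joiner(', ') %}"
    "{% for k in dict(a=1, b=2) | sort %}{{ sep() }}{{ k }}{% endfor %}"
)
print(t2.render())  # a, b
```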
Flexible tag nesting, conditionals and references
Jinja is very flexible in terms of nesting tags, particularly compared to what's permissible in Django templates. For example, in Jinja you can even conditionally apply the {% extends %} tag (e.g. {% if user %}{% extends "base.html" %}{% else %}{% extends "signup_base.html" %}{% endif %}) or use variable reference names with inline conditions (e.g. {% extends layout_template if layout_template is defined else 'master.html' %}), something that's not possible in Django templates.
Macros
In Jinja macros allow you to define function-like snippets with complex layouts that can be called from any template with different instance values. Macros are particularly useful to limit the spread of complex layouts across templates. With macros you define a complex layout once (i.e. as a macro) and invoke it with different parameters to output the complex layout customized every single time, just as if it were a function.
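A minimal macro, exercised via the jinja2 package (assumed available; the macro name and fields are illustrative):

```python
from jinja2 import Template

# One macro, invoked twice with different parameters.
t = Template(
    "{% macro badge(name, role='member') %}"
    "[{{ name }}:{{ role }}]"
    "{% endmacro %}"
    "{{ badge('ann') }}{{ badge('bob', role='admin') }}"
)
print(t.render())  # [ann:member][bob:admin]
```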
Flexible variable assignment in templates with less restrictive scope
In Jinja you can use the {% set %} tag to define variables that have a valid scope until the end of the template. Although Jinja also supports the {% with %} tag -- just like the Django template version -- the {% with %} tag can become cumbersome for multiple variable definitions because it requires closing the scope with {% endwith %} every time. The {% set %} tag is a good alternative for global template variables because you only require the initial definition and the scope propagates to the end of the template without having to worry about closing the scope.
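The propagating scope can be seen in a one-line sketch with the jinja2 package (assumed available):

```python
from jinja2 import Template

# {% set %} keeps its value to the end of the template, with no
# {% endwith %}-style closing required.
t = Template(
    "{% set owner = 'ann' %}"
    "first={{ owner }} "
    "... later in the same template ... "
    "last={{ owner }}"
)
out = t.render()
print(out)
```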
Line statements
Jinja supports the definition of logical statements in what it calls line statements. By default, a line statement is preceded with the # symbol and can serve as an alternative to tag syntax. For example, the {% for %} tag statement {% for item in items %} can use the equivalent line statement # for item in items, just as the tag statement {% endfor %} can use the equivalent line statement # endfor. Line statements more than anything give templates a Python feel, which can make complex logic easier to decipher vs. using tag statements that require the {% %} syntax.
Without optional parameters, every call to a method like Process must supply a value for each parameter, even when no 'moreData' is provided. It seems ridiculous to type and pass null so many times. In C# 4.0, parameters can be declared with default values (here, ignoreWS defaults to false and moreData to null), so trailing arguments can simply be omitted.
// These 3 calls are equivalent
Process( "foo", false, null );
Process( "foo", false );
Process( "foo" );
Awesome, one less thing VB programmers can brag about having to themselves. I haven't mentioned it up to this point, but Microsoft has explicitly declared that VB and C# will be "co-evolving" so the number of disparate features is guaranteed to shrink over time. I would like to think this will render the VB vs. C# question moot, but I'm sure people will still find a way to argue about it.
In the last example, we saw that the following call was invalid:
Process( "foo", myArrayList ); // Invalid!
But if the boolean ignoreWS is optional, why can't we just omit it? Well, one reason is for readability and maintainability, but primarily because it can become impossible to know what parameter you are specifying. If you had two parameters of the same type, or if one of the parameters was "object" or some other base class or interface, the compiler would not know which parameter you are sending. Imagine a method with ten optional parameters and you give it a single ArrayList. Since an ArrayList is also an object, an IList, and an IEnumerable, it is impossible to determine how to use it. Yes, the compiler could just pick the first valid option for each parameter (or a more complex system could be used), but this would become impossible for people to maintain and would cause countless programming mistakes.
Named parameters provide the solution:
ArrayList myArrayList = new ArrayList();
Process( "foo", true ); // valid, moreData omitted
Process( "foo", true, myArrayList ); // valid
Process( "foo", moreData: myArrayList); // valid, ignoreWS omitted
Process( "foo", moreData: myArrayList, ignoreWS: false ); // valid, but silly
As long as a parameter has a default value, it can be omitted, and you can just supply the parameters you want via their name. Note in the second line above, the 'true' value for ignoreWS did not have to be named since it is the next logical parameter.
dynamic cust = GetCustomer();
cust.FirstName = "foo"; // works as expected
cust.Process(); // works as expected
cust.MissingMethod(); // No method found!
Notice we did not need to cast nor declare cust as type Customer. Because we declared it dynamic, the runtime takes over and then searches and sets the FirstName property for us. Now, of course, when you are using a dynamic variable, you are giving up compiler type checking. This means the call cust.MissingMethod() will compile and not fail until runtime. The result of this operation is a RuntimeBinderException because MissingMethod is not defined on the Customer class.
The example above shows how dynamic works when calling methods and properties. Another powerful (and potentially dangerous) feature is being able to reuse variables for different types of data. I'm sure the Python, Ruby, and Perl programmers out there can think of a million ways to take advantage of this, but I've been using C# so long that it just feels "wrong" to me.
dynamic foo = 123;
foo = "bar";
OK, so you most likely will not be writing code like the above very often. There may be times, however, when variable reuse can come in handy or clean up a dirty piece of legacy code. One simple case I run into often is constantly having to cast between decimal and double.
decimal foo = GetDecimalValue();
foo = foo / 2.5; // Does not compile
foo = Math.Sqrt(foo); // Does not compile
string bar = foo.ToString("c");
The second line does not compile because 2.5 is typed as a double and line 3 does not compile because Math.Sqrt expects a double. Obviously, all you have to do is cast and/or change your variable type, but there may be situations where dynamic makes sense to use.
dynamic foo = GetDecimalValue(); // still returns a decimal
foo = foo / 2.5; // The runtime takes care of this for us
foo = Math.Sqrt(foo); // Again, the DLR works its magic
string bar = foo.ToString("c");
After some great questions and feedback, I realized I need to clarify a couple points I made above. When you use the dynamic keyword, you are invoking the new Dynamic Language Runtime libraries (DLR) in the .NET framework. There is plenty of information about the DLR out there, and I am not covering it in this article. Also, when possible, you should always cast your objects and take advantage of type checking. The examples above were meant to show how dynamic works and how you can create an example to test it. Over time, I'm sure best practices will emerge; I am making no attempt to create recommendations on the use of the DLR or dynamic.
Also, since publishing the initial version of this article, I have learned that if the object you declared as dynamic is a plain CLR object, Reflection will be used to locate members and not the DLR. Again, I am not attempting to make a deep dive into this subject, so please check other information sources if this interests you.
It should be apparent that 'switching' an object from being statically typed to dynamic is easy. After all, how hard is it to 'lose' information? Well, it turns out that going from dynamic to static is just as easy.
Customer cust = new Customer();
dynamic dynCust = cust; // static to dynamic, easy enough
dynCust.FirstName = "foo";
Customer newCustRef = dynCust; // Works because dynCust is a Customer
Person person = dynCust; // works because Customer inherits from Person
SalesRep rep = dynCust; // throws RuntimeBinderException exception
Note that in the example above, no matter how many different ways we reference it, we only have one Customer object (cust).
When you return something from a dynamic function call, indexer, etc., the result is always dynamic. Note that you can, of course, cast the result to a known type, but the object still starts out dynamic.
dynamic cust = GetCustomer();
string first = cust.FirstName; // conversion occurs
dynamic id = cust.CustomerId; // no conversion
object last = cust.LastName; //conversion occurs
There are, of course, a few missing features when it comes to dynamic types. Among them are:
We will have to wait for the final version to see what other features get added or removed.
OK, a quick quiz. Is the following legal in .NET?
// Example stolen from the whitepaper ;-)
IList<string> strings = new List<string>();
IList<object> objects = strings;
I think most of us, at first, would answer 'yes' because a string is an object. But the question we should be asking ourselves is: Is a -list- of strings a -list- of objects? To take it further: Is a -strongly typed- list of strings a -strongly typed- list of objects? When phrased that way, it's easier to understand why the answer to the question is 'no'. If the above example was legal, that means the following line would compile:
objects.Add(123);
Oops, we just inserted the integer value 123 into a List<string>. Remember, the list contents were never copied; we simply have two references to the same list. There is a case, however, in which casting the list should be allowed: if the list is read-only, then we should be allowed to view the contents any (type legal) way we want.
From Wikipedia:
Within the type system of a programming language, a type conversion operator is: covariant if it preserves the ordering of types, which orders types from more specific to more generic; contravariant if it reverses this ordering; and invariant if neither of these applies.
C# is, of course, covariant, meaning a Customer is a Person and can always be referenced as one. There are lots of discussions on this topic, and I will not cover it here. The changes in C# 4.0 only involve typed (generic) interfaces and delegates in situations like in the example above. In order to support co and contra variance, typed interfaces are going to be given 'input' and 'output' sides. So, to make the example above legal, IList must be declared in the following manner:
public interface IList<out T> : ICollection<T>, IEnumerable<T>, IEnumerable
{
...
}
Notice the use of the out keyword. This is essentially saying the IList is readonly and it is safe to refer to a List<string> as a List<object>. Now, of course, IList is not going to be defined this way; it must support having items added to it. A better example to consider is IEnumerable which should be, and is, readonly.
public interface IEnumerable<out T> : IEnumerable
{
IEnumerator<T> GetEnumerator();
}
Using out to basically mean 'read only' is straightforward, but when is using the in keyword to make something 'write only' useful? Well, it actually becomes useful in situations where a generic argument is expected and only used internally by the method. IComparer is the canonical example.
public interface IComparer<in T>
{
public int Compare(T left, T right);
}
As you can see, we can't get back an item of type T. Even though the Compare method could potentially act on the left and right arguments, it is kept within the method so it is a 'black hole' to clients that use the interface.
To continue the example above, this means that an IComparer<object> can be used in the place of an IComparer<string>. The C# 4.0 whitepaper sums the reason up nicely: 'If a comparer can compare any two objects, it can certainly also compare two strings'. This is counter-intuitive (or maybe contra-intuitive) because if a method expects a string, you can't give it an object.
OK, comparing strings and objects is great, but I think a somewhat realistic example might help clarify how the new variance keywords are used. This first example demonstrates the effects of the redefined IEnumerable interface in C# 4.0. In .NET 3.5, line 3 below does not compile, with the error: 'cannot convert List<Customer> to List<Person>'. As stated above, this seems 'wrong' because a Customer is a Person. In .NET 4.0, however, this exact same code compiles without any changes because IEnumerable is now defined with the out modifier.
MyInterface<Customer> customers = new MyClass<Customer>();
List<Person> people = new List<Person>();
people.AddRange(customers.GetAllTs()); // no in 3.5, yes in 4.0
people.Add(customers.GetAllTs()[0]); // yes in both
...
interface MyInterface<T>
{
List<T> GetAllTs();
}
public class MyClass<T> : MyInterface<T>
{
public List<T> GetAllTs()
{
return _data;
}
private List<T> _data = new List<T>();
}
This next example demonstrates how you can take advantage of the out keyword. In .NET 3.5, line 3 compiles, but line 4 does not with the same 'cannot convert' error. To make this work in .NET 4.0, simply change the declaration of MyInterface to interface MyInterface<out T>. Notice that in line 4, T is Person, but we are passing the Customer version of the class and interface.
MyInterface<Person> people = new MyClass<Person>();
MyInterface<Customer> customers = new MyClass<Customer>();
FooClass<Person>.GetThirdItem(people);
FooClass<Person>.GetThirdItem(customers);
...
public class FooClass<T>
{
public static T GetThirdItem(MyInterface<T> foo)
{
return foo.GetItemAt(2);
}
}
public interface MyInterface<out T>
{
T GetItemAt(int index);
}
public class MyClass<T> : MyInterface<T>
{
public T GetItemAt(int index)
{
return _data[index];
}
private List<T> _data = new List<T>();
}
This final example demonstrates the wacky logic of contravariance. Notice that we put a SalesRep 'inside' our Person interface. This isn't a problem because a SalesRep is a Person. Where it gets interesting is when we pass the MyInterface<Person> to FooClass<Customer>. In essence, we have 'inserted' a SalesRep into an interface declared to work with only Customers! In .NET 3.5, line 5 does not compile, as expected. By adding the in keyword to our interface declaration in .NET 4.0, everything works fine because we are 'agreeing' to treat everything as a Person internally and not expose the internal data (which might be that SalesRep).
MyInterface<Customer> customer = new MyClass<Customer>();
MyInterface<Person> person = new MyClass<Person>();
person.SetItem(new SalesRep());
FooClass<Customer>.Process(customer);
FooClass<Customer>.Process(person);
...
public class FooClass<T>
{
public static void Process(MyInterface<T> obj)
{
}
}
public interface MyInterface<in T>
{
void SetItem(T obj);
void Copy(T obj);
}
public class MyClass<T> : MyInterface<T>
{
public void SetItem(T obj)
{
_item = obj;
}
private T _item;
public void Copy(T obj)
{
}
}
This is by far the area in which I have the least experience; however, I'm sure we have all had to interact with Microsoft Office at one point and make calls like this:
// Code simplified for this example
using Microsoft.Office.Interop;
using Microsoft.Office.Interop.Word;
object foo = "MyFile.txt";
object bar = Missing.Value;
object optional = Missing.Value;
Document doc = (Document)Application.GetDocument(ref foo, ref bar, ref optional);
doc.CheckSpelling(ref optional, ref optional, ref optional, ref optional);
There are (at least) three problems with the code above. First, you have to declare all your variables as objects and pass them with the ref keyword. Second, you can't omit parameters and must also pass the Missing.Value even if you are not using the parameter. And third, behind the scenes, you are using huge (in file size) interop assemblies just to make one method call.
C# 4.0 will allow you to write the code above in a much simpler form that ends up looking almost exactly like 'normal' C# code. This is accomplished by using some of the features already discussed; namely dynamic support and optional parameters.
// Again, simplified for example.
using Microsoft.Office.Interop.Word;
var doc = Application.GetDocument("MyFile.txt");
doc.CheckSpelling();
What will also happen behind the scenes is that the interop assembly that is generated will only include the interop code you are actually using in your application. This will cut down on application size tremendously. My apologies in advance for this weak COM example, but I hope it got the point across.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
JeffL 0 Posted January 8, 2010 (edited)
I'm really stumped here. I feel like I've tried everything. Basically I wrote a script to run through a mailbox and print out all attachments of a certain document type. The other documents work, but not PDFs via Adobe Reader. It works via Foxit, but I'm the only one that uses this and it is a script for other people. What I tried to run originally was:

ShellExecuteWait($sFilePath, "", "", "print", @SW_HIDE)

but Reader doesn't close when complete, so it waits indefinitely. I tried some switches such as

ShellExecuteWait($sFilePath, " /h /p", "", "print", @SW_HIDE)

but no differing behavior, and through my research I am led to believe it is the Reader version suppressing this silent print behavior. The main problem here is that it NEEDS to wait for the process to complete or it won't print at all, but it also needs to close Reader. So I looked for methods for waiting for the print... I got this function working:

func getPrintInfo($sFileName, $sInfo) ;$sinfo = caption,host,id,owner
    $wbemFlagReturnImmediately = 0x10
    $wbemFlagForwardOnly = 0x20
    $objWMIService = ObjGet("winmgmts:\\" & @ComputerName & "\root\CIMV2") ; connect to local WMI
    $colItems = $objWMIService.ExecQuery("SELECT * FROM Win32_PrintJob", "WQL", $wbemFlagReturnImmediately + $wbemFlagForwardOnly)
    if IsObj($colItems) then
        for $objItem In $colItems
            if $objItem.Document = $sFileName then
                switch $sInfo
                    Case "caption"
                        return $objItem.Caption
                    Case "host"
                        return $objItem.HostPrintQueue
                    Case "id"
                        return $objItem.JobId
                    Case "owner"
                        return $objItem.Owner
                endswitch
            endif
        next
    endif
    return 0
endfunc

Then did some while loops to see when the job existed in the queue or not. The output works if you pause a job and do the query, but it fails in my implementation, and I have a feeling the print job is not even popping into the queue long enough for a successful query. Then I tried:

ProcessWait("AcroRd32.exe", 5)
ProcessClose("AcroRd32.exe")

But again it kills the print with the process before it can begin. I'm completely stumped.
Obviously I could just put a regular Sleep after the ShellExecute, but I want as clean a method as possible for this, but it seems like I'm getting intercepted at every turn. Anyone? :\
Edited January 8, 2010 by JeffL
SchemaComposition
In W3C XML Schema, a schema is a set of components. Components, in turn, are abstract objects with properties as described in the XML Schema spec. Schema components may be described using the XML transfer syntax defined in the spec, or using other means; components may be constructed through a GUI or an API, or constructed on the basis of a written description like the XML transfer syntax.
Schema assembly is the process of collecting components for use in a particular validation episode.
In normal discussion, the term schema composition is sometimes used to mean "schema assembly" (think: the composition of a symphony), and sometimes to denote operations which take two schemas as input and produce a schema as output (think: composition of two functions). The section on "Schema composition" in XML Schema 1.0 can be (and has been) read as involving either or both of these usages.
The details of the process are intentionally left unspecified in the XML Schema spec (in much the same way that a programming language spec does not typically say anything about where a compiler is supposed to get the source code it's compiling). In practice, processors frequently make components by reading schema documents, which they find by dereferencing namespace names, or by receiving information from the user at invocation time, or by following the schemaLocation hints in the document instance or in schema documents they read. Other processors may have a local cache or repository of components (this is the case for some database management systems which support XML Schema). Still others may have hard-coded components.
In the interests of improving interoperability here, XML Schema 1.1 is expected to provide some standard terminology for describing common strategies for collecting schema components.
[Further development needed.]
References
Henry Thompson et al., ed. "XML Schema 1.1 Part 1: Structures", section D.2 "Terminology of schema construction"
C. M. Sperberg-McQueen, "Notes on schema resolution", December 2001.
When you use a Chef cookbook, a lot of files might be installed on your machine: configuration files, scripts and so on.
Chef provides a very useful resource for putting a file in place, called template. You can put any type of text file with this resource. But template does not manage a file once it becomes unused. How can we delete a file after it has been installed?
Since Chef provides no way to do this, we have to implement it ourselves. So here I want to introduce a small pattern to achieve this purpose. The problem we want to solve is this: the list of files that should exist is dynamic; we can assume a case where the list is kept in data_bags or Consul. Chef's template resource cannot handle this type of dynamic list flexibly by itself. So this is what I've written to achieve it.
def installed_config_files
  begin
    Dir.glob("/path/to/*.conf")
  rescue
    return []
  end
end
This is defined in libraries or somewhere.
installed_config_files is used in each recipe.
installed_config_files.each do |conf|
  # Check whether the installed file should be kept or not
  if !config_files.include?(conf) then
    file conf do
      action :delete
    end
  end
end

config_files.each do |conf|
  template conf do
    source somewhere
    action :create
  end
end
With this snippet, you can make sure only the files defined in config_files are installed on the target machine.
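Stripped of the Chef resources, the reconciliation is just a set difference between what is installed and what is desired; a plain-Ruby illustration (file names are made up):

```ruby
# Files currently on the machine vs. files the cookbook should manage.
installed = ["/path/to/a.conf", "/path/to/b.conf", "/path/to/c.conf"]
desired   = ["/path/to/a.conf", "/path/to/c.conf"]

# Array difference gives the stale files to delete.
to_delete = installed - desired
puts to_delete.inspect  # ["/path/to/b.conf"]
```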
Chef gives great power to engineers who are not operations specialists. I realized again that with it, I can do operations that are repeatable, persistent and writable as code. I want to become more familiar with Chef usage.
Thank you.
Written on June 3rd, 2016 by Kai Sasaki
Opened 6 years ago
Closed 4 years ago
#18381 closed Bug (fixed)
Underscore in CharField primary key breaks the admin "view on site" link
Description
I have a model in which the primary key is a CharField and the get_absolute_url method is defined. When I create an instance of the model with the primary key containing an underscore "_", the admin links for that object have '5F' (the ASCII hex code for an underscore) after the underscore. For instance, if the primary key was 'abc_123', then the link to the object within the admin would contain 'abc_5F123'. Strangely, these links work just fine. And if you strip out the '5F', the links also work. However, with the '5F' in the URL, the view on site link does not work.
To recreate the bug:
- I created a new project and enabled the Admin, both in settings.py and in urls.py.
- I created an app 'testapp' with the following models.py:
from django.db import models class MyModel(models.Model): id = models.CharField(max_length=100, primary_key=True) def get_absolute_url(self): return '/mymodel/{0}/'.format(self.id)
- I added 'testapp' to the INSTALLED_APPS setting.
- I started the dev server and created two instances of the 'MyModel' model in the admin: one with the primary key 'abc123' and one with the primary key 'abc_123'.
- Using the 'abc123' instance in the Admin site:
  - it is at the Admin url of /admin/testapp/mymodel/abc123/, as expected.
  - the view on site link points to /admin/r/8/abc123/, as expected.
  - clicking on the view on site link tries to redirect me to example.com/mymodel/abc123/ as expected. (I'm actually taken to an IANA page about example.com being a fake domain. But Chrome's developer tools shows that it first tried to redirect me to the example.com URL above.)
- Using the 'abc_123' instance in the Admin site:
  - it is at the Admin url of /admin/testapp/mymodel/abc_5F123/
  - the view on site link points to /admin/r/8/abc_5F123/,
  - clicking on the view on site link results in a 404 error at the /admin/r/8/abc_5F123/ url. I get the message: "Content type 8 object abc_5F123 doesn't exist".
  - manually entering the url /admin/r/8/abc_123/ (note: without the '5F') works as expected.
  - manually entering the url /admin/testapp/mymodel/abc_123/ does not work in this demo, although it worked in the full application where I first encountered the problem.
I'd like to be able to use the view on site link even when objects have underscores in their primary keys. Even better would be to also remove the strange '5F' from the admin site URLs, although since they still work it is not absolutely necessary.
Tested with:
- Django 1.4 using SQLite
- both Chrome and Firefox
- Discovered the issue on Linux, then created small test app on Windows.
I hope that makes sense. Thanks.
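The '5F' comes from the admin's quoting scheme for primary keys in URLs: characters that would be unsafe in a URL path are replaced by '_' plus their two-digit hex code, and since '_' doubles as the escape character it must itself be escaped (0x5F is hex for '_'). A rough Python sketch of the idea (modeled on, not copied from, Django's admin utilities):

```python
# Characters the admin treats as special in object-id URLs (illustrative set).
SPECIAL = ':/_#?;@&=+$,"[]<>%\n\\'

def admin_quote(s):
    # '_' is the escape character, so it is escaped too: '_' -> '_5F'.
    return "".join("_%02X" % ord(c) if c in SPECIAL else c for c in s)

def admin_unquote(s):
    out, i = [], 0
    while i < len(s):
        if s[i] == "_" and i + 2 < len(s):
            try:
                out.append(chr(int(s[i + 1:i + 3], 16)))
                i += 3
                continue
            except ValueError:
                pass
        out.append(s[i])
        i += 1
    return "".join(out)

print(admin_quote("abc_123"))      # abc_5F123
print(admin_unquote("abc_5F123"))  # abc_123
```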
Change History (9)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
See for the pull request.
comment:4 Changed 6 years ago by
comment:5 Changed 6 years ago by
comment:6 Changed 4 years ago by
Although this bug was closed as fixed 2 years ago, related issues still seem to be present in the latest stable Django release, 1.6.2.
CharField primary keys with underscores are (un)escaped differently in the admin pages, and when reversing admin URLs with django.core.urlresolvers.reverse() or the {% url %} template tag.
The admin pages add "5F" after the underscore when rendering URLs, but reverse() does not. As a result, links from custom pages into the admin break when the object's primary key contains underscores.
Steps to reproduce:
- Create a model with "id = models.CharField(max_length=100, primary_key=True)" or other similar text primary key field,
- Register the model to the admin site,
- Create a model instance with test_123 as the primary key,
- Visit the admin, and observe admin URLs: they are of the format "/admin/<app>/<model>/test_5F123/"
- Open the shell, and call reverse('admin:app_model_delete', args=['test_123'])
- The URLs returned are of the format "/admin/<app>/<model>/test_123/" - no 5F
- As a result, if these links are embedded in pages, they don't work - the admin page tries to decode the primary key, expects 5F, and does not find the object.
comment:7 Changed 4 years ago by
comment:8 Changed 4 years ago by
Please open a new ticket for related issues, rather than reopening a ticket where the fix has already been released. Thanks!
The problem is that we pass the escaped object_id into contenttype_views.shortcut -- we either have to unquote there or don't pass the quoted url into it.
(Current version: 0.2.2)
Announcement: Developers Wanted
Do you use PySWIP? Do you wish it was more robust, more functional, in short it was better? Here's the chance for you...
I can't find the time to maintain PySWIP anymore, so if anyone there would like to assume that responsibility, I'll be handing over the project to him/her.
You'll need the following skills:
- Python programming and familiarity with ctypes,
- Familiarity with SWI-Prolog and its foreign language interface (knowledge of C is handy),
... And of course a love of Free/Open Source Software...
Being a maintainer, you'll have the following responsibilities:
- Review and apply user contributed patches,
- Fix current and future issues of the package,
- Enhance the package to support more SWI-Prolog functionality,
- Write necessary documentation,
- Make frequent releases of the package,
- Announce these releases,
- Anything else you think should be done to carry PySWIP forward.
I want the project hosted at Google Code until I believe I've found the right person, so you'll need a Google account.
If you think you're the right person to carry PySWIP forward, please contact me at yucetekol AT gmail DOT com. Thank you.
PySWIP is a GPL'd Python - SWI-Prolog bridge enabling you to query SWI-Prolog in your Python programs. It features an (incomplete) SWI-Prolog foreign language interface, a utility class that makes querying Prolog easy, and also a Pythonic interface.
Since PySWIP uses SWI-Prolog as a shared library and ctypes to access it, it doesn't require compilation to be installed.
Requirements
- Python 2.3 and higher.
- ctypes 1.0 and higher.
- SWI-Prolog 5.6.x and higher (not the development branch).
- libpl as a shared library.
- Works on Linux and Win32, should work for all POSIX.
Note: Please do not use the SVN version, because since I won't be able to access it for some time, it will lag behind development...
Example (Using Prolog)
>>> from pyswip import Prolog >>> prolog = Prolog() >>> prolog.assertz("father(michael,john)") >>> prolog.assertz("father(michael,gina)") >>> list(prolog.query("father(michael,X)")) [{'X': 'john'}, {'X': 'gina'}] >>> for soln in prolog.query("father(X,Y)"): ... print soln["X"], "is the father of", soln["Y"] ... michael is the father of john michael is the father of gina
Since version 0.1.3 of PySWIP, it is possible to register a Python function as a Prolog predicate through SWI-Prolog's foreign language interface.
Example (Foreign Functions)
from pyswip import Prolog, registerForeign def hello(t): print "Hello,", t hello.arity = 1 registerForeign(hello) prolog = Prolog() prolog.assertz("father(michael,john)") prolog.assertz("father(michael,gina)") list(prolog.query("father(michael,X), hello(X)"))
Outputs:
Hello, john Hello, gina
Since version 0.2, PySWIP contains a 'Pythonic' interface which allows writing predicates in pure Python.
Example (Pythonic interface)
from pyswip import Functor, Variable, Query, call assertz = Functor("assertz", 1) father = Functor("father", 2) call(assertz(father("michael","john"))) call(assertz(father("michael","gina"))) X = Variable() q = Query(father("michael",X)) while q.nextSolution(): print "Hello,", X.value q.closeQuery()
Outputs:
Hello, john Hello, gina
If you want to contribute to PySWIP's development or just ask questions you can join Google PySWIP Group.
PySWIP is being developed by Yuce Tekol, you can email me at: yucetekol [AT] gmail [DOT] com. Please see Authors for a full list of contributors. | http://code.google.com/p/pyswip/ | crawl-002 | refinedweb | 565 | 57.16 |
I needed a collection of different website links to experiment with Docker cluster. So I created this small script to collect one million website URLs.
Code is available on Github too.
Install the dependencies.
pip install requests, BeautifulSoup
Activate the virtual environment and run the code.
python one_million_websites.py
import requests from bs4 import BeautifulSoup import sys import time headers = { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8", "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8", "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/64.0.3282.167 Chrome/64.0.3282.167 Safari/537.36" } site_link_count = 0 for i in range(1, 201): url = "-" + str(i) + ".html" response = requests.get(url, headers = headers) if response.status_code != 200: print(url + str(response.status_code)) continue soup = BeautifulSoup(response.text, 'lxml') sites = soup.find_all("td",{"class": "web_width"}) links = "" for site in sites: site = site.find("a")["href"] links += site + "\n" site_link_count += 1 with open("one_million_websites.txt", "a") as f: f.write(links) print(str(site_link_count) + " links found") time.sleep(1)
We are scraping links from site. If you inspect the webpage, you can see anchor tag inside
tdtag with class
web_width.
We will convert the page response into BeautifulSoup object and get all such elements and extract the
HREF value of them.
Although there is natural delay of more than 1 second between consecutive requests which is pretty slow but is good for server. I still introduced one second delay to avoid 429 HTTP status.
Scraped links will be dumped in text file in same directory.
Hosting Django App for free on PythonAnyWhere Server.
Featured Image Source : | https://pythoncircle.com/post/545/python-script-10-collecting-one-million-website-links/ | CC-MAIN-2021-39 | refinedweb | 288 | 55 |
Mailbox/Postbox Alert
Just finished my new project.
I have added two magnetic switches to the front and the back of my postbox. Now i get a Mail if the postman was here
You need:
1x Arduino Mini Pro 3,3
1x Battery Holder (2xaaa)
1x NRF
2x Reed switches (�er)
2x magnets ()
And this or a similar sketch:
It works great
- RJ_Make Hero Member last edited by
Okay
But at the moment its fixed only freestyle
2x Reedschalter (�er)
Hast du gut gemacht mit dem Reedschalter.
I mean nice denglish
hmm after some days testing it works very fine, but:
the battery drains very fast and in dont know why. The Arduino sleeps the whole time and wakes up when the postman is coming (maybe 2 times a week).
The battery drains 11% in 7 Days
Any Ideas?
The power LED and regulator is cutted and the fuses are modified.
Regards
n3ro
- RJ_Make Hero Member last edited by
- BulldogLowell Contest Winner last edited by BulldogLowell
I'm guessing that your repeat() function is wearing out the battery.
By the way you can take advantage of the return of sleep( ) here:
gw.sleep(MAILBOX_FRONT_PIN - 2, CHANGE, MAILBOX_BACK_PIN - 2, CHANGE, 0);
like this (untested):
#include <MySensor.h> #include <SPI.h> #include <readVcc.h> #define NODE_ID 21 // ID of node #define CHILD_ID 1 // Id of the sensor child #define MAILBOX_FRONT_PIN 2 // Arduino Digital I/O pin for button/reed switch #define MAILBOX_BACK_PIN 3 // Arduino Digital I/O pin for button/reed switch #define MIN_V 1900 // empty voltage (0%) #define MAX_V 3200 // full voltage (100%) MySensor gw; MyMessage msg(CHILD_ID, V_TRIPPED); boolean post = false; boolean lastpost = false; int oldBatteryPcnt; int repeat = 20; int trigger = -1; unsigned long goToSleepTime; void setup() { gw.begin(NULL, NODE_ID, false); pinMode(MAILBOX_FRONT_PIN, INPUT); pinMode(MAILBOX_BACK_PIN, INPUT); digitalWrite(MAILBOX_FRONT_PIN, HIGH); digitalWrite(MAILBOX_BACK_PIN, HIGH); gw.sendSketchInfo("Mailbox Alert", "1.0"); gw.present(CHILD_ID, S_MOTION); Serial.println("---------- set Mailbox empty"); msg.set(0); } void loop() { if (millis() - goToSleepTime > 10UL) //debounce { switch (trigger) { case 0: post = true; Serial.println("---------- New Mail"); break; case 1: post = false; Serial.println("---------- Mailbox emptied"); break; default: Serial.println("---------- I just woke up"); } } trigger = -1; if (post != lastpost) { Serial.print("---------- Send Mailboxstate "); Serial.println(post ? "full" : "empty"); msg.set(post); lastpost = post; sendBattery(); } // Sleep until something happens with the sensor goToSleepTime = millis(); trigger = gw.sleep(MAILBOX_FRONT_PIN - 2, CHANGE, MAILBOX_BACK_PIN - 2, CHANGE, 0); } void sendBattery() // Measure battery { int batteryPcnt = min(map(readVcc(), MIN_V, MAX_V, 0, 100), 100); if (batteryPcnt != oldBatteryPcnt) { gw.sendBatteryLevel(batteryPcnt); // Send battery percentage oldBatteryPcnt = batteryPcnt; } Serial.print("---------- Battery: "); Serial.println(batteryPcnt); }
Thx for the tips. I will try it.
I have just measured the current:
32uA in sleep and both opened
when one reen switch is closed: 171uA
when both closed: 309uA
i use a internal pullup and dont know why the current is so high when switch is closed
it is fixed
changed internal pullup to an external with 680k. now its 32uA with both opened and 47uA with both closed
Hi!
I+m trying to do a similar mailbox sensor, but I get different power consumptions hen I ground (pulling down) input pin 2 and 3.
I´m using a pro mini, 3.3v standard boot loader with removed LED and 470k resistors as external pull-ups.
In sleep mode with it draws about 65uA
Roughly the same with input pin 3 -grounded (state 0)
But when I ground pin 2 it draws 7mA
Any ideas whats wrong?
@Moshe-Livne just deactivate the this lines:
//digitalWrite(MAILBOX_FRONT_PIN, HIGH);
//digitalWrite(MAILBOX_BACK_PIN, HIGH);
and solder two resistor from 3.3v to pin 2 and 3.
Ok, so I found that it is the radio that draws about 7mA when I ground input pin 2, it is connected to IRQ-pin on the radio.
@n3ro Have you connected the radio as described on the build-page? [](link url)
@BulldogLowell could you please explain your changes? I don't understand everything of this
@n3ro Ok, thanks! Then I learned something new today.. IRQ is only needed for receiving nodes then, like repeaters and gateways, right?
Strange though that my circuit draws so much more current than yours. I have tested different setups, different arduinos and different radios from different suppliers with same result.
Well, at least my mailbox are mysensored now!
I think you need the irq only for the gw. I have a receive node without irq.
Do you have changed fuses on your arduino?
And remember. All this stuff is cheap China electronic.The devices can vary consume a lot of electricity.
Nope, havent done that yet. It´s a "stock" 3v3 pro mini. Still reading up on how to do that and how to change to MYS-bootloader, curious on how OTA-update works.
Big thanks for your respons!
@f1dev sweebe wrote a very good howto:
@n3ro Brilliant! He recommends to take out the voltage regulator. it is easier (and works better) just to snip off one leg as described here. I'll try his other mods.... hopefully now my nodes will last forever!!!!
- Ashley Savage last edited by
quality project, been having a tinker with mixed results. used sketch from original post, but would it be possible to get a circuit diagram showing pull up locations??? bit of a beginner, but have had a few success with rolling out some power operated sensors , this is my first battery operated sensor...
No problem. I make one
give me a day
- Ashley Savage last edited by
thanks for the diagram, much appreciated...
- mathieu44444 last edited by
Hello,
Does this code still work?
Thank you | https://forum.mysensors.org/topic/1640/mailbox-postbox-alert/9 | CC-MAIN-2019-09 | refinedweb | 929 | 66.33 |
ImageChops
When I use cmsplugin_nivoslider.thumbnail_processors.pad_image on virtualenv with pillow error is ImageChops not found, I think you should change import statements. Another thing is when I use django filer with activated pad_image all images on web site are scaled to origin proportion. If I want to use wide upscale and crop is not possible.
If you want an upscale and crop, don't add our pad_image thumbnail processor ;) For the ImageChops ImportError, I can't reproduce your bug. What versions of Python/Django/Pillow are you using?
Hey,
I am also having this issue:
version Pillow 2.1.0 platform linux2 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0]
Python 2.7.4 (default, Apr 6 2013, 19:20:36) [GCC 4.8.0] on linux2
Django 1.5.1
Can I give you any more info?
Oh and im using the version from Pypi.
I also can't reproduce the error.
ImageChops come from PIL/PIllow. I suggest reinstalling Pillow, and take care "Pil setup summary" has no errors. Also testing with Pillow 1.1.7, and/or PIL 1.1.7. Also check easy-thumbnails is at last version.
Try this from ipython (or python) console:
Gives you an ImportError?
Python 2.7.4 (default, Apr 6 2013, 19:20:36) Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details.
In [1]: from PIL import ImageChops
In [2]:
No import error, I can get rid of this error by removing your extra thumbnail processor from settings.
This works:
THUMBNAIL_PROCESSORS = ( 'easy_thumbnails.processors.colorspace', #'cmsplugin_nivoslider.thumbnail_processors.pad_image', 'easy_thumbnails.processors.autocrop', #'easy_thumbnails.processors.scale_and_crop', 'filer.thumbnail_processors.scale_and_crop_with_subject_location', 'easy_thumbnails.processors.filters', )
This doesn't
THUMBNAIL_PROCESSORS = ( 'easy_thumbnails.processors.colorspace', 'cmsplugin_nivoslider.thumbnail_processors.pad_image', 'easy_thumbnails.processors.autocrop', #'easy_thumbnails.processors.scale_and_crop', 'filer.thumbnail_processors.scale_and_crop_with_subject_location', 'easy_thumbnails.processors.filters', )
I'm wondering if it could be to do with the subject aware scale and crop?
OK, I managed to reproduce the bug. It's because Pillow dropped "classical" PIL imports in favor of
from PIL import …. I just pushed a fix and will upload a new version to PyPI. | https://bitbucket.org/bercab/cmsplugin-nivoslider/issues/3/imagechops | CC-MAIN-2018-09 | refinedweb | 381 | 53.88 |
dEiOQsRStuUvVWxX3?] [-nameshouldis implied.
If this option is given, the first element of
sys.argvwill be
"-"and the current directory will be added to the start of
sys.path.
See also
runpy.run_path()
- Equivalent functionality directly available to Python code
.
Changed in version 2.5: Directories and zipfiles containing a
__main__.pyfile at the top level are now considered valid Python scripts.
If no interface option is given,
-i is implied,
sys.argv[0] is
an empty string (
"") and the current directory will be added to the
start of
sys.path.
See also
1.1.2. Generic options¶
-?
¶
-h
¶
¶
Print a short description of all command line options.
Changed in version 2.5: The
--helpvariant.
1.1.3. Miscellaneous options¶
-B
¶
If given, Python won’t try to write
.pycor
.pyofiles on the import of source modules. See also
PYTHONDONTWRITEBYTECODE.
New in version 2.6.
-d
¶
Turn on parser debugging output (for wizards only, depending on compilation options). See also
PYTHONDEBUG.
-E
¶
Ignore all
PYTHON*environment variables, e.g.
PYTHONPATHand
PYTHONHOME, that might be set.
New in version 2.2.
.
-O
¶
Turn on basic optimizations. This changes the filename extension for compiled (bytecode) files from
.pycto
.pyo. See also
PYTHONOPTIMIZE.
-Q
<arg>¶
Division control. The argument must be one of the following:
old
- division of int/int and long/long return an int or long (default)
new
- new division semantics, i.e. division of int/int and long/long returns a float
warn
- old division semantics with a warning for int/int and long/long
warnall
- old division semantics with a warning for all uses of the division operator
-R
¶.
-s
¶
Don’t add the
user site-packages directoryto
sys.path.
New in version 2.6.
-S
¶
Disable the import of the module
siteand the site-dependent manipulations of
sys.paththat it entails.
-t
¶
Issue a warning when a source file mixes tabs and spaces for indentation in a way that makes it depend on the worth of a tab expressed in spaces. Issue an error when the option is given twice (
-tt).
-u
¶).
Starting from Python 2.7,
DeprecationWarningand its descendants are ignored by default. The
-Wdoption can be used to re-enable them.
Warnings can also be controlled from within a Python program using the
warningsmodule.
The simplest form of argument is one of the following action strings (or a unique abbreviation) by themselves:.
-3
¶
Warn about Python 3.x possible incompatibilities by emitting a
DeprecationWarningfor features that are removed or significantly changed in Python 3.
New in version 2.6.
1.1.4. Options you shouldn’t use¶
-U
¶
Turns all string literals into unicodes globally. Do not be tempted to use this option as it will probably break your world. It also produces
.pycfiles with a different magic number than normal. Instead, you can enable unicode literals on a per-module basis by using:
from __future__ import unicode_literals
at the top of the file. See
__future__for details.
1.2. Environment variables¶
These environment variables influence Python’s behavior, they are processed before the command-line switches other than -E.in this file.
PYTHONY2K¶
Set this to a non-empty string to cause the
timemodule to require dates specified as strings to include 4-digit years, otherwise 2-digit years are converted based on rules described in the
timemodule documentation., OS X, OS/2, and RiscOS.
PYTHONDONTWRITEBYTECODE¶
If this is set, Python won’t try to write
.pycor
.pyofiles on the import of source modules. This is equivalent to specifying the
-Boption.
New in version 2.6.
PYTHONHASHSEED¶
If this variable is set to
random, the effect is the same as specifying the
-Roption:.
PYTHONIOENCODING¶
Overrides the encoding used for stdin/stdout/stderr, in the syntax
encodingname:errorhandler. The
:errorhandlerpart is optional and has the same meaning as in
str.encode().
New in version 2.6.
PYTHONNOUSERSITE¶
If this is set, Python won’t add the
user site-packages directoryto
sys.path.
New in version 2.6.
PYTHONUSERBASE¶
Defines the
user base directory, which is used to compute the path of the
user site-packages directoryand Distutils installation paths for
python setup.py install --user.
New in version 2.6.HTTPSVERIFY¶
If this environment variable is set specifically to
0, then it is equivalent to implicitly calling
ssl._https_verify_certificates()with
enable=Falsewhen
sslis first imported.
Refer to the documentation of
ssl._https_verify_certificates()for details.
New in version 2.7.12.
1.2.1. Debug-mode variables¶
Setting these variables only has an effect in a debug build of Python, that is,
if Python was configured with the
--with-pydebug build option.
PYTHONTHREADDEBUG¶
If set, Python will print threading debug info.
Changed in version 2.6: Previously, this variable was called
THREADDEBUG.
PYTHONDUMPREFS¶
If set, Python will dump objects and reference counts still alive after shutting down the interpreter. | https://docs.python.org/2/using/cmdline.html?highlight=pythonhashseed | CC-MAIN-2017-13 | refinedweb | 800 | 53.07 |
I recently needed a configuration mechanism which would detect changes without requiring an application domain restart. I also wanted to move away from XML. This is what I came up with (and hopefully I’ll get some helpful feedback).
First, we declare a ConfigurationData object which holds our actual configuration values:
public interface IConfigurationData { bool LogAll{ get; } string CdnUrl{ get; } } public class ConfigurationData : IConfigurationData { private bool _logAll; private int _cdnUrl; public bool LogAll { get { return _logAll; } set{ _logAll = value; } } public string CdnUrl { get { return _cdnUrl; } set { _cdnUrl = value; } } }
There isn’t much to explain here, so let’s move on to the class that does the heavy lifting:
public static class Configuration { private static readonly string _applicationPath = HttpRuntime.AppDomainAppPath; private static ConfigurationData _instance = LoadInitialConfiguration(); public static IConfigurationData GetInstance { get { return _instance; } } private static ConfigurationData LoadInitialConfiguration() { var watcher = new FileSystemWatcher(_applicationPath, "settings.config"); watcher.NotifyFilter = NotifyFilters.LastWrite; watcher.Changed += (s, e) => _instance = LoadConfiguration(); watcher.EnableRaisingEvents = true; return LoadConfiguration(); } private static ConfigurationData LoadConfiguration() { return Converter.DeserializeFromFile<ConfigurationData>(_applicationPath + "settings.config", "_"); } }
The class essentially loads the
~settings.config file, and sets up a watch to reload it whenever it changes. Here I’m using JSON (and my own JSON library) to read the file, but you could use anything, such as yaml or xml. The file might look something like:
{ "logAll": true, "_cdnUrl": "" }
If you are using a DI framework, such as ninject, you can hook it via:
Bind<IConfigurationData>().ToMethod(c => Configuration.GetInstance);
otherwise, you can call it directly from code with:
if (Configuration.GetInstance.LogAll) {...}
There are two things you’ll need to be careful with. First you’ll want to make sure to avoid caching values within a class. For example, if you did something like:
public static class HtmlExtensions { private static readonly string _cdnUrl = Configuration.GetInstance.CdnUrl; }
then a copy of
CdnUrl would be stored in _cdnUrl and changes to your
file wouldn’t be available to this class.
Secondly, the
Changed event is known to fire twice. This shouldn’t cause any threading issues, but if your LoadConfiguration is particularly intensive, or if you happen to be debugging something, then you’ll at least know to expect it.
I liked your concept and have extended it a tiny bit. I have posted code here:.
The configuration management has been refactored into its own class. This allows the implementation of a configuration manager that reads the web.config or another file (you provide your own parsing routine).
The advantage to me of this code is it is reusable. I am a bit uneasy with the static class. I implemented the analogous class, but will probably not use it. Getting config data from static classes has the disadvantages for testing. For instance, you need to read / write to the file system for testing. I will probably instantiate a config object and then cache it.
I have been working with FileSystemWatcher in a Windows Service today. I posted up a little mechanism to get around that double event firing problem. I probably shouldn’t share it here as it doesn’t have any interfaces or design patterns in it
See the bottom Community Content entry at
I basically have a static DateTime on my class and discard any additional events if it happens less than a second after the first one.
I’ve been playing around with using MongoDB to handle my configuration settings. The fluidity of the object database and low friction of the C# API are nice.
However, the service itself needing to run is a little painful.
Bob, for some cases there may be advantages to doing something more clever with the path. For simpler cases, I think this is a fine first approach.
I find JSON readable, but that may depend on the amount and type of information you need. YAML or XML may be preferred by others.
Good think you caught the static class, ‘cuz I was about to apologize for not having a private constructor
…just saw that the whole class is ‘static’ so that takes care of #1 from my previous response…
General concept looks great. What about the following…
1. You are allowing users to ‘new’ up an instance of the ‘Configuration’, when it looks like you intend to have a singleton. If so, make the ctor private.
2. The config file path is tightly constrained to the a certain file in a web app. I have had projects where there are multiple config files. I would prefer to see this path injected into the design. Or at least able to override. This may add design pressure that breaks your ability to keep the objects / methods static. I think that is okay. You can still instantiate an instance and then cache the non-static object.
3. Generally config files are considered to be never touched by a person. However they are. So my preference would be to keep the config file as verbose as possible to add clarity. JSON is great for transport because it really strips out a lot of the XML tags. However, for clarity…maybe not so good.
Just be sure that the exception handling is solid. The trouble with config is it’s “configurable”… Someone screwing around with the production settings should ideally not take down the app if you can avoid it. (Try to deserialize, if failed, log it, e-mail it, keep the current settings, and try again later.)
Instead of filesystem watcher, you can keep the singleton instance in asp.net cache with cache dependency of the actual web settings file itself.
If any changes to your settings file will invalidate the cache. While reading the singleton websettings you can look for the existence of the cache, if cache exists use the existing singleton instance else recreate the singleton instance.
By doing above, you don’t need to rely of file system watcher.
Karl,
Cool as always, thanks!
Just a reminder, though: FileSystemWatcher won’t work in Medium Trust
.
I think Phil was using App_Code folder for that…
Anyway, thanks man!
@brad
The code should be fixed. In ConfigurationData, change:
public int CdnUrl
{
get { return _cdnUrl; }
set { _cdnUrl = value; }
}
to
public string CdnUrl
{
get { return _cdnUrl; }
set { _cdnUrl = value; }
}
(int changed to string)
Karl, Looks pretty cool. I’m recieving this error though:
‘BeyondWebConfig.ConfigurationData’ does not implement interface member ‘BeyondWebConfig.IConfigurationData.CdnUrl’. ‘BeyondWebConfig.ConfigurationData.CdnUrl’ cannot implement ‘BeyondWebConfig.IConfigurationData.CdnUrl’ because it does not have the matching return type of ‘string’.
Looks like the CdnUrl property in the ConfigurationData class needs to be of type string and not int.
Interesting. When I was fed up with the XML mess, I used C# as config “language” (instead of our JSON). C# makes the config file a C# script with all the language features, conditional programming, etc. See my blog post “Scripting Configuration for C#”
Isn’t this what Cache and CacheDependency are for?
I’ve always used WCF (and XML) deserialization to read configuration files and never parse it by hand. Lot less code to maintain.
Some changes you might want to consider: You can make the Configuration class non-static and have it injected to its uses through DI.
One of the guys here speaks highly of
Might be worth considering as a better way to generate your strongly typed classes.
Very nice, I like it. Clever idea deserializing.
Don’t think I have the time to implement such a thing. We’re still on xml, I’ve written a DSL using ruby to handle all the config variables for us, and place them in the *.config file.
Works ok for us. And our xml configs aren’t too bad to look over (at the moment).
@Jeremy:
I’ve considered running my own Timer, say on a 1 minute interval, and checking the file’s last write time specifically because of issues with the FileSystemWatcher. Might actually do it.
I did something fairly similar to this a couple of years ago, and ran into a problem where the FileSystemWatcher simply failed to fire sometimes, for no apparent reason. After doing a lot of research, I found a number of reliable sources saying that the FileSystemWatcher is unpredictable, and shouldn’t be solely relied upon. Computers aren’t random, so I’m sure there’s an underlying cause, but I could never find a pattern.
Since my app was already exposed as a web service, I ended up writing a little utility method so I could manually kick off a load of the new data if the watcher didn’t fire. Just something you might want to consider.
The SystemClock idea is great for testing purposes, so that you can inject values into otherwise random static calls (like DAteTime.Now and Random.Next), but I don’t see how it’d be applicable to something like changing runtime configurations.
What about doing something like Jimmy Bogard’s SystemClock? Not sure if it would work, but check it out anyways. | http://codebetter.com/karlseguin/2010/01/28/beyond-web-config/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+CodeBetter+%28CodeBetter.Com%29 | crawl-003 | refinedweb | 1,494 | 56.66 |
Agenda
See also: IRC log
<JacekK> minutes:
Minutes are approved
<scribe> ACTION: Eric to upgrade the SPDL page for SAWSDL readers and then work things out with the Usage Guide [PENDING] [recorded in]
<scribe> ACTION: JacekK to move the test suite to the W3C site [DONE] [recorded in]
<JacekK>
<scribe> ACTION: JacekK to reply to Mary about the limitation for only global element declaration and type definition for schema mappings [DONE] [recorded in]
Jacek: due to US daylight savings time, next calls will be an hour later in US
... hope to be able to go Rec at same time as WSDL
... Please supply resources for our W3C page
Jacek: Precond and effects material has been put in Usage Guide
John: SWRL example is long and unreadable. Can it be in a better format?
Jacek: Can have both, can get previous abstract representation from CVS
John: will send e-mail on readable version.
Jacek: Some problems in WSMO syntax
BNS: agree, will fix.
Tomas: should also clarify namespaces
John: What is o#package?
Jacek: hash instead of colon is a WSMO convention
BNS: Will make a more usable namespace name
Jacek: Karthic has proposed some text, but will have to wait until he is here to discuss it
Ajith: It was for Semantics of non-functional aspects of the services. Should be done in a different way.
... shouldn't be a modelReference
Jacek: Could be a modelReference on service since it is not forbidden
... Not a real clean separation, but that is the WSDL distinction between service and interface
... non-functional annotations should be on service
... could also be done using policy, but shouldn't preclude modelReference for non-functional semantics
John: We can send a word document around and then can send to WS-Policy group.
Jacek: Yes (hopefully as PDF), we could discuss and then suggest to policy group.
Joel: So this is to see how tools use particular 2.0 and 1.1 wsdl
Jacek: Want to see if parsers can interpret these files.
... I made the WSDL as specific as possible, single annotation.
Ajith: WSDL 1.1 files should be that way
Jacek: Ajith, please check and let me know.
... Also included a unannotated WSDL
... Meeting ends | http://www.w3.org/2002/ws/sawsdl/minutes/20070206 | CC-MAIN-2015-27 | refinedweb | 370 | 62.58 |
Feature #16123
Allow calling a private method with `self.`
Description
Problem¶
There is an inconsistency between calling a private attribute writer being allowed with
self.value = syntax and
self.value not being allowed on a private attribute writer.
Calling a private method in this way can be useful when trying to assign the return value of this private method to a local variable with the same name.
Solution¶
The attached patch handles this by compiling the calling into a function call by using the
VM_CALL_FCALL flag, so it is as if the call were made without the
self. prefix, except it won't be confused with local variables at the VM instruction level. It is also compiled like an assignment call, except I didn't use the
COMPILE_RECV macro, since that would remove the
CHECK macro usage around the
COMPILE line.
Files
History
Updated by shevegen (Robert A. Heiler) 23 days ago
I may not completely understand the issue description. What is the inconsistency? (That is a honest
question, by the way; I am not fully understanding the issue domain.)
I am not even entirely sure what a private attribute writer is either; can we use these terms when
we can use e. g. send() at all times? I may not understand this, but I assume you can get the value
of any method via .send() and assign it to the local variable?
Updated by dylants (Dylan Thacker-Smith) 23 days ago
Here is a script to help demonstrate the inconsistency, where
self.bar = 123 is allowed by
self.bar is not.
class Foo def foo self.bar = 123 # allowed self.bar # raises end private attr_accessor :bar end Foo.new.foo
By attribute writer, I was just referring to an assignment method like the one defined by
attr_writer, although the same applies to any assignment method like
def bar=(value); value; end. The inconsistency is just more obvious when dealing with the pair of methods defined by
attr_accessor if they are private because
self. works with one of them but not the other as shown above.
shevegen (Robert A. Heiler) wrote:
I may not understand this, but I assume you can get the value of any method via .send() and assign it to the local variable?
Yes, it can be easily worked around, it just doesn't seem like it should be necessary to workaround this limitation. The point of
private is to keep things from being accessible from other objects, which we know isn't the case when a call is made on
self. directly.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/16123 | CC-MAIN-2019-39 | refinedweb | 431 | 66.23 |
How to Handle Nested Data in Apache Druid vs Rockset
July 19, 2021
Apache Druid is a distributed real-time analytics database commonly used with user activity streams, clickstream analytics, and Internet of things (IoT) device analytics. Druid is often helpful in use cases that prioritize real-time ingestion and fast queries.
Druid’s list of features includes individually compressed and indexed columns, various stream ingestion connectors, and time-based partitioning. It is known to perform efficiently when used as designed: running fast queries on large amounts of data. However, Druid can become problematic when used outside its normal parameters, for example, to work with nested data.
In this article, we’ll discuss ingesting and using nested data in Apache Druid. Druid doesn’t store nested data in the form often found in, say, a JSON dataset. So, ingesting nested data requires us to flatten our data before or during ingestion.
Flattening Your Data
We can flatten data during ingestion using Druid’s field flattening specification (FlattenSpec), or before ingestion using external tools and scripts. Our final requirements and the structure of the data being imported determine which approach to take.
Several text processors help flatten data, and one of the most popular is jq. jq is like JSON’s grep, and a jq command is like a filter that outputs to the standard output. Chaining filters through piping allows for powerful processing operations on JSON data.
For the following two examples, we'll create the governors.json file. Using your favorite text editor, create the file and copy the following lines into it:
[
  {
    "state": "Mississippi",
    "shortname": "MS",
    "info": {"governor": "Tate Reeves"},
    "county": [
      {"name": "Neshoba", "population": 30000},
      {"name": "Hinds", "population": 250000},
      {"name": "Atlanta", "population": 19000}
    ]
  },
  {
    "state": "Michigan",
    "shortname": "MI",
    "info": {"governor": "Gretchen Whitmer"},
    "county": [
      {"name": "Missauki", "population": 15000},
      {"name": "Benzie", "population": 17000}
    ]
  }
]
With jq installed, run the following from the command line:
$ jq --arg delim '_' 'reduce (tostream|select(length==2)) as $i ({}; .[[$i[0][]|tostring]|join($delim)] = $i[1] )' governors.json
The results are:
The most flexible data-flattening method is to write a script or program. Any programming language will do for this. For demonstration purposes, let’s use a recursive method in Python.
def flatten_nested_json(nested_json):
    out = {}

    def flatten(njson, name=""):
        if type(njson) is dict:
            for path in njson:
                flatten(njson[path], name + path + ".")
        elif type(njson) is list:
            i = 0
            for path in njson:
                flatten(path, name + str(i) + ".")
                i += 1
        else:
            out[name[:-1]] = njson

    flatten(nested_json)
    return out
The results look like this:
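For instance, applying the function to a single trimmed-down record from governors.json (the function is repeated here so the snippet runs standalone):

```python
def flatten_nested_json(nested_json):
    out = {}

    def flatten(njson, name=""):
        if type(njson) is dict:
            for path in njson:
                flatten(njson[path], name + path + ".")
        elif type(njson) is list:
            i = 0
            for path in njson:
                flatten(path, name + str(i) + ".")
                i += 1
        else:
            out[name[:-1]] = njson

    flatten(nested_json)
    return out

record = {
    "state": "Mississippi",
    "info": {"governor": "Tate Reeves"},
    "county": [{"name": "Neshoba", "population": 30000}],
}
print(flatten_nested_json(record))
# {'state': 'Mississippi', 'info.governor': 'Tate Reeves',
#  'county.0.name': 'Neshoba', 'county.0.population': 30000}
```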
Flattening can also be achieved during the ingestion process. The FlattenSpec is part of Druid’s ingestion specification. Druid applies it first during the ingestion process.
The column names defined here are available to other parts of the ingestion specification. The FlattenSpec only applies when the data format is JSON, Avro, ORC, or Parquet. Of these, JSON is the only one that requires no further extensions in Druid. In this article, we’re discussing ingestion from JSON data sources.
The FlattenSpec takes the form of a JSON structure. The following example is from the Druid documentation and covers all of our discussion points in the specification:
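A representative FlattenSpec in that spirit (field names and the JSONPath/jq expressions are illustrative, reconstructed along the lines of the current Druid documentation rather than copied from the original example):

```json
"flattenSpec": {
  "useFieldDiscovery": true,
  "fields": [
    { "name": "baz", "type": "root" },
    { "name": "foo_bar", "type": "path", "expr": "$.foo.bar" },
    { "name": "first_food", "type": "jq", "expr": ".thing.food[1]" }
  ]
}
```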
The useFieldDiscovery flag is set to true above. This allows the ingestion specification to access all fields on the root node. If this flag were to be false, we'd add an entry for each column we wished to import.
In addition to root, there are two other field definition types. The path field definition contains an expression of type JsonPath. The jq type contains an expression written in a subset of jq commands called jackson-jq. The ingestion process uses these commands to flatten our data.
To explore this in more depth, we’ll use a subset of IMDB, converted to JSON format. The data has the following structure:
Since we are not importing all the fields, we don’t use the automatic field discovery option.
Our FlattenSpec looks like this:
The newly created columns in the ingested data are displayed below:
Querying Flattened Data
On the surface, querying denormalized data shouldn't present a problem, but it may not be as straightforward as it seems. The only non-simple data type Druid supports is the multi-value string dimension.
The relationships between our columns dictate how we flatten our data. For example, consider a data structure to determine these three data points:
- The distinct count of movies released in Italy OR released in the USA
- The distinct count of movies released in Italy AND released in the USA
- The distinct count of movies that are westerns AND released in the USA
Simple flattening of the country and genre columns produces the following:
With the above structure, it’s not possible to get the distinct count of movies that are released in Italy AND released in the USA because there are no rows where country = “Italy” AND country = “USA”.
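The limitation can be sketched outside Druid with a few hypothetical flattened rows:

```python
# Hypothetical rows after simple flattening: one (movie, country) pair per row.
rows = [
    {"movie": "A", "country": "Italy"},
    {"movie": "A", "country": "USA"},
    {"movie": "B", "country": "USA"},
]

# OR is a per-row predicate, so it works:
or_count = len({r["movie"] for r in rows if r["country"] in ("Italy", "USA")})
print(or_count)  # 2

# AND can never hold on a single row: country cannot equal both values at once.
and_rows = [r for r in rows if r["country"] == "Italy" and r["country"] == "USA"]
print(len(and_rows))  # 0, even though movie "A" was released in both countries
```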
Another option is to import data as multi-value dimensions:
In this case, we can determine the "Italy" AND/OR "USA" number using the LIKE operator, but not the relationship between countries and genres. One organization proposed an alternative flattening, where Druid imports both the individual values and the complete list:
In this case, all three distinct counts are possible using:
- Country = ‘Italy’ OR County = ‘USA’
- Countries LIKE ‘Italy’ AND Countries LIKE ‘USA’
- Genre = ‘Western’ AND Countries LIKE ‘USA’
Alternatives to Flattening Data
In Druid, it’s preferable to use flat data sources. Yet, flattening may not always be an option. For example, we may want to change dimension values post-ingestion without re-ingesting. Under these circumstances, we want to use a lookup for the dimension.
Also, in some circumstances, joins are unavoidable due to the nature and use of the data. Under these conditions, we want to split the data into one or more separate files during ingestion. Then, we can adapt the affected dimension to link to the “external” data whether by lookup or join.
The memory-resident lookup is fast by design. All lookup tables must fit in memory, and when this isn’t possible, a join is unavoidable. Unfortunately, joins come at a performance cost in Druid. To show this cost, we’ll perform a simple join on a data source. Then we’ll measure the time to run the query with and without the join.
To ensure this test was measurable, we installed Druid on an old 4GB PC running Ubuntu Server. We then ran a series of queries adapted from those Xavier Léauté used when benchmarking Druid in 2014. Although this isn’t the best approach to joining data, it does show how a simple join affects performance.
As the chart demonstrates, each join makes the query run a few seconds slower — up to twice as slow as queries without joins. This delay adds up as your number of joins increases.
Nested Data in Druid vs Rockset
Apache Druid is good at doing what it was designed to do. Issues occur when Druid works outside those parameters, such as when using nested data.
Available solutions to cope with nested data in Druid are, at best, clunky. A change in the input data requires adapting your ingestion method. This is true whether using Druid’s native flattening or some form of pre-processing.
Contrast this with Rockset, a real-time analytics database that fully supports the ingestion and querying of nested data, making it available for fast queries. The ability to handle nested data as is saves a lot of data engineering effort in flattening data, or otherwise working around this limitation, as we explored earlier in the blog.
Rockset indexes every individual field without the user having to perform any manual specification. There is no requirement to flatten nested objects or arrays at ingestion time. An example of how nested objects and arrays are presented in Rockset is shown below:
If your need is for flat data ingestion, then Druid may be an appropriate choice. If you need deeply nested data, nested arrays, or real-time results from normalized data, consider a database like Rockset instead. Learn more about how Rockset and Druid compare.
Svante Signell wrote:
> I have the following problem when coding in Octave/Matlab. After reading
> a string from an external file into a variable, like x=fscanf(...)
> resulting in x='string', how can I use this variable content as an
> lvalue, like x.a='something', where x is replaced by its string value,
> resulting in string.a='something' instead of x.a='something'.

As others suggested, you have picked the slow and intricate solution, which is probably an overkill, so I would suggest to reassess whether you really need the eval(). I am guessing that you want a flexible namespace for your variables, but the same goal would be accomplished if you used a hash array, like in Perl or Tcl:

    Perl:                              Tcl:
    $x = "foobar";                     set x foobar
    $myVariables{$x} = 'something';    set myVariables($x) 1

Given the first-class hash arrays, you can implement all the other compound structures: $myVariables{$x}{field}, etc.

Octave doesn't have a built-in hash array type, but it comes pretty close with the variant structure construct: similarly to the examples above, you can have a structure with fields determined by a variable:

    x = 'abc';
    myVariables.(x) = 'something';

The added advantage of this approach relative to what you proposed is that it creates a sort of a namespace: all related variables are collected under the common 'root' name ('myVariables' in the example above); you can pass them all together as a variable to subroutines, make them global, etc.
The Buffer object is simply a block of memory that is delineated and initialized by the user. Many OpenGL functions return data to a C-style pointer; however, because this is not possible in Python, the Buffer object can be used to this end. Wherever pointer notation is used in the OpenGL functions, the Buffer object can be used in its BGL wrapper. In some instances the Buffer object will need to be initialized with the template parameter, while in other instances the user will want to create just a blank buffer, which will be zeroed by default.
Example with Buffer:
import Blender
from Blender import BGL

myByteBuffer = BGL.Buffer(BGL.GL_BYTE, [32, 32])
BGL.glGetPolygonStipple(myByteBuffer)
print myByteBuffer.dimensions
print myByteBuffer.list
sliceBuffer = myByteBuffer[0:16]
print sliceBuffer
Created on 2011-02-23 10:22 by rhettinger, last changed 2011-02-26 01:20 by eric.araujo. This issue is now closed.
Attaching a documentation patch.
This is nice, but IMO there is some information lacking, e.g.:
- when an underlying mapping is mutated, does the ChainMap get updated too?
- does it work with arbitrary mappings or only with dicts or dicts subclasses?
I think new_child() isn't very useful. It seems two specialized for a one-liner. Ditto for parents().
("too specialized", sorry)
I don't think that new_child and parents are too specialized at all, indeed they are essential to one of the primary use cases for the construct. I find Django's push and pop much more intuitive than new_child and parents, however, and would prefer those methods.
An important distinction with Django's push/pop is that they mutate the Context (ChainMap) rather than return a fresh instance.
Yes, that's part of what I find more intuitive about it. I think of the chainmap as a stack. Perhaps if I had a different application (I would use it for either configuration or namespace management) I'd want a different API, but for those two the stack approach seems most natural to me.
In particular, since only the top dict can be updated, it seems most natural to pop it off the top of the stack in order to modify the next one down (and then push it back, if desired). If instead the way to modify the next one down is to do parents, then I'm mutating the chainmap I just did the parents call on, but I'm not referencing that object, I'm referencing the one I got back from the parents call. It just seems more natural that the mutation operations should be carried out via a single chainmap object by using pop and push rather than effectively modifying (potentially multiple) chainmap objects by manipulating other chainmap objects. (Yes, I realize that it is really the underlying dicts that are being modified, but conceptually I'm thinking of the chainmap as a single data structure).
FWIW, the new_child() and parents() part of the API was modeled after contexts in ANTLR, where they are needed to overcome the limitations of Django's push/pop style, which precludes a context from having multiple, independent children at the same time. The module docstring in the recipe shows how new_child() can be used to easily model both dynamic scoping and nested scoping.
The other advantage of the new_child/parents API over the push/pop API is that it overcomes the occasional templating need to keep two copies of the context (before a push and after a push).
In some ways, it is more difficult to keep track of a mutating chain that is being continuously pushed and popped. It is simpler to assign a chain to a variable and always know that it is associated with a given template and not have to worry about whether some utility function pushed a new context and failed to pop it when it was done. A push/pop style introduces the same problems as matching matching malloc() with free() in C.
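A small sketch of that style, using ChainMap as it later shipped in the collections module (values are illustrative):

```python
from collections import ChainMap

# Nested scoping with new_child()/parents: each child gets its own map.
outer = ChainMap({'x': 1, 'y': 2})
inner = outer.new_child({'x': 10})   # child scope shadows x, inherits y

print(inner['x'], inner['y'])        # 10 2
print(inner.parents['x'])            # 1 (skip the child, read the outer scope)

# Two independent children of one parent, which a single Django-style
# push()/pop() stack cannot represent:
a = outer.new_child({'z': 'a'})
b = outer.new_child({'z': 'b'})
print(a['z'], b['z'])                # a b
```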
Minor doc issue: s/__builtin__/builtins/
Antoine. Thanks. I put in a paragraph re-emphasizing that ChainMap is a view and that changes in the underlying mappings get reflected in the ChainMap. Also, the first sentence says that ChainMap groups multiple dicts or other mappings. So, any mapping-like object will work.
Éric, I changed built-in to builtin. It is used as an adjective, not as a module reference (that's the usual practice when referring to the builtin functions).
See r88628.
I was thinking of adding a recipes section to show how to extend or override the class:
class DjangoContext(ChainMap):
    def push(self):
        self.maps.insert(0, {})
    def pop(self):
        self.maps.pop(0)

class NestedScope(ChainMap):
    'Mutating methods that write to first matching dict'
    def __setitem__(self, key, value):
        '''Find the first matching *key* in chain and set its value.
        If not found, sets in maps[0].
        '''
        for m in self.maps:
            if key in m:
                break
        else:
            m = self.maps[0]
        try:
            cs = m.chain_set
        except AttributeError:
            m[key] = value
        else:
            cs(key, value)
    def __delitem__(self, key):
        '''Find and delete the first matching *key* in the chain.
        Raise KeyError if not found.
        '''
        for m in self.maps:
            if key in m:
                break
        try:
            cd = m.chain_del
        except AttributeError:
            del m[key]
        else:
            cd(key)
    def popitem(self):
        for m in self.maps:
            if m:
                break
        return m.popitem()
    def clear(self):
        for m in self.maps:
            m.clear()
Raymond: Sorry I was imprecise. I was referring specifically to “import __builtin__” in collections.rst. | https://bugs.python.org/issue11297 | CC-MAIN-2019-09 | refinedweb | 793 | 64.1 |
I am trying to set a variable that I will refer to in a custom JSP tag, so I have something like this in my JSP:
<%@ taglib prefix="c" uri="" %>
<c:set var="path" ...
How to assign a value from JavaScript to Java variable in the JSP page? Is it possible to do this?
The following is the javascript function:
function loadGroupMembers(beanArrayVal){
document.getElementById("beanVal").value=beanArrayVal;
...
I have the following code:
<c:forEach
I have something like the below code. I need to pass the variable selectedIndex to the JSTL code. How can I do this?
function updateSP(selectedIndex)
{
<c:if
}
I'm getting a little bit frustrated since I can't find out which variables I can access with the ${...} syntax in a Struts tag, placed in a JSP page.
As an example ...
How to set the JSTL variable value in JavaScript?
<script>
function function1()
{
var val1 = document.getElementById('userName').value;
<c:set // how do i set val1 here? ...
How do I assign hidden value to JSTL variable?
Example:
<input type="hidden" name="userName" value="Administrator" />
<c:set // How do I set hidden variable value (Administrator) here?
I want to iterate a HashMap in JavaScript using JSTL. Is it possible to do it like this?
function checkSelection(group,tvalue){
alert(group);
alert(tvalue);
<c:forEach
alert("aa<c:out");
<c:if
...
How do I pass Javascript variable to and JSTL?
<script>
var name = "john";
<jsp:setProperty // How ...
In my webapp, I want to set a default cookie to store a locale of 'en_US'. I have functionality in place for the user to change this successfully.
However, I've removed a ...
I'm trying to add a JSTL/JSP component to a Spring+Flex+Hibernate project. I'm using Tomcat 5, downloaded and added the jakarta-taglibs-standard-1.1.2 dependencies, made some changes to the web.xml, etc... don't want to get ...
I want to use the actual item from my c:forEach in a <% JavaCode/JSPCode %>.
How do I access this item?
<c:forEach
<% MyProduct p = (MyProduct) ${item}; ...
arrays.jsp:
//...
var x = <c:out
<c:if
processExternalArrays();
</c:if>
//...
I'm refactoring some legacy code that uses Struts 1 (No flames please), and I am having a difficult time retrieving a parameter I set in my Action class. Here is the ...
I have a problem with my latest Liferay portlet, or rather a JSP I'm using in this portlet.
I am using a string array that contains strings which are shown on the ...
I want to do something like this:
<c:set
I have a JSP file
<%@ taglib prefix="s" uri="/struts-tags"%>
<% response.setContentType("application/javascript"); %>
var collegename = '<s:text';
I have a custom tag
<%@ tag body-content="scriptless"%>
<%
MyClass mc = request.getAttribute("someValue");
%>
<custom:aboveTag> ...
I am working with ExpressionFactory inside my program, and I want to make ValueExpressions and variables accessible from JSP with EL expressions.
I can't understand something: looks like my variables I ...
I wrote the following code:
<%
int accountNumber = Integer.parseInt(request.getParameter("accountNumber"));
int depositAmount = Integer.parseInt(request.getParameter("depositAmount"));
%>
...
Forgive my ignorance, I am stuck with this. What I need to do is, access a date member in my bean and pass it to a method defined, something like below ...
Can anybody tell me how to assign javascript variables to jsp request or to jsp session.
I am doing something like this
Here deletedRows is a hidden field.
var del=45;
document.getElementById("deletedRows").value=del
alert(document.getElementById("deletedRows").value);
<%String del_values = request.getParameter("deletedRows");%>
<%request.getSession().setAttribute("del_rows", del_values);%>
I have something like this
<c:forEach
<c:if
</c:if>
</c:forEach>
I have some problem with using jstl.
I have this:
<jsp:useBean</jsp:useBean>
<jsp:useBean</jsp:useBean>
<c:if
<c:out</c:out>
</c:if>
package user;
import java.lang.StringBuilder;
public class View {
public ...
I've got a PersistenceSet and would like to check if it contains a certain variable.
How can I check in the JSTL whether subitem exists or not?
However when I try to access ...
I want to dynamically create variable names in java el.
The problem is that the second line returns sessionScope.saved_activity as a string instead of data.
<c:set
<td> <input type="text" ...
In my experience, it is rarely/never necessary to set scope="request" on an EL variable.
For example, I have a page that, given an item parameter, constructs a URL specific to that item ...
I'm having a look at Tiles and I'm having a few problems getting it to work with my JSTL. The templating works fine and I see the page structure I expect. The trouble is that none of my JSTL/EL variables are compiled. If I just try putting a variable straight to the page, it'll just print it out as is (e.g. ...
Hi. For a lot of simpler calculations and validations using JavaScript, we used to declare variables, compute with them, and use them in JavaScript before submitting a form. Say, int x=2,y=3;x=x+y; and use it in the form as a default value, and then we may validate the user input in JavaScript and post the form data using JavaScript itself. For ...
Hi, all! Correct. JavaScript is executed on the client-side. But that doesn't mean that JSTL can't be evaluated and mixed with JavaScript. Not only can you pass JSTL-evaluated code to a JavaScript function, you can add JSTL code within the
// now want to access this "msg" variable in JSP, how? Questions 1. Session Object "NameList" includes a List of Name Strings, so I want to iterate through it and concatenate its name Strings with a comma separator, and eventually the msg should ...
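One way to build that comma-separated string server-side before handing it to JavaScript might look like this (a sketch, not verified against a container; the NameList attribute name is taken from the question):

```jsp
<c:forEach items="${sessionScope.NameList}" var="name" varStatus="st">
  <c:set var="msg" value="${msg}${name}${st.last ? '' : ', '}"/>
</c:forEach>
<script>
  // JSTL is evaluated server-side, so its result can be embedded in the JS:
  var msg = '<c:out value="${msg}"/>';
</script>
```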
How can I get the value from the method of an Object (e.g., length of a String by calling myString.length()) and set the value of the length to a variable? In the code below, my objective is to set myVar with the length() value obtained from myString of myForm object. Code is somewhat like: ...
Using JSP 2.0 and JSTL 1.1.2 I have an object defined within ${sessionScope.sessionData} This sessionData object has a getter and setter for a variable of another object. While I can access the getter like this ${sessionScope.sessionData.myVar} I am not sure how to set/reset it I have tried both ${sessionScope.sessionData.var["null"]} and Both does not give any error, but ...
The need to do this is usually an indicator that JSTL isn't being used properly. JSTL is meant for scriptless JSPs where all of the heavy lifting is done with Java objects before context is ever forwarded to the JSP for markup. If you were comparing objects, you could simply bind your scriptlet variable to request scope with request.setAttribute("name", value) and ...
Hi, I would like to declare an int variable and use it as the looping variable. Just like this: for(int i=0; i<10; i++) { // do something } I want to do this in a JSP. I can do it in a scriptlet, but I don't want to use a scriptlet and want to do it using either a JSTL () or ...
Hi All, a little problem I am facing: I don't know how I will get a user's entered username and password from the form in the browser and put the value in a JSTL tag. I have tried quite a few things but got invalid datasource errors. Please help me get out of this. My code is like below:
Hi All, I'm trying to use a scoped variable in my JSP to help with the formatting of the code. However, I think I'm doing something wrong at two points, but can't figure out what exactly I'm doing wrong. // Point 1 ...
Hi, I am having the following line in notes.jsp.
Hi, I need to call a function in JSP that takes a parameter to get the content of an EBase form field. The parameter is defined by a field in a database. I am getting the data back using JSTL but I need a way of embedding the database field value into the JSP function call. Here's the code: Java Code: ...
Using JSTL, how can I both capture the 'referer' data, strip it down to just wwwdotsomedomaindotcom, and then declare it as a variable for later equality/inequality comparisons? And I had to write out the dummy domain name above because this message board still prevents me from posting URLs or images until I hit 20 posts. Thanks in advance.
Hi everybody, as you might know, I'm one more newbie with one more problem. The thing is, I'm doing a simple shopping cart and I can't show the results of my business logic. I use Interceptors, Struts 2, Hibernate and JSP tech to do this. Here is my code: books.jsp
IBM Announces new IBM z14 mainframe and DS8880 features
TonyPearson | 2017-07-17

Well, it's Tuesday again, and you know what that means? IBM Announcements! I am here in New York for the exciting news!

(FCC Disclosure: I work for IBM. This blog post can be considered a "paid celebrity endorsement" for the IBM z14 mainframe and DS8880 Storage System.)

In support of the IBM z14 mainframe announcement, IBM has also disclosed R8.3 enhancements for the DS8880 Storage System. Here is a quick recap:

New Tier-1 Flash Capacities available for HPFE Gen2 drawers

IBM introduces the new 3.84 TB Tier-1 flash card capacity. In the past, IBM DS8880 only supported Tier-0 cards that support 10 Drive Writes per Day (10 DWPD), with capacities of 400, 800, 1600 and 3200 GB. The Tier-1 flash card only handles 1 DWPD (such cards are often dubbed "Read-Intensive" devices), but can actually handle about 90 percent of most production workloads.

zHyperLink

zHyperLink drastically reduces the latency between the IBM z14 mainframe and the DS8880 storage systems. Traditional FICON paths through SAN switches or directors introduced about 140 to 175 microseconds of latency between systems. This new system is a direct cable, with 20 microsecond latency.

The I/O bays on the DS8880 used for HPFE Gen2 already have zHyperLink ports on them. This direct cable is limited to 150 meters, however, so plan accordingly.

Transparent Cloud Tiering

IBM already announced Transparent Cloud Tiering to IBM Bluemix, IBM Cloud Object Storage and the IBM TS7760 virtualization engine in the R8.2.3 release. The new Release 8.3 of DS8880 now adds support for Amazon S3, providing yet another choice for where to migrate data sets to. IBM also adds replication, allowing the data set to be migrated to two separate target locations for added availability, much like writing to separate ML2 tape cartridges.

Cascading FlashCopy

Cascading FlashCopy is a feature that has existed for a while now on IBM XIV and SAN Volume Controller platforms, so this is just a port of that concept over to the DS8880 microcode. Now, your FlashCopy target can become the source of a follow-on FlashCopy request. You can make copies of copies. This applies to both the volume and data set level functions.

Why would anyone do this? Well, you might suspend your application at midnight and create a clean FlashCopy of a 24-by-7 ever-changing database. Then, the following morning, workers who need a static "midnight version" of the database can use this as their source and perform additional FlashCopy requests for their own needs.

IBM DS8880 MES Support

MES is an abbreviation for "Miscellaneous Equipment Specification", one of the many Three Letter Acronyms (TLAs) where knowing what the words stand for doesn't help. In short, an MES is a formally supported option to upgrade a piece of hardware that is already installed and running at a client location. IBM will offer MES to upgrade existing DS8880 systems with the additional HPFE Gen2 drawers, and to upgrade the I/O bays to support zHyperLink connections.

To learn more, my colleague Jeff Barber adds his thoughts on the In the Making group blog with his post "Integration by design: DS8880 and IBM Z".

(Final note: you might notice the change in upper and lower case. The IBM z14 (lower case) refers to the specific mainframe model, consistent with its predecessors the z13 and z13s, but the family name "IBM z Systems" has been shortened to "IBM Z" (upper case). IBM Storage Systems and IBM POWER Systems were already upper case, so the mainframe guys just wanted to follow suit. I suspect "IBM i" will remain lower case, however.)

technorati tags: IBM, IBM Z, IBM z14, mainframe, DS8000, DS8880, HPFE, HPFE Gen2, zHyperLinks, Cascading FlashCopy, MES, DWPD

IBM Announcements 2017 July 11
TonyPearson | 2017-07-11
IBM Announcements!</p> <dl dir="ltr"> <dt><b>IBM Elastic Storage Server</b></dt> <dd> <p>Replacing the older "GSn" and "GLn" models, IBM announces the "Second Generation" GSnS and GLnS models (the second "S" stands for Second Generation), the "n" continues to refer to the number of storage drawers. All of these have a pair of POWER8 servers to drive amazing performance at a low price point.</p> <p>The "GSnS" models are based on smaller 2U, 24-drive storage drawers, with 3.84 and 15.36 TB Tier-1 Read-intensive Solid-State Drives (SSD). The "GLnS" models are based on larger 5U, 84-drive storage drawers, with 4TB, 8TB and 10TB nearline (7200 rpm) spinning disk.</p> These new models have the latest IBM Spectrum Scale software pre-installed. <p> </p> <p>To learn more, see [<a href="">IBM ESS GLnS models</a>], [<a href="">IBM ESS GSnS models</a>], and [<a href=""> IBM Elastic Storage Server v5.2 delivers flash-based storage models and Power I/O and server enhancements</a>] press releases.</p> </dd> <dt><b>Nutanix on IBM Power servers</b></dt> <dd> <p>In addition to IBM's two existing Hyperconverged offerings--IBM Spectrum Accelerate for x86 servers, and IBM Spectrum Scale for x86, POWER and z Systems servers--IBM Power Systems now offers a third option. 
This integrated offering combines Nutanix's Enterprise Cloud Platform software with IBM Power Systems™ hardware to deliver a turnkey hyperconverged solution that targets critical workloads in large enterprises.</p> <p>Nutanix is offered and will be defaulted/required on these Power® servers only:</p> <ul> <li>IBM CS821 (8005-12N)</li> <li>IBM CS822 (8005-22N)</li> </ul> <p>To learn more, see [<a href=""> IBM Hyperconverged Systems</a>] and [<a href=""> IBM Systems powered by Nutanix</a>] press releases.</p> </dd> </dl> <p dir="ltr">While "Hyperconvergence" is still fairly new, and only about 1 percent of data centers have deployed this new technology, I am glad that IBM is a leader in this space with multiple offerings across both x86 and POWER systems platforms.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">ESS</a>, <a href="" rel="tag">Elastic Storage Server</a>, <a href="" rel="tag">Nutanix</a>, <a href="" rel="tag">CS821</a>, <a href="" rel="tag">CS822</a>, <a href="" rel="tag">Hyperconverged Systems</a>, <a href="" rel="tag">Hyperconvergence</a>, <a href="" rel="tag">x86</a>, <a href="" rel="tag">POWER</a></p> Well, it's Tuesday again, and you know what that means? IBM Announcements! IBM Elastic Storage Server Replacing the older "GSn" and "GLn" models, IBM announces the "Second Generation" GSnS and GLnS models (the second... 0 0 418cb0c218-33e0-4700-adc5-0b383c4ce321 IBM Systems Technical University - Day 5 morning and Final Thoughts TonyPearson 120000HQFF active false Entradas de comentarios application/atom+xml;type=entry Número de "me gusta" true 2017-05-26T17:43:55-04:00 2017-05-26T17:43:55 5, the last day of the conference.</p> <dl dir="ltr"> <dt><b>Integrating IBM Storage in Container Environments</b></dt> <dd> <p>Dr. Robert Haas, IBM CTO Storage for Europe, presented IBM Storage for Docker containers. 
These are different from containers in IBM Cloud Object Storage, and different from the Container Pools used in Spectrum Protect.</p> <p>Robert gave an overview of IBM Spectrum Conductor, part of the IBM Software Defined Infrastructure (SDI) Spectrum Compute family of software products. The goal is to analyze large amounts of data, access these data efficiently, and protect the data, results and insights as intellectual property.</p> <p>IBM Spectrum Compute comes in several offerings. IBM Spectrum LSF (Load Sharing Facility) manages long-running batch jobs for modeling, design and simulations. IBM Spectrum Symphony provides low-latency for risk analytics in the financial services sector. IBM Spectrum Conductor comes in two flavors. Conductor for Spark (CFS) manages Spark analytics. Conductor for Containers (CFC) handles Docker and Kubernetes containers.</p> <p>Docker is the run-time platform. While there are other container run-time platforms like RKT and LXD, Docker is clearly the marketshare leader, growing 40 percent per year.</p> <p>Statistics from the latest DockerCon2016 conference showed the most popular use cases and workloads for Docker. What can run in Docker: Lots of applications can be "containerized", including Redis, MongoDB, PostgreSQL, OracleDB, Java, to name a few. Docker is well established in enterprises, including service providers, healthcare, insurance and financial services, public sector, and technology firms.</p> <p>Kubernetes, Mesos and Docker/Swarm are a layer above, as orchestrators. Spectrum Conductor for Containers uses Kubernetes and other open source tools to coordinate activity. Orchestrators restart failed applications, and can scale up or scale down the number of instances as needed. 
Orchestrators can manage groups of applications across clusters, both on-premises and in off-premises Clouds.</p> <p>From a storage perspective, containers access storage like bare-metal operating systems, bypassing all of the layers normally associated with bloated Virtual Machine hypervisors. This also eliminates the need for single root I/O virtualization (SR-IOV), which VMs use to compensate.</p> <p>Persistent storage can be isolated, so that containers cannot see the files of other containers. This provides multi-tenancy.</p> <p>There are four ways to attach storage:</p> <ol> <li>Writeable layer, ephemeral (non-persistent) storage.</li> <li>Internal persistent storage (a directory on the host file system). However, if you move a container from one host to another, you may lose access to this internal storage.</li> <li>External volume, manually mounted.</li> <li>Volume driver plug-in REST API that automatically mounts it.</li> </ol> <p> </p> <p>The fourth method is preferred. Plug-ins are available for IBM Spectrum Scale, GlusterFS, Portworx, Rancher Convoy, RexRay, and Contiv. The start-up behind Flocker went out of business last year.</p> <p>Docker hosts can attach to IBM Spectrum Scale through all of its supported offerings, including the POSIX, NFS and SMB protocols. Containerized applications can move from one Docker host to another and continue to access the IBM Spectrum Scale namespace.</p> <p>IBM has created the "Ubiquity Volume Service" that provides a consistent API for Docker and Kubernetes. This will use IBM Spectrum Control Base Edition to support IBM Spectrum Scale, Spectrum Accelerate, Spectrum Virtualize and DS8000 storage systems. For IBM Spectrum Scale, volumes are mapped to iSCSI volumes, filesets or directories. For other devices, volumes are mapped to block LUNs.
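<p dir="ltr">A minimal sketch of that mapping decision (the function and backend names are hypothetical, not the actual Ubiquity API): file-oriented backends provision filesets or directories, block-oriented backends provision LUNs.</p>

```python
# Hypothetical sketch of a volume service routing a create-volume request:
# file backends get a fileset/directory, block backends get a LUN.
FILE_BACKENDS = {"spectrum-scale"}
BLOCK_BACKENDS = {"spectrum-accelerate", "spectrum-virtualize", "ds8000"}

def provision(backend: str, name: str) -> str:
    if backend in FILE_BACKENDS:
        return f"fileset:{name}"   # or a directory under the file system
    if backend in BLOCK_BACKENDS:
        return f"lun:{name}"       # block LUN mapped to the Docker host
    raise ValueError(f"unknown backend {backend}")

print(provision("spectrum-scale", "dbvol"))   # fileset:dbvol
print(provision("ds8000", "dbvol"))           # lun:dbvol
```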
Ubiquity is publicly available on GitHub.</p> </dd> <dt><b>Enterprise Applications for IBM Cloud Object Storage</b></dt> <dd> <p>Andy Kutner, IBM Cloud Architect, presented the various options available for NAS gateways that can front IBM Cloud Object Storage.</p> <p>Ctera offers NAS gateways, and Endpoint agents for backup and Enterprise File Sync & Share (EFSS). This vendor targets Remote Office/Branch Office (ROBO) and small NAS consolidation with less than 60 TB per office. IBM is a reseller of Ctera, so you can get both Ctera and IBM COS from the same IBM sales rep.</p> <p>Nasuni offers a global file system, accessible from any device, smartphone, tablet or desktop. They are focused on taking out EMC and NetApp NAS solutions. Performance at the edge is combined with capacity in the client's chosen Cloud (including IBM Cloud Object Storage or IBM Bluemix). Infinite snapshots replace backups, offering an RPO of 1 minute for Disaster Recovery. Their global file system "UniFS" offers file locking.</p> <p>Panzura focuses on Cloud Integrated NAS, File Distribution, and Collaboration. This can help eliminate "islands of storage". File Distribution can handle any type of file, but was originally designed for Media and Entertainment, such as videos. Collaboration employs EFSS features for workgroup shared file folders, such as CAD/CAM or engineering blueprints.</p> <p>IBM Spectrum Scale can provide NFS and SMB access to files, and then move colder, less active data to IBM Cloud Object Storage, using the Transparent Cloud Tiering feature. Spectrum Scale offers WAN caching across locations.</p> <p>IBM COS now offers a native NFS v3 interface. This allows read/write NFS access, with S3 API read of the same content. Each file is mapped to a single object.</p> <p>This is targeted at large-scale archive, static-and-stable data, NFS-based backup software, and applications going through the transition from file-based to object-based.
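<p dir="ltr">The one-file-to-one-object mapping described above can be pictured like this (a deliberate simplification; the real gateway keeps its own mapping metadata, and the export path shown is made up):</p>

```python
# Simplified one-to-one mapping between an NFS path and an S3-style
# object key, so the same content is readable through either interface.
def path_to_object_key(export: str, path: str) -> str:
    # Strip the NFS export prefix; the remainder becomes the object name.
    assert path.startswith(export)
    return path[len(export):].lstrip("/")

key = path_to_object_key("/export/archive", "/export/archive/2017/scan.pdf")
print(key)  # 2017/scan.pdf
```

<p dir="ltr">Because the mapping is one-to-one and derived from the path, an S3 client can compute the key for any file without consulting the NFS side.</p>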
This is not intended for multi-site collaboration or primary NAS replacement. Regardless of the number of geographically dispersed IBM COS sites, the NAS can run on only one or two sites initially.</p> <p>To provide NFS v3 support, IBM introduces the new F5100 File Accessers, which talk to an IBM COS Accesser, which in turn acts on specific Vaults in the storage pools. The file-to-object mapping metadata is replicated on-premises across three File Accessers, and optionally replicated asynchronously to a second site for High Availability. The S3 API can read the file by file name, or by Object URI.</p> <p>Initially, the "File Accesser" is only available as a pre-built system, not as software-only.</p> <p>There was not enough time to cover other solutions, including Avere, NetApp AltaVault, or Open Source S3FS.</p> </dd> </dl> <p dir="ltr">This was a great event, just the right size, between 1,500 and 2,000 attendees. Similar IBM Technical University events are coming up later this year:</p> <ul dir="ltr"> <li>August 8 - 10 -- Sao Paulo, Brazil</li> <li>August 15 - 17 -- Melbourne, Australia</li> <li>September 12 - 14 -- Johannesburg, South Africa</li> <li>October 9 - 13 -- Munich, Germany</li> <li>October 16 - 20 -- New Orleans, LA, USA</li> <li>November 6 - 10 -- Prague, Czech Republic</li> <li>November 13 - 17 -- Washington, D.C., USA</li> </ul> <p dir="ltr">Check out the [<a href="">IBM Systems Technical University 2017</a>] page.</p> <p dir="ltr">Save the date: Next year, we will be back in Orlando, April 30-May 4!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Robert Haas</a>, <a href="" rel="tag">Spectrum Conductor</a>, <a href="" rel="tag">Software Defined Infrastructure</a>, <a href="" rel="tag">SDI</a>, <a href="" rel="tag">Spectrum LSF</a>, <a href="" rel="tag">Spectrum Symphony</a>, <a href="" rel="tag">Docker</a>, <a href="" rel="tag">RKT</a>, <a href=""
rel="tag">LXD</a>, <a href="" rel="tag">Spectrum Conductor for Spark</a>, <a href="" rel="tag">Spectrum Conductor for Containers</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">GlusterFS</a>, <a href="" rel="tag">Portworx</a>, <a href="" rel="tag">Rancher Convoy</a>, <a href="" rel="tag">RexRay</a>, <a href="" rel="tag">Contiv</a>, <a href="" rel="tag">Flocker</a>, <a href="" rel="tag">Redis</a>, <a href="" rel="tag">MongoDB</a>, <a href="" rel="tag">PostgreSQL</a>, <a href="" rel="tag">OracleDB</a>, <a href="" rel="tag">Java</a>, <a href="" rel="tag">Kubernetes</a>, <a href="" rel="tag">Mesos</a>, <a href="" rel="tag">Docker+Swarm</a>, <a href="" rel="tag">SR-IOV</a>, <a href="" rel="tag">Andy Kutner</a>, <a href="" rel="tag">Ctera</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">Nasuni</a>, <a href="" rel="tag">Panzura</a>, <a href="" rel="tag">Avere</a>, <a href="" rel="tag">NetApp AltaVault</a>, <a href="" rel="tag">S3FS</a>, <a href="" rel="tag">F5100</a>, <a href="" rel="tag">File Accessers</a>, <a href="" rel="tag">NFS</a>, <a href="" rel="tag">SMB</a></p> <p dir="ltr"><b>IBM Systems Technical University - Day 4 Meet the Experts in Storage</b> (TonyPearson, 2017-05-26)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Thursday evening, we had the "Meet The Experts" sessions. There were four: Storage, Power Systems, z/OS, and a fourth one focused on z/VM and Linux on z Systems. I was on the expert panel for Storage.</p> <p dir="ltr">Mo McCullough was the emcee.
Special thanks to Shelly Howrigon for her help with this event.</p> <blockquote dir="ltr">(<b>Disclaimer:</b> Do not shoot the messenger! We had a dozen or so experts on the panel, representing System Storage hardware, software and services. I took notes, trying to capture the essence of the questions, and the answers given by the various IBM experts. The answers from individual IBMers may not reflect the official position of IBM management. I leave out any references to unannounced plans or products. Where appropriate, <i>my own commentary will be in italics.</i>)</blockquote> <table border="0" dir="ltr"> <tbody> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>When will IBM offer a single pane of glass management for all of its IBM storage products?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">IBM is working hard on this. Our strategy is to focus on IBM Spectrum Control as the primary answer. We have extended support across block, file and object, with support for IBM Spectrum Scale and IBM Cloud Object Storage System. We have also provided plug-ins for VMware, Cisco UCS Director, and OpenStack Horizon, for those who prefer those management systems instead.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>What we really need are REST APIs!</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Good point.
IBM already has some REST APIs for the DS8000, XIV and Spectrum Protect. Now that IBM has a browser-based GUI across its entire product line, it is our strategy to offer REST APIs across the product line as well.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>What is the next generation of ProtecTIER Data Deduplication going to look like?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">IBM is focused on providing "data deduplication" for backup workloads directly through IBM Spectrum Protect backup software. IBM continues to sell IBM ProtecTIER.<br> <br> <i>(Virtual Tape Libraries like IBM ProtecTIER and Dell EMC Data Domain were created to handle the fact that much backup software was originally designed only for tape drives and libraries. A VTL was disk that pretended to be a tape library. Now that IBM Spectrum Protect, NetBackup, Commvault, and all of the other modern backup products write natively to disk, object storage or Cloud services, there really isn't a need for VTL products any more.)</i></td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>Why does IBM bother with an all-Flash version of the DS8000 when it already has IBM FlashSystem?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Different products for different workloads. IBM DS8000 offers unique support for z System mainframe FICON attachment and 520-byte block support for IBM i.
IBM also offers all-Flash Elastic Storage Server, all-Flash SVC and Storwize products that complement the IBM FlashSystem product line.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>We like how XIV can hot-enable encryption, even with existing data on it. Why doesn't DS8000 offer this?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Two separate implementations. At the time IBM DS8000 encryption was designed, it was decided that the client needed to enable encryption before writing any data.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>Will we see a spinning disk version of the FlashSystem A9000?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Flash is now less expensive than spinning disk; I don't see why IBM would go backwards. The future is Flash.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>We would like Spectrum Control to manage our Dell EMC Isilon.</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Yes, we have heard that from others. We are working on extending our third-party support. Send in your cards and letters to help us prioritize.
Or, better yet, submit a "Request For Enhancement" (RFE).</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>The difference between Tier 0 (Write Endurance) flash and Tier 1 (Read Intensive) flash is confusing. Are there any plans in the IT industry to simplify this?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">No, if anything it will get worse. Today, IBM's Tier 0 is 10 Drive Writes Per Day (DWPD), and Tier 1 is 1 DWPD. Other SSD drives offer 2, 3, 5, 10, 15 and 25 DWPD. As people buy more Flash, and less disk, expect more differentiation in this area.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>We would like to tune Easy Tier on the Storwize products.</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Understood. IBM typically implements new features on the DS8000 platform first, then rolls them over to Spectrum Virtualize.
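<p dir="ltr">(Aside: the DWPD figures above translate into total write endurance with simple arithmetic. The 3.84 TB drive capacity and 5-year warranty period below are illustrative assumptions, not IBM specifications.)</p>

```python
# Total bytes written (TBW) endurance implied by a DWPD rating:
# TBW = DWPD * drive capacity * warranty period in days.
def tbw_terabytes(dwpd: float, capacity_tb: float, warranty_years: int = 5) -> float:
    return dwpd * capacity_tb * warranty_years * 365

print(round(tbw_terabytes(10, 3.84)))  # Tier 0: about 70080 TB over 5 years
print(round(tbw_terabytes(1, 3.84)))   # Tier 1: about 7008 TB over 5 years
```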
The ability to influence allocation order, pin or avoid tiers, and have application API to influence the placement are already in DS8000.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>What will the future of Storwize look like?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">We don't have enough time to cover that in this meeting.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>Recently, you raised the maximum Storwize FlashCopy background copy rate from 64 MB/sec to 2 GB/sec, but is that realistic?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">The setting provides the background task a target "grains per second" to try to achieve. It may not be possible depending on your configuration and the number of concurrent tasks. 
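<p dir="ltr">(To put "grains per second" in perspective: Storwize FlashCopy grains are 64 KB or 256 KB, so a target rate converts to grains per second with simple division. The figures below are back-of-the-envelope arithmetic, not measured results.)</p>

```python
# Back-of-the-envelope: how many grains per second a background copy
# rate implies, for the two Storwize FlashCopy grain sizes.
def grains_per_sec(rate_bytes_per_sec: float, grain_kb: int) -> float:
    return rate_bytes_per_sec / (grain_kb * 1024)

two_gb = 2 * 1024**3
print(grains_per_sec(two_gb, 256))  # 8192.0 grains/sec at 256 KB grains
print(grains_per_sec(two_gb, 64))   # 32768.0 grains/sec at 64 KB grains
```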
Your Storwize may be so busy with background activity that it won't take host I/O.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>We have been giving you our wishlist, but are there any questions the IBM experts have for the audience?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">Yes: are any clients being asked to secure storage against Ransomware and insider threats from disgruntled employees?<br> <br> <i>(Several hands went up, and we collected their names to have further discussions.)</i></td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>How should we assign business value to data?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">IBM Spectrum Virtualize allows you to assign metadata tags to files, so that these can be used to drive different policies.<br> <br> <i>(The process of assigning business value is often called "Data Rationalization" and is part of ILM, BC/DR, and Data Governance efforts.)</i></td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>I am concerned that AES 256 encryption is not good enough now that there is Quantum Computing.</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">It will be decades before Quantum Computing is good enough to break these codes.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2"
style="vertical-align:middle"><b>Will Blockchain drive huge or unique storage requirements?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">No. The entries are small. You are appending small transactions to the end of existing ledgers. Nothing unique or different.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>Were there any topics not adequately covered at this conference?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">IBM didn't have much to offer for Spectrum Compute family of software, the Software Defined Infrastructure (SDI) that runs on both x86 and POWER systems. This should be done under the POWER brand, but many clients use Spectrum Compute with x86 servers. Ironically, Spectrum Compute products are managed under the Storage division, since Spectrum Compute and Spectrum Storage work well together.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>We would like Storwize's clever NPIV to be implemented in all of the other IBM arrays, starting with DS8000.</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">That probably won't happen, as they are different architectures. Whereas Storwize and the rest of IBM Spectrum Virtualize family were designed for nodes to fail, and take their ports down with them, the DS8000 has independent I/O bays that continue to run independent of either POWER8 node. 
Likewise, FlashSystem 900 has similar separation between the FCP adapters and the processing nodes.</td> </tr> <tr> <td colspan="3"> </td> </tr> <!--–– separator ––--> <tr> <td width="45px"><img alt="Question: " src=""></img></td> <td colspan="2" style="vertical-align:middle"><b>Can we have consistent licensing across the entire IBM Spectrum Virtualize set of products, please?</b></td> </tr> <tr> <td> </td> <td valign="top"><img alt="Answer: " src=""></img></td> <td style="vertical-align:middle">We have a task force to investigate this, and will gladly add your name to the list for input and feedback.</td> </tr> <tr> <td colspan="3"> </td> </tr> </tbody> </table> <p dir="ltr">While the conference continues Friday morning, for many attendees, this was the last event.</p> <p dir="ltr"> </p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Mo McCullough</a>, <a href="" rel="tag">Shelly Howrigon</a></p> <p dir="ltr"><b>IBM Systems Technical University - Day 4 afternoon</b> (TonyPearson, 2017-05-26)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the afternoon sessions of Day 4.</p> <dl dir="ltr"> <dt><b>Spectrum Scale for File and Object Storage</b></dt> <dd> <div style="float:left;padding:20px;"><a href=""><img alt="s014066-Scale-ESS-Orlando-v1705a-cover" height="247" src="" width="320"></img></a><br> Slides are available on IBM Expert<br> Network on [<a href="">Slideshare.net</a>]</div> <p>IBM Spectrum Scale was formerly called GPFS and has been around since 1998.
I am glad it was renamed, as GPFS suffered from "guilt by association" with other file systems, AFS, DFS, XFS, ZFS, and so on.</p> <p>Spectrum Scale does much more: it supports volume, file and object-level access; supports POSIX standards for Windows, AIX and Linux; supports Hadoop and Spark with a 100 percent compatible HDFS Transparency Connector; and supports the NFS, SMB and iSCSI protocols, as well as OpenStack Swift and Amazon S3 object-based access.</p> <p>Initially designed for video streaming and High Performance Computing (HPC), Spectrum Scale has been extended by IBM to a variety of workloads across different industries. More than 5,000 production systems are running at client locations.</p> </dd> <dt><b>IBM Spectrum Protect solution design: Server, Deduplication and Disaster Recovery decisions</b></dt> <dd> <p>Dan Thompson, IBM Storage Software Technical Sales Specialist, presented this session.</p> <p>To make it easier to deploy, IBM Spectrum Protect now has a set of tested "blueprints" that are organized into small, medium and large. Find the one that fits your needs, and it will tell you exactly how the server should be configured. Dan recommends having a "test system" to try out new releases of IBM Spectrum Protect.</p> <p>For multiple server configurations, Dan recommends adopting a standard naming convention, and making use of Enterprise Configuration and server-side Client Option Sets. You may want to consider discrete instances for special non-backup functions, like the library manager or the Operations Center hub server, which allows you to upgrade more aggressively without affecting your backup clients.</p> <p>If you plan to run multiple Spectrum Protect instances on the same VMware host, set the DBmemPercent to avoid having DB2 consume all of the memory, which would interfere with other Spectrum Protect instances.</p> <p>For clustered servers, IBM supports Active/Passive, Active/Active, Many/One, and Many/Few configurations.
You can mix and match these as needed.</p> <p>For data spill remediation, consider NIST 800-88 data shredding. This depends on the type of storage media used.</p> <p>IBM Spectrum Protect for Data Retention, formerly called System Storage Archive Manager (SSAM), offers enforced Non-erasable, Non-rewriteable (NENR) immutability protection. (This used to be called Write-Once-Read-Many, or WORM for short, but since WORM applies only to tape and optical media, and IBM Spectrum Protect now supports Flash, Disk, Object Storage and Cloud repositories, IBM has adopted the term NENR instead). The third party KPMG has certified, to its satisfaction, that IBM Spectrum Protect for Data Retention meets the requirements of the SEC 17a-4 regulations.</p> <p>When sizing your server, Dan recommends that you always "over-size" it and grow into it. Use the published "Performance Optimization Guide" to help. Monitor the server and storage using OS- and device-specific monitoring, in combination with IBM Spectrum Protect reports.</p> <p>If you are still on BC Tiers 1 or 2, transporting tapes to a remote vaulting facility or secondary data center, consider upgrading to at least BC Tier 3. This can be done via electronic vaulting to an Automated Tape Library (ATL), Virtual Tape Library (VTL) or IBM Cloud Object Storage, or a Cloud service provider such as IBM Bluemix or Amazon Web Services. This can be supplemented using DB2 HADR for the IBM Spectrum Protect database.</p> <p>While the Spectrum Protect server can run bare-metal or as a VM, the VM instance will not have support for FCP-based tape or a Virtual Tape Library. Many people are moving off tape, especially VTL, and using native Disk, Directory or Cloud container pools instead.</p> <p>Lastly, take advantage of the fact that Operations Center can view all Spectrum Protect servers across all locations.
This can be helpful.</p> </dd> <dt><b>Enabling Mission Critical NoSQL workloads using IBM trillions of operations technology</b></dt> <dd> <p>TJ Harris, from the IBM Storage CTO office, and Scott Brewer, FlashSystem Team Lead, co-presented this session.</p> <p> </p> <p>They gave a background on NoSQL, the most popular being MongoDB. The IT industry estimates that NoSQL will grow at a 38 percent CAGR from 2015-2020.</p> <p>The problem occurs when NoSQL applications go through a full file system stack to work with low-latency devices like Flash, especially when the writes are small, often just a few dozen bytes to 100 KB. Fortunately, IBM Research has created the "Trillions of Operations" project to explore ways to reduce the software stack and make use of the NVMe protocol.</p> <p>The top three challenges for NoSQL deployments are: (a) Cost, (b) Data management and retention, and (c) Data relevancy.</p> <p>To enable innovation, MongoDB offers a "Storage Engine API" that allows others to compete in this space. Currently MMAP v1 and WiredTiger are supported. IBM Research implemented its "Trillions of Operations" project as a plug-in to this API, optimized for high rates of data ingest. Compared to Facebook's RocksDB, IBM was 14x faster on writes, and 2.1x faster on reads.</p> <p>Another challenge is coordinating backups and disaster recovery when applications mix traditional RDBMS with these new NoSQL databases.</p> </dd> </dl> <p dir="ltr">The week is nearly over, and I can see the light at the end of the tunnel.
Everyone had a great time at last night's event at the Universal City Walk and Blue Man Group.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">POSIX</a>, <a href="" rel="tag">HDFS</a>, <a href="" rel="tag">Hadoop</a>, <a href="" rel="tag">AFS</a>, <a href="" rel="tag">DFS</a>, <a href="" rel="tag">XFS</a>, <a href="" rel="tag">ZFS</a>, <a href="" rel="tag">HDFS Transparency Connector</a>, <a href="" rel="tag">NFS</a>, <a href="" rel="tag">SMB</a>, <a href="" rel="tag">iSCSI</a>, <a href="" rel="tag">OpenStack Swift</a>, <a href="" rel="tag">Amazon S3</a>, <a href="" rel="tag">Dan Thompson</a>, <a href="" rel="tag">Spectrum Protect</a>, <a href="" rel="tag">Client Option Sets</a>, <a href="" rel="tag">NIST 800-88</a>, <a href="" rel="tag">data spill remediation</a>, <a href="" rel="tag">data shredding</a>, <a href="" rel="tag">SSAM</a>, <a href="" rel="tag">NENR</a>, <a href="" rel="tag">WORM</a>, <a href="" rel="tag">KPMG</a>, <a href="" rel="tag">Immutability</a>, <a href="" rel="tag">SEC 17a-4</a>, <a href="" rel="tag">Performance Optimization Guide</a>, <a href="" rel="tag">ATL</a>, <a href="" rel="tag">VTL</a>, <a href="" rel="tag">DB2 HADR</a>, <a href="" rel="tag">TJ Harris</a>, <a href="" rel="tag">Scott Brewer</a>, <a href="" rel="tag">FlashSystem</a>, <a href="" rel="tag">NVMe</a>, <a href="" rel="tag">NoSQL</a>, <a href="" rel="tag">MongoDB</a>, <a href="" rel="tag">RDBMS</a></p>
<p dir="ltr"><b>IBM Systems Technical University - Day 4 morning</b> (TonyPearson, 2017-05-26)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the sessions on the morning of Day 4.</p> <dl dir="ltr"> <dt><b>Configurable IBM Spectrum Scale</b></dt> <dd> <p>Kent Koeninger presented IBM Spectrum Scale software, which Kent refers to as "Configurable Spectrum Scale" (or CSS for short), as opposed to the pre-built system known as the Elastic Storage Server (ESS).</p> <p>Why choose CSS versus ESS? Lower entry price. You can start with just two single-socket servers and a drawer of disk.</p> <p>IBM Spectrum Scale was formerly called IBM General Parallel File System (GPFS). Many who tried earlier versions of GPFS found it difficult to configure, because it only had a command line interface. Now, Spectrum Scale has a fully-functional GUI, and clients have been able to install and configure Spectrum Scale in just 30 minutes!</p> <p>How big can Spectrum Scale grow? As much as your budget can afford! With an architecture that can support YottaBytes of data and 900 quintillion files, you won't hit any limits anytime soon.</p> <p>There are some unique capabilities of ESS not available in CSS. For example, ESS offers Spectrum Scale Native RAID (erasure coding) with fast rebuild times, and ESS is certified for SAP HANA. You can combine CSS and ESS in any combination in the same Spectrum Scale cluster to create a "data lake" for mixed workloads.</p> <p>A good use case for Spectrum Scale, either CSS or ESS, is backup. Kent explained why it is an excellent option to store backups with enterprise backup software such as IBM Spectrum Protect or Commvault.</p> </dd> <dt><b>VersaStack - Hybrid Cloud like no other</b></dt> <dd> <p>This session was jointly presented by Chris Vollmar, IBM Storage Architect, and Brent Anderson, Cisco Global Consulting Systems Engineer.
IBM and Cisco have been partners for more than 25 years.</p> <p>VersaStack combines Cisco UCS x86 servers, Cisco Nexus and MDS switches, and IBM FlashSystem or Spectrum Virtualize storage.</p> <p>What if you have a SAN Infrastructure built entirely from IBM b-type or Brocade-based switches? Cisco supports their SAN switches for this, but nobody has tested VersaStack in this combination, and UCS Director does not manage this combination, so IBM does not support this. Instead, for this situation, IBM recommends making an external connection via Ethernet, or using direct-attach configurations.</p> <p>The Cisco Validated Design process spends four months on testing, and gives you a bulletproof process to deploy the solution.</p> <p>There is a difference between Cisco UCS Manager and UCS Director. UCS Manager is available at no additional charge, but only manages the Cisco x86 servers. UCS Director is optional and separately priced, and manages Cisco servers, Cisco networking, and IBM Spectrum Virtualize storage.</p> <p>Brent explained the benefits of UCS Management through policies and profiles.</p> <p>Chris covered Cisco CloudCenter, which the Cisco team shortens to just "C3". IBM Spectrum Copy Data Management can be used to move snapshots of data between on-premises and off-premises Cloud to help in Hybrid Cloud configurations.</p> </dd> <dt><b>How to Design an IBM Spectrum Scale solution</b></dt> <dd> <p>Tomer Perry, IBM Spectrum Scale I/O Development, presented this session.</p> <p>For those who want to bring up a quick IBM Spectrum Scale environment to play around with, you can do this in as little as 30 minutes. But to design a mission-critical deployment, additional requirements may need to be addressed.
You may need to consult with not just storage admins, but also application owners, network admins and security personnel.</p> <p>Large companies have hundreds or thousands of applications, so Tomer recommends grouping these into "Workload families", based on data set types, access patterns and performance requirements. For NAS take-out, 80 percent of NAS I/O is "get attribute" requests that can easily be served directly from cache memory.</p> <p>For each workload family, you may need to decide on snapshots, quotas, namespace (bind mounts, symlinks, etc.), security (ACLs, encryption), estimated capacity, replication, BC/DR, backup and ILM requirements.</p> <p>Unless this is a completely greenfield deployment, the existing infrastructure needs to be evaluated. This includes the LAN and WAN network topology, name resolution (DNS), time services (NTP), Authentication (AD, LDAP, NIS, Keystone), Keyserver (IBM SKLM), Monitoring and Migration requirements.</p> <p>Tomer suggests designing the environment in this order: Cluster, File System, Storage Pools, Fileset, Replication, and finally Monitoring.</p> <p>Generally, you need three NSD servers per cluster. For those licensing Spectrum Scale Standard Edition by the socket, you may be tempted to put everything into one big cluster. The new capacity-based Spectrum Scale Data Management Edition eliminates that concern, so Tomer recommends having separate compute clusters and storage clusters, connected by a cross-cluster mount. All nodes in a cluster are considered one "ssh" administration domain.</p> <p>A single Spectrum Scale namespace can support up to 256 file systems. There are various reasons to have multiple file systems: block size, backup/recovery, snapshots, quotas, and cross-cluster isolation. If a file system gets corrupted, it will not affect other file systems.
In an internal test, an "fsck" on a file system with 1 billion files and 1 PB of data took only 30 minutes to repair.</p> <p>Storage Pool design can separate metadata from content, and workloads can be separated onto different storage media. With ILM, HSM and TCT, you can move colder data to Cloud, Object Storage, Spectrum Protect or Spectrum Archive.</p> <p>Filesets are tree branches within each file system. IBM Spectrum Scale supports both dependent and independent filesets. Filesets can be used for Non-Erasable, Non-Rewritable (NENR) immutability, policies, quotas and snapshots. Consider using a fileset instead of carving off a new file system.</p> <p>Spectrum Scale offers both synchronous and asynchronous replication. For synchronous, the ReadReplicaPolicy can be set to default, local or fastest. For asynchronous, there are a variety of AFM modes (Read-only, Local-Update, Single-Writer, Independent-Writer, and Disaster Recovery). You may need to decide if your AFM gateways are dedicated or collocated. You will need to tune your TCP buffers for WAN performance to get the RPO you desire.</p> </dd> </dl> <p dir="ltr">The nice thing about IBM solutions is that you can start small, and grow big.
In all of these examples above, IBM offers sizes to match nearly any IT budget.</p> <p dir="ltr"><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Kent Koeninger</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">Elastic Storage Server</a>, <a href="" rel="tag">General Parallel File System</a>, <a href="" rel="tag">GPFS</a>, <a href="" rel="tag">Spectrum Scale Native RAID</a>, <a href="" rel="tag">Erasure Coding</a>, <a href="" rel="tag">SAP HANA</a>, <a href="" rel="tag">Spectrum Protect</a>, <a href="" rel="tag">Commvault</a>, <a href="" rel="tag">VersaStack</a>, <a href="" rel="tag">Hybrid Cloud</a>, <a href="" rel="tag">Chris Vollmar</a>, <a href="" rel="tag">Brent Anderson</a>, <a href="" rel="tag">Cisco</a>, <a href="" rel="tag">Cisco Validated Design</a>, <a href="" rel="tag">Brocade</a>, <a href="" rel="tag">UCS Manager</a>, <a href="" rel="tag">UCS Director</a>, <a href="" rel="tag">Cisco CloudCenter</a>, <a href="" rel="tag">Tomer Perry</a>, <a href="" rel="tag">SKLM</a>, <a href="" rel="tag">NSD</a>, <a href="" rel="tag">fsck</a>, <a href="" rel="tag">NENR</a>, <a href="" rel="tag">AFM</a></p> <p dir="ltr"><b>IBM Systems Technical University - Day 3</b> (2017-05-26)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the sessions of Day 3.</p> <dl dir="ltr"> <dt><b>Ethernet-only SANs -- Myth or Reality?</b></dt> <dd> <p>Anuj Chandra, IBM Advisory Engineer, presented an excellent overview of Ethernet-based SANs.
He started with a quick history of Ethernet, starting with Robert Metcalfe's original drawing for his concept.</p> <p>In the past, Ethernet was used for email and message transfer, and so dropped packets were tolerated. However, with the use of Ethernet for SANs, many standards have been adopted to make Ethernet networks more robust. These meet requirements for flow control, congestion management, low latency, data integrity and confidentiality, network isolation, and high availability.</p> <p>These standards are known as IEEE 802.1 "Data Center Bridging", including 802.1Qbb Priority Flow Control, 802.1Qaz Enhanced Transmission Selection, and 802.1Qau Congestion Notification. There is also the IETF Transparent Interconnection of Lots of Links (TRILL) to replace Spanning Tree Protocol (STP). All of these features are negotiated between the server and storage endpoints. Ethernet that supports these new standards is often referred to as "Converged Ethernet", since it handles both traditional email/message traffic as well as SAN data traffic.</p> <div style="float:left;padding:20px;"><img alt="s015065-RDMA" height="180" src="" width="320"></img></div> <p>In addition to 1GbE and 10GbE, we now have 2.5, 5, 20, 25, 40, 50 and 100 Gb Ethernet speeds. By 2020, Anuj estimates over half of all Ethernet ports will be 25 GbE or faster. Amazingly, some of these can work on existing 10GBASE-T cabling.</p> <p>Anuj also covered Remote Direct Memory Access (RDMA), and the RDMA-capable Network Interface Cards (RNIC) that support it. In one chart, shown here, Anuj explained Infiniband, RDMA over Converged Ethernet (RoCE) and RoCE v2, and Internet Wide Area RDMA Protocol (iWARP).</p> <p>While many of these enhancements were intended for Fibre Channel over Ethernet (FCoE), the real beneficiary has been iSCSI. Now there is iSCSI Extensions for RDMA (iSER) to take even more advantage of these changes, which can work with Infiniband, RoCE or iWARP.
All of these networks can also be used as the basis for NVMe over Fabrics (NVMeOF).</p> <p>Ethernet is the backbone of Cloud usage, and IBM is well positioned to take advantage of these new networking technologies.</p> </dd> <dt><b>Digital Video Surveillance solutions for extended video evidence protection</b></dt> <dd> <p>Dave Taylor, IBM Executive Architect for Software Defined Storage solutions, presented this session on Digital Video Surveillance (DVS).</p> <p>Most video surveillance is either analog-based, going to standard VHS tapes, or file-based. Sadly, security guards who watch live camera feeds lose their attention span after 22 minutes.</p> <p>There are an estimated 72 million cameras globally, with 1.5 million more added every year.</p> <p>City governments spend 57 percent of their budgets on "public safety". This can include body cams for police departments. Taser International, now called AXON, dominates the body-cam market.</p> <p>City budgets may not be prepared to store all of this video content in a cloud that complies with Criminal Justice Information Services (CJIS) standards. These Cloud services tend to be more expensive, as the videos must be treated as evidence, tamper-proof, and with an appropriate chain of custody.</p> <p>DVS is not just storing movies. IBM offers Intelligent Video Analytics. It is important to be able to derive insight and actionable responses.</p> <p>Storage capacity adds up quickly. A standard 1080p (1920 by 1080 pixel) camera generates 2.92 GB per hour, 70 GB per day, and over 2TB per month.
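The per-camera figures above scale linearly, so capacity planning is simple arithmetic. As a quick sanity check (only the 2.92 GB/hour rate comes from the talk; the rest is multiplication):

```python
# Capacity planning for digital video surveillance.
# Per the session: a standard 1080p camera generates about 2.92 GB per hour.
GB_PER_HOUR = 2.92

per_day = GB_PER_HOUR * 24               # GB per camera per day (~70 GB)
per_month = per_day * 30                 # GB per camera per month (~2.1 TB)
fleet_pb = per_month * 1000 / 1e6        # 1,000 cameras for a month, in PB

print(f"per camera per day:   {per_day:.1f} GB")
print(f"per camera per month: {per_month / 1000:.2f} TB")
print(f"1,000 cameras, one month: {fleet_pb:.2f} PB")
```

Running this reproduces the numbers quoted in the session: roughly 70 GB per day and a bit over 2 TB per month per camera.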
If you have 1,000 cameras, that's over 2PB of data.</p> <p>For xProtect servers running Windows, the Tiger Bridge Connector can be used to move the video files to either IBM Spectrum Scale or IBM Cloud Object Storage.</p> </dd> <dt><b>Deep Dive into HyperSwap for Active-Active applications and Disaster Recovery</b></dt> <dd> <p>Andrew Greenfield, IBM Global Engineer for Storage, explained the different ways HyperSwap is implemented across the IBM storage portfolio.</p> <p>For IBM DS8000, HyperSwap is based on Metro Mirror synchronous replication. In the event that the primary DS8000 fails, the host server can automatically redirect all I/O to the secondary DS8000. This is often referred to as "High Availability" (HA), and in some cases can serve as Disaster Recovery.</p> <p>For IBM Spectrum Virtualize products, including SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software, the implementation is different.</p> <p>Previously, SVC offered Stretched Clusters, which put one node at one site and a second node at another site, allowing for an Active/Active configuration. Unfortunately, the nodes in FlashSystem V9000 and Storwize are "connected at the hip", effectively bolted together, so putting separate nodes in different locations was not possible. To solve this, IBM developed HyperSwap, which allows one node-pair to replicate across sites to another node-pair in the same Spectrum Virtualize cluster.</p> <p>However, even though it is called "HyperSwap", it is not implemented in a way similar to the DS8000 method.
Instead, Spectrum Virtualize uses Global Mirror with Change Volumes to replicate data between sites.</p> </dd> <dt><b>IBM Storage and VMware Integration</b></dt> <dd> <p>This session was co-presented by Brian Sherman, IBM Distinguished Engineer, and Steve Solewin, IBM Corporate Solutions Architect.</p> <div style="float:left;padding:20px;"><img alt="s014474-SCBE" height="240" src="" width="320"></img></div> <p>For nearly two decades, IBM has been a "Technology Alliance Partner" with VMware. To provide consistent integration with all the features and functions of VMware, IBM Spectrum Control Base Edition (SCBE) is provided at no additional charge for IBM DS8000, XIV, FlashSystem and Spectrum Virtualize products.</p> <p>SCBE is downloadable as an RPM for Red Hat Enterprise Linux (RHEL), and can run bare-metal or as a VM.</p> <p>For those using Hyper-Scale Manager, it will automatically install a special version of SCBE that manages only the A-line products (FlashSystem A9000, FlashSystem A9000R, XIV and Spectrum Accelerate).</p> <p>Storage admins can define "storage services" that can be assigned to vCenter.
This allows VMware admins to allocate storage in self-service mode.</p> </dd> </dl> <p dir="ltr">After the meetings were over, IBM had a special event at Universal CityWalk to enjoy some drinks, food, and conversation, and to watch Blue Man Group.</p> <p dir="ltr"><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Anuj Chandra</a>, <a href="" rel="tag">Dave Taylor</a>, <a href="" rel="tag">Andrew Greenfield</a>, <a href="" rel="tag">Brian Sherman</a>, <a href="" rel="tag">Steve Solewin</a>, <a href="" rel="tag">Ethernet</a>, <a href="" rel="tag">Robert Metcalfe</a>, <a href="" rel="tag">IEEE</a>, <a href="" rel="tag">Data Center Bridge</a>, <a href="" rel="tag">IETF</a>, <a href="" rel="tag">TRILL</a>, <a href="" rel="tag">Converged Ethernet</a>, <a href="" rel="tag">RDMA</a>, <a href="" rel="tag">RoCE</a>, <a href="" rel="tag">iWARP</a>, <a href="" rel="tag">iSCSI</a>, <a href="" rel="tag">iSER</a>, <a href="" rel="tag">NVMe</a>, <a href="" rel="tag">NVMeOF</a>, <a href="" rel="tag">Video Surveillance</a>, <a href="" rel="tag">Digital Video Surveillance</a>, <a href="" rel="tag">DVS</a>, <a href="" rel="tag">VHS</a>, <a href="" rel="tag">TASER</a>, <a href="" rel="tag">AXON</a>, <a href="" rel="tag">CJIS</a>, <a href="" rel="tag">xProtect</a>, <a href="" rel="tag">Tiger Bridge Connector</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">SAN Volume Controller</a>, <a href="" rel="tag">SVC</a>, <a href="" rel="tag">FlashSystem V9000</a>, <a href="" rel="tag">Storwize V7000</a>, <a href="" rel="tag">Storwize V5000</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">Stretch Cluster</a>, <a href="" rel="tag">HyperSwap</a>, <a href="" rel="tag">DS8000</a>, <a href="" rel="tag">Spectrum Control</a>, <a href="" rel="tag">Spectrum Control Base Edition</a>, <a href="" rel="tag">RHEL</a>, <a href="" 
rel="tag">Hyper-Scale Manager</a>, <a href="" rel="tag">FlashSystem A9000</a>, <a href="" rel="tag">FlashSystem A9000R</a>, <a href="" rel="tag">XIV</a>, <a href="" rel="tag">Spectrum Accelerate</a></p> <p dir="ltr"><b>IBM Systems Technical University - Day 2 afternoon</b> (2017-05-26)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here's my recap of the afternoon sessions of Day 2.</p> <dl dir="ltr"> <dt><b>IBM Spectrum Protect deep dive into Container Storage Pools</b></dt> <dd> <p>Ron Henkhaus, IBM Certified Consulting IT Specialist, presented the new Spectrum Protect concept of "Container Pools", which can either be "Directory Pools" on SAN or NAS-based disk storage, or "Cloud Pools". Container pools can contain both deduplicated and non-deduplicated data.</p> <p>Ron cautioned that directory pools should not be placed on the same file system as your Spectrum Protect database or logs. Also, best practice is to assign each directory pool an "overflow" pool, which can be any non-directory pool, such as disk, tape or cloud container.</p> <p>Cloud pools can use the OpenStack Swift, V1 Swift, or Amazon S3 protocols, targeting Amazon Web Services, IBM Bluemix, or IBM Cloud Object Storage. You can pre-define the vaults and buckets in the configuration.</p> <p>For off-premises Cloud pools, the data is encrypted by default. For other container pools, encryption is optional. Performance to Cloud pools has been improved by using "accelerator storage", basically a disk cache to collect data before sending it over to the Cloud pool. Backups to Cloud pools can reach 8 TB per hour.
Restore rates vary from 500 to 1,500 GB per hour.</p> <p>Container Pools were designed for the new "Deduplication 2.0" feature introduced in version 7. Traditional Dedupe 1.0 to Device Class FILE is still available, but not recommended.</p> <p>Version 7.1.6 changed the compression algorithm from LZW to LZ4. In all cases, Spectrum Protect performs these actions in this order: deduplication, compression, encryption. Data that is encrypted by the Spectrum Protect client is therefore not deduped.</p> <p>The "Protect Storage Pool" command can replicate a directory pool to either a remote directory pool or a Cloud pool. In addition to this remote replication, you can copy a directory pool to tape to offer air-gap protection against ransomware. Such tapes are considered part of the "Copy Container Pool". In the event of directory pool corruption, the data can be repaired from either the replica or tape.</p> <p>IBM Aspera can now be used for replication, using SSL and AES 128-bit encryption. If your latency is greater than 50 msec, and you have more than 0.5 percent packet loss, Aspera might help. This is available for Linux on x86 platforms running v7.1.6 or higher.</p> <p>For existing customers, IBM Spectrum Protect allows you to convert your FILE, VTL and TAPE device class pools to directory or Cloud pools.</p> </dd> <dt><b>Introduction to IBM Cloud Object Storage (powered by Cleversafe)</b></dt> <dd> <p>In 2015, IBM acquired Cleversafe, recognized as the #1 Object Storage vendor. Their flagship product was officially renamed the IBM Cloud Object Storage System, which some abbreviate informally as IBM COS. IBM offers the IBM Cloud Object Storage System in three ways: as software, as pre-built systems, and as a cloud service on IBM Bluemix (formerly known as SoftLayer).</p> <p>Since then, IBM has been busy integrating IBM COS into the rest of the storage portfolio.
I explained how IBM COS can be used for all kinds of static-and-stable data, but it is not suited for frequently changed data, such as virtual machines or databases.</p> <p>Object storage can be accessed via NFS or SMB NAS protocols using a gateway product, like IBM Spectrum Scale, or those from third-party partners like Ctera, Avere, Nasuni or Panzura. It can also be used as an alternative to tape for backup copies, and is already supported by major backup software like IBM Spectrum Protect, Commvault Simpana, and Veritas NetBackup.</p> <p>While other cloud service providers have offered data storage in the cloud, this new offering also allows hybrid configurations with geographically dispersed erasure coding.</p> <p>Unlike RAID, which protects against the loss of one or two drives, erasure coding can protect against a larger number of concurrent failures. For example, using an Information Dispersal Algorithm (IDA) of "7+5", where seven pieces of data are encoded onto twelve independent disks, the system can lose up to five disk drives without losing any data.</p> <p>Combining this with a Geographically Dispersed Configuration across three or more sites means that you can lose an entire data center, four of the twelve disks, and still have instant full access to all of your data from the eight drives at the other locations. In the graphic, you see two on-premises data centers combined with a third location in IBM SoftLayer.</p> <p>For more on recent announcements on this, see my blog post: [<a href="">New IBM Cloud Object Storage Offerings for March 2017</a>].</p> </dd> <dt><b>New Generation of Storage Tiering: Simpler Management, Lower Costs, and Improved Performance</b></dt> <dd> <div style="float:right;padding:20px;"><img alt="s014071-Storage-Tiering-Orlando-v1705a-cover" height="247" src="" width="320"></img></div> <p>With ever-changing amounts of storage, it is hard to find metrics that are consistent year to year.
Fortunately, I/O density turned out to be the right metric to focus my efforts on, armed with real data from Intelligent Information Lifecycle Management (IILM) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:</p> <ul> <li>Storage tiering between Flash and disk. IBM FlashSystem, and IBM Easy Tier on the DS8000 and Spectrum Virtualize family, for hybrid Flash-and-disk configurations.<br> </li> <li>Storage tiering between disk, tape, and Cloud. HSM and Information Lifecycle Management (ILM) on Spectrum Scale, Elastic Storage Server (ESS), Spectrum Archive and IBM Cloud Object Storage System.<br> </li> <li>Storage tiering automation across your entire environment. IILM studies can help identify a target mix of Tier 0, Tier 1, Tier 2 and Tier 3 storage. IBM Spectrum Storage Suite and the Virtual Storage Center (VSC) can recommend or perform the movement of LUNs to more appropriate tiers, based on age and I/O density measurements.</li> </ul> </dd> </dl> <p dir="ltr">It's hard to say what the correct sequence of presentations should be.
Some thought it might have been better to schedule my talk on the IBM Cloud Object Storage System prior to Ron's talk on Cloud container pools, but perhaps hearing Ron first helped drive more interest to my session.</p> <p dir="ltr"><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Ron Henkhaus</a>, <a href="" rel="tag">Spectrum Protect</a>, <a href="" rel="tag">Container Pool</a>, <a href="" rel="tag">Directory Pool</a>, <a href="" rel="tag">Cloud Pool</a>, <a href="" rel="tag">OpenStack Swift</a>, <a href="" rel="tag">V1 Swift</a>, <a href="" rel="tag">Amazon S3</a>, <a href="" rel="tag">Amazon Web Services</a>, <a href="" rel="tag">AWS</a>, <a href="" rel="tag">IBM Bluemix</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">Aspera</a>, <a href="" rel="tag">LZ4 Compression</a>, <a href="" rel="tag">Ctera</a>, <a href="" rel="tag">Avere</a>, <a href="" rel="tag">Nasuni</a>, <a href="" rel="tag">Panzura</a>, <a href="" rel="tag">Commvault Simpana</a>, <a href="" rel="tag">Veritas NetBackup</a>, <a href="" rel="tag">FlashSystem</a>, <a href="" rel="tag">Information Dispersal Algorithm</a>, <a href="" rel="tag">Information Lifecycle Management</a>, <a href="" rel="tag">ILM</a>, <a href="" rel="tag">IILM</a>, <a href="" rel="tag">Easy Tier</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">Elastic Storage Server</a>, <a href="" rel="tag">Spectrum Archive</a>, <a href="" rel="tag">Spectrum Storage</a>, <a href="" rel="tag">Virtual Storage Center</a>, <a href="" rel="tag">Hierarchical Storage Manager</a>, <a href="" rel="tag">HSM</a></p>
<p dir="ltr"><b>IBM Systems Technical University - Day 2 morning</b> (2017-05-26)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the sessions on the morning of Day 2.</p> <dl dir="ltr"> <dt><b>Business Continuity -- The Seven Business Tiers of Business Continuity and Disaster Recovery</b></dt> <dd> <div style="float:right;padding:20px;"><img alt="s014072-Business-Continuity-Orlando-v1705e-cover" height="247" src="" width="320"></img><br> Slides are available on IBM Expert<br> Network on [<a href="">Slideshare.net</a>]</div> <p>I have been involved with Business Continuity and Disaster Recovery my entire career at IBM System Storage. However, with new workloads like Hadoop analytics and new Hybrid Cloud deployments, I thought it would be good to provide a refresh.</p> <p>The need for Business Continuity and Disaster Recovery has increased recently due to (a) climate change caused by human activity, (b) ransomware and other cyber attacks, and (c) disgruntled employees.</p> <p>Back in 1983, a task force of IBM clients at a GUIDE conference developed the "Seven Business Continuity Tiers for Disaster Recovery", which I refer to as "BC Tiers". I divided the presentation into three sections:</p> <ul> <li>Backup and Restore: BC tiers 1 through 3 are based on backup and restore methodologies. I explained how to back up Hadoop analytics data, all of the various options for IBM Spectrum Protect software, and how to encrypt the tape data that gets sent off premises.<br> </li> <li>Rapid Data Recovery: BC tiers 4 and 5 reduce the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) with snapshots, database journal shadowing, and IBM Cloud Object Storage.<br> </li> <li>Continuous Operations: BC tiers 6 and 7 provide data replication mirroring across locations.
I covered 2-site, 3-site and 4-site configurations.</li> </ul> </dd> <dt><b>IBM Spectrum Virtualize - How it works - Deep dive</b></dt> <dd> <p>Barry Whyte, IBM Master Inventor and ATS for Spectrum Virtualize, covered a variety of internal topics "under the hood" of Spectrum Virtualize. This covers the SAN Volume Controller (SVC), FlashSystem V9000, Storwize V7000 and V5000 products, as well as Spectrum Virtualize sold as software.</p> <p>In version 7.7, IBM raised the limits. You can now have 10,000 virtual disks per cluster, rather than 2,048 per node-pair. Also, you can now have up to 512 compressed volumes per node-pair. With the new 5U-high 92-drive expansion drawers, Storwize V7000 can now support up to 3,040 drives, and Storwize V5030 can support up to 1,520 drives.</p> <p>While each Spectrum Virtualize node has redundant components, the architecture is designed to handle the failure of an entire node. The term "I/O Group" was created to refer to the node-pair of Spectrum Virtualize engines and the set of virtual disks it manages. This made sense when virtual disks were dedicated to a single node-pair. Now, virtual disks can be assigned to multiple node-pairs, dynamically adding or removing node-pairs as needed for each virtual disk.</p> <p>However, even if you have a virtual disk assigned to multiple node-pairs, only one node-pair manages its cache, causing all other node-pairs to coordinate I/O through the cache-owning node-pair. The other node-pairs are called "access I/O groups".</p> <p>The architecture allows for linear scalability: double the number of nodes, and you double your performance. Some competitors use n-way caching across four or more nodes, and the pros and cons of each approach are a semi-religious argument.
Barry feels the 2-way caching implemented by Spectrum Virtualize is the most effective and efficient for performance.</p> <p>All of the nodes are connected over an IP network, but one is designated as the "config node", and one, often the same, as the "boss node".</p> <p>A cluster can have up to three physical quorum disks (either drive or mDisk) and optionally up to five IP-based quorums. The IP-based quorum is just a Java program that runs on any server or Cloud, provided it can respond within 80 msec.</p> <p>Either an IP-based or physical quorum can be used for "tie-breaking" in split-brain situations. In the event there is no "active" quorum, the administrator can now serve as the tie-breaker manually. For Storwize clusters, where physical quorum disks are attached to a single node-pair, Barry recommends having at least one IP-based quorum for tie-breaking.</p> <p>However, only a physical quorum can be used for T3 Recovery. T3 Recovery happens after power outages. All of the nodes update the quorum disk with critical information about the virtual mappings of blocks to volumes, and this is used when bringing the nodes up again.</p> <p>To protect against one pool consuming all of the cache, Spectrum Virtualize will partition the cache, and prevent any one pool from consuming more than a certain percentage of the total cache. The percentage depends on the number of pools:</p> <table border="2" width="99%"> <tbody> <tr> <th>Number of Pools</th> <th>Max percentage of any individual pool</th> </tr> <tr> <td>2</td> <td>66 percent</td> </tr> <tr> <td>3</td> <td>40 percent</td> </tr> <tr> <td>4</td> <td>33 percent</td> </tr> <tr> <td>5 or more</td> <td>25 percent</td> </tr> </tbody> </table> <p>Barry explained how failover works in the event of node failure. There is voting involved, and the majority remains in the cluster. In the case of an even split, called a "split brain" situation, the quorum decides.
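As a thought experiment only (this is not IBM's implementation, and the function name is hypothetical), the voting and tie-break behavior Barry described can be sketched like this: the larger half of a partitioned cluster survives, and on an even split, the partition that reaches the active quorum first continues:

```python
def surviving_partition(partition_a, partition_b, quorum_winner=None):
    """Toy model of cluster failover voting.

    partition_a / partition_b: lists of node names in each half of the
    split cluster. quorum_winner: for an even split, which partition
    ('a' or 'b') won the race to the active quorum (disk or IP based).
    """
    if len(partition_a) > len(partition_b):
        return partition_a          # majority wins outright
    if len(partition_b) > len(partition_a):
        return partition_b
    # Even split ("split brain"): the tie-breaking quorum decides.
    return partition_a if quorum_winner == "a" else partition_b

# 3 nodes vs 1 node: the majority remains in the cluster, no quorum needed.
print(surviving_partition(["n1", "n2", "n3"], ["n4"]))
# 2 vs 2: whichever side wins the race to the quorum survives.
print(surviving_partition(["n1", "n2"], ["n3", "n4"], quorum_winner="b"))
```

The losing half is fenced off, which is why having a reachable quorum (physical or IP) matters so much for even-node configurations.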
Orphaned nodes in a node-pair go into write-through mode, since the cache is no longer mirrored.</p> <p>The I/O forwarding layer has been split between upper and lower roles. The upper layer handles access I/O groups. The lower layer handles asymmetric access to drives, mDisks and arrays.</p> <p>N-Port ID Virtualization (NPIV) drastically improves multi-pathing. Perhaps one of the coolest improvements in a while, NPIV allows us to assign "virtual" WWPNs to other ports. When an I/O sent to a single port fails, it is retried one or more times, then waits 30 seconds, and then invokes multi-pathing to find a completely different path to the data. With NPIV, when a port fails, its WWPN is re-assigned to a different port, so the retries are likely to be successful before having to wait 30 seconds!</p> <p>Lastly, Barry covered the delicate art of software upgrades. Software is rolled forward one node at a time, and the "cluster state" is maintained during this time.</p> </dd> </dl> <p dir="ltr">Different presentations this week are at different technical levels. My session was meant to be an overview of the concepts of Business Continuity, independent of operating system platform, using IBM products to illustrate specific examples.
Barry's was a deep dive into a single product family.</p> <p dir="ltr"><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Business Continuity</a>, <a href="" rel="tag">Disaster Recovery</a>, <a href="" rel="tag">BC Tiers</a>, <a href="" rel="tag">GUIDE Conference</a>, <a href="" rel="tag">RPO</a>, <a href="" rel="tag">RTO</a>, <a href="" rel="tag">Barry Whyte</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">FlashSystem V9000</a>, <a href="" rel="tag">Storwize</a>, <a href="" rel="tag">Storwize V7000</a>, <a href="" rel="tag">Storwize V5030</a>, <a href="" rel="tag">Storwize V5000</a>, <a href="" rel="tag">Linear Scalability</a>, <a href="" rel="tag">NPIV</a>, <a href="" rel="tag">quorum disk</a>, <a href="" rel="tag">split brain</a>, <a href="" rel="tag">WWPN</a>, <a href="" rel="tag">T3 Recovery</a></p> <p dir="ltr"><b>IBM Systems Technical University - Day 1 afternoon</b> (2017-05-23)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Here is my recap of the afternoon sessions of Day 1.</p> <dl dir="ltr"> <dt><b>Storage Brand Opening Session - Craig Nelson</b></dt> <dd> <p>Craig Nelson, Brocade manager for the IBM Field Sales Channel, indicated that network equipment is the bridge that brings servers and storage together.</p> <p>The squeeze -- faster servers and Flash storage cause storage networking to become the bottleneck.
Fibre Channel will remain the protocol of choice for the next decade.</p> <div style="float:left;padding:20px;color:#990000"> <blockquote>"Speed is the net currency of Business" -- Marc Benioff, Salesforce CEO.</blockquote> </div> <p>Craig drew an analogy. We have been focused on making hard disk drives faster, and then Flash changed the game. Likewise, car manufacturers have focused on making gas engines better, and then Tesla Motors introduced an electric car with <i>insane</i> performance. The early models actually had an "Insane Mode".</p> <p>The new Gen6 models of IBM b-type SAN equipment will support 32Gbps and 128Gbps ports. That's Insane!</p> <p>Later models from Tesla Motors offer a "Ludicrous Mode". For flash storage, it is NVMe. NVMe can get storage down to 20 microsecond latency. That's Ludicrous!</p> <p>Craig put in a plug for two Brocade sessions: "BEWARE - The four potholes on your road to success when deploying flash storage" and "Tune up your storage network! Is it healthy enough for flash storage and next-gen server platforms?"</p> </dd> <dt><b>Storage Brand Opening Session - Clod Barrera</b></dt> <dd> <p>Clod Barrera, IBM Distinguished Engineer and Chief Technical Strategist, presented storage industry trends.</p> <p>IDC predicts data capacity to grow at 60-80% CAGR. This would require a 44 percent drop in $/GB per year to maintain a flat budget. Unfortunately, flash media cost is only dropping 25-30 percent per year, and spinning disk only 19 percent per year.</p> <p>Since storage media will not offset capacity growth, we need other technologies to compensate, including compression, deduplication, defensible disposal, and "cold" storage to tape or optical media.</p> <p>The smallest persistent storage that IBM has been able to achieve is 12 atoms. Current disk technology is 1200 atoms.
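The flat-budget arithmetic Clod cited checks out: if capacity grows by a factor of (1 + g) each year, $/GB must fall by 1 - 1/(1 + g) to keep spending flat. A minimal check, using IDC's quoted growth range:

```python
def required_price_drop(growth_rate):
    """Fractional yearly drop in $/GB needed to keep the storage
    budget flat while capacity grows at the given yearly rate."""
    return 1 - 1 / (1 + growth_rate)

# At the high end of IDC's 60-80% range, roughly a 44% drop is needed,
# well beyond the 19-30% yearly declines in media cost.
for g in (0.60, 0.80):
    print(f"{g:.0%} capacity growth -> {required_price_drop(g):.1%} $/GB drop needed")
```

At 80 percent growth this yields about 44.4 percent, matching the figure in the talk; even the low end of the range (37.5 percent) exceeds what media price declines deliver.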
Since 1956, IBM and the rest of the IT industry have improved storage by 9 orders of magnitude, and now there are only 2 orders of magnitude left.</p> <p>Clod poked fun at the "Star Wars: Rogue One" movie, indicating that their idea of the future of storage was a huge tape library. See my December 2016 blog post [<a href="">Has your data gone rogue?</a>]</p> <p>What does it take to store information forever? Tape will certainly be around. IBM Zurich demonstrated a 220TB tape cartridge back in 2015 as a proof of technology.</p> <p>A good example of the need for long-term retention is US films. Of those from the silent era, over 90 percent are lost. Over half of the films made prior to 1950 are lost. The silver nitrate film stock that the reels were made of has deteriorated. Now that more movies are made digitally, can we do better?</p> <p>Clouds will move from 10GbE to 25GbE. There is no slow-down for FC in datacenters. Flash storage and object storage are both growing quickly.</p> <p>Move over Software-Defined Storage, Converged and Hyperconverged systems: the new up-and-coming thing is "Composable Systems deployed in Pods", adjustable hourly by workload requirements.</p> <p>To protect against ransomware, use "air gap" protection, with copies not on the same network as the production workload.</p> <p>New storage models are needed for Cognitive workloads. Clod put in a plug for Joe Dain's presentation "Introducing cognitive index and search for IBM Cloud Object Storage leveraging Watson".</p> </dd> <dt><b>Storage Brand Opening Session - Axel Koester</b></dt> <dd> <p>Axel Koester, IBM Storage Chief Technologist, presented more storage industry directions.</p> <p>What will the world look like in 10 years? Today it is mostly procedural programming, with some statistical big data, and a bit of machine learning. In 10 years, it will be mostly statistical and machine learning, with very little procedural programming. Why?
Because it is faster to train computers with Machine Learning than to program procedurally.</p> <p>Examples of machine learning are IBM Watson, Google AlphaGo, and Drive.ai. Axel would rather be a passenger in a machine-learned self-driving car than a procedurally-programmed one.</p> <p>Neural networks can interpret hand-written numbers. Welcome to "Unsupervised learning".</p> <p>A subset of Machine Learning is Deep Learning, a major breakthrough in 2006. Deep Learning uses three or more layers of neural networks. For example, face recognition "deep learning" algorithms can also be used to detect defects through visual inspection of circuit boards.</p> <p>How does this impact storage?</p> <ul> <li>Procedural -- archive the test cases used</li> <li>Statistical -- store all data for parallel processing</li> <li>Machine Learning -- train on sample data, then archive and re-train yearly. Five minutes of driving generates 4 TB of sensor data for self-driving cars</li> </ul> <p>For Neural processing, x86 CPUs are suitable for prototyping. GPU co-processors are better: efficient, but still uncommon. IBM has developed the "TrueNorth" chip, which does nothing but neural processing: 4096 cores with only 70 mW of energy consumption. There is no clock; instead, it has dendrites, synapses, axons and neurons.</p> <p>Instead of "Build or Buy?" the new question is "Train or Buy?" Train with confidential data, or buy ready-to-run 100% pre-trained cognitive systems as a service.</p> <p>AI Frameworks are available in Docker containers with Kubernetes and persistent storage (Ubiquity) such as Spectrum Scale. These frameworks include DL4J, Chainer, Caffe, Torch, Theano, and TensorFlow.</p> <p>NVMe -- NVM is local only, so how do you provide HA and DR?
There are three options:</p> <ul> <li>DB asynchronous shadowing</li> <li>DB mirroring over NVMeOF</li> <li>Cluster file system replication of persistent data, such as IBM Spectrum Scale</li> </ul> <p>An example: a car manufacturer with 50 SAP HANA in-memory instances on 4 Spectrum Scale nodes. IBM achieved 50,000 new files per second. Most NAS systems do much less.</p> <p>Faster media on smaller electronics: Holmium atoms on Magnesium Oxide over a silver base, resulting in "single atom storage." An STM needle tip magnetizes the atom, and its state is measured with Tunnel Magneto-resistance. Unfortunately, reading the data causes it to lose its value, so it is not as persistent as the 12-atom method described by Clod earlier.</p> </dd> <dt><b>Software-Defined Storage -- Why? What? How?</b></dt> <dd> <div style="float:right;padding:20px;"><a href=""><img alt="s014069-SDS-Why-What-How-Orlando-v1705b-cover" height="247" src="" width="320"></img></a><br> Slides are available on IBM Expert<br> Network on [<a href="">Slideshare.net</a>]</div> <p>David Vaughn, IBM Manager of Information Infrastructure Platform Marketing, explains that Software-Defined Storage is one of the [<a href="">Three current storage trends that will surprise you</a>].</p> <p>For three years in a row now, IDC has ranked [<a href="">IBM #1 in Software-Defined Storage</a>]. In previous posts, I have answered questions related to this, including:</p> <ul> <li>[<a href="">How is Software Defined different than what we have now?</a>]</li> <li>[<a href="">Which IBM storage products are Software-Defined?</a>]</li> <li>[<a href="">Is Software-Defined Storage always less expensive than Pre-Built Systems?</a>]</li> </ul> <p>As the title suggests, I explained why there is so much interest in Software-Defined Storage in the IT industry, what software-defined storage is, and how to deploy these solutions in your existing infrastructure without a full rip-and-replace.
I covered which IBM products are available as software, pre-built systems and/or Cloud services.</p> </dd> </dl> <p dir="ltr"><b>IBM Systems Technical University - Day 1 morning</b> (TonyPearson, 2017-05-23)</p> <p dir="ltr">This week, I am presenting at the IBM Systems Technical University in Orlando, Florida, May 22-26, 2017. Day 1 included keynote sessions. Here is my recap for the morning.</p> <dl dir="ltr"> <dt><b>General Session "The Quantum Age"</b></dt> <dd> <p>Amy Hirst, IBM Director of Systems Training, served as emcee for the General Session. The theme this week is "Power of Knowledge, Power of Technology, Power of You. You to the IBM'th power".</p> <p>Chris Schnabel, IBM Q Offering Manager, explained what "IBM Q" is.</p> <p>Chris feels "our intuition of what we can compute is wrong". Classical (non-Quantum) computing has evolved over the past 100 years.</p> <p>Consider molecular geometry. The best supercomputer can only handle the smallest molecules, those with 40 to 50 electrons, and even then is unable to calculate bond lengths within 10 percent accuracy. Quantum computing can.</p> <p>Another area is what computer scientists call the "Traveling Salesman Problem".
If you had a list of 57 cities, what would be the optimal path that minimizes the distance traveled to visit all of the cities? Doing an exhaustive search would require roughly 10 to the 76th power orderings. Dynamic Programming techniques provide some shortcuts, reducing this down to 10 to the 20th power, but even that is impractical on most computers.</p> <p>Chris mentioned that there are <i>easy</i> problems to solve in polynomial time, and <i>hard</i> problems that are exponential, in that they get worse and worse the bigger the input set. There will always be hard problems.</p> <div style="float:left;padding:20px;color:#990000"><blockquote> "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."<br> -- Richard Feynman </blockquote></div> <p>Nature encodes information, but not in ones and zeros. Quantum computers are measured on the number of Qubits, their error rate, etc. The three factors that IBM focuses on are Coherence, Controllability and Connectivity.</p> <p>Chris explained how Superposition and Entanglement are used in Quantum Computers. I won't bore you with the details here, but rather save this for a future post.</p> <ul> <li>Today: 5 to 16 Qubits (can be simulated with today's classical computers; 5 Qubits is the power of your typical laptop)</li> <li>Near future: 50-100 Qubits (too big to simulate on supercomputers), with answers that are approximate or correct only 2/3 of the time.</li> <li>Future: millions of Qubits, fault-tolerant to provide exact, precise answers consistently.</li> </ul> <p>Quantum Computing opens up a new range of problems, what Chris calls "Quantum Easy" problems. Problems that might take years to solve on classical supercomputers could be solved in seconds on a Quantum computer.</p> <p>Chris showed a picture of [<a href="">Colossus</a>], the first digital electronic computer used in the 1940s.
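The Traveling Salesman numbers quoted earlier check out with a quick back-of-the-envelope script. Counting one route per ordering of the 57 cities gives the exhaustive figure; for the "Dynamic Programming techniques" Chris mentioned, I am assuming the classic Held-Karp bound of n²·2ⁿ subproblem evaluations:

```python
import math

n = 57  # cities, as in Chris's example

# Exhaustive search: one candidate route per ordering of the cities.
orderings = math.factorial(n)

# Held-Karp dynamic programming evaluates on the order of n^2 * 2^n subproblems.
dp_steps = n * n * 2 ** n

print(f"exhaustive search: ~10^{int(math.log10(orderings))}")       # ~10^76
print(f"dynamic programming: ~10^{int(math.log10(dp_steps))}")      # ~10^20
```

Both magnitudes agree with the figures from the talk, and 10^20 steps is indeed still out of reach for most computers.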
Quantum computing today is like the 1940s of classical computing.</p> <p>IBM is now working on Hybrid Quantum-Classical algorithms, for example:</p> <ul> <li>Quantum Chemistry - can be used in material design, healthcare pharmaceuticals</li> <li>Optimization - logistics/shipping, risk analytics</li> </ul> <p>There are different ways to build a quantum computer. IBM chose a single-junction transmon design, using Josephson junctions. While the chips are small, the refrigerators they are contained in are huge, and have to keep the chips at a very cold 15 milliKelvin temperature (minus 459 degrees Fahrenheit)!</p> <p>To get people excited about Quantum computing, IBM created the "IBM Q Experience" [<a href="">ibm.com/ibmq</a>] that allows the public to run algorithms on a basic 5 Qubit system, using a simple drag-and-drop interface to put different transformational gates in sequence.</p> <p>The IBM Research team was shocked to see 17 publications in prestigious journals make practical use of this 5 Qubit system! Since then, IBM now offers a Software Developers Kit (SDK) called QISkit (pronounced Cheese-kit) as a text-based alternative to the drag-and-drop interface.</p> <p>Amy Hirst came back on stage to remind people to use Twitter hashtag #ibmtechu to follow the event. There are two more events like this planned for the end of the year: a Power/Storage conference in New Orleans, October 16-20, and another event focused on the z Systems mainframe, November 13-17.</p> <p> </p> </dd> <dt><b>Pendulum Swings Back -- Understanding Converged and Hyperconverged Systems</b></dt> <dd> <div style="float:left;padding:20px;"><a href=""><img alt="s014068-Pendulum-Swings-Orlando-v1705c-cover" height="247" src="" width="320"></img></a><br> Slides are available on IBM Expert<br> Network on [<a href="">Slideshare.net</a>]</div> <p>This presentation has an interesting back-story.
At a client briefing, I was asked to explain the difference between "Converged" and "Hyperconverged" Systems, which I did with the analogy of a pendulum. I used the whiteboard, and then later made it into a single chart.</p> <p>At the far left of the pendulum, I start with mainframe systems of the early 1950s that had internal storage. As the pendulum swings to the middle, I discuss the added benefits of external storage, from RAID protection and Cache memory to centralized management and backup.</p> <p>To the far right of the pendulum, it swings over to networked storage, from NAS to SAN attached devices for flash, disk and tape. This offers excellent advantages, including greater host connectivity, and greater distances supported to help with things like disaster recovery.</p> <p>Here is where the pendulum swings back to internal storage. IBM introduced the AS/400 a long while ago, and more recently the industry has embraced hyperconverged systems on commodity servers. There are two kinds:</p> <ul> <li>Pre-built systems like Nutanix, Simplivity or EVO:RAIL, which are x86-based server systems, pre-installed with software and internal flash and disk storage.</li> <li>Software that can be deployed on your own choice of hardware, such as IBM Spectrum Accelerate, IBM Spectrum Scale FPO, or VMware VSAN.</li> </ul> <p>So, over time, my single slide has evolved and been fleshed out into a full-blown, hour-long presentation!</p> <p> </p> </dd> <dt><b>IBM's Cloud Storage Options</b></dt> <dd> <p>Cloud storage comes in four flavors: persistent, ephemeral, hosted, and reference.
The first two I refer to as "Storage for the Computer Cloud" and the latter two I refer to as "Storage as the Storage Cloud".</p> <p>I also explained the differences between block, file and object access, and why different Cloud storage types use different access methods.</p> <p>Finally, I covered some of our new public cloud storage offerings, using OpenStack Swift and Amazon S3 protocols to access objects off premises, including the new Cold Vault and Flex pricing on IBM Cloud Object Storage System in IBM Bluemix Cloud.</p> </dd> </dl> <p dir="ltr">This was a great way to start the week!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">Amy Hirst</a>, <a href="" rel="tag">Chris Schnabel</a>, <a href="" rel="tag">IBM Q</a>, <a href="" rel="tag">Quantum Computing</a>, <a href="" rel="tag">Quantum Easy</a>, <a href="" rel="tag">Converged Systems</a>, <a href="" rel="tag">IBM PureSystems</a>, <a href="" rel="tag">VersaStack</a>, <a href="" rel="tag">Hyperconverged Systems</a>, <a href="" rel="tag">HCI</a>, <a href="" rel="tag">Nutanix</a>, <a href="" rel="tag">Simplivity</a>, <a href="" rel="tag">EVO:RAIL</a>, <a href="" rel="tag">VMware VSAN</a>, <a href="" rel="tag">Spectrum Accelerate</a>, <a href="" rel="tag">Spectrum Scale FPO</a>, <a href="" rel="tag">Supermicro</a>, <a href="" rel="tag">VCE</a>, <a href="" rel="tag">Vblock</a>, <a href="" rel="tag">NetApp</a>, <a href="" rel="tag">Flexpod</a>, <a href="" rel="tag">Cloud Storage</a>, <a href="" rel="tag">IBM Bluemix</a></p> <p dir="ltr"><b>Is Software-Defined Storage always less expensive than Pre-Built Systems?</b></p>
<p dir="ltr">(TonyPearson, 2017-05-18)</p> <p dir="ltr">Earlier this year, the kind folks at Storage Newsletter wrote an article [<a href="">Cost of Eight Online Archive Storage Solutions</a>], which discussed the [<a href="">IT Brand Pulse</a>] [<a href="">5 Year TCO Case Study (14 pages)</a>].</p> <blockquote dir="ltr">(<b>FCC Disclosure:</b> I work for IBM. I have no financial interest in SUSE, Scality, or any other storage vendor mentioned in this post. This blog post can be considered a "paid celebrity endorsement" for IBM Storwize, IBM Cloud Object Storage, and IBM Spectrum Storage software mentioned below.)</blockquote> <p dir="ltr">The study takes a realistic request for 250 TB of storage, growing at a 25 percent compound annual growth rate (CAGR), to store infrequently accessed data in an online archive, and then looks at the Total Cost of Ownership (TCO) over a five-year period.</p> <p dir="ltr">The study compares five different Software-Defined solutions and three pre-built systems. The Software-Defined solutions come as software-only, requiring that you purchase the hardware separately and build it yourself. The three pre-built systems were chosen from the top three storage vendors in the marketplace: Dell EMC, IBM and NetApp.</p> <p dir="ltr">The cost of support is factored in, as it should be. To keep things equal, no data reduction like data deduplication or compression was used.</p> <p dir="ltr">In an odd approach, the study mixes block, file and object based solutions all in the same comparison.</p> <p dir="ltr">You can read the full 14-page study (linked above). I have organized the results into a single table, ranked from best to worst, color coded for the best deals in green ($100K to $200K), moderate solutions in yellow ($200K to $300K) and the most expensive in red (over $300K).
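The study's growth assumption compounds quickly. A short sketch shows what 250 TB at a 25 percent CAGR looks like year by year (treating the 250 TB as the year-one starting point is my assumption; the study excerpt does not state its exact convention):

```python
def capacity_by_year(start_tb, cagr, years):
    """Capacity at the start of each year, compounding at the given CAGR."""
    return [round(start_tb * (1 + cagr) ** y, 1) for y in range(years)]

print(capacity_by_year(250, 0.25, 5))
# [250.0, 312.5, 390.6, 488.3, 610.4]
```

So by the fifth year the archive has more than doubled, which is why support and expansion costs matter as much as the initial purchase price in a five-year TCO.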
I put the software-only options on the left and pre-built systems on the right.</p> <p dir="ltr"> </p> <table border="2" dir="ltr" width="99%"> <tbody> <tr> <th>Build-your-Own</th> <th>5-year TCO</th> <th>Pre-Built Systems</th> </tr> <tr> <td align="center" bgcolor="#99FF33">SUSE Enterprise Storage 4</td> <td align="center" bgcolor="#99FF33">$149,408</td> <td align="center"> </td> </tr> <tr> <td align="center" bgcolor="#99FF33">Scality RING</td> <td align="center" bgcolor="#99FF33">$193,384</td> <td align="center"> </td> </tr> <tr> <td align="center"> </td> <td align="center" bgcolor="#99FF33">$195,458</td> <td align="center" bgcolor="#99FF33">IBM Storwize V5010</td> </tr> <tr> <td align="center"> </td> <td align="center" bgcolor="#FFFF99">$211,534</td> <td align="center" bgcolor="#FFFF99">NetApp FAS2554</td> </tr> <tr> <td align="center" bgcolor="#FFFF99">DataCore SAN Symphony</td> <td align="center" bgcolor="#FFFF99">$245,824</td> <td align="center"> </td> </tr> <tr> <td align="center" bgcolor="#FFFF99">VMware VSAN</td> <td align="center" bgcolor="#FFFF99">$258,151</td> <td align="center"> </td> </tr> <tr> <td align="center" bgcolor="#FF66FF">Red Hat Ceph Storage</td> <td align="center" bgcolor="#FF66FF">$328,847</td> <td align="center"> </td> </tr> <tr> <td align="center"> </td> <td align="center" bgcolor="#FF66FF">$330,865</td> <td align="center" bgcolor="#FF66FF">Dell EMC Unity 300</td> </tr> </tbody> </table> <p dir="ltr"> </p> <p dir="ltr">I am often asked, "Isn't the software-only, build-it-yourself approach always the lowest-cost option?" Now, I can answer, "Sometimes yes, sometimes no." Fortunately, IBM offers Software-Defined Storage in a variety of packaging options including software-only, pre-built systems, and in the Cloud as a service.</p> <p dir="ltr">IBM Storwize V5010 is based on IBM Spectrum Virtualize software, which you can deploy as software-only on your own x86 servers.
This was not mentioned in the study, and perhaps it is my job to remind people that this option is also available for those who want to build their own storage.</p> <p dir="ltr">For that matter, IBM Cloud Object Storage System -- available as software-only, pre-built systems, and in the Cloud -- might also be a cost-effective alternative.</p> <p dir="ltr">Next week I will be in Orlando, Florida for the IBM Systems Technical University. If you are attending, stop by one of my presentations, look for me in the Solution Center at one of the IBM peds, or attend the "Meet the Experts for IBM Storage" session on Thursday!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">Storage Newsletter</a>, <a href="" rel="tag">IT Brand Pulse</a>, <a href="" rel="tag">Total Cost Ownership</a>, <a href="" rel="tag">TCO Case Study</a>, <a href="" rel="tag">CAGR</a>, <a href="" rel="tag">Software-Defined Storage</a>, <a href="" rel="tag">SDS</a>, <a href="" rel="tag">Dell EMC</a>, <a href="" rel="tag">IBM</a>, <a href="" rel="tag">NetApp</a>, <a href="" rel="tag">SUSE</a>, <a href="" rel="tag">SUSE Enterprise Storage</a>, <a href="" rel="tag">CEPH</a>, <a href="" rel="tag">CephFS</a>, <a href="" rel="tag">IBM Storwize</a>, <a href="" rel="tag">Storwize V5000</a>, <a href="" rel="tag">Storwize V5010</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">IBM Cloud Object Storage</a></p>
<p dir="ltr"><b>The Tortoise, the Hare and the Cheetah</b> (TonyPearson, 2017-05-10)</p> <p dir="ltr">I have been blogging for more than 10 years now, so I am no stranger to commenting on competitive comparisons. In some cases, I am setting the record straight, and other times, poking fun at competitor results, claims or conclusions. This comparison from Brian Carmody was too juicy to ignore.</p> <blockquote dir="ltr">(<b>FCC Disclosure:</b> I work for IBM. I have no financial interest in Infinidat, Dell EMC, or Pure Storage, mentioned in this post. I do have friends and former co-workers who now work for Infinidat. This blog post can be considered a "paid celebrity endorsement" for IBM FlashSystem products.)</blockquote> <p dir="ltr">Fellow blogger Brian Carmody, formerly with IBM but now Chief Technology Officer at a startup called Infinidat, wrote [<a href="">Flash is not Fast, and the Sky is Falling</a>].</p> <p dir="ltr">Here is an excerpt; I have added <i>(Infinidat)</i> wherever Brian says "we", just so there is no confusion:</p> <div dir="ltr" style="float:left;color:#006633;padding:20px;"> <p>"... So last week we <i>(Infinidat)</i>.</p> <p>In summary, we <i>(Infinidat)</i> wrecked the Pure and EMC systems.
Here are the results side by side with EMC's data:</p> <table border="2" width="99%"> <tbody> <tr bgcolor="#66CCCC"> <th>Workload</th> <th>Pure //m50</th> <th>EMC Unity 600F</th> <th>INFINIDAT 56K</th> <th>INFINIDAT Advantage</th> </tr> <tr> <td>16K IOPS (80% Read)</td> <td align="right">33,460</td> <td align="right">58,807</td> <td align="right">293,178</td> <td>9x Pure, 5x Unity</td> </tr> <tr> <td>256K BW MBps</td> <td align="right">674 (@3ms)</td> <td align="right">2,396 (@3ms)</td> <td align="right">7,200 (@3ms)</td> <td>10.6x Pure, 3x Unity</td> </tr> <tr> <td>Steady-state IOPS</td> <td align="right">42,000</td> <td align="right">116,000</td> <td align="right">192,000</td> <td>4.5x Pure, 1.6x Unity</td> </tr> <tr> <td>Steady-state latency (ms)</td> <td align="right">13.6</td> <td align="right">4.4</td> <td align="right">2</td> <td>1/7 Pure, 1/2 Unity</td> </tr> </tbody> </table> <p>By the way, we <i>(Infinidat)</i> took the liberty of running the test with a 200TB data set instead of Pure and EMC's 50TB, because modern workloads require performance at scale, and we ran it with in-line compression enabled because our compression algorithm doesn't hurt performance.</p> <p>This was an interesting test to run, and we <i>(Infinidat)</i> hope it helps the storage industry move away from media type wars and benchmarks (you will lose every time on performance if INFINIDAT is in the mix) ..."</p> </div> <p dir="ltr">Notice anything wrong here? Anything missing?</p> <div dir="ltr" style="float:left;padding:20px;"><a href=""><img alt="Tortoise-Hare" height="210" src="" width="240"></img></a></div> <p dir="ltr">The Tortoise beat "Hare 1" and "Hare 2", but did not invite the Cheetah to the race?</p> <p dir="ltr">Brian was smart enough not to compare their product to anything from IBM. IBM has a wide variety of All-Flash Arrays, including the DS8880F models, the Storwize V7000F and V5030F models, and Elastic Storage Server models.
However, for this workload, IBM would probably recommend the FlashSystem V9000, A9000 or A9000R.</p> <p dir="ltr">Any All-Flash Array with a steady-state latency of 2 milliseconds or greater is embarrassing, but then the Infinibox is not really an All-Flash Array.</p> <p dir="ltr">The architecture of their Infinibox appears much like the original XIV. It has a mix of DRAM memory and SSD cache, combined with spinning drives. It offers only compression, not data deduplication. Unlike the IBM XIV, powered by six to 15 servers, the Infinibox appears under-powered with just three servers.</p> <p dir="ltr">The Infinibox uses software-based in-line compression, which must put a huge tax on the few CPUs they have in those three servers. Infinidat chose not to compress the data in their cache, probably to reduce the additional overhead on their over-taxed CPUs.</p> <p dir="ltr">The IBM FlashSystem V9000 has an innovative design, based on IBM Spectrum Virtualize, the mature software that you also find in the IBM SAN Volume Controller and Storwize family of products.</p> <p dir="ltr">The FlashSystem V9000 offers hardware-accelerated compression. IBM takes advantage of the integrated Intel QuickAssist co-processor, which runs the compression algorithm 20 times faster than a standard Intel Broadwell CPU.</p> <p dir="ltr">IBM compresses its cache, using a two-tier approach. The "upper cache" receives the data uncompressed, so that it can tell the application to continue, for the fastest turn-around time. Then the data is compressed and stored in the "lower cache", optimizing the value and benefits of DRAM memory.
Many databases get up to 80 percent savings, resulting in a 5-to-1 benefit in DRAM cache memory.</p> <p dir="ltr">The IBM FlashSystem A9000 and A9000R also have an innovative design, based on IBM Spectrum Accelerate, the code originally developed for the IBM XIV storage system.</p> <blockquote dir="ltr">(<b>Fun fact:</b> Infinidat's founder, [<a href="">Moshe Yanai</a>], was formerly the founder and designer of XIV, and it appears that Infinidat is just a re-design of the old XIV technology architecture, re-packaged with a few differences. Since Moshe left, IBM has drastically enhanced the IBM XIV.)</blockquote> <p dir="ltr">Like the IBM Spectrum Virtualize family, the IBM FlashSystem A9000 and A9000R have hardware-accelerated in-line compression, and a two-tier approach to cache. The "upper cache" receives the data uncompressed; then the data is compressed and deduplicated, and stored in the "lower cache", optimizing the value and benefits of DRAM memory.</p> <p dir="ltr">The IBM FlashSystem A9000 and A9000R also offer in-line data deduplication. Modern workloads are virtualized, and Virtual Machine (VM) and Virtual Desktop Infrastructure (VDI) workloads get significant benefits from data deduplication. Infinidat does not play here. For the FlashSystem A9000, most of the metadata related to data deduplication is kept in cache, minimizing the overhead.</p> <p dir="ltr">IBM FlashSystem A9000 and A9000R have full performance that blows these published Infinibox results away WITH compression and deduplication turned on.</p> <p dir="ltr">Brian ran a workload that used the DRAM and SSD cache exclusively, eliminating the reality that any REAL WORLD workload would have to tap into those much slower spinning drives. This is not really a side-by-side benchmark.
He is comparing his live run on the Infinibox to published numbers from a previous comparison, run on a completely different set of data.</p> <p dir="ltr">This raises the question: why pay for all those spinning drives at all, if you plan to only use the DRAM and Flash storage for your workloads?</p> <p dir="ltr">A week later, Brian followed up with another post [<a href="">The INFINIDAT Challenge</a>], acknowledging his comparison was bogus. Here's an excerpt. Again, I have added <i>(Infinidat)</i> wherever Brian is referring to his employer, just so there is no confusion:</p> <p dir="ltr"> </p> <div dir="ltr" style="float:left;color:#006633;padding:20px;"> <p>"....</p> <p>So, what can we <i>(Infinidat)</i> do about it?</p> <p>We <i>(Infinidat)</i> cordially invite every enterprise storage customer who wants lower latency and lower storage cost to visit [<a href="FasterThanAllFlash.com">FasterThanAllFlash.com</a>] and sign up for The INFINIDAT Challenge.</p> <p>In summary:</p> <ul> <li>We <i>(Infinidat)</i> will Give you an Infinibox system to test</li> <li>We <i>(Infinidat)</i> will Help you clone and test your environment with Infinibox</li> <li>We <i>(Infinidat)</i> Guarantee your applications will run faster on Infinibox than your All-Flash Array.</li> <li>If we <i>(Infinidat)</i> fail, we'll take the system back and Donate $10,000 to the charity of your choice.</li> <li>If our technology delivers, you can keep the system, and we'll <i>(Infinidat)</i> Donate $10,000 in your name to the charity of our choice (The American Cancer Society).</li> </ul> <p>...."</p> </div> <p dir="ltr">As a consolidation play doing a full range of data services, I do not see this Infinibox working out. Talking to clients who have the Infinibox, I hear that performance deteriorates in REAL WORLD workloads as you add more data to the unit.</p> <p dir="ltr">The Infinibox seems fine for workloads that do not demand high performance, so I was surprised Brian compared it to All-Flash arrays.
The Infinibox is out of its league!</p> <blockquote dir="ltr">(To be fair, Pure Storage and EMC XtremIO aren't really in the same league as IBM FlashSystem, either, given that both of those products are based on commodity SSD. IBM FlashSystem models consistently deliver 4 to 10 times lower latency than these commodity-SSD based competitors.)</blockquote> <p dir="ltr">The Infinibox also lacks features many people expect in an Enterprise-class storage array, like Call-Home capability to identify problems quickly, and Synchronous remote mirroring for disaster recovery. It is common for startups like Infinidat to deliver a [<a href="">Minimum Viable Product</a>] as their first offering.</p> <p dir="ltr">To paraphrase Brian himself, your applications will lose every time on performance if INFINIDAT is in your datacenter.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">FlashSystem</a>, <a href="" rel="tag">A9000</a>, <a href="" rel="tag">A9000R</a>, <a href="" rel="tag">Brian Carmody</a>, <a href="" rel="tag">Infinidat</a>, <a href="" rel="tag">Infinibox</a>, <a href="" rel="tag">Pure Storage</a>, <a href="" rel="tag">EMC</a>, <a href="" rel="tag">EMC Unity</a>, <a href="" rel="tag">Infinidat F6230</a>, <a href="" rel="tag">Infinibox F6230</a>, <a href="" rel="tag">IBM XIV</a>, <a href="" rel="tag">Moshe Yanai</a>, <a href="" rel="tag">SSD</a>, <a href="" rel="tag">VDI</a>, <a href="" rel="tag">All-Flash Array</a>, <a href="" rel="tag">AFA</a>, <a href="" rel="tag">Call-Home</a>, <a href="" rel="tag">Synchronous Mirror</a>, <a href="" rel="tag">Disaster Recovery</a>, <a href="" rel="tag">Minimum Viable Product</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">Intel QuickAssist</a>, <a href="" rel="tag">American Cancer Society</a></p> <p dir="ltr"> </p>
<p dir="ltr"><b>IBM Announcements for Tape and CDM May 2017</b> (TonyPearson, 2017-05-09)</p> <p dir="ltr">Well, it's Tuesday again, and you know what that means? IBM Announcements!</p> <dl dir="ltr"> <dt><b>IBM TS7760 Virtual Tape System</b></dt> <dd> <p>The IBM TS7760, the world's best virtual tape system for IBM mainframe environments, now supports 8TB drives.</p> <p>This raises the maximum capacity of a TS7760 system to 2.45 PB. That is 600TB for the base frame, and 925TB for each of the two expansion frames, for a total of three frames.</p> <p>The drives are configured using Distributed RAID (DRAID), which drastically reduces the time needed for rebuilds.</p> <p>The TS7760 can optionally attach to a TS3500 or TS4500 physical tape library for added capacity via 16 Gbps Fibre Channel.</p> <p>To learn more, see the [<a href="">IBM TS7700 delivers maximum capacity</a>] press release.</p> </dd> <dt><b>TS1155 Tape Drive models</b></dt> <dd> <p>The new TS1155 enterprise tape drive can write up to 15 TB of uncompressed data to existing JD/JZ/JL media.</p> <p>It can read/write existing 10TB-formatted JD media, and 7TB-formatted JC media, written by former TS1150 drives. It also offers read-only support for older 4TB-formatted JC media from TS1140 drives.</p> <p>These are uncompressed capacities, and some clients achieve 2x or 3x compression on top of these capacities. This depends heavily on the type of data.
Your mileage may vary, as they say.</p> <p>Most of the rest of the features of the TS1150 drives carry forward. The 360 MB/sec performance is similar, encryption via IBM Security Key Lifecycle Manager (SKLM) is similar, and support for IBM Spectrum Archive via the Linear Tape File System (LTFS) format is similar.</p> <p>An interesting development is that the TS1155, in addition to standard 8Gb Fibre Channel attach, is the first IBM enterprise drive to also offer 10Gb Ethernet support. IBM will offer both RDMA over Converged Ethernet (RoCE) as well as iSCSI support.</p> <p>To learn more, see the [<a href="">TS1155 Tape Drive models deliver up to 15 TB native capacity</a>] press release.</p> </dd> <dt><b>IBM Spectrum Copy Data Management V2.2.6</b></dt> <dd> <p>The newest member of the IBM Spectrum Storage software family, IBM Spectrum Copy Data Management automates the creation of snapshot images (FlashCopy, for those familiar with IBM terminology) on IBM, NetApp and EMC storage arrays. These copies can be made for various uses, such as DevOps, Dev/Test, Backup/Restore, and Disaster Recovery.</p> <p>At some data centers, these copies can consume as much as 60 percent of your total storage space, because developers and testers are often generating their own copies. Instead, having copies automated, registered, cataloged, and made available to developers and testers eliminates rogue copies.</p> <p>This release adds support for additional databases, including Microsoft SQL Server on physical machines, SAP HANA in-memory databases, and Epic/Caché from InterSystems, used in Electronic Health Records (EHR) management systems.</p> <p>IBM also adds support for long-distance vMotion of VMware virtual machine images.
The target for this movement is IBM Spectrum Accelerate running on IBM Bluemix Cloud, supporting Hybrid Cloud configurations.</p> <p>To learn more, see the [<a href="">IBM Spectrum Copy Data Management V2.2.6</a>] press release.</p> </dd> </dl> <p dir="ltr">To learn more about these and other recent enhancements, come to the IBM Systems Technical University, May 22-26, 2017 in Orlando, Florida. I'll be there!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">TS7760</a>, <a href="" rel="tag">Virtual Tape Library</a>, <a href="" rel="tag">VTL</a>, <a href="" rel="tag">Virtual Tape System</a>, <a href="" rel="tag">VTS</a>, <a href="" rel="tag">TS3500</a>, <a href="" rel="tag">TS4500</a>, <a href="" rel="tag">TS1155</a>, <a href="" rel="tag">TS1150</a>, <a href="" rel="tag">SKLM</a>, <a href="" rel="tag">LTFS</a>, <a href="" rel="tag">Spectrum Archive</a>, <a href="" rel="tag">RDMA</a>, <a href="" rel="tag">RoCE</a>, <a href="" rel="tag">iSCSI</a>, <a href="" rel="tag">Spectrum Copy Data Management</a>, <a href="" rel="tag">Spectrum CDM</a>, <a href="" rel="tag">CDM</a>, <a href="" rel="tag">DevOps</a>, <a href="" rel="tag">Disaster Recovery</a>, <a href="" rel="tag">MS SQL</a>, <a href="" rel="tag">SAP HANA</a>, <a href="" rel="tag">EPIC</a>, <a href="" rel="tag">InterSystems</a>, <a href="" rel="tag">EHR</a>, <a href="" rel="tag">VMware</a>, <a href="" rel="tag">vMotion</a>, <a href="" rel="tag">Spectrum Accelerate</a>, <a href="" rel="tag">IBM Bluemix</a>, <a href="" rel="tag">IBM Cloud</a>, <a href="" rel="tag">Hybrid Cloud</a></p> 
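The TS7760 and TS1155 capacity figures quoted above are easy to sanity-check. Here is a minimal Python sketch; the per-frame and per-cartridge numbers come from the announcement, and everything else is illustrative arithmetic (storage vendors use decimal units, so 1 PB = 1000 TB):

```python
# Sanity-check the TS7760 capacity figures quoted in the announcement.
base_frame_tb = 600        # base frame capacity
expansion_frame_tb = 925   # each of the two expansion frames

total_tb = base_frame_tb + 2 * expansion_frame_tb
print(total_tb, "TB =", total_tb / 1000, "PB")   # 2450 TB = 2.45 PB

# TS1155: 15 TB native per cartridge; some clients see 2x-3x compression,
# depending heavily on the type of data.
native_tb = 15
effective = [native_tb * ratio for ratio in (2, 3)]
print(effective)   # [30, 45] TB effective per cartridge
```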
Hyper-Scale Manager, the First Year | TonyPearson | 2017-05-05

<p dir="ltr">Over the past ten years, my co-workers have asked to write "guest posts" on this blog. This time, Moshe Weiss, IBM Senior Manager, Development and Design, has offered the following post, not in his own voice, but in the voice of his "baby", the Hyper-Scale Manager software.</p> <p dir="ltr">You might think this is a strange approach, but today we have robots that can dance, and cars that can drive themselves! If software could talk, this is what IBM Hyper-Scale Manager would say:</p> <p dir="ltr">"I was born a year ago.</p> <p dir="ltr">It wasn't an easy birth… there were many complications. In fact, so many that I was almost born prematurely!</p> <p dir="ltr">Most of my development, in preparation for labor and delivery, was done within the last 6 months of the overall 18 months. I was shaped and designed, and sometimes re-shaped, three times. Lots of assumptions had to be made in hopes of easing a successful delivery and bringing me to full term.</p> <p dir="ltr">During my first year of maturity, I focused on learning how customers used me; what frustrated them the most, and what they loved or 'almost' loved while still needing refinement and redesign.</p> <p dir="ltr">The number of customers adopting me grew higher and higher, as did the number of complaints and bugs that I had to deal with, and my users’ frustrations and dislikes, because I wasn't yet a complete solution and still had some missing features.</p> <p dir="ltr">I was renewed four times! 
Each renewal improved me and made my senses better and faster, adding new capabilities that helped make me more approachable, intuitive and delightful.</p> <p dir="ltr">Choosing how to renew, and what to add to each renewal, is not an easy task. Basically, it was about prioritizing user experience versus gaps that were deferred from my birth, versus differentiators to make me unique and sell more, versus features in my roadmap, versus investing huge efforts in my quality.</p> <p dir="ltr">Each renewal was a complex process with lots of features and behaviors to add, while trying to make my customers’ lives a bit easier, since features that were important to them were sometimes considered low priority.</p> <p dir="ltr">But, there were also good times during my first year:</p> <dl dir="ltr"> <dt><b>Huge customer adoption rate</b></dt> <dd> <p>100 new customers in two months!</p> <p>Growing was a great thing and my parents were, and still are, so proud! But, like most things, it came with a price - a lot of sustain issues from the field, requests for changes, and bad feedback that I am hard to use and missing core elements.</p> <p>Being a new baby in the Storage world is not a simple thing, as expectations are huge (mainly because of my successful elder brother, the XIV GUI) and I must quickly keep up with all of them.</p> <p>Still, I am getting tons of good feedback for being revolutionary and unique. People are emotionally engaged with me, and being a baby, I love to see emotions!</p> </dd> <dt><b>Huge marketing efforts to put me center stage</b></dt> <dd> <p>However, because of some initial problems at the start -- I am a new product, remember? -- I was thrown out of multiple customer sites, and some sales/marketing guys just stopped believing in me. That made me sad.</p> <p>My parents did a great job, though, in talking, explaining and demonstrating what I can do, together with what I can’t do now, but will do soon. 
This really helped in some areas, and customers began to see what my parents saw in me for so many years.</p> <p>I’m really enthusiastic to hear what people will think of me when I’m two years old!</p> </dd> <dt><b>Invention everywhere…</b></dt> <dd> <p>As part of the four renewals I had during my first year, design elements were reconsidered, redesigned and rewritten to find the best solutions ever. No product has come even close to what I suggest to the world… I am so proud of myself!</p> <p>Additionally, my parents wrote approximately 20 patents on my User Interface (UI) elements and User Experience (UX) concepts, which makes me extremely unique.</p> </dd> <dt><b>Prioritizing</b></dt> <dd> <p>Prioritizing what goes in and what doesn't, especially during a year when fewer and fewer babysitters handled me, was a real challenge. Read my parent's post [<a href="">How to drive forward an exhausted team?</a>] for more details.</p> <p>But my parents did it! They succeeded in adding cool features like:</p> <ul> <li>Filter analytics and free text, making the filter a great experience that everyone is using.</li> <li>Great UX improvements, like redesigning the tabs, adding right-click menus, and adding more on-boarding enablers.</li> <li>Improving the dashboard.</li> <li>Improving my core business, capacity management (four different times!), and still working on it.</li> <li>Adding features that were initially deferred at my birth. Deferring features back then was the way to make my birth go smoother. Now, these missing features annoy people.</li> <li>Improving quality dramatically, adding automation to the way people test me.</li> <li>Adding differentiators, like the health widget, with more than 20 best practices that provide helpful tips to the customer when there’s a need to change something in their environment, to avoid future issues.</li> <li>Continuing to bring added value for the 'A-family'. 
I am monitoring: FlashSystem A9000/R, XIV and Spectrum Accelerate, both on and off premises. This added value makes for a family with the most powerful management solutions and experience."</li> </ul> </dd> </dl> <div dir="ltr" style="float:left;padding:20px;"><a><img alt="Hyper-Scale-Manager-orlando2017" height="160" src="" width="320"></img></a></div> <p dir="ltr">If you are planning to attend the upcoming IBM Systems Technical University, May 22-26 in Orlando, Florida, there will be a variety of hands-on labs. I recommend participating in the hands-on session to feel and witness the next release of IBM Hyper-Scale Manager.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Moshe Weiss</a>, <a href="" rel="tag">Hyper-Scale Manager</a>, <a href="" rel="tag">XIV</a>, <a href="" rel="tag">FlashSystem A9000</a>, <a href="" rel="tag">FlashSystem A9000R</a>, <a href="" rel="tag">Spectrum Accelerate</a></p>

Modernize and Transform Healthcare with IBM Storage Rochester Event | TonyPearson | 2017-04-20

<p dir="ltr"><a><img alt="Healthcare-icon" height="160" src="" width="500"></img></a></p> <p dir="ltr">This week, I was part of an all-day event called "Healthcare and Research Trends & Directions in a Cognitive World" at the IBM Executive Briefing Center (EBC) in Rochester, MN. I was one of many presenters covering Information Technology to improve healthcare outcomes. Todd Stacy, IBM Director Server Sales for US Public Market, served as our emcee.</p> <p dir="ltr">This was a great day. 
Special thanks to Kathy Lehr, Trish Froeschle, and Scott Gass for organizing this event! We had clients from a variety of Health Care and Life Science industry backgrounds. I certainly learned a few things myself.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Michael Weiner</a>, <a href="" rel="tag">Todd Stacy</a>, <a href="" rel="tag">Watson Health</a>, <a href="" rel="tag">Obesity Epidemic</a>, <a href="" rel="tag">Aging Population</a>, <a href="" rel="tag">HITECH</a>, <a href="" rel="tag">Greg Tevis</a>, <a href="" rel="tag">Raj Tandon</a>, <a href="" rel="tag">Spectrum Storage</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">Spectrum Control</a>, <a href="" rel="tag">Spectrum Protect</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">Spectrum Copy Data Management</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">EPIC</a>, <a href="" rel="tag">Cache Database</a>, <a href="" rel="tag">Jason Crites</a>, <a href="" rel="tag">Wayland Vacek</a>, <a href="" rel="tag">IBM Merge</a>, <a href="" rel="tag">HIMSS</a>, <a href="" rel="tag">Jane Yu</a>, <a href="" rel="tag">Frank Lee</a>, <a href="" rel="tag">Sidra Medical Research Center</a>, <a href="" rel="tag">Qatar</a>, <a href="" rel="tag">Spectrum Compute</a>, <a href="" rel="tag">VersaStack</a>, <a href="" rel="tag">FlashSystem</a>, <a href="" rel="tag">Kathy Lehr</a>, <a href="" rel="tag">Trish Froeschle</a>, <a href="" rel="tag">Scott Gass</a></p> <dl dir="ltr"> <dt><b>The Cognitive Healthcare Organization</b></dt> <dd> <p>Dr. Michael Weiner, IBM Chief Medical Information Officer, Watson Health, covered some of the real challenges facing not just the United States, but also other countries. On average, healthcare in the USA [<a href="">costs over $10,000 USD per American citizen</a>]! Compare that to only $3,700 USD for the folks in the United Kingdom! 
In fact, nearly all industrial nations spend between $2,000 and $5,000 per person. Where does all the U.S. money go?</p> <dl> <dt><i>Aging Population</i></dt> <dd> <p>A big challenge is our ever-aging population. Every day, there are 10,000 [<a href="">Baby Boomers</a>] reaching their 65th birthday, with fewer people in the 25-44 age group to work as nurses to take care of them. About 15 percent of the US population is elderly (over age 65), and this is expected to grow to 20 percent by the year 2040. The situation is even worse in Japan, where 25 percent of the population today is elderly, and this is expected to be 40 percent by the year 2060.</p> </dd> <dt><i>New Care Models</i></dt> <dd> <p>In some countries, like Australia and Japan, post office workers who spent their time delivering mail can now stop in to check on elderly people. As people send less mail, using social media or email instead, this keeps the postal workers employed in a manner that provides value to society.</p> <p>The USA enjoys one of the lowest costs for food, but then suffers from an epidemic of obesity, with over 34 percent of Americans obese. When New York City eliminated Trans Fats, heart attacks dropped considerably.</p> <p>In 2009, the Health Information Technology for Economic and Clinical Health [<a href="">HITECH</a>] Act required the digitization of medical information, known as "Meaningful Use", which has greatly influenced healthcare facilities. This was implemented by a combination of incentives and penalties. Now, more than 92 percent of hospitals in the USA have digitized medical information! The rest are still using paper and X-ray film images. Some places were initially exempted, such as Assisted Living Homes, so there is still more work to be done.</p> </dd> <dt><i>New Technology</i></dt> <dd> <p>An advantage of using computer-based solutions like Artificial Intelligence is that it eliminates bias. 
When a woman walks into an Emergency Room complaining about chest pains, few health staff would consider this a sign of heart attack. When a man does the same, health staff consider heart attack as the first diagnosis, at the risk of missing out on other possibilities.</p> <p>Every year, over a million articles related to healthcare research are published. Who can read all this in a timely manner? IBM Watson! After [<a href="">winning in Jeopardy</a>], IBM Watson was "sent to medical school" to learn how to assist doctors in diagnosing patients.</p> </dd> </dl> </dd> <dt><b>Transforming Health Care Data Management with IBM Spectrum Storage</b></dt> <dd> <p>Greg Tevis, IBM Software Defined Storage Architect, and Raj Tandon, IBM Senior Strategist, co-presented this introduction to the IBM Spectrum Storage family of products. They covered examples with IBM Spectrum Virtualize, IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum Scale, IBM Cloud Object Storage, and IBM Spectrum Copy Data Management. The latter has direct support for Epic and Caché databases.</p> </dd> <dt><b>Cognitive Imaging Solutions for Healthcare Providers</b></dt> <dd> <p>Jason Crites, IBM Healthcare and Life Sciences Data Solutions Leader, and Wayland Vacek, Enterprise Sales Manager for Merge, presented IBM Watson Imaging Clinical Review, from IBM's acquisition of the Merge company. The solution is based on IBM Spectrum Scale as the back-end storage repository.</p> <p>Merge has been around for more than 20 years, with clinical workflow offerings in Cardiology, Radiology, Orthopedics and Eye care. Often, IBM Watson is able to identify things in medical images that escape the review of radiologists or other medical specialists.</p> <p>At the HIMSS conference earlier this year, human radiologists were shown a collection of images used to train IBM Watson. The human radiologists only identified 20 percent of the images correctly, while IBM Watson got all of them, every time. 
In many cases, human radiologists have only a few seconds to look at an X-ray image. Computers like IBM Watson are now fast enough to compete directly with human radiologists in the same number of seconds.</p> </dd> <dt><b>Building a Foundation for the Cognitive Era in Healthcare and Life Sciences</b></dt> <dd> <p>Dr. Jane Yu, IBM Systems Architect, Healthcare & Life Sciences, and Dr. Frank Lee, IBM Global Sales Leader, IBM Software Defined Infrastructure & Life Sciences, co-presented this topic. They presented five challenges:</p> <ul> <li>Growing data volumes are making it more difficult to manage, process and store this data.<br> </li> <li>Scientists find themselves spending more than 80 percent of their time manually integrating data from silos, and less than 20 percent of their time doing actual research and deriving insights from their analyses.<br> </li> <li>Compute- and data-intensive workflows may take days to complete on existing server and storage systems.<br> </li> <li>IT organizations must keep up with rapidly evolving applications, development frameworks, and databases preferred for Health Care and Life Science (HCLS) applications. This includes SAS, Matlab, Hadoop, Spark, NoSQL databases, as well as Deep Learning and Machine Learning workloads.<br> </li> <li>Scientific integrity and government mandates increasingly require collaboration across organizational boundaries.</li> </ul> <p>In one example, Sidra Medical and Research Center plans to map the genomes of all 250,000 citizens in the Middle Eastern country of Qatar. 
Imagine that processing each Qatari citizen will generate 200 GB of data for this project, resulting in 50 Petabytes (PB) of data!</p> <p>Combining IBM Spectrum Compute products with IBM Spectrum Scale storage can help address these challenges.</p> </dd> <dt><b>Modernize & Transform Healthcare with IBM Storage Solutions</b></dt> <dd> <div style="float:right;padding:20px;"><a><img alt="IBM-Modernize-Healhcare-Storage-2017" height="249" src="" width="320"></img></a><br> Slides are available at IBM Expert<br> Network on [<a href="">Slideshare.net</a>]</div> <p>Finally, I presented a 90-minute breakout session that covered three solution areas:</p> <ul> <li>Flash storage to speed up medical records and research. Those who have already implemented Electronic Health Records (EHR) for "Meaningful Use" compliance recognize the value this provides in improving healthcare. Adding All-Flash Arrays such as IBM FlashSystem, Storwize V7000F or DS8000F can drastically improve application performance.<br> </li> <li>Spectrum Scale and IBM Cloud Object Storage for Vendor Neutral Archive. It seems silly that each PACS vendor has its own little island of storage. A better approach is to send all PACS data from various vendors into a "Vendor-Neutral" storage repository. Both IBM Spectrum Scale and IBM Cloud Object Storage System, either linked together or used separately, can be part of a VNA solution.<br> </li> <li>VersaStack to simplify deployments. VersaStack is a Converged System that combines best-of-breed Cisco servers and switches with best-of-breed IBM storage, pre-cabled, pre-configured, and pre-loaded with all the necessary software to manage the environment as a single entity. This can reduce the time it takes to deploy new medical applications from weeks to just hours.</li> </ul> </dd> </dl> 
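The Sidra genome-mapping sizing mentioned earlier follows directly from the per-citizen estimate: 250,000 genomes at roughly 200 GB each works out to 50 PB. A back-of-envelope sketch in Python (the figures come from the post; decimal storage units are assumed):

```python
# Back-of-envelope check of the Sidra genome project sizing.
citizens = 250_000        # Qatari citizens to be sequenced
gb_per_genome = 200       # approximate data generated per citizen

total_gb = citizens * gb_per_genome
total_pb = total_gb / 1_000_000   # decimal units: 1 PB = 1,000,000 GB
print(total_pb, "PB")             # 50.0 PB
```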
Early Bird Special for IBM Systems Technical University 2017 in Orlando Florida | TonyPearson | 2017-04-06 (updated 2017-04-27)

<p dir="ltr"><a><img alt="Banner-IBM-TechU-Orlando-2017" height="131" src="" width="500"></img></a></p> <p dir="ltr">... server and storage products at one conference.</p> <p dir="ltr">There are over 600 topics that will be presented! You can take a look at the [<a href="">IBM Technical Events Agenda Preview Tool</a>].</p> <p dir="ltr">I will be there! Here are the topics I will be presenting:</p> <table border="2" dir="ltr" width="99%"> <tbody> <tr> <th>Date</th> <th>Time</th> <th>Title</th> </tr> <tr> <td bgcolor="#FFCC99" rowspan="3">Monday</td> <td align="right" bgcolor="#99CCFF">10:15am</td> <td bgcolor="#99CCFF">The pendulum swings back -- Understanding Converged and Hyperconverged environments</td> </tr> <tr> <td align="right" bgcolor="#FFFFFF">11:30am</td> <td bgcolor="#FFFFFF">IBM cloud storage options</td> </tr> <tr> <td align="right" bgcolor="#99CCFF">03:15pm</td> <td bgcolor="#99CCFF">Software Defined Storage -- Why? What? 
How?</td> </tr> <tr> <td bgcolor="#99FFFF" rowspan="3">Tuesday</td> <td align="right" bgcolor="#FFFFFF">09:00am</td> <td bgcolor="#FFFFFF">Business continuity -- The seven tiers of business continuity and disaster recovery</td> </tr> <tr> <td align="right" bgcolor="#99CCFF">03:15pm</td> <td bgcolor="#99CCFF">Introduction to object storage and its applications - Cleversafe</td> </tr> <tr> <td align="right" bgcolor="#FFFFFF">04:30pm</td> <td bgcolor="#FFFFFF">New generation of storage tiering: Less management, lower investment and increased performance</td> </tr> <tr> <td bgcolor="#FFCC99">Thursday</td> <td align="right" bgcolor="#99CCFF">01:45pm</td> <td bgcolor="#99CCFF">IBM Spectrum Scale for file and object storage</td> </tr> </tbody> </table> <div dir="ltr" style="float:left;padding:20px;"><a><img alt="Hyper-Scale-Manager-orlando2017" height="160" src="" width="320"></img></a></div> <p dir="ltr">This conference is not all lectures, which some refer to as "Death by Powerpoint".</p> <p dir="ltr">There will also be a variety of hands-on labs. 
I recommend participating in the hands-on session to feel and witness the next release of IBM Hyper-Scale Manager, which is the management application for what IBM calls its A-line storage family -- FlashSystem A9000/R, XIV Storage System, and Spectrum Accelerate software.</p> <p dir="ltr">Hyper-Scale Manager is the most advanced GUI on the market today, and may help cut your management total cost of ownership (TCO) in half!</p> <p dir="ltr">You can [<a href="">Enroll Today!</a>] There is an "early-bird" special to save hundreds of dollars if you enroll by April 16!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#ibmtechu</a>, <a href="" rel="tag">IBM Technical Events</a>, <a href="" rel="tag">#ibmedge</a>, <a href="" rel="tag">Pendulum Swings</a>, <a href="" rel="tag">Converged Systems</a>, <a href="" rel="tag">Hyperconverged</a>, <a href="" rel="tag">Cloud Storage</a>, <a href="" rel="tag">Software Defined Storage</a>, <a href="" rel="tag">SDS</a>, <a href="" rel="tag">Business Continuity</a>, <a href="" rel="tag">Disaster Recovery</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">Cleversafe</a>, <a href="" rel="tag">Storage Tiering</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">early bird</a></p>

New IBM Cloud Object Storage Offerings for March 2017 | TonyPearson | 2017-04-04

<p dir="ltr">Well, it's Tuesday again, and you know what that means? 
IBM Announcements!</p> <p dir="ltr">Last week, at the InterConnect conference, IBM's premier Cloud and Mobile event, there were announcements regarding IBM Cloud Object Storage.</p> <dl dir="ltr"> <dt><b>IBM Cloud Object Storage Cold Vault</b></dt> <dd> <p>It seems that all the major Cloud Service Providers (CSPs) offer storage for warm, cool, and cold data.</p> <p>IBM Cloud Object Storage Cold Vault service gives clients access to cold storage data on the IBM Cloud and is designed to lead the category for cold data among major competitors. Cold Vault joins the existing Standard and Vault tiers.</p> <p>Fellow IBM blogger Meryl Veramonti describes the three tiers well in her post [<a href="">Bluemix Expands IBM Cloud Object Storage Availability</a>]:</p> <dl> <dt><u>Cold Vault</u> (new!)</dt> <dd> <ul> <li>For rarely accessed mixed workloads and applications</li> <li>Archiving, long-term data retention, historical data compliance</li> </ul> </dd> </dl> <dl> <dt><u>Vault</u></dt> <dd> <ul> <li>For cooler workloads accessed once a month or less</li> <li>Archiving, short-term data retention, digital asset preservation, tape replacement and disaster recovery</li> </ul> </dd> </dl> <dl> <dt><u>Standard</u></dt> <dd> <ul> <li>For warm data, active workloads accessed multiple times per month</li> <li>DevOps, Social, mobile, collaboration and analytics</li> </ul> </dd> </dl> <p>These tiers are available in both "Regional" and "Cross Regional" deployment models. Customers can choose between Regional Service, which spreads their data across multiple data centers in a given region, and Cross Regional Service, which spans at least three geographically dispersed regions. 
For example, Regional may have data in three data centers all located in Texas, but Cross-Regional might have data in California, Texas and New York.</p> </dd> <br> <br> <br> <dt><b>IBM Cloud Object Storage Flex</b></dt> <dd> <p>In the past, clients used "Information Lifecycle Management" (ILM) policies to place data where it was initially needed, then move it to less expensive tiers of storage as it aged and was accessed less frequently. This is done between Flash, Disk and Tape storage on premises, but the concept can also be applied to off-premises Cloud storage.</p> <p>Flex is a new cloud storage service offering simplified pricing for clients whose data usage patterns are difficult to predict. Flex enables clients to benefit from the cost savings of cold storage for rarely accessed data, while maintaining high accessibility to all data.</p> <p>Data that is accessed several times per month (warm) will be charged at Standard rates, data that is accessed once a month or so (cool) will be charged at Vault rates, and data that is rarely accessed (cold) will be charged at Cold Vault rates.</p> <p>In effect, this gives you ILM cost savings without ever having to move your data between the Standard, Vault or Cold Vault tiers.</p> </dd> <br> <br> <br> <dt><b>IBM Cloud Object Storage partnership with NetApp AltaVault</b></dt> <dd> <p>Many backup software products have been enhanced to write directly to IBM Cloud Object Storage, including IBM Spectrum Protect (formerly known as IBM Tivoli Storage Manager, or TSM for short), as well as Commvault Simpana and Veritas NetBackup.</p> <p>IBM extends this partnership to NetApp. 
</p> </dd> <br> <br> <br> <dt><b>IBM Cloud Object Storage for Bluemix Garage</b></dt> <dd> <p>For software developers, [<a href="">IBM Bluemix Garage</a>] provides a global consultancy with the DNA of a startup, combining the best practices of Design Thinking, Agile Development and Lean Startup to accelerate application development and cloud innovation.</p> <p>Now, you can use IBM Cloud Object Storage in the IBM Cloud to support Garage method projects.</p> </dd> </dl> <p dir="ltr">Want to try it out? With Promo Code "COSFREE", you can get a free year of Standard Cross-region IBM Cloud Object Storage with up to 25 GB/month access! See [<a href="">Free Storage Promotion</a>] for details.</p> <p dir="ltr">To learn more about these announcements, see the [<a href="">New Pricing Model to Change Economics of Cloud Storage</a>] press release.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">#InterConnect</a>, <a href="" rel="tag">IBM Cloud</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">Cloud Service Provider</a>, <a href="" rel="tag">CSP</a>, <a href="" rel="tag">Cold Vault</a>, <a href="" rel="tag">Cloud Object Flex</a>, <a href="" rel="tag">Spectrum Protect</a>, <a href="" rel="tag">Commvault</a>, <a href="" rel="tag">NetBackup</a>, <a href="" rel="tag">NetApp AltaVault</a>, <a href="" rel="tag">Bluemix Garage</a>, <a href="" rel="tag">COSFREE</a></p> 
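The Flex billing rule described above amounts to classifying each object by its observed access frequency and charging the matching tier's rate, without ever moving the data. A hypothetical sketch of that rule (the tier names come from the post; the thresholds, per-GB rates, and function names below are invented purely for illustration):

```python
# Hypothetical sketch of the Flex charging rule: data is billed at the
# tier matching its observed access frequency, with no data movement
# between Standard, Vault, and Cold Vault. Rates here are made up.
RATES_PER_GB = {"standard": 0.03, "vault": 0.01, "cold_vault": 0.004}

def tier_for(accesses_per_month: int) -> str:
    if accesses_per_month >= 2:      # warm: several times per month
        return "standard"
    if accesses_per_month == 1:      # cool: about once a month
        return "vault"
    return "cold_vault"              # cold: rarely accessed

def monthly_charge(objects):
    """objects: iterable of (size_gb, accesses_per_month) tuples."""
    return sum(size * RATES_PER_GB[tier_for(hits)] for size, hits in objects)

print(tier_for(0))                                          # cold_vault
print(round(monthly_charge([(100, 5), (100, 1), (100, 0)]), 2))  # 4.4
```

The point of the sketch is the design choice the post highlights: the classification replaces an explicit ILM data-movement policy, so the cost savings come from billing, not migration.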
Advice for working the booth at Trade Shows | TonyPearson | 2017-03-20

<p dir="ltr">This week, the IBM InterConnect conference is going on in Las Vegas, Nevada.</p> <div dir="ltr" style="float:left;padding:20px;"><a><img alt="gondoleer" height="282" src="" width="500"></img></a></div> <p dir="ltr">One time in Las Vegas, I took the gondola ride at the Venetian Hotel. These are not boats with a motor on a chain or track, but are actually steered and propelled independently by the gondolier. At various points on our path, our gondolier would serenade our group with beautiful Italian songs.</p> <p dir="ltr">As the ride was ending, I asked our gondolier how long their training program was to do this job. He told me "six weeks". I said "Wow, I would love to learn how to sing Italian songs like that in six weeks". He corrected me, "No, silly, they only hire experienced singers, and spend six weeks teaching them to manage the gondola by turning the oar in the water."</p> <blockquote dir="ltr">(<b>FCC Disclosure:</b> I work for IBM. I have no financial interest in the Venetian Hotel, CBS Studios, or the producers of any television shows mentioned in this post. David Spark has provided me a complimentary copy of his book. This blog post can be considered an "unpaid celebrity endorsement" for the book reviewed below.)</blockquote> <p dir="ltr">InterConnect 2017 includes "Concourse", a trade show floor with people showing off the latest technologies. In the past 25 years, I have attended many conferences, and on occasion I have worked "booth duty". 
I am not in Las Vegas this week, so this post is advice to those that are.</p> <p dir="ltr">One time, when the coordinators for an upcoming conference announced at an all-hands meeting that they were looking for "a number of knowledgeable and outgoing volunteers" to work the IBM booth, one of the employees in the audience asked "How many of each?" While this might have been meant to draw laughs, it underscored a real problem.</p> <div dir="ltr" style="float:right;padding:20px;"><a><img alt="the-big-bang-theory-cast__140615222725" height="299" src="" width="446"></img></a></div> <p dir="ltr">In many IT and engineering fields, the terms "knowledgeable" and "outgoing" are seen as mutually exclusive. People are either one or the other. A study titled [<a href="">Personality types in software engineering</a>], by Luiz Fernando Capretz of The University of Western Ontario, analyzed Myers-Briggs Type Indicator personality types and found that the majority of engineers were "Introverts".</p> <p dir="ltr">This line of thinking is further reinforced by the various characters on television shows like "The Big Bang Theory". If you are familiar with the show, Sheldon and Amy are the most knowledgeable, but also the most socially awkward, while Penny and Howard are less knowledgeable but at the more outgoing end of the spectrum.</p> <p dir="ltr">I understand that for many engineers, working a booth at a trade show is far outside their "comfort zone". But what do you think is more likely: that you can train an engineer, in six weeks, to work a booth, be more outgoing, hold the right conversations, and tell the right stories -- or -- train a professional model, a young, good looking man or woman who is already outgoing and friendly, to answer technical engineering questions about your products and services?</p> <p dir="ltr">I have been attending conferences for over 25 years, and occasionally have worked a booth or two. 
I started out as an engineer, but went through extensive training for public speaking, talking to the media and press, and moderating Q&A expert panels.</p> <p dir="ltr">Sadly, most people who work the booth get little to no training at all. You might be told your scheduled hours, how to scan bar codes on badges, and where the brochures and swag are stored. Then, you get your official "shirt" and are told to wear it with pants of a certain color, so that everyone looks like part of the team.</p> <div dir="ltr" style="float:left;padding:20px;"><a><img alt="Cover-three-feet-book" height="500" src="" width="335"></img></a></div> <p dir="ltr">Fortunately, fellow blogger David Spark, of Spark Media Solutions, has written a book titled "Three Feet from Seven Figures" with loads of advice on how to work a booth, with one-on-one engagement techniques to qualify more leads at trade shows.</p> <p dir="ltr">The title of his book warrants a bit of explanation. When you are working a booth, potential buyers and influencers are walking by, often just three feet away from you, and these could represent million-dollar opportunities.</p> <p dir="ltr">Too often, the folks working a booth take a passive approach. They look down at their phones, chat with their colleagues, and basically wait for complete strangers to ask them a question or request a demo. This non-verbal communication can really be a turn-off. David explains this in all-too-familiar detail, and shows how to be more actively engaged.</p> <p dir="ltr">David shows how to break the ice and build rapport with each attendee, how to qualify them as legitimate leads, and how to handle each type of situation.</p> <p dir="ltr">For qualified leads, you need to maximize the opportunity. 
If you consider how much a company spends to send its employees to work the booth, plus the cost of the booth itself, and divide it by the limited number of hours that the trade show floor is open, you quickly realize that each hour is precious.</p> <p dir="ltr">Your time is valuable, and certainly their time is valuable also. Don't spend too much time on a single lead; rather, capture the information, end the conversation, and move on.</p> <p dir="ltr">If you are working a booth at IBM InterConnect, or plan to work a booth at an event later this year, I highly recommend getting this book! It is available in a variety of hard copy and online formats at [<a href="">ThreeFeetBook.com</a>].</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">InterConnect</a>, <a href="" rel="tag">Venetian Hotel</a>, <a href="" rel="tag">Las Vegas</a>, <a href="" rel="tag">InterConnect Concourse</a>, <a href="" rel="tag">CBS Studios</a>, <a href="" rel="tag">Big Bang Theory</a>, <a href="" rel="tag">comfort zone</a>, <a href="" rel="tag">David Spark</a>, <a href="" rel="tag">Three Feet from Seven Figures</a></p> This week, the IBM InterConnect conference is going on in Las Vegas, Nevada. One time in Las Vegas, I took the gondola ride at the Venetian Hotel. These are not boats with a motor on a chain or track, but are actually steered and propelled independently by the...
IBM Conference Sandwich for 2017 | TonyPearson | 2017-03-20T12:03:03-04:00 <p dir="ltr">This week, IBM begins its first of three major conferences.</p> <dl dir="ltr"> <dt><b>IBM InterConnect - March 19-23, 2017 - Las Vegas, NV</b></dt> <dd> <p>IBM InterConnect is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Cloud, Mobile and DevOps technologies.</p> <p>Nearly every IBM storage product can support a Cloud deployment. I presented IBM Cloud Storage options at IBM InterConnect 2016. Here were my blog posts from that event.</p> <ul> <li>[<a href="">Day 0 - IBM Research Presentations</a>]</li> <li>[<a href="">Day 1 - Morning sessions</a>]</li> <li>[<a href="">Day 1 - Afternoon sessions</a>]</li> <li>[<a href="">Day 2 - Break-out sessions</a>]</li> <li>[<a href="">Day 3 - Morning break-out sessions</a>]</li> <li>[<a href="">Day 3 - Lunchtime break-out sessions</a>]</li> <li>[<a href="">Day 3 - Afternoon break-out sessions</a>]</li> <li>[<a href="">Day 4 - Final break-out sessions</a>]</li> </ul> <p>I am not in Las Vegas this week for this year's event, but the sessions will be streamed live through [<a href="">IBM GO</a>].</p> </dd> <br> <br> <br> <dt><b>IBM Systems Technical University - May 22-26, 2017 - Orlando, FL</b></dt> <dd><img alt="Banner-IBM-TechU-Orlando-2017" height="131" src="" width="500"></img> <p>IBM Systems Technical University is the evolution of a variety of other conferences related to servers, storage and software.
It started out as the "IBM Storage Symposium", then added "System x" servers and was renamed "Storage and System x University", then dropped "System x" when IBM sold off that business to Lenovo.</p> <p>A few years ago, it was renamed "Edge", initially focused just on Storage, but then two years ago it combined with System z mainframe servers and POWER Systems for IBM i and AIX platforms. It also covers software products that previously had their own conferences, like IBM Pulse or MaximoWorld.</p> <p>Last year, the IBM Marketing team tried a daring experiment: change "Edge" into a "Cognitive Solutions and Cloud Platform" conference, with an emphasis on IT Infrastructure.</p> <p>The experiment failed. Not because IBM Systems don't support these new initiatives, but because the audience was more interested in hearing how IBM Systems help their current day-to-day business. As many attendees told me, "If we wanted to hear about Cognitive or Cloud, we have plenty of other conferences that cover that already!"</p> <p>While 40 percent of IBM revenues are generated from Cognitive Solutions and Cloud Platform, the other 60 percent come from traditional, on-premise, systems-of-record application workloads, the kind that businesses, non-profit groups, and government agencies have been using for the past few decades!</p> <p>To address this need, IBM offered three-day "IBM Systems Technical University" events at various locations. Last year, I presented storage topics at events in Atlanta, Austin, Bogota, Boston, Chicago, Dubai, Nairobi, and São Paulo.</p> <p>We will have several of those this year as well. The main one will be a full 5-day event, May 22-26, in Orlando, Florida.
I will be there presenting various sessions on storage!</p> </dd> <br> <br> <br> <dt><b>IBM World of Watson - October 29-November 2, 2017 - Las Vegas, NV</b></dt> <dd> <p>This is a Cognitive Solutions and Cloud Platform conference, with an emphasis on Analytics and Database technologies.</p> <p>I did not attend World of Watson, or WoW for short, last year, but it was an evolution of the conference previously called "IBM Insight". I am sure everything from DB2 and Open Source databases to Hadoop and Spark will be covered this year as well.</p> </dd> </dl> <p dir="ltr">In writing this post, I realize that this year will be like a "Conference Sandwich". Cognitive-and-Cloud at the top and bottom, with all the meat, veggies and garnish in the middle!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">InterConnect</a>, <a href="" rel="tag">Cognitive Solutions</a>, <a href="" rel="tag">Cloud Platform</a>, <a href="" rel="tag">IBM Cloud</a>, <a href="" rel="tag">Mobile</a>, <a href="" rel="tag">DevOps</a>, <a href="" rel="tag">IBM GO</a>, <a href="" rel="tag">Las Vegas</a>, <a href="" rel="tag">IBM Systems</a>, <a href="" rel="tag">IBM Systems Technical University</a>, <a href="" rel="tag">Orlando FL</a>, <a href="" rel="tag">IBM Pulse</a>, <a href="" rel="tag">Storage Symposium</a>, <a href="" rel="tag">Storage University</a>, <a href="" rel="tag">World of Watson</a></p>
IBM Storage Event in San Juan Puerto Rico | TonyPearson | 2017-03-18T15:50:54-04:00 <div dir="ltr" style="float:right;padding:20px;"><img alt="IBM_Storage-Promo-Video-2017-Mar" height="158" src="" width="320"></img><br> Watch the [<a href="">video on YouTube</a>]</div> <p dir="ltr">This week, IBM sponsored a nice multi-client event in San Juan, Puerto Rico. I was quite impressed with the quality of this video. Our marketing department has really done a good job on this!</p> <p dir="ltr">This event was not just multi-client, but also spanned different industry sectors. IBM has recently realigned to five different sectors, and we had clients from several of them attending the event.</p> <p dir="ltr">The night before, I was able to meet most of the other IBM executives who came down for the event. Unfortunately, two were delayed because of the snow storms in the Northeast part of the United States, but they were able to arrive the next day.</p> <div dir="ltr" style="float:left;padding:20px;"><img alt="San Juan at day" height="240" src="" width="320"></img></div> <p dir="ltr">The venue was the El Touro restaurant, near the Hilton Caribe. The weather was just right, about 75 degrees and breezy. It was a little humid for me, but everyone else was just happy to be out of the cold. Meanwhile, it was nearly 90 degrees in Tucson, Arizona, where I am from.</p> <p dir="ltr">This was billed as a "Lunch and Learn" and the food was delicious! In an effort to keep it simple, we had small dishes of fish with a fruit-based cream sauce, paella with rabbit meat and rice, pork belly, Crema Catalana and a churro for dessert.
This gave everyone a sample taste of everything, without having to order off a menu.</p> <div dir="ltr" style="float:right;padding:20px;"><img alt="Pres-2017-March15" height="247" src="" width="320"></img><br> Slides are available on IBM Expert<br> Network on [<a href="">Slideshare.net</a>]</div> <p dir="ltr">We basically took the same approach with the presentation. First, Marcos Obermaeir and Marcos Otero, the two leads for this event, thanked the audience and explained their new roles. Marcos Obermaeir is focused on the Financial and Insurance sector, while Marcos Otero is focused on the Communications sector.</p> <p dir="ltr">Next, we had Debbie Niven and Roopam Master, both IBM executives, explain their roles and how IBM can help both clients and Business Partners in Puerto Rico.</p> <div dir="ltr" style="float:left;padding:20px;"><img alt="San Juan at night" height="320" src="" width="320"></img></div> <p dir="ltr">I presented samples of much larger presentations on three topics. First, the excitement over Software Defined Storage with the IBM Spectrum Storage family of products. Second, IBM Spectrum Scale as a better replacement for the Hadoop File System (HDFS) in Hadoop, IBM BigInsights and Hortonworks analytics deployments. Third, IBM Cloud Object Storage, and how it can be combined with IBM Spectrum Protect to back up your data to object storage either on premises or in the Cloud.</p> <p dir="ltr">I could have easily spoken an hour on each topic, but instead we shortened each to about 20 minutes, in keeping with the "Tapas" theme of the restaurant. This allowed those clients who wanted to hear more to have a reason to request a follow-up visit or call.</p> <p dir="ltr">After the clients left, the IBM team had a reception for the IBM Business Partners.
About 80 percent of IBM's storage business in Puerto Rico is done through IBM Business Partners, so they are an important link in IBM's "Go-to-Market" strategy.</p> <p dir="ltr">The moon was nearly full, and the breeze and waves were a spectacular backdrop to the conversations I had with each person I met.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">San Juan</a>, <a href="" rel="tag">Puerto Rico</a>, <a href="" rel="tag">El Touro</a>, <a href="" rel="tag">Hilton Caribe</a>, <a href="" rel="tag">Marcos Obermaeir</a>, <a href="" rel="tag">Marcos Otero</a>, <a href="" rel="tag">Deborah Niven</a>, <a href="" rel="tag">Roopam Master</a>, <a href="" rel="tag">IBM Expert Network</a>, <a href="" rel="tag">Software Defined Storage</a>, <a href="" rel="tag">Spectrum Storage</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">Hadoop</a>, <a href="" rel="tag">Hortonworks</a>, <a href="" rel="tag">HDFS</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">Spectrum Protect</a></p> IBM Announcements for March 14 | TonyPearson | 2017-03-15T07:04:01-04:00 <p dir="ltr">Well, it's Tuesday again, and you know what that means? IBM Announcements!</p> <dl dir="ltr"> <dt><b>IBM Storwize V5030F and V7000F all-flash high-density expansion enclosure</b></dt> <dd> <p>The 5U-high, 92-drive expansion enclosure introduced for the IBM Storwize V5000 and V7000 is now available for the all-flash models V5030F and V7000F.
The high-density expansion enclosure Model A9F requires IBM Spectrum Virtualize Software V7.8, or later, for operation.</p> <p>The enclosure allows any mix of "Tier 0" write-endurance SSDs at 1.6TB and 3.2TB capacities, and "Tier 1" read-intensive SSDs at 1.92TB, 3.84TB, 7.68TB and 15.36TB capacities.</p> <p>Storwize V5030F control enclosure models support attachment of up to 40U of expansion enclosures, which equates to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 1,056 per clustered system.</p> <p>Storwize V7000F control enclosure models support attachment of up to eight high-density expansion enclosures, up to 760 drives per control enclosure, and up to 3,040 drives per clustered system.</p> <p>To learn more, see the [<a href="">IBM Storwize V5030F and V7000F all-flash high-density expansion enclosure</a>] press release.</p> </dd> <dt><b>Spectrum Virtualize 7.8.1 Announcement</b></dt> <dd> <p>IBM has adopted an "Agile" process for all of its IBM Spectrum Storage software. Spectrum Virtualize is offered in a variety of forms: IBM offers the FlashSystem V9000, SAN Volume Controller, the Storwize family, and Spectrum Virtualize as software that runs on Lenovo and SuperMicro servers. This means quarterly delivery of new features and functions!</p> <p>Lots of small enhancements were added in this release:</p> <ul> <li>Apply Quality-of-Service (QoS) to a Host Cluster in terms of IOPS and/or MB/s throughput.<br> </li> <li>SAN Congestion reporting, via buffer credit starvation reporting in Spectrum Control and via the XML statistics reporting, for the 16Gbps FCP Host Bus Adapter (HBA).<br> </li> <li>Resizing of thin-provisioned volumes in Metro Mirror and Global Mirror remote copy relationships.<br> </li> <li>Consistency Protection for Metro Mirror and Global Mirror.
You can now define "Change Volumes" to be used in the event of problems with MM or GM; the system will switch over to GMCV mode.<br> </li> <li>Increased FlashCopy Background Copy Rates<br> </li> <li>Proactive Host Failover during temporary and permanent node removals from the cluster</li> </ul> <p>To learn more, read fellow IBM blogger Barry Whyte's summary in his [<a href="">Spectrum Virtualize 7.8.1 Announcement</a>] blog post.</p> </dd> <dt><b>IBM Aspera Files Elite edition</b></dt> <dd> <p>IBM has updated the editions of its IBM Aspera® Files cloud service:</p> <ul> <li>Personal edition now includes 20 authorized users and a single workspace.<br> </li> <li>Business edition now includes 100 authorized users, 100 workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single Sign-On.<br> </li> <li>Enterprise edition now includes 500 authorized users, no limit on the number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, and support for Single Sign-On.</li> </ul> <p>IBM is now introducing a new "Elite edition" that includes 2500 authorized users, no limit on the number of workspaces, support for IBM Aspera Drive, support for IBM Mobile applications, support for Single Sign-On, and access to the IBM Aspera Developer Network and a nonproduction organization.</p> <p>With the addition of the new Elite edition, clients have the flexibility to subscribe to additional functionality in Aspera Files that helps provide higher value and greater differentiation. The Elite edition is available as a subscription and on a pay-per-use basis.</p> <p>In addition to the existing charge metric of data transferred, a user subscription metric is now included for all four editions.
Each edition comes with an included number of authorized users in addition to other key features and capabilities.</p> <p>To learn more, see the [<a href="">IBM Aspera Files expands Cloud Service offering with new Elite edition</a>] press release.</p> </dd> </dl> <p dir="ltr">Whether you buy storage solutions as software, as pre-built systems, or as a cloud service, IBM has something for you!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Storwize</a>, <a href="" rel="tag">Storwize V5030F</a>, <a href="" rel="tag">Storwize V7000F</a>, <a href="" rel="tag">all-flash array</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">Solid State Drive</a>, <a href="" rel="tag">SSD</a>, <a href="" rel="tag">expansion enclosure</a>, <a href="" rel="tag">QoS</a>, <a href="" rel="tag">Metro Mirror</a>, <a href="" rel="tag">Global Mirror</a>, <a href="" rel="tag">Change Volumes</a>, <a href="" rel="tag">Barry Whyte</a>, <a href="" rel="tag">Aspera Files</a>, <a href="" rel="tag">Elite Edition</a></p> IBM Announcements for Feb 28 for Spectrum Storage | TonyPearson | 2017-02-28T19:01:56-05:00 <p dir="ltr">Well, it's Tuesday again, and you know what that means? IBM Announcements!</p> <p dir="ltr">There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.</p> <dl dir="ltr"> <dt><b>IBM Spectrum Virtualize Software V7.8.1</b></dt> <dd> <p>IBM Spectrum Virtualize&trade; V7.8.1 is the latest software for FlashSystem V9000, SAN Volume Controller and Storwize products.</p> <p>Last release, IBM introduced "Host Groups" for clusters that needed to share a common set of volumes.
This release offers "Host cluster I/O throttling": I/O throttling can be managed at the host level (individual or groups) and at the managed disk level for improved performance management, with GUI support.</p> <p>Increased background FlashCopy transfer rates: This feature enables you to increase the rate of background FlashCopy transfers, providing faster copies as the infrastructure allows. This takes advantage of the higher performance capabilities of today's systems, processing the copy in a shorter period of time. The default was 64 MB/sec, and now we can go up to 2 GB/sec, for those who want their FlashCopy to be done as fast as possible.</p> <p>Port Congestion Statistic: Zero-buffer-credit statistics help detect SAN congestion in performance-related issues, improving support in high-performance environments. IBM had this for the 8Gbps FCP cards, but not for the 16Gbps cards, so now that's fixed.</p> <p>Resizing of volumes in remote mirror relationships: Target volumes in remote mirror relationships will be automatically resized when source volumes are resized. Lots of clients asked for this, and IBM delivered!</p> <p>Consistency protection for Metro/Global Mirror relationships: An automatic restart of mirroring relationships after a link fails between the mirror sites improves disaster recovery scenarios, helping to ensure the applications are protected throughout the process.</p> <p>When IBM introduced "Global Mirror with Change Volumes" (GM CV), I wanted to call it "Trickle Mirror", because the primary site takes a FlashCopy, trickles the data over, then takes a FlashCopy at the remote site. Now, clients using traditional Metro or Global Mirror can add "Change Volumes" as protection.
In the unlikely event a network disruption occurs, it drops down to GMCV until the link resumes full speed.</p> <p>Support of SuperMicro servers for the Spectrum Virtualize as Software Only offering: Support for x86-based Intel servers by SuperMicro for Spectrum Virtualize Software is available with this release.</p> <p>Last year, IBM offered Spectrum Virtualize as software that could run on Lenovo servers. However, now there are clients who want alternative server choices.</p> <p>The Supermicro SuperServer 2028U-TRTP+ is supported to run Spectrum Virtualize Software. This is a great option for end clients, managed service or cloud service providers deploying private clouds, building hosted services, or using software-defined storage on third-party Intel servers. This is a fully inclusive license with all key features available on Spectrum Virtualize in a single, downloadable image.</p> <p>To learn more, see the [<a href="">IBM Spectrum Virtualize Software V7.8.1</a>] press release.</p> </dd> <dt><b>IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13</b></dt> <dd> <p>We often joke that IBM Virtual Storage Center is the [<a href="">Happy Meal</a>] combining storage virtualization with Spectrum Virtualize hardware like FlashSystem V9000, SAN Volume Controller or Storwize as the "hamburger", Spectrum Control as the "fries" and Spectrum Protect Snapshot as the "soft drink". Storage Analytics was included as a "prize inside" only available in the VSC bundle, to entice clients to choose this option.</p> <p>Whenever IBM updates Spectrum Control, it often puts out a new version of the Virtual Storage Center bundle as well. I was the Chief Architect for Spectrum Control in 2001-2002, and Technical Evangelist for SVC in 2003 when we first introduced the product, so I have a long history with both products.</p> <p>This release provides additional information and performance metrics on Dell EMC VMAX and EMC VNX devices.
This is done natively; they do not need to be virtualized by Spectrum Virtualize, as was often done in the past.</p> <p>IBM now offers better visibility of drives within IBM Cloud Object Storage Slicestor® nodes. IBM acquired Cleversafe 18 months ago, and is working to bring it under the Spectrum Control management umbrella.</p> <p>IBM Spectrum Scale™ file system to external pool correlation: Spectrum Scale can migrate data to three different types of "external pools":</p> <ol> <li>Cloud Object pool, either on-premise Object Storage or off-premise Cloud Service Provider storage.</li> <li>Spectrum Protect pool, where Spectrum Protect manages the migrated data on one of 700 supported devices, including tape, virtual tape, optical, flash, disk, object storage or cloud.</li> <li>Spectrum Archive pool, where data is written directly to physical tape using the industry-standard LTFS format.</li> </ol> <p>This release provides additional information on the copy data panel about SAN Volume Controller (SVC) HyperSwap® and vDisk mirror.</p> <p>While the "Virtual Storage Center" bundle is an awesome deal, some clients have asked for the "Vegetarian Option" (Fries and Drink only). Why? Because they want the advanced storage analytics (prize inside) for other devices like DS8000, XIV, etc. So, IBM created the "IBM Spectrum Control Advanced Edition", which has everything in VSC except the Spectrum Virtualize itself.</p> <p>Advanced edition adds improvements to the chargeback report. It also includes the IBM Spectrum Protect™ Snapshot V8.1 release.</p> <p>To learn more, see the [<a href="">IBM Spectrum Control V5.2.13 and IBM Virtual Storage Center V5.2.13</a>] press release.</p> </dd> <dt><b>IBM Spectrum Control Storage Insights Software as a Service</b></dt> <dd> <p>Storage Insights is IBM's "Software-as-a-Service" reporting-only offering, a subset of Spectrum Control Advanced Edition. It includes direct support for Dell EMC VMAX, VNX, and VNXe storage systems. This is huge!
Now, clients who have only EMC hardware can, on a monthly basis, figure out where they are wasting money and decrease their costs.</p> <p>Other features carried over include the enhanced drive support for IBM® Cloud Object Storage, enhanced external capacity views for IBM Spectrum Scale™, and the additional replication views for vDisk mirror and HyperSwap® relationships for SAN Volume Controller (SVC) and Storwize® devices that I mentioned above.</p> <p>To learn more, see the [<a href="">IBM Spectrum Control Storage Insights Software as a Service</a>] press release.</p> </dd> </dl> <p dir="ltr">For the Tape and Cloud announcements, see my other post!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Spectrum Virtualize</a>, <a href="" rel="tag">Spectrum Storage</a>, <a href="" rel="tag">Metro Mirror</a>, <a href="" rel="tag">Global Mirror</a>, <a href="" rel="tag">Global Mirror with Change Volumes</a>, <a href="" rel="tag">Host Groups</a>, <a href="" rel="tag">Host Clusters</a>, <a href="" rel="tag">FlashCopy</a>, <a href="" rel="tag">SuperMicro</a>, <a href="" rel="tag">Spectrum Control</a>, <a href="" rel="tag">Virtual Storage Center</a>, <a href="" rel="tag">Storage Insights</a>, <a href="" rel="tag">VMAX</a>, <a href="" rel="tag">VNX</a>, <a href="" rel="tag">VNXe</a>, <a href="" rel="tag">IBM Cloud</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">Object Storage</a>, <a href="" rel="tag">SAN Volume Controller</a>, <a href="" rel="tag">SVC</a>, <a href="" rel="tag">Storwize</a>, <a href="" rel="tag">FlashSystem V9000</a></p> Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family. IBM Spectrum...
IBM Announcements for Feb 28 Tape and Cloud Storage | TonyPearson | 2017-02-28T18:28:13-05:00 <p dir="ltr">Well, it's Tuesday again, and you know what that means? IBM Announcements!</p> <p dir="ltr">There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family.</p> <dl dir="ltr"> <dt><b>IBM TS7700 Virtual Tape System</b></dt> <dd> <p>IBM TS7700 release 4.1.1 now supports seven- and eight-way grids with approved RPQs. Before this, grids could only have up to six TS7700 systems connected together.</p> <p>IBM also plans to extend the capacity of the TS7760 base frame to over 600 TB, and to extend the capacity of a fully configured TS7760 system to over 2.45 PB, before compression, by supporting 8 TB disk drives. This is a huge increase over the 4TB and 6TB drives used today.</p> <p>To learn more, see the [<a href="">TS7700 expands the grid to support up to 8 systems</a>] press release.</p> </dd> <dt><b>IBM Cloud Object Storage System</b></dt> <dd> <p>IBM offers the IBM Cloud Object Storage System in three ways: as software, as pre-built systems, and as a cloud service on IBM Bluemix (formerly known as SoftLayer).</p> <p>For those not familiar with IBM Cloud Object Storage (IBM COS), consider it "Valet Parking" for your storage. In a valet parking environment, you have valet parking attendants that drive the cars, parking garages that hold the cars, and a manager that oversees the operation. With IBM COS, you have Accesser® nodes that receive and retrieve your data like valet parking attendants, you have Slicestor® nodes that store your objects like cars in a parking garage, and you have the IBM COS Manager to oversee the operation.</p> <p>Today, IBM announced new HDD options for the S01, S02 and S03 models of Slicestor nodes. These are all 7200 rpm, 3.5-inch Nearline drives, at capacities of 4 TB, 6 TB, 8 TB and 10 TB.</p> <p>In addition, a short-range 40 GbE SFP+ transceiver is available for ordering on IBM Cloud Object Storage Accesser models A00, A01, and A02, and IBM Cloud Object Storage Slicestor models S01 and S02.
This improves the performance of data transfer between the Accesser nodes and the Slicestor nodes. Think of it like shortening the distance valet parking attendants have to drive your car to the garage and run back.</p> <p>To learn more, see the [<a href="">IBM Cloud Object Storage System HDD features</a>] press release.</p> </dd> <dt><b>IBM Multi-Cloud Data Encryption V1.0</b></dt> <dd> <p>I have been presenting on Cloud Storage for nearly 10 years now. People are often shocked to learn that most of the major cloud providers -- including Amazon, Google, Microsoft -- do not offer "Data at Rest" encryption on their storage offerings.</p> <p>Why not? Because it would mean investing in Self-Encrypting Drives, key management software, and other related technology to make it happen. Instead, Cloud Service Providers (CSPs) expect you to encrypt the data in software. Most users encrypt data before it lands on the cloud, but what if you create the data in the cloud?</p> <p>IBM solved this by offering IBM Cloud Object Storage in its IBM Cloud (formerly known as SoftLayer). It has integrated encryption software that takes care of this for you.</p> <p>This new product, IBM Multi-Cloud Data Encryption V1.0, enables you to encrypt files, folders, and volumes in <u>any</u> cloud while maintaining local control of encryption keys. It integrates with IBM Security Key Lifecycle Manager (SKLM). This is designed to allow you to move encrypted data between clouds that are running Multi-Cloud Data Encryption without decrypting and re-encrypting the data.</p> <p>For example, you can use IBM Multi-Cloud Data Encryption to protect your data on Amazon, Google or Microsoft, then later realize that you can save a ton of money by moving to IBM Cloud instead, and you are now able to move the data over seamlessly!</p> <p>To learn more, see the [<a href="">IBM Multi-Cloud Data Encryption V1.0</a>] press release.</p> </dd> </dl> <p dir="ltr">Lots of good stuff.
For details on the Spectrum Storage family, see my other post!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">TS7700</a>, <a href="" rel="tag">IBM Cloud</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">IBM COS</a>, <a href="" rel="tag">Accesser</a>, <a href="" rel="tag">Slicestor</a>, <a href="" rel="tag">Multi-Cloud Data Encryption</a>, <a href="" rel="tag">Security Key Lifecycle Manager</a>, <a href="" rel="tag">SKLM</a></p> Well, it's Tuesday again, and you know what that means? IBM Announcements! There were lots of announcements today, so I have split this up into two posts. One for the Tape and Cloud announcements, and the other for the Spectrum Storage family. IBM TS7700... 0 0 1358eb5885-adfe-4e0d-8728-147abc16f473 IBM already offers FLobject Storage TonyPearson 120000HQFF active false Entradas de comentarios Número de "me gusta" true 2017-01-30T11:19:32-05:00 2017-01-30T11:19:32-05:00 <p dir="ltr">Last November, fellow blogger Chris Mellor from The Register wrote an interesting article [<a href="">EMC crying two SAN breakup tears</a>].</p> <blocakqoute> (Back in 2010, I poked fun at EMC with my post [<a href="">VPLEX: EMC's Latest Wheel is Round</a>]. I pointed out that EMC's announcement of "new features" that already existed in IBM's SAN Volume Controller. Oops! They did it again!) <p>Basically, Dell EMC is working on a new "2 Tiers" approach that combines high-performance flash tier with high-capacity object storage. Guess what? IBM already offers this! 
Why wait?</p> <div style="float:left;padding:20px;"><a data-<img alt="SpectrumScale" height="320" src="" width="179"></img></a></div> <p>IBM Spectrum Scale, formerly known as the General Parallel File System (GPFS), supports POSIX, HDFS, OpenStack Swift, Amazon S3, NFS, SMB and iSCSI protocols.</p> <p>Spectrum Scale can provide this front-end abstraction layer between flash and object storage, including IBM Cloud Object Storage system and IBM Bluemix (formerly SoftLayer) cloud services.</p> <p>But why limit yourself to just two tiers? IBM Spectrum Scale can also support 15K, 10K and 7200 RPM spinning disk drive tiers, as well as virtual or physical tape tier, the ultimate low-cost high-capacity tier!</p> <p>Several years ago, IBM coined the phrase "FLAPE" to discuss the two-tier approach of combining Flash with Tape using Spectrum Scale as the front-end abstraction layer.</p> <p>Perhaps we should call combinations of Flash and Object "FLobject" storage? If the name catches on, you read it here first!</p> <p><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Chris Mellor</a>, <a href="" rel="tag">The Register</a>, <a href="" rel="tag">Dell EMC</a>, <a href="" rel="tag">2 Tiers</a>, <a href="" rel="tag">Spectrum Scale</a>, <a href="" rel="tag">GPFS</a>, <a href="" rel="tag">POSIX</a>, <a href="" rel="tag">HDFS</a>, <a href="" rel="tag">OpenStack Swift</a>, <a href="" rel="tag">Amazon S3</a>, <a href="" rel="tag">NFS</a>, <a href="" rel="tag">SMB</a>, <a href="" rel="tag">iSCSI</a>, <a href="" rel="tag">IBM Cloud Object Storage</a>, <a href="" rel="tag">IBM Bluemix</a>, <a href="" rel="tag">IBM SoftLayer</a>, <a href="" rel="tag">tape</a>, <a href="" rel="tag">FLAPE</a>, <a href="" rel="tag">FLobject</a></p> </blocakqoute> Last November, fellow blogger Chris Mellor from The Register wrote an interesting article [ EMC crying two SAN breakup tears ]. 
Guiding ethics principles for the Cognitive Era (TonyPearson, 2017-01-23) <p dir="ltr">IBM is in transition from being a "Systems, Software and Services" company to becoming the leading "Cognitive Solutions and Cloud Platform" company. IBM has been in this transformation for the past three years or so, and [<a href="">over 40 percent of its revenue</a>] now comes from these strategic initiatives.</p> <p dir="ltr">Last week, I wrote two blog posts on cognitive solutions: [<a href="">Pascal Compilers, Voltage Thresholds and Vending Machines</a>] and [<a href="">How Artificial Intelligence (AI) is depicted in movies</a>].</p> <p dir="ltr">This month at the [<a href="">World Economic Forum</a>], IBM's CEO Ginni Rometty presented [<a href="">IBM's guiding ethics principles for the Cognitive Era</a>]:</p> <dl dir="ltr"> <dt><b>Purpose: </b></dt> <dd> <p>[...]</p> <p>Cognitive systems will not realistically attain consciousness or independent agency. Rather, they will increasingly be embedded in the processes, systems, products and services by which business and society function -- all of which will and should remain within human control.</p> </dd> <dt><b>Transparency: </b></dt> <dd> <p>For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. 
Therefore, the IBM company will make clear:</p> <ul> <li>When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.<br> </li> <li>The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.<br> </li> <li>[...]</li> </ul> </dd> <dt><b>Skills: </b></dt> <dd> <p>The economic and societal benefits of this new era will not be realized if the human side of the equation is not supported. This is uniquely important with cognitive technology, which augments human intelligence and expertise and works collaboratively with humans.</p> <p>Therefore, the IBM company will work to help students, workers and citizens acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.</p> </dd> </dl> <div dir="ltr" style="float:right;padding:20px;"><a><img alt="IEEE-Ethically-Aligned" height="320" src="" width="247"></img></a><br> [<a href="">138 page PDF</a>]</div> <p dir="ltr">IBM is not alone in thinking about this. The Institute of Electrical and Electronics Engineers (IEEE) gathered more than 100 experts to [<a href="">draft a report on Ethical Design for AI</a>].</p> <p dir="ltr">The report is open to review and feedback. 
It is organized into eight sections:</p> <ol dir="ltr"> <li>General Principles</li> <li>Embedding values into Autonomous Intelligent Systems</li> <li>Methodologies to Guide Ethical Research and Design</li> <li>Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI)</li> <li>Personal Data and Individual Access Control</li> <li>Reframing Autonomous Weapon Systems</li> <li>Economics/Humanitarian Issues</li> <li>Law</li> </ol> <p dir="ltr">The developments in Artificial Intelligence and Cognitive Solutions can have a huge impact on society, so I am glad to see policy makers are investigating and thinking about how to proceed.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Ginni Rometty</a>, <a href="" rel="tag">Ethics</a>, <a href="" rel="tag">IEEE</a>, <a href="" rel="tag">Cognitive Era</a>, <a href="" rel="tag">Cognitive Solutions</a>, <a href="" rel="tag">Cognitive Systems</a>, <a href="" rel="tag">Artificial Intelligence</a></p> How Artificial Intelligence is depicted in movies (TonyPearson, 2017-01-20) <p dir="ltr">With IBM investing heavily in Cognitive Solutions, should people be worried, or welcome the new technology?</p> <p dir="ltr">Back in 1950, Isaac Asimov proposed the "Three laws of robots":</p> <ol dir="ltr"> <li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.</li> <li>A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.</li> <li>A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.</li> </ol> <p dir="ltr">Let's take a look at how Artificial Intelligence has been represented in the movies over the past few decades. 
I have put these in chronological order of when they were initially released in the United States.</p> <blockquote dir="ltr">(<b>FCC Disclosure and Spoiler Alert:</b> I work for IBM. This blog post can be considered a "paid celebrity endorsement" for cognitive solutions made by IBM. While IBM may have been involved or featured in some of these movies, I have no financial interest in them. I have seen them all and highly recommend them. I am hoping that you have all seen these, or are at least familiar enough with their plot lines that I am not spoiling them for you.)</blockquote> <dl dir="ltr"> <dt><b>2001: A Space Odyssey</b></dt> <dd> <div style="float:left;padding:20px;"><a><img alt="Space-Odyssey-220px-HAL9000.svg" height="150" src="" width="150"></img></a></div> <p>Back in 1968, Stanley Kubrick and Arthur C. Clarke made a masterpiece movie about a mysterious monolith floating near Jupiter. To investigate, a crew of human beings takes a space ship managed by a sentient computer named [<a href="">HAL-9000</a>].</p> <blockquote> <p>(Many people thought HAL was a subtle reference to IBM. Stanley Kubrick clarifies:</p> <p>[...]</p> <p>Now this is a pure coincidence, because HAL's name is an acronym of heuristic and algorithmic, the two methods of computer programming...an almost inconceivable coincidence. It would have taken a cryptographer to have noticed that."</p> <p>Source: The Making of 2001: A Space Odyssey, Eye Magazine Interview, Modern Library, p. 249)</p> </blockquote> <p>The problem arises when HAL-9000 refuses commands from the astronauts. 
The astronauts are not in control; HAL-9000 was given separate orders from ground control back on Earth, and it has determined it would be more successful without the crew.</p> </dd> <dt><b>Westworld</b></dt> <dd> <div style="float:right;padding:20px;"><a><img alt="Westworld-Yul-Brenner-Gunslinger-mondwest-12-g" height="320" src="" width="209"></img></a></div> <p>In 1973, Michael Crichton wrote and directed this movie about an amusement park with three uniquely themed areas: Medieval World, Roman World, and Westworld. Robots are used to staff the parks to make them more realistic, interacting with the guests in character appropriate for each time period.</p> <p>A malfunction spreads like a computer virus among the robots, causing them to harm or kill the park's guests. Yul Brynner played a robot called simply "the Gunslinger". Equipped with fast reflexes and infrared vision, the Gunslinger proves especially deadly!</p> <blockquote>(Michael Crichton also wrote "Jurassic Park", which had a similar story line involving dinosaurs with catastrophic results!)</blockquote> <p>Last year, HBO launched a TV series called "Westworld", based on the same themes covered in this movie. The first season of 10 episodes just finished, and the next season is scheduled for 2018.</p> </dd> <dt><b>Blade Runner</b></dt> <dd> <div style="float:right;padding:20px;"><a><img alt="Blade-Runner-art-roy-pris" height="135" src="" width="240"></img></a></div> <p>Directed by Ridley Scott, this 1982 movie stars Harrison Ford as Rick Deckard, a law enforcement officer. Rick is tasked to hunt down and "retire" four cognitive androids known as "replicants" that have killed some humans and are now in search of their creator, Dr. Eldon Tyrell, whom they reach with the help of a genetic designer named J. F. Sebastian.</p> <blockquote>(I enjoy the euphemisms used in these movies. Terms like kill, murder or assassinate apply to humans but not machines. The word "retire" in this movie refers to destruction of the robots. 
As we say in IBM, "retirement is not something you do, it is something done to you!")</blockquote> <p>Destroying machines does not carry the same emotional toll as killing humans, but this movie explores that empathy. A sequel called "Blade Runner 2049" will be released later this year.</p> </dd> <dt><b>WarGames</b></dt> <dd> <div style="float:right;padding:20px;"><a data-<img alt="WarGames-WOPR-maxresdefault" height="180" src="" width="320"></img></a></div> <p>In 1983, Matthew Broderick plays David, a young high school student who hacks into the U.S. Military's War Operation Plan Response (WOPR) computer. The WOPR was designed to run various strategic games, including war game simulations, learning as it goes. David decides to initiate the game "Global Thermonuclear War", and the military responds as if the threats were real.</p> <p>Can the computer learn that the only way to win a war is not to wage it in the first place? And if a computer can learn this, can our human leaders learn this too?</p> </dd> <dt><b>Terminator</b></dt> <dd> <div style="float:right;padding:20px;"><a data-<img alt="Terminator-576417" height="143" src="" width="240"></img></a></div> <p>In this series of movies, a franchise spanning from 1984 to 2009, the US Military builds a defense grid computer called [<a href="">Skynet</a>]. 
After cognitive learning at an alarming rate, Skynet becomes self-aware, and decides to launch missiles, starting a nuclear war that kills over 3 billion people.</p> <p>Arnold Schwarzenegger plays the Terminator model T-800, a cognitive solution in human form designed by Skynet to finish the job and kill the remainder of humanity.</p> </dd> <dt><b>I, Robot</b></dt> <dd> <div style="float:left;padding:20px;"><a><img alt="VIKI-I-Robot-Char_25547" height="240" src="" width="210"></img></a></div> <p>In this 2004 movie, Will Smith plays Del Spooner, a technophobic cop who investigates a crime committed by a cognitive robot.</p> <blockquote> <p>(Many people associate the title with author Isaac Asimov. A short story called "I, Robot" written by Earl and Otto Binder was published in the January 1939 issue of 'Amazing Stories', well before the unrelated and more well-known book 'I, Robot' (1950), a collection of short stories by Asimov.</p> <p>Asimov admitted to being heavily influenced by the Binder short story. The title of Asimov's collection was changed to "I, Robot" by the publisher, against Asimov's wishes. Source: IMDB)</p> </blockquote> <p>Del Spooner uncovers a bigger threat to humanity: not just a single malfunctioning robot, but rather the Virtual Interactive Kinetic Intelligence, or simply VIKI for short, a cognitive solution that controls all robots. VIKI interprets Asimov's three laws in a manner not originally intended.</p> </dd> <dt><b>Ex Machina</b></dt> <dd> <div style="float:right;padding:20px;"><a><img alt="Ex-Machina-Ava-4422476-8002696915-ex-ma" height="240" src="" width="154"></img></a></div> <p>In this 2015 movie, Domhnall Gleeson plays Caleb, a 26-year-old programmer at the world's largest internet company. Caleb wins a competition to spend a week at a private mountain retreat. 
However, when Caleb arrives he discovers that he must interact with Ava, the world's first true artificial intelligence, a beautiful robot played by Alicia Vikander.</p> <blockquote>(The title derives from the Latin phrase "Deus Ex-Machina," meaning "a god from the Machine," a phrase that originated in Greek tragedies. Sources: IMDB)</blockquote> <p>Nathan, the reclusive CEO of this company, relishes this opportunity to have Caleb participate in this experiment, explaining how Artificial Intelligence (AI) will transform the world.</p> <blockquote>(The three main characters all have appropriate biblical names. Ava is a form of Eve, the first woman; Nathan was a prophet in the court of David; and Caleb was a spy sent by Moses to evaluate the Promised Land. Source: IMDB)</blockquote> <p>The premise is based in part on the famous [<a href="">Turing Test</a>], developed by Alan Turing. This is designed to test a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.</p> </dd> </dl> <p dir="ltr">Movies that depict the bad guys as a particular nationality, ethnicity or religion may be offensive to some movie audiences. Instead, having dinosaurs, monsters, aliens or robots provides a villain that all people can fear equally. This helps movie makers reach a more global audience!</p> <p dir="ltr">Of course, if robots, androids and other forms of Artificial Intelligence did exactly what humans expect them to, we would not have the tense, thrilling action movies to watch on the big screen.</p> <p dir="ltr">This is not a complete list of movies. 
Enter in the comments below your favorite movie that features Artificial Intelligence and why it is your favorite!</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Watson</a>, <a href="" rel="tag">Jeopardy</a>, <a href="" rel="tag">Ken Jennings</a>, <a href="" rel="tag">Brad Rutter</a>, <a href="" rel="tag">computer overlords</a>, <a href="" rel="tag">cognitive solutions</a>, <a href="" rel="tag">Isaac Asimov</a>, <a href="" rel="tag">three laws of robots</a>, <a href="" rel="tag">Artificial Intelligence</a>, <a href="" rel="tag">Stanley Kubrick</a>, <a href="" rel="tag">Arthur C. Clarke</a>, <a href="" rel="tag">HAL 9000</a>, <a href="" rel="tag">Space Odyssey</a>, <a href="" rel="tag">Westworld</a>, <a href="" rel="tag">Michael Crichton</a>, <a href="" rel="tag">Yul Brenner</a>, <a href="" rel="tag">Jurassic Park</a>, <a href="" rel="tag">HBO</a>, <a href="" rel="tag">Blade Runner</a>, <a href="" rel="tag">Ridley Scott</a>, <a href="" rel="tag">Harrison Ford</a>, <a href="" rel="tag">WarGames</a>, <a href="" rel="tag">Matthew Broderick</a>, <a href="" rel="tag">WOPR</a>, <a href="" rel="tag">Terminator</a>, <a href="" rel="tag">Skynet</a>, <a href="" rel="tag">Arnold Schwarzenegger</a>, <a href="" rel="tag">I, Robot</a>, <a href="" rel="tag">Will Smith</a>, <a href="" rel="tag">VIKI</a>, <a href="" rel="tag">Ex Machina</a>, <a href="" rel="tag">Domhnall Gleeson</a>, <a href="" rel="tag">Alicia Vikander</a>, <a href="" rel="tag">Turing Test</a>, <a href="" rel="tag">Alan Turing</a></p>... 
Pascal Compilers, Voltage Thresholds and Vending Machines (TonyPearson, 2017-01-19) <p dir="ltr">Last week, fellow IBM blogger Barry Whyte pointed out that my recent post on [<a href="">Cognitive University for Watson Systems SmartSeller</a>] was my 1,000th blog post. After 10 years of blogging, I have reached the 1,000 mark!</p> <blockquote dir="ltr">(As IBM is focused on its transformation from a "Systems, Software and Services" company to a "Cognitive Solutions and Cloud Platform" company, it seems appropriate to highlight my 1,000th blog post on the concept of cognitive solutions.)</blockquote> <p dir="ltr">A lot of people ask me to explain what exactly IBM means by "cognitive", which is a fair question. Let's start with the [<a href="">Dictionary definition</a>]:</p> <dl dir="ltr"> <dt><b><i>cognitive</i></b></dt> <dd> <ol> <li>of or relating to cognition; concerned with the act or process of knowing, perceiving, etc.</li> <li>of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes.</li> </ol> </dd> </dl> <p dir="ltr">What exactly does IBM mean by Cognitive? IBM has taken this definition, and focused on four key strategic areas:</p> <dl dir="ltr"> <dt><b>Understanding</b></dt> <dd> <p>In the summer of 1981, I debugged a "Pascal" compiler at the University of Texas at Austin. I wasn't told that was what I was doing. Rather, I was tasked with writing sample Pascal programs that would demonstrate the features and capabilities of the language.</p> <p>Every day, I would come up with a concept of a program, punch up the cards, run it through the CDC hopper, and verify that it would work properly. 
If I didn't have it working by lunch, I would take it to the "help desk"; they would look it over and tell me how to fix it after I got back.</p> <p>Most of the time, it was a mistake in my software. A few times, however, it was a flaw in the compiler itself. My programs were basically test cases, and the Pascal Compiler development team was fixing or enhancing the compiler code every time I had a problem.</p> <p>Compilers basically work by parsing the program text, looking for fixed keywords that must appear in a specifically prescribed order to make sense. Other keywords may represent data types, variables, constants or pre-defined macros.</p> <p>But compilers are not cognitive. Cognitive solutions can understand natural language, and have to handle all the ambiguity of words not being in the correct order, or different words having different meanings.</p> </dd> <dt><b>Reason</b></dt> <dd> <p>As an Electrical Engineer, I had to take many classes on classical analog signal processing. In fact, all computers have some amount of analog components, where threshold processing is used to differentiate a zero (0) from a one (1).</p> <p>For example, if a "zero" value was represented by 1 volt, and a "one" value by 5 volts, then you can set a threshold at 3 volts. Any voltage less than 3 would be considered a "zero" value, and anything 3 volts or greater a "one" value.</p> <p>But threshold processing is not cognitive. Cognitive solutions also use thresholds, but their thresholds are dynamically determined, through advanced analytics and statistical mathematical models, and may adjust up and down as needed, based on machine learning over time.</p> </dd> <dt><b>Learning</b></dt> <dd> <p>IBM Research is proud to have developed the world's most advanced caching algorithms for its storage systems. Cache memory is very fast, but also very expensive, so it is offered in limited quantities. 
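The fixed 3-volt threshold described under "Reason" above can be sketched in a few lines (the voltage levels come from the example in the post; the function name is my own):

```python
# Fixed-threshold detector: a "zero" is nominally 1 volt, a "one" is
# 5 volts, and anything at or above the 3-volt threshold reads as 1.
THRESHOLD_VOLTS = 3.0

def read_bit(voltage: float) -> int:
    """Classify an analog voltage as a digital 0 or 1."""
    return 1 if voltage >= THRESHOLD_VOLTS else 0

# Noisy readings around the nominal 1 V and 5 V levels.
readings = [0.9, 1.3, 4.7, 5.2, 2.9, 3.1]
bits = [read_bit(v) for v in readings]
print(bits)  # [0, 0, 1, 1, 0, 1]
```

A cognitive system, by contrast, would treat `THRESHOLD_VOLTS` as a learned parameter that shifts with observed data rather than a hard-coded constant.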
Caching algorithms decide which blocks of data should remain in cache, and which should be kicked out.</p> <p>Ideally, a block in read cache would be kicked out precisely after the last time it was read, with little or no expectation of being read again anytime soon. Likewise, a block in write cache would be destaged to persistent storage precisely after the last time it was updated, with little or no expectation of being updated again anytime soon.</p> <p>The traditional approach is "Least Recently Used" or [<a href="">LRU</a>]. Cache entries that were read or updated recently would be placed at the top of the list, and the least recently referenced would be at the bottom of the list. When space is needed in cache, the entries at the bottom of the list would be kicked out.</p> <p>IBM's [<a href="">Adaptive Replacement Cache (ARC) algorithm outperforms LRU</a>]. For example, on a workstation disk drive workload, at 16MB cache, LRU delivers a hit ratio of 4.24 percent while ARC achieves a hit ratio of 23.82 percent; and, for a SPC1 benchmark, at 4GB cache, LRU delivers a hit ratio of 9.19 percent while ARC achieves a hit ratio of 20 percent.</p> <p>But caching algorithms, including IBM's ARC, are not cognitive. These algorithms respond programmatically based on the current state of the cache. Cognitive solutions learn, and improve with usage. This is often referred to as "Machine Learning".</p> </dd> <dt><b>Interaction</b></dt> <dd> <p>The human-computer interface (HCI) has much room for improvement in a variety of areas.</p> <p>Take for example a snack vending machine. In college, we had assignments to simulate the computing logic of these. We had to interact with the buyer, receive coins entered into the slot--nickels, dimes and quarters representing 5, 10 and 25 cents--determine a total monetary balance, and then dispense snacks of various prices and return an appropriate amount of change, if any. 
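The change-returning step just described is the classic coin-change problem, and with only nickels, dimes and quarters a greedy strategy works. A minimal sketch (the function name and cent-based units are my own assumptions, not from the original assignment):

```python
# Greedy change-making: always dispense the largest coin that fits.
DENOMINATIONS = [25, 10, 5]  # quarters, dimes, nickels, in cents

def make_change(amount_cents: int) -> dict:
    """Return {denomination: count} for the change owed."""
    change = {}
    for coin in DENOMINATIONS:
        count, amount_cents = divmod(amount_cents, coin)
        if count:
            change[coin] = count
    if amount_cents:
        raise ValueError("cannot make exact change with these coins")
    return change

# A buyer pays 100 cents for a 60-cent snack: 40 cents change.
print(make_change(40))  # {25: 1, 10: 1, 5: 1}
```

Greedy happens to be optimal for this canonical coin system; for arbitrary denominations a dynamic-programming approach would be needed.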
There is even a [<a href="">greedy algorithm</a>] designed to optimize how the change is returned.</p> <p>But vending machines are not cognitive. Like the caching algorithms, vending machines interact based on fixed programmatic logic, treating all buyers in the same manner. Cognitive solutions can interact with different users in different ways, customized to their needs, and these interactions can improve over time, based on machine learning.</p> </dd> </dl> <p dir="ltr">IBM is exploring the use of Cognitive Solutions in a variety of different industries, from Healthcare to Retail, Financial Services to Manufacturing, and more.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Barry Whyte</a>, <a href="" rel="tag">Cognitive+University</a>, <a href="" rel="tag">cognitive computing</a>, <a href="" rel="tag">Pascal Compiler</a>, <a href="" rel="tag">CDC</a>, <a href="" rel="tag">Electrical Engineer</a>, <a href="" rel="tag">LRU</a>, <a href="" rel="tag">caching algorithm</a>, <a href="" rel="tag">Adaptive Cache</a>, <a href="" rel="tag">human-computer interface</a>, <a href="" rel="tag">HCI</a></p> IBM introduces new HPFE-Gen2 and All-Flash Array DS8880F models (TonyPearson, 2017-01-12) <p dir="ltr">Well, it's Tuesday again, and you know what that means? IBM Announcements!</p> <blockquote dir="ltr">(Yes, OK, it's actually Thursday. 
I wrote this post weeks ago, but was embargoed until Jan 10, and then was asked to wait until Jan 12 so that the IBM Marketing team could translate my text into 15 different languages.)</blockquote> <p dir="ltr">This week, the IBM DS8000 team announces a new High Performance Flash Enclosure (HPFE-Gen2) and a series of All-Flash Array DS8880F models that exploit this new technology.</p> <dl dir="ltr"> <dt><b>New High Performance Flash Enclosure (HPFE-Gen2)</b></dt> <dd> <div style="float:left;padding:20px;"><a data-<img alt="DS8000-HPFE" height="76" src="" width="240"></img></a></div> <p>The original HPFE was 1U high with 16 or 30 flash cards, and could support RAID-5 or RAID-10. Most used RAID-5, resulting in four array sites of 6+P each, leaving two cards for spare. These 1.8-inch cards were only 400 or 800 GB in size, so the maximum raw capacity was only 24TB per 1U enclosure.</p> <div style="float:right;padding:20px;"><a data-<img alt="Storwize-V7000F" height="68" src="" width="240"></img></a></div> <p>The new HPFE-Gen2 enclosure is a complete re-design, consisting of two Microbays and two TeraPacks. The I/O Bays attach to the Microbays via PCIe Gen3. The Microbays in turn attach to both TeraPacks via redundant 6 Gb or 12 Gb SAS.</p> <p>Each TeraPack holds 24 flash cards each. Since the TeraPacks come in pairs, you can install 16, 32 or 48 flash cards per enclosure. Each 16-card set represents two array sites, for a maximum of six array sites per HPFE-Gen2.</p> <ul> <li>RAID-5 for 400/800 GB. Two 6+P arrays, four 7+P arrays, and two spares.</li> <li>RAID-6 for 400/800/1600/3200 GB. Two 5+P+Q arrays, four 6+P+Q arrays, and two spares.</li> <li>RAID-10 for 400/800/1600/3200 GB. 
Two 3+3 arrays, four 4+4 arrays, and four spares.</li> </ul> <blockquote>(Technically, these new "Flash cards" are 2.5-inch Solid State Drives (SSD) placed into the HPFE Gen2 connected to the PCIe Gen3 interface, with 50 percent additional capacity to tolerate up to 10 drive-writes-per-day (DWPD). IBM will continue to call them "Flash Cards" for naming consistency between the two generations of HPFE.)</blockquote> <p>The new HPFE-Gen2 enclosures are substantially faster, offering up to 90 percent more IOPS, and up to 268 percent more throughput (GB/sec). The Microbays use a new flash-optimized ASIC to perform the RAID calculations.</p> </dd> <dt><b>New All-Flash Array DS8880F models</b></dt> <dd> <p>IBM introduces the DS8884F, DS8886F and DS8888F that are based entirely on the HPFE-Gen2 enclosures described above.</p> <table border="2" width="99%"> <tbody> <tr bgcolor="#9999FF"> <th>Business Class</th> <th>Enterprise Class</th> <th>Analytic Class</th> </tr> <tr> <td bgcolor="#33FF99">DS8884<br> Hybrid - HDD/SSD/HPFE mix</td> <td bgcolor="#33FF99">DS8886<br> Hybrid - HDD/SSD/HPFE mix</td> <td bgcolor="#FF99CC">DS8888<br> AFA - HPFE only</td> </tr> <tr bgcolor="#FF99CC"> <td>DS8884F<br> AFA - HPFE-Gen2 only</td> <td>DS8886F<br> AFA - HPFE-Gen2 only</td> <td>DS8888F<br> AFA - HPFE-Gen2 only</td> </tr> </tbody> </table> <br> </dd> <dt><b>New zHyperLink connection</b></dt> <dd>Also, as a "Statement of Direction", IBM intends to deliver field-upgradable support for zHyperLink on existing IBM System Storage DS8880 machines for connection to z Systems servers. zHyperLink is a short-distance, mainframe-attach link designed for lower latency than High Performance FICON. <p> </p> <p>Typical latency with FICON/zHPF is around 140-170 microseconds, and this new zHyperLink is estimated to reduce this down to 20-30 microseconds, but is limited to a 150-meter fiber optic cable distance. 
zHyperLink is intended to speed up DB2® for z/OS® transaction processing and improve active log throughput.</p> </dd> </dl> <p dir="ltr">To learn more, read the Announcement Letters for [<a href="">IBM DS8880 all-flash family meets the demand for high-speed storage</a>] and [<a href="">IBM DS8880 All-Flash family Function Authorizations</a>].</p> <p dir="ltr">The new HPFE-Gen2 architecture design, and the All-Flash Array models based on it, positions IBM well into the future for newer 2.5-inch SSD capacities and technologies as they become available.</p> <p dir="ltr"><img src=""></img><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">DS8000</a>, <a href="" rel="tag">All-Flash Array</a>, <a href="" rel="tag">HPFE</a>, <a href="" rel="tag">HPFE-Gen2</a>, <a href="" rel="tag">DS8884F</a>, <a href="" rel="tag">DS8886F</a>, <a href="" rel="tag">DS8888F</a>, <a href="" rel="tag">zHyperLink</a></p> Cognitive University for Watson Systems SmartSeller (TonyPearson, 2016-12-23) <p dir="ltr">Last month, I had the pleasure of helping train Watson in its latest mission: to help answer questions from sellers, and not just the IBM feet on the street, but IBM distributors and IBM Business Partners as well.</p> <p dir="ltr">In their post [<a href="">Workers Spend Too Much Time Searching for Information</a>], Cottrill Research explains the problem all too well. Here is an excerpt:</p> <div dir="ltr" style="color:#993300;"> <blockquote>"... 
[<a href="">survey by SearchYourCloud</a>] revealed 'workers took up to 8 searches to find the right document and information.' Here are a few other statistics that help tell the tale of information overload and wasted time spent searching for correct information -- either external or internal: <ul> <li>: [<a href="">Time Searching for Information</a>]<br> </li> <li>'19.8 percent of business time -- the equivalent of one day per working week -- is wasted by employees searching for information to do their job effectively,' according to Interact. Source: [<a href="">A Fifth of Business Time is Wasted</a>]<br> </li> <li>IDC data shows that 'the knowledge worker spends about 2.5 hours per day, or roughly 30 percent of the workday, searching for information ... 60 percent [of company executives] felt that time constraints and lack of understanding of how to find information were preventing their employees from finding the information they needed.' Source: [<a href="">Information: The Lifeblood of the Enterprise</a>]."</li> </ul> </blockquote> </div> <p dir="ltr">In the early days of the Internet, before search engines like Google or Bing, I competed in [<a href="">Internet Scavenger Hunts</a>]. A dozen or more contestants would be in a room, and would be given a list of 20 questions to find answers for. Each of us would then hunt down answers on the Internet. The person to find the most documented answers before time runs out wins. It was quite the challenge!</p> <p dir="ltr">Over the years, I have honed my skills as a [<a href="">Search Ninja</a>]. With over 30 years of experience in IBM Storage, many sellers come to me for answers. Sometimes sellers are just too lazy to look for the answers themselves, too busy trying to meet client deadlines, or too green to know where to look.</p> <p dir="ltr">A good portion of my 60-hour week is spent helping sellers find the answers they are looking for. 
Sometimes I dig into the [<a href="">SSIC</a>], product data sheets, or various IBM Redbooks.</p> <p dir="ltr">Other times, I would confer with experts, engineers and architects in particular development teams. Often, I learn something new myself. In a few cases, I have turned some questions into ideas for blog posts!</p> <p dir="ltr">It was no surprise when I was asked to help train Watson for the new "Systems SmartSeller" tool. This will be a tool that runs on smartphones or desktops to help answer questions that sellers might need to respond to RFP or other client queries.</p> <p dir="ltr">The premise was simple. Treat Watson as a student at "Cognitive University" taking classes from dozens of IBM professors, in a series of semesters, or "phases".</p> <p dir="ltr">Phase I involved building the "Corpus", the set of documents related to z Systems, POWER systems, Storage and SDI solutions; and a "Grading Tool" that would be used as the Graphical User Interface. I was not involved in phase I.</p> <p dir="ltr">Phase II was where I came in. Hundreds of questions are categorized by product area. I worked on 500 questions for storage. For each question, Watson had up to eleven different responses, typically a paragraph from the Corpus. My job as a professor was to grade the responses to some 500 storage questions:</p> <table border="2" dir="ltr" width="99%"> <tbody> <tr> <th width="30%">Rating</th> <th>Meaning</th> </tr> <tr> <td>★ (one star)</td> <td>Irrelevant, answer not even storage-related</td> </tr> <tr> <td>★★ (two stars)</td> <td>Relevant, at least it is storage-related, but does not answer the question, or answers it poorly</td> </tr> <tr> <td>★★★ (three stars)</td> <td>Relevant, adequately answers the question</td> </tr> <tr> <td>★★★★ (four stars)</td> <td>Relevant, answers the question well</td> </tr> </tbody> </table> <p dir="ltr">Most of the answers were either 1-star (not storage related) or 2-star (mentioned storage, but poor response). 
I would search through the existing Corpus looking for a better answer, and at best found only 3-star responses, which I would add to the list and grade accordingly.</p> <p>I then searched the Internet for better answers. Once I found a good match, I would type up a 4-star response, add it to the list, and point it to the appropriate resources on the Web.</p> <p>Other professors, who were also looking at these questions, would then get to grade my suggested responses as well. Watson would learn based on the consensus of how appropriate and accurate each response was graded.</p> <p>I don't know where the Cognitive University team got some of the questions, but they were quite representative of the ones I get every week. In some cases, the seller didn't understand the question he heard from the client, making it difficult for me to figure out what they were actually asking for.</p> <blockquote dir="ltr">It reminds me of that parlor game [<a href="">"Telephone" or "Chinese Whispers"</a>], in which one person whispers a message into the ear of the next person through a line of people until the last player announces the message to the entire group. I have actually played this at an IBM event in China!</blockquote> <p dir="ltr">Watson needs to parse the question into nouns and verbs, and use Natural Language Processing (NLP) to search the Corpus for an appropriate answer. I determined three challenges for Watson in this case:</p> <ul dir="ltr"> <li>The questions are not always fully formed sentences. For example, "Object storage?" Is this asking what object storage is in general, or rather what IBM offers in this area?<br> </li> <li>The questions often do not spell the names of products correctly, or use informal abbreviations. "Can Store-wise V7 do RtC?" 
is a typical example, short for "Can the IBM Storwize V7000 storage controller perform Real-time Compression?"<br> </li> <li>The questions ask what is planned in the future. "When will IBM offer feature x in product y?" I am sorry, but Watson is not [<a href="">Zoltar, the fortune teller</a>]!</li> </ul> <p dir="ltr">I managed to grade the responses in the two weeks we were given. Part of my frustration was that the grading tool itself was a bit buggy, and I spent some time trying to track down some of its flaws.</p> <p dir="ltr">The next phase is in late January and February. This will give the Cognitive University team a chance to update the Corpus, improve the grading interface, and find more professors and a different set of questions. I volunteered the most recent four years' worth of my blog posts to be added to the Corpus.</p> <p dir="ltr">Maybe this tool will help me turn my 60-hour week back to the 40-hour week it should be!</p> <p dir="ltr"><b>technorati tags:</b> <a href="" rel="tag">IBM</a>, <a href="" rel="tag">Watson</a>, <a href="" rel="tag">Cottrill Research</a>, <a href="" rel="tag">SearchYourCloud</a>, <a href="" rel="tag">McKinsey</a>, <a href="" rel="tag">IDC</a>, <a href="" rel="tag">Google</a>, <a href="" rel="tag">Bing</a>, <a href="" rel="tag">Search Ninja</a>, <a href="" rel="tag">Internet Scavenger Hunts</a>, <a href="" rel="tag">SSIC</a>, <a href="" rel="tag">Telephone Game</a>, <a href="" rel="tag">Chinese Whispers</a>, <a href="" rel="tag">NLP</a>, <a href="" rel="tag">RFP</a>, <a href="" rel="tag">Storwize</a>, <a href="" rel="tag">RtC</a>, <a href="" rel="tag">Zoltar</a>, <a href="" rel="tag">Cognitive University</a></p>
The point is that Python doesn't have a notion of abstract methods. Abstract methods are part of a base class that defines an interface, without any code. Abstract methods can't be called directly, because they don't contain any code in their definition.
In the definition of the base class, you may want to include a method that is part of the interface, even though its implementation is still unknown. A popular example seems to be the drawing of a point or a line in a graphical application.
The classes Point and Line share several implementation details, but differ in others. In particular, the way they are drawn is completely different (you will want to optimize the drawing of a line). Suppose these two classes are derived from the same class, Object. It is possible to separate the implementations of the method draw in these two classes, while draw can still be called through the base class Object.
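For readers on a modern Python, the standard library's abc module now provides this natively. The following is my own sketch of the Point/Line example described above, not code from the recipe; the class name Shape stands in for the recipe's base class Object (which shadows a builtin name in newer Pythons):

```python
# A sketch of the Point/Line example using Python's built-in abc module,
# a modern alternative to this recipe's metaclass approach.
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def draw(self):
        """Subclasses must provide their own drawing logic."""

class Point(Shape):
    def draw(self):
        return "drawing a point"

class Line(Shape):
    def draw(self):
        return "drawing an optimized line"

def render(shapes):
    # draw() is called through the abstract base class interface.
    return [shape.draw() for shape in shapes]

print(render([Point(), Line()]))
try:
    Shape()  # instantiating the abstract base class fails
except TypeError as exc:
    print("TypeError:", exc)
```

As with the recipe's metaclass, the guarantee is enforced at instantiation time: a class with any unimplemented abstract method cannot be created.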
Discussion
A bit more explanation about why some things had to be done is in the article I originally wrote to introduce this code:
I don't understand. I'm a Python newbie; excuse me if I don't get it right, but I thought Python was about simplicity ...
What's wrong with:
def abstract_method(self):
    raise NotImplementedError
Complexity. For one thing: clarity. My version gives the exact class and method names where the abstract method was defined.
Second: certainty. With that metaclass, it is guaranteed that you will never accidentally instantiate a class with one or more abstract methods.
Constructor of subclasses requires 2 arguments. I have created an abstract class which (besides self) has one argument on the __init__ method.
Later, I've created a subclass of that abstract one, which added one more parameter to the __init__.
When creating an instance of the latter, the error below pops up:
Bug?
Am I restricted to __init__(self) only (no other parameters)?
Nice, while it is not amazingly clean it is great if you don't want somebody to instantiate an abstract class...This is probably the simplest recipe for creating an abstract class I have seen so far...can't wait for python 3000 when interfaces and abstract stuff is supported natively!!!
By the way if you want to have parameters in the __init__ of sub classes you just need to change line 80 for new to
Also, I noticed that you really don't need to pass the name of the function to instantiate AbstractMethod...maybe it was needed in previous versions of python but 2.5 works without it. I changed the __init__ for AbstractMethod to have func=None. That way in my abstract class I just do
that way it is not as redundant with passing the name again to the AbstractMethod. It still spits out the name when it says that I can't instantiate a class with abstract methods. | http://code.activestate.com/recipes/266468/ | crawl-002 | refinedweb | 484 | 63.7 |
On 1/16/07, Ringo Kamens <address@hidden> wrote:
Where is the half-forum located? I would love to help out.
The internationalization is half implemented. I've given it a namespace, but have not done work towards auto-changing the pwiki text. I have done no work on forums. Brian
On 1/16/07, Brian Brazil <address@hidden> wrote: > > On 1/6/07, Brian Brazil <address@hidden> wrote: > > It seems that once again the topic of forums has come up. As it stands > > the only official forum is this mailing list, if people want an actual > > forum I suggest investigating what PmWiki has to offer and I'll > > implement whichever meets some basic requirements (same auth system, > > uses RecentChanges). > > I see there's been no interest in this. > > > On the matter of other languages, PmWiki can also provide. It seems > > the pmwiki website uses group suffixes and pulls in template > > translations based on that. A similar system wouldn't be too hard to > > implement if there was interest, although as-is there is nothing to > > stop people putting translations on the wiki. > > I've half-implemented this, but there's been no up-take. > > Brian > > > _______________________________________________ > gNewSense-users mailing list > address@hidden > > | https://lists.gnu.org/archive/html/gnewsense-users/2007-01/msg00094.html | CC-MAIN-2022-40 | refinedweb | 202 | 65.83 |
Because you already know the scoop on SAX, Java, and the Xerces SAX parser for Java, let's go ahead and jump right into the program code. Here are the first 12 lines of Java code:
import org.xml.sax.Attributes;
import org.xml.sax.ContentHandler;
import org.xml.sax.ErrorHandler;
import org.xml.sax.Locator;
import org.xml.sax.SAXParseException;
import org.xml.sax.XMLReader;

public class DocumentPrinter implements ContentHandler, ErrorHandler {
    // A constant containing the name of the SAX parser to use.
    private static final String PARSER_NAME = "org.apache.xerces.parsers.SAXParser";
This code imports classes that will be used later on and declares the class (program) that you're currently writing. The
import statements indicate which classes will be used by this program. In this case, all of the classes that will be used are from the
org.xml.sax package and are included in the
xercesImpl.jar and
xml-apis.jar archives.
This class, called
DocumentPrinter, implements two interfaces
ContentHandler and
ErrorHandler. These two interfaces are part of the standard SAX 2.0 package and are included in the
import list. A program that implements
ContentHandler is set up to handle events passed back in the normal course of parsing an XML document, and a program that implements
ErrorHandler can handle any error events generated during SAX parsing.
In the Java world, an interface is a framework that specifies a list of methods that must be defined in a class. An interface is useful because it guarantees that any class that implements it meets the requirements of that interface. If you fail to include all of the methods required by the interface, your program will not compile. Because this program implements
ContentHandler and
ErrorHandler, the parser can be certain that it is capable of handling all of the events it triggers as it parses a document.
After the class has been declared, a single member variable is created for the class,
PARSER_NAME. This variable is a constant that contains the name of the class that you're going to use as the SAX parser. As you learned earlier, there is any number of SAX parsers available. The Xerces parser just so happens to be one of the better Java SAX parsers out there, which explains the parser name of
org.apache.xerces.parsers.SAXParser.
Although SAX is certainly a popular Java-based XML parser given its relatively long history, it has some serious competition from Sun, the makers of Java. The latest version of Java (J2SE 5.0) now includes an XML API called JAXP that serves as a built-in XML parser for Java. To learn more about JAXP, visit.
The
main() Method
Every command-line Java application begins its life with the
main() method. In the Java world, the
main method indicates that a class is a standalone program, as opposed to one that just provides functionality used by other classes. Perhaps more importantly, it's the method that gets run when you start the program. The purpose of this method is to set up the parser and get the name of the document to be parsed from the arguments passed in to the program. Here's the code:
public static void main(String[] args) {
    if (args.length == 0) {
        System.out.println("No XML document path specified.");
        System.exit(1);
    }
    DocumentPrinter dp = new DocumentPrinter();
    XMLReader parser;
    try {
        parser = (XMLReader)Class.forName(PARSER_NAME).newInstance();
        parser.setContentHandler(dp);
        parser.setErrorHandler(dp);
        parser.parse(args[0]);
    }
    // Normally it's a bad idea to catch generic exceptions like this.
    catch (Exception ex) {
        System.out.println(ex.getMessage());
        ex.printStackTrace();
    }
}
This program expects that the user will specify the path to an XML document as its only command-line argument. If no such argument is submitted, the program will exit and instruct the user to supply that argument when running the program.
Next, the program creates an instance of the
DocumentPrinter object and assigns it to the variable
dp. You'll need this object later when you tell the parser which
ContentHandler and
ErrorHandler to use. After instantiating
dp, a
try...catch block is opened to house the parsing code. This is necessary because some of the methods called to carry out the parsing can throw exceptions that must be caught within the program. All of the real work in the program takes place inside the
try block.
The
try...catch block is the standard way in which Java handles errors that crop up during the execution of a program. It enables the program to compensate for and work around those errors if the programmer chooses to do so. In this case, you simply print out information about the error and allow the program to exit gracefully.
Within the
try...catch block, the first order of business is creating a parser object. This object is actually an instance of the class named in the variable
PARSER_NAME. The fact that you're using it through the
XMLReader interface means that you can call only those methods included in that interface. For this application, that's fine. The class specified in the
PARSER_NAME variable is then loaded and assigned to the variable
parser. Because SAX 2.0 parsers must implement
XMLReader, you can refer to the interface as an object of that type rather than referring to the class by its own name
SAXParser.
After the parser has been created, you can start setting its properties. Before actually parsing the document, however, you have to specify the content and error handlers that the parser will use. Because the
DocumentPrinter class can play both of those roles, you simply set both of those properties to
dp (the
DocumentPrinter object you just created). At this point, all you have to do is call the
parse() method on the URI passed in on the command line, which is exactly what the code does.
Implementing the
ContentHandler Interface
The skeleton for the program is now in place. The rest of the program consists of methods that fulfill the requirements of the
ContentHandler and
ErrorHandler interfaces. More specifically, these methods respond to events that are triggered during the parsing of an XML document. In this program, the methods just print out the content that they receive.
The first of these methods is the
characters() method, which is called whenever content is parsed in a document. Following is the code for this method:
public void characters(char[] ch, int start, int length) {
    String chars = "";
    for (int i = start; i < start + length; i++)
        chars = chars + ch[i];
    if ((chars.trim()).length() > 0)
        System.out.println("Received characters: " + chars);
}
The
characters() method receives content found within elements. It accepts three arguments: an array of characters, the position in the array where the content starts, and the amount of content received. In this method, a
for loop is used to extract the content from the array, starting at the position in the array where the content starts, and iterating over each element until the position of the last element is reached. When all of the characters are gathered, the code checks to make sure they aren't just empty spaces, and then prints the results if not.
It's important not to just process all of the characters in the array of characters passed in unless that truly is your intent. The array can contain lots of padding on both sides of the relevant content, and including it all will result in a lot of extra characters along with the content that you actually want. On the other hand, if you know that the code contains parsed character data (PCDATA) that you want to read verbatim, then by all means process all of the characters.
The next two methods,
startDocument() and
endDocument(), are called when the beginning and end of the document are encountered, respectively. They accept no arguments and are called only once each during document parsing, for obvious reasons. Here's the code for these methods:
public void startDocument() {
    System.out.println("Start document.");
}

public void endDocument() {
    System.out.println("End of document reached.");
}
Next let's look at the
startElement() and
endElement() methods, which accept the most complex set of arguments of any of the methods that make up a
ContentHandler:
public void startElement(String namespaceURI, String localName,
        String qName, Attributes atts) {
    System.out.println("Start element: " + localName);
}

public void endElement(String namespaceURI, String localName, String qName) {
    System.out.println("End of element: " + localName);
}
The
startElement() method accepts four arguments from the parser. The first is the namespace URI, which you'll see elsewhere as well. The namespace URI is the URI for the namespace associated with the element. If a namespace is used in the document, the URI for the namespace is provided in a namespace declaration. The local name is the name of the element without the namespace prefix. The qualified name is the name of the element including the namespace prefix if there is one. Finally, the attributes are provided as an instance of the
Attributes object. The
endElement() method accepts the same first three arguments but not the final attributes argument.
SAX parsers must have namespace processing turned on in order to populate all of these attributes. If that option is deactivated, any of the arguments (other than the attributes) may be populated with empty strings. The method for turning on namespace processing varies depending on which parser you use.
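In SAX2, namespace processing is controlled through the standard feature URI "http://xml.org/sax/features/namespaces" (in Java you would pass it to the parser's setFeature method). As a runnable illustration, here is a sketch using Python's xml.sax, which implements the same SAX2 interfaces; the element names and namespace URI are my own invention, not from the tutorial's documents:

```python
# With the standard SAX2 namespaces feature turned on, element names are
# reported split into (namespace URI, local name) pairs; Python's xml.sax
# delivers them through startElementNS instead of startElement.
import io
import xml.sax
import xml.sax.handler

class NameCollector(xml.sax.handler.ContentHandler):
    def __init__(self):
        super().__init__()
        self.seen = []

    # Called only when namespace processing is ON.
    def startElementNS(self, name, qname, attrs):
        self.seen.append(name)  # name is a (uri, localname) tuple

doc = b'<p:projects xmlns:p="http://example.com/condos"><p:proj/></p:projects>'
parser = xml.sax.make_parser()
parser.setFeature(xml.sax.handler.feature_namespaces, True)
handler = NameCollector()
parser.setContentHandler(handler)
parser.parse(io.BytesIO(doc))
print(handler.seen)
```

With the feature left off, the same parser would instead call the plain startElement callback with the prefixed name, which mirrors the empty-string behavior the note above describes.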
Let's look at attribute processing specifically. Attributes are supplied to the
startElement() method as an instance of the
Attributes object. In the sample code, you use three methods of the
Attributes object:
getLength(), getLocalName(), and
getValue(). The
getLength() method is used to iterate over the attributes supplied to the method call, while
getLocalName() and
getValue() accept the index of the attribute being retrieved as arguments. The code retrieves each attribute and prints out its name and value. In case you're curious, the full list of methods for the
Attributes object appears in Table 17.1.
Table 17.1. Methods of the
Attributes Object
Getting back to the
endElement() method, its operation is basically the same as that of
startElement() except that it doesn't accept the attributes of the element as an argument.
The next two methods,
startPrefixMapping() and
endPrefixMapping(), have to do with prefix mappings for namespaces:
public void startPrefixMapping(String prefix, String uri) {
    System.out.println("Prefix mapping: " + prefix);
    System.out.println("URI: " + uri);
}

public void endPrefixMapping(String prefix) {
    System.out.println("End of prefix mapping: " + prefix);
}
These methods are used to report the beginning and end of namespace prefix mappings when they are encountered in a document.
The next method,
ignorableWhitespace(), is similar to
characters(), except that it returns whitespace from element content that can be ignored.
public void ignorableWhitespace(char[] ch, int start, int length) {
    System.out.println("Received whitespace.");
}
Next on the method agenda is
processingInstruction(), which reports processing instructions to the content handler. For example, a stylesheet can be associated with an XML document using the following processing instruction:
<?xml-stylesheet href="mystyle.css" type="text/css"?>
The method that handles such instructions is
public void processingInstruction(String target, String data) {
    System.out.println("Received processing instruction:");
    System.out.println("Target: " + target);
    System.out.println("Data: " + data);
}
The last method you need to be concerned with is
setDocumentLocator(), which is called when each and every event is processed. Nothing is output by this method in this program, but I'll explain what its purpose is anyway. Whenever an entity in a document is processed, the parser calls
setDocumentLocator() with a
Locator object. The
Locator object contains information about where in the document the entity currently being processed is located. Here's the "do nothing" source code for the method:
public void setDocumentLocator(Locator locator) {
}
The methods of a
Locator object are described in Table 17.2.
Table 17.2. The Methods of a
Locator Object
Because the sample program doesn't concern itself with the specifics of locators, none of these methods are actually used. However, it's good for you to know about them in case you need to develop a program that somehow is interested in locators.
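To show what such a program might look like, here is my own sketch (not part of the tutorial's code) using Python's xml.sax, which mirrors the SAX Locator interface; the handler stores the locator handed to setDocumentLocator() and queries it during later events:

```python
# A sketch of how a locator is typically used: remember the object passed
# to setDocumentLocator(), then call its position methods from callbacks.
import io
import xml.sax

class LocatingHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.locator = None
        self.positions = []

    def setDocumentLocator(self, locator):
        self.locator = locator  # keep it for use in other callbacks

    def startElement(self, name, attrs):
        # getLineNumber() reports where the current event occurred.
        self.positions.append((name, self.locator.getLineNumber()))

doc = b"<projects>\n  <proj/>\n</projects>"
handler = LocatingHandler()
xml.sax.parse(io.BytesIO(doc), handler)
print(handler.positions)
```

The same pattern applies in Java: keep the Locator passed to setDocumentLocator() in a field and call getLineNumber() or getColumnNumber() from whichever event handler needs it.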
Implementing the
ErrorHandler Interface
I mentioned earlier that the
DocumentPrinter class implements two interfaces,
ContentHandler and
ErrorHandler. Let's look at the methods that are used to implement the
ErrorHandler interface. There are three types of errors that a SAX parser can generateerrors, fatal errors, and warnings. Classes that implement the
ErrorHandler interface must provide methods to handle all three types of errors. Here's the source code for the three methods:
public void error(SAXParseException exception) {
}

public void fatalError(SAXParseException exception) {
}

public void warning(SAXParseException exception) {
}
As you can see, each of the three methods accepts the same argument: a
SAXParseException object. The only difference between them is that they are called under different circumstances. To keep things simple, the sample program doesn't output any error notifications. For the sake of completeness, the full list of methods supported by
SAXParseException appears in Table 17.3.
Table 17.3. Methods of the
SAXParseException Interface
Similar to the
Locator methods, these methods aren't used in the Document Printer sample program, so you don't have to worry about the ins and outs of how they work.
Testing the Document Printer Program
Now that you understand how the code works in the Document Printer sample program, let's take it for a test drive one more time. This time around, you're running the program to parse the
condos.xml sample document from the previous tutorial. Here's an excerpt from that document in case it's already gotten a bit fuzzy in your memory:
<proj status="active">
    <location lat="36.122238" long="-86.845028" />
    <description>
        <name>Woodmont Close</name>
        <address>131 Woodmont Blvd.</address>
        <address2>Nashville, TN 37205</address2>
        <img>condowc.jpg</img>
    </description>
</proj>
And here's the command required to run this document through the Document Printer program:
java -classpath xercesImpl.jar;xml-apis.jar;. DocumentPrinter condos.xml
Finally, Listing 17.2 contains the output of the Document Printer program after feeding it the condominium map data stored in the
condos.xml document.
Listing 17.2. The Output of the Document Printer Example Program After Processing the
condos.xml Document
1: Start document.
2: Start element: projects
3: Start element: proj
4: Start element: location
5: End of element: location
6: Start element: description
7: Start element: name
8: Received characters: Woodmont Close
9: End of element: name
10: Start element: address
11: Received characters: 131 Woodmont Blvd.
12: End of element: address
13: Start element: address2
14: Received characters: Nashville, TN 37205
15: End of element: address2
16: Start element: img
17: Received characters: condowc.jpg
18: End of element: img
19: End of element: description
20: End of element: proj
21: ...
22: Start element: proj
23: Start element: location
24: End of element: location
25: Start element: description
26: Start element: name
27: Received characters: Harding Hall
28: End of element: name
29: Start element: address
30: Received characters: 2120 Harding Pl.
31: End of element: address
32: Start element: address2
33: Received characters: Nashville, TN 37215
34: End of element: address2
35: Start element: img
36: Received characters: condohh.jpg
37: End of element: img
38: End of element: description
39: End of element: proj
40: End of element: projects
41: End of document reached.
The excerpt from the
condos.xml document that you saw a moment ago corresponds to the first
proj element in the XML document. Lines 3 through 20 show how the Document Printer program parses and displays detailed information for this element and all of its content. | https://www.brainbell.com/tutorials/XML/Inside_The_SAX_Sample_Program.htm | CC-MAIN-2018-13 | refinedweb | 2,608 | 54.93 |
On Mon, Oct 22, 2012 at 12:41 PM, Mike C. Fletcher
<mcfletch@...> wrote:
>
>.
The suggestion I made off-thread was to have a repetitive starfield --
a 3x3 grid of the same pattern. That way you can invisibly loop the
scrolling of the pattern when you get far enough from the center --
since it's repetitive, there's no way to tell, say, the position (-10,
0) from the position (0, 0) (if each "cell" in the grid were 10x10).
Make the base pattern large enough (2x screen resolution should do)
and the repetition should be hard to notice.
Real-life parallax would be very hard to notice on these scales, but
scrolling starfields are a time-honored tradition for space games. :)
-Chris
On 12-10-19 12:52 PM, Ryan Hope wrote:
> I am working on a pygame app that I recently converted to using opegl
> for the backend. The game has a parallax star field in the background
> so I have been looking for the most efficient way of drawing multiple
> points. My game updates the display every 33ms. Bellow are the draw
> points methods I have tried.
VBO approach on my machine fullscreen at 1920x1200 shows around 280 fps
(no actual change with resolution change, btw). I'm just rendering a
random starfield of the same size as the Bright Star Catalog (9,100
stars). That looks good if you wander around near the center, but if
you make it small enough to see changes you rapidly run out of stars..
VBO approach looks like this:
from numpy import random
numpy_pointset = random.rand( 9110, 3 )
points = vbo.VBO( numpy_pointset )
then, during render:
glVertexPointerf(points)
glEnableClientState( GL_VERTEX_ARRAY )
glDrawArrays( GL_POINTS, 0, len(points))
glDisableClientState( GL_VERTEX_ARRAY )
I'm guessing that each update will wind up scatter-shot across your VBO
so you'll likely want to re-upload the whole thing each time you change
the star-field.
Code I'm using to test is here:
<>
Hope that helps, | https://sourceforge.net/p/pyopengl/mailman/pyopengl-users/?viewmonth=201210&viewday=22 | CC-MAIN-2017-30 | refinedweb | 327 | 77.47 |
Version 3.7
Add turn-by-turn directions to a page
The Directions component calculates directions between two or more locations. It adds a driving route to the map and provides turn-by-turn instructions. This tutorial describes how to add the Directions component to a map, customize the page layout to show the component in a side bar next to the map and (optionally) specify your own network analysis service instead of the default routing service.
Prerequisites
- Adobe Flash Builder 4.5.1 (or later)
- Download a copy of the ArcGIS API for Flex. The Directions component was made available at version 3.2 so make certain to download this version or later.
- An ArcGIS.com subscription with World Network Analysis is required. Alternatively, you could use your own ArcGIS Server Network Analysis service.
Note: The following steps assume you have completed the steps necessary to create a new project and add the Flex API library as discussed in the Getting started topic.
- In Flash Builder, create a new project and make certain to add the Flex API library and reference the esri namespace.
- In the Editor View, add the MXML below to create a map with a set spatial reference and extent and add a basemap layer
<s:Application xmlns: <esri:Map <esri:extent> <esri:WebMercatorExtent </esri:extent> <esri:ArcGISTiledMapServiceLayer/> </esri:Map> </s:Application>
Note:
The ArcGISTiledMapServiceLayer.url defaults to.
- In the Editor View, add the MXML below including reference to the layout tag.
<?xml version="1.0" encoding="utf-8"?> <s:Application xmlns: </s:layout> <esri:Map <esri:extent> ...
The above markup sets the padding on the left-hand side as to allow the directions component to display next to the map.
- Next, add the Directions component.
Note:
The Network Analyst service URL is referencing a sample from ArcGIS Online. This is limited to San Diego's extent. If you wish to work outside of this area, you will need to either: 1) access an ArcGIS.com subscription with World Network Analysis, or 2) use your own ArcGIS Server Network Analysis service.
... <s:layout> <s:HorizontalLayout </s:layout> <esri:Directions <esri:Map <esri:extent> ...
- Last, save and run the application.
- Enter your locations to route. You can do this by either: 1) typing in the address, or 2) clicking on a point on the map using the input tool provided to the right of the input box.
Calculate the driving directions after entering the destinations. You should see something similar to what is displayed in the screen capture below.
Calculating directions does not require a lot of coding when working with the provided Directions component. This component bundles a lot of the functionality within it so all you need to do is reference it in your code. We calculated simple driving directions in just a few steps. Conceptually, this is how the code breaks down:
- Set a map with a given extent and spatial reference
- Add a basemap layer
- Add the Directions component specifying the URL of the Network Analyst routing service
- Modify the page layout so that the component displays on the left-hand side of the application | https://developers.arcgis.com/flex/guide/tutorial-route-and-navigate.htm | CC-MAIN-2018-30 | refinedweb | 522 | 55.64 |
I'm trying to create a weblog with mvc. I made a database code first
with EF. Now I have a page where you can see one post per page. Below I
want to show all comments on the post. That's all working fine. But now I
want to make a create comment functionality on that same page.
I'm not sure how to do this? Because this has to create a new object
'comment' instead of the 'post' object
View Replies
I have the following create method for a Contact controller:
def create
  puts "in create method"
  @contact = Contact.create(params[:contact])
  unless @contact.vcard.path.blank?
    paperclip_vcard = File.new(@contact.vcard.path)
    @vcard = Vpim::Vcard.decode(paperclip_vcard).first
    @contact.title = @vcard.title
I want to create an AJAX search to find and list topics in a forum (just
topic link and subject).The question is: Which one of the methods is
better and faster?
GET threads list as a JSON string and
convert it to an object, then loop over items and create a
<li/> or <tr>, write data (link,
subject) and append it to threads list. (jQuery Pow
User clicks the button, and an icon is added to the iPhone desktop. If possible, can I activate this button manually?
In a Rails application, given three models User, Article and Reviewer
with the following relationships and validations:
class User < ActiveRecord::Base
  has_many :articles
  has_many :reviewers
end

class Reviewer < ActiveRecord::Base
  belongs_to :user
  belongs_to :article
end

class Article < ActiveRecord::Base
  belongs_to
Please explain what this task is about?
"create a generic linked list class that enables us to create a chain of objects of different types."
Do we need to create a class of type linkedlist and
implement list interface?
class LinkedList<T> : IList<T>
{
    // implement interface methods here?
}
Please give an example.
What I want to do is to create something like that
hotmail/facebook-style list of selected contacts.. with 1 little block and
a "X" for removing each item.
How could I achieve that in
.NET?
I thought of creating new labels "on the fly" and use
.NET's Ajax UpdatePanel..
But, can I do it? If yes, how can I
create a label on the fly, and put it just where I want
For example, I have an ArrayList:
ArrayList<Animal>
zoo = new ArrayList<Animal>();// some code to add animals to
zoo
I want to create a copy of this old
zoo to new zoo, and I can use remove/add... to process
this new zoo.(and of course, it will not affect to old zoo). So, I have a
way that: create new zoo that clon
old
zoo
new zoo
Is it possible to create an IntelliJ plugin to create a new "Module
Type", That is I want to create a new module type in a project that can be
dependent on other modules of any type in the project and be a dependency
for any other modules in the project, and when building the new custom
module type execute code specific to the new type of module ( ie: it's
custom compiler or other external comm
I have a problem with threads. When I want to set a GridView as the View of a ListView from another thread, it displays a message which says:
Must create DependencySource on same Thread as the
DependencyObject.
// Create grid view
GridView grid = new GridView();
// Add column
// Name
grid.C
Sometimes there will be a decimal point before any numbers, such as with amounts under a dollar. Try to think of a way that you'd incorporate the very large number of possibilities; it is pretty hard. This is where regular expressions come in.
Regular expressions are used to sift through text-based data to find things. Regular expressions express a pattern of data that is to be located. Regex is its own language, and is basically the same no matter what programming language you are using with it.
In Python 3, the module to use regular expressions is re, and it must be imported to use regular expressions. Re is a part of the standard library, meaning you will not need to do any downloading and installing to use it, it is already there.
Back to our example above, before getting to the video tutorial, let me break down how prices would be discovered in a regex frame of mind:
You'd tell the regular expression module that you are looking for the string to begin with a dollar sign. Then you are looking for either a group of digits or an immediate period / decimal point. From here, you would keep looking for digits, commas, and periods until you finally reach an ending period before a space (indicating the end of a sentence rather than a decimal point), or just a space. This is exactly how you will structure a real regular expression.
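That verbal description translates almost directly into re syntax. Here is one way to write it; the exact pattern is my own sketch, not one given by the tutorial:

```python
# One possible money pattern for the description above:
#   \$               a literal dollar sign (escaped, since $ is special)
#   \d+(?:,\d{3})*   digits, optionally grouped by commas
#   |\.\d+           ...or an immediate decimal point, as in $.50
#   (?:\.\d{1,2})?   an optional cents part
import re

money = re.compile(r'\$(?:\d+(?:,\d{3})*|\.\d+)(?:\.\d{1,2})?')
text = "Lunch was $7.25, the TV cost $1,299.99, candy is $.50, and parking was $5."
print(money.findall(text))  # ['$7.25', '$1,299.99', '$.50', '$5']
```

Note how the final "$5." keeps only "$5": since no digit follows the period, the pattern treats it as sentence punctuation rather than a decimal point, just as the description requires.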
For the video, our task is to locate names and ages of people. The code:
Here is a quick cheat sheet for various rules in regular expressions:
Identifiers:
\d = any number
\D = anything but a number
\s = whitespace
\S = anything but whitespace
\w = any letter or digit
\W = anything but a letter or digit
. = any character, except for a newline
\b = the boundary around whole words

Modifiers:
{1,3} = expect 1 to 3 repetitions
+ = match 1 or more
? = match 0 or 1 repetitions
* = match 0 or more repetitions
$ = match the end of a string
^ = match the beginning of a string
| = either/or

White Space Characters:
\n = newline
\s = space
\t = tab
\r = carriage return
\f = form feed

Characters to REMEMBER TO ESCAPE IF USED!
. + * ? [ ] $ ^ ( ) { } | \

Brackets:
[a-z] = any lowercase letter in the range a to z
[1-5a-qA-Z] = any digit 1 to 5, lowercase a to q, or any uppercase letter
The code:
So, we have the string we intend to search. We see that we have ages that are integers 2-3 numbers in length. We could also expect digits that are just 1, under 10 years old. We probably wont be seeing any digits that are 4 in length, unless we're talking about biblical times or something.
import re

exampleString = '''
Jessica is 15 years old, and Daniel is 27 years old.
Edward is 97 years old, and his grandfather, Oscar, is 102.
'''
Now we define the regular expression, using a simple findall method to find all examples of the pattern we specify as the first parameter within the string we specify as the second parameter.
ages = re.findall(r'\d{1,3}', exampleString)
names = re.findall(r'[A-Z][a-z]*', exampleString)

print(ages)
print(names)
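Here is the same pair of findall() calls with the results they produce, so you can check your own run against it:

```python
import re

exampleString = '''
Jessica is 15 years old, and Daniel is 27 years old.
Edward is 97 years old, and his grandfather, Oscar, is 102.
'''

ages = re.findall(r'\d{1,3}', exampleString)
names = re.findall(r'[A-Z][a-z]*', exampleString)
print(ages)   # ['15', '27', '97', '102']
print(names)  # ['Jessica', 'Daniel', 'Edward', 'Oscar']
```

One caveat on the age pattern: \d{1,3} would split a four-digit number such as 1024 into '102' and '4', so the 1-to-3-digit bound only fits data like this where ages stay under four digits.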