kellyshreeve committed on
Commit d502fe4 · verified · 1 Parent(s): 58e8f2c

Upload 6 files

Files changed (5):
  1. Dockerfile +11 -0
  2. chainlit.md +14 -0
  3. chainlit_app.py +86 -0
  4. context_shortest.txt +1 -0
  5. requirements.txt +7 -0
Dockerfile ADDED
@@ -0,0 +1,11 @@
+ FROM python:3.10
+
+ WORKDIR /app
+
+ COPY ./requirements.txt /app/requirements.txt
+
+ RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
+
+ COPY . .
+
+ CMD ["chainlit", "run", "chainlit_app.py", "--port", "7860"]
chainlit.md ADDED
@@ -0,0 +1,14 @@
+ # Welcome to Chainlit! 🚀🤖
+
+ Hi there, Developer! 👋 We're excited to have you on board. Chainlit is a powerful tool designed to help you prototype, debug and share applications built on top of LLMs.
+
+ ## Useful Links 🔗
+
+ - **Documentation:** Get started with our comprehensive [Chainlit Documentation](https://docs.chainlit.io) 📚
+ - **Discord Community:** Join our friendly [Chainlit Discord](https://discord.gg/k73SQ3FyUh) to ask questions, share your projects, and connect with other developers! 💬
+
+ We can't wait to see what you create with Chainlit! Happy coding! 💻😊
+
+ ## Welcome screen
+
+ To modify the welcome screen, edit the `chainlit.md` file at the root of your project. If you do not want a welcome screen, just leave this file empty.
chainlit_app.py ADDED
@@ -0,0 +1,86 @@
+ from langchain import HuggingFacePipeline, PromptTemplate, LLMChain
+ from langchain.memory import ConversationBufferMemory
+ from langchain.vectorstores import FAISS
+ from langchain.llms import CTransformers
+ from langchain.chains import RetrievalQA
+ from langchain.embeddings import HuggingFaceEmbeddings
+ from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
+ from langchain.document_loaders import TextLoader
+ import chainlit as cl
+ from huggingface_hub import login
+ import torch
+
+ from dotenv import dotenv_values
+
+ # Read the Hugging Face token from a local .env file (never hard-code it).
+ secrets = dotenv_values('.env')
+
+ login(token=secrets["HUGGING_FACE_KEY"])
+
+ model_id = "TheBloke/Llama-2-13B-chat-GGML"
+
+ llm = CTransformers(
+     model=model_id,
+     model_type='llama',
+     max_new_tokens=512,
+     temperature=0.5,
+     repetition_penalty=1.1,
+     config={'context_length': 700}
+ )
+
+ # Load the corpus, split it into chunks, embed them, and build the index.
+ with open('context_shortest.txt') as f:
+     text = f.read()
+
+ text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
+ texts = text_splitter.create_documents([text])
+
+ embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2',
+                                    model_kwargs={'device': 'cpu'})
+
+ vectorstore = FAISS.from_documents(texts, embeddings)
+
+
+ custom_prompt_template = '''Use the following pieces of information to answer the user's question.
+ If you don't know the answer, please just say you don't know. Don't make up an answer.
+ You are a helpful assistant; you only answer for the assistant, then you stop.
+
+ Context: {context}
+ History: {history}
+ Question: {question}
+
+ Only return the helpful answer below and nothing else.
+ Helpful answer:
+ '''
+
+ retriever = vectorstore.as_retriever()
+
+ @cl.on_chat_start
+ async def main():
+     prompt = PromptTemplate(template=custom_prompt_template, input_variables=['history', 'context', 'question'])
+
+     qa_chain = RetrievalQA.from_chain_type(
+         llm=llm,
+         chain_type='stuff',
+         retriever=retriever,
+         chain_type_kwargs={'prompt': prompt,
+                            'memory': ConversationBufferMemory(
+                                memory_key='history',
+                                input_key='question')}
+     )
+
+     msg = cl.Message(content="Firing up the QA bot...")
+     await msg.send()
+     msg.content = "Hello, welcome to the DataSpeak Chatbot. What is your query?"
+     await msg.update()
+     cl.user_session.set("qa_chain", qa_chain)
+
+
+ @cl.on_message
+ async def main(message: str):
+     qa_chain = cl.user_session.get("qa_chain")
+
+     res = await qa_chain.acall(message, callbacks=[cl.AsyncLangchainCallbackHandler()])
+
+     await cl.Message(
+         content=res["result"]
+     ).send()
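The chain above fills a three-variable prompt (`context`, `history`, `question`) before each LLM call. As a rough stdlib sketch of that interpolation step (a stand-in for LangChain's `PromptTemplate`, using a shortened, hypothetical template rather than the app's exact wording):

```python
# Minimal stand-in for LangChain's PromptTemplate, illustrating how the
# chain interpolates its input_variables before calling the LLM.
# The template text here is a shortened, hypothetical version of the one
# in chainlit_app.py.
TEMPLATE = '''Use the following pieces of information to answer the user's question.
If you don't know the answer, just say you don't know.

Context: {context}
History: {history}
Question: {question}

Helpful answer:
'''

def render_prompt(context, history, question):
    # str.format performs the same substitution PromptTemplate.format does
    # for simple {placeholder}-style templates.
    return TEMPLATE.format(context=context, history=history, question=question)

prompt = render_prompt(
    context="The app listens on port 7860 inside the container.",
    history="",
    question="Which port does the app use?",
)
print(prompt)
```

In the real chain, `context` is filled by the FAISS retriever's top documents and `history` by the `ConversationBufferMemory` keyed on `"history"`.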
context_shortest.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ open up a terminal applications utilities terminal and type this in locate insertfonthere br this will spit out every file that has the name you want. warning there may be alot to wade through. i haven't been able to find anything that does this directly. i think you'll have to iterate through the various font folders on the system system library fonts library fonts and there can probably be a user level directory as well library fonts . unfortunately the only api that isn't de cated is located in the applicationservices framework which doesn't have a bridge support file and thus isn't available in the bridge. if you're wanting to use ctypes you can use atsfontgetfilereference after looking up the atsfontref. cocoa doesn't have any native support at least as of . for getting the location of a font. there must be a method in cocoa to get a list of fonts then you would have to use the pyobjc bindings to call it.. depending on what you need them for you could probably just use something like the following.. import os def get_font_list fonts [] for font_path in [ library fonts os.path.expanduser library fonts ] if os.path.isdir font_path fonts.extend [os.path.join font_path cur_font for cur_font in os.listdir font_path ] return fonts you can use imagemagick's convert utility for this see some examples in https web.archive.org web http studio.imagemagick.org pipermail magick users may .html rel nofollow http studio.imagemagick.org pipermail magick users may .html blockquote convert taxes. taxes. will convert a two page file into [ ] files taxes. . taxes. . i can also convert these s to a thumbnail as follows convert size x taxes. . geometry x profile ' ' thumbnail. i can even convert the directly to a thumbnail as follows convert size x taxes. geometry x profile ' ' thumbnail. this will result in a thumbnail. . and thumbnail. . for the two pages. 
blockquote is the pc likely to have acrobat installed i think acrobat installs a shell extension so views of the first page of a document appear in windows explorer's thumbnail view. you can get thumbnails yourself via the iextractimage com api which you'll need to wrap. http www.vbaccelerator.com home net code libraries shell_projects thumbnail_extraction article.asp rel nofollow title domain specific development with visual studio dsl tools. vbaccelerator has an example in c that you could port to python. imagemagick delegates the bitmap conversion to ghostscript anyway so here's a command you can use it's based on the actual command listed by the ps alpha delegate in imagemagick just adjusted to use as output gs q dquiet dparanoidsafer dbatch dnopause dnoprompt \ dmaxbitmap dlastpage daligntopixels dgridfittt \ sdevice dtextalphabits dgraphicsalphabits r x \ soutputfile output f input where output and input are the output and input filenames. adjust the x to whatever resolution you need. obviously strip out the backslashes if you're writing out the whole command as one line. this is good for two reasons ol li you don't need to have imagemagick installed anymore. not that i have anything against imagemagick i love it to bits but i believe in simple solutions. li li imagemagick does a two step conversion. first ppm then ppm . this way the conversion is one step. li ol other things to consider with the files i've tested png com sses better than . if you want to use png change the sdevice to sdevice png m . one possibility is hudson. it's written in java but there's integration with python projects blockquote http redsolo.blogspot.com hudson embraces python.html rel nofollow hudson embraces python blockquote i've never tried it myself however. strong update strong sept. after a trademark dispute hudson has been renamed to http jenkins ci.org rel nofollow jenkins . 
we run http buildbot.net trac buildbot trac at work i haven't used it too much since my code base isn't part of the release cycle yet. but we run the tests on different environments osx linux win and it sends emails and it's written in python. second the buildbot trac integration. you can find more information about the integration on the http buildbot.net trac wiki buildbotandtrac buildbot website . at my vious job we wrote and used the plugin they mention tracbb . what the plugin does is rewriting all of the buildbot urls so you can use buildbot from within trac. http example.com tracbb . the really nice thing about buildbot is that the configuration is written in python. you can integrate your own python code directly to the configuration. it's also very easy to write your own buildsteps to execute specific tasks. we used buildsteps to get the source from svn pull the dependencies publish test results to webdav etcetera. i wrote an x interface so we could send signals with build results. when the build failed we switched on a red lava lamp. when the build succeeded a green lava lamp switched on. good times we use both buildbot and hudson for jython development. both are useful but have different stren hs and weaknesses. buildbot's configuration is pure python and quite simple once you get the hang of it look at the epydoc generated api docs for the most current info . buildbot makes it easier to define non testing tasks and distribute the testers. however it really has no concept of individual tests just textual html and summary output so if you want to have multi level browsable test output and so forth you'll have to build it yourself or just use hudson. 
hudson has terrific support for drilling down from overall results into test suites and individual tests it also is great for comparing test output between builds but the distributed master slave stuff is comparatively more complicated because you need a java environment on the slaves too also hudson is less tolerant of flaky network links between the master and slaves. so to get the benefits of both tools we run a single instance of hudson which catches the common test failures then we do multi platform regression with buildbot. here are our instances ul li http bob.underboss.org job jython lastbuild testreport jython hudson li li http www.acm.uiuc.edu jython buildbot waterfall jython buildbot li ul we are using http bitten.edgewall.org rel nofollow bitten wich is integrated with trac. and it's python based. teamcity has some python http www.jetbrains.net confluence display tw python unit test reporting rel nofollow integration . but teamcity is ul li not open source li li is not small but rather feature rich li li is free for small mid teams. li ul i have very good experiences with http travis ci.org rel nofollow travis ci for smaller code bases. the main advantages are ul li setup is done in less than half a screen of config file li li you can do your own installation or just use the free hosted version li li semi automatic setup for github repositories li li no account needed on website login via github li ul some limitations ul li python is not supported as a first class language as of time of writing but you can use pip and apt get to install python dependencies see http www.travisswicegood.com travis and python rel nofollow this tutorial li li code has to be hosted on github at least when using the official version li ul the canonical way is to use the built in cursor iterator. curs.execute 'select from people' for row in curs print row hr you can use fetchall to get all rows at once. 
for row in curs.fetchall print row it can be convenient to use this to create a python list containing the values returned curs.execute 'select first_name from people' names [row[ ] for row in curs.fetchall ] this can be useful for smaller result sets but can have bad side effects if the result set is large. ul li you have to wait for the entire result set to be returned to your client process. li li you may eat up a lot of memory in your client to hold the built up list. li li it may take a while for python to construct and deconstruct the list which you are going to immediately discard anyways. li ul hr if you know there's a single row being returned in the result set you can call fetchone to get the single row. curs.execute 'select max x from t' maxvalue curs.fetchone [ ] hr finally you can loop over the result set fetching one row at a time. in general there's no particular advantage in doing this over using the iterator. row curs.fetchone while row print row row curs.fetchone there's also the way psyco pg seems to do it... from what i gather it seems to create dictionary like row proxies to map key lookup into the memory block returned by the query. in that case fetching the whole answer and working with a similar proxy factory over the rows seems like useful idea. come to think of it though it feels more like lua than python. also this should be applicable to all http www.python.org dev peps pep rel nofollow pep dbapi . interfaces not just oracle or did you mean just em fastest em using em oracle em my ferred way is the cursor iterator but setting first the arraysize property of the cursor. curs.execute 'select from people' curs.arraysize for row in curs print row in this example cx_oracle will fetch rows from oracle rows at a time reducing the number of network round trips that need to be performed no you were not dreaming. 
python has a tty excellent list com hension system that lets you manipulate lists tty elegantly and depending on exactly what you want to accomplish this can be done a couple of ways. in essence what you're doing is saying for item in list if criteria.matches and from that you can just iterate through the results or dump the results into a new list. i'm going to crib an example from http diveintopython.net functional_programming filtering_lists.html rel nofollow dive into python here because it's tty elegant and they're smarter than i am. here they're getting a list of files in a directory then filtering the list for all files that match a regular ex ssion criteria. blockquote files os.listdir path test re.compile test\.py re.ignorecase files [f for f in files if test.search f ] blockquote you could do this without regular ex ssions for your example for anything where your ex ssion at the end returns true for a match. there are other options like using the filter function but if i were going to choose i'd go with this. eric sipple i think bin python br bar in dict foo br is what you are thinking of. when trying to see if a certain key exists within a dictionary in python python's version of a hash table there are two ways to check. first is the strong has_key strong method attached to the dictionary and second is the example given above. it will return a boolean value. that should answer your question. and now a little off topic to tie this in to the em list com hension em answer viously given for a bit more clarity . em list com hensions em construct a list from a basic em for loop em with modifiers. as an example to clarify slightly a way to use the in dict language construct in a _list com hension_ say you have a two dimensional dictionary strong foo strong and you only want the second dimension dictionaries which contain the key strong bar strong . 
a relatively straightforward way to do so would be to use a em list com hension em with a conditional as follows bin python br baz dict [ key value for key value in foo if bar in value] br note the strong if bar in value strong at the end of the statement strong this is a modifying clause which tells the em list com hension em to only keep those key value pairs which meet the conditional. strong in this case strong baz strong is a new dictionary which contains only the dictionaries from foo which contain bar hopefully i didn't miss anything in that code example... you may have to take a look at the list com hension documentation found in http docs.python.org tut node .html section rel nofollow docs.python.org tutorials and at http www.secnetix.de olli python list_com hensions.hawk rel nofollow secnetix.de both sites are good references if you have questions in the future. . are you looking to get a list of objects that have a certain attribute if so a http docs.python.org tut node .html section list com hension is the right way to do this. result [obj for obj in listofobjs if hasattr obj 'attributename' ] br what i was thinking of can be achieved using list com hensions but i thought that there was a function that did this in a slightly neater way. i.e. 'bar' is a list of objects all of which have the attribute 'id' the mythical functional way foo br foo in iter_attr bar 'id' the list com hension way foo br foo in [obj.id for obj in bar] in retrospect the list com hension way is tty neat anyway. you could always write one yourself def iterattr iterator attributename for obj in iterator yield getattr obj attributename will work with anything that iterates be it a tuple list or whatever. i love python it makes stuff like this very simple and no more of a hassle than neccessary and in use stuff like this is hugely elegant. if you plan on searching anything of remotely decent size your best bet is going to be to use a dictionary or a set. 
otherwise you basically have to iterate through every element of the iterator until you get to the one you want. if this isn't necessarily performance sensitive code then the list com hension way should work. but note that it is fairly inefficient because it goes over every element of the iterator and then goes back over it again until it finds what it wants. remember python has one of the most efficient hashing algorithms around. use it to your advantage. using a list com hension would build a temporary list which could eat all your memory if the sequence being searched is large. even if the sequence is not large building the list means iterating over the whole of the sequence before in could start its search. the temporary list can be avoiding by using a generator ex ssion foo foo in obj.id for obj in bar now as long as obj.id near the start of bar the search will be fast even if bar is infinitely long. as matt suggested it's a good idea to use hasattr if any of the objects in bar can be missing an id attribute foo foo in obj.id for obj in bar if hasattr obj 'id' the function you are thinking of is probably operator.attrgettter . for example to get a list that contains the value of each object's id attribute import operator ids map operator.attrgetter id bar if you want to check whether the list contains an object with an id then a neat and efficient i.e. doesn't iterate the whole list unnecessarily way to do it is any obj.id for obj in bar if you want to use 'in' with attrgetter while still retaining lazy iteration of the list import operator itertools foo foo in itertools.imap operator.attrgetter id bar sounds to me like you're trying to combine things that shouldn't be combined. if you need to do different processing in your view depending on if it's a user or group object you're trying to look at then you should use two different view functions. on the other hand there can be common idioms you'd want to extract out of your object_detail type views... 
perhaps you could use a decorator or just helper functions dan if you're simply displaying data from models why not use the https docs.djangoproject.com en . ref generic views rel nofollow django generic views they're designed to let you easy show data from a model without having to write your own view and stuff about mapping url paramaters to views fetching data handling edge cases rendering output etc. unless you want to do something a little complex using the generic views are the way to go. they are far more powerful than their name implies and if you are just displaying model data generic views will do the job. if you want to share common functionality between pages i suggest you look at custom tags. they're quite https docs.djangoproject.com en . howto custom template tags rel nofollow easy to create and are very powerful. also https code.djangoproject.com wiki extendin emplates rel nofollow templates can extend from other templates . this allows you to have a base template to set up the layout of the page and to share this between other templates which fill in the blanks. you can nest templates to any depth allowing you to specify the layout on separate groups of related pages in one place. you can always create a class override the em __call__ em function and then point the url file to an instance of the class. you can take a look at the http code.djangoproject.com browser django trunk django contrib formtools wizard.py rel nofollow formwizard class to see how this is done. i've created and used my own generic view classes defining strong __call__ strong so an instance of the class is callable. i really like it while django's generic views allow some customization through keyword arguments oo generic views if their behavior is split into a number of separate methods can have much more fine grained customization via subclassing which lets me repeat myself a lot less. 
i get tired of rewriting the same create update view logic anytime i need to tweak something django's generic views don't quite allow . i've posted some code at http www.djangosnippets.org snippets djangosnippets.org . the only real downside i see is the proliferation of internal method calls which may impact performance somewhat. i don't think this is much of a concern it's rare that python code execution would be your performance bottleneck in a web app. strong update strong django's own http docs.djangoproject.com en dev topics class based views generic views are now class based. strong update strong fwiw i've changed my opinion on class based views since this answer was written. after having used them extensively on a couple of projects i feel they tend to lead to code that is satisfyingly dry to write but very hard to read and maintain later because functionality is s ad across so many different places and subclasses are so dependent on every implementation detail of the superclasses and mixins. i now feel that https docs.djangoproject.com en dev ref template response templateresponse and view decorators is a better answer for decomposing view code. generic views will usually be the way to go but ultimately you're free to handle urls however you want. formwizard does things in a class based way as do some apps for restful apis. basically with a url you are given a bunch of variables and place to provide a callable what callable you provide is completely up to you the standard way is to provide a function but ultimately django puts no restrictions on what you do. i do agree that a few more examples of how to do this would be good formwizard is probably the place to start though. i needed to use class based views but i wanted to be able to use the full name of the class in my urlconf without always having to instantiate the view class before using it. 
what helped me was a surprisingly simple metaclass class callableviewclass type def __call__ cls args kwargs if args and isinstance args[ ] htt quest instance super callableviewclass cls .__call__ return instance.__call__ args kwargs else instance super callableviewclass cls .__call__ args kwargs return instance class view object __metaclass__ callableviewclass def __call__ self request args kwargs if hasattr self request.method handler getattr self request.method if hasattr handler '__call__' return handler request args kwargs return htt sponsebadrequest 'method not allowed' status i can now both instantiate view classes and use the instances as view functions or i can simply point my urlconf to my class and have the metaclass instantiate and call the view class for me. this works by checking the first argument to __call__ if it's a htt quest it must be an actual http request because it would be nonsense to attept to instantiate a view class with an htt quest instance. class myview view def __init__ self arg none self.arg arg def get request return htt sponse self.arg or 'no args provided' login_required class myotherview view def post request pass and all the following work as expected. urlpatterns patterns '' url r'^myview ' 'myapp.views.myview' name 'myview ' url r'^myview ' myapp.views.myview name 'myview ' url r'^myview ' myapp.views.myview 'foobar' name 'myview ' url r'^myotherview ' 'myapp.views.myotherview' name 'otherview' i posted a snippet for this at http djangosnippets.org snippets http djangosnippets.org snippets you can use the django generic views. you can easily achieve desired functionality thorough django generic views i don't have any experience with http www.siteground.com rel nofollow http www.siteground.com as a web host personally. this is just a guess but it's common for a shared host to support python and mysql with the mysqldb module e.g. godaddy does this . try the following cgi script to see if mysqldb is installed. 
usr bin python br br module_name 'mysqldb' br head '''content type text html br br s is ''' module_name br br try br __import__ module_name br print head 'installed' br except importerror br print head 'not installed' br i uploaded it and got an internal error mature end of script headers br after much playing around i found that if i had import cgi br import cgitb cgitb.enable br import mysqldb br it would give me a much more useful answer and say that it was not installed you can see it yourself http woarl.com db.py rel nofollow http woarl.com db.py oddly enough this would produce an error import mysqldb br import cgi br import cgitb cgitb.enable br i looked at some of the other files i had up there and it seems that library was one of the ones i had already tried. mysqldb is what i have used before. if you host is using python version . or higher support for sqlite databases is built in sqlite allows you to have a relational database that is simply a file in your filesystem . but buyer beware sqlite is not suited for production so it may depend what you are trying to do with it. another option may be to call your host and complain or change hosts. honestly these days any self respecting web host that supports python and mysql ought to have mysqldb installed. you could try setting up your own python installation using http peak.telecommunity.com devcenter easyinstall creating a virtual python rel nofollow virtual python . check out how to setup django using it http forums.site .com showthread.php t rel nofollow here . that was written a long time ago but it shows how i got mysqldb setup without having root access or anything like it. once you've got the basics going you can install any python library you want. you really want mysqldb for any mysql python code. however you shouldn't need root access or anything to use it. you can build install it in a user directory lib python .x site packages and just add that to your python_path env variable. 
this should work for just about any python library. give it a shot there really isn't a good alternative. take a pick at https docs.djangoproject.com en . ref databases rel nofollow https docs.djangoproject.com en . ref databases mysqldb is mostly used driver but if you are using python and django . .x that will not work then you should use mysqlclient that is a folk of mysqldb on the following link https pypi.python.org pypi mysqlclient rel nofollow https pypi.python.org pypi mysqlclient can you show us your code the example on the python docs is quite straightforward groups [] uniquekeys [] for k g in groupby data keyfunc groups.append list g store group iterator as a list uniquekeys.append k so in your case data is a list of nodes keyfunc is where the logic of your criteria function goes and then groupby groups the data. you must be careful to strong sort the data strong by the criteria before you call groupby or it won't work. groupby method actually just iterates through a list and whenever the key changes it creates a new group. as sebastjan said strong you first have to sort your data. this is important. strong the part i didn't get is that in the example construction groups [] uniquekeys [] for k g in groupby data keyfunc groups.append list g store group iterator as a list uniquekeys.append k k is the current grouping key and g is an iterator that you can use to iterate over the group defined by that grouping key. in other words the groupby iterator itself returns iterators. here's an example of that using clearer variable names from itertools import groupby things [ animal bear animal duck plant cactus vehicle speed boat vehicle school bus ] for key group in groupby things lambda x x[ ] for thing in group print a s is a s. thing[ ] key print this will give you the output blockquote a bear is a animal. br a duck is a animal. a cactus is a plant. a speed boat is a vehicle. br a school bus is a vehicle. 
blockquote in this example things is a list of tuples where the first item in each tuple is the group the second item belongs to. the groupby function takes two arguments the data to group and the function to group it with. here lambda x x[ ] tells groupby to use the first item in each tuple as the grouping key. in the above for statement groupby returns three key group iterator pairs once for each unique key. you can use the returned iterator to iterate over each individual item in that group. here's a slightly different example with the same data using a list com hension for key group in groupby things lambda x x[ ] listofthings and .join [thing[ ] for thing in group] print key s listofthings . this will give you the output blockquote animals bear and duck. br plants cactus. br vehicles speed boat and school bus. blockquote a neato trick with groupby is to run len h encoding in one line [ c len list cgen for c cgen in groupby some_string ] will give you a list of tuples where the first element is the char and the nd is the number of repetitions. captsolo i tried your example but it didn't work. from itertools import groupby [ c len list cs for c cs in groupby 'pedro manoel' ] output [ 'p' 'e' 'd' 'r' 'o' ' ' 'm' 'a' 'n' 'o' 'e' 'l' ] as you can see there are two o's and two e's but they got into separate groups. that's when i realized you need to sort the list passed to the groupby function. so the correct usage would be name list 'pedro manoel' name.sort [ c len list cs for c cs in groupby name ] output [ ' ' 'm' 'p' 'a' 'd' 'e' 'l' 'n' 'o' 'r' ] just remembering if the list is not sorted the groupby function strong will not work strong another example for key igroup in itertools.groupby xrange lambda x x print key list igroup results in [ ] [ ] [ ] note that igroup is an iterator a sub iterator as the documentation calls it . 
this is useful for chunking a generator def chunker items chunk_size '''group items in chunks of chunk_size''' for _key group in itertools.groupby enumerate items lambda x x[ ] chunk_size yield g[ ] for g in group with open 'file.txt' as fobj for chunk in chunker fobj process chunk another example of groupby when the keys are not sorted. in the following example items in xx are grouped by values in yy. in this case one set of zeros is output first followed by a set of ones followed again by a set of zeros. xx range yy [ ] for group in itertools.groupby iter xx lambda x yy[x] print group[ ] list group[ ] produces [ ] [ ] [ ] i would like to give another example where groupby without sort is not working. adapted from example by james sulak from itertools import groupby things [ vehicle bear animal duck animal cactus vehicle speed boat vehicle school bus ] for key group in groupby things lambda x x[ ] for thing in group print a s is a s. thing[ ] key print output is a bear is a vehicle. a duck is a animal. a cactus is a animal. a speed boat is a vehicle. a school bus is a vehicle. there are two groups with vehicule whereas one could expect only one group warning the syntax list groupby ... won't work the way that you intend. it seems to destroy the internal iterator objects so using for x in list groupby range print list x[ ] will produce [] [] [] [] [] [] [] [] [] [ ] instead of list groupby ... try [ k list g for k g in groupby ... ] or if you use that syntax often def groupbylist args kwargs return [ k list g for k g in groupby args kwargs ] and get access to the groupby functionality while avoiding those pesky for small data iterators all together. blockquote strong how do i use python's itertools.groupby strong blockquote you can use groupby to group things to iterate over. 
You give `groupby` an iterable and an optional **key** function/callable by which to check the items as they come out of the iterable, and it returns an iterator that gives a two-tuple of the result of the key callable and the actual items in another iterable. From the help:

> `groupby(iterable[, keyfunc])` -> create an iterator which returns `(key, sub-iterator)` grouped by each value of `key(value)`.

Here's an example of `groupby` using a coroutine to group by a count. It uses a key callable (in this case, `groups.send`) to just spit out the count for however many iterations, and a grouped sub-iterator of elements:

```python
import itertools

def grouper(iterable, n):
    def coroutine(n):
        yield  # queue up coroutine
        for i in itertools.count():
            for j in range(n):
                yield i
    groups = coroutine(n)
    next(groups)  # queue up coroutine
    for c, objs in itertools.groupby(iterable, groups.send):
        yield c, list(objs)
    # or instead of materializing a list of objs, just:
    # return itertools.groupby(iterable, groups.send)

list(grouper(range(10), 3))
```

prints:

```
[(0, [0, 1, 2]), (1, [3, 4, 5]), (2, [6, 7, 8]), (3, [9])]
```

Thanks for lots of nice answers. This is a HackerRank challenge about **groupby** – the ["Compress the String!"](https://www.hackerrank.com/challenges/compress-the-string) submission; a solution along these lines:

```python
from itertools import groupby

for key, group in groupby(input()):
    # you can look at what group is: print(group)
    print((len(list(group)), int(key)), end=' ')
```

The output just counts the number of occurrences of each run of digits.

In Python, monkey patching generally works by overwriting a class or function's signature with your own. Below is an example from the [Zope wiki](http://wiki.zope.org/zope2/MonkeyPatch):

```python
from SomeOtherProduct.SomeModule import SomeClass

def speak(self):
    return "ook ook eee eee eee!"

SomeClass.speak = speak
```

That code will overwrite/create a method called `speak` on the class. In Jeff Atwood's [recent post on monkey patching](http://www.codinghorror.com/blog/archives/001151.html), he shows an example in C# (the current language I use for work). I don't know Python syntax, but I know Ruby can do it, and it is rather trivial.
Let's say you want to add a method to Array that prints the length to standard out:

```ruby
class Array
  def print_length
    puts length
  end
end
```

If you don't want to modify the whole class, you can just add the method to a single instance of the array, and no other arrays will have the method:

```ruby
array = [1, 2, 3]
def array.print_length
  puts length
end
```

Just be aware of the issues involved in using this feature. Jeff Atwood actually [wrote about it](http://www.codinghorror.com/blog/archives/001151.html) not too long ago.

In Python, there is a difference between functions and bound methods.

```python
>>> def foo():
...     print "foo"
...
>>> class A:
...     def bar(self):
...         print "bar"
...
>>> a = A()
>>> foo
<function foo at 0x...>
>>> a.bar
<bound method A.bar of <__main__.A instance at 0x...>>
```

Bound methods have been "bound" (how descriptive) to an instance, and that instance will be passed as the first argument whenever the method is called.

Callables that are attributes of a class (as opposed to an instance) are still unbound, though, so you can modify the class definition whenever you want:

```python
>>> def fooFighters(self):
...     print "fooFighters"
...
>>> A.fooFighters = fooFighters
>>> a2 = A()
>>> a2.fooFighters
<bound method A.fooFighters of <__main__.A instance at 0x...>>
>>> a2.fooFighters()
fooFighters
```

Previously defined instances are updated as well (as long as they haven't overridden the attribute themselves):

```python
>>> a.fooFighters()
fooFighters
```

The problem comes when you want to attach a method to a single instance:

```python
>>> def barFighters(self):
...     print "barFighters"
...
```
```python
>>> a.barFighters = barFighters
>>> a.barFighters()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: barFighters() takes exactly 1 argument (0 given)
```

The function is not automatically bound when it's attached directly to an instance:

```python
>>> a.barFighters
<function barFighters at 0x...>
```

To bind it, we can use the [MethodType function in the types module](http://docs.python.org/library/types.html#types.MethodType):

```python
>>> import types
>>> a.barFighters = types.MethodType(barFighters, a)
>>> a.barFighters
<bound method ?.barFighters of <__main__.A instance at 0x...>>
>>> a.barFighters()
barFighters
```

This time, other instances of the class have not been affected:

```python
>>> a2.barFighters()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: A instance has no attribute 'barFighters'
```

More information can be found by reading about [descriptors](http://users.rcn.com/python/download/Descriptor.htm) and [metaclass](http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html) [programming](http://www.gnosis.cx/publish/programming/metaclass_1.html).

What you're looking for is `setattr`, I believe. Use this to set an attribute on an object:

```python
def printme(s):
    print repr(s)

class A:
    pass

setattr(A, 'printme', printme)
a = A()
a.printme()  # s becomes the implicit 'self' variable
# <__main__.A instance at 0x...>
```

What Jason Pratt posted is correct:

```python
>>> class Test(object):
...     def a(self):
...         pass
...
>>> def b(self):
...     pass
...
>>> Test.b = b
>>> type(b)
<type 'function'>
>>> type(Test.a)
<type 'instancemethod'>
>>> type(Test.b)
<type 'instancemethod'>
```

As you can see, Python doesn't consider `b` any different than `a`. In Python, all methods are just variables that happen to be functions.

Module **new** is deprecated since Python 2.6 and removed in 3.0; use **types** instead. See [docs.python.org/library/new.html](http://docs.python.org/library/new.html). In the example below, I've deliberately removed the return value from the `patch_me` function.
I think that giving a return value may make one believe that patch returns a new object, which is not true – it modifies the incoming one. Probably this can facilitate a more disciplined use of monkey patching.

```python
import types

class A(object):  # but seems to work for old-style objects too
    pass

def patch_me(target):
    def method(target, x):
        print "x=%s" % x
        print "called from %s" % target
    target.method = types.MethodType(method, target)
    # add more if needed

a = A()
print a
# out: <__main__.A object at 0x...>
patch_me(a)     # patch instance
a.method(5)
# out: x=5
# out: called from <__main__.A object at 0x...>
patch_me(A)     # can patch class too
A.method(6)
# out: x=6
# out: called from <class '__main__.A'>
```

I think that the above answers missed the key point. Let's have a class with a method:

```python
class A(object):
    def m(self):
        pass
```

Now, let's play with it in ipython:

```python
In [2]: A.m
Out[2]: <unbound method A.m>
```

Ok, so *m* somehow becomes an unbound method of *A*. But is it really like that?

```python
In [5]: A.__dict__['m']
Out[5]: <function m at 0x...>
```

It turns out that *m* is just a function, a reference to which is added to the *A* class dictionary – there's no magic. Then why does `A.m` give us an unbound method? It's because the dot is not translated to a simple dictionary lookup. It's de facto a call of `A.__class__.__getattribute__(A, 'm')`:

```python
In [11]: class MetaA(type):
   ....:     def __getattribute__(self, attr_name):
   ....:         print str(self), '-', attr_name

In [12]: class A(object):
   ....:     __metaclass__ = MetaA

In [23]: A.m
<class '__main__.A'> - m
<class '__main__.A'> - m
```

Now, I'm not sure off the top of my head why the last line is printed twice, but still it's clear what's going on there. Now, what the default `__getattribute__` does is that it checks whether the attribute is a so-called [descriptor](http://docs.python.org/reference/datamodel.html#implementing-descriptors) or not, i.e. whether it implements a special `__get__` method. If it implements that method, then what is returned is the result of calling that `__get__` method.
Going back to the first version of our *A* class, this is what we have:

```python
In [28]: A.__dict__['m'].__get__(None, A)
Out[28]: <unbound method A.m>
```

And because Python functions implement the descriptor protocol, if they are called on behalf of an object, they bind themselves to that object in their `__get__` method.

Ok, so how do we add a method to an existing object? Assuming you don't mind patching the class, it's as simple as:

```python
B.m = m
```

Then `B.m` "becomes" an unbound method, thanks to the descriptor magic. And if you want to add a method just to a single object, then you have to emulate the machinery yourself, by using `types.MethodType`:

```python
b.m = types.MethodType(m, b)
```

By the way:

```python
In [2]: A.m
Out[2]: <unbound method A.m>

In [59]: type(A.m)
Out[59]: <type 'instancemethod'>

In [60]: type(b.m)
Out[60]: <type 'instancemethod'>

In [61]: types.MethodType
Out[61]: <type 'instancemethod'>
```

Consolidating Jason Pratt's and the community wiki answers, with a look at the results of different methods of binding: especially note how adding the binding function as a class method *works*, but the referencing scope is incorrect.
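Putting the two patching routes side by side – a minimal Python 3 sketch (the `Widget`/`describe`/`shout` names are invented for illustration) of patching the class versus binding to a single instance with `types.MethodType`:

```python
import types

class Widget:
    pass

def describe(self):
    return "a " + type(self).__name__

w1 = Widget()
w2 = Widget()

# Patch the class: the descriptor protocol binds on attribute access,
# so every instance (existing and future) sees a bound method.
Widget.describe = describe
print(w1.describe())  # a Widget

# Patch a single instance: bind explicitly with types.MethodType.
def shout(self):
    return type(self).__name__.upper()

w1.shout = types.MethodType(shout, w1)
print(w1.shout())            # WIDGET
print(hasattr(w2, "shout"))  # False - w2 is untouched
```

Note that in Python 3 `types.MethodType` takes only `(function, instance)`; the three-argument Python 2 form shown elsewhere in this thread no longer applies.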
```python
#!/usr/bin/python -u
import types
import inspect

## dynamically adding methods to a unique instance of a class

# get a list of a class's method-type attributes
def listattr(c):
    for m in [(n, v) for n, v in inspect.getmembers(c, inspect.ismethod)
              if isinstance(v, types.MethodType)]:
        print m[0], m[1]

# externally bind a function as a method of an instance of a class
def ADDMETHOD(c, method, name):
    c.__dict__[name] = types.MethodType(method, c)

class C():
    r = 10  # class attribute variable to test bound scope

    def __init__(self):
        pass

    # internally bind a function as a method of self's class
    # -- note that this one has issues
    def addmethod(self, method, name):
        self.__dict__[name] = types.MethodType(method, self.__class__)

    # predefined function to compare with
    def f0(self, x):
        print 'f0\tx = %d\tr = %d' % (x, self.r)

a = C()  # created before modified instance
b = C()  # modified instance

def f1(self, x):  # bind internally
    print 'f1\tx = %d\tr = %d' % (x, self.r)
def f2(self, x):  # add to class instance's .__dict__ as method type
    print 'f2\tx = %d\tr = %d' % (x, self.r)
def f3(self, x):  # assign to class as method type
    print 'f3\tx = %d\tr = %d' % (x, self.r)
def f4(self, x):  # add to class instance's .__dict__ using a general function
    print 'f4\tx = %d\tr = %d' % (x, self.r)

b.addmethod(f1, 'f1')
b.__dict__['f2'] = types.MethodType(f2, b)
b.f3 = types.MethodType(f3, b)
ADDMETHOD(b, f4, 'f4')

b.f0(0)  # -> f0    x = 0   r = 10
b.f1(1)  # -> f1    x = 1   r = 10
b.f2(2)  # -> f2    x = 2   r = 10
b.f3(3)  # -> f3    x = 3   r = 10
b.f4(4)  # -> f4    x = 4   r = 10

k = 2
print 'changing b.r from {0} to {1}'.format(b.r, k)
b.r = k
print 'new b.r = {0}'.format(b.r)

b.f0(0)  # -> f0    x = 0   r = 2
b.f1(1)  # -> f1    x = 1   r = 10  (!) wrong scope: f1 was bound to the class
b.f2(2)  # -> f2    x = 2   r = 2
b.f3(3)  # -> f3    x = 3   r = 2
b.f4(4)  # -> f4    x = 4   r = 2

c = C()  # created after modifying instance
```

Let's have a look at each instance's method-type attributes:

```python
print '\nattributes of a:'
listattr(a)
# __init__, addmethod and f0 appear as bound methods of the C instance

print '\nattributes of b:'
listattr(b)
# __init__, addmethod and f0, plus f1 (bound to the *class*, not the
# instance)
```
```python
# and f2, f3, f4 (each a bound method of the b instance)

print '\nattributes of c:'
listattr(c)
# __init__, addmethod and f0 only -- c was not modified
```

Personally, I prefer the external `ADDMETHOD` function route, as it allows me to dynamically assign new method names within an iterator as well:

```python
def y(self, x):
    pass

d = C()
for i in range(1, 5):
    ADDMETHOD(d, y, 'f%d' % i)

print '\nattributes of d:'
listattr(d)
# __init__, addmethod and f0, plus f1..f4, all bound methods of the d instance
```

Since this question asked for non-Python versions, here's JavaScript:

```javascript
a.methodname = function () { console.log("Yay, a new method!") }
```

There are at least two ways to attach a method to an instance without `types.MethodType`:

```python
>>> class A:
...     def m(self):
...         print 'im m, invoked with:', self

>>> a = A()
>>> a.m()
im m, invoked with: <__main__.A instance at 0x...>
>>> a.m
<bound method A.m of <__main__.A instance at 0x...>>
>>>
>>> def foo(firstargument):
...     print 'im foo, invoked with:', firstargument
```
```python
>>> foo
<function foo at 0x...>
```

1: with `foo.__get__`:

```python
>>> a.foo = foo.__get__(a, A)  # or foo.__get__(a, type(a))
>>> a.foo()
im foo, invoked with: <__main__.A instance at 0x...>
>>> a.foo
<bound method A.foo of <__main__.A instance at 0x...>>
```

2: with `instancemethod` (i.e. `type(a.m)`):

```python
>>> instancemethod = type(a.m)
>>> instancemethod
<type 'instancemethod'>
>>> a.foo = instancemethod(foo, a, type(a))
>>> a.foo()
im foo, invoked with: <__main__.A instance at 0x...>
>>> a.foo
<bound method instance.foo of <__main__.A instance at 0x...>>
```

Useful links:
[Data model – invoking descriptors](http://docs.python.org/reference/datamodel.html#invoking-descriptors),
[Descriptor HowTo Guide – invoking descriptors](http://docs.python.org/2/howto/descriptor.html#invoking-descriptors)

You guys should really look at [forbidden fruit](http://github.com/clarete/forbiddenfruit) – it is a Python library that provides support for monkey patching any Python class, even strings.

If it can be of any help, I recently released a Python library named Gorilla to make the process of monkey patching more convenient. Using a function `needle()` to patch a module named `guineapig` goes as follows:

```python
import gorilla
import guineapig

@gorilla.patch(guineapig)
def needle():
    print("awesome")
```

But it also takes care of more interesting use cases, as shown in the [FAQ](http://gorilla.readthedocs.org/en/latest/faq.html) from the [documentation](http://gorilla.readthedocs.org). The code is available on [GitHub](https://github.com/christophercrouzet/gorilla).

You can use lambda to bind a method to an instance:

```python
def run(self):
    print self._instanceString

class A(object):
    def __init__(self):
        self._instanceString = "This is instance string"

a = A()
a.run = lambda: run(a)
a.run()
# This is instance string
# Process finished with exit code 0
```

> **Adding a method to an existing object instance**
>
> I've read that it is possible to add a method to an existing object (e.g. not in the class definition) in Python. I think this is called monkey patching (or in some cases, duck punching).
> I understand that it's not always a good decision to do so. **But, how might one do this?**

## Yes, it is possible. But not recommended.

Since it's instructive, however, I'm going to show you three ways of doing this. Here's some setup code. We need a class definition. It could be imported, but it really doesn't matter.

```python
class Foo(object):
    '''An empty class to demonstrate adding a method to an instance'''

foo = Foo()  # create an instance

def sample_method(self, bar, baz):  # create a method to add to it
    print(bar + baz)
```

## Method one: types.MethodType

First, import `types`, from which we'll get the method constructor:

```python
import types
```

Now we add the method to the instance. To do this, we require the `MethodType` constructor from the `types` module (which we imported above). The argument signature for `types.MethodType` is `(function, instance, class)`:

```python
foo.sample_method = types.MethodType(sample_method, foo, Foo)
```

And usage:

```python
>>> foo.sample_method(1, 2)
3
```

## Method two: lexical binding

First, we create a wrapper function that binds the method to the instance:

```python
def bind(instance, method):
    def binding_scope_fn(*args, **kwargs):
        return method(instance, *args, **kwargs)
    return binding_scope_fn
```

Usage:

```python
>>> foo.sample_method = bind(foo, sample_method)
>>> foo.sample_method(1, 2)
3
```

## Method three: functools.partial

```python
>>> from functools import partial
>>> foo.sample_method = partial(sample_method, foo)
>>> foo.sample_method(1, 2)
3
```

This makes sense when you consider that bound methods are partial functions of the instance.

## Unbound function as an object attribute – why this doesn't work

If we try to add `sample_method` in the same way as we might add it to the class, it is unbound from the instance and doesn't take the implicit `self` as the first argument.
```python
>>> foo.sample_method = sample_method
>>> foo.sample_method(1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: sample_method() takes exactly 3 arguments (2 given)
```

We can make the unbound function work by explicitly passing the instance (or anything, since this method doesn't actually use the `self` argument variable), but it would not be consistent with the expected signature of other instances (if we're monkey-patching this instance):

```python
>>> foo.sample_method(foo, 1, 2)
3
```

## Disclaimer

Note, just because this is possible doesn't make it recommended. In fact, I suggest that you not do this unless you have a really good reason. It is far better to define the correct method in the class definition, or, less preferably, to monkey-patch the class directly, like this:

```python
Foo.sample_method = sample_method
```

## This is actually an addon to the answer of Jason Pratt

Although Jason's answer works, it only works if one wants to add a function to a class. It did not work for me when I tried to reload an already existing method from the .py source code file. It took me ages to find a workaround, but the trick seems simple:

1. import the code from the source code file
2. force a reload
3. use `types.FunctionType(...)` to convert the imported and bound method to a function; you can also pass on the current global variables, as the reloaded method would be in a different namespace
4. now you can continue as suggested by Jason Pratt, using `types.MethodType(...)`
Example:

```python
# this class resides inside ReloadCodeDemo.py
class A:
    def bar(self):
        print "bar1"

    def reloadCode(self, methodName):
        '''use this function to reload any function of class A'''
        import types
        import ReloadCodeDemo as ReloadMod   # import the code as module
        reload(ReloadMod)                    # force a reload of the module
        myM = getattr(ReloadMod.A, methodName)  # get reloaded method
        myTempFunc = types.FunctionType(     # convert the method to a simple function
            myM.im_func.func_code,           # the method's code
            globals(),                       # globals to use
            argdefs=myM.im_func.func_defaults)  # default values for variables, if any
        myNewM = types.MethodType(myTempFunc, self, self.__class__)  # convert the function to a method
        setattr(self, methodName, myNewM)    # add the method to the instance

if __name__ == '__main__':
    a = A()
    a.bar()
    # now change bar() in the source code and save the file
    a.reloadCode('bar')  # reloads the file
    a.bar()              # now executes the reloaded code
```

This question was opened years ago, but hey, there's an easy way to simulate the binding of a function to a class instance using decorators:

```python
def binder(function, instance):
    copy_of_function = type(function)(function.func_code, {})
    copy_of_function.__bind_to__ = instance
    def bound_function(*args, **kwargs):
        return copy_of_function(copy_of_function.__bind_to__, *args, **kwargs)
    return bound_function

class SupaClass(object):
    def __init__(self):
        self.supaAttribute = 42

def new_method(self):
    print self.supaAttribute

supaInstance = SupaClass()
supaInstance.supMethod = binder(new_method, supaInstance)

otherInstance = SupaClass()
otherInstance.supaAttribute = 72
otherInstance.supMethod = binder(new_method, otherInstance)

otherInstance.supMethod()
supaInstance.supMethod()
```

There, when you pass the function and the instance to the binder decorator, it will create a new function with the same code object as the first one. Then the given instance of the class is stored in an attribute of the newly created function. The decorator returns a (third) function calling the copied function automatically, giving the instance as the first parameter.
In conclusion, you get a function simulating binding to the class instance, leaving the original function unchanged.

As far as I can tell, Python up through 2.5 only supports hexadecimal & octal literals. I did find some discussions about adding binary to future versions, but nothing definite. I am pretty sure this is one of the things due to change in Python 3.0, with perhaps `bin()` to go with `hex()` and `oct()`. EDIT: lbrandy's answer is correct in all cases.

```python
>>> print int('01010101111', 2)
687
>>> print int('11111111', 2)
255
```

Another way. For reference – *future* Python possibilities: starting with Python 2.6, you can express binary literals using the prefix **0b** or **0B**:

```python
>>> 0b101111
47
```

You can also use the new **bin** function to get the binary representation of a number:

```python
>>> bin(173)
'0b10101101'
```

Development version of the documentation: [What's New in Python 2.6 – PEP 3127: Integer Literal Support and Syntax](http://docs.python.org/dev/whatsnew/2.6.html)

> **How do you express binary literals in Python?**

They're not "binary" literals, but rather "integer literals". You can express integer literals with a binary format with a `0` followed by a `b` or `B` followed by a series of zeros and ones, for example:

```python
>>> 0b0010101010
170
>>> 0B010101
21
```

From the Python [docs](https://docs.python.org/reference/lexical_analysis.html#integer-literals), these are the ways of providing integer literals in Python:

> Integer literals are described by the following lexical definitions:
>
>     integer        ::=  decimalinteger | octinteger | hexinteger | bininteger
>     decimalinteger ::=  nonzerodigit digit* | "0"+
>     nonzerodigit   ::=  "1"..."9"
>     digit          ::=  "0"..."9"
>     octinteger     ::=  "0" ("o" | "O") octdigit+
>     hexinteger     ::=  "0" ("x" | "X") hexdigit+
>     bininteger     ::=  "0" ("b" | "B") bindigit+
>     octdigit       ::=  "0"..."7"
>     hexdigit       ::=  digit | "a"..."f" | "A"..."F"
>     bindigit       ::=  "0" | "1"
>
> There is no limit for the length of integer literals apart from what can be stored in available memory.
>
> Note that leading zeros in a non-zero decimal number are not allowed. This is for disambiguation with C-style octal literals, which Python used before version 3.0.
Some examples of integer literals:

```python
>>> 7
7
>>> 0o10
8
>>> 0x10
16
>>> 0b10
2
```

## Other ways of expressing binary

You can have the zeros and ones in a string object, which can be manipulated (although you should probably just do bitwise operations on the integer in most cases) – just pass `int` the string of zeros and ones and the base you are converting from (2):

```python
>>> int('010101', 2)
21
```

You can optionally have the `0b` or `0B` prefix:

```python
>>> int('0b0010101010', 2)
170
```

If you pass it `0` as the base, it will assume base 10 if the string doesn't specify with a prefix:

```python
>>> int('10101', 0)
10101
>>> int('0b10101', 0)
21
```

## Converting from int back to human-readable binary

You can pass an integer to `bin` to see the string representation of a binary literal:

```python
>>> bin(21)
'0b10101'
```

And you can combine `bin` and `int` to go back and forth:

```python
>>> bin(int('010101', 2))
'0b10101'
```

You can use a format specification as well, if you want to have a minimum width with preceding zeros:

```python
>>> format(int('010101', 2), '{fill}{width}b'.format(width=10, fill=0))
'0000010101'
>>> format(int('010101', 2), '010b')
'0000010101'
```

If you are looking for user-facing interaction, stick with XML. It has more support, understanding, and general acceptance currently. If it's internal, I would say that protocol buffers are a great idea. Maybe in a few years, as more tools come out to support protocol buffers, then start looking towards that for a public-facing API. Until then... [JSON](http://en.wikipedia.org/wiki/JSON)?

Protocol buffers are intended to optimize communications between machines. They are really not intended for human interaction. Also, the format is binary, so it could not replace XML in that use case. I would also recommend [JSON](http://en.wikipedia.org/wiki/JSON) as being the most compact text-based format.

From your brief description, it sounds like protocol buffers is not the right fit. The phrase "structured content created by hand in a text editor" pretty much screams for XML.
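The binary conversions above can be collected into one short round-trip sketch (Python 2.6+ / 3; the value 45 is arbitrary):

```python
n = 0b101101                  # binary integer literal
assert n == 45

s = bin(n)                    # int -> '0b...' string
assert s == '0b101101'

assert int(s, 2) == 45        # parse back with explicit base 2
assert int('101101', 2) == 45 # the 0b prefix is optional when base is given
assert int(s, 0) == 45        # base 0: infer the base from the prefix

assert format(n, '08b') == '00101101'  # fixed width, leading zeros
```

The `format(n, '08b')` form is usually more readable than building the `'{fill}{width}b'` spec string dynamically.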
But if you want efficient, low-latency communications with data structures that are not shared outside your organization, binary serialization such as protocol buffers can offer a huge win.

Another drawback of a binary format like PB is that if there is a single bit of error, the entire data file is not parsable; with JSON or XML, as a last resort you can still manually fix the error, because it is human readable and has redundancy built in.

**On Linux:**

- set raw mode
- select and read the keystroke
- restore normal settings

```python
import sys
import select
import termios
import tty

def getkey():
    old_settings = termios.tcgetattr(sys.stdin)
    tty.setraw(sys.stdin.fileno())
    select.select([sys.stdin], [], [], 0)
    answer = sys.stdin.read(1)
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
    return answer

print """Menu
1) Say Foo
2) Say Bar"""

answer = getkey()

if "1" in answer:
    print "foo"
elif "2" in answer:
    print "bar"
```

**On Windows:**

```python
import msvcrt
answer = msvcrt.getch()
```

Wow, that took forever. Ok, here's what I've ended up with:

```python
#!C:\python25\python.exe
import msvcrt

print """Menu
1) Say Foo
2) Say Bar"""

while True:
    char = msvcrt.getch()
    if char == chr(27):  # escape
        break
    if char == "1":
        print "foo"
        break
    if char == "2":
        print "bar"
        break
```

It fails hard using IDLE (the Python... thing... that comes with Python), but once I tried it in DOS (er, cmd.exe) as a real program, it ran fine. No one try it in IDLE, unless you have Task Manager handy. I've already forgotten how I lived with menus that aren't super-instant-responsive.

The reason msvcrt fails in IDLE is that IDLE is not accessing the library that runs msvcrt, whereas when you run the program natively in cmd.exe, it works nicely. For the same reason, your program blows up on Mac and Linux terminals. But I guess if you're going to be using this specifically for Windows, more power to ya.

```python
>>> import os
>>> print os.name
posix
>>> import platform
>>> platform.system()
'Linux'
>>> platform.release()
'2.6.22-15-generic'
```
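The two platform-specific keypress snippets above can be folded into one helper; a hedged sketch (function name is my own) that picks the backend by trying the Windows-only `msvcrt` import first:

```python
import sys

def getch():
    """Read one keypress without waiting for Enter (POSIX/Windows sketch)."""
    try:
        import msvcrt                    # present only on Windows
        return msvcrt.getch().decode()
    except ImportError:
        import termios
        import tty                       # POSIX terminal control
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)               # raw mode: no line buffering, no echo
            return sys.stdin.read(1)
        finally:
            # always restore cooked mode, even if the read fails
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
```

Note this still needs a real terminal on POSIX – calling it with stdin redirected (or inside IDLE, as discussed above) will raise `termios.error`.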
generic' see https docs.python.org library platform.html platform access to underlying platform s identifying data dang lbrandy beat me to the punch but that doesn't mean i can't provide you with the system results for vista import os os.name 'nt' import platform platform.system 'windows' platform.release 'vista' for the record here's the results on mac import os os.name 'posix' import platform platform.system 'darwin' platform.release ' . . ' you can also use sys.platform if you already have imported sys and you don't want to import another module import sys sys.platform 'linux ' i do this import sys print sys.platform docs here http docs.python.org library sys.html sys.platform sys.platform . everything you need is probably in the sys module. i am using the wlst tool that comes with weblogic and it doesn't implement the platform package. wls offline import os wls offline print os.name java wls offline import sys wls offline print sys.platform 'java . . _ ' apart from patching the system em javaos.py em http osdir.com ml lang.jython.devel msg .html rel nofollow issue with os.system on windows with jdk . which i can't do i have to use weblogic out of the box this is what i use def iswindows os java.lang.system.getproperty os.name return win in os.lower import platform platform.system in the same vein.... import platform is_windows platform.system .lower .find win if is_windows lv_dll lv_dll my_so_dll.dll else lv_dll lv_dll . my_so_dll.so usr bin python . def cls from subprocess import call from platform import system os system if os 'linux' call 'clear' shell true elif os 'windows' call 'cls' shell true for jython the only way to get os name i found is to check os.name java property tried with sys os and platform modules for jython . . 
```python
def get_os_platform():
    """return platform name, but for Jython it uses the os.name Java property"""
    ver = sys.platform.lower()
    if ver.startswith('java'):
        import java.lang
        ver = java.lang.System.getProperty("os.name").lower()
    print 'platform: %s' % ver
    return ver
```

A comparison between the different methods, and what they return on different operating systems, can be found here: [OS_flavor_name_version](https://github.com/hpcugent/easybuild/wiki/OS_flavor_name_version)

Methods that are compared:

```python
import platform
import sys

def linux_distribution():
    try:
        return platform.linux_distribution()
    except:
        return "N/A"

print("""Python version: %s
dist: %s
linux_distribution: %s
system: %s
machine: %s
platform: %s
uname: %s
version: %s
mac_ver: %s
""" % (
    sys.version.split('\n'),
    str(platform.dist()),
    linux_distribution(),
    platform.system(),
    platform.machine(),
    platform.platform(),
    platform.uname(),
    platform.version(),
    platform.mac_ver(),
))
```

Interesting results on Windows 8:

```python
>>> import os
>>> os.name
'nt'
>>> import platform
>>> platform.system()
'Windows'
>>> platform.release()
'post2008Server'
```

**Edit:** that's a reported bug in `platform.release()` (see bugs.python.org).

If you are not looking for the kernel version etc., but looking for the Linux distribution, you may want to use the following. In Python 2.6+:

```python
>>> import platform
>>> print platform.linux_distribution()
('CentOS Linux', '6.0', 'Final')
>>> print platform.linux_distribution()[0]
CentOS Linux
>>> print platform.linux_distribution()[1]
6.0
```

In Python 2.4:

```python
>>> import platform
>>> print platform.dist()
('centos', '6.0', 'Final')
>>> print platform.dist()[0]
centos
>>> print platform.dist()[1]
6.0
```

Obviously, this will work only if you are running it on Linux. If you want a more generic script across platforms, you can mix this with the code samples given in other answers.
Sample code to differentiate OSes using Python:

```python
from sys import platform as _platform

if _platform == "linux" or _platform == "linux2":
    pass  # linux
elif _platform == "darwin":
    pass  # MAC OS X
elif _platform == "win32":
    pass  # Windows
```

Check the available tests with the platform module and print the answers out for your system:

```python
import platform

print dir(platform)

for x in dir(platform):
    if x[0].isalnum():
        try:
            result = getattr(platform, x)()
            print 'platform.%s: %s' % (x, result)
        except TypeError:
            continue
```

Try this:

```python
import os
os.uname()
```

and you can make it:

```python
info = os.uname()
info[0]
info[1]
```

Watch out if you're on Windows with Cygwin, where `os.name` is `posix`:

```python
>>> import os, platform
>>> print os.name
posix
>>> print platform.system()
CYGWIN_NT-...
```

Wow – just for completeness: the `OS` environment variable seems to be defined everywhere. On Windows XP it is set to `Windows_NT`; on Linux (SuSE SLES) it is set to a string containing `linux`. I don't have access to OS X or BSD machines; it would be interesting to check there as well.

```python
import os
os_name = os.getenv("OS")
if os_name == "Windows_NT":
    pass  # Windows
elif "linux" in os_name:
    pass  # Linux
```

You can also use only the platform module, without importing the os module, to get all the information:

```python
>>> import platform
>>> platform.os.name
'posix'
>>> platform.uname()
('Darwin', 'mainframe.local', '...', 'Darwin Kernel Version ...: root:xnu-.../RELEASE_X86_64', 'x86_64', 'i386')
```

A nice and tidy layout for reporting purposes can be achieved using this line:

```python
for i in zip(['system', 'node', 'release', 'version', 'machine', 'processor'], platform.uname()):
    print i[0], ':', i[1]
```

That gives this output:

```
system : Darwin
node : mainframe.local
release : ...
version : Darwin Kernel Version ...
machine : x86_64
processor : i386
```

What is usually missing is the operating system version, but you should know whether you are running Windows, Linux or Mac; a platform-independent way to test for it is:

```python
In []: for i in [platform.linux_distribution(), platform.mac_ver(), platform.win32_ver()]:
   ...:     if i[0]:
   ...:         print 'Version: %s' % i[0]
```

The list [1, 2, 3] is dynamic and flexible, but that flexibility comes at a speed cost.
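Summarizing the OS-detection answers above, a small helper (the `os_family` name is my own) that normalizes `sys.platform` into a coarse label:

```python
import sys
import platform

def os_family():
    """Return 'linux', 'macos', 'windows', or the raw sys.platform value."""
    p = sys.platform
    if p.startswith('linux'):      # 'linux2' on old Pythons, 'linux' on 3.3+
        return 'linux'
    if p == 'darwin':
        return 'macos'
    if p.startswith('win') or p == 'cygwin':
        return 'windows'
    return p                       # e.g. 'aix', 'freebsd...', 'java' (Jython)

print(os_family(), '/', platform.system())
```

Using `startswith('linux')` rather than an equality test covers both the old `'linux2'` value and the modern `'linux'` in one check.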
The tuple (1, 2, 3) is fixed (immutable), and therefore faster.

From the [Python FAQ](http://www.python.org/doc/faq/general/#why-are-there-separate-tuple-and-list-data-types):

> Lists and tuples, while similar in many respects, are generally used in fundamentally different ways. Tuples can be thought of as being similar to Pascal records or C structs; they're small collections of related data which may be of different types which are operated on as a group. For example, a Cartesian coordinate is appropriately represented as a tuple of two or three numbers.
>
> Lists, on the other hand, are more like arrays in other languages. They tend to hold a varying number of objects, all of which have the same type, and which are operated on one-by-one.

Generally, by convention, you wouldn't choose a list or a tuple just based on its (im)mutability. You would choose a tuple for small collections of completely different pieces of data in which a full-blown class would be too heavyweight, and a list for collections of any reasonable size where you have a homogeneous set of data.

Tuples are a quick/flexible way to create *composite* data types. Lists are containers for, well, lists of objects. For example, you would use a list to store a list of student details in a class. Each student detail in that list may be a tuple containing their roll number, name and test score:

```python
[(1, 'Mark', 86), (2, 'John', 92), ...]
```

Also, because tuples are immutable, they can be used as keys in dictionaries.
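The dictionary-key point deserves a quick demonstration (the grade-book data is invented for illustration):

```python
# A tuple is immutable and hashable, so it can serve as a dict key;
# a list is mutable and therefore unhashable.
grade_book = {}
grade_book[(1, 'Mark')] = 86
grade_book[(2, 'John')] = 92

print(grade_book[(1, 'Mark')])  # 86

try:
    grade_book[[3, 'Ann']] = 75   # lists cannot be dict keys
except TypeError:
    print('lists are unhashable')
```

The same rule governs set membership: tuples can go into a `set`, lists cannot.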
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ langchain==0.0.314
+ chainlit==0.7.2
+ huggingface_hub==0.19.0
+ torch==2.1.0
+ ctransformers
+ sentence-transformers
+ faiss-cpu