pingoured.fr Report


  • Alexa global ranking: #10,765,938

    Server:Apache/2.4.33 (Fedor...
    X-Powered-By:PHP/7.1.18

    Main IP address: 163.172.38.87
    Server location: Southend-on-Sea, United Kingdom
    ISP: HM Customs and Excise HQ Network
    TLD: fr
    Country code: GB

    Description: pingou's weblog, his Fedora news, his RPMs, his tests, his Linux... :-) (original French: "le blog de pingou, ses actualités sur fedora, ses rpms, ses tests, son linux... :-)")

    This report was last updated on 12-Jun-2018

Created date: 2007-08-03
Changed date: 2016-04-02

Technical data for pingoured.fr


GeoIP provides information such as latitude, longitude and ISP (Internet Service Provider). Our GeoIP service located the host pingoured.fr: it is currently hosted in the United Kingdom and its service provider is HM Customs and Excise HQ Network.

Latitude: 51.537818908691
Longitude: 0.71433001756668
Country: United Kingdom (GB)
City: Southend-on-Sea
Region: England
ISP: HM Customs and Excise HQ Network


HTTP Header Analysis


HTTP headers are part of the HTTP exchange between a user's browser and the web server, here Apache/2.4.33 (Fedora) OpenSSL/1.1.0h-fips SVN/1.9.7 mod_wsgi/4.5.15 Python/2.7 Phusion_Passenger/5.0.30. They carry the details of what the browser requests and what it will accept back from the web server.

X-Powered-By:PHP/7.1.18
Transfer-Encoding:chunked
Keep-Alive:timeout=5, max=100
Server:Apache/2.4.33 (Fedora) OpenSSL/1.1.0h-fips SVN/1.9.7 mod_wsgi/4.5.15 Python/2.7 Phusion_Passenger/5.0.30
Last-Modified:Tue, 17 Apr 2018 08:56:29 GMT
Connection:Keep-Alive
ETag:"3aba93562773d40a44eb3045fd4ebd96"
Pragma:
Cache-Control:must-revalidate, max-age=0
Date:Mon, 11 Jun 2018 17:39:30 GMT
Content-Type:text/html; charset=UTF-8
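The raw header lines above all follow the simple "Name:value" shape. As a quick illustration (not part of the original report), a minimal sketch of parsing such lines into a dictionary, using a few of the values listed above:

```python
# Parse raw "Name:value" HTTP header lines into a dict.
# The sample lines are copied from the listing above.
raw_headers = """\
X-Powered-By:PHP/7.1.18
Transfer-Encoding:chunked
Keep-Alive:timeout=5, max=100
Cache-Control:must-revalidate, max-age=0
Content-Type:text/html; charset=UTF-8"""

def parse_headers(raw: str) -> dict:
    headers = {}
    for line in raw.splitlines():
        # Split on the first colon only: header values may contain colons.
        name, _, value = line.partition(":")
        if name:
            headers[name.strip()] = value.strip()
    return headers

headers = parse_headers(raw_headers)
print(headers["Content-Type"])  # text/html; charset=UTF-8
```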

DNS

SOA: dns12.ovh.net. tech.ovh.net. 2017042001 86400 3600 3600000 86400
TXT: "v=spf1 include:mx.ovh.com ~all"
NS: dns12.ovh.net.
    ns12.ovh.net.
IPv4: 163.172.38.87
ASN: 12876 (AS12876, FR)
Country: GB
MX: preference = 1, mail exchanger = mail.pingoured.fr.
    preference = 100, mail exchanger = mail2.pingoured.fr.
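The SOA record above packs seven fields into one line. A small sketch (pure string handling, using the record shown above) that gives each field its standard name:

```python
# Split the SOA record shown above into its named fields (RFC 1035 names).
soa = "dns12.ovh.net. tech.ovh.net. 2017042001 86400 3600 3600000 86400"
fields = soa.split()
record = {
    "mname": fields[0],        # primary nameserver for the zone
    "rname": fields[1],        # responsible mailbox; first label is the local part
    "serial": int(fields[2]),  # zone serial, here in YYYYMMDDnn style
    "refresh": int(fields[3]), # seconds between secondary refresh checks
    "retry": int(fields[4]),   # seconds before retrying a failed refresh
    "expire": int(fields[5]),  # seconds before secondaries drop the zone
    "minimum": int(fields[6]), # negative-caching TTL
}
print(record["serial"])  # 2017042001
```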

HtmlToText

No more comments
By Pierre-Yves on Tuesday, April 17 2018, 09:56 - Général

Just a small heads-up that I am hereby closing comments on this blog. The ratio of spam to legitimate comments is too high. Thanks to everyone who ever commented on this blog!

Fedora Infrastructure Hackathon 2018
By Pierre-Yves on Friday, April 13 2018, 04:20 - Général, events, fedora, fedora-infra, fedora-planet

This week, a good part of the Fedora infrastructure team, as well as some members of the CentOS infrastructure team, met up in Fredericksburg (Virginia, USA) for a few days of hacking together. Continue reading...

Spec change statistics
By Pierre-Yves on Thursday, February 1 2018, 08:55 - Général, fedora, fedora-planet, stats

Over the last couple of days I took a look at all the spec files in Fedora. I wanted to find out how many packages have not been updated by anyone other than release engineering doing mass-rebuilds. Here is a graphical representation of the data, and some numbers:

- 20994 spec files considered
- 11 have an unknown last-changed date (could be that only Dennis changed those, or something was wrong with their changelog)
- 13926 have been updated since January 1st 2017 (~66%)
- 17061 have been updated since January 1st 2016 (~81%)
- 18843 have been updated since January 1st 2015 (~90%)

In other words, about 20% of our packages have not been updated by a human for 2 years, and 10% for 3 years! Here are the details used for these stats: the script used to generate them, and the CSV file used to generate the graph above (one of the two CSV files produced by the script).

Introducing simple-koji-ci
By Pierre-Yves on Friday, December 8 2017, 15:39 - Général, fedora, fedora-infra, fedora-planet, pagure, test

simple-koji-ci is a small fedmsg-based service that just got deployed in the Fedora infrastructure.
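The percentages in the spec-file post above follow directly from the counts it gives. A quick check, with the numbers copied from the post:

```python
# Re-derive the percentages quoted in the spec-file statistics post.
total = 20994  # spec files considered
updated_since = {2017: 13926, 2016: 17061, 2015: 18843}

for year in sorted(updated_since):
    share = updated_since[year] / total
    print(f"updated since Jan 1 {year}: {share:.0%}")
# prints 90% for 2015, 81% for 2016, 66% for 2017, matching the post
```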
It aims to do something really simple: for each pull request opened in Pagure on dist-git, kick off a scratch build in Koji and report the outcome of that build to the pull request. This way, when someone opens a pull request against a package you maintain, you can quickly see whether the change would build (at least at the time the pull request was opened). The service is currently really simple and straightforward, dumb in many ways, and still missing some desired features, such as:

- kicking off a new scratch build when the PR is rebased or updated
- allowing the package maintainer to re-trigger the build manually

But it is a start and we will work on improving it :) Happy packaging!

PS: live example

Flock 2017 - day 1
By Pierre-Yves on Wednesday, August 30 2017, 03:42 - Général, events, fedora, fedora-planet, flock

Today was the first day of the Flock 2017 conference in Cape Cod. I arrived on Sunday, giving me a little time to adjust to the jet lag, so I was somewhat ready for this first day.

It started with the traditional talk from the Fedora Project Leader about the state of Fedora, updating us on statistics and explaining some of the challenges we as a community are facing and working on. Having followed or been involved in some of these changes, it was nice to see them brought forward as important objectives for us to work on. After that we got a pitch from all the speakers at the conference about what they are going to present or work on. There is quite a large diversity of topics as usual, which leads to the traditional struggle of "what do I attend?" :-)

This afternoon we had our Pagure hackfest, which was quite productive considering how many people were present (there were some quite interesting talks at the same time, cf. the question above). We fixed the milter integration, which allows commenting on an issue by simply replying to the email notification. It turned out to be a simple configuration change, but in the long run I do not know whether we shouldn't adjust the code a little more, so I may open a pull request to change the behavior there a bit. We also had a new contributor set up his environment and work on an easyfix (PR incoming soon), and together with Matt Prahl we worked on a couple of pull requests to get the tests running and behaving as expected.

After that I attended the presentation about fedora-hubs. Having been involved in the early stages of the project, it was nice to see where it is now and that it is in good hands! I then attended the presentation about the Fedora Magazine, which was quite interesting and explained how the editorial board works, plans the articles, and works with the authors writing them.

The last presentation I attended was from Will Woods and was really interesting. I will likely butcher the ideas he presented, but it was about an R&D project he has been working on, trying to improve the situation around composing artifacts from RPMs. His finding was that RPM scriptlets are most often the limiting factor, and that with some more structure we could improve the situation quite a bit. He showed us the compose of a qcow image being done in less than 30 seconds and, I quote, "before optimization". This sounds really interesting for the CI work currently being done, though integrating both projects is likely a long-term idea.

The day ended with a game night with pizzas and drinks, allowing us to spend time chatting about all sorts of things, work-related and not.

Some stats about our dist-git and updates
By Pierre-Yves on Tuesday, February 28 2017, 14:33 - Général, datagrepper, fedora, fedora-planet

I recently started looking at our dist-git usage, but my data was a little limited.
Instead of querying datagrepper, I managed to access the data directly in the database to get some stats.

Dist-git commits

Here is the output:

- over 1582 days (from 2012-10-08 to 2017-02-28)
- there was an average of 376.30 commits per day
- the median is 327.0 commits per day
- the minimum number of commits was 1
- the maximum number of commits was 34716

For the average and median we removed all the days with more than 3,000 builds, since they mostly concern mass-rebuilds (18 days were above 3000 and thus removed). This is how it looks in a graph: commits in dist-git per day.

Bodhi updates

Using the same data source, I went on to look at the number of Bodhi updates flagged to go to testing and the number flagged to go to stable, per day. Here is the output:

- over 1541 days (from 2012-10-08 to 2017-02-28)
- there was an average of 76.90 requests to testing per day
- the median is 75 requests to testing per day
- the minimum number of requests to testing was 4
- the maximum number of requests to testing was 291

- over 1561 days (from 2012-10-08 to 2017-02-28)
- there was an average of 57.45 requests to stable per day
- the median is 54 requests to stable per day
- the minimum number of requests to stable was 1
- the maximum number of requests to stable was 217

(No data were removed here, since there is no equivalent of mass-rebuilds for these.) Graphically: updates requests for testing, and updates requests for stable.

Some stats about our dist-git usage
By Pierre-Yves on Thursday, February 23 2017, 18:48 - Général, datagrepper, fedora, fedora-planet

You may have heard that there are some thoughts going on around integrating continuous integration into our packaging work in Fedora. With in mind the question of how many resources we would need to offer CI in Fedora, I tried to gather some stats about our dist-git usage.
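The dist-git averages in the post above are computed after dropping mass-rebuild days. A sketch of that filtering step, with hypothetical per-day counts (the real data came from the database behind datagrepper):

```python
from statistics import mean, median

# Hypothetical per-day commit counts; days above the mass-rebuild
# threshold (3000, as in the post) are dropped before averaging,
# since mass-rebuilds would dominate the statistics.
commits_per_day = [310, 402, 287, 34716, 512, 298, 10029, 344]
MASS_REBUILD_THRESHOLD = 3000

normal_days = [c for c in commits_per_day if c <= MASS_REBUILD_THRESHOLD]
print("days kept:", len(normal_days))      # 6 of 8
print("median commits/day:", median(normal_days))
```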
Querying datagrepper was, as always, the way to go, although the amount of data in datagrepper is such that it is getting hard to query some topics (such as Koji builds) or to go back very far in history. Anyway, I went on and retrieved 87600 messages from datagrepper, covering 158 days. Here is the output:

- over 158 days (from 2016-09-19 to 2017-02-23)
- there was an average of 554.43 commits per day
- the median is 418.0 commits per day
- the minimum number of commits was 51
- the maximum number of commits was 10029

- over 158 days (from 2016-09-19 to 2017-02-23)
- there was an average of 254.15 packages updated per day
- the median is 119.5 packages updated per day
- the minimum number of packages updated was 20
- the maximum number of packages updated was 9612

To be honest, I was expecting a little more. I'll try re-generating this data, maybe in another way, to see if that changes something, but it gives us a first clue.

Back from Flock 2016
By Pierre-Yves on Thursday, August 11 2016, 11:18 - Général, events, fedora, fedora-planet, flock2016

Flock is always a peculiar time of the year for me. For one, it is one of the few times I get to meet my colleagues, but more than that, it is also one of the few times I get to spend a few days with fellows from this Fedora community that is so dear to me. I have to say this year was no exception: Flock 2016 was really nice. I can of course only speak for myself, but from what I have seen we got a lot of work done and we are now ready to move forward on quite a few subjects.

One of the most important aspects of Flock is that an important part of the community gathers in one place, but we need to be careful, as the conference only represents about 10% of all Fedora contributors. So it is our duty as attendees to report to the broader community about the subjects that were discussed and the talks we had.
It is of course practically impossible to mention everything here, partly because I took very few notes during the conference, but I would like to point out the topics that seemed most important to me.

Fedora at large and its community

During the opening keynote, mattdm gave an overview of how Fedora is appreciated outside our community. It seems that Fedora 24 has been doing great, as did Fedora 23 before it. The IT world seems to appreciate the Fedora.next program we have started and what it is leading to. Matt also gave a few numbers about our community and contributor base. These were numbers that had already been presented in his talk at DevConf 2016 (a talk I watched on YouTube), so they were not really new to me, but I still like the fact that about 66% of our community is not working for our primary sponsor, Red Hat. This is healthy for our community; this diversity ensures that we are not just an echo chamber and that it is not just us liking what we do.

The Fedora infrastructure

This is a part of the project I am directly involved in, and I think it made some really good progress during these few days. We had a few sessions. It started with a presentation from Kevin and me about the state of the Fedora infrastructure. We went through the changes that happened in the last year and those planned for the coming year, from both an infrastructure and an application point of view. This led to a few questions and discussions, all in a nice atmosphere. One comment I received was that the presentation did not include as many numbers as last year's, making it a little harder to follow for people not accustomed to our work. Something to work on for next year.

We also had a workshop session. Over two hours we went through the changes we want to make to the infrastructure (opening our private cloud to our contributors, starting to investigate where and how to use Docker, reflecting the level of support provided to services by using different domain names, for example), and for each of these we came to an agreement and made a plan on how to move forward. I will not go too much into the details of what we discussed and what agreements we reached in this blog post, as it has already been summarized on the Fedora infrastructure list.

Fedora Docker layered images

This is a project that has been worked on for a few months now by Adam Miller in coordination with the rel-eng team. The idea is to allow Fedora to start distributing more than just RPMs, in this case Docker images. This service is about to land. There are still a few aspects to be worked on, including how to distribute the images to the mirrors and how to ensure users are redirected to a mirror that is up to date. Dennis, Randy and I had a very interesting discussion about the work that remains to be done. It will imply making changes to MirrorManager, likely to all three of its components (mirrormanager, mirrorlist and the backend services), and it might also imply work on the Docker side. Being able to have these discussions while sitting in comfortable armchairs facing each other was really nice. We ended up with a list of applications that need to be adjusted and a good idea of how the different pieces will work together.

Fedora Atomic on a workstation

Patrick gave a very interesting presentation on how he builds and uses Fedora Atomic to run on his laptop. It was most interesting, but it left me with slightly mixed feelings: on one side it looks really promising and exciting to work with; on the other it seems not really user-friendly and a little hard and time-consuming.
I do wonder whether some aspects could not be simplified, for example retrieving the list of RPMs currently installed on my machine to insert into the kickstart file, instead of more or less starting from scratch. Maybe I will try to make a little time available this year to play with this; it is really tempting.

Automation

Ralph led a very nice workshop on automation. The idea was to brainstorm about what we do that could be automated, and what we have all built scripts for and may thus want to generalize. The discussion was lively and quite a few ideas were exchanged. Two of them stuck with me a little more:

- Generate, via a cron job gathering information from pkgdb, Koji, Bodhi and fedocal, a single location with information about releases. What are their Koji tags? What are their Bodhi names? What is their current status (released? beta freeze? beta released? alpha?...)? A few applications rely on this information, and while a good part of it is present in pkgdb, not all of it is, and it does not necessarily make sense to add it there.

- Create some sort of service that triggers builds upon git push, and even the creation of a Bodhi update if we want. There are quite a few use cases people would like to see supported, and some people do not want this at all, so it should be entirely opt-in. The current idea is to ask packagers to place a changelog file in their git repo, next to the spec and sources files, and to put in this file the information needed to create the Bodhi update. If this changelog file is updated in a push, the build is automatically triggered in Koji, and if it finishes successfully the update is created in Bodhi. That means that: without the changelog file, nothing changes from the current situation; if the changelog is not touched, nothing changes from the current situation; and if the changelog file is touched but the build fails in Koji, the only change from the current situation is that the service has saved the user from triggering the build manually.

These were the two ideas that stuck with me the most. More was discussed, and there are more possibilities (like making the service something run locally by the user, as opposed to something run in the infrastructure).

Pagure

Pagure was a really nice surprise to me. Many people talked to me about it, most often in good terms and sometimes with interesting ideas. I am not sure all the ideas provided will be implemented, but there is food for thought and enough to keep me busy a while!

Modularity

I had been following the modularity working group from afar, and I was quite happy to discuss directly with the people working on this about what they are trying to achieve. Ralph and I had a few lengthy discussions about the life-cycle of packages in this new model and, among other things, the impact it would have on tools such as pkgdb. It seems clear to me that while the data model might not change that much, we will need to adjust pkgdb for this new distribution model. All the details are not entirely clear yet; some features will need to be added and the UI will need to be adjusted. Overall it is probably not enough to warrant a rewrite of pkgdb, but still enough that I will need to spend some cycles on it. The work done by the modularity group is quite fascinating. I have been involved in, or a spectator of, some of their discussions; there is really quite a lot of work still to be done, and it sounds really interesting. If you have not had a chance to see what they are doing, I encourage you to check out their wiki page and langdon's presentation at Flock as soon as it is available on YouTube.

Hubs

fedora-hubs is a cool project aiming at simplifying the steps new contributors need to take to reach the old contributors. IRC, mailing lists and tickets are all places where activity happens but that might be obscure to new contributors. fedora-hubs tries to fill this gap by aggregating all the activity around a group of people and presenting it to new contributors so they know where to look to get aboard. We ran a workshop with a nice demo at Flock. We received some good positive feedback, and people seem to like how things are looking.

Personally, Flock was also the occasion to pass the torch on this project. I had been leading it since Ralph changed teams, but I am really not the expert in the technologies needed for hubs' frontend. So I passed the torch to Sayan, who is much more experienced than I am and who, I'm sure, will do a great job leading hubs. I will still be around; I am very much interested in helping with backend bits and pieces. FMN still needs some work, and a few other applications I maintain might require adjustments to integrate with hubs the way we want. So, do expect me around :)

Zanata

Finally, Flock was also the occasion for me to meet up with the folks from Zanata (the platform used by Fedora's translators). We had exchanged a few emails before the conference, as we asked them to expand their web-hooks so we could gather some more stats and include them on fedora-hubs. It was really nice to be able to discuss their plans and ours, and how we may be able to help each other.

Final words

Well, this has been quite a lengthy blog post. If you made it this far: congratulations!
As a final note, I would like to thank all the organizers of the conference. Having tried to place a bid for this year, I have a small idea of the amount of work involved, but they managed wonderfully and it was an excellent Flock!

Profiling in Python
By Pierre-Yves on Wednesday, June 29 2016, 13:08 - Général, fedora, fedora-planet, python

While working on FMN's new architecture, I have been wanting to profile the application a little, to see where it spends most of its time. I knew about the classic cProfile built into Python, but it didn't quite fit my needs, since I wanted to profile a very specific part of my code, preferably without refactoring it just so I could use cProfile.

Searching for a solution using cProfile (or something else), I ran into the PyCon presentation by A. Jesse Jiryu Davis entitled "Python performance profiling: the guts and the glory". It is really quite an interesting talk, and if you have not seen it I would encourage you to watch it (on YouTube).

This talk presents yappi, standing for "yet another python profiling implementation" and written by Sümer Cip, together with some code making it easy to use and to write the output in a format compatible with callgrind (allowing us to use KCachegrind to visualize the results).

To give you an example, this is how the code looked before (without profiling):

    t = time.time()
    results = fmn.lib.recipients(prefs, msg, valid_paths, config)
    log.debug("results retrieved in: %0.2fs", time.time() - t)

And this is the same code, integrated with yappi:

    import yappi
    yappi.set_clock_type('cpu')

    t = time.time()
    yappi.start(builtins=True)
    results = fmn.lib.recipients(prefs, msg, valid_paths, config)
    stats = yappi.get_func_stats()
    stats.save('output_callgrind.out', type='callgrind')
    log.debug("results retrieved in: %0.2fs", time.time() - t)

As you can see, all it takes is 5 lines of code to profile the function fmn.lib.recipients and dump the stats in callgrind format. And this is how the output looks in KCachegrind :)

New FMN architecture and tests
By Pierre-Yves on Saturday, June 25 2016, 13:23 - Général, fedmsg, fedora, fedora-planet, fmn, python, rabbitmq

Introduction

FMN is the fedmsg notification service. It allows any contributor (or actually, anyone with a FAS account) to tune which notifications they want to receive and how. For example, it allows saying things like:

- send me a notification on IRC for every package I maintain that has successfully built on Koji
- send me a notification by email for every request made in pkgdb to a package I maintain
- send me a notification by IRC when a new version of a package I maintain is found

How it works

The principle is that anyone can log in on the FMN web UI. There, they can create filters on a specific backend (email or IRC, mainly) and add rules to those filters. These rules must be either validated or invalidated for the notification to be sent. The FMN backend then listens to all the messages sent on Fedora's fedmsg and, for each message received, goes through all the rules in all the filters to figure out who wants to be notified about this action and how.

The challenge

Today, computing who wants to be notified and how takes about 6 to 12 seconds per message and is really CPU-intensive. This means that when an operation sends a few thousand messages on the bus (for example a mass-branching, or a packager with a lot of packages orphaning them), the queue of messages grows, and it can take hours to days for a notification to be delivered, which could be problematic in some cases.
The architecture

In the current architecture of FMN, a single fedmsg consumer receives every message, evaluates all the rules in-process (via fmn.lib, reading the preferences database), and hands the results directly to the email and IRC senders. (The original post includes an ASCII diagram of this layout; it did not survive the HTML-to-text conversion.)

As the diagram showed, it is not clear where the CPU-intensive part is, and that is because it is in fact integrated into the fedmsg consumer. This design, while making things easier, has the downside of making it practically impossible to scale easily when an event produces lots of messages. We multi-threaded the application as much as we could, but we quickly reached the limits of the GIL.

To try to improve this situation, we reworked the architecture of the backend as follows: the fedmsg consumer listens to Fedora's fedmsg and puts the messages into a RabbitMQ queue. These messages are then picked from the queue by multiple workers, which do the CPU-intensive task and put their results in another queue. (The original post includes a second ASCII diagram showing the consumer, the workers, the backend with its email, IRC and SSE senders, and the two RabbitMQ queues; it was likewise garbled in the conversion.)
The results are then picked from this second queue by a backend process that does the actual notification (sending the email or the IRC message). We also included an SSE component in the backend, which is something we want for fedora-hubs, but this still needs to be written.

Testing the new architecture

The new architecture looks fine on paper, but one might wonder how it performs in real life with real data. In order to test it, we wrote two scripts (one for the current architecture and one for the new one), sending messages via fedmsg or putting messages directly into the queue the workers listen to, thereby mimicking the behavior of the fedmsg consumer. Then we ran different tests.

The machine

The machine on which the tests were run:

- CPU: Intel i5 760 @ 2.8GHz (quad-core)
- RAM: 16G DDR2 (1333 MHz)
- Disk: SanDisk SDSSDA12 (120G)
- OS: RHEL 7.2, up to date
- Dataset: 15,000 (15k) messages

The results

The current architecture only allows running one test: send 15k fedmsg messages, let the fedmsg consumer process them, and monitor how long it takes to digest them.

Test #0 - fedmsg based
- lasted for 9:05:23
- maxed at: 14995
- avg processing: 0.46 msg/s

The new architecture, being able to scale, allowed for different tests: with 2 workers, then 4, then 6, and finally 8 workers. This gives us an idea of whether the scaling is linear and how much improvement we get by adding more workers.

Test #1 - 2 workers - 1 backend
- lasted for 4:32:48
- maxed at: 13470
- avg processing: 0.82 msg/s

Test #2 - 4 workers - 1 backend
- lasted for 3:18:10
- maxed at: 13447
- avg processing: 1.13 msg/s

Test #3 - 6 workers - 1 backend
- lasted for 3:06:02
- maxed at: 13392
- avg processing: 1.21 msg/s

Test #4 - 8 workers - 1 backend
- lasted for 3:14:11
- maxed at: 13351
- avg processing: 1.15 msg/s

Conclusions

Looking at the results, the new architecture clearly handles its load better and faster. However, the progression isn't as linear as we would like. My feeling is that retrieving information from the cache (here Redis) gets slower at some point, possibly also because of the central lock we tell Redis to use. As time permits, I will try to investigate this further to see if we can still gain some speed.

Playing with FMN
By Pierre-Yves on Monday, May 9 2016, 10:42 - Général, fedora, fedora-infra, fedora-planet, fmn

On Friday, I started playing with FMN. Currently there is a fedmsg consumer that listens to the messages coming from all over the Fedora infrastructure and, based on the preferences set in FMN's web UI, decides whether to send a notification and how. There have been thoughts about reworking the process to allow splitting it over multiple nodes. The idea is a pipeline: fedmsg feeds the FMN consumer, which pushes into Redis; several workers pull from Redis, push their results into a second Redis queue, and the email, IRC and GCM senders (which just do simple I/O) pick them up from there. (The ASCII diagram of this pipeline in the original post was garbled in the HTML-to-text conversion.) My question was how to divide the incoming messages among the different workers, so I adjusted the consumer a little to forward each message received to a different Redis channel.
The code looks something like:

    i = random.randint(0, self.workers - 1)
    log.debug('sending to worker %s' % i)
    print(self.redis[i])
    self.redis[i].publish('%s' % i, json.dumps(raw_msg))

We randomly pick one of the n workers we know are available (for my tests: 4). Sounds simple enough, right? But will it spread the load evenly between the workers? Over the week-end I left my test program running, and this is the output collected:

- worker 0: 126468 messages received
- worker 1: 126908 messages received
- worker 2: 126993 messages received
- worker 3: 126372 messages received

This makes a total of 506741 messages received over the week-end, with the load spread among the workers as follows:

- worker 0: 24.96% of the messages
- worker 1: 25.04% of the messages
- worker 2: 25.06% of the messages
- worker 3: 24.94% of the messages

Looks good enough :) Next step: splitting the code between fmn.consumer, fmn.worker and fmn.backend (the one doing the I/O) and figuring out how to deal with the cache.

Monitor performances of WSGI apps
By Pierre-Yves on Wednesday, March 2 2016, 17:02 - Général, fedora, fedora-planet, python

Accessing Pagure's performances via mod_wsgi-express. Continue reading...

Setting up Pagure on a Banana Pi
By Pierre-Yves on Tuesday, January 5 2016, 18:59 - Général, documentation, fedora, fedora-planet, pagure, postgresql, python

This is a small blog post about setting up Pagure on a Banana Pi. Continue reading...

Testing distgit in staging with fedpkgstg
By Pierre-Yves on Friday, December 11 2015, 12:24 - Général, documentation, fedora, fedora-planet, pkgdb

Every once in a while we make changes to dist-git in the Fedora infrastructure. This means we need to test our changes to make sure they do not break anything (ideally, at all). These days we are working on adding namespacing to our git repos so that we can support delivering something other than RPMs (the first use case being Docker).
With the current set-up we have, we added namespacing to pkgdb, which remains our main endpoint for managing who has access to which git repo (pkgdb being, in a way, a glorified interface to manage our gitolite). The next step is to teach gitolite about this namespacing. The idea is to move from:

    /srv/git/repositories/<pkg1>.git
    /srv/git/repositories/<pkg2>.git
    /srv/git/repositories/<pkg3>.git
    /srv/git/repositories/<pkg4>.git

to something like:

    /srv/git/repositories/rpms/<pkg1>.git
    /srv/git/repositories/rpms/<pkg2>.git
    /srv/git/repositories/rpms/<pkg3>.git
    /srv/git/repositories/rpms/<pkg4>.git
    /srv/git/repositories/docker/<pkg2>.git
    /srv/git/repositories/docker/<pkg5>.git

But, in order to keep things working with the clones already out there, we'll symlink the rpms namespace one level higher in the hierarchy, which should basically keep things running as they are currently.

So the question at hand is: now that we have adjusted our staging pkgdb and dist-git, how do we test that fedpkg still works? Here is a recipe from bochecha to make it easy to test fedpkg against staging while not breaking it for regular use. It goes in three steps:

1. Edit the file /etc/rpkg/fedpkg.conf and add to it (the two user@host URLs were mangled by the page's email obfuscation; they point at pkgs.stg.fedoraproject.org and fedorahosted.org respectively):

    [fedpkgstg]
    lookaside = http://pkgs.stg.fedoraproject.org/repo/pkgs
    lookasidehash = md5
    lookaside_cgi = https://pkgs.stg.fedoraproject.org/repo/pkgs/upload.cgi
    gitbaseurl = ssh://%(user)s@pkgs.stg.fedoraproject.org/%(module)s
    anongiturl = git://pkgs.stg.fedoraproject.org/%(module)s
    tracbaseurl = https://%(user)s:%(password)s@fedorahosted.org/rel-eng/login/xmlrpc
    branchre = f\d$|f\d\d$|el\d$|olpc\d$|master$
    kojiconfig = /etc/koji.conf
    build_client = koji

2. Create a fedpkgstg (the name of the CLI must match the title of the section added to the config file above):

    sudo ln -s /usr/bin/fedpkg /usr/bin/fedpkgstg

3. Call fedpkgstg to test staging, and fedpkg for your regular operations against the production instances.

Thanks bochecha!
Thursday, November 19 2015

Introducing mdapi

By Pierre-Yves on Thursday, November 19 2015, 15:22 - général fedora fedora-infra mdapi python

I have recently been working on a new small project: an API to query the information stored in the meta-data present in the RPM repositories (Fedora's and EPEL's). These meta-data include package name, summary, description, epoch, version and release, but also the changelog and the list of all the files in a package. They also include the dependency information: the regular provides, requires, obsoletes and conflicts, but also the new ones for soft dependencies: recommends, suggests, supplements and enhances.

With this project, we are exposing all this information to everyone, in an easy way. mdapi will check if the package asked for is present in either of the updates-testing, updates or release repositories (in this order), and it will return the information found in the first repo where there is a match (and say so).

So for example: https://apps.fedoraproject.org/mdapi/f23/pkg/guake?pretty=true shows the package information for guake in Fedora 23, where guake has been updated but the latest version is in updates, not updates-testing. Therefore it says "repo": "updates".

The application is written entirely in Python 3 using aiohttp, which is itself based on asyncio, allowing it to handle some load very nicely.
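Querying mdapi from a script is just an HTTP GET on a predictable URL. A small sketch (the branch and package values are only examples; the actual network call is left commented out so the snippet stays self-contained):

```python
import json
from urllib.request import urlopen

MDAPI_URL = 'https://apps.fedoraproject.org/mdapi'

def pkg_url(branch, pkg):
    """Build the mdapi endpoint for a package in a given branch."""
    return '%s/%s/pkg/%s' % (MDAPI_URL, branch, pkg)

url = pkg_url('f23', 'guake')
print(url)

# Uncomment to actually query the service:
# info = json.load(urlopen(url))
# print(info['repo'], info['version'])
```

The JSON answer includes a "repo" key saying which of updates-testing, updates or release the match came from.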
Just to show you, here is the result of a little test performed with the Apache benchmark tool:

    $ ab -c 100 -n 1000 https://apps.fedoraproject.org/mdapi/f23/pkg/guake
    This is ApacheBench, Version 2.3 <$Revision: 1663405 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

    Benchmarking apps.fedoraproject.org (be patient)
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Completed 500 requests
    Completed 600 requests
    Completed 700 requests
    Completed 800 requests
    Completed 900 requests
    Completed 1000 requests
    Finished 1000 requests

    Server Software:        Python/3.4
    Server Hostname:        apps.fedoraproject.org
    Server Port:            443
    SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,4096,128

    Document Path:          /mdapi/f23/pkg/guake
    Document Length:        1843 bytes

    Concurrency Level:      100
    Time taken for tests:   41.825 seconds
    Complete requests:      1000
    Failed requests:        0
    Total transferred:      2133965 bytes
    HTML transferred:       1843000 bytes
    Requests per second:    23.91 [#/sec] (mean)
    Time per request:       4182.511 [ms] (mean)
    Time per request:       41.825 [ms] (mean, across all concurrent requests)
    Transfer rate:          49.83 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:      513  610  207.1    547    1898
    Processing:   227 3356  623.2   3534    4025
    Waiting:      227 3355  623.2   3533    4024
    Total:        781 3966  553.2   4085    5377

    Percentage of the requests served within a certain time (ms)
      50%   4085
      66%   4110
      75%   4132
      80%   4159
      90%   4217
      95%   4402
      98%   4444
      99%   4615
     100%   5377 (longest request)

Note the:

    Time per request: 41.825 [ms] (mean, across all concurrent requests)

We are below 42ms (0.042 second) to retrieve the info of a package in the updates repo, and that while executing 100 requests at the same time, on a server that is in the US while I am in Europe.
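The two "time per request" figures in the ab output are related by the concurrency level, so the arithmetic can be checked directly (numbers taken from the run above):

```python
# Figures from the ab run above.
total_time_s = 41.825
requests = 1000
concurrency = 100

# Mean time per request averaged across all concurrent requests:
per_request_ms = total_time_s / requests * 1000

# Mean time per request as each individual client experiences it
# (100 requests are in flight at once, so each waits ~100x longer):
per_client_ms = per_request_ms * concurrency

print('%.3f ms across all concurrent requests' % per_request_ms)
print('%.1f ms per client' % per_client_ms)
```

This matches the 4182.511 ms (mean) line in the report, up to rounding.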
Running at https://apps.fedoraproject.org/mdapi/
Sources: https://pagure.io/mdapi/

Note: the ?pretty=true in the URL is handy to view the JSON returned, but I advise against using it in your applications, as it will increase the amount of data returned and thus slow things down.

Note 2: your mileage may vary when testing mdapi yourself, but it should remain pretty fast!

Wednesday, August 5 2015

Faitout changes home

By Pierre-Yves on Wednesday, August 5 2015, 11:02 - général database faitout fedora fedora-planet jenkins postgresql python unit-tests

Faitout is an application giving you full access to a PostgreSQL database for 30 minutes. This is really handy to run tests against. For example, for some of my applications, I run the tests locally against an in-memory SQLite database (very fast), and when I push, the tests are run on Jenkins, this time using faitout (a little slower, but much closer to the production environment). This setup allows me to find potential errors early in the code that SQLite does not trigger.

Faitout is running in the cloud of the Fedora infrastructure, and since this cloud has just been rebuilt, we had to move it. While doing so, faitout got a nice new address: http://faitout.fedorainfracloud.org/

So if you are using it, don't forget to update your URL ;-)

See also: previous blog posts about faitout.

Thursday, July 23 2015

Introducing flask-multistatic

By Pierre-Yves on Thursday, July 23 2015, 08:43 - général documentation fedora-planet flask flask-multistatic python

Flask is a micro-web-framework in Python. I have been using it for different projects for a couple of years now and I am quite happy with it. I have been using it for some of the applications run by the Fedora infrastructure. Some of these applications could be re-used outside Fedora, and this is of course something I would like to encourage.
One of the problems currently is that all those apps are branded for Fedora, so re-using them elsewhere can become complicated. This can be solved by theming. Theming means adjusting two components: templates and static files (images, CSS...).

Adjusting templates

Jinja2, the template engine in Flask, already supports loading templates from two different directories. This allows asking the application to load your own templates first and, if it does not find them, to look for them in the directory of the default theme. Code-wise it could look like this:

    # Use the templates:
    # first we test the core templates directory
    # (contains stuff that users won't see),
    # then we use the configured template directory
    import jinja2
    templ_loaders = []
    templ_loaders.append(app.jinja_loader)
    # First load the templates from the theme_folder defined in the configuration
    templ_loaders.append(jinja2.FileSystemLoader(os.path.join(
        app.root_path, app.template_folder, app.config['THEME_FOLDER'])))
    # Then load the other templates from the `default` theme folder
    templ_loaders.append(jinja2.FileSystemLoader(os.path.join(
        app.root_path, app.template_folder, 'default')))
    app.jinja_loader = jinja2.ChoiceLoader(templ_loaders)

Adjusting static files

This is a little more tricky, as static files are not templates and there is no logic in Flask to allow overriding one or another depending on where it is located. To solve this challenge, I wrote a small Flask extension: flask-multistatic, which basically allows Flask to have the same behavior for static files as it does for templates.
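The override behavior described here, serving the theme's copy of a file when it exists and falling back to the default otherwise, boils down to a first-match lookup across folders. A minimal sketch of that idea, independent of Flask (the folder and file names are made up):

```python
import os
import tempfile

def find_static(filename, folders):
    """Return the path of `filename` in the first folder containing it."""
    for folder in folders:
        candidate = os.path.join(folder, filename)
        if os.path.isfile(candidate):
            return candidate
    return None

# Hypothetical layout: a theme that overrides only style.css.
root = tempfile.mkdtemp()
theme = os.path.join(root, 'static', 'mytheme')
default = os.path.join(root, 'static', 'default')
os.makedirs(theme)
os.makedirs(default)
for folder, files in ((theme, ['style.css']),
                      (default, ['style.css', 'logo.png'])):
    for name in files:
        open(os.path.join(folder, name), 'w').close()

print(find_static('style.css', [theme, default]))  # the theme's copy
print(find_static('logo.png', [theme, default]))   # falls back to default
```

The order of the folders carries the override semantics: theme first, default last.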
Getting it to work is easy. At the top of your Flask application, do the imports:

    import flask
    from flask_multistatic import MultiStaticFlask

and make your Flask application multistatic:

    app = flask.Flask(__name__)
    app = MultiStaticFlask(app)

You can then specify multiple folders where static files are located, for example:

    app.static_folder = [
        os.path.join(app.root_path, 'static', app.config['THEME_FOLDER']),
        os.path.join(app.root_path, 'static', 'default')
    ]

Note: the order of the folders matters; the last one should be the folder with all the usual files (i.e. the default theme), the other ones are the folders for your specific theme(s).

Patrick Uiterwijk pointed out to me that this method, although working, is not ideal for production, as it means that all the static files are served by the application instead of being served by the web server. He therefore contributed an example Apache configuration allowing to obtain the same behavior (override static files), but this time directly in Apache!

So using flask-multistatic I will finally be able to make my apps entirely theme-able, allowing other projects to re-use them under their own brand.

Monday, June 29 2015

FESCo vote history

By Pierre-Yves on Monday, June 29 2015, 09:41 - général elections fedora fedora-planet

A while back I gathered some numbers about the number of participants in some elections held in Fedora. With the results of the new FESCo election being announced, I wanted to go back and see the new trend:

    FESCo (voters)
    2008-07  150
    2008-12  169
    2009-06  308
    2009-12  216
    2010-05  180
    2010-11  240
    2011-06  200
    2011-12  225
    2012-06  236
    2012-12  206
    2013-06  166
    2014-02  265
    2014-07  195
    2015-01  283
    2015-06   90

As you can see, this last election was the one with the lowest number of participants since at least July 2008.
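The "lowest turnout since at least July 2008" claim can be checked mechanically from the table above:

```python
# FESCo election turnout, copied from the table above.
votes = {
    '2008-07': 150, '2008-12': 169, '2009-06': 308, '2009-12': 216,
    '2010-05': 180, '2010-11': 240, '2011-06': 200, '2011-12': 225,
    '2012-06': 236, '2012-12': 206, '2013-06': 166, '2014-02': 265,
    '2014-07': 195, '2015-01': 283, '2015-06': 90,
}

# Find the election with the fewest voters.
lowest = min(votes, key=votes.get)
print(lowest, votes[lowest])
```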
Friday, June 26 2015

Packagers AFK in pkgdb

By Pierre-Yves on Friday, June 26 2015, 07:37 - général astuces fedocal fedora fedora-planet pkgdb

I just wanted to point out a small feature added to pkgdb recently. Basically, it integrates with the vacation calendar of fedocal to show on the packager's info page whether the person is on vacation or not. If you are dealing with someone who is slow to answer on bugs, IRC or emails, it may give you an insight as to why that is.

Note: I am in no way saying that Paul is slow to answer bugs, IRC or email; I have merely used him to illustrate my thoughts, following up on his post about the Red Hat Summit, and I shall not be held responsible for any variations in Paul's response time :-)

Thursday, June 25 2015

EventSource/Server-Sent Events: lesson learned

By Pierre-Yves on Thursday, June 25 2015, 10:49 - général documentation eventsource fedora fedora-planet python server-sent events sse

Recently I have been looking into Server-Sent Events, also known as SSE or EventSource. The idea of Server-Sent Events is to push notifications to the browser; in a way, it could be seen as a read-only web-socket (from the browser's view).

Implementing SSE is fairly easy code-wise. This article from html5rocks pretty much covers all the basics, but the principle is:

- add a little JavaScript to make your page connect to a specific URL on your server
- add a little more JavaScript to your page to react upon messages sent by the server

Server-side, things are also fairly easy but need a little consideration:

1. you basically need to create a streaming server, broadcasting messages as they occur or whenever you want
2. the format is fairly simple: data: <your data> \n\n
3. you cannot run this server behind Apache. The reason is simple: the browser keeps the connection open, which means Apache will keep the worker process running.
So after opening a few pages, Apache will reach its maximum number of worker processes, thus ending up in a situation where it is waiting forever for an available worker process (i.e. your Apache server is not responding anymore).

So after running into the third point listed above, I moved the SSE server out of my Flask application and into its own application, based on trollius (which is a backport of asyncio to Python 2), but any other async library would do (such as Twisted or gevent).

After splitting the code out and testing it some more, I found that there is a limitation on the number of permanent connections a browser can make to the same domain. I found a couple of pages mentioning this issue, but the most useful resource for me was this old blog post from 2008: roundup on parallel connections. It also provides the solution for working around this limitation: the limit is per domain, so if you set up a bunch of CNAME sub-domains redirecting to the main domain, it will work for as many connections as you like :-) (Note: this is also what GitHub and Facebook are using to implement web-socket support on as many tabs as you want.)

The final step in this work is to not forget to set the HTTP cross-origin access control (CORS) policy in the responses sent by your SSE server, to control cross-site HTTP requests (which are known security risks).

So in the end, I went for the following architecture: two users are viewing the same page. One of them edits it (i.e. sends a POST request to the Flask application); the web application (here Flask) processes the request as usual (changes something, updates the database...) and also queues a message in redis with information about the change (depending on what you want to do, specifying what has changed). The SSE server is listening to redis, picks up the message and sends it to the browsers of the two users. The JavaScript in the displayed page picks up the message, processes it and updates the page with the change.
This way, the first user updated the page and the second user had the change displayed automatically, without having to reload the page.

Note: asyncio has a redis connector via asyncio-redis, and trollius via trollius-redis.
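Stripped of redis and the network, the flow described in this post can be mimicked in-process. This is a toy sketch: `queue.Queue` stands in for the redis channel, the "browsers" are plain lists collecting SSE frames, and the page/user names are invented:

```python
import json
import queue

channel = queue.Queue()          # stands in for the redis pub/sub channel
browsers = [[], []]              # two connected users' event streams

def web_app_edit(page, user):
    """The Flask side: process the edit, then queue a notification."""
    # ... update the database here ...
    channel.put(json.dumps({'page': page, 'by': user}))

def sse_server_tick():
    """The SSE side: pick up one message and broadcast it to every browser."""
    msg = channel.get()
    for stream in browsers:
        # Wrap the payload in the SSE wire format: "data: ..." + blank line.
        stream.append('data: %s\n\n' % msg)

web_app_edit('index', 'user1')
sse_server_tick()
print(browsers[1][0])
```

Both users receive the same frame; in the real setup, the JavaScript EventSource handler would parse it and patch the page.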


Whois Information


Whois is a protocol that provides access to domain registration information: when the website was registered, when it will expire, and what the contact details of the site are. In a nutshell, it includes the following information:

%%
%% This is the AFNIC Whois server.
%%
%% complete date format : DD/MM/YYYY
%% short date format : DD/MM
%% version : FRNIC-2.5
%%
%% Rights restricted by copyright.
%% See https://www.afnic.fr/en/products-and-services/services/whois/whois-special-notice/
%%
%% Use '-h' option to obtain more information about this service.
%%
%% [2600:3c03:0000:0000:f03c:91ff:feae:779d REQUEST] >> pingoured.fr
%%
%% RL Net [##########] - RL IP [#########.]
%%

domain: pingoured.fr
status: ACTIVE
hold: NO
holder-c: ANO00-FRNIC
admin-c: ANO00-FRNIC
tech-c: OVH5-FRNIC
zone-c: NFC1-FRNIC
nsl-id: NSL18490-FRNIC
registrar: OVH
Expiry Date: 05/04/2022
created: 03/08/2007
last-update: 02/04/2016
source: FRNIC

ns-list: NSL18490-FRNIC
nserver: dns12.ovh.net
nserver: ns12.ovh.net
source: FRNIC

registrar: OVH
type: Isp Option 1
address: 2 Rue Kellermann
address: 59100 ROUBAIX
country: FR
phone: +33 8 99 70 17 61
fax-no: +33 3 20 20 09 58
e-mail: [email protected]
website: http://www.ovh.com
anonymous: NO
registered: 21/10/1999
source: FRNIC

nic-hdl: ANO00-FRNIC
type: PERSON
contact: Ano Nymous
remarks: -------------- WARNING --------------
remarks: While the registrar knows him/her,
remarks: this person chose to restrict access
remarks: to his/her personal data. So PLEASE,
remarks: don't send emails to Ano Nymous. This
remarks: address is bogus and there is no hope
remarks: of a reply.
remarks: -------------- WARNING --------------
registrar: OVH
changed: 05/04/2008 anonymous@anonymous
anonymous: YES
obsoleted: NO
source: FRNIC

nic-hdl: OVH5-FRNIC
type: ROLE
contact: OVH NET
address: OVH
address: 140, quai du Sartel
address: 59100 Roubaix
country: FR
phone: +33 8 99 70 17 61
e-mail: [email protected]
trouble: Information: http://www.ovh.fr
trouble: Questions: mailto:[email protected]
trouble: Spam: mailto:[email protected]
admin-c: OK217-FRNIC
tech-c: OK217-FRNIC
notify: [email protected]
registrar: OVH
changed: 11/10/2006 [email protected]
anonymous: NO
obsoleted: NO
source: FRNIC


