[CFT] Hadoop preliminary port

[CFT] Hadoop preliminary port

Clement Laforet-2
Hi,

You can find a preliminary port of hadoop 0.20.203.0 here:
http://people.freebsd.org/~clement/hadoop/

Features:
- Hadoop user creation (UID: 950)
- basic rc scripts for all hadoop services
- native library build for the current platform (i.e. i386 or amd64)
- bin/hadoop wrapper

ToDo:
- Work on environment variables
- rc scripts clean up
- test test test
- install contrib
- install c++ stuff in ${PREFIX}

All configuration files live in ${PREFIX}/etc/hadoop, log files in
/var/log/hadoop, and $HADOOP_HOME is ${PREFIX}/hadoop.
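For testers, wiring the services up might look like this. This is only a sketch: the rc.conf knob names are guessed from the rc script names shipped with the port (namenode, datanode, secondarynamenode, jobtracker, tasktracker) and the usual ${name}_enable convention, so they may differ.

```shell
# Hypothetical /etc/rc.conf knobs for the hadoop daemons; variable names
# assume the standard ${name}_enable convention for rc.d scripts.
namenode_enable="YES"
datanode_enable="YES"
secondarynamenode_enable="YES"
jobtracker_enable="YES"
tasktracker_enable="YES"
```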


Thanks for your help,

clem


Re: [CFT] Hadoop preliminary port

wen heping
Great work !

wen

_______________________________________________
[hidden email] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-ports
To unsubscribe, send any mail to "[hidden email]"

Re: [CFT] Hadoop preliminary port

Clement Laforet-2
In reply to this post by Clement Laforet-2
On Mon, Aug 08, 2011 at 11:14:32AM +0200, Clement Laforet wrote:
> Hi,
>
> You can find a preliminary port of hadoop 0.20.203.0 here:
> http://people.freebsd.org/~clement/hadoop/

Basic hive and pig ports are available here too.

clem


Re: [CFT] Hadoop preliminary port

Bernhard Fröhlich
On Mon, 8 Aug 2011 14:43:04 +0200, Clement Laforet wrote:

> On Mon, Aug 08, 2011 at 11:14:32AM +0200, Clement Laforet wrote:
>> Hi,
>>
>> You can find a preliminary port of hadoop 0.20.203.0 here:
>> http://people.freebsd.org/~clement/hadoop/
>
> Basic hive and pig ports are available here too.
>
> clem

Thanks a lot! Hadoop and Pig are on http://wiki.freebsd.org/WantedPorts,
so I've updated the status there.

--
Bernhard Fröhlich
http://www.bluelife.at/

Re: [CFT] Hadoop preliminary port

endzed
In reply to this post by Clement Laforet-2

Le 8 août 2011 à 14:43, Clement Laforet a écrit :

> On Mon, Aug 08, 2011 at 11:14:32AM +0200, Clement Laforet wrote:
>> Hi,
>>
>> You can find a preliminary port of hadoop 0.20.203.0 here:
>> http://people.freebsd.org/~clement/hadoop/
>
> Basic hive and pig ports are available here too.


Hello Clem,

I'm currently trying your preliminary ports for hadoop and pig.

For hadoop I had to run su -m hadoop -c 'hadoop namenode -format', after which everything runs fine :)
=> Maybe a namenodeformat rc command would help first-time users (similar to the postgresql initdb rc command).
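Such an rc.d hook could be sketched roughly like this. This is purely hypothetical: the command and function names are invented here, modeled on how rc.d scripts usually expose extra commands.

```shell
# Hypothetical fragment for the namenode rc script: expose a "format"
# extra command that initializes HDFS as the hadoop user on first run.
extra_commands="format"
format_cmd="namenode_format"

namenode_format()
{
        su -m hadoop -c 'hadoop namenode -format'
}
```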

Pig was a little more complex to get running: I needed to point the HADOOP_CONF_DIR and PIG_CLASSPATH environment variables at /usr/local/etc/hadoop (making this path the default would be great), and I'm still unable to run it against hadoop; only pig -x local works.
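Concretely, that workaround amounts to something like the following before invoking pig (sh syntax; the path is the port's ${PREFIX}/etc/hadoop with the default PREFIX of /usr/local):

```shell
# Point pig at the hadoop configuration directory installed by the port.
export HADOOP_CONF_DIR=/usr/local/etc/hadoop
export PIG_CLASSPATH=/usr/local/etc/hadoop
```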

Here's what happens:

%pig -x local --version
Apache Pig version 0.9.0 (r1148983)
compiled Jul 20 2011, 17:49:23
%pig -x local
2011-09-01 06:16:25,649 [main] INFO  org.apache.pig.Main - Logging error messages to: /usr/home/hadoop/pig_1314857785643.log
2011-09-01 06:16:26,020 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
grunt> quit;
%pig
2011-09-01 06:16:35,671 [main] INFO  org.apache.pig.Main - Logging error messages to: /usr/home/hadoop/pig_1314857795666.log
2011-09-01 06:16:36,096 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000
2011-09-01 06:16:36,436 [main] ERROR org.apache.pig.Main - ERROR 2999: Unexpected internal error. Failed to create DataStorage
Details at logfile: /usr/home/hadoop/pig_1314857795666.log
%cat /usr/home/hadoop/pig_1314857795666.log
Error before Pig is launched
----------------------------
ERROR 2999: Unexpected internal error. Failed to create DataStorage

java.lang.RuntimeException: Failed to create DataStorage
        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:75)
        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.<init>(HDataStorage.java:58)
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:196)
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:116)
        at org.apache.pig.impl.PigContext.connect(PigContext.java:184)
        at org.apache.pig.PigServer.<init>(PigServer.java:243)
        at org.apache.pig.PigServer.<init>(PigServer.java:228)
        at org.apache.pig.tools.grunt.Grunt.<init>(Grunt.java:46)
        at org.apache.pig.Main.run(Main.java:484)
        at org.apache.pig.Main.main(Main.java:108)
Caused by: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
        at org.apache.hadoop.ipc.Client.call(Client.java:743)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:72)
        ... 9 more
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
================================================================================


I understand that this has something to do with hadoop and pig version compatibility, but since I'm only an end user (i.e. I don't know about ant or java internals) I'm a little bit lost, as you can guess...

Can you help with this?

Please note that, out of laziness, I'm working as the hadoop user directly (I had to set its home directory and shell, btw); I haven't tried running pig under another user yet.

Thanks,
David



Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Clement Laforet-2
'Ello.

Signing in to let you know that I'm setting up FreeBSD 9.0 amd64 in a VirtualBox machine (4096 MB / 3 CPUs) and plan on setting up a FTHHJ (FreeBSD Tomcat-7 Hadoop Hive Diablo-Java... just coined that o.0) server and writing up some basic MapReduce jobs in Java using a customized Eclipse IDE on Win7.

How current is this port? I'm excited to get my hands on 1.0.0. I'll give your port a try; if I run into any obstacles I can't get past, I'll use portmaster to remove your port and try out Apache's CVS guide and repo instead.

Anyway... be expecting an update. Now to recall some of my university composition/rhetoric skills and write up something that everyone who got here through Google would appreciate. Let me know if you (not just the OP; yes, YOU can subscribe to this list at nabble.com, speak, and be heard) have any POI (points of interest) you'd like me to cover in more detail.

If nobody replies, expect a howto guide taking the reader from A to Z. At the very least, a shameful admission of failure and defeat. xD

Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Clement Laforet-2
Apologies in advance if I'm wrong, but I'm failing to find the tgz holding the Hadoop port on http://people.freebsd.org/~clement/hadoop/

Older versions in http://people.freebsd.org/~clement/hadoop/old/ are there, sure, but the latest (1.0.0) is what I'm looking for.

Re: [CFT] Hadoop preliminary port

Denis Generalov-3
On Sun, 12 Feb 2012 23:03:30 -0800 (PST)
Bleakwiser <[hidden email]> wrote:

> Apologies in order if I'm wrong, but I am failing to find the tgz holding the
> Hadoop port on http://people.freebsd.org/~clement/hadoop/
>
> Older versions in http://people.freebsd.org/~clement/hadoop/old/ are there,
> sure, but the latest (1.0.0) is what I'm looking for.

Look at http://people.freebsd.org/~clement/hadoop/hadoop-1.0.0.diff



--
Denis Generalov <[hidden email]>

Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Clement Laforet-2
I know this isn't exactly the place for it, but I really have no idea what to do with this .diff file.

I've looked at the FreeBSD Handbook and scoured forums and old forum posts, and all I can find is how to create such diffs when updating a port's source, not how to apply one.

Re: [CFT] Hadoop preliminary port

Eygene Ryabinkin-2
Hi.

Sun, Feb 12, 2012 at 11:44:03PM -0800, Bleakwiser wrote:
> I know this isn't exactly the place for it but I really have no idea what to
> do with this .diff file.

Try
{{{
cd /usr/ports
fetch -o - http://people.freebsd.org/~clement/hadoop/hadoop-1.0.0.diff | patch -p1
}}}
then go to the relevant port directory (devel/hadoop, I suppose)
and build the port.

Then try reading 'man diff' and 'man patch' to understand what
you did with the fetch/patch combo ;))
--
Eygene Ryabinkin                                        ,,,^..^,,,
[ Life's unfair - but root password helps!           | codelabs.ru ]
[ 82FE 06BC D497 C0DE 49EC  4FF0 16AF 9EAE 8152 ECFB | freebsd.org ]


Re: [CFT] Hadoop preliminary port

Bleakwiser
Kidding right?

patch -p1 isn't even mentioned in the man pages...
And again, I've run
> patch hadoop-1.0.0.diff
Nothing happens, just a blank cursor.

For starters it isn't really clear what I'm even supposed to be applying the patch to.




--
Trae Barlow

Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Eygene Ryabinkin-2
Whelp, best of luck to you fellas then.

Unfortunately I'm lacking the experience that would help you folks with testing this port.

Hopefully things go well with CVS (it's at least well documented at Apache); otherwise I'll have to dump BSD off my server machine and stick CentOS (yuck) on it.
Reply | Threaded
Open this post in threaded view
|

Re: [CFT] Hadoop preliminary port

Eygene Ryabinkin-2
In reply to this post by Bleakwiser
Sun, Feb 12, 2012 at 11:57:03PM -0800, Bleakwiser wrote:

> On Mon, Feb 13, 2012 at 1:49 AM, Eygene Ryabinkin-2 [via FreeBSD] <
> [hidden email]> wrote:
> > Sun, Feb 12, 2012 at 11:44:03PM -0800, Bleakwiser wrote:
> > > I know this isn't exactly the place for it but I really have no idea
> > what to
> > > do with this .diff file.
> >
> > Try
> > {{{
> > cd /usr/ports
> > fetch -o - http://people.freebsd.org/~clement/hadoop/hadoop-1.0.0.diff |
> > patch -p1
> > }}}
> > then go to the relevant port directory (devel/hadoop, I suppose)
> > and build the port.
>
> Kidding right?
No, I am dead serious.

> patch -p1 isn't even mentioned in the man pages.....

'patch -p1' can't be mentioned in the man pages as such, because 'patch' is
the utility and '-p1' is an argument to that utility.  Invoke 'man patch'
and look for '-p[number]'.

> And again, i've ran,
> > patch hadoop-1.0.0.diff
> Nothing happens, just blank cursor.

It seems that your mail reader broke the line starting
with 'fetch -o -' in two. You should invoke the command
{{{
fetch -o - http://people.freebsd.org/~clement/hadoop/hadoop-1.0.0.diff | patch -p1 -E
}}}
with everything on one line: fetch feeds its output to the patch utility.

By the way, I forgot the -E flag -- it could be needed too.

> For starters it isn't really clear what I'm even supposed to be applying
> the patch to.

I think the Wikipedia article about patch is a good place to start:
  http://en.wikipedia.org/wiki/Patch_%28Unix%29

In a nutshell: since you're doing 'cd /usr/ports', the patchfile contents
specify that the diffs are relative to the ports directory,
{{{
--- ports/devel/Makefile 30 Jan 2012 09:15:00 -0000 1.4819
+++ ports/devel/Makefile 1 Feb 2012 16:26:31 -0000
}}}
and -p1 instructs the 'patch' utility to strip one directory level from
the file names to be patched: you'll be patching every file mentioned in
the diff, but without the leading 'ports/' prefix, with $PWD (/usr/ports)
as the base directory.
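To see the stripping in action without touching the real ports tree, here is a throwaway demonstration with made-up file names in a scratch directory (not part of the original mail):

```shell
# Demonstrate 'patch -p1' path stripping in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p devel/hello
printf 'old\n' > devel/hello/Makefile

# A unified diff whose paths carry a leading 'ports/' component,
# just like the hadoop-1.0.0.diff headers quoted above:
cat > example.diff <<'EOF'
--- ports/devel/hello/Makefile
+++ ports/devel/hello/Makefile
@@ -1 +1 @@
-old
+new
EOF

# -p1 strips 'ports/', so the target is found relative to $PWD.
patch -p1 < example.diff
cat devel/hello/Makefile    # prints: new
```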

If you really want to understand which files will be patched,
here we go:
{{{
$ fetch -qo - http://people.freebsd.org/~clement/hadoop/hadoop-1.0.0.diff | grep -E '^(---|\+\+\+) ' | grep -v /dev/null | awk '{print $2;}' | sort | uniq | sed -e's,^,/usr/,'
/usr/ports/GIDs
/usr/ports/UIDs
/usr/ports/devel/Makefile
/usr/ports/devel/hadoop/Makefile
/usr/ports/devel/hadoop/distinfo
/usr/ports/devel/hadoop/files/000.java_home.env.in
/usr/ports/devel/hadoop/files/datanode.in
/usr/ports/devel/hadoop/files/hadoop.in
/usr/ports/devel/hadoop/files/jobtracker.in
/usr/ports/devel/hadoop/files/namenode.in
/usr/ports/devel/hadoop/files/patch-build.xml
/usr/ports/devel/hadoop/files/patch-src__c++__libhdfs__hdfs.c
/usr/ports/devel/hadoop/files/patch-src__c++__libhdfs__hdfsJniHelper.c
/usr/ports/devel/hadoop/files/patch-src__native__Makefile.in
/usr/ports/devel/hadoop/files/patch-src__native__configure
/usr/ports/devel/hadoop/files/patch-src__native__configure.ac
/usr/ports/devel/hadoop/files/patch-src__native__src__org__apache__hadoop__io__nativeio__NativeIO.c
/usr/ports/devel/hadoop/files/patch-src__native__src__org__apache__hadoop__security__JniBasedUnixGroupsNetgroupMapping.c
/usr/ports/devel/hadoop/files/pkg-deinstall.in
/usr/ports/devel/hadoop/files/pkg-install.in
/usr/ports/devel/hadoop/files/secondarynamenode.in
/usr/ports/devel/hadoop/files/tasktracker.in
/usr/ports/devel/hadoop/pkg-descr
/usr/ports/devel/hadoop/pkg-plist
}}}

Again, the command, starting at 'fetch' and ending at 'sed',
must be on a single line.
--
Eygene Ryabinkin                                        ,,,^..^,,,
[ Life's unfair - but root password helps!           | codelabs.ru ]
[ 82FE 06BC D497 C0DE 49EC  4FF0 16AF 9EAE 8152 ECFB | freebsd.org ]


Re: [CFT] Hadoop preliminary port

Matthew Seaman-2
In reply to this post by Bleakwiser
On 13/02/2012 07:57, Bleakwiser wrote:
> Kidding right?
>
> patch -p1 isn't even mentioned in the man pages.....
> And again, i've ran,
>> > patch hadoop-1.0.0.diff
> Nothing happens, just blank cursor.

patch expects to read a diff file on its standard input, so the command
you need to run is:

   patch < hadoop-1.0.0.diff

> For starters it isn't really clear what I'm even supposed to be applying
> the patch to.

The files to patch are determined from the content of the diff.

        Cheers,

        Matthew

--
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
JID: [hidden email]               Kent, CT11 9PW



Re: [CFT] Hadoop preliminary port

Mehmet Erol Sanliturk
In reply to this post by Eygene Ryabinkin-2
On Mon, Feb 13, 2012 at 2:48 AM, Eygene Ryabinkin <[hidden email]> wrote:

> [...]


Unless it has been added by hand, Hadoop does NOT appear in

/usr/ports/devel/hadoop/
of FreeBSD 9.0-RELEASE amd64 (there is no hadoop directory),

or in any one of the following:

http://www.freebsd.org/ports/devel.html
http://www.freebsd.org/ports/master-index.html
http://www.freshports.org/


Thank you very much.

Mehmet Erol Sanliturk

Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Matthew Seaman-2
I was able to find some better information on the patch command through Wikipedia; their article on it is really great.

However I'm still not clear on what files I'm supposed to download to run the patch against. I've dug around inside the .diff file with pico a bit and it's rather cryptic: lots of lines are prefixed (added lines start with +, I assume). I was also able to find the relevant man pages for 'patch', as opposed to 'man diff', which I was using earlier.

The correct command would be 'patch -p1 < hadoop-1.0.0.diff', not just 'patch -p1' without any other arguments; that tells the patch program to use the directory structure inside the .diff file. But perhaps piping the output of 'fetch' handles all that for me?

However, we are getting way ahead of ourselves here, and the same thing can be accomplished with different syntax. Without the correct software to apply the patch to, there isn't much sense in running the command at all. Since I'm not familiar with the rather cryptic contents of .diff files, I've wasted a good half hour just looking through its contents for some clue as to exactly what I need to be downloading. Of course there is always the try-and-fail technique, where I would simply try each tbz off the site until one worked... for some reason I never liked that method.

Thanks again. I am definitely not elite enough to be helping test this port, so I'll leave it to those on the list already informed. I wouldn't want to keep wasting anyone's time with trivial questions, much less waste hours and hours of my own time looking up syntax for niche CLI utilities, so take care and good luck with the port. It's definitely a 'killer app', so for BSD's sake I hope you folks get it done.

And as promised, the /facepalm of failure...
/facepalm




--
Trae Barlow

Re: [CFT] Hadoop preliminary port

Bernhard Fröhlich-2
On 13.02.2012 09:48, Bleakwiser wrote:

> [...]


There are at least two more ways to get hadoop. Clement has a
redports.org account and is working on hadoop there, so you can just
check out his svn tree and get the latest port:

1) fetch his redports.org repository compressed as tar.bz2:

fetch http://redports.org/~clement/svn.tar.bz2
tar xvf svn.tar.bz2


2) or, with devel/subversion installed:

svn co https://svn.redports.org/clement/


Now manually add the hadoop lines from the checkout's GIDs/UIDs files
to your /usr/ports/GIDs and /usr/ports/UIDs, and copy the devel/hadoop
directory over to /usr/ports.
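A rough sketch of that manual merge, staged in a scratch directory so nothing real is modified. The UID/GID entries below are invented for illustration (the port does use UID 950, but the actual lines should be taken from the checkout):

```shell
# Stage a fake ports tree and checkout, then splice the hadoop
# entries into UIDs/GIDs and copy the port directory over.
tmp=$(mktemp -d)
mkdir -p "$tmp/ports/devel" "$tmp/clement/devel/hadoop"
echo 'hadoop:*:950:950::0:0:hadoop user:/nonexistent:/usr/sbin/nologin' > "$tmp/clement/UIDs"
echo 'hadoop:*:950:' > "$tmp/clement/GIDs"
: > "$tmp/ports/UIDs"
: > "$tmp/ports/GIDs"

# Append only the hadoop lines to the ports tree's UID/GID registries.
grep '^hadoop:' "$tmp/clement/UIDs" >> "$tmp/ports/UIDs"
grep '^hadoop:' "$tmp/clement/GIDs" >> "$tmp/ports/GIDs"

# Copy the port skeleton into place.
cp -R "$tmp/clement/devel/hadoop" "$tmp/ports/devel/"
```

On a real system the destinations would be /usr/ports/UIDs, /usr/ports/GIDs and /usr/ports/devel/hadoop.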

--
Bernhard Froehlich
http://www.bluelife.at/

Re: [CFT] Hadoop preliminary port

Bleakwiser
SVN worked like a charm. I didn't have devel/hadoop initially, I guess because I'm running 9.0 amd64?

I had just run portsnap fetch too, so the ports tree was 100% up to date (unless changes were made in the last 5 minutes), so that was a non-issue.
https://svn.redports.org/clement/devel/hadoop/
was the repo that I used, and I followed the otherwise unrelated guide on how to use Subversion here:
http://developer.berlios.de/docman/display_doc.php?docid=394&group_id=2#checkhttps

I'm installing in a clean environment (only portmaster, diablo-jdk16, wget, and subversion installed, in that order).

We'll see how things go. I'm glad you folks hadn't given up on me, as it appears we have found the source of the issue (the port missing from the 9.0 tree?).




--
Trae Barlow

Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Bernhard Fröhlich-2
Well, the poster above was right. The UIDs and GIDs need to be set manually.

I want to do this properly... I'm guessing I need to create a hadoop user, get its UID and GID, then add them to /usr/ports/UIDs and /usr/ports/GIDs?

I'm guessing the best way to find out which users I need to create is documented on the hadoop.apache.org site?

On Mon, Feb 13, 2012 at 8:00 AM, Trae Barlow <[hidden email]> wrote:
SVN worked like a charm. I didn't have devel/hadoop initially, I guess because I'm running 9.0 amd64?

I just ran portsnap fetch too, so the ports tree was 100% up to date (unless changes were made in the last 5 minutes), so that was a non-issue.
https://svn.redports.org/clement/devel/hadoop/
was the repo that I used, and I followed the otherwise unrelated guide on how to use Subversion here:
http://developer.berlios.de/docman/display_doc.php?docid=394&group_id=2#checkhttps

I'm installing in a clean environment (only portmaster, diablo-jdk16, wget, and subversion installed, in that order).

We'll see how things go. I'm glad you folks hadn't given up on me, as it appears we have found the source of the issue (ports tree missing from 9.0?).

On Mon, Feb 13, 2012 at 3:27 AM, Bernhard Froehlich-2 [via FreeBSD] <[hidden email]> wrote:
On 13.02.2012 09:48, Bleakwiser wrote:

> I was able to find some better information on the patch command through
> Wikipedia; their article on it is really great.
>
> However, I'm still not clear on which files I'm supposed to download to
> run the patch against. I've dug around inside the .diff file with pico
> a bit and it's rather cryptic. Lots of lines are marked (I assume by
> the leading +), and I was also able to find some relevant man pages on
> 'patch', as opposed to the 'man diff' I was using earlier.
>
> The correct command would be 'patch -p1 < hadoop-1.0.0.diff', not just
> 'patch -p1' without any other arguments, which tells the patch program
> to use the directory structure inside the .diff file. But perhaps
> piping the output of 'fetch' handles all that for me?
>
> However, we are getting way ahead of ourselves here, and the same
> thing can be accomplished with different syntax. Without the correct
> software to apply the patch to, there isn't much sense in running the
> command at all. Since I'm not familiar with the rather cryptic
> contents of .diff files, I've wasted a good half hour just looking
> through its contents for some clue as to exactly what I need to be
> downloading. Of course, there is always the try-and-fail technique,
> where I would simply try each tbz off the site until one worked... for
> some reason I never liked that method.
>
> Thanks again. I am definitely not elite enough to be helping test this
> port, so I'll leave it to those on the list who are already informed.
> I wouldn't want to keep wasting anyone's time with trivial questions,
> much less waste hours of my own time looking up syntax for niche CLI
> utilities, so take care and good luck with the port. It's definitely a
> 'killer app', so for BSD's sake I hope you folks get it.
>
> And as promised, the /facepalm of failure...
> /facepalm

There are at least 2 more ways to get hadoop. Clement has an
redports.org
account and is working on hadoop there. So you could just checkout his
svn tree and get the latest port:

1) just fetch his redports.org repository compressed as tar.bz2:

fetch http://redports.org/~clement/svn.tar.bz2
tar xvf svn.tar.bz2


2) you need devel/subversion installed for that

svn co https://svn.redports.org/clement/


Now you need to manually add the hadoop lines in GIDs/UIDs to your
/usr/ports/GIDs|UIDs files as well as copying over the devel/hadoop
directory to /usr/ports.

--
Bernhard Froehlich
http://www.bluelife.at/
_______________________________________________
[hidden email] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-ports
To unsubscribe, send any mail to "[hidden email]"






--
Trae Barlow



Reply | Threaded
Open this post in threaded view
|

Re: [CFT] Hadoop preliminary port

Bleakwiser
In reply to this post by Bernhard Fröhlich-2
I found some documentation here... although it pertains to RHEL and Debian, and there is a good chance our IDs differ:
https://issues.apache.org/jira/browse/HADOOP-7603

I'm no pro, but judging from the conversations in the above thread, I'm guessing any UID or GID that isn't already allocated to something else is safe to use?

PS: Something else that might be interesting to the maintainer... some disenchanting claims are made about FreeBSD and Hadoop:
http://search-hadoop.com/m/L1UhF1QJo982/UID+GID+FreeBSD&subj=What+s+the+problem+with+nio+on+FreeBSD+
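One quick way to vet a candidate ID before claiming it is to check field 3 of the colon-separated registry files (/usr/ports/UIDs, /usr/ports/GIDs, /etc/passwd all use that layout). This is a sketch with a throwaway file; the helper name is made up for illustration.

```shell
# id_taken <id> <file>: succeed if <id> appears as the numeric ID (field 3)
# of any line in a colon-separated passwd/UIDs/GIDs-style file.
id_taken() {
    awk -F: -v id="$1" 'BEGIN { rc = 1 } $3 == id { rc = 0 } END { exit rc }' "$2"
}

# Example against a throwaway registry file:
tmp=$(mktemp)
echo 'www:*:80:80::0:0:World Wide Web Owner:/nonexistent:/usr/sbin/nologin' > "$tmp"
id_taken 80 "$tmp"  && echo "80 is taken"
id_taken 950 "$tmp" || echo "950 looks free"
```

For a real check you would run it over /usr/ports/UIDs, /usr/ports/GIDs, /etc/passwd, and /etc/group.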
