Thursday, June 4, 2009

Hi, everyone!
I leave Sendai tomorrow (actually, today!?).
Today I prepared for departure and studied the NSUserDefaults and NSMutableDictionary classes.
I thought I could implement "save" and "restore" features, but I couldn't.
NSUserDefaults can save and restore data of type NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary.
I tried to save a UIImage object, but NSUserDefaults can't save a UIImage directly.
So I tried saving an NSMutableDictionary that contains the UIImage object, but that failed too:
NSUserDefaults restored the NSMutableDictionary, but not the UIImage object inside it.

mmm...

After all, I'll have to convert the UIImage object into an NSData object.
I don't know how to do that yet...
I'll ask an Apple engineer at WWDC2009.
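One possible approach (a sketch I haven't verified; it assumes that PNG encoding round-trips the image well enough) is to convert the UIImage to NSData with UIImagePNGRepresentation before storing it:

```objc
// Saving: encode the UIImage as PNG data, which NSUserDefaults can store.
NSData* imageData = UIImagePNGRepresentation(image);
[[NSUserDefaults standardUserDefaults] setObject:imageData forKey:@"savedImage"];
[[NSUserDefaults standardUserDefaults] synchronize];

// Restoring: read the NSData back and rebuild a UIImage from it.
NSData* restoredData = [[NSUserDefaults standardUserDefaults] objectForKey:@"savedImage"];
UIImage* restoredImage = [UIImage imageWithData:restoredData];
```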


I'll just have to keep working until WWDC2009.

Wednesday, June 3, 2009

6/1

Good morning.
I prepared for WWDC2009.

I drew up a schedule (reserved a bus, planned activities in San Francisco, and more) and prepared for the lab sessions.
I'll talk with Apple engineers in the lab sessions.
I made materials for the lab sessions.
Because I'd like to get good answers there, I made the materials thorough.


I'll go to the lab to get the best possible answers about the Quartz2D problems in my application.

And I'll go to the User Interface design lab session.
I'll study for it until I attend WWDC2009.

Tuesday, June 2, 2009

Good morning.
I studied UI design from the PDF "iPhone Human Interface Guidelines",
because I'll attend the "iPhone Interface Design Consulting Lab",
but I couldn't study hard, because I had other work to do.

I only read a few pages...

Monday, June 1, 2009

5/31

Good morning!
I solved most of yesterday's problem.
Yesterday I didn't know what lived at the unknown address.
Now I know most of its real identity:
it's the alpha value of the pixel.
Each component value (RGBA) lives at its own address.
Please see below.
+-----+ ---
|alpha|  |
+-----+  |
|blue |  |
+-----+  | Current pixel
|green|  |
+-----+  |
|red  |  |
+-----+ ---
|alpha|  |
+-----+  |
|blue |  |
+-----+  | Next pixel
|green|  |
+-----+  |
|red  |  |
+-----+ ---
|  :  |
|  :  |

So the components are laid out as shown above.
I tried changing the alpha value, but it's not working completely.
I got the results shown below.
Image1
Image2
Image1+2


The alpha values here are 100/255.
About half of the pixels are transparent.
But the rest aren't; those pixels look broken.
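For the record, the experiment was roughly the following (a sketch using the buffer, width, height, and bytesPerRow variables from the bitmap code in the posts below, and assuming the byte order in my diagram above; note that because the bitmap uses premultiplied alpha, writing only the alpha byte without rescaling the color bytes could be exactly what makes the remaining pixels look broken):

```objc
// Walk every pixel and overwrite what the diagram labels the alpha byte.
NSUInteger i, j;
for (j = 0; j < height; j++) {
    for (i = 0; i < width; i++) {
        UInt8* pixel = buffer + j * bytesPerRow + i * 4;
        pixel[0] = 100;   // alpha = 100/255 (byte order assumed from the diagram)
        // With premultiplied alpha, the color bytes would also need rescaling:
        // pixel[1] = pixel[1] * 100 / 255;  ...and likewise for the other two.
    }
}
```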
I won't try to fix it before WWDC2009 starts, because I can't fix it quickly.
And I think I should study other things that will be useful for attending WWDC2009.
I took statistics on the difference in touchesMoved calls counted per 10 seconds.
The algorithm of "Nazca" (renamed from "YubiFude") uses three CGLayer objects (sometimes 4 layers, depending on the situation).
The application's speed drops as the number of layers grows.
I took statistics on that.
The result is this.



As a result: the application slows down as the layers increase.
The difference is about 20 counts per 10 seconds.
I'll ask Apple engineers about a better algorithm at WWDC2009.

Sunday, May 31, 2009

Good morning!
I studied bitmaps and prepared for WWDC2009.

When I get and change pixel information, I use a pointer to access the pixel.
A pixel's information consists of 4 components,
but I had only been using 3 of them: red, green, and blue.
I didn't know what the other component was.
I studied it.
I accessed the components and changed their values (0-255).
This is the result.

Probably some of the pixel data is broken,
but I don't know its identity yet.

Because I lack skill with the drawing environment, I'll study it from the reference (the Quartz 2D Programming Guide).
I spent plenty of time analyzing the code.
I have to gain knowledge about 2D drawing.

****************************************************************
Only 8 days left until WWDC2009 starts!!
****************************************************************

Saturday, May 30, 2009

I studied bitmaps yesterday.
I now understand the bitmap algorithm.
See the code and explanation below.

*****************************************************************************
//This method is invoked when the Choose button is clicked
- (void)imagePickerController:(UIImagePickerController*)picker
didFinishPickingImage:(UIImage*)image
editingInfo:(NSDictionary*)editingInfo
{
//Restore the buttons
statusButton.hidden = NO;
colorButton.hidden = NO;
fillColorButton.hidden = NO;
doneButton.hidden = NO;
allClearButton.hidden = NO;
layer1ClearButton.hidden = NO;
layer2ClearButton.hidden = NO;
layer3ClearButton.hidden = NO;
saveButton.hidden = NO;
loadButton.hidden = NO;

//Hide an ImagePickerView
[self dismissModalViewControllerAnimated:YES];

//Create a new UIImage object (originalImage) and assign the chosen image to it.
UIImage* originalImage;
originalImage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];

//Create a new CGSize object (size) and initialize it. Its size is the full screen size.
CGSize size = { 320, 480 };
//Creates a bitmap-based graphics context and makes it the current context.
UIGraphicsBeginImageContext(size);

//Create a CGRect object for use with drawInRect
CGRect rect;
rect.origin = CGPointZero;//(0, 0)
rect.size = size;
[originalImage drawInRect:rect];

//Get an image with UIGraphicsGetImageFromCurrentImageContext. //This function returns an image based on the contents of the current bitmap-based graphics context.
shrinkedImage = UIGraphicsGetImageFromCurrentImageContext();

//Removes the current bitmap-based graphics context from the top of the stack.
UIGraphicsEndImageContext();



//Create a CGImageRef (cgImage) and assign shrinkedImage.CGImage to it. //CGImage is a UIImage property.
CGImageRef cgImage;
cgImage = shrinkedImage.CGImage;

// Get information about the image
//__SIZE_TYPE__ is a compiler macro equal to unsigned int
//typedef __SIZE_TYPE__ __darwin_size_t; /* sizeof() */
//typedef __darwin_size_t size_t;
size_t width;//320
size_t height;//480
size_t bitsPerComponent;//8
size_t bitsPerPixel;//32
size_t bytesPerRow;//1280 = 4byte*320
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;//8193
bool shouldInterpolate;//true
CGColorRenderingIntent intent;//14081600

width = CGImageGetWidth(cgImage);
height = CGImageGetHeight(cgImage);
bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
bitsPerPixel = CGImageGetBitsPerPixel(cgImage);
bytesPerRow = CGImageGetBytesPerRow(cgImage);
colorSpace = CGImageGetColorSpace(cgImage);
bitmapInfo = CGImageGetBitmapInfo(cgImage);
shouldInterpolate = CGImageGetShouldInterpolate(cgImage);//Returns the interpolation setting for a bitmap image.
intent = CGImageGetRenderingIntent(cgImage);//Returns the rendering intent setting for a bitmap image.
/*kCGRenderingIntentDefault, kCGRenderingIntentAbsoluteColorimetric, kCGRenderingIntentRelativeColorimetric, kCGRenderingIntentPerceptual, kCGRenderingIntentSaturation*/
// Create a dataProvider object and get a data provider
CGDataProviderRef dataProvider;// which you use to move data into and out of Quartz. // CGDataProviderRef allow you to supply Quartz functions with data.
dataProvider = CGImageGetDataProvider(cgImage);//Returns the data provider for a bitmap image.




//Get information about the bitmap data
CFDataRef data;
data = CGDataProviderCopyData(dataProvider);//data = 614400byte = 153600(320*480)*4 //Returns a copy of the provider’s data.

//A reference to an immutable CFData object.
UInt8* buffer;//buffer is a pointer to UInt8 //typedef unsigned char UInt8;

buffer = (UInt8*)CFDataGetBytePtr(data);
//Returns a read-only pointer to the bytes of a CFData object.

// Apply a color-changing effect per pixel
NSUInteger i, j;
for (j = 0; j < height; j++)
{
    for (i = 0; i < width; i++)
    {
        // Get the pointer to the pixel
        /* address space:
           1 pixel: alpha / blue / green / red,
           then the next pixel: alpha / blue / green / red, ... */
        UInt8* tmp;
        tmp = buffer + j * bytesPerRow + i * 4;
        // tmp walks buffer's memory 4 bytes at a time, applying the effect pixel by pixel.

        // Get the RGB values
        UInt8 r, g, b;
        r = *(tmp + 3);//r
        g = *(tmp + 2);//g
        b = *(tmp + 1);//b

        // Compute the luminance
        UInt8 y;
        y = (77 * r + 28 * g + 151 * b) / 256;//weighted average
        /* This weighted average, called luminance, is used in color-TV signal
           processing. Luminance expresses the brightness of a color.
           The average is weighted because human vision has a different
           sensitivity to red, green, and blue: sensitivity rises in the order
           blue < red < green, so the most sensitive color gets the largest weight.
           http://www.asahi-net.or.jp/~uc3k-ymd/Glib32/loadbmp.html */
        // Set the luminance as the RGB values
        /*********negative effect************
        *(tmp + 1) = 255-b;//b
        *(tmp + 2) = 255-g;//g
        *(tmp + 3) = 255-r;//r
        ***************************/

        *(tmp + 3) = 0;//r
        *(tmp + 2) = g;//g
        *(tmp + 1) = 0;//b

    }
}
*****************************************************************************
I could get the pixel data of the image and apply various effects to it,
for example grayscale and negative.


I couldn't understand one method: (UInt8*)CFDataGetBytePtr(data);
I'll work on understanding it.

Thursday, May 28, 2009

Good morning!

I studied bitmaps, color components, and the algorithm of my application.

I have a problem with my application's algorithm.

I use the UIImagePickerController class.

But this class has only a few APIs, and it has no API that I want.

I'd like a hook that is invoked when the Cancel button is clicked.


Thus I'll make a new method so that my problem is solved.
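For what it's worth, the UIImagePickerControllerDelegate protocol does define a callback for exactly this case; a minimal sketch (assuming the picker's delegate is set to self) looks like:

```objc
// Delegate callback fired when the user taps Cancel in the picker.
- (void)imagePickerControllerDidCancel:(UIImagePickerController*)picker
{
    // Put the restore-the-buttons logic here, then dismiss the picker.
    [self dismissModalViewControllerAnimated:YES];
}
```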


Next, bitmaps and color components.

These need knowledge of Quartz 2D.

I learned about premultiplied alpha values.

Using premultiplied alpha values speeds up rendering.

...the reference says:

"For bitmaps that have an alpha component, whether the color components are already multiplied by the alpha value. Premultiplied alpha describes a source color whose components are already multiplied by an alpha value. Premultiplying speeds up the rendering of an image by eliminating an extra multiplication operation per color component. For example, in an RGB color space, rendering an image
with premultiplied alpha eliminates three multiplication operations (red times alpha, green times alpha,
and blue times alpha) for each pixel in the image."
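In other words (my own worked example, not from the reference): for a pixel with straight components (r, g, b) and alpha a in the range 0-255, the premultiplied components are simply r*a/255 and so on, computed once up front:

```objc
// Hypothetical straight-alpha pixel: half-transparent pure red.
UInt8 r = 255, g = 0, b = 0, a = 128;

// Premultiplying bakes the alpha into each color component once...
UInt8 pr = r * a / 255;   // 128
UInt8 pg = g * a / 255;   // 0
UInt8 pb = b * a / 255;   // 0
// ...so the renderer can skip that per-component multiply for every pixel it draws.
```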


Bitmaps have a difficult problem.

I studied how to convert a UIImage to a bitmap, and how to change the color components of the bitmap's pixels.

Today, I gained knowledge of:

size_t, a typedef for an unsigned integer type.

y = (77 * r + 28 * g + 151 * b) / 256; ← this code computes perceived brightness for humans.

The concept of pointers in the C language.
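Plugging a sample pixel into that brightness formula (my own arithmetic, just to illustrate; note that the textbook weights put the largest coefficient, 151, on green rather than blue, so these weights follow the post's code as written):

```objc
// Sample pixel values, chosen arbitrarily for the illustration.
UInt8 r = 200, g = 150, b = 100;

// Weighted average as written in the code:
// (77*200 + 28*150 + 151*100) / 256 = (15400 + 4200 + 15100) / 256 = 34700 / 256
UInt8 y = (77 * r + 28 * g + 151 * b) / 256;   // integer division → 135
```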


********************************

I'll study the bitmap components.

And I'll prepare to attend the technical labs at WWDC2009.





Wednesday, May 27, 2009

Good morning!

I studied how to get pixel data from an image.

But I couldn't understand most of it, because it's complicated and difficult.

Actually, I don't need this skill right now. But I'll need it when I extend my application "YubiFude".

In other words, I can't extend it unless I solve this.

I tried to work through this code ↓↓↓ (my comments included).

"Returns a read-only pointer to the bytes of a CFData object."

I'll discuss it with my colleagues.

***********************************************************************************

- (void)imagePickerController:(UIImagePickerController*)picker 
  didFinishPickingImage:(UIImage*)image 
                 editingInfo:(NSDictionary*)editingInfo
{

    NSLog(@"editingInfo:%@",editingInfo);
    // UIImagePickerControllerCropRect = NSRect: {{0, 0}, {640, 425}};
  // UIImagePickerControllerOriginalImage = ;

    statusButton.hidden = NO;
    colorButton.hidden = NO;
    fillColorButton.hidden = NO;
    doneButton.hidden = NO;
    allClearButton.hidden = NO;
    layer1ClearButton.hidden = NO;
    layer2ClearButton.hidden = NO;
    layer3ClearButton.hidden = NO;
    saveButton.hidden = NO;
    loadButton.hidden = NO;
    
  // Hide the image picker
  [self dismissModalViewControllerAnimated:YES];
   
  // Get the original image
  UIImage* originalImage;
  originalImage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
   
  // Create the graphics context; if you make it small, the image gets stretched and looks mosaicked
  CGSize size = { 320, 480 };
  UIGraphicsBeginImageContext(size);
    //Creates a bitmap-based graphics context and makes it the current context.
    

   
  // The size of the image to display
  CGRect rect;
  rect.origin = CGPointZero;//(0, 0)
  rect.size = size;
  [originalImage drawInRect:rect];
    
  // Get the drawn image
  shrinkedImage = UIGraphicsGetImageFromCurrentImageContext();
    //Returns an image based on the contents of the current bitmap-based graphics context.
    

  UIGraphicsEndImageContext();
  //Removes the current bitmap-based graphics context from the top of the stack.
    

  // Get the CGImage
  CGImageRef cgImage;
  cgImage = shrinkedImage.CGImage;
   
  // Get image information
    //typedef __SIZE_TYPE__        __darwin_size_t;    /* sizeof() */
    //typedef __darwin_size_t        size_t;
  size_t width;//320
  size_t height;//480
  size_t bitsPerComponent;//8
  size_t bitsPerPixel;//32
  size_t bytesPerRow;//1280 = 4byte*320
  CGColorSpaceRef colorSpace;
  CGBitmapInfo bitmapInfo;//8193
  bool shouldInterpolate;//true
  CGColorRenderingIntent intent;//14081600
  width = CGImageGetWidth(cgImage);
  height = CGImageGetHeight(cgImage);
  bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
  bitsPerPixel = CGImageGetBitsPerPixel(cgImage);
  bytesPerRow = CGImageGetBytesPerRow(cgImage);
  colorSpace = CGImageGetColorSpace(cgImage);
  bitmapInfo = CGImageGetBitmapInfo(cgImage);
  shouldInterpolate = CGImageGetShouldInterpolate(cgImage);//Returns the interpolation setting for a bitmap image.
  intent = CGImageGetRenderingIntent(cgImage);//Returns the rendering intent setting for a bitmap image.
                                                /*kCGRenderingIntentDefault,
                                                 kCGRenderingIntentAbsoluteColorimetric,
                                                 kCGRenderingIntentRelativeColorimetric,
                                                 kCGRenderingIntentPerceptual,
                                                 kCGRenderingIntentSaturation*/
  // Get the data provider
  CGDataProviderRef dataProvider;// which you use to move data into and out of Quartz. 
                                     // CGDataProviderRef allow you to supply Quartz functions with data.
  dataProvider = CGImageGetDataProvider(cgImage);//Returns the data provider for a bitmap image.
    

   
  // Get the bitmap data
  CFDataRef data;
    //A reference to an immutable CFData object.
  UInt8* buffer;
    //typedef unsigned char UInt8;

  data = CGDataProviderCopyData(dataProvider);//data = 614400byte = 153600(320*480)*4
    //Returns a copy of the provider’s data.
    

  buffer = (UInt8*)CFDataGetBytePtr(data);
    //Returns a read-only pointer to the bytes of a CFData object.
  // Apply an effect to the bitmap
     NSUInteger i, j;
     for (j = 0; j < height; j++)
     {
         for (i = 0; i < width; i++) 
         {
             // Get the pointer to the pixel
             UInt8* tmp;
             tmp = buffer + j * bytesPerRow + i * 4;
             //
             //NSLog(@"tmp:%d",tmp);
            // NSLog(@"buffer : %d",buffer);
             //NSLog(@"buffer : %d",&buffer);

             // Get the RGB values
             UInt8 r, g, b;
             r = *(tmp + 3);//r
             g = *(tmp + 2);//g
             b = *(tmp + 1);//b
        //     NSLog(@"\nr:%d\ng;%d\nb:%d",r,g,b);

     
             // Compute the luminance
             UInt8 y;
             y = (77 * r + 28 * g + 151 * b) / 256;//weighted average
            /* This weighted average, called luminance, is used in color-TV
             signal processing. Luminance expresses the brightness of a color.
             The average is weighted because human vision has a different
             sensitivity to red, green, and blue: sensitivity rises in the order
             
                 blue < red < green
             
             so the most sensitive color gets the largest weight.
             http://www.asahi-net.or.jp/~uc3k-ymd/Glib32/loadbmp.html*/
     
             // Set the luminance as the RGB values
             *(tmp + 1) = y;//b
             *(tmp + 2) = y;//g
             *(tmp + 3) = y;//r
         }
    }
     
  // Create the data with the effect applied
  CFDataRef effectedData;
  effectedData = CFDataCreate(NULL, buffer, CFDataGetLength(data));
   
  // Create a data provider for the effected data
  CGDataProviderRef effectedDataProvider;
  effectedDataProvider = CGDataProviderCreateWithCFData(effectedData);
   
  // Create the image
    // UIImage* effectedImage;
  effectedCgImage = CGImageCreate(
                                    width, height, 
                                    bitsPerComponent, bitsPerPixel, bytesPerRow, 
                                    colorSpace, bitmapInfo, effectedDataProvider, 
                                    NULL, shouldInterpolate, intent);
    //Creates a bitmap image from data supplied by a data provider.
    /*size_t width,
    size_t height,
    size_t bitsPerComponent,
    size_t bitsPerPixel,
    size_t bytesPerRow,
    CGColorSpaceRef colorspace,
    CGBitmapInfo bitmapInfo,
    CGDataProviderRef provider,
    const CGFloat decode[],
    bool shouldInterpolate,
    CGColorRenderingIntent intent
    );*/
    // effectedImage = [[UIImage alloc] initWithCGImage:effectedCgImage];
    // [effectedImage autorelease];
   
  // Display the image
    // _imageView.image = effectedImage;

  [PaintingView loadImage];
  // Release the created data
  CGImageRelease(effectedCgImage);
  CFRelease(effectedDataProvider);
  CFRelease(effectedData);
  CFRelease(data);
    [[UIApplication sharedApplication] setStatusBarHidden:YES];

    self.navigationController.navigationBarHidden = YES;
}

***********************************************************************************

This code is invoked when the UIImagePickerController's Choose button is pushed.

I could understand the meaning of most of the code. But I didn't understand this line:

  buffer = (UInt8*)CFDataGetBytePtr(data);

......what is CFDataGetBytePtr??

According to the reference,

"Returns a read-only pointer to the bytes of a CFData object."

I'll talk with my colleague about this code tomorrow.

And I haven't fully grasped the concept of a "pointer" yet.

I'll talk to my teacher about it too.

I'd like to master it tomorrow.
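As I currently understand it (a self-contained sketch with made-up sample bytes, not the app's code): CFDataGetBytePtr just hands back the address of the CFData's first byte, and pointer arithmetic moves through the bytes from there:

```objc
// Build a small CFData so there is something to point at.
UInt8 bytes[4] = { 10, 20, 30, 40 };
CFDataRef data = CFDataCreate(NULL, bytes, 4);

// CFDataGetBytePtr returns a read-only pointer to the first byte.
const UInt8* p = CFDataGetBytePtr(data);

// *p dereferences the pointer; (p + n) is the address n bytes further on.
UInt8 first = *p;        // 10
UInt8 third = *(p + 2);  // 30

CFRelease(data);
```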




***********************************************************************

"Only 12 more days to go before the deadline."

***********************************************************************





Tuesday, May 26, 2009

Good morning everyone!

I fixed and analyzed my native application "YubiFude".

I fixed a problem with the algorithm (the order of drawing to a specified layer).

I completed it. The CGLayer objects are working well now.



I analyzed the "YubiFude" code for saving and importing an image.

It's complicated for me.

I don't understand some of it, so I'll keep studying it.

But because I could understand part of it, I'm posting the code with comments.

*******************************************************************************

//**********************************************************************
//*******************↓↓↓Save Image to library↓↓↓************************
//**********************************************************************

- (void)saveViewToPhotoLibrary:(id)sender {
    
  CGRect screenRect = [[UIScreen mainScreen] bounds];
    //↑↑↑get the size of window(w:320 h:480)
  UIGraphicsBeginImageContext(screenRect.size);
    //↑↑↑Creates a bitmap-based graphics context and makes it the current context.
    // The drawing environment is pushed onto the graphics context stack immediately.
  CGContextRef ctx = UIGraphicsGetCurrentContext();
    //Returns the current graphics context.ビットマップコンテキストの収集
    //[[UIColor blackColor] set];
  CGContextFillRect(ctx, screenRect);
    //↑↑↑Paints the area contained within the provided rectangle, using the fill color in the current graphics state.
    
  [self.paintingViewController.view.layer renderInContext:ctx];
    //↑↑↑Renders the receiver and its sublayers into the specified context.
    //UIView class has a layer property.It is a CALayer class object 
   
  UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    //↑↑↑Returns an image based on the contents of the current bitmap-based graphics context.
    
  UIImageWriteToSavedPhotosAlbum(screenImage, 
                                 nil//The object whose selector should be called after the image has been written to the user’s device.
                                 , nil//The selector of the target object to call. This method should be of the form:                        
                                 , nil//An optional pointer to any context-specific data that you want passed to the completion selector.
    );
    //↑↑↑Adds the specified image to the user’s Saved Photos album.
    
  UIGraphicsEndImageContext();    
    //↑↑↑Removes the current bitmap-based graphics context from the top of the stack.
}
//**********************************************************************
//*******************↑↑↑Save Image to library↑↑↑************************
//**********************************************************************


*******************************************************************************
//↓↓↓ invoked when the loadButton is pushed
- (void)showCameraSheet:(id)sender
{
    statusButton.hidden = YES;
    colorButton.hidden = YES;
    fillColorButton.hidden = YES;
    doneButton.hidden = YES;
    allClearButton.hidden = YES;
    layer1ClearButton.hidden = YES;
    layer2ClearButton.hidden = YES;
    layer3ClearButton.hidden = YES;
    saveButton.hidden = YES;
    loadButton.hidden = YES;
    
  // Create an action sheet: the alert-like panel that slides up from the bottom
  UIActionSheet* sheet;
  sheet = [[UIActionSheet alloc] 
             initWithTitle:@"Add image to current layer" 
             delegate:self 
             cancelButtonTitle:nil 
             destructiveButtonTitle:nil 
             otherButtonTitles:@"Photo Library", @"Saved Photos",@"Cancel" ,nil];
  [sheet autorelease];
    [sheet showInView:self.view];
    
}

- (void)actionSheet:(UIActionSheet*)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex
{
  // Check the button index: which button was pressed?
  if (buttonIndex >= 3) {
  return;
  }
    UIImagePickerControllerSourceType sourceType = 0;
  switch (buttonIndex) {
        case 0: {
            sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
            // Create the image picker
            UIImagePickerController* imagePicker;
            imagePicker = [[UIImagePickerController alloc] init];
            [imagePicker autorelease];
            imagePicker.sourceType = sourceType;
            imagePicker.allowsImageEditing = YES;
            imagePicker.delegate = self;
            
            // Show the image picker: the photo-choosing screen
            // Modal: until the user responds, no other controls in the app accept input
            [self presentModalViewController:imagePicker animated:YES];
            
            
            break;
        }
        case 1: {
            sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
            // Create the image picker
            UIImagePickerController* imagePicker;
            imagePicker = [[UIImagePickerController alloc] init];
            [imagePicker autorelease];
            imagePicker.sourceType = sourceType;
            imagePicker.allowsImageEditing = YES;
            imagePicker.delegate = self;
            
            // Show the image picker
            [self presentModalViewController:imagePicker animated:YES];
            
            break;
        }
        case 2: {// my own Cancel button; the default one can't have extra behavior attached
            statusButton.hidden = NO;
            colorButton.hidden = NO;
            fillColorButton.hidden = NO;
            doneButton.hidden = NO;
            allClearButton.hidden = NO;
            layer1ClearButton.hidden = NO;
            layer2ClearButton.hidden = NO;
            layer3ClearButton.hidden = NO;
            saveButton.hidden = NO;
            loadButton.hidden = NO;
            
            [self dismissModalViewControllerAnimated:YES];
            // remove the ModalViewController
            break;
        }
            
  }
   
  // Check whether the source type is available (the iPod touch has no camera)
  if (![UIImagePickerController isSourceTypeAvailable:sourceType]) {
        //Returns a Boolean value indicating whether the device supports picking images using the specified source.
  return;
  }
}

- (void)imagePickerController:(UIImagePickerController*)picker 
  didFinishPickingImage:(UIImage*)image 
                 editingInfo:(NSDictionary*)editingInfo
{
    NSLog(@"editingInfo:%@",editingInfo);
    // UIImagePickerControllerCropRect = NSRect: {{0, 0}, {640, 425}};
  // UIImagePickerControllerOriginalImage = ;

    statusButton.hidden = NO;
    colorButton.hidden = NO;
    fillColorButton.hidden = NO;
    doneButton.hidden = NO;
    allClearButton.hidden = NO;
    layer1ClearButton.hidden = NO;
    layer2ClearButton.hidden = NO;
    layer3ClearButton.hidden = NO;
    saveButton.hidden = NO;
    loadButton.hidden = NO;
    
  // Hide the image picker
  [self dismissModalViewControllerAnimated:YES];
   
  // Get the original image
  UIImage* originalImage;
  originalImage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
   
  // Create the graphics context; if you make it small, the image gets stretched and looks mosaicked
  CGSize size = { 320, 480 };
  UIGraphicsBeginImageContext(size);
    //Creates a bitmap-based graphics context and makes it the current context.
    

   
  // Shrink the image and draw it
  CGRect rect;
  rect.origin = CGPointZero;
  rect.size = size;
  [originalImage drawInRect:rect];
  // Get the drawn image
  shrinkedImage = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
   
  // Get the CGImage
  CGImageRef cgImage;
  cgImage = shrinkedImage.CGImage;
   
  // Get image information
  size_t width;
  size_t height;
  size_t bitsPerComponent;
  size_t bitsPerPixel;
  size_t bytesPerRow;
  CGColorSpaceRef colorSpace;
  CGBitmapInfo bitmapInfo;
  bool shouldInterpolate;
  CGColorRenderingIntent intent;
  width = CGImageGetWidth(cgImage);
  height = CGImageGetHeight(cgImage);
  bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
  bitsPerPixel = CGImageGetBitsPerPixel(cgImage);
  bytesPerRow = CGImageGetBytesPerRow(cgImage);
  colorSpace = CGImageGetColorSpace(cgImage);
  bitmapInfo = CGImageGetBitmapInfo(cgImage);
  shouldInterpolate = CGImageGetShouldInterpolate(cgImage);//Returns the interpolation setting for a bitmap image.
  intent = CGImageGetRenderingIntent(cgImage);//Returns the rendering intent setting for a bitmap image.
   
  // Get the data provider
  CGDataProviderRef dataProvider;
  dataProvider = CGImageGetDataProvider(cgImage);//Returns the data provider for a bitmap image.
    

   
  // Get the bitmap data
  CFDataRef data;
  UInt8* buffer;
  data = CGDataProviderCopyData(dataProvider);
  buffer = (UInt8*)CFDataGetBytePtr(data);
  // Apply an effect to the bitmap
    /* NSUInteger i, j;
     for (j = 0; j < height; j++) {
     for (i = 0; i < width; i++) {
     // Get the pointer to the pixel
     UInt8* tmp;
     tmp = buffer + j * bytesPerRow + i * 4;
     
     // Get the RGB values
     UInt8 r, g, b;
     r = *(tmp + 3);
     g = *(tmp + 2);
     b = *(tmp + 1);
     
     // Compute the luminance
     UInt8 y;
     y = (77 * r + 28 * g + 151 * b) / 256;
     
     // Set the luminance as the RGB values
     *(tmp + 1) = y;
     *(tmp + 2) = y;
     *(tmp + 3) = y;
     }
     }
     */
  // Create the data with the effect applied
  CFDataRef effectedData;
  effectedData = CFDataCreate(NULL, buffer, CFDataGetLength(data));
   
  // Create a data provider for the effected data
  CGDataProviderRef effectedDataProvider;
  effectedDataProvider = CGDataProviderCreateWithCFData(effectedData);
   
  // Create the image
    // UIImage* effectedImage;
  effectedCgImage = CGImageCreate(
                                    width, height, 
                                    bitsPerComponent, bitsPerPixel, bytesPerRow, 
                                    colorSpace, bitmapInfo, effectedDataProvider, 
                                    NULL, shouldInterpolate, intent);
    // effectedImage = [[UIImage alloc] initWithCGImage:effectedCgImage];
    // [effectedImage autorelease];
   
  // Display the image
    // _imageView.image = effectedImage;

  [PaintingView loadImage];
  // Release the created data
  CGImageRelease(effectedCgImage);
  CFRelease(effectedDataProvider);
  CFRelease(effectedData);
  CFRelease(data);
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController*)picker
{
  // Hide the image picker
  [self dismissModalViewControllerAnimated:YES];
}

*****************************************************************************


Monday, May 25, 2009

Good morning!

My head is tired... because I worked my brain hard.


I managed to load an image from the library into a CGLayer object.

********************************************************************

First, load the image from the library into a UIImage object.

Second, convert the UIImage object into a CGImageRef object.

It uses the "CGImage" property of UIImage: ↓↓↓↓↓↓

  cgImage = uiImage.CGImage;

Third, draw the CGImageRef image into a CGContextRef.

It uses these functions: ↓↓↓↓

     context = CGLayerGetContext(layerRef);


     CGContextDrawImage(context, bounds, cgImage);

********************************************************************
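The three steps above can be put together roughly like this (a sketch with hypothetical names; `layer`, `bounds`, and the picker callback's `editingInfo` stand for whatever the app actually uses):

```objc
// 1. Load the image from the photo library (here: from the picker callback).
UIImage* uiImage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];

// 2. Convert the UIImage to a CGImageRef via its CGImage property.
CGImageRef cgImage = uiImage.CGImage;

// 3. Get the CGLayer's context and draw the CGImage into it.
CGContextRef layerContext = CGLayerGetContext(layer);
CGContextDrawImage(layerContext, bounds, cgImage);
```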

With that, importing an image from the library is complete.

But it had a problem. (↓↓↓ iPhone OS Programming Guide)


I fixed it by referring to this.

I've completed most of the "YubiFude" application.

But I don't understand some features and algorithms of the "YubiFude" application (e.g. importing an image from the library).

Therefore, I'll study the "YubiFude" code.



Sunday, May 24, 2009

Good morning!


I studied the "YubiFude" application.

I could save the image of the PaintingView (a UIView subclass),

imported an image from the photo library into the PaintingView (not complete yet),

and changed the value range of the color and fill color (0.0-1.0 → 0-255).

I saved the PaintingView's image by referring to this.

 ******code ******

- (void)saveViewToPhotoLibrary:(id)sender {
 
  CGRect screenRect = [[UIScreen mainScreen] bounds];
 //↑↑↑get the size of the window
  UIGraphicsBeginImageContext(screenRect.size);
 //↑↑↑Creates a bitmap-based graphics context and makes it the current context.
 
  CGContextRef ctx = UIGraphicsGetCurrentContext();

  CGContextFillRect(ctx, screenRect);
 //↑↑↑Paints the area contained within the provided rectangle, using the fill color in the current graphics state.
 
  [self.paintingViewController.view.layer renderInContext:ctx];
 //↑↑↑Renders the receiver and its sublayers into the specified context.
 //UIView class has a layer property.It is a CALayer class object  
   
  UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
 //↑↑↑Returns an image based on the contents of the current bitmap-based graphics context.
 
  UIImageWriteToSavedPhotosAlbum(screenImage, nil, nil, nil);
 //↑↑↑Adds the specified image to the user’s Saved Photos album.
 
  UIGraphicsEndImageContext();  
 //↑↑↑Removes the current bitmap-based graphics context from the top of the stack.
}
***************

So I could do it.

I could import an image from the photo library into the PaintingView.

But it's not complete yet.

There's a difficult problem here... a layer problem...?

I don't understand it well.

Therefore, I'll work it out tomorrow.
 

Saturday, May 23, 2009

Good morning!

I gave a presentation about my project (its main part is development of the "YubiFude" application) today, because Apple employees came to room 2132.

I also developed the "YubiFude" application today.

What I worked on: ↓↓↓

*I completed the zoom-in feature.

*I hid the status bar, so the full screen can be used for drawing.

*I added an animation to the UISlider. It's triggered when the color button is pushed.

*I almost completed the GUI design of "YubiFude".

The zoom-in feature uses this code: ↓↓↓

******************************************************************

  CGAffineTransform translate = CGAffineTransformMake(2.0, 0.0, 0.0, 2.0, 160-location.x, 240-location.y); 
  [self setTransform:translate];

******************************************************************

UIView has a "transform" property. It can apply various transformations to the UIView.

The CGAffineTransformMake function builds a transform for the UIView from 6 arguments.
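For reference (standard affine-transform math, not specific to this app), the 6 arguments (a, b, c, d, tx, ty) map each point (x, y) to x' = a*x + c*y + tx and y' = b*x + d*y + ty, so the call above both scales and recenters:

```objc
// CGAffineTransformMake(a, b, c, d, tx, ty) maps (x, y) to
//   x' = a*x + c*y + tx
//   y' = b*x + d*y + ty
// Here a = d = 2 doubles both coordinates (no rotation/shear since b = c = 0),
// and (160 - location.x, 240 - location.y) shifts the view so the touched
// point heads toward the screen center (160, 240).
CGAffineTransform zoom =
    CGAffineTransformMake(2.0, 0.0, 0.0, 2.0, 160 - location.x, 240 - location.y);
[self setTransform:zoom];
```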



The zoom feature is complete, but I'd like to learn "affine transforms" properly.

I think I'll master "Quartz2D" by mastering "affine transforms".

I hid the status bar by adding an entry to "Info.plist".

*The blue row is the added entry.
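The screenshot is gone, but the entry was most likely the standard status-bar key (the key name here is my assumption, not taken from the original post):

```xml
<!-- Likely Info.plist entry for hiding the status bar (assumed key: UIStatusBarHidden) -->
<key>UIStatusBarHidden</key>
<true/>
```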



When I hid the status bar, I ran into a problem in the iPhone Simulator.

In the simulator I can't draw in the space where the status bar used to be,

but on the device I can.

←in the simulator ←on the device



I added an animation to the UISlider.

You probably can't tell from the screen shot.

I almost completed the design (GUI) of "YubiFude".

I changed the GUI of "YubiFude". It includes the UISlider animation.


 

Friday, May 22, 2009

Good morning!?

Today's contents

* I developed the "YubiFude" application.

* I prepared my presentation.

I added a feature to the "YubiFude" application.

It's a zoomed-in view.



I used "self.transform = CGAffineTransformMakeScale(3.0, 3.0);" to add the feature.

But it's not finished.

I couldn't control which part of the view remains visible after zooming.

I'll fix it tomorrow.



I prepared my presentation because members of Apple are coming to the MacRoom and I have to give a presentation tomorrow.

Thursday, May 21, 2009


Good morning!

I solved yesterday's problem today.

Yesterday's problem was that I could hide four of the buttons, but not the other four.

I declared the UIButton objects above the @interface in the file (RootViewController : UIViewController).

Until yesterday, I had declared them below the @interface.

Now I can use those objects from the other class (PaintingViewController : UIViewController).

I think most of the "YubiFude" application is complete.

I'll study the new features of iPhone OS 3.0 until WWDC2009 starts.

↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓The problem I solved↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
 

Wednesday, May 20, 2009

Good morning!

I worked on the "YubiFude" application and watched the videos about iPhone OS 3.0.

The main content of the videos is the APIs for the new features.

All of the content is in English, the same as the WWDC sessions; I think that's good.

I worked on the "YubiFude" GUI because I'll attend the "iPhone Interface Design Consulting Lab" session.



I added an effect.

I think removing the buttons from the drawing screen is better than leaving them there.

I added a feature that removes and restores the 4 buttons with one tap.

So the drawing screen became wider.

But I couldn't invoke the same feature on the other 4 buttons.

They belong to another class's object.

I used @property, @synthesize, and #import.

But I couldn't access the buttons' values.

I'll solve it tomorrow.
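For reference, the usual pattern for exposing a button to another class with @property/@synthesize looks roughly like this (a sketch with hypothetical names, not the actual "YubiFude" code):

```objective-c
// RootViewController.h — expose the button as a property
// (colorButton is a hypothetical name)
#import <UIKit/UIKit.h>

@interface RootViewController : UIViewController {
    UIButton *colorButton;
}
@property (nonatomic, retain) UIButton *colorButton;
@end

// RootViewController.m
#import "RootViewController.h"

@implementation RootViewController
@synthesize colorButton;
@end

// In another class (e.g. PaintingViewController), given a reference
// to the RootViewController instance, the button becomes reachable:
//   rootViewController.colorButton.hidden = YES;
```

The key point is that the other class needs a reference to the actual RootViewController instance; @property only makes the ivar reachable through that reference, it does not share it globally.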



Drawing response was slow yesterday, but today it is faster than it was.

I reduced the source code.





↓↓↓↓↓↓↓↓↓↓↓↓↓↓Screen shot of "YubiFude" ↓↓↓↓↓↓↓↓↓↓↓↓↓↓





Tuesday, May 19, 2009

Good morning everyone!

I finished developing the "YubiFude" application.

←This image is a screen shot of the "YubiFude" application.

I finished it, but it's not complete yet, because

its response is very slow.

Probably the source code of "YubiFude" is heavy.

It has many objects.

*****************************************************

Creating UITextFields (about 15 of them):

UIImage *newImage = [[UIImage imageNamed:@"whiteButton.png"] stretchableImageWithLeftCapWidth:12.0 topCapHeight:0.0];

 title =[[UITextField alloc] initWithFrame:s_label_title];
 NSString *tit = [[NSString alloc]initWithString:@"Status"];
 title.text = tit;
 title.font = [UIFont fontWithName:@"Georgia-Bold" size:50.0]; 
 title.borderStyle = UITextBorderStyleBezel;
 title.enabled = NO;
 title.textAlignment = UITextAlignmentCenter;
 title.backgroundColor = [UIColor colorWithRed:0.8 green:0.8 blue:0.8 alpha:1.0];
 
 [tit release]; 

Creating UIButtons (about 50 of them):

sizeM = [UIButton buttonWithType:UIButtonTypeRoundedRect];
 [sizeM setFrame:CGRectMake(85.0f,105.0f, 70.0f, 40.0f)];
 [sizeM addTarget:self action:@selector(Little:) forControlEvents:UIControlEventTouchUpInside];
 [sizeM setBackgroundImage:newImage forState:UIControlStateNormal];
 [sizeM setTitle:@"<" forState:UIControlStateNormal];
 
 [self.view addSubview:sizeM];

Creating UISliders (7 of them):

 slider_Size = [[UISlider alloc] initWithFrame:CGRectMake(10, 150, 300, 20)];
 [slider_Size addTarget:self action:@selector(Slider:) forControlEvents:UIControlEventValueChanged];
 slider_Size.backgroundColor = [UIColor clearColor]; 
 UIImage *LeftTrack_Size = [[UIImage imageNamed:@"slide_S.png"]
  stretchableImageWithLeftCapWidth:10.0 topCapHeight:0.0];
 UIImage *RightTrack_Size = [[UIImage imageNamed:@"slide.png"]
  stretchableImageWithLeftCapWidth:10.0 topCapHeight:0.0];
 [slider_Size setThumbImage: [UIImage imageNamed:@"slider_Ball_S.png"] forState:UIControlStateNormal];
 [slider_Size setMinimumTrackImage:LeftTrack_Size forState:UIControlStateNormal];
 [slider_Size setMaximumTrackImage:RightTrack_Size forState:UIControlStateNormal];
 slider_Size.minimumValue = 0.0;
 slider_Size.maximumValue = 500.0;
 slider_Size.continuous = YES;
 slider_Size.value = 5.0;
 
 [self.view addSubview:slider_Size];
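The Slider: action registered above receives the slider as its sender; a hypothetical sketch of what such a method might look like (the brushSize ivar is my own illustration, not from the original code):

```objective-c
// Hypothetical action method for the size slider; because
// slider_Size.continuous == YES, this fires on every value change.
- (void)Slider:(UISlider *)sender {
    // Read the current value (0.0 to 500.0 as configured above)
    CGFloat value = sender.value;
    // Use it, e.g. as the brush size (brushSize is a hypothetical ivar)
    brushSize = value;
}
```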


Monday, May 18, 2009


Contents of today's studying:

- Installed iPhone OS 3.0.

- Developed "YubiFude".

I installed the iPhone OS very easily. Yesterday I had simply chosen the wrong iPhone OS 3.0 install file.

The iPod touch rented from my college has no problems now.

I worked on "YubiFude" today. I finished making the UIViewController subclasses (RootViewController, StatusViewController, ColorViewController, FillColorViewController, and PaintingViewController) and the UIView subclass (PaintingView). PaintingView has the drawing feature.

*Screen shot of "YubiFude" on my device.



I'll complete "YubiFude" by May 22.

Sunday, May 17, 2009

I tried to install iPhone OS 3.0 on my iPod touch, but I couldn't.

I used iTunes and the Organizer to install iPhone OS 3.0. iTunes says "The iPod "xxxxxx" could not be updated because the firmware file is not compatible". The Organizer would not let me select iPhone OS 3.0.

An acquaintance of mine has installed iPhone OS 3.0, so I'll learn from him how to install it.



I restarted development of my application "YubiFude", because I'll attend the "Quartz 2D Lab" session at WWDC2009. I'll master my understanding and finish "YubiFude" before WWDC2009 starts.



Today, I started developing "YubiFude" without Interface Builder. It's going fine. Next, I'll finish the development by tomorrow.

Saturday, May 16, 2009

Hello

I attended WWDC2008 last year and it was a wonderful memory.

I'll attend WWDC2009. I have been studying and developing iPhone and iPod touch applications since attending WWDC2008.
I am developing a native application, "YubiFude". This application uses Quartz 2D. I'll bring it and show it to Apple engineers in the labs to get advice from them.

I'll improve my development skills through the WWDC2009 sessions (Advanced Debugging and Performance Analysis, Optimizing Performance on iPhone, User Interface Design for iPhone Apps, iPhone Performance Optimization with Instruments, ...) and the WWDC2009 labs (Quartz 2D Lab, iPhone Interface Design Consulting Lab, ...).